znonymous [comrade/them, love/loves]

@znonymous@hexbear.net

Posts: 7
Comments: 37
Joined: 1 yr. ago

  • Other examples of recent attacks on politicians... Firebombed by man who disagreed about the "war" in Gaza.

    Imagine still calling it a war.

  • Damn. You are so dense and argumentative on purpose.

    I have only seen you make two posts. Both were AI slop, and one was pure disinfo that was deleted.

    None of your comments are interesting or insightful. They are all extremely irritating to read.

    I am blocking you, for everyone else's sake. I really hope you just go into read-only mode. It has been truly awful reading any of your thoughts.

  • John Barnett didn't kill himself.

  • No. That is not in Texas.

  • Armageddon is gonna be here way sooner than I expected.

  • Watching a camrip on cineby on my phone right now and it isn't terrible.

  • That’s how you stay early. That’s how you stay sharp. That’s how you keep the edge.

    I am definitely knocking the content. This entire article sucks.

  • My rich lib extended family member always shuts down my shares of Hedges with, "Oh, Chris must be selling another book." Fucking libs.

  • Seismoooooooo!

  • Ugh I see my old self in this comment and I hate it.

  • I have an idea. Have every article or comment a user posts scanned by an LLM, prompted to identify logical fallacies in it. Post each user's fallacy count on a public scoreboard hosted on each federated instance. Then, each quarter, ban the top-scoring 10% of users whose fallacy ratio surpasses some reasonable good-faith threshold.

    Pros: Everyone is judged by the same impassive standard.

    Cons: 1) A fucking LLM has to burn coal for every stupid post we make. 2) LLM prompt injection/hijacking vulnerability.
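
    A rough sketch of how that pipeline could look, purely illustrative: `query_llm` is a hypothetical stand-in for whatever model an instance admin wires up (nothing Lemmy actually ships), and the prompt, the per-comment fallacy ratio, and the 2.0 good-faith threshold are all made-up parameters.

    ```python
    import json
    from collections import Counter

    # Hypothetical stand-in for whatever hosted or self-run model an
    # instance admin plugs in; not a real library call.
    def query_llm(prompt: str) -> str:
        raise NotImplementedError("wire up your model of choice here")

    # Illustrative prompt; a real deployment would need to harden this
    # against the prompt-injection problem noted in the cons.
    FALLACY_PROMPT = (
        "List the logical fallacies in the following comment as a JSON "
        "array of fallacy names, or [] if there are none.\n\nComment:\n{text}"
    )

    def score_comment(text: str) -> list[str]:
        """Ask the model for fallacies in one comment; tolerate junk output."""
        raw = query_llm(FALLACY_PROMPT.format(text=text))
        try:
            fallacies = json.loads(raw)
        except json.JSONDecodeError:
            return []  # hijacked or malformed output counts as nothing
        if not isinstance(fallacies, list):
            return []
        return [f for f in fallacies if isinstance(f, str)]

    def scoreboard(comments_by_user: dict[str, list[str]]) -> list[tuple[str, float]]:
        """Public scoreboard: (user, fallacies per comment), worst first."""
        ratios = {}
        for user, comments in comments_by_user.items():
            counts = Counter()
            for text in comments:
                counts.update(score_comment(text))
            ratios[user] = sum(counts.values()) / max(len(comments), 1)
        return sorted(ratios.items(), key=lambda kv: kv[1], reverse=True)

    def quarterly_ban_list(board: list[tuple[str, float]],
                           threshold: float = 2.0) -> list[str]:
        """Top-scoring 10% of users whose ratio exceeds the good-faith threshold."""
        top_n = max(len(board) // 10, 1)
        return [user for user, ratio in board[:top_n] if ratio > threshold]
    ```

    Judging by fallacies-per-comment rather than a raw count keeps prolific posters from being penalized just for volume; the coal-burning and prompt-hijacking cons stand exactly as stated.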