
Posts: 1
Comments: 2069
Joined: 11 mo. ago

  • Did someone say Linux? Hello fellow human nerds, I use Arch, btw

  • I create a new account and am shadowbanned almost immediately for posting a single comment on r/Politics too soon?

    Brand-new accounts posting in r/politics are a category of users that contains a massive number of bots. Most subreddits have a minimum karma/account-age requirement to post specifically to mitigate some of the bot problem.

    You didn't lose anything; your voice was just going to be drowned out by the 20,000 LLM bots and vote manipulation accounts.

  • Well, he was a moderator of r/jailbait

  • old.reddit.

    +1

  • LLM-driven web scraping is intense for some sites, so their bot detection software is tuned in a way that creates a lot of false positives.

    Obscuring your browser fingerprint, blocking JavaScript, or using an unusual user-agent string can all trigger a captcha challenge.

    If you're not doing any of that and a site suddenly starts giving you captchas, then it may be being DDoS'd by scrapers and is challenging all clients.

    A site that archives content is especially vulnerable because they have a lot of the data that is useful for AI training.

    It is incredibly annoying, but until we have a robust way of proving identity that can't be gamed by bad actors we're stuck with individual user challenges.
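    To make the false-positive problem concrete, here is a toy sketch of the kind of crude server-side heuristic described above; the allowlist of browser tokens and the user-agent strings are invented for illustration, not taken from any real site's detection software:

```shell
#!/bin/bash
# Toy bot-detection heuristic (illustrative only): challenge any client whose
# User-Agent doesn't contain a common browser token. Anything unusual --
# curl, scripts, privacy-hardened browsers with spoofed agents -- gets
# flagged, which is exactly how legitimate users end up stuck on captchas.
looks_like_bot() {
    case "$1" in
        *Firefox*|*Chrome*|*Safari*|*Edg/*) return 1 ;;  # looks like a browser
        *)                                  return 0 ;;  # unusual: challenge it
    esac
}

looks_like_bot "curl/8.5.0" && echo "challenge: curl/8.5.0"
looks_like_bot "Mozilla/5.0 (X11; Linux x86_64; rv:125.0) Gecko/20100101 Firefox/125.0" || echo "pass: Firefox"
```

    A user who strips or randomizes their user-agent for privacy falls straight into the "challenge" branch, same as a scraper.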

  • Those simple and easy devices are not simple and easy inherently.

    That is the result of the company paying a lot of developers and designers to anticipate your needs and fix your problems before you have them. Those companies want a return on that investment, and they get it in the form of your privacy and platform-enforced dependence on their services.

  • Who was solving your problems before then?

    Every tech company in existence, in exchange for all of your privacy and now a subscription fee.

    For the low, low price of all of your money and privacy, you can avoid having to figure out how to back up your own files, and have a team of developers ensuring that any kind of difficulty you have will be fixed before you even realize it was a problem.

    Once it is ensured that you will never develop those skills, you are completely dependent on their services and they can keep jacking up the price.

    Hate Netflix's price increases, or the password sharing restrictions? Too bad you spent 8 years not learning how to set up streaming media that you control. Hate listening to ads in order to listen to music? Well, it looks like Spotify doing everything for you has paid off for them.

    Everyone has traded their privacy for convenience; if you want your privacy back, then you have to give back the convenience and learn to do things for yourself.

    Prior to that, they were able to function just fine using non-classified data brokers, which have access to every bit of data that can be extracted from any smartphone with even a single app compromised by their spyware.

    Having access to even greater surveillance data (like the data gathered at TITANPOINTE and other overseas cable collection locations) just lets whoever is in charge more accurately target their political opponents.

    Like Snowden said in Citizenfour, the collection capabilities of the NSA create a situation that would allow 'turnkey totalitarianism'. Well, they've turned the key.

  • The Palantir tools were used to analyze the data trove that Microsoft was hosting for Israel. It contained all of the cellular data (metadata and call recording/MMS contents/data traffic) for the entirety of Palestine.

    That tool's detection of 'terrorists' was used to justify a massive portion of the air strikes on Palestine. Their beta test resulted in a genocide.

    Palantir is complicit in war crimes and genocide, never forget that.

    Now the fascists are using that same 'terrorist' language to target their political opponents in the US, using the same tools of genocide.

  • People wouldn't just go on the Internet and lie... would they?

  • You mean we shouldn't have a 'while true; eval $file' job running as root??? Goddammit, someone help me fix my remote admin script!!!

  • I've figured out how to control computers remotely and I'll share the script:

    Client:

        #!/bin/bash
        PASSWORD="your_password_here"
        sshpass -p "$PASSWORD" scp /dev/stdin user@server:/path/to/cmd.txt <<< "$1"

    Server:

        #!/bin/bash
        while true; do
            while IFS= read -r line; do
                eval "$line"
            done < "cmd.txt"
            > "cmd.txt"
        done

    Just chmod 777 both files and run as root, ez.
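    For anyone actually tempted by the joke above, a hedged sketch of what the server side could look like without the eval-as-root footgun: no eval, no root, only allowlisted commands, one pass per file instead of a hot loop. The allowlist and the file path are made up for illustration:

```shell
#!/bin/bash
# Sketch of a less catastrophic version of the server loop (illustrative;
# CMD_FILE path and the ALLOWED list are invented for this example).
CMD_FILE="${CMD_FILE:-/tmp/cmd.txt}"
ALLOWED="uptime date hostname df"

run_line() {
    # Split the line into words; run it only if the command is allowlisted.
    set -- $1
    case " $ALLOWED " in
        *" $1 "*) command "$@" ;;
        *)        echo "refused: $1" >&2; return 1 ;;
    esac
}

# Process the queue file once, then truncate it (instead of busy-looping).
process_once() {
    [ -f "$CMD_FILE" ] || return 0
    while IFS= read -r line; do
        [ -n "$line" ] && run_line "$line"
    done < "$CMD_FILE"
    : > "$CMD_FILE"
}
```

    In a real setup you would also drop sshpass and use key-based ssh authentication, so no password ever sits in a script.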

  • PTB - Power Tripping Bots

    This seems like the normal ideological purging that takes place in any externally managed echo chamber.

    Remember, data scientists have estimated that around 30% of all posts and comments leading up to the 2020 US election were from automated accounts managed by threat actors linked to Russia, Qatar, Iran, Saudi Arabia, etc.

    Those same data analysis tools indicate that in the post-LLM era bot traffic has grown 10x since then.

    We know that the strategy of these groups is to occupy, support and promote the most extreme positions on any topic.

    It stands to reason that these same threat groups are active on Lemmy, and that should make you INCREDIBLY skeptical when you encounter an account/community/instance that displays extreme ideology. You're never going to win an argument with them, because they're not people trying to defend a position... they're trying to make sure that anybody reading will see only their side's propaganda.

    This is done by using multiple accounts to post comments, along with a brigade of cheap throwaway accounts used for vote manipulation. It doesn't matter if you make a devastating argument that clearly shows the OP/commenter is wrong if you're buried under downvotes so nobody sees your comment.

    This also extends to communities/subreddits.

    An easy example that most people on the left are familiar with: r/conservative . It's very clearly not an organic subreddit made up of a random assortment of the population, the comment section is so heavily pruned that you think Reddit is broken when you click 'show 28 more comments' and there is nothing there. If you post there with any comment that doesn't imply your tongue is tickling Trump's duodenum you will be banned very quickly.

    I look at any instance that houses these extreme opinions the same way. While I'm sure there are real, actual human people who arrived at their position on their own and hold some of the ideas being promoted there, I'm equally sure that a huge number of the 'people' and moderators there are operating in bad faith, if not with outright malice.

    Benn Jordan on YT (music/DIY tech youtuber, not a political content creator) lays a lot of this out, with citations: https://youtu.be/GZ5XN_mJE8Y

  • You'll get over it

    e: You guys are some old Farkers

  • Seems like a standard that isn't possible to meet. If there was a reliable way to detect bots we wouldn't be in this situation where bots dominate social media.

    I can tell you that the bot tactic of promoting outrage and vitriol is well known, and the topic of AI has some of the most toxic people participating: always bringing insults, fallacy-laden 'arguments', and downvote spamming.

    That doesn't happen on other topics where people disagree, even in this community.

    We know bots are a big problem on social media. We know the tactics they use: they infiltrate both extremes and use those positions to sow division and anger, and in my experience this is the topic that receives the most comments fitting that tactic.

  • This is the 'Flood the Zone' strategy used by Russia and promoted by Bannon in the last Trump admin.

    It doesn't matter that it is bullshit; in fact, it is always bullshit that drives outrage. They keep doing it so that the news is clogged up reporting on bullshit and the algorithms are recommending bullshit, instead of covering important stories in depth.

  • As is inventing strawman positions rather than responding to a person’s point.


    Oh lol, this guy is a Moderator. From the rules of your own subreddit:

    • Don’t be an asshole. If you’re reading a comment you’re about to make and think “Hmm… this sounds like the kind of comment an asshole would make” then do not make that comment. Yes, even if the other person “started it”.

    That really sounds hypocritical given your responses.

  • Why not both?

    The boring answer is that it is more work and most FOSS developers are volunteers.

  • Ye Power Trippin' Bastards @lemmy.dbzer0.com

    Dogma and "Transphobia"