Posts
4
Comments
1443
Joined
2 yr. ago

  • I think it's wrong that they carry no liability. At the end of the day they know the product can be used this way, they haven't implemented any safety protocols to prevent it, and while the users prompting Grok are at fault for their own actions, the platform and the LLM are being used to facilitate it where other LLMs have guardrails to prevent it. In my mind that alone should make them partially liable.

  • I didn't. But I also can't say I've been paying attention.

  • Honestly? It'll probably be an amalgamation of different tech to do it. That's at least part of the reason I'm not sure it should work. Using identity to certify age or age gate products in this way when so much data is being collected already about users kind of doesn't make sense in and of itself. It either leads to a database of data that's dangerous to store, or it leads to government entities using such services to spy on people. Or both.

    If the data that's already out there about me, collected by data brokers, can't prove what age I am (and it absolutely can, even when it's anonymized), then I suspect no other system by itself will work. Because really, what we're talking about here is four things.

    1. Linking access to age verification.
    2. Linking identity to age verification.
    3. Anonymizing that data so the service (or anyone with access) can't store it or use it for anything other than age verification.
    4. Verifying that the person to whom the device/token/certificate/verified medium is linked is the person actually using the device.

    So, say you were to use the blockchain method. And say the device was verified. How would I verify it's me using the device (me being the person who certified their age via blockchain or some other method)? What prevents me from unlocking the device and handing it to my kid? What prevents my kid from using the device without my knowledge (circumventing the password, etc.)?

    That's at least part of the reason Roblox wants to use facial recognition to verify users. But how often are we doing that check? Once isn't enough; it's not a hard barrier to cross. And say it's twice, three times, once a week. Say you use AI-generated pictures to bypass that. Then Roblox, or the service they contract with for verification, has to maintain a database and compare pictures to each other, etc.

    Databases can be hacked. That information can be stolen, linked to driver's licenses, used for reverse image searches, etc. If you or your child has ever posted a picture to the internet, that can be used against you or your kid. It could be used to verify further accounts outside your control.

    Following this to its logical conclusion, you'd need to use a combination of things. Something you have (a YubiKey or some kind of authenticator, ID, credit card). There's nothing stopping a person from selling this along with the account credentials.

    Something you know (password, passphrase etc). The account credentials to be sold.

    Something you can't change about yourself (iris scan, fingerprint, voice clip, etc.). The dangerous-to-store information that, when leaked or breached, would cause damage to the life of the user in question.

    Someone somewhere is going to need to keep a record of that to prove you are you, which means it can't by design be anonymous. And it means there's a database out there that's dangerous to the users but has to be maintained for the purpose of authentication. And that's why this doesn't work.
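    The three factors above can be sketched as a toy check. Everything here is hypothetical illustration, not any real verifier's API: a real system would match hardware token attestations, salted password hashes, and fuzzy biometric templates rather than plain strings, but the structure (and the problem) is the same, since the enrollment record itself must be stored somewhere.

    ```python
    from dataclasses import dataclass

    # Toy sketch of three-factor verification. All names are illustrative.
    @dataclass
    class Enrollment:
        token_id: str        # something you have (e.g. a hardware key serial)
        password_hash: int   # something you know (stored hashed, never raw)
        biometric_sig: str   # something you are (the dangerous-to-store part)

    def verify(enrolled: Enrollment, token_id: str,
               password: str, biometric_sig: str) -> bool:
        # All three factors must match; failing any one denies access.
        return (
            token_id == enrolled.token_id
            and hash(password) == enrolled.password_hash
            and biometric_sig == enrolled.biometric_sig
        )

    # The verifier has to hold this record to authenticate anyone,
    # which is exactly the database that can be breached.
    alice = Enrollment("key-123", hash("correct horse"), "iris:abc")
    print(verify(alice, "key-123", "correct horse", "iris:abc"))  # True
    print(verify(alice, "key-123", "wrong pass", "iris:abc"))     # False
    ```

    Note that even in this sketch, nothing in the code can tell whether the person presenting all three factors is the enrolled user or someone they handed them to.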

  • If the knife was sold to you with the knowledge that you were going to commit murder, the person who sold it to you could be up for a felony murder charge, or as an accessory.

    There's also product liability law. Meaning that if a company sells or proliferates a product without proper safety protocols in place and it causes harm that can be foreseen, they are liable.

  • Tumblr has been losing their mind about it for weeks. My sister keeps sending me memes I have no context for. Apparently it's very good.

  • Then what's TikTok?

  • There's nothing to stop them selling that email address with cert.

  • “We need to get beyond the arguments of slop vs sophistication,” Nadella wrote in a rambling post flagged by Windows Central, arguing that humanity needs to learn to accept AI as the “new equilibrium” of human nature. (As WC points out, there’s actually growing evidence that AI harms human cognitive ability.)

    Going on, Nadella said that we now know enough about “riding the exponentials of model capabilities” as well as managing AI’s “‘jagged’ edges” to allow us to “get value of AI in the real world.”

    “Ultimately, the most meaningful measure of progress is the outcomes for each of us,” the CEO concludes, in an impressive deluge of corporate-speak that may or may not itself be AI-generated. “It will be a messy process of discovery, like all technology and product development always is.”

    TLDR: That's not what he said and rehashing the same interview in article after article with this frankly clickbait headline is getting old.

    Fuck Nadella and his AI bullshit, but could we not keep rehashing this?

  • GIGO.

  • I've never been comfortable with Ring cameras, specifically because even if it isn't a tool to be harnessed by the state, it's still a tool to be harnessed by anyone holding a grudge. The vast majority of IoT users don't know the basics of securing their network or their cameras. They connect things to the internet for the convenience and that's it. And the cameras pick up the comings and goings of people who don't really have the ability to not consent to having someone record when they leave their house or return to it. My neighbor doesn't need that information. And while, yes, they could sit in their house and watch at all hours through the curtains, there would still be a physical limit to what they could see.

    For the same reason I don't want drones constantly surveilling my home, I don't want camera footage I have no access to but that can be used against me by someone who doesn't like how I rake the leaves in my driveway.

    Anyone who's been in a dispute with a neighbor who's got a Ring camera knows this struggle. And the advice you get, by and large, is to get one of your own. No thanks.

  • My main concerns are mostly to do with the fact that Google, in my experience, has always had the benefit of enticing software and services that are extremely invasive but also very convenient (even if we remove IoT from the table for a moment). This is mostly due to how invasive Google Play Services is, and how invasive the Google app has been since the first iterations of Google Assistant (Google Now). I'm concerned that even those of us who have done what we can to turn off Gemini and not use generative AI are still compromised regardless, because big tech has a chokehold on the services we use.

    So I suppose I'm trying to understand what the differences are in how these two types of technology compromise cyber security.

  • I assumed, based on the subtext requesting we read the article, that the article said we should trust it because it's constantly changing and challenging itself.

    However I think some people might be inclined to downvote specifically because this kind of headline is basically clickbait. I'm not sure they're wrong in doing so.

  • Pre-Generative AI, lots of companies had AI/Algorithmic tools that posed a risk to personal cyber security (Google's Assistant and Apple's Siri, MS's Cortana etc).

    Is the stance here that AI is more dangerous than those because of its black-box nature, its poor guardrails, the fact that it's a developing technology, or its unfettered access?

    Also, do you think the "popularity" of Google Gemini is because people were already indoctrinated into the Assistant ecosystem before it became Gemini, and because Google already had a stranglehold on the search market? The integration of Gemini into those services isn't seen as dangerous because people are already reliant, and Google is a known brand rather than a new "startup."

  • Military bases are often at least partly powered by renewables, and they are making the shift away from relying on civilian infrastructure because it is so vulnerable. This article is more about reliance on American tech companies than about making the case that these corporations' data centers are essentially synonymous with military bases in how they use civilian infrastructure and cost taxpayers money (which I think was the point of the title, but I'm still not sure after reading most of the article).

  • One of the articles I linked you to had not just Steam but other payment processors talking about it.

    So are we talking about Steam making statements about why they refused to accept the game Horses on their platform, or are we talking about payment processors? Because the thread you started responding to me in is the one about payment processors and as a result that is the vein in which my responses have been directed. And since news outlets have been very outspoken about the likelihood that Horses was refused due to payment processors pressuring Steam to better adhere to their Terms for content sold, it was reasonable to assume that that's what you meant.

    If you would like to talk about Steam's removal of other games, or you would like to talk about Horses' rejection specifically, you're going to have to say so.

    Microsoft isn't selling products on GitHub. They bought it to have control over open source projects and code.

    Even if they were going to sell ad space that's still not the same conversation as the one about payment processors. At best the only similarity might just be that MS might find porn content to be detrimental to their image. Because that's the BS reason payment aggregators gave for not allowing porn content every time this has come up.

    But MS has been disallowing nudity, pornography, and other adult content on their products and ad aggregation service for more than a decade now. So either this was housekeeping, it was an afterthought, or someone complained. And considering just how little MS normally cares about the complaints of consumers and consumer groups, I doubt it's the latter.

  • What you said and what you meant were two different things.

    The wording of the original commenter's post absolutely lent itself to conspiracy-theory-level inference that it was Steam's fault.

    Not only did they not answer the questions I asked, they claimed "nobody is talking about it," which is demonstrably not true.

    Further, they went out of their way to play whataboutism, but didn't give an explanation of how that related to the conversation being had or to their original point.

    Then you show up with language that could be taken one of two ways. When I respond with proof based on what I took from what you said, I suddenly "have reading comprehension problems," because you "didn't mean" what you said in relation to payment processors (which only entered the conversation because one person, who was not the original commenter, brought it up), and I continued the conversation in that vein.

    So either you chose to answer me on the wrong part of the thread, or it's your own fault you were misunderstood.

  • https://www.theguardian.com/world/2025/jul/29/mastercard-visa-backlash-adult-games-removed-online-stores-steam-itchio-ntwnfb

    In the two weeks since announcing the letters sent to major payment providers including PayPal, Mastercard and Visa, video game marketplaces Itch.io and Steam have announced policy changes.

    Steam, which has an estimated 132 million active monthly users, earlier this month removed an estimated hundreds of titles in response to pressure from payments processors.

    https://exploringthegames.substack.com/p/why-steam-removed-nsfw-lgbtq-games

    Recently, several NSFW and adult-only games were removed from Steam and Itch.io, not because Valve or Itch.io wanted to, but because payment processing companies, such as Visa and Mastercard told them to do so.

    What started as an effort to remove something truly horrible ended up as censorship hurting innocent creators. While the intention may have been to pull illegal, immoral, or exploitive games, games that were removed were also just NSFW or adult only games. One of these games was VILE, and the first time I heard about this situation.

    https://www.pcgamer.com/software/platforms/valve-confirms-credit-card-companies-pressured-it-to-delist-certain-adult-games-from-steam/

    "We were recently notified that certain games on Steam may violate the rules and standards set forth by our payment processors and their related card networks and banks," said Valve. "As a result, we are retiring those games from being sold on the Steam Store."

    Valve's reaching out to devs impacted by the change "and issuing app credits should they have another game they’d like to distribute on Steam in the future." Just, you know, so long as those games get the seal of approval from Valve's payment processors, I suppose.

  • I said what I said. You decided my argument was something other than what it actually was. You decided to engage me about it in a bad-faith argument. Your fault, not mine.

  • Technology @lemmy.world

    Windows Defender Anti-virus Bypassed Using Direct Syscalls & XOR Encryption

    cybersecuritynews.com/researchers-bypassed-windows-defender-antivirus-using-direct-syscalls/
  • Technology @lemmy.world

    Sweeping Cyber Security Order

    www.theregister.com/2025/01/17/biden_cybersecurity_eo/
  • Technology @lemmy.world

    UBO Lite Pulled from Firefox Store by developer

    www.pcworld.com/article/2474353/popular-ad-blocker-removed-from-firefox-extension-store.html
  • Technology @lemmy.world

    A Novel Approach to Youtube Ads

    9to5google.com/2023/11/25/youtube-ads-speed-up-workaround/