
  • where anyone thinks it's ok or normal to recommend suicide to people

    Except that's already happening even without it being normalized; there have always been assholes who'll tell people to kill themselves, especially if they've never met the person they're talking to. I don't see how this is any different.

    Literally the whole thing would not have happened without the policy.

    It also wouldn't have happened if a fucked up system wasn't withholding actual, reasonable alternatives that the person was clearly asking for. That's my point. Let's fix the actual problems, rather than try to silence the symptoms.

  • ...and did you notice how everyone was outraged by that? That incident was not an issue with assisted suicide being available; it was an issue with fucked up systems withholding existing alternatives, and with a tone-deaf case worker (who is not a doctor) handling impersonal communications. Maybe it's also an issue with this kind of decision being left to a government worker instead of medical and psychological professionals. But nothing about it would have been made better by assisted suicide not being generally available to people who legitimately want it; the only difference is that the actual problem wouldn't have been put into the spotlight like this.

  • I don't want to create a future where, "I've tried everything I can to fix myself and I still feel like shit," is met with a polite and friendly, "Oh, well have you considered killing yourself?"

    Are you for real? This kind of thing is a last resort that nobody is going to suggest unprompted to a suffering person; it only comes up if that person asks for it themselves. No matter how "normalized" suicide might become, it's never gonna be something doctors will want to recommend. That's just... Why would you even think that's what's gonna happen?

  • Maybe we should clarify what a slur is? Because to my knowledge, a slur is a term that has such negative connotations that it is considered offensive and discriminatory against a certain group of people in itself, without any additional context. You simply do not use it unless you want to insult or offend someone from that group. If a term is only offensive based on how it's used, it's just a regular insult, not a slur.

    So, "can be used as a slur" is not a thing. A word is either a slur, or it isn't. Neither trans nor cis are slurs at the moment. I've never seen trans be used as an insult before. And even cis is almost never meant as a direct insult, merely as a reminder that someone is talking about things they have no lived experience with and should probably check their privilege. Yes, that can be in a demeaning way, but the goal there is not to hurt you, but to make you piss off. It's an act of self protection. Nobody is seeking cis people out and starting to call them names unless they insert themselves into trans spaces and start talking shit about trans issues. If you're doing that, and getting told off insults you or hurts your feelings, then, frankly, that's a you problem.

  • ...yeah, it is. What are you implying?

  • Except the email in question is not a newsletter. Companies often use separate mailing-list services for important product announcements and similar things as well. Obviously there should be a process in place that removes you from these external services too when you delete your account, but I assume that's what broke down in this case.

  • It's not quite that simple, though. The GDPR is only concerned with personal data, i.e. information that can identify a person. Answers and comments on SO rarely contain that kind of information once you remove the username from them, so keeping the contents isn't technically against the GDPR.

  • And science fiction somehow can't be fascist?

  • I was thinking of an approach based on cryptographic signatures. If all images that come from a certain AI model are signed with a digital certificate, then you can tamper with metadata all you want: you're not gonna be able to produce the correct signature for an image unless you have access to the certificate's private key. This technology has been around for ages, is used by every web browser, and would be pretty simple to implement (rough sketch below).

    The only weak point with this approach would be that it relies on the private key not being publicly accessible, which makes this a lot harder or maybe even impossible to implement for open source models that anyone can run on their own hardware. But then again, at least for what we're talking about here, the goal wouldn't need to be a system covering every model, just one that makes at least a couple models safe to use for this specific purpose.

    I guess the more practical question is whether this would be helpful for any other use case, because if not, I highly doubt it's gonna be implemented. Nobody is gonna want the PR nightmare of building a feature whose only purpose is to help pedophiles generate stuff to get off to "safely", no matter how well-intentioned.
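
    To make the signing side concrete, here's a minimal sketch in Python, assuming the widely used `cryptography` package and an Ed25519 key; the function name and the idea of shipping the signature in a metadata field or sidecar file are just my assumptions for illustration:

    ```python
    # Minimal sketch (illustrative names): the model operator signs every
    # generated image with a private key that never leaves their servers.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Generated once by the model operator and kept secret (ideally in an HSM).
    private_key = Ed25519PrivateKey.generate()

    def sign_image(image_bytes: bytes) -> bytes:
        # The signature ships alongside the image, e.g. in a metadata
        # field or a sidecar file.
        return private_key.sign(image_bytes)

    # The matching public key is published so anyone can verify signatures.
    public_key_pem = private_key.public_key().public_bytes(
        Encoding.PEM, PublicFormat.SubjectPublicKeyInfo
    )
    ```

    One caveat with signing the raw file bytes is that even a lossless re-encode would invalidate the signature, so a real scheme would probably want to sign some canonical pixel representation instead.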

  • Yeah, but the point is that you can't easily add it to any picture you want (if it's implemented well), which provides a way to prove that a picture was created using AI and that no harm was done to children in its creation (see the verification sketch below). It would be a valid solution to the "easy to hide actual CSAM among AI-generated pictures" problem.
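
    Verification could then be done by anyone against the operator's published public key; again a minimal sketch under the same assumptions (Python with the `cryptography` package, illustrative names):

    ```python
    # Minimal sketch: a valid signature proves the image bytes came from
    # the signing service and haven't been modified since.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.serialization import load_pem_public_key

    def was_signed_by_model(image_bytes: bytes, signature: bytes,
                            public_key_pem: bytes) -> bool:
        public_key = load_pem_public_key(public_key_pem)
        try:
            public_key.verify(signature, image_bytes)
            return True
        except InvalidSignature:
            return False
    ```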

  • AI is just impossibly far away.

    Sure, it's pretty far away, but it's also moving at breakneck speed. Last year, low-res spaghetti-eating Will Smith body horror was the pinnacle of AI-generated video; today we're already generating videos that take at least a second look to determine that they're AI generated. The big question is at what point that improvement rate will start to level off.

  • I mean... It might be. Just depends on how much potential there still is to get models up to higher reasoning capabilities, and I don't think anyone really knows that yet

  • I can get behind that

  • Ohh the "what time is it in films" argument is good, haven't heard that one before, thanks

  • It's gonna get much worse when you start to try mapping days of the week onto the new times. Are days gonna be the same everywhere as well, running from 0:00 to 24:00? If so, have fun saying things like "Let's find a time on Wednesday/Thursday". People likely couldn't be bothered and would probably just use the day their normal wake-up time falls on to mean the full solar day instead. At which point you could also just say, okay, weekdays still follow local solar days. But now what weekday is it halfway around the world? Now you need to look up their solar day (small demo below).

    All this to say: abolishing time zones just introduces the reverse of every problem it seemingly solves. You can't change the fact that our planet rotates and that people in different locations will follow different schedules. Turning the lookup table upside down is a cosmetic change that doesn't remove the underlying situation causing the confusion. I'd rather stick with the set of problems we're already used to dealing with.
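
    For what it's worth, here's a small Python demo of the weekday problem; the city choices are arbitrary, but on either side of the date line the same universal instant already falls on different weekdays, and a universal clock wouldn't change that:

    ```python
    # Demo: one universal instant, different weekdays depending on location.
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # stdlib since Python 3.9

    now = datetime.now(timezone.utc)
    for label, zone in [("UTC", "UTC"),
                        ("Auckland", "Pacific/Auckland"),
                        ("Honolulu", "Pacific/Honolulu")]:
        local = now.astimezone(ZoneInfo(zone))
        print(f"{label:9} {local:%A %H:%M}")
    # Late in the UTC day, Auckland is already on the next weekday while
    # Honolulu is still on the previous one -- the lookup doesn't go away.
    ```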

  • Ah, yeah, for employment that's different, sure. That doesn't really seem to be a thing here in Germany (might even be illegal?), so I didn't think of that.

  • Perhaps that could make drug tests unconstitutional.

    Heavily depends on the context, I'd say? Driving drunk should absolutely stay illegal, and drug tests for that would be a necessity, I guess.

  • Drugs can be regulated by availability, not by illegality of ingestion

    I generally don't disagree with you, but I just want to point out that killing legal ways to get drugs usually doesn't stop people from getting them; it just makes the black market flourish and makes it harder to make sure you're getting clean stuff. When it comes to drugs, efforts need to go into education, prevention and rehabilitation, rather than criminalizing any part of the process.