
  • Yet, with the way the California bill is written, so long as that data was collected at account creation, it would be adequate.

  • They provide a suite of services, most of which can be provided in a private manner. Blowing a hole in that by providing email seems counterproductive. As I said, they could point you at a separate email service. Even if they provided that service, it could ensure an adequate break between the private services and the non-private.

    As a service, is it more privacy conscious than, say, Gmail? Yes, but you're still ultimately just asking the postman not to read your postcards.

  • The burden is still on the parents, but this would actually provide a useful tool for them to address that burden.

  • There's fairly clear evidence at a societal level that access to, for instance, hardcore pornographic material is harmful to children, but that is very different from having evidence that a particular child is currently being exposed to it.

  • To bring charges under those sorts of laws there's going to have to be some external evidence of harm. Either the kid is acting in a way that causes an agency sufficient concern that they investigate the family, or the government mandates much stricter monitoring of exactly who is doing what online. The former case is unlikely, but should probably be pursued vigorously when it does happen, and the latter case is something I imagine we all very much want to avoid.

    By providing a simple, privacy-conscious way of taking some of the burden of vigilance off of the parents (the child is less likely to stumble on inappropriate material), it makes it easier for them to provide actually beneficial guidance, and reduces the risk of law enforcement getting involved to investigate minor transgressions.

  • Ah, that's good to know; I looked at their signup page and it didn't have those options listed.

  • It's not the furry bit that's the problem, it's what you do with the pineapple and barbecue tongs that disturbs people, and the Lederhosen are just gratuitous.

  • Whilst parents absolutely should be guiding and helping the kids determine where they go online, and what they look at, I'm trying to envision where, or how, parents would be liable for them looking at something inappropriately "adult", barring actual child neglect.

    A system like this would actually help parents be more confident that little Johnny wasn't going to stumble across something inappropriate, because, yes, in a way this is about control. It's about controlling what kids are exposed to before they are intellectually ready for it. Yes, there are potentially serious issues around that, such as limiting access to LGBTQ+ or other helpful material for young adults, but that should be a discussion about what information is needed at each age, rather than how to indicate that age.

    Age-gating on the open internet will happen; I don't see any way that it won't. What matters is how it is implemented. We know that submitting government-issued ID to every site with potentially contentious content is a terrible idea; this neatly sidesteps the need for that, and actually forbids it.

  • It's one of the reasons I like the way the California bill has been written: it's very clear that you set the flag, or provide a date, and it not only makes no mention of verifying it in any way, but also requires that anything using it trust it and not perform any other checking. A service using that data is also explicitly not liable if it's wrong, so it has no incentive to check any further.

    It is, obviously, possible that laws will change in future, but it seems to me that having something like this in place actually makes it harder to implement anything more intrusive later.

  • The California law, at least, states the age flag should be set when the account is created, presumably by the controller of the computer, and holds that controller responsible for setting it correctly, and the developer responsible for ensuring it's set and works correctly; at least, that's my reading of it. If it's your computer, that makes you responsible for setting your age and that of accounts you create for your children.

  • Including an age flag field in user data on Linux is fairly trivial, and I've seen several proposals for it. Once that's in place it's up to browsers, "app stores", or anything else that needs it to request the data and use it.
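A minimal sketch of what consuming such a flag might look like (the file location, format, and function name here are all hypothetical illustrations, not from any actual proposal): a per-user file holding a birth date, which a browser or "app store" could read to derive an age.

```python
from datetime import date
from pathlib import Path
from typing import Optional

# Hypothetical per-user location for the age flag; a real proposal
# might instead extend the system account database.
DEFAULT_FLAG = Path.home() / ".config" / "age-flag"

def user_age(flag_path: Path = DEFAULT_FLAG,
             today: Optional[date] = None) -> Optional[int]:
    """Return the user's age in whole years, or None if no flag is set."""
    if not flag_path.exists():
        return None
    # The file is assumed to hold an ISO date, e.g. "2012-05-01".
    born = date.fromisoformat(flag_path.read_text().strip())
    today = today or date.today()
    # Subtract one year if this year's birthday hasn't happened yet.
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))
```

Anything needing an age signal would call `user_age()` and, per the bill's approach, simply trust whatever it returns rather than verifying further.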

  • This is why it's always struck me as unreasonable for Proton to claim they care about user privacy. If they did, they wouldn't provide an email service, as it is inherently impossible to adequately protect the metadata if it is sent to a different mail server. A better approach would be for them to explain why you can have email or privacy, but not both, and to point people to a separate service if they insist on email, so it is decoupled from any of their other services. Accepting payment through a means that isn't tied to your personal identity would be a good step too.

  • We risk "AI" destroying civilization not because it is stupid, but because we are. It doesn't matter what an LLM churns out; if people weren't daft enough to trust it, it couldn't do any harm (environmental impact aside, obviously; that could be addressed by switching them off when we stop using them).

  • Why would we be against stupid people?

    Because stupid people are easily led to act against their own, and others', interests.

    The good news is that stupid is curable; the bad news is that it involves the person in question putting in some effort, which they may not have to spare, and doing so against the commands of those manipulating them.

  • Once you've downloaded a massive Ram, your next download should be a massive Sheep, so you can create your own supply of fresh Rams.

  • The thing is, the US administration desperately needs other countries to be seen to be acting like this, so they can point and say "see, this is perfectly normal", rather than scrambling to avoid calling concentration camps concentration camps, or terrorists terrorists.

  • I didn't ask how big the room is, I said "I cast fireballgrenade"

  • I'm no fan of AI, but don't blame this on it, this is 100% organic slop. It has neither the A, nor the I.

  • We know they want the Iranian people to rise up, overthrow their government, and welcome the US forces as glorious liberators (IMHO the probabilities are maybe, possibly, no chance), so I wonder if they're trying to cause as much harm as possible so people will go "if we just overthrow our government, the US will stop hurting us". It would be psychopathic of the US to think so, but I have seen no evidence to suggest that makes it impossible.

    Of course, now that they're using "AI" they get to blame that for any targeting "mishaps" that become a liability.