Posts: 2 · Comments: 449 · Joined: 9 mo. ago

  • Not really, please see my response to towerful's sibling comment to save me duplicating it.

  • These age band laws basically work in the opposite way to the usual parental controls. Rather than having to install and maintain the control software and having the filtering at the client end of the connection, parents need only set a flag and filtering will occur at the source end of the connection.

    Will these laws provide perfect protection that eliminates the need for parental oversight of children's internet access? No. Will they help stop kids accidentally stumbling into unsuitable content, reducing harm overall? Yes. As a parent, one of the things I worry about is my kids browsing sites such as YouTube. Even if they're using it for research for school projects, I can never be certain it won't prompt them to watch an unsuitable video. With a simple "this user is a child, don't show them anything unsuitable" flag, I wouldn't have to spend so much energy monitoring everything and could spend more energy talking to them about what they're actually watching.

    One of the key parts of the Californian law is that if the client machine sends the flag, the service must treat it as authoritative, and should not use other means of checking. That is good news, as it means there is no incentive for sites to integrate more intrusive measures such as third parties scanning government-issued ID.

  • I don't think there would be any difficulty with a kid setting up a computer, as in most jurisdictions the parents are responsible for their children's actions until they are adults themselves. So the parents would still be responsible for what the kid did with the computer in the same way they often are now.

  • That's one bit that I do think could do with clarifying. As written, responsibility seems to be split between the developer and the controller. From the rest of the bill, it seems like the developer is in the clear if the system functions, and it's down to the computer controller to ensure users are correctly set up.

  • I'm not sure what you mean by "massive leak able tracking" in this case. It's literally a flag that indicates the user's age bracket, and means sites don't use the really invasive options.

  • No, and this wouldn't be impossible to bypass either. I don't think the aim is 100% perfection, so much as harm reduction, and I don't think you'll get more than that no matter how onerous the law becomes. Most kids, most of the time, are not going to be trying to circumvent it, and it would still be up to the parents to look out for cases where they were.

    The current proposal requires storing and transmitting a flag that can take one of four values (under 13, 13-16, 16-18, 18+), and prohibits sites using other means of age verification. It'll work adequately to stop kids accidentally seeing pornography, and hopefully things like Andrew Tate, giving the parents some space to do their part to help their kids learn how to understand what they might be exposed to.
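    To make the mechanism concrete, here's a minimal sketch of how a site might honour a client-sent age-bracket flag. The header name `Sec-Age-Bracket`, the bracket spellings, and the default-to-most-restrictive choice are all my assumptions for illustration; the bill doesn't specify a wire format.

    ```python
    # Hypothetical sketch: server-side handling of a self-declared age bracket.
    # Header name and values are assumptions, not from the bill's text.
    BRACKETS = {"under-13", "13-16", "16-18", "18-plus"}

    def age_bracket(headers: dict) -> str:
        """Return the client's declared bracket, falling back to the most
        restrictive bracket when the flag is absent or malformed."""
        value = headers.get("Sec-Age-Bracket", "").strip().lower()
        return value if value in BRACKETS else "under-13"

    def may_show_adult_content(headers: dict) -> bool:
        # The flag is treated as authoritative: no further verification.
        return age_bracket(headers) == "18-plus"
    ```

    The key point is what the code *doesn't* do: there's no ID check, no lookup against a third party, just trust in the client's flag.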

  • Yet, with the way the California bill is written, so long as that data was collected at account creation, it would be adequate.

  • They provide a suite of services, most of which can be provided in a private manner. Blowing a hole in that by providing email seems counterproductive. As I said, they could point you at a separate email service. Even if they provided that service, it could ensure an adequate break between the private services and the non-private.

    As a service, is it more privacy conscious than, say, Gmail? Yes, but you're still ultimately just asking the postman not to read your postcards.

  • The burden is still on the parents, but this would actually provide a useful tool for them to address that burden.

  • There's fairly clear evidence at a societal level that access to, for instance, hardcore pornographic material is harmful to children, but that is very different to having evidence that a particular child is currently being exposed to it.

  • To bring charges under those sorts of laws there's going to have to be some external evidence of harm. Either the kid is acting in a way that causes an agency sufficient concern that they investigate the family, or the government mandates much stricter monitoring of exactly who is doing what online. The former case is unlikely, but should probably be pursued vigorously when it does happen, and the latter case is something I imagine we all very much want to avoid.

    By providing a simple, privacy-conscious way of taking some of the burden of vigilance off of the parents (the child is less likely to stumble on inappropriate material), it makes it easier for them to provide actually beneficial guidance, and reduces the risk of law enforcement getting involved to investigate minor transgressions.

  • Ah, that's good to know. I looked at their signup page and it didn't have those options listed.

  • It's not the furry bit that's the problem, it's what you do with the pineapple and barbeque tongs that disturbs people, and the Lederhosen are just gratuitous.

  • Whilst parents absolutely should be guiding and helping the kids determine where they go online, and what they look at, I'm trying to envision where, or how, parents would be liable for them looking at something inappropriately "adult", barring actual child neglect.

    A system like this would actually help parents be more confident that little Johnny wasn't going to stumble across something inappropriate, because, yes, in a way this is about control. It's about controlling what kids are exposed to before they are intellectually ready for it. Yes, there are potentially serious issues around that, such as limiting access to LGBTQ+ or other helpful material for young adults, but that should be a discussion around what information is needed at each age, rather than how to indicate that age.

    Age gating on the open internet will happen; I don't see any way that it won't. What matters is how it is implemented. We know that submitting government-issued ID to every site with potentially contentious content is a terrible idea; this neatly sidesteps the need for that, and actually forbids it.

  • It's one of the reasons I like the way the California bill has been written: it's very clear that you set the flag, or provide a date, and not only makes no mention of verifying it in any way, but also requires that anything using it trusts it and may not perform any other checking. A service using that data is also explicitly not liable if it's wrong, so they have no incentive to check any further.

    It is, obviously, possible that laws will change in future, but it seems to me that having something like this in place actually makes it harder to implement anything more intrusive later.

  • The California law, at least, states the age flag should be set when the account is created, presumably by the controller of the computer, and holds that controller responsible for setting it correctly, and the developer responsible for ensuring it's set and works correctly; at least, that's my reading of it. If it's your computer, that makes you responsible for setting your age and that of accounts you create for your children.

  • Including an age flag field in user data on Linux is fairly trivial, and I've seen several proposals for it. Once that's in place it's up to browsers, "app stores", or anything else that needs it to request the data and use it.
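    As a sketch of how that might look on the client side: a browser or "app store" could read a per-user flag from a config file in the user's home directory. The path `~/.config/age-bracket.conf` and the key names are purely my invention; no such standard field exists today.

    ```python
    # Hypothetical sketch: a client application reading a per-user age
    # bracket on Linux. File path and key names are assumptions only.
    import configparser
    from pathlib import Path

    def read_age_bracket(home: Path) -> str | None:
        """Return the bracket stored for this user, or None if no flag
        has been set (the caller decides how to handle that)."""
        cfg = configparser.ConfigParser()
        path = home / ".config" / "age-bracket.conf"
        if not cfg.read(path):
            return None
        return cfg.get("user", "bracket", fallback=None)
    ```

    The browser would then pass that value along with requests, and the services at the other end would do the filtering.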

  • This is why it's always struck me as unreasonable for Proton to claim they care about user privacy. If they did, they wouldn't provide an email service, as it is inherently impossible to adequately protect the metadata if it is sent to a different mail server. A better approach would be for them to explain why you can have email or privacy, but not both, and to point people to a separate service if they insist on email, so it is decoupled from any of their other services. Accepting payment through a means that isn't tied to your personal identity would be a good step too.

  • We risk "AI" destroying civilization not because it is stupid, but because we are. It doesn't matter what an LLM churns out; if people weren't daft enough to trust it, it couldn't do any harm (environmental impact aside, obviously; that could be addressed by switching them off when we stop using them).

  • PieFed Meta @piefed.social

    An option to be notified of responses to all comments under a specific one

  • PieFed Meta @piefed.social

    QoL Feature Request - Have a way to avoid being auto subscribed to communities on signup