Posts: 2 · Comments: 418 · Joined: 3 yr. ago

  • By the sound of it, the disagreement is mostly in how direct an impact AB1043 will have on government plans for data collection and authoritarianism.

    That's not really the original disagreement I was referencing, nor is it a position I've taken; we agree that the local-only bill isn't the big bad.

    You twice referenced the slippery slope fallacy when replying to comments clearly describing future actions. I was pointing out that it doesn't meet that criterion, because there is a reasonable assumption that the described escalation will occur.

    Your original responses to which I was referring:

    This is a slippery slope fallacy. Just because the option is provided to self-identify age, doesn’t mean that it will be replaced with more complex and direct data collection (which I am against, if it wasn’t clear) later

    You’re again relying on slippery slope fallacy to say that because I’m okay with this one specific form of age gating, I’m okay with every other one, which I have repeatedly made clear is not true.

    The first one is the main issue I was pointing out; the second one isn't how the fallacy is applied at all.

    As no one is taking the position that AB1043 is the actual danger, most of what you are arguing doesn't really apply.

    Similarly with the Overton window, where it has been standard practice for over a decade to have an “are you at least 18?” popup, and for every single service to ask you your age, if not more. We absolutely need more data protections for systems such as this (ideally an outright ban on saving this information) but this doesn’t seem to make it worse.

    Emphasis mine.

    Hard disagree; moving the responsibility for this from individual websites to the OS is a big jump in scope.

    The same kind of jump as making it the ISP's responsibility if they serve illegal content from individual websites (as has been suggested).

    Aside from that it centralises the surface area for future changes and enforcement.

    Basically, from my understanding, this isn’t a step towards data collection or authoritarianism, and provides no significant benefit to either of those causes - it's effectively a technical standard.

    This is the disagreement: I (and obviously many others) are pointing at the long and comprehensive list of similar initiatives, both recent and historic, that were stepping stones to further encroachment, and saying "oh look, another small step in the continued and provable encroachment upon privacy", while you seem to be advocating for the benefit of the doubt.

    Like, if this age-verification flag was proposed by the Linux Foundation, and agreed to by others, would the backlash be this big?

    If the Linux Foundation had the same history of shenanigans, then yes.

    Similarly, I don’t see any contradiction between wanting a ban on storage/sharing of user data, and the implementation of a flag like this - even if we are able to ban all storage of user data, this law would be unaffected. That’s what I’m trying to figure out - how do people think that this leads towards those end goals? How would blocking it improve anything?

    Ignore the technical implementation of this one step, nobody is saying this is the endgame big bad.

    Think of it as a prevention measure: a single ant in the kitchen isn't a problem in and of itself, but it's almost certainly an indication of a larger potential future problem.

    You are arguing it's not a problem because the ant only has five legs; everyone else is saying the leg count doesn't matter, it's still an ant.

    Is it just a difference in opinion about the significance of the Overton window?

    See above

    Is there a technical aspect I’m missing?

    Not necessarily; it's just that you are arguing a single technical issue in a conversation about perceived intentionality.

    Is there some legal advantage this provides to surveillance that I’ve missed?

    See above

    Right now, it seems like everyone is arguing against a strawman, implying that I support the idea of government/corporate surveillance and censorship, that I don’t expect that they’ll continue to be evil, or they’re simply saying it's bad because it's cosmetically similar to laws that do impede on freedoms. Given how unanimous the backlash is, I must be missing something?

    You are using a point nobody disagrees with to imply correctness in a context where said point doesn't really apply, which makes it seem like you are coming at this in bad faith.

    When bad faith is assumed, people look for underlying reasons.

  • Or the avocado is bland? Not all avocados are built equally.

    I would hedge that the penis consists of more than just regular skin; there is a fair amount of erectile tissue in there as well, though I can't vouch for a scientific difference in the taste experience.

  • In the case of ducks, that's quack on quack crime

  • Ah, I think I see where the difference in opinion is. Claiming this event leads directly to (as in, the very next step is) AI/ID verification could be considered an unreasonable jump, I suppose.

    In my case I was interpreting the argument as: this event will almost certainly lead to further encroachment events into privacy, one of which would probably be the AI/ID verification.

    To me this is a reasonable assumption because it's what has happened in pretty much all of the recent instances of similar events occurring, and therefore not a slippery slope fallacy.


    TL;DR

    On further examination, the technical things you mention seem to be correct if you assume that this bill alone is the vector for privacy encroachment, but they don't pan out at all if you assume that other steps will follow; which, given precedent, is highly likely.


    On the technical implementation:

    The reason it's a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, in spite of the fact that AB1043 implements a system that would be redundant with AI or ID based methods,

    As an aside, I'm not sure anyone is claiming that this bill is a direct attempt at a hard AI/ID verification system; rather, they are claiming that this is another step in a series of encroachments that will lead to escalating requirements and enforcement, AI/ID verification being an obvious step in that series.

    From a technical standpoint you are correct: it outright states that photo ID upload isn't required, yet.

    Opinion: a cynic might see this as an indication that the politicians understand that political and public appetite for full photo ID requirements is less than optimal, so this is just a small step in shifting the Overton window on this subject.

    technically doesn’t offer any good way to transition into an AI or ID based system (since it all has to be done locally),

    That is only correct in a very narrow set of circumstances; that local requirement isn't set in stone at all.

    All that needs to happen to go from this to full ID checks is to mandate the use of a "trusted" service for verification. It wouldn't need to be an always-online thing either; think of how the bullshit online verification systems that already exist work, i.e. you need to go online every X days or your system/service/app stops working.

    Opinion: I fully expect any "trusted" service they designate to be something that serves the governmental and corporate desire for as much data as they can get away with. This isn't even a stretch; just look at the service Discord was trying to implement, the one with deep ties to Palantir.
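    To illustrate the "check in every X days" pattern, here is a minimal sketch of that kind of periodic re-check. Every name and number is hypothetical (not anything from AB1043 or a real service); the point is just that "local only" collapses to "phone home on a schedule" with a couple of lines of policy.

    ```python
    import time

    # Hypothetical sketch: a verification result is cached locally and only
    # re-checked against a remote "trusted" service once a grace period
    # expires. All names and the grace period are illustrative.

    GRACE_PERIOD = 30 * 24 * 60 * 60  # 30 days, in seconds


    def needs_recheck(last_verified, now=None):
        """True once the cached verification is older than the grace period."""
        if now is None:
            now = time.time()
        return (now - last_verified) > GRACE_PERIOD


    # A fresh verification passes silently; a stale one would force the
    # system/service/app to phone home (or stop working).
    assert not needs_recheck(last_verified=time.time())
    assert needs_recheck(last_verified=time.time() - 31 * 24 * 60 * 60)
    ```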

    and legally, imposes additional data protection laws that are likely to interfere with AI-based age verification.

    This isn't wrong so much as it seems naive; we are talking about bills that change laws, and any law introduced can be revoked, superseded, or have "exceptions" carved out, such as the current favourite "think of the children" thin veneer they are using.

    It wouldn't take much to move from "all data is protected" to "all data is protected, unless we need it to protect the children"

    That's not even taking into account that the laws are only as good as the system upholding them; the current US system is sketchy AF, and other countries have similar issues with uneven application of laws.

    Not to say we should throw our hands up, say "what’s the point?" and just do nothing, but pretending that these laws aren't susceptible to the same issues affecting everything else doesn't help anyone either.

    The problem with AI and ID age verification isn’t the age verification. It's the data collection, limits on personal freedom, and to some, the inconvenience.

    Agreed.

    So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but also adds stricter regulation on data collection when this flag is used,) or personal freedoms - especially not when compared to what is already the existing standard of asking the user for their age and/or if they’re over 18.

    Mostly agreed.

    The points I'd raise are that the whole idea of age verification is an encroachment upon personal freedoms for some, so there's an aspect of subjectivity to that.

    In addition, relying on data collection regulations at this point is almost dangerously naive. Corporations and governments alike have shown that they will basically ignore them outright or make up some exception; this isn't conjecture, this is something easily searchable: think Flock, Ring cameras, Stingray, PRISM, anything Palantir is involved in, Cambridge Analytica, broad warrantless data requests, etc.

    There is absolutely no reason to give the benefit of the doubt to parties that have repeatedly proven to be doing sketchy shit.

  • The fallacy is the expectation that escalating follow-on events would arise from the event in question.

    It's only a fallacy if it's unreasonable to expect the subsequent steps to occur, or in this case, be attempted.

    Does that mean it's a guarantee? Of course not; just that the fallacy doesn't apply.

    The intention or plan for escalating steps doesn't have to be laid out perfectly to draw the parallels between this and previous similar events that were then subsequently used as foundations for greater reach.

    Your reasoning around the technical implementation of such escalation isn't applicable here (in the conversation about whether or not the fallacy applies).

    If you want to argue that they won't escalate, or that it's not possible, go right ahead, but raising a fallacy argument when it doesn't apply isn't a good start.

    If you want, I can address your arguments around implementation directly, as a separate conversation? I don't think you're correct on that either, but as I said, I also don't think correctness in that subject matters in the context of the fallacy.

  • If you're going to reference the slippery slope fallacy so much, you should probably read up on where and when it actually applies.

    From the Wikipedia entry:

    When the initial step is not demonstrably likely to result in the claimed effects, this is called the slippery slope fallacy.

    You yourself just acknowledged that the worst case is already happening, so the assumption that the worst case will continue to happen is reasonable.

    Unless you wish to argue that:

    The worst-case scenario is already happening

    followed by you saying

    Okay, but

    isn’t an acknowledgement?

  • My initial thoughts are that my original ask was this:

    because I’ve yet to hear about anything above a toy project that has had any verifiable success with AI code generation as a major component of their workflow.

    and the example you provided was a toy project used as a publicity stunt.

    On the technical side, I don't know enough Rust to be able to weigh in on the technical accuracy of the project.

    The ability of current LLMs to churn out something that looks relatively good at first glance isn't my point of contention; most of us know they can do that.

    I'm just looking for a single medium-to-large project that is successfully being used in production (close to production is also fine) that was created with significant LLM involvement.

    There is so much talk around this that the fact I haven't come across any mention of a successful deliverable (in the context I mentioned) raises all sorts of red flags for me, personally.

    I'm not trying to catch you out; it's just that I haven’t seen one, so I was wondering if you have. If you haven't, that's fine; it's not a trap.

    I think it shows a lot of limitations but also a lot of potential. I don’t personally think the AI needs to get the code perfect on the first go – it has to be compared to humans and we definitely don’t do that.

    Iterative progress is generally the way of things, but most non-trivial agentic workflows already work with iterative code generation and testing, so expecting a correct solution at the end of that process is more reasonable than you would think.
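    As a rough sketch of what that generate-and-test loop looks like: produce a candidate, run the test harness, feed the failure back, repeat. The "LLM" here is a stub and every name is illustrative, not any real agent framework's API.

    ```python
    # Minimal sketch of an iterative agentic loop: generate a candidate,
    # run tests against it, feed failures back, and repeat until the
    # tests pass or we give up. All names here are hypothetical.

    def iterate_until_passing(generate, run_tests, max_rounds=5):
        """Loop candidate generation against a test harness."""
        feedback = None
        for _ in range(max_rounds):
            candidate = generate(feedback)
            ok, feedback = run_tests(candidate)
            if ok:
                return candidate
        return None  # gave up: a correct solution is likely, not guaranteed


    # Stub "LLM": wrong on the first attempt, corrected once it gets feedback.
    def fake_llm(feedback):
        if feedback is None:
            return "def add(a, b): return a - b"
        return "def add(a, b): return a + b"


    def harness(code):
        ns = {}
        exec(code, ns)  # run the candidate, then check its behaviour
        ok = ns["add"](2, 3) == 5
        return ok, None if ok else "add(2, 3) returned the wrong value"


    print(iterate_until_passing(fake_llm, harness))  # the corrected candidate
    ```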

    The difference between people and LLMs is the type of interactions you have with them; you can ask the LLM to explain why it did something, but if you've ever tried that, I'm sure you can understand why it's not the same as the kind of answers you'd get from a person.

    Yes, of course. I think it’s important to look past the blowhards and think about what it’s actually doing: that is the perspective I’m trying to talk about this from.

    As am I. I'm not against LLM usage; I'm against the pretense that it has capabilities it does not, in fact, have.

    Selling something on the basis of it being able to do something it can't do is where the term "snake oil salesman" comes from.

  • That's on me; I meant the equivalent of a "trust me bro", in this case an anecdotal "me and the people I know all say..."

    showing Claude submissions is sufficient for analyzing code in the context I believe it is good

    Yes, in the context you provided it makes sense; as a response to my question, which specified examples of larger projects/workflows, it does not.

    I'm not here to argue either; I asked a specific question and your answer didn't really address any of it. I was just pointing that out.


    I too find it frustrating, but it seems for different reasons.

    I really really dislike the way it's being sold as a solution for things it's in no way a solution for.

    They do certain things fine, good even, but blanket statements like "their code is great" without appropriate qualifiers contribute to the validation of these bullshit sales-oriented claims of task competency.

    1: agreed

    2: then I think you are missing the fundamental limitations of the current approaches, but we can agree to disagree on this.

    3: see 2

    I agree with jobs being on the chopping block, though I think that's in large part due to poor due diligence and planning by management. That's nothing new, though; the same thing has happened and is still happening with offshoring (throwing more people at a problem generally won't solve design and governance issues).

    I also think the current systems aren't capable of being a viable replacement for anything above junior-level stuff, if that (not that that doesn't present its own problems).

    I think the difference in opinion comes from my belief that LLMs and the current tooling around them aren't fundamentally capable of replacing existing resources, not that they just don't have the power yet.

    Putting increasingly large compute in a calculator won't magically make it a spreadsheet application.

  • I appreciate the answer, but that's not at all what I asked.

    I have anecdotes and personal experience I could cite, but that's not particularly helpful in a general sense.

    Pointing to Claude submissions in projects is actively less than helpful in this case, because it only proves that single files in isolation look well written; it gives no indication of overall project quality.

    People that I know to be good developers have also shared their experiences with it and say yes, it has written good code for them. I’ve personally used ChatGPT to generate very mundane tasks and the code it output was more than adequate.

    So in a very limited context the code generated for you personally was acceptable. That's great, and I've found much the same, but it's a far cry from "AI writes great code; I think we just want it to suck."

    It's somewhat my bad though; when I say "citation" I don't need a full research paper (though that would be nice), I'd just like something a bit more substantial than a "trust me bro".

    It introduces security bugs and subtle bugs at probably the same rate as a human (I have no “citation” there, just what I’ve seen)

    That's a load-bearing "probably"; my experience has been the polar opposite. I've been involved in two major AI initiatives and both choked hard on security and domain bugs. That could very well be a project-management or company-specific issue, hence the search for successful projects to compare.

    My quest continues.

  • It really doesn’t suck at them. AI writes great code; I think we just want it to suck.

    Citation? I’m really asking because I’ve yet to hear about anything above a toy project that has had any verifiable success with AI code generation as a major component of their workflow.

    As in a like-for-like improvement in code quality, security, bug occurrence and severity, developer efficiency, all that jazz; not just the standard "we've funnelled so much money into this we are almost fiscally required to claim success".

    It's not a dig; I really want to see one so I can find out how it was done.

  • I think it honestly might just be my Sublime muscle memory ruining it for me; that, and it feels like it should be more IDE than text editor.

    I use Helix quite a bit for dev, so LSPs and editor-based coding are known to me.

    Perhaps it was just bad timing.

  • Do you have any good resources for how to use Kate in a dev scenario?

    I've tried multiple times, but it always seemed clunky to me.

    As a text editor it's great, though I prefer Sublime (not FOSS, however), but I haven't been able to get it to click as any kind of IDE or part of one.

  • Not who replied to you originally but,

    You aren't wrong (you even stated that more is probably better), just not necessarily presenting the whole picture.

    RAM compression isn't a benefit-only scenario; there is a cost in processing power to make it happen.

    So it's a trade off of memory utilisation vs processing requirements.

    Whether or not it's worth it is down to circumstance, though generally I think it's worth the tradeoff.
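    The tradeoff is easy to demonstrate: compressing a chunk of in-memory data saves space, but burns CPU time on every compress/decompress round trip. A small sketch using Python's zlib (the data and timings here are illustrative, and numbers vary by machine and by how compressible the data is):

    ```python
    import time
    import zlib

    # Rough illustration of the memory-vs-CPU tradeoff of RAM compression:
    # the space saved is paid for with processor time on every round trip.

    page = b"some fairly repetitive in-memory data " * 1024  # ~38 KB

    t0 = time.perf_counter()
    compressed = zlib.compress(page)
    restored = zlib.decompress(compressed)
    cpu_cost = time.perf_counter() - t0

    assert restored == page             # lossless round trip
    assert len(compressed) < len(page)  # repetitive data compresses well
    print(f"saved {len(page) - len(compressed)} bytes in {cpu_cost * 1e6:.0f} µs")
    ```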

    Unified memory is useful in specific circumstances, most notably LLM/ML scenarios where high VRAM utilisation is part of the process.

    It's not an apples to apples comparison by any means.

  • I appreciate it.

    Yeah, I’m on the default, but I'll explore the other ones now and see if there is anything I prefer.

  • You’re making quite a lot of frankly weird assumptions.

    I've clearly stated what I'm referring to and how I got there; if you think there is an unsupported statement, then reference it directly and I will respond.

    That being said... fuck, I think I've seen two posts next to each other and missed where it changed from them to you.

    That's entirely my bad and I apologise; my response was supposed to be for the other person.

  • Perhaps summed up as "no true leftist"?

  • "You can't reason someone out of a position they didn't reason themselves in to."

    Though it is occasionally possible to point out how their arguments don't stand up to scrutiny and get them to engage on it.

    Only works with the ones not doing it on purpose, however.

  • Key words there are discourse and discussion.

    As is explained in a few responses to your paradox of tolerance reply (that you seem to have conveniently not replied to so far), the kind of discussion or conversation they are referencing requires both parties to be working in good faith.

    from your own reference

    as long as we can counter them by rational argument

    If one party can't or won't provide logic or reasoning to their side of an exchange, that's not a discussion because there is nothing to discuss with someone not willing to engage in good faith.

    There are absolutely places that are ideological echo chambers despite claiming otherwise, but banning someone for the inability (or unwillingness) to engage in good faith isn't a removal based on ideology; it's a removal based on not adhering to the basic tenets of how discussions are supposed to work.

    If it just so happens that most of that kind of banning happens to people with ideologies you subscribe to, perhaps it's worth considering how you can help these people understand how to have an actual conversation.

    That all being said, from what I've seen here I'd guess you're on the purposeful bad-faith side of things, so I'm not expecting any reasonable consideration, but feel free to surprise me (or block me, I suppose).

  • The DSM doesn't include that specific diagnosis anymore, right? It's all the ASPD and DPD spectrum now?

    They removed it because of the absolute shitshow that was trying to reliably diagnose psychopathy as it was originally described (and possibly the negative connotations associated with the word itself).

    So now they have a series of metrics to measure things they can somewhat reliably measure over time.

    Like how the medical diagnosis of "idiot" doesn't exist anymore, but there are more accurate and nuanced terms and diagnoses for intellectual disabilities in various forms.

    I could be wrong, however; my understanding of this area of research is middling at best.

  • Selfhosted @lemmy.world

    Self Hosted SCM & CI/CD Chicken and Egg

  • DevOps @programming.dev

    Self Hosted SCM & CI/CD Chicken and Egg