
Posts 2 · Comments 418 · Joined 3 yr. ago

  • IKR, fuck those people with disabilities for inconveniencing you.

    How dare they use a physical impairment as an excuse to not do things to your satisfaction.

Selfish is what they are.


In case it wasn't abundantly clear, that was sarcasm.

  • Failing to do your best to deal with it.

    A small difference, but important.

  • Significant whitespace should be considered a crime against humanity as a whole. (Yes, I'm looking at you, Python and YAML.)

    I will die on this hill.
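    For anyone who hasn't been bitten by it, a minimal sketch of the complaint: in Python, shifting one line's indentation silently moves it into a different block, so two visually near-identical functions do different things. (The function names and data here are invented for illustration.)

```python
# Same tokens, different indentation, different behavior.
def total_evens(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total          # outside the loop: sums every even number

def first_even(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
            return total  # inside the loop: stops at the first even number
    return total

print(total_evens([1, 2, 3, 4]))  # 6
print(first_even([1, 2, 3, 4]))   # 2
```

    The only difference between the two is how far one `return` is indented; no brace or keyword flags the change.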

  • Human interaction is all politics to varying degrees.

    Unless you are an extreme outlier, someone at some point will need to interact with someone outside of the business to get the things the business needs.

    Unless your business is a dictatorship, coming to a consensus on what the business needs is going to require dealing with people.

    I agree that politics for personal gain or CV juice is bullshit though.

    Being competent should be enough; unfortunately that's not generally how it works, which is also bullshit.

  • Not necessarily; they have been known to shut down accounts they think are trying to evade a ban.

    Not sure how sophisticated it is, but it does exist.

  • If you look at a bullet-point list and see drama, I think I can safely ignore your threshold for "drama".

  • TL;DR;

    On by default is exactly what forcing interaction looks like.

    There are many effects regardless of whether or not you interact with the features directly.


    Disagreeing vehemently doesn't make you correct.

    Some of these things are on by default, that's the very definition of forcing an interaction, even if it's just to hunt through the settings to turn it off.

    "Regardless, just don't interact with the AI stuff and you won't even know the difference."

    It's so close to the same energy that it's even parroting the same kind of "make absolute statements without understanding the realities of the subject" mindset.


    Let's start with some basics.

    • At least some of these features are on by default, so from the get-go you are already being affected, even if it's just a minor annoyance.
      • You need to go out of your way to turn them all off (and keep on top of new additions, again, some of which are on by default).
      • They are also regularly "reset" during upgrades.
    • Assuming you're a regular user and don't religiously go turning all of these features off at the root, some of them will be part of daily interaction with the browser, incurring a performance cost.
      • Right-click context actions are a good example of this, as are AI tab grouping and the planned Link Preview (though the latter currently needs a key press to activate).
      • Even if you never use the tab grouping, the analysis needed to suggest groups is still a performance cost, possibly a non-trivial one.
    • Any new code or feature added to the browser contributes to bug and maintenance surface area, meaning resources used to develop, test and maintain that code.
      • Those are resources not being used on other, non-AI, bugs and features.
      • Project management is complicated, so it isn't a one-to-one ratio of resources from one thing to another, but it's still a non-zero percentage.

    Now for the slightly more esoteric:

    • LLMs as a whole are provably bad for the environment, power grids and water supplies; that covers both running them and generating the models.
      • People might not care about this, but not caring doesn't negate negative impacts in this case.
      • Some might consider the tradeoff to be worth it; that's fine, but you are still affected by it.

    I'm not arguing for or against LLM features (though for the record I'm not a fan of them being auto-included); I'm saying that your statement about not being affected if you don't use them is incorrect.

  • What do you mean by "cognitive empathy"?

  • TL;DR;

    • Using bad analogies to explain things that are already confusing helps no-one
    • AI is currently a marketing term used to push LLMs
    • Tools used appropriately garner satisfactory results.

    people need to specify that they’re against generative LLMs, like Chat-bots or slop-generators, not “all AI”.

    I agree; how does throwing out bad comparisons relate to that?

    There was just a thread on Twitter where a company showcased an amazing tool for animators - where you, for example, prepare your walking/sitting/standing animations, but then instead of motion-capturing or manually setting the scene up, you just define two keyframes - the starting and the ending position of the character… and then their AI picks the appropriate animations, merges between them and animates the character walking from one position to the other.

    It’s a phenomenal tool for creatives, but because the term “AI” appeared, the company got shat on by random people.

    If you are talking about Cascadeur or something similar, that doesn't use an LLM AFAICT; it's based on ML trained on their own internal data (or so they say).

    I don't disagree that tools used in a way that plays to their strength are useful.

    People often conflate AI with LLMs, which makes sense for the average person, because that's how it's been marketed and sold.

    LLMs aren't even really AI, but here we are.

    No. All generative graphical slop AIs and generic chat-bot LLMs have been trained on large corpus of data that has been obtained by various sketchy and illegitimate means.

    I was very specific in my wording, but as I said, I could be wrong; if you can point to any big commercial LLMs that don't adhere to my classification, I will concede the point.

    THAT’S the major difference.

    I mean, yes, that's what I said.

    So I stand by my conclusion that, in the context you laid out, Photoshop isn't a good comparison to most, if not all, of the current tools that would be considered AI.

    So, he basically says something that directly contradicts what you’re saying - he prefers the generative slop machines, than tools that actually help developers or artists.

    I could be wrong, but half of that statement was sarcasm.

    I basically read it as:

    So I’m gonna execute the code of someone who doesn’t know the first thing about coding on my computer? Great! I’d rather have AI art and human code.

    Running code someone vibed up without understanding what it's doing is stupid. If I had to pick one way around or the other, I'd rather have AI art (which in this case is significantly less of a security risk) and human code (which should potentially be of higher quality).

    I think the fundamental misunderstanding here is how the term AI is used.

    None of these things are really intelligent, and LLMs are predictive semi-hallucination machines cobbling together best guesses at what's supposed to come next in the sequence.

    The way I personally see it is that the latest-gen "AI" stuff is basically sitting on LLMs in some capacity: area recognition, language, image/code generation, etc.

    Anything else is just a normal (perhaps smart) tool, using algorithms of some kind, ML, etc.
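    To make the "best guess at what comes next" point concrete, here is a toy caricature: a hand-written bigram table standing in for billions of learned weights, with greedy pick-the-most-likely-next-token decoding. (The table and names are invented for illustration; real LLMs operate over subword tokens with learned probabilities, but the generation loop has the same shape.)

```python
# Minimal caricature of next-token prediction: from the current word,
# look up the most likely follower in a tiny, hard-coded bigram table.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def generate(start, max_tokens=5):
    tokens = [start]
    while len(tokens) < max_tokens:
        followers = BIGRAMS.get(tokens[-1])
        if not followers:  # nothing "learned" about this token: stop
            break
        # Greedy decoding: always take the highest-probability guess.
        tokens.append(max(followers, key=followers.get))
    return " ".join(tokens)

print(generate("the"))  # the cat sat down
```

    There is no model of meaning anywhere in that loop; it only ever asks "given the last thing, what usually comes next?", which is the sense in which "intelligent" oversells what is happening.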

  • Weak comparisons help no-one; Photoshop is nothing like LLMs.

    All of the big commercial LLMs (without exception, AFAIK) have been trained on a large corpus of data obtained by various sketchy and illegitimate means (some legitimate as well).

    That's the major difference between the two.

    If you are using a model that has only been trained on legally obtained data, disregard this point.

    I'm not even against competent use of LLMs as tools, but please use better arguments.

  • Pointing out potential hypocrisy isn't homophobic; that you think it is says a lot more about you than them.

  • Firstly, I said it "sounds like that", not "this is what they meant"; very different things.

    Secondly, what exactly is it you think happens when some of these false positives occur?

  • I suspect the downvotes are because it sounds like this:

  • It's less US-centric than it is catered to US sensibilities of narrative, but it's a valid criticism.

    The reason I mentioned it is that it deals with the exact situation outlined in the original post.

  • Reading a chart and understanding it are different things; understanding what it means in context is different again.

    This is a good book for learning the ins and outs of how to understand statistical data in general:

    https://en.wikipedia.org/wiki/How_to_Lie_with_Statistics

    It does come at it through the lens of intentionally deceptive practices, but it's a good general introduction as well.