All human interaction is politics, to varying degrees.
Unless you are an extreme outlier, someone at some point will need to interact with people outside of the business to get the things the business needs.
Unless the business is run as a dictatorship, coming to a consensus on what it needs is going to require dealing with people.
I agree that politics for personal gain or CV juice is bullshit though.
Being competent should be enough; unfortunately, that's not generally how it works, which is also bullshit.
On by default is exactly what forcing interaction looks like.
There are many effects regardless of whether or not you interact with the features directly.
Disagreeing vehemently doesn't make you correct.
Some of these things are on by default; that's the very definition of forcing an interaction, even if the interaction is just hunting through the settings to turn them off.
Regardless, just don’t interact with the AI stuff and you won’t even know the difference.
It's so close to the same energy that it's even parroting the same kind of "make absolute statements without understanding the realities of the subject" mindset.
Let's start with some basics.
At least some of these features are on by default, so from the get-go you are already being affected, even if it's just a minor annoyance.
You need to go out of your way to turn them all off (and keep on top of new additions, some of which are, again, on by default).
They are also regularly "reset" during upgrades.
Assuming you're a regular user who doesn't religiously turn all of these features off at the source, some of them will be part of your daily use of the browser, incurring a performance cost.
Right-click context actions are a good example of this, as are AI tab grouping and the planned Link Preview (though the latter currently needs a key press to activate).
Even if you never use tab grouping, the analysis needed to generate the suggestions still has a performance cost, possibly a non-trivial one.
Any new code or feature added to the browser contributes to bug and maintenance surface area, meaning resources must be spent developing, testing, and maintaining that code.
These are resources not being used on other, non-AI, bugs and features.
Project management is complicated, so it isn't a one-to-one ratio of resources from one thing to another, but it's still a non-zero percentage.
Now for the slightly more esoteric:
LLMs as a whole are demonstrably bad for the environment, power grids, and water supplies, and that applies to both running the models and training them.
People might not care about this, but not caring doesn't negate negative impacts in this case.
Some might consider the tradeoff to be worth it; that's fine, but they are still affected by it.
I'm not arguing for or against LLM features (though for the record I'm not a fan of them being auto-included); I'm saying that your statement about not being affected if you don't use them is incorrect.
Using bad analogies to explain things that are already confusing helps no one.
AI is currently a marketing term used to push LLMs.
Tools used appropriately garner satisfactory results.
people need to specify that they’re against generative LLMs, like Chat-bots or slop-generators, not “all AI”.
I agree, but how does throwing out bad comparisons relate to that?
There was just a thread on Twitter where a company showcased an amazing tool for animators - where you, for example, prepare your walking/sitting/standing animations, but then instead of motion-capturing or manually setting the scene up, you just define two keyframes - the starting and the ending position of the character… and then their AI picks the appropriate animations, merges between them and animates the character walking from one position to the other.
It’s a phenomenal tool for creatives, but because the term “AI” appeared, the company got shat on by random people.
If you are talking about Cascadeur or something similar, that doesn't use an LLM AFAICT; it's based on ML trained on their own internal data (or so they say).
I don't disagree that tools used in a way that plays to their strengths are useful.
People often conflate AI with LLMs, which makes sense for the average person, because that's how it's been marketed and sold.
LLMs aren't even really AI, but here we are.
No. All generative graphical slop AIs and generic chat-bot LLMs have been trained on large corpus of data that has been obtained by various sketchy and illegitimate means.
I was very specific in my wording, but as I said, I could be wrong; if you can point to any big commercial LLMs that don't adhere to my classification, I will concede the point.
THAT’S the major difference.
I mean, yes, that's what I said.
So I stand by my conclusion that, in the context you laid out, Photoshop isn't a good comparison to most, if not all, of the current tools that would be considered AI.
So, he basically says something that directly contradicts what you're saying - he prefers the generative slop machines to tools that actually help developers or artists.
I could be wrong, but half of that statement was sarcasm.
I basically read it as:
So I’m gonna execute the code of someone who doesn’t know the first thing about coding on my computer? Great!
I’d rather have AI art and human code.
Running code someone vibed up without understanding what it's doing is stupid. If I had to pick one way around or the other, I'd rather have AI art (which in this case is significantly less of a security risk) and human code (which should potentially be of a higher quality).
I think the fundamental misunderstanding here is how the term AI is used.
None of these things are really intelligent, and LLMs are predictive semi-hallucination machines cobbling together best guesses at what's supposed to come next in the sequence.
The way I personally see it, the latest-gen "AI" stuff is basically sitting on LLMs in some capacity: area recognition, language, image/code generation, etc.
Anything else is just normal (perhaps smart) tools, using algorithms of some kind, ML, etc.
Weak comparisons help no one; Photoshop is nothing like LLMs.
All of the big commercial LLMs (without exception, AFAIK) have been trained on a large corpus of data that has been obtained by various sketchy and illegitimate means (some legitimate as well).
That's the major difference between the two.
If you are using a model that has only been trained on legally obtained data, disregard this point.
I'm not even against competent use of LLMs as tools, but please use better arguments.
It wasn't until the (late) 1980s that there was universal acceptance that human babies felt pain.
https://en.wikipedia.org/wiki/Pain_in_babies
It's not better that this was the case, but it's a little more even-handedly stupid.