Sure, but who knows what shenanigans the techs and pilot were pulling to convince the machine to drop the landing gear.
Under normal operations, I'd agree, but I'll bet they were putting something in a maintenance state while it was in the air, and at that point all bets are off.
This is the fundamental problem with LLMs and all the hype.
People with technology experience can understand the limitations of the tech, and will be more skeptical of the output from them.
But your average person?
If they go to Google and ask whether vaccines cause autism, and Google's AI search slop trough serves up an answer they like, accurate or not, there will be exactly no second-guessing. I mean, this thing is supposed to be PhD-level, and it was right about the other softball questions they asked, like what color the sky is. Surely it's right about this too, right?
Maybe you really are asking AI questions and then researching whether or not the answers are accurate.
Perhaps you really are the world's most perfect person.
But even if that's true, which I very seriously doubt, you're going to be in the extreme minority. People will ask AI a question, and if they like the answers given, they'll look no further.
If they don't like the answers given, they'll ask the AI with different wording until they get the answer they want.
It's a single data point, nothing more, nothing less. But that single data point is evidence that they're using LLMs in their code generation.
Time will tell if this is a molehill or a mountain. When it comes to data privacy, given that it just takes one mistake and my data can be compromised, I'm going to be picky about who I park my data with.
I'm not necessarily immediately looking to jump ship, but I consider it a red flag that they're using developer tools centered around using AI to generate code.
"I don't blindly trust AI, I just ask it to summarize something, read the output, then read the source article too. Just to be sure the AI summarized it properly."
Nobody is doing double the work. If you ask AI a question, the answer gets a vibe check at best.
If you want to trade accuracy for speed, that's your prerogative.
AI has its uses. Transcribing subtitles, searching images by description, things like that. But too many times, I've seen AI summaries that, once you actually read the article the AI cited, turn out to be flatly wrong.
What's the point of a summary that doesn't actually summarize the facts accurately?
Sure, but with all the mistakes I see LLMs making in places where professionals should be quality-checking their work (lawyers, judges, internal company email summaries, etc.), it gives me pause, considering this is a privacy- and security-focused company.
It's one thing for AI to hallucinate cases, and another entirely to forget there's a difference between = and == when the AI bulk-generates code. One slip-up and my security and privacy could be compromised.
You're welcome to buy in to the AI hype. I remember the dot com bubble.
The most powerful words in the world are the things we tell ourselves and believe.
I went to some workshop my mom really wanted me to attend after my marriage fell apart. It was years ago, and I don't remember much because it really wasn't my thing, but I clearly remember that phrase.
I took that to mean that it starts with how you treat yourself.
As someone who hit rock bottom, it gets better. My marriage ended with me in handcuffs, accused of something I didn't do, with one of my daughters in an ambulance going to a psych hospital and the other daughter with my mom.
The charges got dropped the next day (long story), but I still spent a night in jail, and all I could think about was how long 20 years would be. How old my kids would be. I was 31 at the time.
I'm 35 now, moving in with a woman I couldn't imagine not sharing the rest of my life with. My kids are with me for the school year, and they go stay with my ex for the summer. Literally everyone (even my ex) is better off, even if it doesn't make me happy to admit it.
It gets better. And I think it starts by being nicer to yourself.
It's the constant war on end users that chased me away from Windows.
You can't say no to their relentless advertising; only "maybe later". The push to require a Microsoft account. Ads in the Start menu. Windows Recall.
The list goes on. You get as much agency as Microsoft allows, or you violate your EULA and modify the OS to remove the things you don't want.
We didn't know it at the time, but Windows 7 was peak Windows.
What I want is a way to answer the phone like a fax machine. Just press a button and the call gets answered and immediately starts playing that fax machine sound.
I'll bet that would stop calls. Surely they have something that can tell if they're calling a fax machine over and over.
I think the example you're using is closer to emulation.
I'm not an expert by any means; most of my technology experience comes from hardware. But Proton isn't changing the Linux ecosystem, and the programs still expect a Windows environment when they're run via Proton.
From what I recall, Linux and Windows can both do the same stuff; they just have different names or different ways to ask for resources. Proton receives a request for whatever the program needs and converts it to the Linux equivalent.
It's not nearly as bad as it was in the past, now that the graphics APIs are system agnostic.
Most simply put, it's a layer that allows a computer program expecting Windows to run on Linux. It isn't emulating anything; it's just sort of translating.
Think of it like a language. Windows speaks English, so a program expects to talk in English. But let's pretend like Linux talks Spanish. Proton translates the English commands to Spanish for Linux to understand and execute, and then Proton converts the responses back to English for the program.