It’s just my experience as someone who was pretty much forced by my employer to use AI for coding for the last few years. For the longest time it was completely useless. And then it suddenly wasn’t. I’m sure you’ll keep hearing this kind of story though, because people have different requirements, and AI-assisted coding, or even agents, don’t have to start working for everybody at the same time.
Sure. How much the language or its features have changed matters too. For example, Claude can build entire iPhone apps in Swift, but you can bet they’ll be full of warnings about things that are deprecated or outright invalid now, and you can bet that if there’s any concurrency involved, it’ll be a wild mix of every async style that has ever existed in Swift. It makes sense, too, because LLMs are trained on code that is, on average, outdated.
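For illustration, a hypothetical sketch of the kind of mix I mean (all names invented): a pre-Swift-5.5 completion-handler API, a modern async/await version, and a continuation bridging the two where a direct call to the async version would have done.

    import Foundation

    // Old style: completion-handler API, dominant in pre-Swift-5.5 training data.
    func fetchUser(id: Int, completion: @escaping (Result<String, Error>) -> Void) {
        DispatchQueue.global().async {
            completion(.success("user-\(id)"))
        }
    }

    // New style: async/await, what current Swift code should look like.
    func fetchUser(id: Int) async throws -> String {
        "user-\(id)"
    }

    // The "wild mix": an async function wrapping the callback API in a
    // continuation instead of just calling the async overload directly.
    func loadProfile(id: Int) async throws -> String {
        try await withCheckedThrowingContinuation { continuation in
            fetchUser(id: id) { result in
                continuation.resume(with: result)
            }
        }
    }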
But knowing what it’s good at and what it’s not good at is just part of using AI, like with any other tool. I have projects too where it can at best replace Google, so I don’t try to make it implement those by itself.
I mean, if they came with a cool android body, we could talk about it. It should at least be able to do cleaning and cooking. Otherwise my wife won’t like it.
Yes, but that’s generally true for low-end laptops. In the Intel world they may not always have unified memory on an SoC, but they don’t get a discrete GPU with its own memory either.
The A18 is the previous-generation iPhone’s chip. So the target audience is people who could do their work on a phone but want a bigger screen and a keyboard. For people who use the current cheap iPad (A16 with 6 GB), it’ll be an upgrade.
If you want bug-ridden code with security issues, which is not extensible and which no one understands, then sure, it's a practical use case.
This assumes you never review it, meaning it’s at best an argument against vibe coding. It’s not an argument against using LLMs for coding in general.
Additionally, I’ve been writing software for a living for almost 30 years, and I could say the exact same thing about a lot of human-generated code I’ve reviewed during that time. I don’t even know how often I’ve had to explain basic stuff like “security goes in the backend, not in the frontend” to humans.
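To make that particular point concrete, a minimal hypothetical sketch (all types and names invented): hiding a control in the client is a UX decision, while the actual authorization check has to live server-side, because anyone can send the request directly.

    struct User { let isAdmin: Bool }
    struct Request { let user: User; let targetID: Int }
    enum HTTPError: Error { case forbidden }

    // Frontend: purely cosmetic. Hiding the button doesn't stop anyone from
    // crafting the underlying request by hand.
    func shouldShowDeleteButton(for user: User) -> Bool {
        user.isAdmin
    }

    // Backend: the real security boundary. This check must exist no matter
    // what the frontend shows or hides.
    func handleDelete(_ request: Request) throws -> String {
        guard request.user.isAdmin else { throw HTTPError.forbidden }
        return "deleted record \(request.targetID)"
    }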
Let's face it, the only reason you're saying "coding is a practical use case" is that you yourself don't code and don't understand it.
I certainly do code, and if I don’t understand what the LLM outputs, it doesn’t go in the project.
I can't see another reason why you would assume the problems experienced in other domains somehow don't apply to coding.
I’m a software engineer; I can’t judge LLMs in most other domains. I also don’t think there are no problems. A tool doesn’t have to be 100% problem-free to be useful, as long as you recognize the limitations.
So you're going to have to pick your way through every single line it generates in order to have the same confidence you would have if you had written it yourself.
I don’t see a problem with this. The post even mentions pulling code from Stack Overflow, which is the same situation. But nobody ever argued that Stack Overflow has no uses in coding just because you still have to read the code.
Honestly, at this point any article that flat out dismisses LLMs for coding reads to me like the author isn’t even trying to stay up to date. Which is understandable if they don’t like AI, but it makes posting about it a bit pointless.
A year ago I would have had a similar opinion to the author’s, but in the last 3-4 months specifically, AI-based tools feel like they’ve made a huge leap. I went from using short snippets for learning to letting AI implement entire features, and actually being happy with the result.
There is, however, still a pretty big difference between what it produces for common problems and what it produces for specialized, difficult ones. It’s also inherently better at some languages than others, based on the availability of up-to-date training material. So you need some amount of breadth in your projects to judge it accurately.
If you only try some AI service in free mode on one thing every month, for example, you’ll just end up with a very polarized opinion, either “AI is useless” or “AI can do everything”, but you won’t have a good idea of what it can and can’t do.
Why? It’s an optional feature; if you don’t need your Octave programs to interact with Java, you can disable Java support at build time. You lose some MATLAB compatibility (since MATLAB has this feature too), but you’re not required to use it.
I think the open source slop situation is also partly people who just want a feature and genuinely think they’re helping. People who can’t do the task themselves also can’t tell that the LLM can’t do it either.
But a lot of them are probably just padding their GitHub accounts too. Any given popular project has tons of forks by people who just want lots of repositories on their GitHub profile but never make any changes, because they can’t actually do it. I used to maintain my employer’s projects on GitHub, and we’d literally have something like 3000 forks, 2990 of which were just unchanged forks by people with lots of repositories but no actual work. Now these people are using LLMs to also make changes…
It was called 世界でいちばん透きとおった物語 by Hikaru Sugi, but I don’t think there’s an English translation, because this kind of gimmick works a lot better in scripts where all characters have the same width, and producing a translation that ends up with a comparable arrangement of those characters would be a major pain too.
I don’t think it means that by definition. Not knowing how to do things yourself is a choice. And it’s the same choice we’ve been making ever since human civilization became too complex for one person to be an expert at everything. We choose to not learn how to do jobs we can have someone else, or a machine, handle all the time. If we choose wisely, we can greatly increase our capacity to get things done.
When I went to school in the ’90s, other students were asking the same question about math, because calculators existed. I don’t think they were 100% right, because at least a basic understanding of math is generally useful even now, with AI. But the teachers who told us not to rely on calculators, because calculators have limits and we won’t always have one with us, were certainly not right either.
Personally, I don’t like AI for everything either. But also, current AI assistants are just not trustworthy, and for me that’s the more important point. I do write e-mails myself, but I don’t see a conceptual difference between letting an AI do it and letting a human secretary do it, which is not exactly unheard of. I just don’t trust current models, nor the companies that operate them, enough to let them handle something so personal. Similarly, even though I’ve always been interested in learning languages, I don’t see a big conceptual difference between using AI for translation and asking a human to translate, which is what most people did in the past. And so on.
I basically do option 2, but I’d never mount all my configuration. If I want an isolated environment, I’m not making all my SSH keys available to it. So some things have to stay outside for me.
8 hours a day, 5 days a week is mostly a 20th-century thing. Working hours absolutely did go down, from 12-16 hours a day to 8, and working days from 6 to 5.
The interesting thing is that at any given point, a majority believed that shorter hours would stifle productivity. But at the end of the 19th century and in the early 20th, some industrialists started actually testing it. In the US, the 40-hour week was famously popularized by Henry Ford after he compared productivity against his previous 6-day week, but even that was about 100 years after others had started theorizing about it.
In Germany, the 8-hour work day was introduced in 1918, but at the time that still meant 6 working days. The 40-hour work week only started becoming the norm in the ’60s and ’70s. And in 2001, Germans gained the right to work part-time in almost any job, even if originally hired full-time.
If you go further back in time it does look different, though, because before the industrial revolution most people worked in agriculture, i.e. they were peasants. Their work days would have been long during the harvest period and otherwise quite short. Some seasons involved less work in general, and there were more religious holidays. But the comparison isn’t entirely fair, because automation didn’t just take over our jobs, it also took over our personal chores. Washing your clothes, for example, was a lot more manual work before we automated it.
Also, the one I usually buy has 0.1 g of fat per 100 g. Another variety I like has 6 g. That’s worlds away from animal bacon, which would be more like 40-50 g for comparable uses.
So by the article’s logic this should actually be a positive, because it would condition you toward something low-fat if you later eat vegan.
If we’re letting Canada in, we should also reconsider Morocco’s application. We have good relations with them too, and it’d probably annoy Trump as well if we can expand in random directions while the USA can’t.
> privacy-focused users who don’t want “AI” in their search are more likely to use DuckDuckGo
But the opposite is also true. Maybe it’s not 90% to 10% elsewhere, but I’d expect the same general imbalance, because some of the people who would answer yes to AI in a survey on a search website don’t go to search websites in the first place. They go to ChatGPT or whatever.
Croatia has some of that too.