Dude, put this stuff in your comment then. All you mentioned in your comment was that food doesn't look the way it does in advertisements. I can't read your mind; I can only read what's in your comment.
What? They didn't slightly miscalculate the minimum, they grossly blasted past it. Yes, your point is that false advertising is a spectrum and a certain amount is generally expected, but this isn't in some mushy grey area of that spectrum; it's egregiously over the line. Yeah, food doesn't look like it does on the packaging, but if this event were food, it would be grey goop. Frankly, I find your attempt at ignoring magnitude lazy, sloppy, and cynical.
That's not an 18th century lawn, that's a 1950s herbicide lawn. Lawns used to be mostly clover, which doesn't need to be fertilized and requires much, much less water. When modern herbicides were invented, they couldn't be kept from killing clover, so what did the manufacturers do? They launched a massive advertising smear campaign to rebrand clover as a weed. The modern lawn is not an outdated concept from the 18th century; it's a product of modern capitalist greed.
Writing single functions just isn't the hard part of programming in the vast majority of programs; the hard part is managing a project in a maintainable, robust, and extensible way.
I think worldview is all about simulation and maintaining state. It's not really about making associations, but about maintaining some kind of up-to-date, imagined state representing the world that you can run simulations on top of. I think it needs to be a very dynamic thing, which is a pretty different paradigm from the ML training methodology.
Yes, I view these things as foundational to free will and imagination, but I'm trying to think at a lower level than that. Simulation facilitates imagination, and reasoning facilitates motivation, which facilitates free will.
Are those things necessary for intelligence? Well, it depends on your definition, and everyone has a different one, ranging from reciting information to full-blown consciousness. Personally, I don't really care about coming up with a rigid definition; it's just a word. I care more about the attributes. I think LLMs are a good knowledge engine, and knowledge is a component of intelligence.
Look up visual programming languages. When you apply a visual metaphor to programming, it really is basically just very detailed, complex flow charts.
I'm really curious what the Google Glass concept would be like with modern technology. I feel like the form factor was poisoned by the backlash at the time, but it seems so much more viable than the stupid bulky headsets.
LLMs build on top of the context you provide them and as a result are very impressionable and agreeable. It's something to keep in mind when trying to get good answers out of them: you need to word questions carefully to avoid biasing the model.
It can easily create a sense of false confidence in people who are just being told what they want to hear but interpret it as validation, which was already a bad enough problem in the pre-LLM world.
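To make that concrete, here's a rough sketch of the difference (the prompts are made up for illustration):

```python
# Two ways to ask an LLM about the same problem. The leading version
# smuggles in a conclusion, so the model will tend to run with it;
# the neutral version leaves room for it to disagree.

leading_prompt = (
    "My app is slow because the database is the bottleneck, right? "
    "How do I fix the database?"
)

neutral_prompt = (
    "My app is slow. Here's the code and a profile of where time is "
    "spent. What are the most likely causes?"
)
```

Same underlying question, but the first one will almost always get back "yes, it's the database," whether or not that's true.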
As a developer building on top of LLMs, my advice is to learn software architecture. There's a shit ton of work that needs to be done to get this unpredictable, nondeterministic tech to work safely and accurately. This is like telling someone to get out of tech right before the Internet boom. The hardest part of programming isn't writing low level functions; it's architecting complex systems while keeping them robust, maintainable, and extensible. By the time an AI can do that, all office jobs are obsolete. AIs will be able to replace CEOs before they can replace system architects. Programmers won't go away; they'll just have less busywork and will instead need to work at a higher level. But the complexity of those higher level requirements is about to explode, and we will need LLMs to do the simpler tasks, with our oversight to make sure everything gets integrated correctly.
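To give one concrete example of the kind of work I mean, a lot of it looks like this defensive wrapper pattern (a minimal sketch; call_llm is a hypothetical stand-in for whatever client library you use, and the schema check is deliberately simple):

```python
import json

def ask_for_json(prompt, call_llm, retries=3):
    """Never trust raw LLM output: parse it, check it, retry on failure."""
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output, ask again
        if isinstance(data, dict) and "answer" in data:
            return data  # passed the (deliberately simple) schema check
    raise ValueError("no valid response after retries")
```

Multiply that by every place the model's output touches your system and you get a sense of how much engineering sits between a raw LLM and a safe product.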
I also recommend still learning the fundamentals, just maybe not as deeply as you used to need to. Knowing how things work under the hood still helps immensely with debugging and with designing better, more efficient architectures, even at a high level.
I will say, I do know developers who specialized in algorithms and are feeling pretty lost right now, but they're perfectly capable of adapting their skills to the new paradigm. Their issue is more personal: deciding what they want to do next, since algorithms were what they were passionate about.
As a developer, I can at least assure you that Docker is annoying to set up and its documentation is confusing.
Most things in Linux are easier to set up, but occasionally installing something is harder than it should be, and Docker is one of those cases.
You should keep in mind that, compared to other OSs, a lot of Linux software is CLI only, so it won't always show up in the applications list; you'll need to check in a terminal whether you have it.
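A quick way to check is `which <name>` in the terminal, or if you'd rather script it, something like this sketch using Python's standard library (the tool names are just examples):

```python
import shutil

# CLI-only tools won't show up in your applications list, but if
# they're installed they'll be somewhere on your PATH.
for tool in ("docker", "git", "ffmpeg"):
    path = shutil.which(tool)
    print(f"{tool}: {path or 'not found'}")
```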
Yeah, I understood the arguments against using tabs for alignment, but never really got the argument against using them for indentation.
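For anyone who hasn't seen it spelled out, here's a minimal sketch of the distinction (the function is made up; the body lines start with a literal tab, and the continuation line adds spaces after that tab):

```python
# "Tabs for indentation, spaces for alignment": each body line starts
# with one tab (block depth), and the continuation line follows that
# tab with spaces so "b" lines up under "a" at any tab width.
def add_pair(a, b):
	total = sum([a,
	             b])
	return total
```

The usual argument is that indentation done with tabs survives any tab-width setting, while alignment done with tabs breaks the moment someone views the file at a different width.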