I disagree; most people I speak to are unaware of the scale of the problem, or still cling to the delusion that it will be sorted out by some kind of future tech by the end of the century. A shot of doomerism gets their attention far more than banging on about positive change and plastic straws.
I am fully aware fusion isn't a magic bullet; that's precisely why I offered it as an example. It's about as likely as convincing the world that degrowth is the better option.
The solutions are not consumer-based, nor are they possible at a grassroots level. Nothing tried so far has moved the dial on emissions. Joining groups like ER, whilst commendable, is not going to effect enough change in time.
Millions may be discussing change as you say but there are billions who are too concerned with day-to-day survival to contemplate long-term issues like climate change.
The best things to do, actually, are to not have kids and to vote for green candidates, both of which I'm doing. When we see a global, top-down movement via policies and laws that might make some kind of difference, then we can discuss individual action in positive terms.
The "doom and gloom" messaging is just the reality of the situation; it's dishonest to suggest otherwise. Apathy and nihilism are appropriate responses to it, as we are entirely at the whims of governments and corporations.
Also, you think nihilism and apathy are where the majority of people are? Most of the world barely understands the full picture, so putting a positive spin on it just allows them to keep their heads in the sand.
In terms of solutions, at this point it's either converting to fusion power in the next few years (unlikely) or getting the whole world to ditch capitalism and actively decarbonise its economies as a priority. We are powerless to effect either. If we were able to, I'd be fully on board with your messages of hope.
I had actually written a couple more paragraphs using weather models as an analogy akin to your quartz crystal example but deleted them to shorten my wall of text...
We have built models which can predict what might happen to particular weather patterns over the next few days to a fair degree of accuracy. However, to get a 100% conclusive model we'd need information about every molecule in the atmosphere, which is just not practical when we have good enough models to get an idea of what is going on.
The same is true for any system of sufficient complexity.
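To make the analogy concrete, here's a toy sketch (my own illustration, not from the thread) using the logistic map, a textbook chaotic system: two initial states differing by one millionth quickly end up wildly apart, which is why perfect prediction would need perfect knowledge of the initial state.

```python
def step(x, r=3.9):
    """One iteration of the logistic map, a classic chaotic system."""
    return r * x * (1 - x)

a, b = 0.5, 0.5 + 1e-6   # two initial states differing by one millionth
max_gap = 0.0
for _ in range(60):
    a, b = step(a), step(b)
    max_gap = max(max_gap, abs(a - b))

# The microscopic initial difference has blown up to macroscopic size.
print(max_gap)
```

A "good enough" forecast is still possible over short horizons, because the error needs time to grow; it's only long-range certainty that demands molecule-level knowledge.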
This article, along with others covering the topic, seems to foster an air of mystery around machine learning which I find quite off-putting.
> Known as generalization, this is one of the most fundamental ideas in machine learning—and its greatest puzzle. Models learn to do a task—spot faces, translate sentences, avoid pedestrians—by training with a specific set of examples. Yet they can generalize, learning to do that task with examples they have not seen before.
Sounds a lot like Category Theory to me, which is all about abstracting rules as far as possible to form associations between concepts. This would explain other phenomena discussed in the article.
> Like, why can they learn language? I think this is very mysterious.
Potentially because language structures can be encoded as categories. Any possible concept, including the whole of mathematics, can be encoded as relationships between objects in Category Theory. For more info see this excellent video.
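The core machinery is tiny, which is rather the point. Here's a minimal sketch (my own toy, not from the video) of the categorical idea in Python: objects are just types of thing, morphisms are maps between them, and everything is built from composition and identity, which obey the category laws.

```python
def compose(g, f):
    """Morphism composition: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

identity = lambda x: x

# Objects "Word" and "Int"; morphisms are maps between them.
length = len                  # Word -> Int
double = lambda n: 2 * n      # Int -> Int

h = compose(double, length)   # Word -> Int, built purely by composition
print(h("category"))          # 16

# The category laws hold for these examples:
assert compose(h, identity)("x") == h("x")      # right identity
assert compose(identity, h)("x") == h("x")      # left identity
f3 = compose(compose(double, double), length)
f4 = compose(double, compose(double, length))
assert f3("abc") == f4("abc")                   # associativity
```

The claim, then, is that a model which learns to compose such relationship-maps gets abstraction "for free", with no mystery required.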
> He thinks there could be a hidden mathematical pattern in language that large language models somehow come to exploit: "Pure speculation but why not?"
Sound familiar?
> models could seemingly fail to learn a task and then all of a sudden just get it, as if a lightbulb had switched on.
Maybe there is a threshold probability for a posited association being correct and, after enough iterations, the model flipped it to "true".
I'd prefer articles to discuss the underlying workings, even speculatively like the above, rather than perpetuating the "it's magic, no one knows" narrative. Too many people (especially here on Lemmy, it has to be said) pick that up and run with it rather than thinking critically about the topic and formulating their own hypotheses.
Lol indeed, I've just seen you moderate a Simulation Theory sub.
Congratulations, you have completed the tech evangelist starter pack.
Next thing you'll be telling me we don't have to worry about climate change because we'll just use carbon capture tech, and failing that, all board Daddy Elon's spaceship to terraform Mars.
You posted the article rather than the research paper and had every chance of altering the headline before you posted it but didn't.
You questioned why you were downvoted so I offered an explanation.
Your attempts to form your own arguments often boil down to "no you".
So, as I've said all along, we just differ on our definitions of the term "understanding" and have devolved into a semantic exchange. You are now using a bee analogy, but for a start a bee is a living thing, not a mathematical model: another indication that you're missing the nuance. Secondly, again, it comes down to definitions. Bees don't understand the number zero as a point on the number line, but I'd agree they understand the concept of nothing, as in "there is no food."
As you can clearly see from the other comments, most people interpret the word "understanding" differently from you and from AI proponents. So I infer you are either not a native English speaker or are trying very hard to shoehorn in your oversimplified definition to support your worldview. I'm not sure which, but your reductionist way of arguing is ridiculous, as others have pointed out, and full of logical fallacies which you don't seem to comprehend either.
Regarding what you said about Pythag, I agree, and would expect it to outperform statistical analysis. That is because it has arrived at and encoded the theorem within its graphs, but I and many others do not define this as knowledge or understanding, because those words carry other connotations for the majority of humans. It wouldn't, for instance, be able to tell you what a triangle is using that model alone.
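To make that distinction concrete, here's a minimal sketch (my own toy, not from any paper): a plain linear fit on squared side lengths recovers c² = a² + b² exactly, so the theorem ends up "encoded" in two learned weights, and it then predicts unseen hypotenuses perfectly. Yet nothing in those two numbers could tell you what a triangle is.

```python
import math

# Right triangles (a, b); the "training data".
sides = [(3, 4), (5, 12), (8, 15), (7, 24), (1, 1)]
x1 = [a * a for a, b in sides]
x2 = [b * b for a, b in sides]
y  = [a * a + b * b for a, b in sides]   # c^2 for each triangle

# Solve the 2x2 normal equations for y = w1*x1 + w2*x2 (no intercept).
s11 = sum(v * v for v in x1)
s22 = sum(v * v for v in x2)
s12 = sum(u * v for u, v in zip(x1, x2))
sy1 = sum(u * v for u, v in zip(x1, y))
sy2 = sum(u * v for u, v in zip(x2, y))
det = s11 * s22 - s12 * s12
w1 = (sy1 * s22 - s12 * sy2) / det
w2 = (s11 * sy2 - s12 * sy1) / det

print(w1, w2)   # 1.0 1.0: the theorem, encoded as weights

# Perfect on an unseen triangle (20, 21, ?)...
c = math.sqrt(w1 * 20 ** 2 + w2 * 21 ** 2)
print(c)        # 29.0
# ...but the pair (1.0, 1.0) "knows" nothing about what a triangle is.
```

Whether to call that "understanding" is exactly the semantic question at issue.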
I spot another appeal to authority... "Hinton said such and such." It matters not. If Hinton said the sky was green you'd believe it, as you barely think for yourself once someone you consider more knowledgeable has stated something, which may or may not be true. Might explain why you have such an affinity for AI...
I question the value of this type of research altogether, which is why I stopped following it as closely as you do. I generally see it as an exercise in assigning labels to subsets of a complex system. However, I do see how the CoT paper adds some value in designing more advanced LLMs.
You keep quoting research verbatim as if it's gospel, so you miss my point (and this forms part of the appeal to authority I mentioned previously). It is entirely expected that neural networks would form connections beyond the training data (emergent capabilities); how else would they be of use? This article dresses up the research as some kind of groundbreaking discovery, which is what people take issue with.
If this article were entitled "Researchers find patterns in neural networks that might help make more effective ones" no one would have a problem with it, but it also wouldn't be newsworthy.
I posit that Category Theory offers an explanation for these phenomena without having to delve into poorly defined terms like "understanding", "skills", "emergence" or Monty Python's Dead Parrot. I do so with no hot research topics or papers to hide behind, just decades-old mathematics. Do you have an opinion on that?
You're nearly there... The word "understanding" is the core premise of what the article claims to have found; without it, the "research" doesn't really amount to much.
As has been mentioned, this then becomes a semantic/philosophical debate about what "understanding" actually means and a short Wikipedia or dictionary definition does not capture that discussion.
🇺🇸 "Baadel a waader" 🇺🇸