And yet it’s less infuriating than the people who are directly affected and still engage in cognitive dissonance in order to continue to believe that their vote was a good thing and that this is just the price they gotta pay to ensure “those people” don’t take advantage of them.
I totally get the frustration from watching people talk past each other, neither side taking anything positive away from the exchange, arguing for the sake of arguing or a misplaced desire to “own” the other one. It’s exhausting and there’s lots of times I want to do exactly what you did, so I truly do understand.
No hard feelings, and I really do appreciate the apology. We all get heated sometimes… it’s easy to do on the internet, especially when text makes it easy to misinterpret tone.
It’s nice to end things on a positive note and on the same page - as I said, I’m a consensus builder. For what it’s worth, you were absolutely right that taking your emotion out of your argument and just being sincere works, at least when someone is approaching things in good faith, which I believe we both were.
So all of this tone policing because I wanted to say RTFM, decided that the initialism didn’t work with RTFA and spelled it out instead, and failed to drop the superfluous “fucking”, thereby making it sound overly harsh. I thought since “fucking” was modifying “article” that the profanity wasn’t directed at the commenter, but still expressed some exasperation on my part.
You’re right, I should have dropped the “fucking”. I was upset because I had just read the article about a topic that concerns me greatly, scanned the comments to see if there was any sense of solidarity that “hey, this is a problem we need to address” and got frustrated when the first comment I read was something that shot from the hip and completely ignored the entirety of the actual article. If not for that, I probably would’ve edited the “fucking” out. That was a failure on my part.
You’re right, that did come across overly aggressive and I apologize to the original commenter. It’s hard to convey tone with text and obviously I failed to communicate this as I never intended my comment to sound so aggressive that it warranted everything that I got in response.
Go back and read everything I’ve written, but omit the word “fucking” and I think you’ll see that one concession to my frustration stands in contrast to the rest of my messages where I was blunt but otherwise respectful. I provided quotes and deflected credit to the original author when it seemed like the credit was being given to me. I never engaged in personal attacks and my only actual criticism was literally that opening line. I hope that demonstrates my true intentions, which were apparently buried by the overly aggressive opening.
I’ve since been told I have shitty behavior and that I’m a “douche”. I never called anyone names, and literally my only call out was “did you even read the fucking article?” You’ve been an order of magnitude more aggressive than I was in the first place and I’ve gotten downvoted and chastised and called names while you’ve gotten supportive comments. I’ve apologized for my behavior without any expectation of reciprocation.
I hate conflict. I hate arguing. It makes me want to just not participate at all online. I don’t need this and as an autistic man it gives me a great deal of stress. I don’t expect you or anyone else to understand or care which is why ordinarily I just lurk online because people seem so eager to jump down my throat. Feels weird and shitty to be accused of doing the same when I failed to police my tone appropriately.
Normally when I do speak up, I’m a consensus maker, trying to bring people together. In fact, this particular topic inspired me to speak up because I’m deathly scared of the direction that current events seem to be headed and I wanted to set my comfort aside in the hopes of seeing that there were other people who wanted to combat this.
It’s disheartening that as fascists worldwide are gaining traction, the most relevant article I’ve seen in a while about it and what to do about it has had discussion completely derailed by criticism about the clickbait headline and subsequent tone policing. If this is where all our energy is going, no wonder fascists are seeing so little pushback.
Also, when I said “it’s from the article”, I was literally trying to give credit to the original author, since it seemed like you were attributing that to me. Feels weird to be downvoted for that.
The funny thing about your comment is that it’s the same sentiment I was expressing toward the other person. So, point taken, I guess?
To be clear, I didn’t write that, it’s from the article.
Is the headline clickbait? Sure. I’m not defending that. Guess the author could have incorporated this into the headline:
Once fascists win power democratically, they have never been removed democratically.
Feels like these are a bunch of nitpicks that distract from the main point of the article - that we need to act urgently and drastically to hope to stop this before fascists consolidate power.
Did you even read the fucking article?

Based on the historical record, there are exactly three ways this goes. Option one: Stop them before they take power. Option two: War. Option three: Wait for them to die of old age.
As far as doomerism goes, he outlines several possible avenues to stop things that require us getting off our “comfy couch”.
True. I’ve been on an extended sabbatical from work in an industry heavily impacted by LLM use, and I’m not looking forward to returning and being forced to use it, especially when it doesn’t benefit my productivity.
Still, even with this forced demand, spending is vastly outweighing the revenue generated. Venture capitalists seem to have an absurd amount of money to spend, but even their resources are finite, especially without ultra low interest rates.
Then again, I expected cryptocurrencies to implode years ago, but even after FTX the market stabilized enough that the bubble has kept growing. It hasn’t had to weather a real recession yet, and I expect that will apply a lot more pressure when one finally happens.
Between those two bubbles and the tariffs finally starting to take effect, it feels like we’re in for a really bad time. I really hope I’m wrong.
Edit: Forgot to mention that Intel seems to be thrashing. Their current CEO seems to be slashing the people he needs to recover from their current conditions and shutting down construction on fabs that represent their future. Never thought I’d see a day when Intel was facing an existential threat, but here we are.
If the market gets spooked, it doesn’t matter what CEOs do. Companies going up in flames is what a bubble popping looks like. See the dot-com or subprime mortgage bubbles.
It always starts with a little wobble that causes investors to pause and wonder if all the hyper optimism is maybe unfounded and start to look at the fundamentals. Then they realize the emperor has no clothes and the bull turns into a bear.
Not saying that CoreWeave is that wobble; many analysts and pundits will try to sweep it under the rug to maintain the irrational exuberance. But once sentiment starts to turn, it can happen fast… pop!
Yeah, but it’s nice to see them branching out from the one pronoun joke that they keep recycling over and over.
Copycat
Unless Al Jazeera edited the title, it does not use the initialism IOF. I assume that was the responsibility of whoever posted it to Hacker News.
It started in 1812. Although the Democratic-Republican party did evolve into the current Democratic party over the course of two centuries, it’s hardly fair to call them the same party. That’s eight generations between then and now and the political landscape has changed dramatically.
As for the “both sides do it” whataboutism, like so many “both sides” issues the current Republican Party benefits far more from gerrymandering than the current Democratic Party, and this is before this especially egregious Texas mid-census redistricting.
No. There’s no indication that any AI code was or was not added to their repository, nor is there any indication that any “vibe coding” was done. It could be that some junior developer installed Cursor on their machine, was playing with it, and committed the .cursor file, which was subsequently removed. More likely, they’re experimenting with introducing some AI into their development workflow, as almost every company seems to be doing these days. Not great, but not nearly as alarming or damning as this post suggests.

Cursor is a version of a popular coding program that integrates AI into the editor. A .cursor file is a text file that you put into your code folder to give extra context and information to the Cursor code editor.

Am I afraid to face down a cashier? No.
Is it REALLY that bad? No.
Can I make awkward small talk with a stranger? Yes.
Do I want to make awkward small talk with a stranger? No.
Am I relieved that I’m not forced to interact with a stranger and can stay with my own inner thoughts, instead of spending time rehearsing in my head what to say if they ask how I am because I feel weirdly compelled to answer honestly instead of simply saying “fine” like most do? Absolutely.
Permanently Deleted
The only thing close to a decision that LLMs make is
That's not true. An "if statement" is literally a decision tree.
If you want to engage in a semantic argument, then sure, an “if statement” is a form of decision. But that’s a worthless distinction that has nothing to do with my original point, and I believe you’re aware of that, so I’m not sure what it adds to the actual meat of the argument.
The only reason they answer questions is because in the training data they’ve been provided
This is technically true for something like GPT-1. But it hasn't been true for the models trained in the last few years.
Okay, what was added to models trained in the last few years that makes this untrue? To the best of my knowledge, the only advancements have involved:
- Pre-training, which involves some additional steps to add to or modify the initial training data
- Fine-tuning, which is additional training on top of an existing model for specific applications.
- Reasoning, which to the best of my knowledge involves breaking the token output down into stages to give the final output more depth.
- “More”. More training data, more parameters, more GPUs, more power, etc.
I’m hardly an expert in the field, so I could have missed plenty, so what is it that makes it “understand” that a question needs to be answered that doesn’t ultimately go back to the original training data? If I feed it training data that never involves questions, then how will it “know” to answer that question?
it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincere
It has a large amount of system prompts that alter default behaviour in certain situations. Such as not giving the answer on how to make a bomb. I'm fairly certain there are catches in place to not be overly apologetic to minimize any reputation harm and to reduce potential "liability" issues.
System prompts are literally just additional input that is “upstream” of the actual user input, and I fail to see how that changes what I said about it not understanding what an apology is, or how it can be sincere when the LLM is just spitting out words based on their statistical relation to one another?
An LLM doesn’t even understand the concept of right or wrong, much less why lying is bad or when it needs to apologize. It can “apologize” in the sense that it has many examples of apologies that it can synthesize into output when you request one, but beyond that it’s just outputting text. It doesn’t have any understanding of that text.
And in that scenario, yes, I'm being gaslit because a human told it to.
Again, all that’s doing is adding additional words that can be used in generating output. It’s still just generating text output based on text input. That’s it. It has to know it’s lying or being deceitful in order to gaslight you. Does the text resemble something that can be used to gaslight you? Sure. And if I copy and pasted that from ChatGPT that’s what I’d be doing, but an LLM doesn’t have any real understanding of what it’s outputting so saying that there’s any intent to do anything other than generate text based on other text is just nonsense.
There is no thinking
Partially agree. There's no "thinking" in sentient or sapient sense. But there is thinking in the academic/literal definition sense.
Care to expand on that? Every definition of thinking that I find involves some kind of consideration or reflection, which I would argue that the LLM is not doing, because it’s literally generating output based on a complex system of weighted parameters.
If you want to take the simplest definition of “well, it’s considering what to output and therefore that’s thought”, then I could argue my smart phone is “thinking” because when I tap on a part of the screen it makes decisions about how to respond. But I don’t think anyone would consider that real “thought”.
There are no decisions
Absolutely false. The entire neural network is billions upon billions of decision trees.
And a logic gate “decides” what to output. And my lightbulb “decides” whether or not to light up based on the state of the switch. And my alarm “decides” to go off based on what time I set it for last night.
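To make concrete how trivial that sense of “decision” is, here’s a toy sketch (hypothetical function, nothing to do with any real product):

```python
# A thermostat "decides" in exactly the same sense an if statement does:
def thermostat(temp_c: float) -> str:
    # Pure branching on input -- no understanding, no intent.
    if temp_c < 18.0:
        return "heat on"
    return "heat off"

print(thermostat(15.0))  # heat on
print(thermostat(22.0))  # heat off
```

Nobody would call that thought, yet it "decides" in the same sense a decision tree does.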
My entire point was to stop anthropomorphizing LLMs by describing what they do as “thought”, and that they don’t make “decisions” in the same way humans do. If you want to use definitions that are overly broad just to say I’m wrong, fine, that’s your prerogative, but it has nothing to do with the idea I was trying to communicate.
The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision making to them, the less we collectively understand what they are
I promise you I know very well what LLMs and other AI systems are. They aren't alive, they do not have human or sapient level of intelligence, and they don't feel. I've actually worked in the AI field for a decade. I've trained countless models. I'm quite familiar with them.
Cool.
But "gaslighting" is a perfectly fine description of what I explained. The initial conditions were the same and the end result (me knowing the truth and getting irritated about it) were also the same.
Sure, if you wanna ascribe human terminology to what marketing companies are calling “artificial intelligence” and further reinforcing misconceptions about how LLMs work, then yeah, you can do that. If you care about people understanding that these algorithms aren’t actually thinking in the same way that humans do, and therefore believing many falsehoods about their capabilities, like I do, then you’d use different terminology.
It’s clear that you don’t care about that and will continue to anthropomorphize these models, so… I guess I’m done here.
Permanently Deleted
I watched this entire video just so that I could have an informed opinion. First off, this feels like two very separate talks:
The first part is a decent breakdown of how artificial neural networks process information and store relational data about that information in a vast matrix of numerical weights that can later be used to perform some task. In the case of computer vision, those weights can be used to recognize objects in a picture or video streams, such as whether something is a hotdog or not.
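As a rough sketch of what “a vast matrix of numerical weights” means in practice (random toy weights standing in for a trained model; a real vision network would take raw pixels and have millions of parameters):

```python
import numpy as np

# Toy two-layer network: all the "knowledge" lives in these weight
# matrices, which training would normally adjust.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input features -> hidden layer
W2 = rng.normal(size=(8, 2))   # hidden layer -> 2 class scores

def classify(features):
    hidden = np.maximum(0, features @ W1)   # ReLU activation
    scores = hidden @ W2                    # raw class scores
    labels = ["not hotdog", "hotdog"]
    return labels[int(np.argmax(scores))]

# Made-up 4-number "image features" for illustration only.
print(classify(np.array([0.9, 0.1, 0.4, 0.7])))
```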
As a side note, if you look up Hinton’s 2024 Nobel Prize in Physics, you’ll see that he won based on his work on the foundations of these neural networks and specifically, their training. He’s definitely an expert on the nuts and bolts of how neural networks work and how to train them.
He then goes into linguistics and how language can be encoded in these neural networks, which is how large language models (LLMs) work… by breaking down words and phrases into tokens and then using the weights in these neural networks to encode how these words relate to each other. These connections are later used to generate other text output related to the text that is used as input. So far so good.
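A much-shrunken sketch of that idea, with explicit counts standing in for learned weights and whole words standing in for tokens (a toy, obviously nothing like a real LLM):

```python
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees these sentences.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": record how often each word follows each other word.
# A real LLM compresses statistics like these into billions of
# neural-network weights instead of an explicit table.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    # "Inference": emit the statistically most likely continuation.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))   # some word that followed "the" in the corpus
print(next_word("sat"))   # "on" -- the only word that ever follows "sat"
```

The point stands either way: the output is text predicted from relationships in prior text, nothing more.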
At that point he points out these foundational building blocks have been used to lead to where we are now, at least in a very general sense. He then has what I consider the pivotal slide of the entire talk, labeled Large Language Models, which you can see at 17:22. In particular he has two questions at the bottom of the slide that are most relevant:
- Are they genuinely intelligent?
- Or are they just a form of glorified auto-complete that uses statistical regularities to pastiche together pieces of text that were created by other people?
The problem is: he never answers these questions. He immediately moves on to his own theory about how language works using an analogy to LEGO bricks, and then completely disregards the work of linguists in understanding language, because what do those idiots know?
At this point he brings up The long term existential threat and I would argue the rest of this talk is now science fiction, because it presupposes that understanding the relationship between words is all that is necessary for AI to become superintelligent and therefore a threat to all of us.
Which goes back to the original problem in my opinion: LLMs are text generation machines. They use neural networks encoded as a matrix of weights that can be used to predict long strings of text based on other text. That’s it. You input some text, and it outputs other text based on that original text.
We know that different parts of the brain have different responsibilities. Some parts are used to generate language, other parts store memories, still other parts are used to make our bodies move or regulate autonomic processes like our heartbeat and blood pressure. Still other bits are used to process images from our eyes and reason about spatial awareness, while others engage in emotional regulation and processing.
Saying that having a model for language means that we’ve built an artificial brain is like saying that because I built a round shape called a wheel means that I invented the modern automobile. It’s a small part of a larger whole, and although neural networks can be used to solve some very difficult problems, they’re only a specific tool that can be used to solve very specific tasks.
Although Geoffrey Hinton is an incredibly smart man who mathematically understands neural networks far better than I ever will, extrapolating that knowledge out to believing that a large language model has any kind of awareness or actual intelligence is absurd. It’s the underpants gnomes’ business plan, but instead of:
- Collect underpants
- ?
- Profit!
It looks more like:
- Use neural network training to construct large language models.
- ?
- Artificial general intelligence!
If LLMs were true artificial intelligence, they would be learning at an increasing rate as we give them more capacity, leading to the singularity as their intelligence reaches hockey-stick exponential growth. Instead, we’ve been throwing a growing amount of resources at these LLMs for increasingly smaller returns. We’ve thrown a few extra tricks into the mix, like “reasoning”, but beyond that, I believe it’s clear we’re headed toward a local maximum: one far short of intelligence that would be truly useful (or represent an actual existential threat), but that resembles human output well enough to fool human decision makers into trusting these systems to solve problems they’re incapable of solving.
Just a reminder that the Supreme Commander of the Allied Expeditionary Forces in Europe in charge of beating the Nazis and other fascists was later the Republican President of the United States, among whose goals was preventing the spread of communism. You don’t need to be leftist to hate fascists.