On a deeper level than small talk, of course.
I was talking to my father-in-law today and he’s worried that it’s going to Skynet us or, like, make us dependent on it and then quit helping or something.
I tried explaining that it’s not a robot, it’s not an AI, it’s not C-3PO. It’s a very elaborate autocomplete. It can’t even do kindergarten-level arithmetic consistently, which a basic calculator can accomplish, and that’s because it’s just autocomplete.
“Yeah, but what if it starts teaching itself.”
I love the man but God damn people think this thing is so much more than it is.
The other day someone told me that their partner used ChatGPT instead of going to therapy.
We’re all so cooked.
So many people are going to develop/exacerbate mental illness from doing that.
Turbo cooked
Probing the quicksand with a rod made of quicksand
Sounds safe to me
The point of contention regarding therapy for me is that I’m literally paying for an impersonal conversation in which I express my deepest insecurities to someone who most likely doesn’t give a shit.
I don’t see how AI fixes that but I also don’t understand why it can’t help if your relationship with your therapist is supposed to be a fundamentally clinical one.
“person I pay to pretend to give a shit about my problems” is such a reductive and unhealthy view of therapy that it should be immediately apparent why therapy has not been helpful and why you’re unable to see why an Autocorrect word regurgitation machine wouldn’t be helpful.
If you have an accountant, is that a person you pay to pretend to give a shit about your taxes? Is an orthopedic surgeon someone you paid to pretend to give a shit about your broken leg? You should be able to recognize why this would be an unhealthy and unhelpful framing device.
10% odds the problem is that you haven’t found the right therapist. 90% odds you’re building up mental barriers that are actively preventing you from engaging with the therapeutic model in a beneficial way. Acknowledging this and working to overcome these barriers was life-changing for me and has resulted in an astonishing level of change in not only how effective talk therapy has been, but also in how I feel and think about myself particularly in regards to my mental and physical health.
One big issue with therapy is that it’s difficult to afford unless you have a good job or are on a state-sponsored health plan. The people caught on that welfare cliff are arguably some of the people who need it the most.
And then, once you get it, it’s often restricted to “you will have twelve 45-minute sessions to address the issue”. It really cannot function properly in a transactional, capitalist model. To really have an impact, it needs to tie in with the pace of life, and if this doesn’t take the form of a figure who’s present outside the office/clinic, it’s prohibitively costly and/or slow to try to link the professional up with the on-the-ground reality.
What we now have access to is something that’s on-demand, 24/7, for better or for worse. The provider/client model cannot match this.
if therapy was rigorous we wouldn’t have to suffer through dozens of wrong therapists.
There definitely are crappy providers out there but if you’re at the point where you’ve personally bounced off of multiple dozens of providers, it might be time to start thinking about the “why” of the problem and about your actual needs. Like, what do you want out of therapy? What are your goals? Maybe you need a specific therapy method, or maybe talk therapy straight up cannot meet those needs.
or maybe talk therapy straight up cannot meet those needs.
probably this. drugs didn’t help either.
The classic model of a sufferer of depression has them numb to or unable to see positive stimuli in their life and both talk therapy and SSRIs can help people recognise and maximize the positive things in life.
There’s a scenario colloquially referred to as “shit life syndrome” in which a sufferer is living in untenable and seemingly unchangeable circumstances, which are impacting their mental health. Therapy is largely ineffective here because it doesn’t affect the material aspects of life. It doesn’t stop abuse, it doesn’t put food on the table, it doesn’t make your workplace more tolerable, etc. There also seems to be a high level of correlation between this type of depression and cPTSD.
Interestingly, SLS-type depression may be a major cause of paradoxical reactions to SSRIs, where symptoms actually worsen. Unlike the traditional model of a depression sufferer, it’s not that SLS-type sufferers are unable to see the positive stimuli in their life: they’re just overwhelmingly exposed to negative stimuli. Heightening the ability to engage with stimuli that were being missed instead results in a worsening of symptoms.
If you have an accountant, is that a person you pay to pretend to give a shit about your taxes? Is an orthopedic surgeon someone you paid to pretend to give a shit about your broken leg? You should be able to recognize why this would be an unhealthy and unhelpful framing device.
I just don’t agree that these are good analogies.
Why? If they are bad analogies, there must be a reason for that to be the case.
If you have an accountant, is that a person you pay to pretend to give a shit about your taxes? Is an orthopedic surgeon someone you paid to pretend to give a shit about your broken leg?
Yes and yes. I’m hiring them because they perform a service in exchange for money. It’s not reasonable to expect them to care on an individual level about my taxes or my broken leg, I just need them to do their job.
You are in fact having someone perform a service in exchange for money. The service is to identify, analyze, and treat a specific, named issue or group of issues which you are facing. But would you walk up to your parent or neighbor and describe either of those professions as someone you paid to pretend to care?
A therapist is supposed to do the same thing: identify, analyze, and treat an issue or issues. So why the fuck would you frame therapy like you’re trying to pay someone to pretend to be your friend? Why is it uniquely normalized to describe just this one profession in this way?
What are your outcome measures for therapy, personally?
Primarily subjective and patient reported measures (perceived self-esteem, self-reported frequency or severity of events, ability to cope with external pressures) as the physiological measures associated with most of my issues can only be expected to worsen.
I’ve had improvements in terms of things like reduction in negative self-talk, reduction in upsetting intrusive thoughts, and pretty drastic reductions in both frequency and severity of ED events.
The problem is that AI absolutely does not provide a clinical relationship. If your input becomes part of the LLM’s context (which it has to in order to have a conversation), it will inevitably start mirroring you in ways you might not even notice, something humans commonly (and subconsciously) respond to with trust and connection.
Add to that that they are designed to generally agree with and enable whatever you tell them and you basically have a machine that does everything to reinforce a connection to itself and validate the parts of yourself you have concerns about.
There are already so many stories of people spiralling because they started building rapport with an LLM and it’s hard to imagine a setting where that is more likely to occur than when you use one as your therapist
There are already so many stories of people spiralling because they started building rapport with an LLM and it’s hard to imagine a setting where that is more likely to occur than when you use one as your therapist
There are multiple cases where an LLM is alleged to have contributed to someone’s suicide, from supporting sentiments of the afterlife being better to giving practical advice.
Given the way LLMs function, they will have a hard time with therapy. ChatGPT’s context window is 128k tokens. As you chat, your prompts/replies add up and start filling the context window. GPT also has to look at its own responses for context, which fills up the window as well. LLMs suck with nearly empty context windows and with nearly full ones. When you’re close to a full context window, the model will start hallucinating and having problems with responses. Eventually it will only be able to focus on parts of your conversations because you’ve blown past the 128k-token mark.
The ways to mitigate this problem have to be done by the user and they disrupt therapy.
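Something like this, roughly (a toy Python sketch; the budget constant and the word-count “tokenizer” are stand-ins I made up, not how any real client counts tokens):

```python
# Rough sketch of why long chats degrade: the history has to fit a fixed
# token budget, so old turns eventually get dropped. Token counts are faked
# here with a word count; real tokenizers (BPE) split text differently,
# but the truncation logic is the same.

CONTEXT_BUDGET = 128_000  # e.g. a 128k-token window

def count_tokens(text: str) -> int:
    # Stand-in for a real tokenizer; roughly one token per word.
    return len(text.split())

def trim_history(history: list[str], budget: int = CONTEXT_BUDGET) -> list[str]:
    """Keep the most recent turns that fit the budget; drop the oldest."""
    kept: list[str] = []
    used = 0
    for turn in reversed(history):   # walk from newest to oldest
        cost = count_tokens(turn)
        if used + cost > budget:
            break                    # everything older silently falls away
        kept.append(turn)
        used += cost
    return list(reversed(kept))

# Trimmed turns are simply gone: the model never "remembers" them, which
# is exactly the problem for a months-long therapy conversation.
```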
We just need to add more tokens then
/S
I find that it’s helpful being able to talk to someone you can’t disappoint. Otherwise I will always lie to make them feel better about how I’m doing.
I’ve had therapists with whom that exact scenario has happened, I’ve literally lied to them about how I’m doing.
can’t imagine why trusting the agreeable electrified foolin’ machine created by sociopaths can’t not help
Just wait for the conversation where they come to you telling you their partner divorced them because “ChatGPT told them to.”
I described it to a friend once as an “ass-kissing machine” and it completely changed her view of it; she recognised that that is exactly what it does: it just says what you want to hear.
A lot of people feel like they never have any control or any sense of recognition of their “hard work”, so an ass-kissing bot is perfect to stroke the ego of someone who desperately wants someone to tell them that their ideas are good and clever, and to take “interest” in what they say.
I think Americans are primed to feel very enthusiastic about anything that tells them things they already believe, even more so if it sounds very confident and organized. And people are already prone to woowoo stuff.
I don’t think you’d break the spell if you informed everyone it’s just lines of code that doesn’t think. A statistically significant number of Americans claim to talk with spirits and angels, or to have the gift of prophecy. On the flip side, the techbro types believe in garbage like the future basilisk AI that tortures everyone forever. These same people are already deranged; all the LLM does is organize their mashed-potato brains into sentences that can be read. And just about every major LLM seems primed to have a very servile, docile writing style, so it’s trivial to get them to say whatever you want so long as you keep saying the same thing. They’re not well designed for confrontation.
I firmly believe one of the best ways to deal with reactionaries, woowoo types peddling scams, or conspiracy theorists is to simply tell them they’re a fucking idiot. “That sounds fucking stupid. Shut up, nerd, and never speak to me about this again.” That’s how you do it, that’s how you dispel stuff. Social embarrassment and confrontation.
An LLM won’t do that, it’ll rub a nice mental salve on your already smooth brain. It’s an actual echo chamber.
On the flip side, the techbro types believe in garbage like the future basilisk AI that tortures everyone forever
makes a bit of noise, but I doubt all that many people are true believers compared to the population who has ever touched a computer for a living
I think it also has something to do with how distanced most of us are from creation and maintenance of machines, particularly electronics. if you don’t quite understand what a transistor is, or how code works, or how a large language model turns inputs into outputs, then “well there must be a little dude in there somewhere” makes as much sense as anything. Plus people tend to personify inanimate objects to begin with.
then “well there must be a little dude in there somewhere”
One of my earliest memories is my mom showing me how a cash register works and telling me there was a little gremlin inside the machine powering it.
This your mom? ->
Mommy!
ngl, this is creepy af /s
It’s true though, they eat the coins. Times have been tough for cash register gremlins since everyone started using cards to pay for everything.
6 year old me has verified this info as TRUE
Why do you think it’s called horsepower? Ofc cars are powered by little horsies
if you don’t quite understand what a transistor is, or how code works, or how a large language model turns inputs into outputs, then “well there must be a little dude in there somewhere” makes as much sense as anything.
In all fairness, this is how people come to believe in a consciousness too.
This is a really goofy argument. There’s not a little dude in there, the whole “there” is a normal sized dude who, incidentally, can be disassembled into parts that are not themselves entire dudes. Most people don’t believe in a Cartesian Theater sort of hypothesis.
There is a little dude in the AI. Millions of little dudes. AI is just pretending to be one of them.
The appeal of LLMs seems uncomfortably similar to how Thomas Jefferson enthusiastically employed dumbwaiters to limit interaction with enslaved people.
I don’t really know how to explain it to people who don’t understand that ChatGPT is the same as when your phone guesses what word you’re going to say next, but on turbo mode. If they still don’t get it after that, then I don’t know what to do to explain it further. Just fucking get it, man!!
I try to tell them that the machine is just calculating which word is most likely to follow the previous words. It doesn’t understand the context of what it’s saying, only that these words fit together right.
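You can show them the whole trick in toy form. This is just my own illustration with a throwaway training string; real models use neural nets over billions of documents, but the objective really is “predict the next word”:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in some text,
# then always emit the most frequent follower. Real LLMs use neural nets
# over subword tokens, but the training objective is the same: given the
# words so far, predict what comes next.

text = "the cat sat on the mat and the cat ate the fish"
followers: dict[str, Counter] = defaultdict(Counter)
words = text.split()
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def predict(word: str) -> str:
    # No notion of meaning anywhere -- just "which word fit here most often".
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat": seen after "the" more than anything else
```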
At risk of sounding ignorant…
There has to be more to it than that, right? I mean these tools can write working code in whatever language I need, using the libraries I specify, and it just spits out this code in seconds. The code is 90% of the way there.
LLMs can also read charts and correctly assess what’s going on, can create stock trading strategies using recent data, can create recipes that work implying some level of understanding of how to cook, etc. It’s kinda scary how much these things can do. Now that my job is training these models I see how far they’ve come in just coding, and they will 100% replace a LOT of developers.
Because the LLMs have been trained on however many curated data sets with mostly correct info. It sees how many times a phrase has been used in relation to other phrases, calculates the probability that this is the correct output, then gambles based on a certain preprogrammed risk tolerance, and spits out the output. Of course the software engineers will polish it up with barriers to keep it within certain boundaries.
But the key thing is that the LLM doesn’t understand the fundamental concepts of what you’re asking it.
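If I’ve got it right, that “preprogrammed risk tolerance” is what’s usually called sampling temperature. A hand-rolled sketch with made-up numbers (not any real model’s internals):

```python
import math
import random

# Toy "risk tolerance" knob: temperature sampling over a made-up
# next-token distribution. Low temperature -> almost always the top
# choice; high temperature -> gamble on less likely tokens.

def sample(logits: dict[str, float], temperature: float = 1.0) -> str:
    scaled = {tok: v / temperature for tok, v in logits.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(v - peak) for tok, v in scaled.items()}
    total = sum(weights.values())
    probs = [weights[tok] / total for tok in weights]
    return random.choices(list(weights), weights=probs, k=1)[0]

# Made-up logits for the next word after "vegetables go with..."
logits = {"olive oil": 2.0, "soy sauce": 1.0, "concrete": -3.0}
print(sample(logits, temperature=0.2))  # nearly always "olive oil"
print(sample(logits, temperature=2.0))  # sometimes the weird option
```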
I’m not a programmer, so I could be misunderstanding the overall process, but from what I’ve seen of how LLMs work and are trained, AI makes a very good attempt at what you almost wanted. I don’t know how quickly AI will progress, but for now I just see it as an extremely expensive party trick.
That party trick is shipping code and is good enough to replace thousands of developers at Microsoft and other companies. Maybe that says something about how common production programming problems are. A lot of business code boils down to putting things in and pulling things out of databases or moving data around via API calls and other communication methods. This tool handles that kind of work with ease.
can create recipes that work implying some level of understanding of how to cook
Being able to emulate patterns does not actually indicate some sort of higher level of understanding. You aren’t going to get innovative new recipes, they are either just paraphrasing what they have read many people describe or they are cobbling together words.
That may have been a bad example because for recipes it could just search the web and infer that vegetables go with olive oil for a stir fry. Where it’s impressed me so far is in taking a piece of complex code and being able to refactor it, add features, write unit tests, and write up development plans. That text doesn’t exist. It has to do some form of reasoning to interpret the code and come up with solutions for that particular problem.
Syntax is syntax. I think from the standpoint of making a computer do something, it’s really not that different from language processing. That, and just like when you ask it to make a new recipe or whatever else, it is liable to make up something nonsensical and fail to identify the problem unless you spell it out first.
We’ve had opposite experiences training these things.
I’ve been shocked how little they’ve advanced and how absolutely shit they are. I’m training them in math and it’s fucking soul sucking misery. They’re less capable than Wolfram Alpha was 20 years ago. The mistakes they make are so fucking bad, holy shit. I had one the other day try to use Heron’s Formula for the area of a triangle on a problem where there were no triangles!
These things are crap and they aren’t getting better.
There has to be more to it than that, right?
No, there really isn’t. You’re just piggybacking off of the exploited labor of working-class engineers and enjoying the luxury of living away from the blood-soaked externalities that make your chatbot sing.
If AI actually did what you think it does then why would the capitalist class support it? A computer program that is the might of millions of workers? How would the control of the capitalist class continue to exist?
Or the more reasonable explanation: like smartphones and crypto, there exists a very lucrative profit incentive for the capitalist leech to create profit margins out of thin air. Westerners are trained to overconsume, so this doesn’t come as a surprise.
If AI actually did what you think it does then why would the capitalist class support it?
Because server farms are cheaper than hiring developers, artists, writers, etc.? Capitalists don’t care about the environmental impacts as long as their bottom line isn’t affected.
This technology is killing jobs. Thousands are being laid off at Microsoft this month on top of layoffs at lots of other tech companies. The field I went to college to learn is cooked. There’s already thousands of over qualified people applying to the few jobs that are left. This is a way bigger deal than Crypto and another way for the owning class to hoard more wealth for themselves at the expense of us working class folks.
what do you say to people who are like “that’s how the human brain works too”? I mean, I know that’s BS, but I see that response all the time
I feel really grateful that I was exposed to Markov chain bots on irc back in the day as it was a powerful inoculant.
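For anyone who missed that era: those bots were a couple dozen lines. A minimal sketch of the same idea (mine, trained on a throwaway string):

```python
import random
from collections import defaultdict

# Minimal IRC-style Markov babbler: record which words follow which,
# then random-walk through the table. It goes incoherent fast, which
# was the point -- you could feel the "statistical parrot" mechanics.

def train(text: str) -> dict[str, list[str]]:
    chain: dict[str, list[str]] = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def babble(chain: dict[str, list[str]], start: str, length: int = 12) -> str:
    out, word = [start], start
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])
        out.append(word)
    return " ".join(out)

chain = train("the bot says the thing and the channel says the bot is the thing")
print(babble(chain, "the"))
```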
Also add in the fact that public school education is abysmal and burger brains think a computer that can do some basic common sense shit is godlike.
Eh, back in the day we were getting advice from people who stood over cracks in the ground and got high off ethane or tossing knucklebones (as one of the less gross options) to see the future. Humans have always been susceptible to magical thinking and a lot of us can remember when personal computing was barely functional, so it’s not surprising that ChatGPT seems like a quantum leap to some people.
The fact that someone has written a program that’s capable of convincing people that it’s god still has terrifying implications and I for one am not excited about the prospect of a wave of computer-inspired stochastic terrorism, but I don’t think this is a sign that contemporary people are uniquely dumb.
back in the day
cracks in the ground
Oracles? Back in your day you had oracles? Damn, Hexbear is so diverse that we have immortal leftists shitposting on here.
It’s called the immortal science, if you aren’t working to transcend your flesh prison, you need to elevate your game.
It’s going to be so annoying when someone makes a “24” style show where the villains release a free chatbot designed to radicalize people and spread chaos.
Not sure I’d generalize like that. IMO the people who tend to use AI regularly fall into a few very obvious groups: living Dunning-Kruger types who always believe they have the great “talent” or “the genius idea” and just need the magic tool to make it work; those who have been “forced” to use it at work, e.g. some programmers; and finally those who, as you say, just make excuses for it and may not even use it but nevertheless consume the slop consciously and happily (e.g. r/chatgpt users).
I’d guess a significant majority of the average population living outside these bubbles is far less favorable towards AI.
Someone on reddit had an intriguing take for once: the people who are like “ChatGPT revolutionized my work” are people who are just really bad at stuff. I read that comment the other day, and now if you go to reddit and look at the AI subs, you get stuff like this:
https://www.reddit.com/r/OpenAI/comments/1lpte80/chatgpt_is_a_revelation_for_me_in_my_work/
I know the arguments done to death surrounding AI and being a risk to jobs etc. but I work in a very niche area of law and there’s a lot of complex pieces of case law and legislation that deal with it and, frankly, my memory is terrible with retention of this info. I also struggle sometimes with interpreting judgments, specifically when Judgments are written in very complex “legalese” which I’ve always hated.
It’s a very tempting thesis considering it predicts observation. But I think I would temper it a little bit to avoid ableism or getting too far into technocratic thinking.
I also struggle sometimes with interpreting judgments, specifically when Judgments are written in very complex “legalese” which I’ve always hated.
this person graduated law school
I wouldn’t say that it is a useful tool for anyone struggling with mental or learning disabilities though, it doesn’t help them get better, it does the work for them. Someone in a wheelchair doesn’t want someone to just pick them up and carry them everywhere, they want ramps so they can go places without being wholly reliant on others. LLMs just outsource your thinking to an algorithm.
It’s just commodity fetishism taken to a higher level.
I don’t think this is really commodity fetishism in the Marxist sense to a greater degree than if we were talking about brands of soap, even if the facade of the commodity having a personality makes it come off differently.
Plugging my brain into the machine that turns you into Mathematical Average Internet Meemaw
I’m sorta the opposite: social relationships are so shallow and superficial in late-stage capitalism that most of them could be replaced with a chatbot. If your social relationships can be summed up as water-cooler conversations with coworkers and catching up with your drinking buddies, you might as well “socialize” with a chatbot instead. Say what you will about a chatbot, but at least a chatbot won’t stab you in the back like socializing with coworkers can, or turn you into a functioning alcoholic like socializing at a bar can.
If your social relationships are limited to having pointless conversations about the weather or traffic or your favorite sports team, then what is the point of the social relationship in the first place?
reread my comment
Okay, that came out a lot more unhinged than how it sounded in my head lmao
Was going to post this as its own comment, but I think I see what you mean.
I was out with some friends recently and most of the discussion was typical small talk, catching-up type stuff. Me and one friend got into a discussion about music. We were talking about how listening to music by yourself is really an entirely new thing, and how throughout history music has been a communal/social experience. Like when people used to be forced to work 12+ hour days, six or seven days a week, and then on Sunday they’d get together and play music together, and how it was likely the only good thing going for them.
Then someone else jumps in and says “wow this is so deep” and just completely fucking killed the whole vibe, conversation went back to shallow small talk. It was as if the conversation being deeper than “what have you been up to?” actually bothered this person. And it was barely even a “deep” thing to talk about!
I think you’re absolutely right that late stage capitalism has utterly destroyed our ability to connect. It’s like everyone is afraid to say anything beyond the superficial for some reason
It was as if the conversation being deeper than “what have you been up to?” actually bothered this person. And it was barely even a “deep” thing to talk about!
There are many people who are uncomfortable with conversations that involve some form of personal investment.
Anything could be sentient. It’s pure faith