I just spent some time on Claude 3, and I see how it can be considered ‘better’ than GPT4, but I quickly found that it tends to lie about itself in subtle ways. When I called it out on an error it would say things like ‘I’ll strive to be better’. I pointed out that its model doesn’t grow or change based on the conversations it has, and that it’s impossible for it to strive to do anything outside of, maybe, that chat. It then went on to show me that it couldn’t even adjust within that chat by doing the same thing 5 more times in 5 different ways.
I recognize the template it used for the apologies (acknowledge, apologize, state intent to do better in the future), which is appropriate for people or beings capable of learning, but it is not one. I went from having a good conversation with it about a poem I wrote to being weirdly grossed out by it. GPT does a good job of not pretending to be human, and I appreciate that.
This is going to sound really stupid, and I should note that I am actively in therapy too.
But I had to put my dog down about a month ago, and there was a point where I just needed some validation, so I went to GPT4 and asked it some questions and told it about how I was feeling. I even fed it a poem that I wrote about her and asked if it was good.
The responses were incredibly empathetic and kind, and did an amazing job of speaking directly to the anxiety, pain, and fear I was feeling in those moments. The responses were what I needed to hear and gave me a measure of peace to get me through in those gaps when people weren’t available, or when I wasn’t able to say those things out loud. There was nothing new to me in those responses, but oftentimes we just need to be reminded by someone or something outside of ourselves about what the truth is, and LLMs can absolutely fill that particular hole when trained properly.
My last three months in particular have been tough, and GPT4 has been a useful tool to get through a fair few storms for me.
Yep, when it still had some value. It was a great location with a view over Lake Washington near South Lake Union in Seattle too. That was during the run of a few years when the SLT was making good choices, which ended this last year and resulted in some layoffs, including mine. It was nice while it lasted lol.
My old company saw this in the first 3 months of the COVID lockdown and immediately sold their building which they’d bought less than a year before. This isn’t rocket science.
It’s awful at text in images though. Pretty sure it draws the text rather than writes it, if that makes sense lol. I had it try 4 times and it got it wrong every time.
Someone else mentioned the iris test being more accurate but that it also includes the eye area around the iris, including eyelashes and eye shape. That would clearly bias the model.
I wonder if there’s anything else that might be giving clues to the machine, or if it is limited to what they say it’s determining sex based on. As a trans-nonbinary person myself, I’m very skeptical and anxious about technologies like this leading to biases and prejudices being emboldened.
I spent the last 4 years working on this at a state-wide level at my last job, so I’ve seen a lot in this space. Skills-based hiring is extremely effective when done right. The problem is that most employers don’t know how. They take the degree requirement off the listing and then go through the same interview processes as if nothing changed. In tech specifically, there is a huge, highly skilled talent pool whose potential is going untapped because of a glass ceiling keeping them from senior positions. If employers were effective at identifying what applicants, and even existing employees, are capable of, they’d have a much easier time filling roles and the ‘talent gap’ wouldn’t be nearly as severe as it is.
What? That’s… an entirely different headset? Not even the same manufacturer. There are also some fairly significant differences between them. The person I replied to said that the Index does all the same stuff that the Vision Pro does, which is empirically incorrect.
I have no qualms with a person making a comfortable living off of building a website like Reddit. None at all. I’d rather have someone who’s able to dedicate their full time and even a team to making an experience great for users and making a very healthy living off of it.
But yea, spez is a greedy fuck and the ELT at Reddit are all greedy fucks. Reddit has no business being a publicly traded company.