Do you have the full text of the notification that you could post here? Kinda hard discussing the specifics otherwise.
If it really contains the quote "Congress is planning a total ban of TikTok", I do consider that misleading.
People here often make a lot of noise about disinformation campaigns on sites like Facebook, Twitter, and YouTube (and that's just user-posted content that the sites fail to moderate, not content posted by the sites themselves), so I don't see why this would get a pass.
TikTok urged its users to protest the bill, sending a notification that said, "Congress is planning a total ban of TikTok... Let Congress know what TikTok means to you and tell them to vote NO."
Also from a BBC article about the same thing:
Earlier, users of the app had received a notification urging them to act to "stop a TikTok shutdown."
So they were literally sending out misleading notifications (a forced sale is not a total ban), and then users wrote to Congress based on that...
The probability that they will sell seems really high to me, as the same thing almost happened back in 2020.
Yes, there are. In addition to the thumbs-up/down buttons that most people don't use, you can also score responses based on metrics like "did the person try to rephrase the same question?" (an indication of a bad response), etc., from data gathered during actual use (which ChatGPT does use for training).
Human experts often say things like "customers say X, they probably mean they want Y and Z" purely based on their experience of dealing with people in some field for a long time.
That is something that can be learned. Follow-up questions can be asked to clarify (or even doubts - "are you sure you don't mean Y instead?"). Etc. Not that complicated.
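The "did they rephrase?" signal doesn't even need to be fancy - crude token overlap already catches a lot of it. A minimal sketch (the 0.5 threshold and whitespace tokenization are arbitrary choices for illustration; a real system would likely use embeddings):

```python
def jaccard(a: str, b: str) -> float:
    # Token-level Jaccard similarity between two messages.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def looks_like_rephrase(prev_q: str, next_q: str, threshold: float = 0.5) -> bool:
    # Heuristic: a follow-up that is highly similar to the previous
    # question suggests the earlier answer missed the mark
    # (a negative training signal for that response).
    return jaccard(prev_q, next_q) >= threshold
```

A rephrased question scores high on overlap; an unrelated follow-up scores near zero, so it isn't flagged.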
(Could be why OpenAI chooses to degrade the experience so much when you disable chat history and training in ChatGPT 😀)
Today's LLMs have other quirks, like how adding certain words can help even if they don't change the meaning that much, but that's not some magic either.
It's not dead, and it's not going anywhere as long as LLMs exist.
Prompt engineering is about expressing your intent in a way that causes an LLM to arrive at the desired result (which right now sometimes requires weird phrases, etc.).
It will go away as soon as LLMs get good at inferring intent. It might not be a single model, it may require some extra steps, etc., but there is nothing uniquely "human" about writing prompts.
Future systems could, for example, start asking questions more often to clarify your intent, and then use the answers as input to the next stage of tweaking the prompt.
I'm sure it can, but then how does one even get the appointment set up in the first place? That is a much harder part of the process (especially when starting from zero).
"Getting to a place" being a barrier may be a bit of a stretch (unless it's like really far and interferes with your work, etc.), but actually deciding to do therapy, what kind, finding a good therapist, and setting up the first appointment - that can be quite a massive barrier.
You don't need a Facebook account; a Meta account was available as an alternative.
That's great, right? Much better!!!
Actually yes. The problem with needing a Facebook account was that it was part of an unrelated service (social network, messenger, etc.) that you couldn't separate. Meta accounts are separate accounts for VR only, much like the previous Oculus accounts.
For these kinds of generic questions, ChatGPT is great at giving you the common fluff you'd find in a random "10 ways to improve your career" YouTube video.
Which may still be useful advice, but you can probably already guess what it's going to say before hitting enter.
To be fair, the first iPhone did kinda suck in many ways, especially shortly after launch. Only the 2nd or 3rd generation had most of the basics in place.
As far as I know, that is mainly used where a better, bigger model generates training data for a more efficient smaller model to bring it a bit closer to its level.
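The core of that teacher-to-student setup is just training the small model to match the big model's softened output distribution. A minimal sketch of the distillation loss (the temperature and logits here are arbitrary, not any particular lab's implementation):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    # KL divergence between the teacher's and student's softened
    # distributions; the student is trained to minimize this.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student already matches the teacher and grows as their predictions diverge; in practice it's usually mixed with an ordinary loss on the hard labels.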
Were there any cases of an already state of the art model using this method to improve itself?
I can kind of see his point, but the things he is suggesting instead (biology, chemistry, finance) don't make sense for several reasons.
Besides the obvious "why couldn't AI just replace those people too" (even though it may take a few extra years), there is also the question of how many people can actually develop deep enough expertise to make meaningful contributions there - if we're talking about a massive increase in the number of people going into those fields.
But what is the actual real-world practical solution for those people?
Are they now just going to be broke (or even in debt) with no place to live?