

Well, now you know otherwise. I use it daily.
Nah, it’s completely different from bookmarks. But obviously there’s no sense trying to sell anyone on it anymore.
“I never give advice, but there is one thing I wish you would do when you sit down to write news stories, and that is: Never use the word ‘very.’ It is the weakest word in the English language; doesn’t mean anything. If you feel the urge of ‘very’ coming on, just write the word ‘damn’ in the place of ‘very.’ The editor will strike out the word ‘damn,’ and you will have a good sentence.”
—William Allen White
This looks like the truck equivalent of a really short guy with huge muscles and a perpetual scowl, who always keeps his shoulders directly above his knees and his hands in karate-chop pose while he walks.
Aaron Sorkin’s criminally underappreciated “Sports Night” had a subplot about this.
Emergent behavior, for sure. I think the fact that there aren’t a bunch of sentient holograms in the Lower Decks/Picard timeline suggests that it was situational, though.
The Doctor would absolutely agree. He was intended to be a short-term assistant when a doctor wasn’t available, and he was personally affronted when he discovered that he wouldn’t be replaced by a human in any reasonable amount of time.
Honestly, a lot of the issues stem from the fact that null results exist only in the gaps between information (unanswered questions, questions closed as unanswerable, searches that return no results, etc.), and are thus nearly absent from training data. Models are therefore predisposed toward giving an answer of any kind, and if one doesn’t exist they’ll “make one up.”
Which is itself a misnomer, because the model can’t look for an answer and then decide to make one up when it can’t find one. It just gives whichever answer sounds most plausible, and if the correct answer is well represented in its training data, then the correct answer will happen to be the most plausible one.
“Unintentionally” is the wrong word, because it attributes the intent to the model rather than the people who designed it.
You misunderstand me. I don’t mean that the model has any intent at all. Model designers have no intent to misinform: they designed a machine that produces answers.
True answers or false answers, a neural network is designed to produce an output. Because a null result (“there is no answer to that question”) is very, very rare online, the training data barely contains any. That means a GPT will almost invariably produce some answer, and if a true answer doesn’t exist in its training data, it will simply make one up.
But the designers didn’t intend for it to reproduce misinformation; they intended it to give answers. If a model were trained with the intent to misinform, it would be very, very good at it indeed, because the only training data it would need is literally everything except the correct answer.
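A minimal sketch of that failure mode, with a toy vocabulary and made-up numbers (this queries no real model; every name and value here is invented for illustration): sampling from a softmax over candidate answers always emits some answer, because “I don’t know” simply isn’t among the options unless null results were common in the training data.

```python
import math
import random

# Hypothetical next-token scores a model might assign to candidate
# answers. Note there is no "there is no answer" entry: null results
# are vanishingly rare in training data, so it never learned to emit one.
logits = {"Paris": 2.1, "Lyon": 0.7, "Marseille": 0.3}

# Softmax turns arbitrary scores into probabilities that sum to 1,
# so whatever the scores are, sampling always produces *some* answer.
z = sum(math.exp(v) for v in logits.values())
probs = {k: math.exp(v) / z for k, v in logits.items()}

answer = random.choices(list(probs), weights=list(probs.values()))[0]
print(answer)  # always an answer, never "I don't know"
```

The point of the sketch: ruling out “no answer” isn’t a bug in the sampling step; the option was never on the menu to begin with.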
Sure, but unintentionally. I heard about a guy whose small business (which is just him) recently had a customer call in, furious because ChatGPT had told her about a sale that she couldn’t find. She didn’t believe him when he said the promotion didn’t exist. Once someone decides to leverage that, and make a sufficiently popular AI model start giving bad information on purpose, things will escalate.
Even now, I think Elon could put a small company out of business if he wanted to, just by making Grok claim that its owner was a pedophile or something.
I’m sure there were some forum software packages that offered voting and ranking and such. All of the ones I was a part of were quiet enough that you didn’t need it, though: you could keep up with every post (even if only to decide you weren’t interested) just by checking in every third day or so.
As noted, I don’t want to give Elno any traffic.
At this point, the length of this conversation is way out of proportion to my interest in it.
Our relationship is three days old and has been antagonistic since the start, so I’m not taking homework from you. I don’t feel the need for you to believe me. You may feel free to not.
Unfortunately I see a whole bunch of people who vis bellum, and a whole bunch of people who para pacem, but not a whole bunch of people who both vis pacem and para bellum.
I saw them with my own eyes, on his very Twitter account. They were not screenshots, they were links.
I think it was calling her modest and beautiful one too many times.
Some of the worst ones were, yes. But they were riffing on a couple of real ones that were weird enough.
I don’t think so (I’m not going to give Elon the traffic to check), but it is distressingly believable. He has definitely posted some, uh…eyebrow-raising stuff about his sister before, particularly while she was pregnant.
Mozilla! Stop doing stupid stuff!