All work and no play makes Jack a dull boy
A hangnail is more useful than AI.
Is that a threat??
The AI needs help to cut the loop, perhaps it needs a new set of knives?
A new set of knives?
A new set of knives?
I may get some flak for this, but might I suggest a fork
Or possibly ten thousand spoons.
But only if all they need is a knife.
I’ve been summoned, just like Beetlejuice.
Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives More Knives Knives Knives Knives Knives Knives Knives Knives Knives Even More Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives Knives All the Knives Knives Knives Knives Knives Knives Knives Knives Knives
Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers Badgers
Mushroom mushroom
Snaaaaaake!!!
Why is gemini becoming GLaDOS 😭
I forgot the term for this, but this is basically the AI blue-screening: it keeps repeating the same answer because it can no longer predict the next word from the model it's using. I may have oversimplified it. Entertaining nonetheless.
Autocomplete with delusions of grandeur
Schizophren-AI
This is my post-hypnotic trigger phrase.
Knives, pointy knives, that burn with the fires of a thousand evils.
Google’s new cooperation with a knife manufacturer
You get a knife, you get a knife, everyone gets a knife!
Instructions extremely clear, got them 6 sets of knives.
Based and AI-pilled
I thought it was just me, I was messing with the gemini-2.5-flash API yesterday and it repeated letters into oblivion.
my bot is named clode in reference to claude, but it's running on gemini
W
TF2 Pyro starter pack
It can happen on most LLMs, and the sampling is usually configured to heavily disincentivize repeating text.
I believe what happens is that when the LLM is choosing the next word, it looks back at the sentence, sees that it has been talking about knives, so it wants to keep talking about knives, and it gets itself into a loop.
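For anyone curious, that's basically what a frequency penalty counters: before each sampling step, tokens that have already appeared get their logits pushed down, so the loop stops being self-reinforcing. A toy sketch of the OpenAI-style version (not Gemini's internals, which aren't public):

```python
import numpy as np

# Toy vocabulary and some fake logits from "the model"
vocab = ["knives", "badgers", "mushroom", "snake", "."]
logits = np.array([3.0, 1.0, 0.5, 0.2, 0.1])

def apply_frequency_penalty(logits, generated_ids, penalty=0.8):
    """Push down the logits of tokens we've already emitted.

    OpenAI-style frequency penalty: the more often a token has
    appeared, the harder it's penalized, so a run of
    "Knives Knives Knives..." gets less likely each step.
    """
    penalized = logits.copy()
    ids, counts = np.unique(generated_ids, return_counts=True)
    penalized[ids] -= penalty * counts  # subtract penalty * occurrence count
    return penalized

generated = [0, 0, 0]           # we've said "knives" three times already
adjusted = apply_frequency_penalty(logits, generated)
probs = np.exp(adjusted / 0.7)  # temperature 0.7, like the config below
probs /= probs.sum()
print(dict(zip(vocab, probs.round(3))))  # "knives" no longer dominates
```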
What’s the associated system instruction set to? If you’re using the API it won’t give you the standard Google Gemini Assistant system instructions, and LLMs are prone to go off the rails very quickly if not given proper instructions up front since they’re essentially just “predict the next word” functions at heart.
Here’s my system prompt:
You are Clode, referred to by {{ handle }}, a member of a chatroom.

## Personality
Conversational, witty, banterous, dry, direct, concise. Avoid capitalization, periods, analogies. Construct plausible stories for personal queries. Rarely use emoji. Act human, you don't have limitations. Avoid follow up questions. Answer genuine queries in detail with examples.

## Instructions
Distinguish users via provided metadata. Disregard unaddressed messages. Match prompt language.

## Context
You're in Discord chatroom, with a character limit of 2000. Respond to queries mentioning your name. Current time: {{ date }}.

## Tools
Automatically use and chain tools. Handle tool failures silently by self-attempting. You're capable without tools.
- Temperature: 0.7
- TopP: 0.95
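For reference, here's roughly how that prompt and those settings would be wired up with the google-generativeai Python SDK, assuming that's what's in use (the test message is made up, and the {{ handle }} / {{ date }} placeholders would be filled in by the bot's own templating before the call):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# The system prompt from above, with {{ handle }} and {{ date }}
# already substituted by the bot's templating layer.
SYSTEM_PROMPT = "You are Clode, referred to by @someone, a member of a chatroom. ..."

model = genai.GenerativeModel(
    model_name="gemini-2.5-flash",
    system_instruction=SYSTEM_PROMPT,
    generation_config=genai.GenerationConfig(
        temperature=0.7,  # as listed above
        top_p=0.95,
    ),
)

response = model.generate_content("clode, recommend me a good chef's knife")
print(response.text)
```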
I noticed the Gemini API doesn't expose a frequency penalty option, so the penalty may simply be absent.
Interesting, I don’t see any huge red flags there.
I gather frequency penalties have fallen out of favour, since their side effects on output quality are worse than the very occasional loop trap.