Hell, why not take a car and just plow through it? Those fuckers are flagbearing an ideology that developed mass murder on an industrial scale. Let them feel a tiny bit of that on themselves.
Blowing up? Seen conservative discussion areas? Their godking is not only one of the working class now, but he owned all the democrats and made Harris look like a fool. He's a master troll doing 4d chess!
Hah. Snake oil vendors will still sell snake oil, CEOs will still be dazzled by fancy dinners and fast-talking salesmen, and IT will still be tasked with keeping the crap running.
This has a lot of "I can use the bus perfectly fine for my needs, so we should outlaw cars" energy to it.
There are several systems, like firewalls, switches, routers, proprietary appliances and so on, that only have a manual update process and can't easily be automated.
That's because they don't see the letters, but tokens instead. A token can be a single letter, but is usually bigger. So what the LLM sees might be something like
st
raw
be
r
r
y
Seen like that, it's more obvious why LLMs struggle with it.
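Real tokenizers split words differently than this, but a toy sketch of the idea, using the hypothetical split above:

```python
# Hypothetical tokenization of "strawberry" -- actual tokenizers differ,
# but the point is the same: the model sees tokens, not letters.
tokens = ["st", "raw", "be", "r", "r", "y"]

# The model never gets the word as a flat sequence of characters,
# so "count the r's" requires reasoning across token boundaries.
word = "".join(tokens)
print(word)             # strawberry
print(word.count("r"))  # 3
```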
In many cases the key exchange (kex) for symmetric ciphers is done using slower asymmetric ciphers, many of which are vulnerable to quantum algorithms to various degrees.
So even when attacking AES you'd ideally do it indirectly by targeting the kex.
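As a toy illustration (tiny numbers, classic Diffie-Hellman, nothing production-grade): the symmetric key both sides end up using is derived from the key exchange, so an attacker who can solve the underlying discrete log (which Shor's algorithm does efficiently on a quantum computer) gets the AES key without touching AES itself.

```python
# Toy Diffie-Hellman key exchange with tiny demo numbers.
# The shared secret would seed a symmetric cipher like AES --
# break the kex (e.g. via Shor's algorithm) and you get the key for free.
p, g = 23, 5            # public modulus and generator (toy-sized)
a, b = 6, 15            # Alice's and Bob's private exponents

A = pow(g, a, p)        # Alice sends A over the wire
B = pow(g, b, p)        # Bob sends B over the wire

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
print(shared_alice == shared_bob)  # True: both derived the same secret
```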
No, all sizes of llama 3.1 should be able to handle the same size context. The difference would be in the "smarts" of the model. Bigger models are better at reading between the lines and at higher-level understanding and reasoning.
Wow, that's an old model. Great that it works for you, but have you tried some more modern ones? They're generally considered a lot more capable at the same size
Increase context length, and probably enable flash attention in ollama too. Llama3.1 supports up to 128k context length, for example. That's in tokens, and a token is on average a bit under 4 letters.
Note that a higher context length requires more RAM and is slower, so you ideally want to find a sweet spot for your use and hardware. Flash attention makes this more efficient.
Oh, and the model needs to have been trained on larger contexts, otherwise it tends to handle them poorly. So you should check what max length the model you want to use was trained to handle.
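Roughly what that looks like in practice (the env var and Modelfile parameter are what I believe current ollama uses; check the docs for your version, and 32768 is just an example value):

```shell
# Enable flash attention (assumed env var, set before starting the server)
export OLLAMA_FLASH_ATTENTION=1

# Create a variant of llama3.1 with a 32k context window
cat > Modelfile <<'EOF'
FROM llama3.1
PARAMETER num_ctx 32768
EOF
ollama create llama3.1-32k -f Modelfile
ollama run llama3.1-32k
```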
Well, the people wanted this