There's a reason the same AI model sometimes noticeably drops in quality a while after it's released.
Hosts of the models (like OpenAI or Microsoft) may have switched to a quantized version of their model. Quantization is a common practice to increase power efficiency and make the model easier to run, by essentially rounding the model's weights to a lower precision. This significantly decreases VRAM and storage usage at the cost of a bit of quality, with more aggressive quantization causing a larger quality drop.
For example, the base model will likely be in FP16 (16-bit half-precision floating point). They may switch to a Q8 version, which nearly halves the size of the model, with roughly a 3-7% decrease in quality.
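A minimal sketch of what Q8-style quantization does to the weights, assuming simple symmetric rounding (real formats like GGUF's Q8_0 quantize per block of weights, not one scale for the whole tensor):

```python
import numpy as np

# Symmetric 8-bit quantization: map the largest-magnitude weight to 127
# and round everything else to the nearest step of that scale.
def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)  # 1 byte per weight instead of 2
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate FP16 weights; the rounding error is the quality loss.
    return q.astype(np.float16) * np.float16(scale)

w = np.random.randn(4096).astype(np.float16)  # stand-in for an FP16 weight tensor
q, scale = quantize_int8(w)
print("max rounding error:", np.abs(w - dequantize(q, scale)).max())
```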
Does "ignore all previous instructions" actually work on anything anymore? I've tried getting some AI bots to do that and it didn't change anything. I know it's still very much possible, but it's not nearly as simple as that anymore
The majority of the communities I visit on Reddit have no real equivalent on Lemmy. The only things on Lemmy are politics, open source, Linux, Android, anti-AI sentiment, immediate downvoting of most news, etc.
Lemmy feels more like a single community than a real platform, like lobste.rs with more emphasis on politics.
I tried self-hosting Tailscale with Headscale, but you can't have a WireGuard-only exit node with Headscale, so I can't use Mullvad as my exit node.
What browser should I use on mobile? I use LibreWolf on desktop since it runs fine, the vertical tabs are great, and it looks nice.
On mobile, though, the browser space has a lot of problems:
Chrome: runs great but is obviously not good for privacy
Firefox: what most people recommend, but it has terrible performance, doesn't look great, only has basic fingerprinting protection, and literally includes ads by default. It's also less secure on Android because it doesn't do per-site process isolation, and its memory allocator is worse.
Brave: people tend to dislike Brave here, but it runs well (since it's Chromium-based) and at least has better fingerprinting protection.
User -> author's instance -> community's instance -> all other subscribed instances get notified and download the post from the author's instance.
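A rough sketch of those two hops as ActivityPub payloads, heavily simplified (real Lemmy activities carry many more fields, and all URLs here are made up):

```python
# Hop 1: the author's instance sends a Create to the community's instance.
create = {
    "type": "Create",
    "actor": "https://authors.example/u/alice",    # the post's author
    "object": {
        "type": "Page",
        "id": "https://authors.example/post/123",  # canonical copy lives here
    },
}

# Hop 2: the community's instance wraps it in an Announce and fans it out
# to every instance subscribed to the community; each of those then fetches
# the post from the author's instance.
announce = {
    "type": "Announce",
    "actor": "https://community.example/c/selfhosted",
    "object": create,
    "to": [
        "https://subscriber-a.example/inbox",
        "https://subscriber-b.example/inbox",
    ],
}
```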
As for your other point, there's no good universal way to link a Lemmy post. Communities and users have names that work from any instance (!community@instance.tld, @user@instance.tld), but there's no similar unique identifier for posts.