Admins of my instance are fine with people using their instance
Fair enough, it appears they are completely happy for you to use up a small piece of their resources.
If you can't be bothered to spend 5 minutes putting the magical "do not use my resources" header (`Cross-Origin-Resource-Policy: same-site`) on your site, then I think people would believe that using up a portion of your bandwidth is fine and act accordingly.
Wasn't there some guy who wanted a total of 8 dollars to fix the GDPR issue and didn't get funded? Something tells me the operators aren't too concerned.
If the server owner isn't fine with others hotlinking, they can simply deny requests not related to their website(s). On that note, I hope you are donating to your instance, otherwise by your logic you are stealing their resources.
It's not free in the traditional sense; someone else pays for you. These projects work by being "free", with their biggest/most charitable users supporting them. Every major software project that runs the web, be it curl or Python, works that way. You do not pay to use the service; you instead pay to help delay the abandonment of the project and bring updates to improve your experience.
If you don't particularly want this project to succeed, then that's fine, though you should probably pay your instance a dollar to cover the bills incurred by your own use of their resources.
... Can I have your reasoning? Just because they are communists doesn't mean they are foreign agents; all it means is that they are authoritarians. Besides, due to Lemmy's federated nature, governments would be better off infiltrating or outright buying larger social media companies.
I know a large JS obfuscator has auto-detection code; try opening dev tools first, then loading the site in that tab so it doesn't detect the sudden viewport change.
... What are you saying exactly? If enough people believe a word has a certain definition, then that word takes on that definition; that's how language works. There is nothing stopping the word "Frindle", for example, from replacing the word "pen".
The harmful bit wasn't the instructions for counterfeit money; it's the part where script kiddies use ChatGPT to write malware, or someone tries to get instructions to make VX nerve agent. The issue is that the AI can spit back anything in its dataset in a way that lowers the barrier to entry for committing crimes ("Hey ChatGPT, how do I make a 3D printed [gun] and where do I get the STL?").
You'll notice they didn't censor the money instructions, but they did censor the possible malware.
The question is, what will happen in 2038 when a Y2K-style failure happens again, this time due to 32-bit Unix timestamps overflowing? People are already sounding the alarm, but who knows if everyone will fix all of the affected systems before it hits.
Yes
Stable Linux variants (also known as distros) are very widely used, ranging from Linux Mint, which is completely stable with no issues for day-to-day use (assuming you don't use an Nvidia card), to Debian, which has a selling point of not changing anything beyond security updates for like 6 years straight.
Most people here will be talking about their bleeding-edge systems, which use code that is often in beta, or systems so new they don't have proper documentation (the bcachefs file system, which showed up last month, comes to mind).
Aren't pointers just an ID given to a variable that corresponds to its "true" position in the array of bytes that make up a program's memory? I feel like I'm missing something.
The tech is great at pretending to be human. It is simply a next "word" (or phrase) predictor. It is not good at answering obscure questions, writing code or making a logical argument. It is good at simulating someone.
It is my experience that it approximates a human well but doesn't get the details right (like truthfulness, or reflecting objective reality), making it useless for essay writing but great for stuff like Character.AI and other human simulations.
If you are right, give an actual logical response only capable of coming from a human, as opposed to a generic ad hominem. I repeat my question: have you actually used any of the GPT-3 era models?
... Have you tried any of the recent ones? As it stands, ChatGPT and Gemini are both built with guardrails strong enough to require custom inputs to jailbreak, with techniques such as Reinforcement Learning from Human Feedback used to lobotomize misconduct out of the AIs.
If all you need is a one-sided conversation designed to make you feel better, LLMs are great at concocting such "pep talks". For some, that just might be enough to make it believable. The Turing test was cracked years ago; only now do we have access to things that can do that for free*.