
Posts: 11 · Comments: 295 · Joined: 3 yr. ago

  • I've put together two computers over the last couple of years, one Intel (12th gen, fortunately) and one AMD. Both had stability issues, and I had to tweak BIOS settings to get them stable. I actually had to under-clock the RAM on the AMD build (probably because I maxed out the RAM capacity, but I still shouldn't need to under-clock, IMO). I think I'm going to get workstation-grade components the next time I need to build a computer.

  • ZFS on TrueNAS SCALE (enables RAID-like functionality, along with many other features).

    Ext4 or NTFS on everything else, simply because they're the defaults and I don't use any advanced features.

  • The EFF link I posted above provides evidence. Again, here's a quote from part of it:

    The process of machine learning for generative AI art is like how humans learn—studying other works—it is just done at a massive scale. Huge swaths of data (images, videos, and other copyrighted works) are analyzed and broken into their factual elements where billions of images, for example, could be distilled into billions of bytes, sometimes as small as less than one byte of information per image. In many instances, the process cannot be reversed because too little information is kept to faithfully recreate a copy of the original work.

    As I mentioned before, Copilot, at least, helps people avoid copyright infringement by notifying you if your code is similar to public code. The solution I'm proposing is no new laws, just enforcement of the ones we already have. Most of the laws being proposed look like attempts at regulatory capture to me.

  • That we already have laws that protect against copyright infringement (which seem like they would still apply whether the code was spit out by an LLM or not), and no more should be made. That training on public data is fine.

  • I'm saying that using code for training is a different issue than copyright infringement. I edited my post above to better lay out my position.

  • I stated that they can do this, and asked if they could be sued if they used near-verbatim code generated from an LLM, just like they could be sued if they copy-pasted AGPL code.

    Edit: Tools like Copilot tell you if your code is similar to publicly available code, so you can avoid these issues.

    Edit: Just looked up EFF's position and I tend to agree with it:

    Artificial Intelligence and Copyright Law

    Artists are understandably concerned about the possibility that automatic image generators will undercut the market for their work. However, much of what is criticized is already considered fair use under copyright law, even if done at scale. Efforts to change copyright law to transform certain fair uses into infringement carry serious implications, are likely to interfere with the innovative potential of AI tools, and ultimately do not benefit artists. In fact, the use of these tools could expand the capacity of artists to create expressive works. Policymakers should emphasize the importance of human labor and investment in what receives copyright protection to maintain wages and dignity. Artists should be protected from efforts by large corporations to both substitute their labor with AI tools and create a new, unnecessary copyright regime around AI-generated art.

    Machine Learning is a Fair Use

    The process of machine learning for generative AI art is like how humans learn—studying other works—it is just done at a massive scale. Huge swaths of data (images, videos, and other copyrighted works) are analyzed and broken into their factual elements where billions of images, for example, could be distilled into billions of bytes, sometimes as small as less than one byte of information per image. In many instances, the process cannot be reversed because too little information is kept to faithfully recreate a copy of the original work.

    The analysis work underlying the creation and use of training sets is like the process to create search engines. Where the search engine process is fair use, it is very likely that processes for machine learning are too. While the act of analysis may potentially implicate copyright, when that act is a necessary step to enabling a non-infringing use, it regularly qualifies as fair use. If the intermediate step were not permitted, fair use would be ineffective. As such, when factual elements of copyrighted works are studied and processed to create training sets—which, once again, is how we humans learn and are inspired by themes and styles in art and other works—that is likely to be found a fair use.

    https://www.eff.org/document/eff-two-pager-ai

  • After all, if an “AI” model, open source or not, is allowed to just “train” on my AGPL code and spit it back (with minor modifications at best) to an engineer in AWS that’s it for my project. Amazon will do the Amazon thing and steal the project. So say goodbye to any software freedom we have.

    An engineer at AWS can already just copy your code, make minor modifications, and use it. I would think the same legal recourse would apply whether the code came from an LLM or a copy-paste. This seems like a tangential issue to whether the LLM was trained on your code or not (though not training on your code obviously reduces the probability of the LLM spitting it back out near-verbatim). Personally, I don't see anything wrong with anyone using public code to build statistical models. And I think the pay-to-scrape models that Reddit, Xitter, and others are employing will help big tech build the "moat" they're looking for. Big tech is asking for AI regulation for similar reasons.

  • Information wants to be free.

  • Yep. There's a whole propaganda industry that rails against it (PragerU, Daily Wire, red-pillers, etc.), and right-wing states are banning universities from engaging in it and banning investment of state funds in companies that take DEI into account (even though it's pretty much just corporate lip service).

  • I just ask ChatGPT to review pull requests.

  • The media and people in general ignore non-disruptive protest. When protesting pollution, bringing motor vehicles to a halt is arguably a pretty good choice compared to, say, the Stonehenge protest (which I don't have a problem with either). Whether the optics are good is debatable. The media is mostly corporate-owned, and they'll try to make any protest that goes against their interests look bad anyway. Which is probably why they only cover disruptive protests.

  • I don't think there's a reason to try to get rid of Trump. I imagine he's easily controllable since he has no apparent ideology, and is just a greedy narcissist. So, money and praise should be enough.

  • I wonder if such a system could be designed to be privacy-preserving.

  • Doesn't sound much more complicated than invitation-only services. Most people wouldn't even really need to know the details of how it works.

  • It appears the doctor who co-wrote that book was a quack or grifter who associated himself with other grifters like Dr. Oz and The Doctors, and advocated "alternative health practices" that have no evidence of being helpful (and that sound absurd): https://en.wikipedia.org/wiki/Stephen_Sinatra

    For stuff like this, I usually try to find the most recent meta-analysis that looks reputable. For example: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9316578/

    If I understand it correctly, it says people with total cholesterol above reference levels have a 27% increase in risk of cardiovascular mortality, people with high LDL have a 21% increase, and people with high HDL have a 40% decrease in risk.
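    As a sanity check on how I'm reading those numbers, here's the arithmetic: a pooled relative risk (RR) above 1 means increased risk, below 1 means decreased risk, and the percent change is just (RR − 1) × 100. The RR values below are my back-calculation from the percentages above, not figures quoted directly from the paper:

    ```python
    # Convert a relative risk (RR) into a percent change in risk.
    # RR > 1.0 -> increased risk; RR < 1.0 -> decreased risk.
    def percent_change(rr: float) -> float:
        return (rr - 1.0) * 100.0

    # Back-calculated RRs matching the percentages above (illustrative only):
    print(round(percent_change(1.27)))  # high total cholesterol: +27%
    print(round(percent_change(1.21)))  # high LDL: +21%
    print(round(percent_change(0.60)))  # high HDL: -40%
    ```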

  • Maybe; I'm no expert. But I've seen a test showing a consumer water filter increasing microplastics by 1000%. That could just be that specific filter or filter type. I believe it was a Zero filter, which I think uses resin beads for ion exchange.

  • Filters are usually made out of plastic :)

  • I don't quite understand. Aren't saunas hot? Wouldn't they increase temperature differences?