This logic appeals to me, but I'm curious how it could work legally, as well as what the potential side effects might be. It seems likely that legal arguments would ensue over the intended use of content, and it doesn't seem like it should be illegal to use a created work in a new or unintended manner.
I think the overall goals are to encourage creative and academic work (which requires funding creators), discourage the centralization of knowledge (preventing leverage over and manipulation of the populace), encourage distrust of LLM output that lacks source references, and discourage overuse of generative AI. I'm sure there are more, but that's what comes to mind.
I'm half joking, but there's a decent bit of autism in our house, and he fits in pretty well. He gets overstimulated pretty easily, exhibits a lot of self-soothing behavior, and doesn't take cues well at all.
In all fairness, I grew up in a small town in a very red state, but the education system there proved better than those in larger, more progressive parts of the state. The education I received was likely an outlier rather than representative of the norm, but it did teach me that educators in an area don't necessarily mirror the rest of the population.
Thanks for confirming. I probably sounded too condescending, but I wasn't sure if it was a false memory.
I loved math as a kid, though, so I ran through the curriculum as fast as I could to get to the good stuff. I think having older siblings helped; it gave me a preview of more interesting material.
Well said. I personally enjoy using a systems-level language with a handful of functional programming features. I also enjoy the support for async runtimes and other concurrency primitives, like channels.
Rust allows me to get away from more boring (to me) languages (e.g. JS/TS, Java, Kotlin).
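Just to illustrate what I mean by channels, here's a toy sketch using std::sync::mpsc from the standard library (not any particular async runtime; the worker/sum example is made up for illustration):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Create a channel: tx is the sending half, rx the receiving half.
    let (tx, rx) = mpsc::channel();

    // Spawn a worker thread that sends squares back over the channel.
    let worker = thread::spawn(move || {
        for i in 0..5 {
            tx.send(i * i).expect("receiver hung up");
        }
        // tx is dropped when the thread ends, which closes the channel.
    });

    // rx.iter() blocks until the channel closes, and it plugs straight
    // into iterator adapters, which is the functional flavor I mean.
    let total: i32 = rx.iter().sum();
    println!("sum of squares: {total}");

    worker.join().unwrap();
}
```

The nice part is that the receiver is just an iterator, so channel output composes with map/filter/sum like any other sequence.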