Sand is everywhere. But you can’t refine it anywhere while keeping it economically viable.
Extraction and refining costs vary with purity, and a million other factors come into play beyond that.
Setting up a large-scale, sustained extraction operation requires many guarantees (geopolitical, logistical, and stability/security considerations among them).
Otherwise, all the world powers would be done strip-mining Africa and Antarctica by now.
The average consumer doesn’t care because they’ve already made the purchase. Most of them use whatever OS their machine comes preloaded with unless a more tech-inclined friend offers alternatives.
The OEMs do the caring, because the OEMs are the ones with the choice. And they notice this shit. So when the average consumer is buying a new machine, they might be offered alternatives to Windows (already happening with some OEMs, btw), and most customers will see an extra $200 (or whatever it costs nowadays) next to the Windows license, and a flat $0 next to the other option: Linux.
Now the filter is reversed, and only the ones who aren’t paying attention (assuming Windows is the default during checkout) or who actively want Windows will be paying for it.
The savvier ones may even wonder what the difference is, do some research to understand it, and buy knowing exactly what they’re getting into. Some will say “I’ll just pick the free OS and install Windows for free”, but even then, they may boot it up first out of curiosity.
And that’s what really matters: the exposure. Because people talk.
Not the OP of this post. The OOP of that screenshot on whatever platform they used.
It probably didn’t originate here.
The fact that it got here is evidence enough that the propagation strategy is viable, and the fact that it won’t spread as effectively here (though it still can, if people decide to manually repost/share it to other communities or platforms) does not contradict that.
The overzealous censorship is possibly a feature, not a bug: done deliberately by the OP to ragebait people who find it disagreeable, for a free engagement boost.
The meme/trend wouldn’t be so ubiquitous right now if it weren’t so successful at its own self-propagation, because it’s being naturally selected for.
Did you actually read anything I wrote or just skim every other line to confirm your biases and what you wanted to see?
Did you assume that coding is my only calling or something? I rap, write fiction and poetry, and I’m into philosophy. Wtf are you on about?
I wouldn’t be pissed about it if it meant nothing to me. Did you read what I actually wrote?
The first example I gave for an art prompt had an actual artistic premise: a galaxy spinning around the wick of a candle.
The thoughtless one I gave in contrast below that was “draw me a cat”.
The only difference between the first two prompts (in my original post) is the explicit behaviour constraint in the code assistant’s case, which you wouldn’t want for a freeform creative prompt, and the fact that the image generator can’t stop to ask for advice, afaik.
Ohhh. I think we’re both defending different hills! I’m not against the use of generative AI for purposeful creation. What I’m against is the delegation of critical thinking.
It’s the difference between:
“Implement this specific feature this specific way. Never disable type checking or relax type strictness, never solve a problem using trial and error, consult documentation first, don’t make assumptions and stop and ask for guidance if you’re unsure about anything”
“Paint me a photorealistic depiction of a galaxy spinning around the wick of a candle”
(That last one is admittedly my own guilty contribution to the slop soup, and my favourite desktop background for at least a whole year)
Versus:
“build me an e-shop”
“draw me a cat”.
The difference is oversight and vision. The first two are asking AI to execute well-defined tasks with explicit parameters and rules; the first example in particular offers the LLM an out if it finds itself at an impasse.
The latter examples are asking a prediction engine to predict a vague concept. Don’t expect originality or innovation from something that was forcibly constrained to pick from a soup made of prior art and then locked down, because that’s essentially what gradient descent does to a neural network during training: it reduces the error margin by restricting the possible solutions for any given problem to what already exists within the training set, which is also known as plagiarism.
Edit: a slight elaboration on the last part:
Neural networks trained with gradient descent will do the absolute minimum to reach a solution. That’s the nature of the training process.
What this essentially means is that effort scales with prompt complexity! A simple/basic prompt gets you the most generic result possible, because it allows the network to slide along the shortest path from the input tokens to a very predictable result.
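To make that concrete, here’s a toy sketch of plain gradient descent (entirely my own illustration, with a made-up 1-D loss function; nothing to do with a real image model): the optimiser slides downhill from wherever it starts and settles in the nearest basin.

```python
# Toy illustration: gradient descent on a made-up 1-D loss with two
# equally good minima, at x = 0 and x = 3, separated by a hump at x = 1.5.
# The optimiser just slides downhill and stops in whichever basin is nearest;
# it never produces a solution the loss landscape doesn't already contain.

def loss(x):
    # Zero at x = 0 and x = 3, with a local maximum in between at x = 1.5.
    return (x ** 2) * ((x - 3) ** 2)

def grad(x, eps=1e-5):
    # Central-difference numerical gradient; good enough for a toy example.
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

def descend(x, lr=0.01, steps=2000):
    # Vanilla gradient descent: repeatedly step against the gradient.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

for start in (-1.0, 0.5, 2.0, 4.0):
    end = descend(start)
    print(f"start={start:+.1f} -> settles at x={end:.3f}, loss={loss(end):.4f}")
```

With these starting points, the first two settle at 0 and the last two at 3. Nothing in between, nothing outside the landscape, just the shortest slide down to a predictable answer.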
I’m a senior full-stack developer with 15 years of experience and, more recently, a new tech lead at an AI startup. I’m definitely not attacking AI as a concept in general.
I work with AI agents all day, every day. That’s how I develop and plan our systems. It did not start that way. I was absolutely against the use of AI during development, but a few months back I needed the assistance because I developed carpal tunnel syndrome, so that’s what I automated: just the typing and the implementation of low-level logic, so that my wrists could heal. But do you know what stock AI agents do to code when not given proper guidance? Ask any real developer and they’ll tell you about vibe coding. I guarantee those are not going to be success stories.
I’m not just judging people for being lazy, because lazy people like me will innovate ways to stay lazy by inventing/optimising new shit that lets them keep being lazy. That’s a survival instinct and an evolutionary selection mechanism: minimising energy expenditure while doing the same thing as everyone around you is an evolutionary advantage.
No. What I’m judging them for is delegating their critical thinking capacity to an external entity and stunting their own cognitive growth (their literal reason for existing in the first place, their continuity mechanism for staying in the gene pool, and their sole means of getting better at being long-term lazy) by being short-term lazy. Make sense?
Now on to generative AI (for the multimedia substrate):
The vast majority of people you speak of are now polluting the collective “training set” with diluted slop distilled from all the art historically created thus far, because the content-generation equation went from X people creating Y novel pieces of art per year to X models creating Y million images per day, all thanks to a handful of idiots with more greed/money than common sense. That diluted pool is ever-expanding, growing geometrically, and burying actual novelty with each new image Susan generates and shares on her new “Katz Rule” Instagram profile.
The thing is: the next model will be trained on that averaged set, and the next, and the next, each generation more conformist than the last. And that set is what we’re stuck with for new inspiration (and future models) now, because everyone is looking at screens for inspiration, not at mountains or rivers, or even the real stars in the sky at night, because we ruined that too.
All while we’re doing the things you just mentioned.
All thanks to a few assholes with more selfishness than common sense, chasing unlimited quarterly growth in a very limited space that’s closing in around us fast.
You guys don’t wash?