The post accurately copies the article's headline without editorialising.
The article itself is shit though.
Internally: "YESYESYESYESYESYESYES"
Externally: "Cool, let me know if you need any help"
Laws can only effectively bind international companies if they're applied internationally. So long as I can just move the problem out of your jurisdiction, that jurisdiction is little more than an inconvenience.
On the other hand, just doing nothing because it won't work anyway isn't viable either. I guess the best thing to hope for would be for more countries to follow suit until they're running out of places to dodge to.
are the posting hours normal?
Hey, no judging my sleep schedule! It's just arbitrary times when biological necessity triumphs over all the fun things I could do while awake!
Serious reply:
On the individual level we can maybe fortify against the reasons that might make someone want to extract that value.
On the collective level, we should do something about the mechanisms that incentivise that malicious extraction of value in the first place, but that's a whole different beast...
Being a principled conscious consumer makes you a less likely target for advertisement
Agreed, though we should also stress that "less likely" or "unlikely" doesn't mean "never" and that we're not immune against being influenced by ads. That's a point I've seen people in my social circles overlook or blatantly ignore when pointed out, hence me emphasising it.
media literacy
This is probably one of the most critical deficits in general. Even with the best intentions, people make mistakes, and it's critical to be aware of that and able to compensate for it.
Unfortunately, be selective with your trust.
Same as media literacy, I feel like this is a point that would apply even in a world where we're all humans arguing in good faith: Others may have a different, perhaps limited or flawed perspective, or just make mistakes — just as you yourself may overlook things or genuinely have blind spots — so we should consider whose voice we give weight in any given matter.
On the flipside, we may need to accept that our own voice might not be the ideal one to comment on something. And finally, we need to separate those issues of perspective and error from our worth as persons, so that admitting error isn't a shame, but a mark of wisdom.
Be authentic and genuine
That's the arms race we're currently running, isn't it? Developers of bots put effort into making them appear authentic—I overheard someone mention that their newest model included an extra filter to "screw up" some things people have come to consider indicators of machine-generated texts, such as these dashes that are mostly used in particular kinds of formal writing and look out of place elsewhere.
If at all, people tend to just use a hyphen instead - it's usually more convenient to type (unless you've got a typographic compulsion to go that extra step because the hyphen just looks wrong). And so the dev in question made their model use fewer dashes and replace the rest with hyphens to make the text look more authentic.
I wanted to spew when I heard that, but that's beside the point.
So basically, we'd have to constantly be running away from the bots' writing style to set ourselves apart, even as they constantly chase our style to blend in. Our best weapon would be the creative intuition to find a way of phrasing things other humans will understand but bots won't (immediately) be able to imitate.
Being creative on demand isn't exactly a viable solution, at least not individually, and coordinating on the internet is like herding lolcats, but maybe we can work together to carve out some space for humanity.
I think CloudFlare is the direct result of the enshittification of development work.
I think it's also a symptom of assholes fucking it up for everyone. You wouldn't need the DoS-protections or security tools if there were no attackers.
Don't know a solution for that, unfortunately. I think you have a point about inadequate development work, but I'm not sure it's the whole puzzle.
Suddenly yanking it out might cause a lot of stuff to collapse, but at least some parts would still be able to operate without it in the long term. Maybe one of the blocks in the upper two stacks?
I sometimes wonder how prevalent bots are on Lemmy. On one hand, the barrier to entry might be lower and the effectiveness of bans harder to gauge. On the other, I'd think we're a smaller, less attractive target.
Either way, the potential to accuse dissenters of being bots or paid actors is a symptom of the general toxicity and slop spilling all over the internet these days. A (comparatively) few people can erode fundamental assumptions and trust. Ten years ago, I would've been repulsed by the idea of dehumanising conversational opponents that way (which may have been just me being more naive), but today I can't really fault anyone.
In terms of risk assessment (value÷effort), I'm inclined to think something with the reach of Ex-Twitter or reddit would be a more lucrative target, and most people here actually are people—people I disagree with, maybe, but still humans on the other side of the screen. Given the niche appeal, the audience here may overall be more eccentric and argumentative, so it's easy to mistake genuine users for propaganda bots instead of just people with strong convictions.
But I hate that the question is a relevant one in the first place.
Punk ain't no religious cult
Punk means thinking for yourself
You ain't hardcore when you spike your hair
When a jock still lives inside your head

Nazi punks, Nazi punks, Nazi punks: fuck off!
Nazi punks, Nazi punks, Nazi punks: fuck off!
– Dead Kennedys
and both the base URL needed to end with a slash, and each path needed to begin with one.
...so the respective library functions each sanitize the input to meet their requirement, prepending and appending slashes as needed?
...right?
Every reasonable programmer would assume that this is a mistake because the final path would end up with two slashes, but the library actually required that.
For fuck's sake
I'm guessing they have two instances of string validation, whose respective developers each helpfully decided "I'll make sure there's a slash, idk if the other end checks it", and then a third function that trims the slashes from the parts and concatenates them with a slash as separator.
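For what it's worth, that third, sane step is trivial to write. A minimal sketch (hypothetical helper, not the library in question) that tolerates slashes on either side and always produces exactly one separator:

```python
def join_url(base: str, path: str) -> str:
    """Join a base URL and a path, tolerating slashes on either side.

    Trims trailing slashes from the base and leading slashes from the
    path, then joins with exactly one slash, so callers don't have to
    agree on a convention.
    """
    return base.rstrip("/") + "/" + path.lstrip("/")

# Any combination of slashes collapses to the same result:
# join_url("https://example.com/api/", "/users")
# join_url("https://example.com/api", "users")
# both yield "https://example.com/api/users"
```

Which makes it all the stranger that the library apparently *required* the double slash instead of normalising it away.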
But, man, is this ever stupid.
But also, what's wrong with having any of those things?
Nothing. I'd take more good games instead of fewer hyperrealistic ones, if I had to choose, but those features themselves aren't anything bad.
The compulsion that every game has to have them, that's what's annoying, particularly when it comes at the price of putting developers under pressure.
I'd argue it's better to have those things with less developer crunch.
If we are to have them at all, yes, less crunch is better.
We don't need children to form "attachments" to video game franchises. That just breeds loyalty to corporations.
The loyalty to corporations is a bad thing, absolutely, but I can also see how forming attachments can be nice. I very much enjoy my attachments to various movie or game franchises.
The shitty part is that these franchises are linked to corporations. I like Star Wars, but fuck Disney.
We need games that are developed with love and care by developers who treat their employees and customers humanely. Whatever that looks like, we want that.
Absolutely. Grand games should get the time and care they warrant. Commercial pressure is poisoning game development and has been for way too long already.
I sympathise with your username. I've picked up a habit of using dashes too, but because LLMs are apparently trained on the same writing style that I'm compulsively imitating, that habit tends to be mistaken for an indicator of LLM slop—an understandable confusion, given that most people don't casually use it, but I tend to fall into linguistic patterns with little regard for the context I'm writing in. I'll accidentally use informalities in professional writing as well, but whereas I'll make an effort to correct my tone in professional contexts, I just can't be arsed to apply the same diligence in a casual one.
Assigned Male At Birth
When used in the context of gender dysphoria, it emphasises that the gender the person is dysphoric about isn't a fixed trait ("born male") but rather a property that has been assigned to them and can be rectified. It also conveniently avoids mentioning physical traits and sidesteps the complexity of Intersex people that don't neatly fit the male/female dichotomy but usually get assigned one or the other anyway.
And finally, it's just a convenient and pronounceable shorthand.
Its counterpart is AFAB, Assigned Female At Birth. Neither has anything to do with ACAB.
It eternally seemed to be in a state that almost worked, but not quite: no matter what model or iteration they went to, no matter how much budget they allocated, when it came down to the specific facts and figures, it would always screw up.
This is probably the biggest misunderstanding since "Project Managers think three developers can produce a baby in three months": Just throw more time and money at AI model "development" for better results. It presupposes predictable, deterministic behaviour that can be corrected, but LLMs aren't deterministic by design, since that wouldn't sound human anymore.
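A toy illustration of why the same prompt can yield different answers on each run: output tokens are drawn from a probability distribution rather than picked deterministically. This sketch (made-up logits, nothing like a real model) samples from a temperature-scaled softmax:

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Draw one token from a softmax over logits (toy example, not a real LLM)."""
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    highest = max(scaled.values())
    # Subtract the max before exponentiating for numerical stability.
    weights = {tok: math.exp(s - highest) for tok, s in scaled.items()}
    total = sum(weights.values())
    tokens = list(weights)
    probs = [weights[tok] / total for tok in tokens]
    return random.choices(tokens, weights=probs, k=1)[0]

# The same "prompt" can yield different continuations run to run:
logits = {"cat": 2.0, "dog": 1.9, "fish": 0.5}
print({sample_token(logits) for _ in range(50)})  # likely more than one token
```

Cranking the temperature toward zero makes the sampling nearly deterministic (always the top token), but vendors keep it above zero precisely because varied output sounds more human, which is exactly the property that makes "just fix the wrong number" so hard.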
Sure, when you're a developer dedicated to advancing the underlying technology, you may actually produce better results in time, but if you're just the consumer, you may get a quick turnaround for an alright result (and for some purposes, "alright" may be enough) but eventually you'll plateau at the limitations of the model.
Of course, executives universally seem to struggle with the concept of upper limits, such as sustainable growth or productivity.
I'm a data analyst and primary authority on the data model of a particular source system. Most questions for figures from that system that can't be answered directly and easily in the frontend end up with me.
I had a manager show me how some new LLM they were developing (which I had contributed some information about the model to) could quickly answer some questions that usually I have to answer manually, as part of a pitch to make me switch to his department so I can apply my expertise for improving this fancy AI instead of answering questions manually.
He entered a prompt, got a figure that I knew wasn't correct, and I queried my data model for the same info, with a significantly different answer. Given how much said manager leaned on my expertise in the first place, he couldn't very well challenge my results and got all sheepish about how the AI was still in development and all.
I don't know how that model arrived at that figure. I don't know if it generated and ran a query against the data I'd provided. I don't know if it just invented the number. I don't know how the devs would figure out the error and how to fix it. But I do know how to explain my own queries, how to investigate errors and (usually) how to find a solution.
Anyone who relies on a random text generator - no matter how complex the generation method used to make it sound human - to generate facts is dangerously inept.
I feel like we ought to expand that traditional quote about the last fish being caught, the last tree being cut and all: When the last emotion is commodified, you will realise that you can't buy happiness.
a super giga 1000€ license for more than 16 Core CPUs
Year of the Linux Desktop! Any day now... any day... huffs copium
Not legally, since that change of statutes would require consent from all member states. Yeah, they kinda shot themselves in the foot by not preparing any response for rogue members.
I still think it's a decent entry choice. I won't touch it myself anymore, personally, but Canonical is still better than Microslop. That bar is set so low that even snap can clear it.