
  • I agree with you completely, however because ChatGPT is $10/month (last I heard?) and a licensed psychologist is circa $120 per 30-minute session, I sadly know what most people are going to choose.

  • Fair enough, but that does make the treason laws pretty damn pointless, as the US hasn't technically declared war on any nation since the Axis powers in World War 2. They're all just 'special military operations' made by executive order or congressional resolutions. And seeing as the Constitution explicitly states that Congress has the 'sole power to declare war', I'd say the rules are quite flexible, rather than legally rigid - and there's been a lot more Constitutional flexibility lately than rigidity.

  • NSFW/NSFL/Spoiler tag for image next time please. As much as I support the right to freedom and self-determination of Palestine, and as appalled as I am by the ongoing genocide, I don't want to see, nor enjoy seeing, dead toddlers in my main feed.

    I'm sure unspoilered/untagged posts like this will make others block you or the community entirely.

  • If they had good reason to suspect that some of the election-denying groups they communicated with contained foreign nationals (especially from Iran, Russia, or China, as those are officially listed as 'US Foreign Adversaries' under the Code of Federal Regulations), and then shared the data with them anyway, it could well be prosecuted as treason.

    Won't happen regardless though, because King Treason is the president.

  • It's hard to be optimistic when Trump keeps asking Ukraine to simply bow to all Russia's demands (why won't you just do what my buddy Vlad says), and it goes nowhere, because that's not a negotiation.

    Zelensky's planned talk:

  • All the instructions I've seen online say those pepper spray cans are intended for use 6-12 feet away (depending on brand), and absolutely no closer than 4 feet to minimize risk of permanent eye damage (from pressure) and blowback. The fucker in the photo seems to be trying to cause blindness.

    Looking at the Federal Bureau of Prisons guidelines, they seem to treat federal prisoners with more care and concern than free citizens (if you dare protest): https://www.bop.gov/policy/progstat/5576_004.pdf

  • Would be pretty pointless. They just stop trading under the damaged trading name, register a new name, and are up and running tomorrow, slate wiped clean. I believe a similar practice is nicknamed 'phoenixing' in the West.

  • Does this not fly in the face of the First Amendment - why are conservatives not mad about this as well?

    Those shot seem to have been protesting peacefully, with no weapons or military protective gear, on public lands; then ICE walked into them and pulled a protester away, intentionally escalating the conflict, then fired into a protester's face from about 2 ft away.

    Insane that there is still no general strike over ICE (and more).

  • Great to read of an EU leader recognizing the importance of stepping up and taking action.

    They should at least build the agreements, processes, protocols, comms channels and so on immediately, even if they don't go the full way to assembling the army. It'll probably need to be used sooner rather than later, and will need to respond quickly to whichever aggressor nation has provoked open conflict.

  • Tbh I don't think Microsoft's fault-rate has actually gotten noticeably higher post-AI.

    They were already putting out bad patches causing widespread issues with regularity for well over a decade. They slowly transitioned from "customer experience is key" under Ballmer to "move fast and break things" under Nadella.

    I do love what Microsoft has been doing for Linux adoption though - more slop, please!

  • That's a false dichotomy. Reasonable responses are framed as either "kid has low blood sugar so I'm packing him a banana", or "sorry, we'll beat him with jumper cables". There's like 50 reasonable responses between those extremes.

    Do you think that if you were teaching a class and one of the students punched you in the face so hard that they broke your expensive prescription eyewear, that you would actually just dust yourself off and go, "oh dear, you poor thing - are you acting out due to low blood sugar? I can go get you a banana".

    You really think that's a reasonable response for any human?

  • Step one is to make it a treaty. Step two is accountability.

    Can't expect countries to obey laws and treaties that don't exist.

  • I chuckled.

    I also feel like it wouldn't have been hard to mirror-flip the baby video layer so they were sitting roughly over the US.

  • The bigger issue for them is being built on time, as they have tight contracts with the hyperscalers that allow them to simply withhold their interval payments, or even pull out of the contracts entirely, if delivery dates are delayed.

    They're bespoke too - which is why they're getting 'AI datacenter' builds instead of approaching existing datacenters. AI racks require up to a megawatt per rack. That's insane. Their power systems have to be custom designed and built by UPS (uninterruptible power supply) vendors and power companies.

    https://blog.se.com/datacenter/2025/10/16/the-1-mw-ai-it-rack-is-coming-and-it-needs-800-vdc-power/

    Yes, they could be pivoted away from AI to host 'something else', but that won't help the companies that built them get paid, because tenants would only be using a small fraction of their power delivery, and the $20,000 AI GPUs have pretty limited use-cases. It would be a massive oversupply issue, forcing all the datacenters' hosting prices to drop drastically just to get any businesses into their tenancies. That would send those hosting companies (which are up to their gills in loans) under. They're the ones taking the big risks on AI - not Meta/Google/MS/etc.
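    To put the 'megawatt per rack' figure in perspective, here's a rough back-of-envelope sketch using the numbers from the linked Schneider Electric post (1 MW per rack, 800 VDC distribution). The 'legacy rack' comparison figures (~10 kW on 415 V) are my own illustrative assumptions, not from the article:

    ```python
    # Back-of-envelope check of the AI rack power figures discussed above.
    # Assumption: a 1 MW rack fed at 800 VDC, per the linked Schneider Electric post.
    rack_power_w = 1_000_000   # 1 MW per AI rack
    bus_voltage_v = 800        # proposed 800 VDC distribution

    current_a = rack_power_w / bus_voltage_v
    print(f"Current per AI rack at 800 VDC: {current_a:.0f} A")

    # Hypothetical contrast: a traditional ~10 kW rack on 415 V draws far less.
    legacy_power_w = 10_000
    legacy_voltage_v = 415
    print(f"Legacy rack current: {legacy_power_w / legacy_voltage_v:.0f} A")
    ```

    Over a kiloamp per rack is why this gear can't just be dropped into an existing datacenter's power infrastructure.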

  • Nice job on the Lemmy shout-out, Reg. Every bit of publicity helps people move off the mainstream surveillance-capitalism dumpsterfires (and onto our nicer dumpsterfire).

    One example is the AntiAI subreddit, but there is also a Lemmy instance devoted to it, called Awful.systems. (For those unfamiliar with it, Lemmy is a tool for creating news aggregator and discussion sites – think Reddit or the recently revived, but LLM-infested, Digg – based around the same ActivityPub protocols used by Mastodon and the rest of the Fediverse.)

  • Cognitive bias affects all classes and education levels. But in my experience conservatives are the most flagrant, "the only moral abortion is my abortion" for example.

  • Thanks for your story. I'm a misfit nerd also, so it resonates with me.

    I'm sure it's something we can fix at large scale with government-funded mental health care, social programs, funding for community centers, and integrating mental health and social science studies into education, amongst many possible solutions that would get us to a society where everyone can find somewhere to fit in, and everyone has better options than the snake-oil salesmen when they seek help or are angry at a personal situation.

    Unfortunately we presently live in the era of the technofeudalists, and eyeballs on ads and keeping users consuming are pretty much all they care about - not only do they not care if the media they promote and put ads alongside happens to be divisive alt-right hate media, they also benefit from the conservative (anti-regulation, small government, anti-tax) parties being pushed to power by these groups. It's a frustrating feedback loop that we need to break free of, and governments worldwide seem ill-prepared to broach it.

    Ugh. For context, I'm currently trying to steer a younger family member out of the alt-right pipeline, and I'm angry about how hard it is to beat constant access to thousands of videos and talking points pushed into their feed daily, with the couple of hours I see them a week to attempt to 'deprogram' them, without coming across as preachy or putting them off learning alternative viewpoints (evidence-based reality). It's a struggle.

  • Asking for sources is always welcome with me.

    Here's a deep dive from Ed Zitron into the whole AI/LLM industry that details the heavy investment from several key banks (Deutsche Bank being one), and the shrinking finance availability from traditional means (bank loans, hedge funds, managed funds). It's long but it's really worth a read if you have a spare hour or so: https://www.wheresyoured.at/the-enshittifinancial-crisis/

    A glaring tell that I don't recall him highlighting is that the hyperscalers have largely outsourced the risk of AI investment to others. Meta, Google, and Microsoft are making comparatively small bets on AI - they're using cash assets from profits on their other business models, which are still significant (measured in low billions) but don't require them to take loans or leverage themselves. This means they are playing it very cautiously, all while shoving AI into all their products to make it seem like they're all-in and it's 'the next big thing', which is helping their stock prices in the investor frenzy.

    Most of the investment capital required for the AI boom is going into hardware, datacenters, and direct investment in the software development - and that's mostly being avoided by the big guys. This lets them minimize risk and still have a decent win if it takes off. Conversely, if/when the bubble bursts they'll still take a hit, but they'll also still be making money via other streams, so it'll be a bump in the road for them - compared to what will happen to OpenAI, Anthropic, Stability, the datacenters, and their financiers. https://archive.is/WwJRg (NYTimes article).

  • All this bullshit is for line go up. And it's mostly working, so far.

    However, the bankers heavily involved in financing AI datacenters have become nervous and started approaching insurance firms for coverage in case the projects fail. And the hedge funds have had low, zero, or negative ROI for the last ~4 years due to the prior failures of the Metaverse and NFTs, and now AI not paying off yet. So new funds are drying up on two fronts, and if they don't magically become profitable in the next year then the line is gonna go down, hard.