
  • Yeah, I mean that's true of any social space though: if you say something agreeable, you're (definitionally) going to get agreement. If you view upvoting as consensus building (i.e. "I like this" / "I agree"), it's just a more concise representation of a reply saying as much.

    But that is scrutable.

    What becomes a problem is content getting surfaced or buried on non-scrutable metrics (typically engagement) — ragebait isn't anything new, online or off, but when algorithms promote content that gets engagement, ragebait is naturally surfaced in higher proportions. Often such platforms completely bury content, or make it impossible to find anything not explicitly surfaced (YouTube search is widely known to be terrible here, and Facebook aggressively buries comments on posts).

    WRT communities, there definitely are instances and communities with very different rules, values and expected behaviors. Federation allows communities to pick and choose what other communities they think they'll get along with. This includes banning individual remote users if they don't follow local rules, or defederating entirely if other instances have drastically different values.

    The federation model as described does well by my metrics. I can pick an instance that shares my values, participate in communities (in the Lemmy technical sense) that share them as well — and largely avoid or choose not to engage with people from communities (in the instance sense) that I don't share values with. This is extending "freedom of association" to online spaces in a way that large platforms largely cannot and willingly do not enable.

  • I would say scrutability in itself doesn’t automatically make an algorithm good. “Demote everything that doesn’t support Trump” is perfectly scrutable but leads to a skewed discussion.

    This is mostly getting into normative vs. descriptive philosophy. If it's scrutable that a site/instance is demoting everything non-aligned with a worldview, then on the Fediverse it's users' choice to leave (and part of 'community values').

    In fact I would say any content boosting algorithm at all leads to skew and what you call sycophancy. That includes upvotes/downvotes that affect what posts users see first. So I would get rid of all that stuff and just show purely chronologically.

    To some degree, yes. New Reddit is particularly bad about this: it actively buries unpopular replies (and goes further, using more than just upvotes). Software like Lemmy is better — you can easily set Sort by New or Sort by Top as the default, and there's no 'Karma' system that propagates across the site.

    Sycophancy is a human trait, so it'll always emerge in social systems; but normatively, our systems should not cater to these negative traits (e.g. Twitter).

  • For algorithms, anything that isn't a straightforward, scrutable way of presenting user content is bad, IMO. Algorithms that promote engagement, monetization, and sycophancy are bad.

    As for a community of communities, that's how the Fediverse works — you have a home instance which communicates with other instances. An instance (nominally) has rules and expected conduct, and is often centered around a particular interest (game dev, programming, cities or countries, etc.), and these instances then interact with each other.

    Having home instances with shared values and a subset of the entire userbase allows for recognizing and connecting with other "local" users, the same way people trust their immediate neighbors more than random people from the next city over. It helps form webs of trust and establish natural networks. This is how human society functioned until very recently — it's what the brain evolved to do.

    We can see the consequences of systems that don't respect that fact: sites that try to cater to everyone and put us all in the same tent destroy social regulation. You cannot possibly hope to explain yourself to tens of thousands of angry people on the Internet, nor should anyone be exposed to such vitriol.

  • It's not the point of the article, but I think it nonetheless speaks to the power that the community-of-communities model provides.

    Algorithmic content surfacing models are the primary rot in online interaction; all-encompassing sites are another cause. Letting people join communities with shared values, and letting those communities collectively decide who they interact with, has been the fundamental working model of human societies since prehistory.

  • This has been an extolled benefit of the new Hall-effect/TMR keyboard switch designs.

    Because they report a continuous activation level, you can define in software where in the key travel the "press down" signal fires — including releasing the press the moment the key stops traveling down, and re-firing it the moment travel reverses downward again — effectively eliminating pre-travel.

    I've heard these boards have even started getting banned in competitive play. Caveat emptor: I'm not into the comp gaming scene.
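    The direction-based press/release behavior described above ("rapid trigger") can be sketched roughly as follows. This is a minimal illustration of the idea, not any vendor's firmware; the class name and thresholds are made up:

```python
class RapidTrigger:
    """Fire press/release events based on direction of travel, not a fixed actuation point."""

    def __init__(self, press_delta=0.1, release_delta=0.1):
        self.press_delta = press_delta      # downward travel (mm) needed to fire a press
        self.release_delta = release_delta  # upward travel (mm) needed to fire a release
        self.pressed = False
        self.extreme = 0.0                  # shallowest point since release / deepest since press

    def update(self, depth):
        """depth = current key travel in mm (0 = rest). Returns 'press', 'release', or None."""
        if not self.pressed:
            self.extreme = min(self.extreme, depth)
            if depth - self.extreme >= self.press_delta:    # moved down far enough: press
                self.pressed, self.extreme = True, depth
                return "press"
        else:
            self.extreme = max(self.extreme, depth)
            if self.extreme - depth >= self.release_delta:  # reversed upward far enough: release
                self.pressed, self.extreme = False, depth
                return "release"
        return None
```

    Fed a stream of depth samples, a key that bottoms out, lifts slightly, and dips again fires press/release/press without ever returning to rest — which is what eliminates pre-travel.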

  • Same.

    I was sure the emotional manipulation tactic would be extremely effective. Guess it was a little too blatant, even for the general public.

  • The purpose of a system is what it does.

  • Because people want fancy animations, images, videos, stylized text, etc. And the easiest way to accomplish that is to just use a browser under the hood.

  • My experience as well.

    I've been writing Java lately (not my choice), which has boilerplate, but it's never been an issue for me because Java IDEs have all had tools to eliminate it for a decade-plus. Class generation, main, method stubs, default implementations, and interface stubs can all be done easily in Eclipse, for example.

    The same goes for tooling around (de)serialization and class/struct definitions. I see that touted as a use case for LLMs, but tools have existed^[e.g. https://transform.tools/json-to-java] for doing that since before LLMs, and they're deterministic and computationally free compared to neural nets.
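    As a toy illustration of what such deterministic generators do — a sketch of the general idea, not the linked tool; the function name and type map are made up:

```python
import json

# Map JSON value types to Python annotations (type() lookups are exact,
# so bool and int stay distinct despite bool subclassing int).
PY_TYPES = {str: "str", bool: "bool", int: "int", float: "float"}

def dataclass_from_json(name, sample):
    """Emit a @dataclass definition from one sample JSON object, deterministically."""
    fields = json.loads(sample)
    lines = ["@dataclass", f"class {name}:"]
    for key, value in fields.items():
        lines.append(f"    {key}: {PY_TYPES.get(type(value), 'object')}")
    return "\n".join(lines)

print(dataclass_from_json("User", '{"name": "ada", "age": 36, "admin": true}'))
```

    The same input always yields the same class definition — no inference step, no wrong answers to review.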

  • Because this comes up so often, I have to ask, specifically what kind of boilerplate? Examples would be great.

  • But then your junior colleagues will eventually code circles around you...

    Probably not. I just ran into a dude who suggested using LLMs to fix misplaced braces in source code.

  • It will undoubtedly have a higher failure rate; however, in my experience WD's enterprise drives are extremely reliable regardless.

    Once these hit the surplus market in ~5 years they'll be neat (if we get them in SATA) for ZFS RAID arrays; faster rebuild speeds will be nice.

  • Not a requirement, but a preference.

    Onlyoffice looks like it might be good, I'll give it a try.

    Can't stand LibreOffice; it feels much like Office 2007, which was the worst version I ever had to use — fixed with 2013 and 2016, but Libre hasn't caught up.

    Edit: Found FreeOffice, which looks to have much closer parity with MS Office. I don't have a problem buying perpetual software licenses in these situations. I'd prefer FOSS, but productivity software has to be conducive to getting work done.

  • There are really only two programs that make moving to Linux very problematic for me: Photoshop and Word.

    At least with Word I can ultimately just sequester it into a VM, or learn a different document program if push comes to shove (RIP all my workflows for citations and templates).

    But PS is pretty much non-negotiable: it needs GPU acceleration in a native environment to run well, and there just aren't any alternatives that can do what PS does — I need real channel support (painting on channels, copying between them per layer, actual alpha support instead of naive transparency) and more. As much as I hate Adobe, PS is one of those tools I just know intuitively; all the texture and photo manipulation work feels entirely natural, and I don't think I'll ever find that again.

    So, if Linux people can get it working through Wine, it would be a huge relief to finally leave the Microslop ecosystem.

  • It's been interesting (though mostly I feel bad for the people being exploited by these AI companies) to see how this manifests in some highly clustered ways. Angela Collier posted a video several months ago that covers almost exactly the same kind of AI-physics posting in detail.

    https://youtu.be/TMoz3gSXBcY

    The whole video is great and relevant, however if you're strapped for time then just start at 24:20 and within a couple minutes you'll see examples very similar to this OP.

  • The other reply shares much of my thinking: just as psychoactive drugs can trigger episodes, so can interacting with sycophantic AI chatbots. If you are seeing a professional for help, I encourage you to share with them what you have shared with us today.

    By your own admission, and the nature of psychosis, I don't think further engagement is going to do any good.

  • This is basically a r/LLMPhysics post escaping containment.

    You are witnessing AI psychosis manifest, unfortunately.

  • This question already has answers here:

    [Unrelated question]

    [Question from 7 years ago that is no longer functional]

    Closed 1 year ago.