Posts: 3 · Comments: 84 · Joined: 3 yr. ago

  • (1) Network effects. People want to use social media that everyone else is using. Once a site achieves a critical mass of users it becomes the obvious choice to join. It also becomes difficult to leave because if you have built up a personal network on most sites, you can’t take it with you.

    (2) Convenience. Most sites don’t require a lot of effort to use. In the past few years this one has surprised me a bit: the level of effort most people are willing to put into trying a new site is basically zero. Using something like Lemmy requires you to read a few paragraphs and make a decision about a home instance. That is too much effort for a lot of people.

  • In Texas they are using personal data collected from ALPRs to accuse women of getting abortions. There were also concerns that personal data collected by period tracker apps would be used to accuse women of getting abortions. You could be doing something that suddenly becomes illegal, and then those data could be used to harm you.

    ICE is using facial recognition and a database of questionable veracity to accuse legal residents of being illegal immigrants. They are collecting facial data of protestors and, apparently, using it to compile a list of domestic "terrorists". You could be doing absolutely nothing illegal and the state could use your personal data to harm you.

    Social media companies use data they collect about you to try to get you addicted to their products because you are easier to manipulate when you are addicted. They know a lot of their products have harmful impacts on people, but they don't care because they make more money that way.

  • I can sort of understand this instinct. I am not opposed to new people using Linux, but I think the obsession with “growth” is the wrong way to think about software tools.

    The way most companies make adoption of their software system grow is by making it more convenient to use, then exploiting network effects to force more users on to their platform. For the vast majority of people “convenient to use” means a locked down environment where they have little or no control and don’t have to make technical decisions.

    Right now to use a Linux OS you are going to have to do a little bit of learning and make some decisions. The requirement that you actually think about an OS for a few minutes acts as a significant barrier for a lot of people, but removing that barrier results in a product that does not allow the user to control their software. Which I think would be bad.

  • The people I have encountered who claim ai helps with creativity always seem to assume they are naturally “creative” but are held back by a lack of technical talent. Which is why they sometimes call actual artists “gatekeepers”.

    I think this whole belief about ai helping people be creative comes from the belief that creativity and skill are separate things. But if you have ever tried to practice an art you would know they are part of the same whole. You can’t be a creative painter without understanding how to represent a perspective, or understanding how light and shadow interact, etc. You build those understandings through practice. There are no shortcuts.

  • Last attempt, I swear.

    By digressing to abstraction, good people can and do justify building tech for immoral purposes. It is irrelevant that tech is not inherently good or bad in cases where it is built to do bad things. Talking about potential alternate uses in cases where tech is being used to do bad things is just a way of avoiding the issue.

    I have no problem calling Flock’s or Facebook’s tech stacks bad because the intentions behind the tech are immoral. The application of the tech by those organizations is for immoral purposes (making people addicted, invading their privacy, etc.). The tech is an extension of bad people trying to do bad things. Commentary about tech’s abstract nature is irrelevant at that point. Yeah, it could be used to do good. But it’s not. Yeah, it is not in-and-of-itself good or bad. Who cares? This instantiation of the tech is immoral, because its purposes are immoral.

    The engineers who help make immoral things possible should think about that, rather than the abstract nature of their technology. In these cases, saying technology is neutral is to invite the listener to consider a world that doesn’t exist instead of the one that does.

  • Saying “tech is neutral” is a dodge. People say that to avoid thinking about the ethics of what it is they are doing.

  • At no point in this conversation have I ever said that tech in an abstract sense is inherently good or bad. The point that I am making, and this is the last time I will make it, is that it is not interesting to talk about the ethics of some technology in the abstract in cases where the tech as it is actually implemented is clearly bad.

  • I don’t see how that is the case.

    It is literally the case. People who have literally made tools to do bad things have justified it by claiming that tech is neutral in an abstract sense. Find an engineer who is building a tool to do something they think is bad, and they will tell you that bromide.

    OpenCV is not, in itself, immoral. But OpenCV is, once again, actual tech that exists in the actual world. In fact, that is how I know it is not bad: I use the context of reality—rather than hypotheticals or abstractions—to assess the morality of the tech. The tech stack that makes up Flock is bad; once again, I make that determination by using the actual world as a reference point. It does not matter that some of the tech could be used to do good. In the case of Flock, it is not, so it’s bad.

  • As I said before: In a conversation about technology as it actually exists, talking about potentials is not interesting. Yes, all technology has the potential to be good or bad. The massive surveillance tech is actually bad right now in the real world.

    The issue with asserting that technology is neutral is that it lets the people who develop it ignore the impacts of their work. The engineers who make surveillance tech make it, ultimately, for immoral purposes. When they are confronted with the effects of their work on society, they avoid reckoning with the ethics of what they are doing by deploying bromides like “technology is neutral.”

    Example: Building an operant conditioning feedback system into a social media app or video game is not inherently bad; you could use it to reinforce good behaviors and deploy it ethically by obtaining the consent of the people you use it on. But the operant conditioning tech in social media apps and video games that actually exists is very clearly and unambiguously bad. It exists to get people addicted to a game or media app so that they can be more easily exploited. Engineers built that tech stack out for the purpose of exploiting people. The tech, as it exists in the real world, is bad. When these folks were confronted with what they had done, they responded by claiming that tech is not inherently good or bad (this is a real thing social media engineers really said). They ignored the tech, as it actually exists, in favor of an abstract conversation about some potential alternative tech that does not exist. The effect of which is that the people doing harm built a terrible system without ever confronting what it was they were doing.

  • “Technology is neutral” is a bromide engineers use to avoid thinking about how their work impacts people. If you are an engineer working for Flock or a similar company, you are harming people. You are doing harm through the technology you help to develop.

    The massive surveillance systems that currently exist were built by engineers who advanced the technology for that purpose. The scale and totality of the resulting surveillance states are simply not possible without the tech. The closest alternatives are Stasi-like systems that are nowhere near as vast or continuous. In the actual world the actual tech is immoral, because it was created for immoral purposes and because it is used for immoral purposes.

  • If you are in a discussion about the development and deployment of technology to facilitate a surveillance state, then saying “technology is neutral” is the least interesting thing you could possibly say on the subject.

    In a completely abstract, disconnected-from-society-and-current-events sense it is correct to say technology is amoral. But we live in a world where surveillance technology is developed to make it easier for corporations and the state to invade the privacy of individuals. We live in a world where legal rights are being eroded by the use of this technology. We live in a world where this technology is profitable because it helps organizations violate individual rights. If you live in the US, as I do, then you live in a world where federal law enforcement agencies have become completely contemptuous of the law and are literally abducting innocent people off the street. They use the technology under discussion here to help them do that.

    That a piece of tech might potentially be used for a not-immoral purpose is completely irrelevant to how it is actually being used in the real world.

  • I do a compressed schedule. The biggest benefit to me is that I commute less. The extra day off is nice, but I often find I don’t do much with it because I am so tired from work.

  • They probably would have just fired him anyway, which would have opened the door to some immoral lawyer.

  • Every day I wake up and think to myself “today is the day I will form a strong opinion about systemd” but it never happens.

  • A Tu quoque is when you claim that an argument is wrong because the person making the argument behaves in a way that contradicts the argument's premises.

    As In:

    1. The CEO of Exxon says that industrial emissions of hydrocarbons contribute to climate change.
    2. Exxon is one of the single largest hydrocarbon polluters.
    3. Therefore, industrial emissions of hydrocarbons do not contribute to climate change.

    This is a fallacy because someone being a hypocrite does not mean their conclusion is wrong. Pointing out that some response to an argument is a tu quoque is not the same thing as claiming that you are not allowed to point out hypocrisy, or that you are not allowed to assess the credibility or motivations of a speaker. It just means that hypocrisy does not guarantee that someone's argument is wrong.

  • Most of the people I know who endorse this view would assent to it because it is consistent with how they feel about the world around them, not because it is a proposition they have seriously considered.

    It just feels like everyone hates Christians, so if someone told them they were being persecuted, they would agree. In the same way, it just feels like nefarious forces are trying to "ban Christmas", so when idiots on TV claim that is what's happening, they nod their heads along. When challenged they just retreat into ignorance, saying things like "well, that's what I've heard" or "I have no idea about that", because ideas like "the war on Christmas" are not factual claims about the world; they are expressions of sentiments about what the world is like.

  • dude went into the Aokigahara forest in Japan to ridicule the corpses of people who committed suicide

    Yikes

  • Can someone explain to me why this is such a big deal? I have gathered from context the guy who lost was an influencer that people dislike, but that’s really all I know. Is there more to it than that?

  • I think you would struggle to find any serious Constitutional scholar who would agree with your interpretation. “Except in cases of impeachment” is clearly a limit on the cases in which a president has the power to issue a pardon, not a retroactive “unpardoning” of cases after a president has been impeached. In fact, the retroactive nullification of a pardon seems to fly in the face of a basic judicial principle that holds decisions to be final.

  • Law @lemmy.world

    Trump sues Wall Street Journal over alleged Epstein letter

    www.politico.com/news/2025/07/18/trump-sues-wall-street-journal-over-alleged-epstein-letter-00464191
  • Law @lemmy.world

    Beware the Lawyers - Teri Kanefield

    terikanefield.com/beware-the-lawyers/
  • Law @lemmy.world

    Michigan attorney general charges 'false electors' over efforts to overturn the 2020 election

    www.nbcnews.com/politics/2020-election/michigan-attorney-general-charges-false-electors-efforts-overturn-2020-rcna94838