Been meaning to post about this for some time, but I am having some trouble articulating my thoughts in English. So this is probably mostly going to be an incoherent ramble with some personal anecdotes.

I have in my immediate circle a tech person, an engineer who is somewhat introverted and has socially only mingled with others like him. He sees the world in a very engineer sort of way and has some unconscious male supremacy and misanthropy in the way he has learnt to view the world. He tends to stan Elon Musk types of thinking and dismisses human contribution as lacking and faulty. In essence he seems to have formed this weird machine-supremacist thinking, which might be symptomatic of the tech world as a whole.

I’ve had incredible struggle sessions with him over AI, which he was first a believer in, then believed would destroy work. Basically going down the tech bro pipeline with it.

But what is interesting to me is that currently he is aware of the issues with it and still uses it. He clearly thinks it is superior to humans and therefore is willing to listen to it in ways he was never ready to listen to other humans.

Here is an example. He has a particular problem, one he has had for a long time. It is in an area I am trained in and have attempted to coach him in previously. Not long ago he came to me telling me how AI had fixed this problem for him that no human ever could. It sounded sus, but I did not confront him on it. Instead I decided to wait and see what it is that he now does.

It turns out that the AI has given him exactly the same exercises I have recommended he do time and time again over the years, which he never did. The only difference is that he now does them. Because this clearly very male-coded non-human model advised him to do them from a position of authority that he actually listens to.

And this is the thing. I think the AI as a model appeals to people like him because it is something they do not dismiss right out of the gate like they do other people, or people they deem (unconsciously) less than they are. And I feel like this touches on something relevant when it comes to the entire AI bubble and the architects of said bubble as a whole.

    • Le_Wokisme [they/them, undecided]@hexbear.net · 4 days ago

      it really is an earned misanthropy, especially if you’re old enough to have been bullied all through k-12. Not sure anything will ever give me sympathy for antimaskers though, i hope they all get what they deserve.

  • ChaosMaterialist [he/him]@hexbear.net · 4 days ago

    Taking a shot in the dark:

    Was your friend fairly bright and successful in school, such that they were regularly praised for how Smart they were by authority figures, and thus developed a habit of flexing their intelligence (unconsciously fishing for compliments) while also developing an anxiety around looking stupid in front of others?

  • D61 [any]@hexbear.net · 4 days ago

    There are people who will never accept a solution or advice until it is perceived as “their idea.”

    Maybe, in this instance, it’s less about AI being a fancy “Magic 8 Ball” and more about needing some vector for receiving advice that makes it seem like “their idea.”

  • Sleepless One@lemmy.ml · 4 days ago

    Here is an example. He has a particular problem, one he has had for a long time. It is in an area I am trained in and have attempted to coach him in previously. Not long ago he came to me telling me how AI had fixed this problem for him that no human ever could. It sounded sus, but I did not confront him on it. Instead I decided to wait and see what it is that he now does.

    It turns out that the AI has given him exactly the same exercises I have recommended he do time and time again over the years, which he never did. The only difference is that he now does them.

    I had this happen in real time at my job a few months back. I was pair programming with my tech lead and we were trying to figure out how to implement something. I can’t recall what exactly, but I remember suggesting something to him that I was confident would work. He either ignored me or wanted confirmation (can’t recall which) and asked M$ copilot, and it told him the exact thing I had suggested. Only then did he go through with it.

    I’m a gender-conforming man. I can only imagine I would be ignored even harder if I were a woman or femme-presenting.

  • Beetle [hy/hym]@hexbear.net · 4 days ago

    Our society is built on STEM superiority. People in that field will dismiss all other fields because they do not understand the complexities of other sciences and assume they could do them better, since they already ‘proved’ their worth by being in STEM.

    I personally avoid these people because they have little incentive to break out of their bubble, as they are economically better off than most people in society. I do have a theory that Open IT could be a radicalising subject for some.

    • This probably is some of it. There is this belief in big datasets ruling out “human error”, and that it is the human that is always flawed, be it in driving a car or deducing solutions. What I wonder is whether this is a chicken-or-egg situation: do these people get socialized to view things through certain narrow lenses, or do these fields select for people who are very willing to think like this? Probably both.

      I’ve managed to radicalize him quite a bit towards understanding that it is the human that is the only thing with agency, the only one that can innovate or discover something. He gets it just fine, but still “listens” to the machine and clearly still puts more trust in it.

  • SchillMenaker [he/him]@hexbear.net · 4 days ago

    I have an infinite number of thoughts regarding AI and situations like this, but I think I can boil down the best into a single concept here:

    People label generative AI mistakes as hallucinations. “The machine thinking that people have six fingers is a hallucination, people actually have five fingers.” This is a fundamental misunderstanding of what is happening. The machine does not know how many fingers a person is supposed to have, the machine does not know what a person is, the machine does not know what a finger is. In this regard, the machine is not hallucinating when it is wrong, the machine is hallucinating every detail of what it produces. It’s a fantastic trick to get it to hallucinate correctly most of the time, but it can never hallucinate correctly all of the time for obvious reasons.
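    The point that the model only ever produces statistically plausible continuations, with no grounding in what a finger or a person is, can be sketched with a toy next-token sampler (a hypothetical miniature, not a real LLM — the vocabulary and counts here are made up for illustration):

```python
import random

# Toy next-token model: nothing but co-occurrence counts scraped from
# hypothetical training text. There is no concept of "finger" or "person"
# here, only which word tends to follow which.
counts = {
    "people": {"have": 8, "are": 2},
    "have":   {"five": 6, "six": 4},  # statistics, not anatomy
    "five":   {"fingers": 10},
    "six":    {"fingers": 10},
}

def next_token(word):
    """Sample the next word in proportion to its count."""
    options = counts[word]
    r = random.uniform(0, sum(options.values()))
    for tok, c in options.items():
        r -= c
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

random.seed(0)
sentence = ["people"]
while sentence[-1] in counts:
    sentence.append(next_token(sentence[-1]))
print(" ".join(sentence))
```

    Run it across many seeds and it will happily emit both “people have five fingers” and “people have six fingers” — the correct and the incorrect sentence are produced by exactly the same mechanism, which is the commenter’s point about everything being a “hallucination.”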

    Ask yourself: how much stock should we put in the hallucination machine? How much should we trust other people who put significant stock in the hallucination machine? How can we believe in our own value if we place a higher value on the hallucination machine? If a person can be presented with all of that and still think that generative AI is currently all-powerful, or that it is ever going to lead to AGI, then they absolutely do need artificial intelligence, because they clearly lack any natural intelligence.

  • iByteABit [comrade/them]@hexbear.net · 5 days ago

    I have some experience with a friend who is also a tech bro; he works at a startup that is currently trying to incorporate some cutting-edge AI tools into the work process and see how far they can turn coding into a thing of the past.

    He’s generally quite left-leaning, but I think the pride from working on something cutting edge has gotten to him a bit. He didn’t really agree that AI will lead to engineering jobs being lost due to the massive increase in productivity per worker, because he thinks the software giants doing this are going downhill and will get overtaken by the “good” companies.

    This should lead to a pretty long discussion about how capitalism works, and how capitalist competition eventually leads to monopolies like said software giants, precisely because they are the ones who exploit workers the most and extract the most surplus value from them.

    Also, what must be discussed around the improvements AI brings is, as with any technological advancement, “technology for whom?” If we lived under a socialist system working for the people, AI would naturally lead to working hours being reduced, increased productivity being used for the common good, and workers needing less time for coding and having more time for creative thinking and participation in the democratic process. Under capitalism, though, the now-unneeded workers will just be thrown back into the reserve army of jobless workers, the now-easier engineering positions will become a reason for wages to be reduced, the increased overall productivity will as always be used for the needs of capitalists, and the environment will get fucked because it’s not as profitable to do AI with limited resources.

    • bobs_guns@lemmygrad.ml · 5 days ago

      The most likely scenario imo is that there will be some new vibe-coded codebases which are horrible messes & impossible to make any progress on, so there will be a moderate demand for people who actually know what they are doing to try to fix them up when interest rates get cut after the bubble pops. But time will tell…