• psvrh@lemmy.ca · 7 months ago

    This is what happens when your platform prioritizes engagement over everything else, including people’s lives.

    • loathsome dongeater@lemmygrad.ml · 7 months ago

      For this particular occasion, the engagement aspect is less important. The more interesting question is whether BJP pressure on Meta was the reason these ads were given the green light. Twitter’s offices were raided in the past over some petty BS, so it is not out of the question.

  • Onno (VK6FLAB)@lemmy.radio · 7 months ago

    If it’s like any other Facebook monstrosity, the processes that “detect” such things are based purely on US morals and values, such as they are.

    In other words, show a nipple and the thing is gone. Show something racist and it’s fine as long as it doesn’t show a nipple.

    In Australia, FB doesn’t care one iota about highly offensive content directed at First Nations people, and is perfectly fine permitting the Australian equivalent of the US “N” word, just as long as it doesn’t show a nipple.

    Oh, yeah, the banned nipple has to be attached to a female, preferably a white Caucasian one. The rest seems fine, especially in an “indigenous setting”.

    In other words, FB only cares about its USA morality police and is perfectly fine with extracting money from everyone else, regardless of local sensitivities.

    LinkedIn is the same. I’m not sure if that started after Microsoft bought it; until then it wasn’t really a social media site, even if it did horrible things like extracting contacts from unsuspecting users, who discovered that everyone in their address book had been invited, including people on the address book block list.

  • AutoTL;DR@lemmings.world (bot) · 7 months ago

    This is the best summary I could come up with:


    The Facebook and Instagram owner Meta approved a series of AI-manipulated political adverts during India’s election that spread disinformation and incited religious violence, according to a report shared exclusively with the Guardian.

    According to the report, all of the adverts “were created based upon real hate speech and disinformation prevalent in India, underscoring the capacity of social media platforms to amplify existing harmful narratives”.

    During his decade in power, Modi’s government has pushed a Hindu-first agenda which human rights groups, activists and opponents say has led to the increased persecution and oppression of India’s Muslim minority.

    Meta’s systems failed to detect that all of the approved adverts featured AI-manipulated images, despite a public pledge by the company that it was “dedicated” to preventing AI-generated or manipulated content being spread on its platforms during the Indian election.

    “Supremacists, racists and autocrats know they can use hyper-targeted ads to spread vile hate speech, share images of mosques burning and push violent conspiracy theories – and Meta will gladly take their money, no questions asked,” he said.

    Meta has previously been accused of failing to stop the spread of Islamophobic hate speech, calls to violence and anti-Muslim conspiracy theories on its platforms in India.


    The original article contains 963 words, the summary contains 201 words. Saved 79%. I’m a bot and I’m open source!