Posts 4 · Comments 1699 · Joined 2 yr. ago

  • I do believe gender is a social construct that's becoming outdated, and that we shouldn't have either women or men at all.

    Make of that what you want.

  • I don't know about all European countries, but here the 30-minute break is also counted as work hours.

  • Probably. Here in Spain public workers have a 35-hour work week, and a general 37.5-hour week is being introduced. To get there we usually take half an hour or a full hour off each day.

  • Belgium is 38 hours, for instance.

  • The United States of America is not a planet.

    There are countries with both more and fewer work hours.

  • Yeah, sorry, I forgot. I will release them now. Thanks for reaching out to me, the person responsible for releasing the files, through this unrelated Lemmy thread.

    How did you know to find me here?

  • Capitalism is working so well that capitalists are the most consistently class-conscious group. They are aware of which class they belong to and side with it.

  • I don't think it's easy to do, given how unreliable "AI detectors" are in general.

    Also, why? Music is something very much driven by feeling. If you like it, you like it; if you don't, you don't. I don't think a quantitative measure of how a song is made is a reasonable way to distinguish which songs you like and which you don't.

    I can just imagine:

    • Do you like this song?
    • I don't know yet. (Pulls phone out to measure the AI-ness of the song.) No, I don't like it.
  • Money. The answer is always money. If it's cheaper to build on land, they will.

    The solution would probably be a special tax that forces them to move to more environmentally friendly locations.

  • Why would they request the same data so many times a day if the objective was AI model training? It makes zero sense.

    Also, Google's bots obey robots.txt, so they are easy to manage (see the sketch at the end of this comment).

    There may be tons of reasons Google is crawling your website, from ad research to any other kind of research. The only AI-related use I can think of is RAG. But that would take some user requests away, because if the user got the info through Google's AI answer they would not visit the website. I suppose that would suck for the website owner, but it wouldn't drastically increase the number of requests.

    But for training I don't see it; there's no need at all to keep constantly scraping the same site for model training.
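
    As a rough illustration of how manageable the obedient crawlers are (the robots.txt content and URLs below are made up, and the check uses Python's standard urllib.robotparser):

        # Hypothetical robots.txt plus a check with the standard library's robot parser,
        # which models how an obedient bot like Googlebot decides what it may fetch.
        from urllib import robotparser

        robots_txt = """
        User-agent: Googlebot
        Disallow: /private/

        User-agent: GPTBot
        Disallow: /
        """

        rp = robotparser.RobotFileParser()
        rp.parse(robots_txt.splitlines())

        print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # False
        print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))     # True
        print(rp.can_fetch("GPTBot", "https://example.com/blog/post"))        # False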

  • Cloudflare has a clear advantage in the sense that it can move the front door away from the host and redistribute attacks across thousands of servers. It's also able to analyze attacks from a position where it can see half the internet, so it can develop and deploy very efficient blocklists.

    I'm the first one who is not a fan of Cloudflare, though, so I use CrowdSec, which builds community blocklists based on user statistics.

    PoW as bot detection is not new. It has been around for ages, but it has never been popular because there have always been better ways to achieve the same or better results. A captcha may be more intrusive for the user, but it can actually deflect bots completely (even the best AI could be unable to solve a well-made captcha), while PoW only introduces an energy penalty that is expected to act as a deterrent (a rough sketch of the idea follows at the end of this comment).

    My bet is that Invidious is under constant attack from Google, for obvious reasons. It's a hard situation to be in overall. It's true that they are a very particular use case: a lot of users and a lot of bots interested in their content, very resource-heavy content, and they are also the target of one of the biggest corporations in the world. I suppose Anubis could act as mitigation there, at the cost of being less user friendly. And if YouTube goes and does the same, it would really make for a shitty experience.
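
    For reference, the core idea behind a hash-based PoW challenge fits in a few lines. This is only an illustrative sketch of the general technique, not Anubis's actual code; it shows why the client pays in CPU time while the server verifies the work almost for free:

        # Illustrative hash-based proof-of-work sketch (not Anubis's implementation).
        import hashlib
        import itertools

        def solve(challenge: str, difficulty_bits: int = 20) -> int:
            """Client side: find a nonce whose SHA-256 has `difficulty_bits` leading zero bits."""
            target = 1 << (256 - difficulty_bits)
            for nonce in itertools.count():
                digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
                if int.from_bytes(digest, "big") < target:
                    return nonce  # costs ~2**difficulty_bits hashes on average

        def verify(challenge: str, nonce: int, difficulty_bits: int = 20) -> bool:
            """Server side: a single hash is enough to check the work."""
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
            return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

        nonce = solve("example-challenge")
        print(verify("example-challenge", nonce))  # True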

  • Most of those companies are what's called "GPT wrappers". They don't train anything; they just wrap an existing model or service into their software. AI is a trendy word that attracts quick funding, so many companies will say they are AI-related even if they are just making an API call to ChatGPT (see the sketch at the end of this comment).

    For the few that will attempt to train something, there is already a wide variety of datasets for AI training, or they may try to get data on a very specific topic. But to be scraping the bottom of the pan so hard that you need to scrape some little website, you have to be talking about a model with a massive number of parameters, something that only about five companies in the world would actually need to improve their models. The rest of the people trying to train a model are not going to try to scrape the whole internet, because they have no way to process and train on all of that.

    Also, if some company is willing to waste a ton of energy training on some data, doing some PoW to obtain that data would be an inconvenience, but I don't think it will stop them. They are literally building nuclear plants for training; a little crypto challenge is nothing in comparison. But it can be quite intrusive for legitimate users. For starters, it forbids browsing with JS deactivated.
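
    To illustrate what I mean by a wrapper (a hypothetical product, assuming the official openai Python client and an OPENAI_API_KEY in the environment):

        # Hypothetical "GPT wrapper": the whole product is a canned prompt plus one API call.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def summarize_meeting(notes: str) -> str:
            """The 'AI product': a fixed system prompt around someone else's hosted model."""
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # whichever hosted model the wrapper resells
                messages=[
                    {"role": "system", "content": "Summarize meeting notes as short bullet points."},
                    {"role": "user", "content": notes},
                ],
            )
            return response.choices[0].message.content

        print(summarize_meeting("Discussed the Q3 roadmap and agreed to ship the beta in May."))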

  • I mean, the number of pirates correlates with global temperature. That doesn't mean causation.

    The rest of the indicators would also match any archiving bot, or any bot in search of big data. We must remember that big data is used for much more than AI. At the end of the day scraping is cheap, but very few companies in the world have access to the processing power to train on that amount of data. That's why it seems so illogical to me.

    How many LLMs that are the result of a full training run do we see per year? Ten? Twenty? Even if they update and retrain often, that's not compatible with the volume of requests people are attributing to AI scraping, the kind that would put services at DoS risk. Especially since I would think that no AI company would try to scrape the same data twice.

    I have also experienced an increase in bot requests on my host, but I just think it's a result of the internet getting bigger: more people using the internet with more diverse intentions, some ill, some not. I've also experienced a big increase in probing and attack attempts in general, and I don't think it's OpenAI trying some outdated Apache vulnerability on my server. The internet is just a bigger sea with more fish in it.

  • It's very intrusive in the sense that it runs a PoW challenge, unsolicited, on the client. That's literally like having a cryptominer running on your computer for each challenge.

    Everyone can do what they want with their server, of course. But, for instance, I'm very fond of scraping. I have FreshRSS running on my server, and the way it works is that when the target website doesn't provide an RSS feed, it scrapes the site to get the articles. I also have another service that scrapes pages to detect changes (something as simple as the sketch at the end of this comment).

    I think part of the beauty of the internet is being able to automate processes; software like Anubis puts a globally significant energy tax on these automations.

    Once again, everyone can do whatever they want with their server. But the thing I like the least is that this software is being marketed, with some great PR, as part of some great anti-AI crusade, and I don't know if that comes from the devs themselves or from another party. I don't like it mostly because I think it's disinformation, and manipulative towards people who are perhaps easy to manipulate if you say the right words. I also think it's a discourse that pushes certain topics toward radicalization, and I'm a firm believer that right now we need to reduce radicalization overall, not increase it.
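
    The page-change service I mentioned is nothing more exotic than this (a minimal sketch, assuming the requests library and a placeholder URL):

        # Minimal page-change monitor: the kind of harmless automation a PoW wall taxes.
        import hashlib
        import time

        import requests

        URL = "https://example.com/status"  # placeholder target
        last_hash = None

        while True:
            body = requests.get(URL, timeout=30, headers={"User-Agent": "change-monitor/0.1"}).content
            digest = hashlib.sha256(body).hexdigest()
            if last_hash is not None and digest != last_hash:
                print("Page changed!")
            last_hash = digest
            time.sleep(3600)  # poll once an hour, politely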

  • Not really. I only ask because people always say it's for LLM training, which seems a little illogical to me, knowing the small number of companies that have access to the compute power to actually train on that data. And big companies are not going to scrape the same resource hundreds of times for a piece of information they already have.

    But I think people should be more critical and try to understand who is making the requests and for what purpose. Then they could make a better-informed decision about whether they need that system (which is very intrusive for the clients) or not.

  • I don't think it's millions. Take into account that a DDoS attacker is not going to execute JavaScript code, at least not a competent one, so they are not going to run the PoW.

    In fact, the unsolicited and unannounced PoW does not provide more protection against DDoS than a captcha.

    The mitigation comes from the server answering with a smaller, cheaper challenge response instead of the full page, so the number of requests needed to saturate the service must increase. By how much? It depends on how demanding the "real" website is in comparison (a back-of-the-envelope estimate follows below). I doubt the answer is millions. And they would achieve the exact same result with a captcha, without running literal malware on the clients.
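
    A back-of-the-envelope estimate with completely made-up numbers, just to show the shape of the argument:

        # Made-up numbers: how much harder does a cheap challenge page make it to
        # saturate the server, given that attackers simply skip the PoW itself?
        full_page_cost_ms = 50.0    # assumed CPU cost of rendering the real page
        challenge_cost_ms = 1.0     # assumed cost of serving the small challenge page
        server_budget_ms = 1000.0   # CPU milliseconds available per second

        rps_without = server_budget_ms / full_page_cost_ms
        rps_with = server_budget_ms / challenge_cost_ms

        print(f"Requests/s to saturate without the challenge: {rps_without:.0f}")
        print(f"Requests/s to saturate with the challenge:    {rps_with:.0f}")
        print(f"Roughly a {rps_with / rps_without:.0f}x increase, not millions")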

  • That's precisely my point. It fits a very narrow risk profile: people who are going to be DDoSed, but not by a big actor.

    It's not the most common risk profile. Usually DDoS attacks are either very heavy or don't happen at all. These "half throttle" DDoS attacks are not really common.

    I think that's why, when I read about Anubis, it's never in the context of DDoS protection. It's always in the context of "let's fuck AI", like this precise thread of comments.

  • How do you know the requests that went away were from AI companies and not from something with any other purpose?

  • I'm not a native English speaker, so I apologize if there's bad English in my response, and I'd be thankful for any corrections.

    That being said, I do host public services, from before and after AI was a thing. And I have asked many of the people who claim "we are under AI bot attacks" how they are able to tell whether a request comes from an AI scraper or just any other scraper, and there was no satisfying answer.
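
    The best I've seen is counting the user agents that crawlers declare themselves, which only catches the honest ones. A rough sketch, assuming a standard combined-format access log at a hypothetical path and a hand-picked list of documented crawler tokens (GPTBot, ClaudeBot, and so on; any of them can be spoofed):

        # Tally declared crawler user agents in an access log. Spoofed or headless
        # clients won't show up here, which is exactly why attribution is so hard.
        from collections import Counter

        AI_CRAWLERS = ["GPTBot", "ClaudeBot", "CCBot", "Bytespider", "Google-Extended"]

        counts = Counter()
        with open("/var/log/nginx/access.log") as log:  # hypothetical path
            for line in log:
                parts = line.rstrip().rsplit('"', 2)
                if len(parts) < 3:
                    continue
                user_agent = parts[1]  # last quoted field in the combined log format
                label = next((name for name in AI_CRAWLERS if name in user_agent), "everything else")
                counts[label] += 1

        for label, count in counts.most_common():
            print(f"{label}: {count}")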