Posts: 0 · Comments: 208 · Joined: 3 yr. ago

  • How is any of this showing 'complicity' in anything? The EU was not involved in the attack to begin with, so how is it complicit when it says "Hey, we're in contact with everyone involved, and we know that guy was a dictator and all, but this definitely seems to clash with international law, please stop. PS: we look out for our citizens in the affected area"? That's as clear a condemnation as you're going to get on literally the day it happened, while details are still in the air.

  • I think calling it a vassal state is a bit hyperbolic. Large powers such as the US or China can exert massive political force on any country, even each other. Sometimes it's better to give up your lunch money than be forced to take out your card and empty your bank account. And because the US has traditionally been an ally of Europe, the two are well connected and thus have many contact points to be pressured on (especially points that hurt, like defense). Points that the EU might want to keep quiet on until they can bear the pressure, especially since the US is currently led by a felon with the mind of a child who wages wars over tantrums.

    The EU has kept its direction, which has always been further ahead of the US in terms of voting rights, healthcare, and life satisfaction, but the US has now taken a stark turn diverging from that path. Sure, there are politicians in the EU that like to be US bootlickers, but even those are not particular fans of what Trump is doing. We do have Hungary, and some others, but those are far too outnumbered to be considered to be making a vassal state out of Europe.

    Europe definitely has the capacity to be a superpower, but indeed we can only do it together. And let's not forget that China and the US also depend on Europe for a lot. If you look at trade, we are effectively equals.

  • I totally agree, but I don't think that's what is being proposed. There is more than one way to become the 'big cube', and we should seek one that embodies the freedom and peace of Europe, not try to imitate that which has been tried unsuccessfully.

  • For real, I had been using Bitwarden for a couple of years for free and it never once had to show an ad to ask me to buy its subscription. I just realized that it was giving me tons of value, and that prompted me to buy the (fairly priced) subscription. That's a gold standard imo.

  • Or just... don't buy consoles at all. Buy a mini PC (which you can also upgrade) or wait for the Steam Cube? Both would be cheaper in the long run. Because why still funnel money into a company that seems adamant that it owns that machine (and let's be honest, it could try to use any kind of kill switch or safeguard to stop you from doing what you want with it) and will wield your money as a weapon against you.

    It's like soliciting a stalker because you enjoy receiving random gifts in the mail with totally no strings attached.

  • Hate to say it, but you might be missing out on something you won't ever be able to experience again afterwards. It's like with episodic releases of TV shows: half the fun is sitting with friends discussing and overthinking what just happened while you wait for the next episode. If you come in too long after the community-wide revelations, you can't experience that headspace of mystery and surprise again. Deltarune honestly handles the episodic releases very well; I'd understand if it were a series of bad partial releases.

  • I agree with you, but if you measure the width of the dress at the tip of her fingers, the left and right are about 99-100 pixels, while the middle one is 105 pixels wide. Her face in all three images is about 38-39 pixels wide (measured at the earlobe), so that rules out that they stretched the entire image slightly. But 5 pixels is significant enough to kind of muddy the validity of the OP's message, since it no longer rules out everything but the appearance of the dress. It sadly happens that effects are sometimes exaggerated even when there is a real effect at play. (A quick comparison of the numbers is sketched below.)
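
    A quick sketch of that comparison, using the pixel values measured above (hypothetical until remeasured on the actual images):

    ```python
    # Rough sanity check using the measurements from the comment above
    # (hypothetical values; remeasure on the actual images).
    dress_widths = {"left": 99, "middle": 105, "right": 100}  # px at fingertip height
    face_widths = {"left": 38, "middle": 39, "right": 38}     # px at the earlobe

    base_dress = (dress_widths["left"] + dress_widths["right"]) / 2
    base_face = (face_widths["left"] + face_widths["right"]) / 2

    dress_change = dress_widths["middle"] / base_dress - 1  # roughly +5-6%
    face_change = face_widths["middle"] / base_face - 1     # within measurement noise

    print(f"dress width change: {dress_change:+.1%}")
    print(f"face width change:  {face_change:+.1%}")
    # If the whole frame had been stretched, both ratios would move together;
    # a dress-only change suggests a local edit rather than uniform scaling.
    ```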

  • Mailroom aside, if a delivery guy is fine crossing a city of 20-30k people horizontally in traffic, I don't really see why this is such a bad thing when you break it down.

    I count 35 floors, so you can cut it down to ~850 people on each floor after an elevator ride, and a building like this will probably have at least 4 elevator areas sectioning the building almost equally.

    So you're down to about ~210 people after entering the right section of the building; that's like a big street / small neighborhood (and how far you have to walk should scale closely with that). With this many people in one area you can really easily batch deliveries, and a delivery place will probably settle quite close to such a hub of people anyway. (Rough numbers below.)
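
    A back-of-the-envelope version of those numbers, assuming ~30k residents, 35 floors, and 4 elevator areas as above:

    ```python
    # Back-of-the-envelope math from the comment above (assumed values).
    residents = 30_000      # upper end of the 20-30k estimate
    floors = 35
    elevator_cores = 4      # assumed number of sections per floor

    per_floor = residents / floors            # ~857 people per floor
    per_section = per_floor / elevator_cores  # ~214 people per elevator core

    print(f"per floor:   ~{per_floor:.0f} people")
    print(f"per section: ~{per_section:.0f} people")
    # Roughly a large street / small neighborhood, so walking distance
    # per delivery scales about the same way it would at ground level.
    ```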

  • Not completely true. It just needs to be data that is organic enough. Good AI generated material is fine for reinforcement since it is still material (some) humans would be fine seeing. So more like: it needs to be human approved.

  • There's really no good way - if you act normal they train on you, and if you act badly they train on you as an example of what to avoid.

    My recommendation: make sure it's really hard for them to guess which you are, so you hopefully end up in the wrong pile. Use slang they have a hard time pinning down, talk about controversial topics, avoid posting to places that are easily scraped, and build spaces free from bot access. Use anonymity to make yourself hard to index. Anything you post publicly can sadly be scraped, but you can make it near unusable for AI models.

  • You are probably confusing fine-tuning with training. You can fine-tune an existing model to produce output more in line with sample images, essentially embedding a default "style" into everything it produces afterwards (e.g. LoRAs). That can be done with such a small set of images, but it still requires the full model that was trained on likely billions of images. (A rough sketch of the idea is below.)
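
    To illustrate the difference, here's a minimal sketch of the low-rank adapter idea in plain PyTorch (not any particular LoRA library); the layer size and rank are made-up values:

    ```python
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Wrap a frozen pretrained linear layer with a trainable low-rank delta."""
        def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False         # the billion-image base model stays untouched
            self.down = nn.Linear(base.in_features, rank, bias=False)
            self.up = nn.Linear(rank, base.out_features, bias=False)
            nn.init.zeros_(self.up.weight)      # start as a no-op: output == base output
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + self.scale * self.up(self.down(x))

    # Hypothetical size; a real diffusion model has many such layers.
    pretrained = nn.Linear(768, 768)
    adapted = LoRALinear(pretrained, rank=4)

    trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
    total = sum(p.numel() for p in adapted.parameters())
    print(f"trainable params: {trainable} / {total}")  # a tiny fraction of the whole
    ```

    Only the tiny adapter weights get trained on the sample images and shipped as the "LoRA"; on their own they do nothing, because the frozen pretrained model still does all the heavy lifting.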

  • There's always just people that mess up on the form. But they also monitor the signing rate and saw some periods of higher-than-normal signing in the middle of the night in the EU, indicating someone might have run a bot to sign with invalid information. The EU only validates the signatures once the petition is closed, so they need a safe margin where, even with a significant number of invalid signatures, they still make it. Afaik 1.2 mil is about what they would expect to be safe for a normal petition of this size, and 1.4 mil is basically more than enough to compensate for any bad actors. (Rough margin math below.)
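
    Rough margin math, assuming the 1 million valid signatures an EU citizens' initiative needs (a threshold not stated above):

    ```python
    # Rough margin calculation; the 1,000,000 valid-signature threshold is an
    # assumption, the collected totals come from the comment above.
    threshold = 1_000_000
    for collected in (1_200_000, 1_400_000):
        max_invalid = 1 - threshold / collected
        print(f"{collected:,} signatures tolerate up to {max_invalid:.0%} invalid entries")
    # 1.2M tolerates roughly 17% invalid, 1.4M roughly 29%.
    ```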

  • Ross explained it in his last video: there are reasons to be skeptical and unsure whether it's truly there until at least 1.4 mil signatures. And more signatures are never bad. So both need more attention. If it reaches people in the EU it will also reach those in the UK.

  • This, so very much. I've been saying it since 2020. People who think the big corporations (even the ones that use AI) aren't playing both sides of this issue from the very beginning just aren't paying attention.

    It's in their interest to have those positive toward AI defend them by association, energizing those negative toward AI into an "us vs them" mentality, and the other way around as well. It's the classic divide and conquer.

    Because if people refuse to talk to each other about it in good faith, refuse to treat each other with respect, and refuse to learn where the other side is coming from or why they hold their opinions, you can keep them fighting amongst themselves instead of banding together and demanding realistic and fair policies in regards to AI. This is why bad faith arguments and positions must be shot down both on the side you agree with and on the one you disagree with.

  • A court will decide such cases. Most AI models aren't trained for the purpose of whitewashing content, even if some people imply that's all they do, but if you decided to actually train a model for that explicit purpose, you would most likely not get away with it if someone dragged you in front of a court for it.

    It's a similar defense to the one some file hosting websites used against accusations of hosting and distributing copyrighted content (e.g. MEGA), but in such cases it was very clear what their real goals were (especially in court), and at the same time it did not kill all file sharing websites, because not all of them were built with the intention to distribute illegal material under the guise of legitimate operation.

  • Can I add 4. an integrated video downloader that actually downloads videos, in whatever format you want, with no internet connection required to watch them. This to me is still the biggest scam 'feature' of YouTube Premium. You can "download" videos, but not as e.g. an mp4; you get an encrypted file that's only playable inside the YouTube app, and only if you've connected to the internet in the last couple of days.

    That's not downloading, that's just jacking my disk space to avoid buffering the video from YouTube's servers. That's not a feature, that's me paying for YouTube's benefit.

    I cancelled and haven't paid for Premium in years because of it. When someone scams me out of features I paid for, I don't reward that shit until they either stop lying in their feature list, or actually start delivering.

  • It really depends. There are some good uses, but it requires careful consideration and understanding of what the technology can actually provide. And if for your use case there isn't anything, it's just not what you should use.

    Most if not all of the bigger companies that push it don't really try to use it for those purposes, but instead treat it as the next big thing that nobody quite understands, building mostly on hype. But smaller companies and open source initiatives do try to make the good uses more accessible and less objectionable.

    There's plenty of cases where people do nifty things that have positive outcomes. Researchers using it for pattern recognition, scambait chatbots, creative projects that try to make use of the characteristics of AI different from human creations, etc.

    I like to keep an open mind as to what people come up with, rather than dismissing it outright when AI is involved. Although hailing something as an AI product is a red flag for me if that's all that's advertised.

  • I do not share your experience of people who despise AI talking about it more, but if your community does, that's great. I am kind of skeptical that this really is the case, though, because of some of your statements.

    Most communities I see like that are incredibly rude and dismissive of people who see the positive sides of the technology. Even objective statements about the technology are dismissed because they are not negative about it (e.g. that AI is advancing medical research and healthcare, or is also being used to stop scammers), and people who discuss that are mocked or ostracized by those groups. It's cult-like behavior, where only the group opinion is allowed. And if you even dare like something that was made with AI, even though more and more media such as games use it, and even if you still have reasonable objections, oh boy.

    I highly disagree with your statement that hate and anger spread an opinion far more easily, because it contains an assumption that people already agree with it ahead of time. Take racism. I hope you're a nice person, so seeing a wildly racist post hating on X people show up on your feed isn't suddenly going to make you think "Huh, maybe they have a point, X people are to be hated." It just makes you angry and resentful in return, with an opposing opinion, aka polarization. And that kills the conversation. For racism that's kind of warranted, since the person with the irrational hatred isn't to be taken seriously. And regardless of whether the position is pro, neutral, or anti AI, if it is defended with irrationality, they will be the ones in this analogy. I equally denounce people who have no respect for artists and see AI as a way to kill the creative industry, and people who pretend nothing good can ever come from AI and that everyone who uses it has no conscience or feeling for creativity.

    As for your points about fighting it, I cannot find any point in them that I agree with. Three or four years ago I would have entertained the notion that it might go away, but it has been showing up all over society. It's an unattainable goal. Even if it somehow got banned in one country, that does not stop other countries around the world, with different cultures and values, from using it, nor does it stop bad actors from using it so long as it cannot be proven to be AI. It's like thinking that because drugs are illegal, nobody is doing drugs. And to drive that point even further: positive uses, such as certain drugs turning out to be effective treatments for PTSD or chronic pain, end up going undiscovered. That's the kind of world irrational reasoning builds.

    And by holding an opinion that can only be satisfied by someone unequivocally agreeing with you, with no room for reasonable disagreement on some aspects such as fair usage, you make alliances that could actually get majorities to secure rights and fair treatment impossible.

    "They do in the sense that all of them are driven by neophilia and big tent people horny for cash and power."

    See, this is the kind of statement I do denounce if you are saying it applies to AI as a whole, and why I don't really believe you are in a community that reasonably discusses AI. It's such a close-minded statement that it is only applicable to most big companies that use AI. It doesn't respect artists who use it and whose work has been systematically undervalued, nor researchers who use it for the common good, nor any other use that has reasonable grounds not to be considered the same.

  • It can't simultaneously be super easy and bad, yet also a massive propaganda tool. You can definitely dislike it for legitimate reasons, though. I'm not trying to anger you or anything, but if you know about #1, you should also know why it's a good tool for misinformation. Or you might, as I proposed, be part of the group that incorrectly assumes they already know all about it and will be more likely to fall for AI propaganda in the future.

    E.g. Trump posting pictures of himself as the pope, of Gaza as a paradise, etc. These still have some AI tells, and Trump is a grifting moron with no morals or ethics, so even if it weren't AI you would still be skeptical. But one of these days, someone like him whom you don't know ahead of time is going to make an image or a video that's just plausible enough to spread virally. And it will be used to manufacture legitimacy for something horrible, as other propaganda has in the past.

    "but why do we want it? What does it do for us?"

    You yourself might not want it, and that's totally fine.

    It's a very helpful tool for creatives such as VFX artists and game developers, who are kind of masters of making things that aren't real seem real. The difference is that they don't want to lie about or obfuscate what tools they use, but #2 gives them a huge incentive to do just that; not because they don't want to disclose it, but because chronically overworked and underpaid people don't also have time to deal with a hate mob on the side.

    And I don't mean they use it as a replacement for their normal work, or to just sit around and do nothing; they integrate it into their processes to enhance quality or to reduce time spent on tasks with little creative input.

    If you don't believe me that's what they use it for, here's a list of games on Steam with at least a 75% rating, 10,000 reviews, and an AI disclosure.

    And that's a self-perpetuating cycle. People hide their AI usage to avoid hate -> fewer people are aware of the depth of what it can be used for, so they think AI slop or other obviously AI-generated material is all it can do -> which biases them against any kind of AI usage, because they think it's either easy to use well or just lazy to use -> giving people hate for it -> in turn making people hide their AI usage more.

    By giving creatives the room to teach others about what AI helped them do, regardless of whether one likes or dislikes it, such as through behind-the-scenes material, artbooks, guides, etc., we increase awareness in the general population of what it can actually do and of the fact that it is being used. Just imagine a world where you never knew about the existence of VFX, or thought it was only used for that one stock explosion and nothing else.

    PS: Bitcoin is still around and decently big; I'm not a fan of it myself, but that's just objective reality. NFTs have always been mostly good for scams. But really, these technologies have little to no bearing on the debate around AI. History is littered with technologies that didn't end up panning out, but it's the ones that do that cause shifts. AI is such a technology in my eyes.