
  • They're both excellent. The original series is a fair bit darker and more depressing, and End of Evangelion is definitely a lot more WTF than anything that happens in the rebuild movies (which isn't a bad thing necessarily). The rebuild movies, meanwhile, have much higher production values, and the fights are generally much better--most of the gifs of Ramiel you see are from the rebuild. The characters are also a lot more mentally stable--they're all still depressed and dealing with heavy shit, but it's "I'm taking my meds" depression instead of "untreated spiral" depression.

  • Watch up to the last episode, then watch End of Evangelion for the canon ending. And/or watch the rebuild movies for a condensed retelling that goes in its own direction.

  • That feels like it's rather beside the point, innit? You've got AI companies showing off AI art and saying "look at what this model can do," you've got entire communities on Lemmy and Reddit dedicated to posting AI art, and they're all going "look at what I made with this AI, I'm so good at prompt engineering" as though they did all the work, while the millions of hours spent actually creating the art used to train the model get no mention at all, much less any compensation or permission for those works to be used in the training. Sure does seem like people are passing AI art off as their own, even if they're not claiming copyright.

  • The fuck is wrong with you? Why do you give a shit about what people enjoy? That's pretty weird, bro

  • What evidence is there that gen AI hasn't peaked? They've already scraped most of the public Internet to get what we have right now, so what else is there to feed it? The AI companies are also running out of time--VCs are only willing to throw money at them for so long, and given that the rate of expenditure on AI so far outpaces pretty much every other major project in human history, they're going to want a return on investment sooner rather than later. If they were making significant progress on a model that could do the things you're describing, they would be talking about it so that they could buy time and funding from VCs. Instead, we're getting vague platitudes about "AGI" and meaningless AI sentience charts.

  • I actually had some thoughts about this and posted this in a similar thread:

    First, that artist will only learn from a handful of artists, instead of from the entire body of work of every artist in the field all at the same time. They will also eventually develop their own unique style and voice--the art they make will reflect their own views in some fashion, instead of being a poor facsimile of someone else's work.

    Second, mimicking the style of other artists is a generally poor way of learning how to draw. Just leaping straight into mimicry doesn't really teach you any of the fundamentals like perspective, color theory, shading, anatomy, etc. Mimicking an artist that draws lots of side profiles of animals in neutral lighting might teach you how to draw a side profile of a rabbit, but you'll be fucked the instant you try to draw that same rabbit from the front, or if you want to draw a rabbit at sunset. There's a reason why artists do so many drawings of random shit like cones casting a shadow, or a mannequin doll doing a ballet pose, and it ain't because they find the subject interesting.

    Third, an artist spends anywhere from dozens to hundreds of hours practicing. Even if someone sets out expressly to mimic someone else's style and teaches themselves the fundamentals, it's still months and years of hard work and practice, and a constant cycle of self-improvement, critique, and study. This applies to every artist, regardless of how naturally talented or gifted they are.

    Fourth, there's a sort of natural bottleneck in how much art that artist can produce. The quality of a given piece of art scales roughly linearly with the time the artist spends on it, and even artists that specialize in speed painting can only produce maybe a dozen pieces of art a day, and that kind of pace is simply not sustainable for any length of time. So even in the least charitable scenario, where a hypothetical person explicitly sets out to mimic a popular artist's style in order to leech off their success, it's extremely difficult for the mimic to produce enough output to truly threaten their victim's livelihood. In comparison, an AI can churn out dozens or hundreds of images in a day, easily drowning out the artist's output.

    And one last, very important point: artists who trace other people's artwork and upload the traced art as their own are almost universally reviled in the art community. Getting caught tracing art is an almost guaranteed way to get yourself blacklisted from every art community and banned from every major art website I know of, especially if you're claiming it's your own original work. The only way it's even mildly acceptable is if the tracer explicitly says "this is traced artwork for practice, here's a link to the original piece, the artist gave full permission for me to post this." Every other creative community--writing, music--takes a similarly dim view of plagiarism, though it's much harder to prove outright than with art. Given all this, why should the art community treat someone differently just because they laundered their plagiarism with some vector multiplication?

  • You literally haven't, except maybe by sticking your fingers in your ears and going "NUH UH"

    but go on king

  • Here's the point since you clearly missed it:

    If Brave gets even a moderate market share, Google will continue to mess them around like this as they really don't like people not seeing their adverts.

    Ultimately it's software, so the Brave devs can do pretty much whatever they want, limited by the available time and money. Google's influence extends to making that either easier or harder, in much the same way as they influence the Android ecosystem.

    Brave may not be particularly affected by this change, but that's beside the point. If Brave starts becoming a viable threat to Google, Google can easily start making changes to Chromium that target Brave and break the changes they make, just like they targeted uBlock Origin and broke it with Manifest V3. Brave might be able to work around these changes, but it costs time and developer labor (i.e. money) that would have been spent elsewhere, and if Google makes things hard enough on Brave, the devs could be forced to abandon the project.

  • This shit right here is why I hate to argue about labels or whether someone is/isn't liberal/leftist/centrist/conservative/whatever. At best, they're extremely vague, ill-defined, hyper-individualized labels that mean different things to different people. One person says "I'm a leftist," and they mean it as "I'm a progressive Democrat who supports heavily regulated capitalism, labor unions, LGBT rights, and am pro-choice." Another person says "I'm a leftist," and they mean it as "I'm an anarcho-communist who believes billionaires should have their wealth forcibly redistributed, and I don't give a rat's ass about LGBT or minority rights because they're a bourgeois distraction from class consciousness."

    I don't care about your label, I care about your policies. Those actually tell me something about you.

  • I personally did read it that way, but I will concede that perhaps I was being uncharitable.

    Regardless, I have seen people explicitly questioning whether it was faked elsewhere, and it makes me cringe every time. Talking about this serves literally zero purpose--it makes the left look crazy, any alternative explanations that make Trump look bad fall apart under the barest scrutiny, and it just serves to keep the assassination attempt in peoples' minds. There are literally hundreds of other things to complain about Trump over, talking about this doesn't help.

  • Okay, but what's the alternative? Trump faked the whole thing in some sort of false flag? He planted a fake gunman to get killed by the Secret Service, and put two of his close supporters in the hospital in critical condition, for a bump in the polls, when he was already confident that he could beat Biden? Is that really a more plausible explanation than "someone decided to kill Trump over the Epstein files, missed, and was killed"? I absolutely hate the guy, but I just don't buy it. I can accept "he got hit by a shard of glass instead of a bullet" or "he got grazed elsewhere and it just looks like he was hit in the ear," but claiming the whole thing was faked is just a bridge too far.

    We're supposed to be above this type of shaky conspiracy theory level thinking.

  • In the show just before these were taken, Omni-Man got in a fight with another hero named The Immortal, where The Immortal went for the eyes and tried to blind him by gouging them out. It definitely hurt him, but it didn't work, and Omni-Man ripped The Immortal in half shortly afterwards. (He got better.)

  • There's a world of difference between not having any profit because you're aggressively reinvesting it into your business, and not having any profit because you spent 16 million to keep the lights on for a service that brought in approximately 5% of what you spent.

  • Or anywhere relatively rural. I just got home from a long weekend in rural Minnesota/Wisconsin, and there's literally no viable way to run public transit out there in a manner that wouldn't either be so restrictive as to be useless, or would lose so much money it would be first on the block for service cuts (and therefore become useless). I'm talking "town of 600 residents, most people live on unincorporated county land on a farmstead, and the only grocery store in a 50 mile radius is a Dollar General" rural. Asking these folks to give up cars is an insane prospect.

  • People always assume that generative AI (and technology in general) will continue improving at the same pace it always has. They assume that there are no limits on the number of parameters, that there's always more useful data to train it on, and that things like physical limits in electricity infrastructure, compute resources, etc., don't exist. In five years generative AI will have roughly the same capability it has today, barring massive breakthroughs that result in a wholesale pivot away from LLMs. (More likely, in five years it'll be regarded similarly to how cryptocurrency is today, because once the hype dies down and the VC money runs out, the AI companies will have to jack up prices to a level where it's economically unviable to use in most commercial environments.)

  • I'm personally a little nervous about Harris--I remember the 2020 primary, where her only notable accomplishment was accusing Biden of being racist over his opposition to federal busing policies, after which she flamed out and shuttered her campaign two months before the first caucus while polling in the single digits in California. Admittedly, she doesn't have the same headwinds now that she had in 2020--she doesn't have to differentiate herself from over a dozen other candidates and she won't struggle to raise money--but she also made some unforced errors back then (e.g. coming out for total elimination of private insurance before revealing a plan that included private plans, or admitting her own policy on busing was essentially identical to Biden's).

    Hopefully, she'll run a much tighter campaign now since she'll inherit Biden's staff and can focus solely on attacking Trump, but I do have some concerns.

  • Or they think that the people above their station deserve those benefits--they genuinely believe the rich getting richer is a good thing, regardless of whether they'll see any benefit themselves. It's the mirror image of the progressive mindset of voting to raise your own taxes to help the needy.

  • Ah, so you're a literal 90s-era troll.

  • Yeah, that seems to be the end goal, but Goldman Sachs themselves tried using AI for a task and found it cost six times as much as paying a human.

    That cost isn't going down--by any measure, AI is just getting more expensive, as our only strategy for improving it seems to be "throw more compute, data, and electricity at the problem." Even new AI chips are promising increased performance but with more power draw, and everybody developing AI models seems to be taking the stance of maximizing performance and damning everything else. So even if AI somehow delivers on its promises and replaces every white collar job, it's not going to save corporations any actual money.

    Granted, companies may be willing to eat that cost at least temporarily in order to suppress labor costs (read: pay us less), but it's not a long-term solution.