  • Sorry, that’s incorrect.

    Autism is commonly comorbid with mental health disorders (aka “mental illnesses”) like anxiety, depression, ADHD, etc., as well as with intellectual developmental disorders, but autism is still considered, at worst, a neurodevelopmental disorder, regardless of where an individual falls on the spectrum.

    Both the DSM-5 and the ICD-11 are in agreement about this, for what that’s worth, but you could also just do a search for “Is autism a mental illness?” on DuckDuckGo, Kagi, Searx, Bing, Google, or whatever, if you want to confirm.

  • The lady was autistic if I remember correctly. She had a boyfriend who also had a mental illness.

    Autism isn’t a mental illness.

  • Copyright applies to unfinished works, too. There are many reasons it might not protect an unfinished work, but those reasons are still relevant even for finished works.

    If someone steals your physical drawing, that’s theft. If they take a picture of it, then use the picture - or your picture + modifications - without your permission, particularly in a commercial work, then that’s copyright infringement, but not theft. If they steal your physical drawing and then take a picture and so on, then it’s both theft and copyright infringement.

    Most likely this wasn’t considered copyright infringement because the allegedly copied art isn’t copyrightable, e.g., game mechanics; or because the plaintiff didn’t own the copyrights and thus couldn’t sue (possibly the art was still copyrighted by the original artists, having never been purchased; possibly it consisted of stock assets that the defendant re-purchased). There are any number of reasons. However, “the work wasn’t published” isn’t one of them.

    On the other hand, it’s quite likely they were able to sue for theft of trade secrets for that very reason. And they might have chosen to do that simply because proving copyright infringement is much more difficult.

  • This happened because the developers allegedly used assets from a game called P3, which was never released, and therefore not subject to copyright infringement claims.

    That isn’t how copyright works. Copyright is awarded upon creation of a work, not upon release.

  • What a misleading, clickbait title:

    Mozilla moves away from open source

    When the author really meant:

    Mozilla does a thing I don’t like

  • Did you turn it off by using Invidious?

  • OP is also in the allegedly ultra rare camp of “successfully configured Jellyfin and lived to tell the tale.” Not what I’d expect of someone unable to configure Plex correctly. I’ve not set up a Plex server myself but my guess is it wasn’t clear that it was misconfigured - it did work previously, after all.

  • If they’re calling it remote streaming when you’re on the same (local) network, that’s not exactly intuitive. I’d say OP’s phrasing was fair.

  • The witch turned the creep into a woman and the spell was complete by the time she flew away. Unfortunately, like many women, the creep was born with the body of a man (she’s AMAB). Maybe the witch could have changed her body, too, but that would have made things far too easy, given that the point of the curse was to teach her empathy.

  • Sublime Text seems to have it. I don’t personally use it, but it’s a pretty competent editor, and it’s not in the feature table on the Wikipedia page someone else shared.

    Sublime Text 3 was limited to folding by indentation; I’m not sure whether that’s still true in Sublime Text 4, but the Markdown plugin docs have a note on folding and mention you can fold by section and by heading level.

  • Your comment wasn’t in a meta discussion; it was on a post where they were venting about people complaining about them having a women-only space. There was certainly no indication that the regular community rules didn’t apply, nor any invitation for men to comment.

    Commenting that it’s hostile for them to have a women-only space might be ironic, but couldn’t possibly be good faith in that context. And if the same mod banned you from multiple communities, then either it was out of line and you could appeal it, or it was warranted due to the perceived likelihood of you causing problems in those other communities and the perceived low likelihood of you contributing anything of value to them.

    Even now, you’re acting like the mod(s) banned you because of her / their emotions. You don’t see how that’s misogynistic?

    It makes logical sense for bad actors to be preemptively banned. Emotions have nothing to do with it.

  • You got the idea!

  • We’re in c/showerthoughts. “What if my grandma was a bike?” would fit right in

  • To be clear, I agree that the line you quoted is almost assuredly incorrect. If they changed it to "thousands of deepfake apps powered by open source technology" then I'd still be dubious, simply because it seems weird that there would be thousands of unique apps that all do the same thing, but that would at least be plausible. Most likely they misread something like https://techxplore.com/news/2025-05-downloadable-deepfake-image-generators.html, took "model variant" (which in this context generally means a LoRA) to mean "app," and jumped too hard on the "everything is an open source app" bandwagon.

    I did some research - browsing https://github.com/topics/deepfakes (which lists 153 repos total, many of them focused on deepfake detection), searching DDG, clicking through to related apps from GitHub repos, etc.

    In terms of actual open source deepfake apps, let's say an "app" is, at minimum, a piece of software you can run locally, given access to arbitrary consumer-targeted hardware - generally at least an Nvidia desktop GPU - and count it regardless of whether you have to write custom code to use it (so long as the code is included), use a CLI, hit an API, use a GUI app, a web browser, or a phone app. Considering only apps whose primary use case is creating deepfakes by face-swapping videos, there are nonetheless several:

    • Roop
    • Roop Unleashed
    • Rope
    • Rope Live
    • VisoMaster
    • DeepFaceLab
    • DeepFaceLive
    • Reactor UI
    • inswapper
    • REFace
    • Refacer
    • Faceswap
    • deepfakes_faceswap
    • SimSwap

    If you included forks of all those repos, then you'd definitely get into the thousands - the sketch below shows one way to tally that.
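
    For a rough sense of the fork numbers, here's a minimal sketch (mine, not the article's) that sums direct fork counts via GitHub's public REST API. The owner/name slugs are my best guesses and may be outdated; unauthenticated requests are also rate-limited to 60/hour:

    ```python
    # Minimal sketch: tally direct forks of a few of the repos above.
    # The repo slugs are assumptions - verify them before trusting the total.
    import json
    import urllib.request

    REPOS = [
        "s0md3v/roop",          # Roop (now archived)
        "deepfakes/faceswap",   # deepfakes_faceswap
        "iperov/DeepFaceLab",
        "iperov/DeepFaceLive",
        "neuralchen/SimSwap",
    ]

    total = 0
    for repo in REPOS:
        with urllib.request.urlopen(f"https://api.github.com/repos/{repo}") as resp:
            info = json.load(resp)
        print(f'{repo}: {info["forks_count"]} direct forks')
        total += info["forks_count"]  # forks-of-forks aren't counted here

    print("total:", total)
    ```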

    If you count video generation applications that can imitate people using, at minimum, img2img plus one LoRA, or two LoRAs, then these would be included as well:

    • Wan2GP
    • HunyuanVideoGP
    • FramePack Studio
    • FramePack eichi

    And if you count the tools that integrate those, then these probably all count:

    • ComfyUI
    • Invoke AI
    • SwarmUI
    • SDNext
    • Automatic1111 SD WebUI
    • Fooocus
    • SD WebUI Forge
    • MetaStable
    • EasyDiffusion
    • StabilityMatrix
    • MochiDiffusion

    If the potential criminals use easier, ready-made (commercial) web services instead of buying an RTX 5090, learning ComfyUI, dealing with the steep learning curve, etc., we’d know we have to primarily fight those apps and services, not necessarily the generative AI tools.

    To answer that, someone would need to actually test out the deepfake apps and compare their outputs. I know they get used for deepfakes because I've seen the outputs, but as far as I know, every major platform - e.g., Kling, Veo, Runway, Sora - has safeguards in place to prevent nudity and sexual content. I'd be very surprised if they were being used en masse for this.

    As for the SaaS apps used by people seeking to create nonconsensual, sexually explicit deepfakes... my guess is those aren't really part of the figure being referenced in this article. It really seems like they're talking about doing video generation with LoRAs rather than doing face swaps.

  • Without searching for them myself to confirm, it’s plausible, especially if you take it to mean “apps leveraging open source AI technology.”

    There are a ton of open source AI repos, many of which provide video-related capabilities. The number of truly open source AI models is very small, but “open weight” AI models are commonly referred to as open source, and from the perspective of building an app, fine-tuning the model, or creating LoRAs for it, open weight is good enough.

    Some LoRAs come with details on the training data set, so even if the base model is only open weight, the LoRA itself can still be open source.

    Until recently, Civitai had LoRAs of famous people, e.g., Emma Watson, and apparently of regular people, too. There was a post here last week, I think (or maybe in some other community), linking to a 404 Media article about those being taken down thanks to credit card processors drawing a line in the sand at deepfake imagery.

    ComfyUI is a self-hostable AI platform (and there are also many hosts that offer it) that lets you build a workflow from multiple nodes, each of which generally integrates some separately released open source AI tech; see the sketch after this list. For example, there are nodes that add the capability to perform:

    • image generation with Stable Diffusion, Flux, HiDream, etc.
    • TTS with KokoroTTS, Piper, F5-TTS, etc.
    • video generation with AnimateDiff, Cog, Wan2.1, Hunyuan, FramePack, FantasyTalking, Float
    • video modification, e.g., LatentSync, which takes a video and lip-syncs it to a provided audio file
    • image manipulation, e.g., ControlNet, img2img, inpainting, outpainting, or even specific tasks like “remove the background” or “change the face to this other face”
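
    To make the node/workflow idea concrete, here's a minimal sketch of ComfyUI's stock text-to-image graph submitted to its HTTP API. It assumes a local instance on the default port 8188; the checkpoint filename and prompts are placeholders:

    ```python
    # Minimal sketch: ComfyUI's default text-to-image workflow in API format.
    # Nodes are keyed by id; inputs wire to other nodes as ["node_id", output_index].
    import json
    import urllib.request

    workflow = {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "sd15.safetensors"}},  # placeholder model file
        "2": {"class_type": "CLIPTextEncode",  # positive prompt
              "inputs": {"text": "a watercolor lighthouse", "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",  # negative prompt
              "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 512, "height": 512, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                         "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "demo"}},
    }

    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    print(urllib.request.urlopen(req).read().decode())  # queues the job, returns an id
    ```

    Swapping in different nodes (a video model, a LoRA loader, a face-swap node) is the same mechanic, which is why "app" counts get fuzzy so fast.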

    If you think of a deepfake as just a video of a recognizable person doing a thing, you can create one by:

    • taking an existing video and swapping the face in each frame
    • using face-swap-specific video tools, e.g., Roop
    • an image-to-video workflow, e.g., with Wan: “the person dances.” You can expand the options available with Wan by using LoRAs.
    • a text-to-video workflow, where you use a LoRA for that person
    • an image+audio-to-video workflow, e.g., with FantasyTalking/Float, creating a lip-sync to an audio file you provide
    • a video+audio-to-video workflow with LatentSync, to make it look like they said something different, particularly using a TTS that does voice cloning (like F5-TTS) to generate the new audio

    My suspicion is that most of the AI apps that are available online are just repackaging these open source technologies, but are not open source themselves. There are certainly some, of course, though the ones I know of are more generic and not deepfake specific (ComfyUI, SwarmUI, Invoke AI, Automatic1111, Forge, Fooocus, n8n, FramePack Studio, FramePack Eichi, Wan2GP, etc.).

    This isn’t a licensing issue: many open source projects use MIT or Apache licenses, which don’t require you to open source derivative products. Even the GPL wouldn’t require it for a SaaS web app; only the AGPL would, and even then, only the changes to the AGPL-licensed library would need to be shared - the front-end app could still be proprietary.

    The other issue could be what they think “app” means. If you count a LoRA as an app, then the sentence might be accurate. I don’t know for sure that there were thousands of LoRAs of people with published training data, but I wouldn’t be surprised if that were the case.

  • Have you tried just setting the resolution to 1920x1080, or are you literally trying to run AAA games at 4K on a card that was targeting 1080p when it was released four and a half years ago?

  • It’s the new hyped-up version of “no-code” or “low-code” solutions, but with AI, so you have more flexibility to footgun.

  • Not any lazier. Script kiddies didn’t write the code themselves, either.

  • It was already known before the whistleblower that:

    1. Siri inputs (all STT at that time, really) were processed off device
    2. Siri had false activations

    The “sinister” thing that we learned was that Apple was reviewing those activations to see if they were false, with the stated intent (as confirmed by the whistleblower) of using them to reduce false activations.

    There are also black-box methods to verify that data isn’t being sent and that particular hardware (like the microphone) isn’t being used - a sketch of the simplest version is below - and there are people who look for vulnerabilities as a hobby. If the microphones on the most and second-most popular phone brands (iPhone, Samsung) were secretly recording all the time, evidence of that would be easy to find and would be a huge scoop - why haven’t we heard about it yet?
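
    To illustrate that black-box idea, here’s a minimal sketch (mine; scapy is an assumed dependency and the phone IP is a placeholder): passively watch the phone’s upstream traffic while it sits idle. Encrypted payloads can’t be read, only sized, but continuous audio upload would still show up as a steady stream of outbound bytes:

    ```python
    # Minimal sketch: measure upstream bytes from a phone while it's idle.
    # Requires root and a vantage point that can see the phone's traffic
    # (e.g., an access point you control or a mirrored switch port).
    from scapy.all import IP, sniff

    PHONE_IP = "192.168.1.42"  # placeholder - substitute your phone's address
    sent = 0

    def tally(pkt):
        global sent
        if IP in pkt and pkt[IP].src == PHONE_IP:
            sent += len(pkt)  # count bytes the phone sends out

    sniff(filter=f"host {PHONE_IP}", prn=tally, timeout=600)  # 10-minute window
    print(f"upstream bytes from {PHONE_IP} in 10 minutes: {sent}")
    ```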

    Snowden and WikiLeaks dumped a huge amount of info about governments spying, but nothing in there involved always-on microphones in our cell phones.

    To be fair, an individual phone is a single compromise away from actually listening to you, so it still makes sense to avoid having sensitive conversations within earshot of a wirelessly connected microphone. But generally that’s not the concern most people should have.

    Advertising tracking is much more sinister and complicated and harder to wrap your head around than “my phone is listening to me,” and as a result it makes for a much less glamorous story, but there are dozens, if not hundreds or thousands, of stories out there about how invasive advertising companies’ methods are, about how they know too much, etc. Think about what LLMs do with text - the level of prediction they can do. That’s what ML algorithms can do with your behavior.

    If you’re misattributing what advertisers know about you to the phone listening and reporting back, then you’re not paying attention to what they’re actually doing.

    So yes - be vigilant. Just be vigilant about the right thing.