Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 12 Posts
  • 596 Comments
Joined 2 years ago
Cake day: June 23rd, 2023


  • The courts need to settle this: Do we treat AI models like a Xerox copier or an artist?

    If it’s a copier, then it’s the user who’s responsible when it generates copyright-infringing content, because they specifically requested it (via the prompt).

    If it’s an artist then we can hold the company accountable for copyright infringement. However, that would result in a whole shitton of downstream consequences that I don’t think Hollywood would be too happy about.

    Imagine a machine that can make anything… Like the TARDIS or Star Trek replicators. If someone walks up to the machine and says, “make me an Iron Man doll”, would the machine be responsible for that copyright violation? How would it even know it was violating someone’s copyright? You’d need a database of all copyrighted works that exist in order to perform such checks. It’s impossible.

    Even if you want OpenAI, Google, and other AI companies to pay for copyrighted works, there needs to be some mechanism for them to check if something is copyrighted. In order to do that you’d need to keep a copy of everything that exists (since everything is copyrighted by default).

    Even if you train an AI model with 100% ethical sources and paid-for content, it’s still very easy to force the model to output something that violates someone’s copyright. The end user can do it. It’s not even very difficult!

    We already had all these arguments in the ’90s and early 2000s, back when every sane person was fighting the music industry and Hollywood. They were trying to shut down literally all file sharing (even personal file shares) and search engines with the same argument. If they’d succeeded, it would’ve broken the entire Internet and we’d be back to using things like AOL.

    Let’s not go back there just because you don’t like AI.




  • Who knows, maybe ~~we might even let them come back to US soil~~ they might leave CECOT alive—some day.

    FTFY.

    When you kidnap people without due process on the regular you’re encouraging people to fight back. When you send them to foreign gulags known for literally torturing and killing people and forcing them into slave labor you’re encouraging them to fight back with deadly force.

    Dying—fighting for your life—sure sounds better than just giving up and letting ICE take you away. At this point the Trump Administration and ICE cannot be trusted to execute due process. They’re operating outside of the law. They are the lawless ones.

    They’re hoping for deadly conflict and I fear they’re going to get it. Though, on the plus side I’m 100% certain they will be unhappy with the outcome. In both the short and long term.



  • To be fair, the world of JavaScript is such a clusterfuck… Can you really blame the LLM for needing constant reminders about the specifics of your project?

    When a programming language has five hundred bazillion absolutely terrible ways of accomplishing a given thing—and endless absolutely awful code examples on the Internet to “learn from”—you’re just asking for trouble. Not just when trying to get an LLM to produce what you want, but also when trying to get humans to do it (a quick illustration below).

    This is why LLMs are so fucking good at writing Rust and Python: There’s only so many ways to do a thing, and the larger community pretty much always uses the same solutions.

    JavaScript? How can it even keep up? You’re using yarn today, but in a year you’ll probably be like, “fuuuuck, this code is garbage… I need to convert this all to [new thing].”
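    To make the “bazillion ways” point concrete, here’s a tiny sketch (written as TypeScript so it type-checks, but apart from two annotations it’s plain JavaScript; the variable names are just placeholders, not from anything real): five commonly seen ways to do the exact same trivial thing, doubling every number in an array. And that’s before you even get to the module-system and toolchain churn (CommonJS vs ES modules, npm vs yarn vs pnpm, and so on).

```typescript
// Five different, commonly seen ways to double every number in an array.
// All of them produce [2, 4, 6]; which one you'll find in a given codebase is anyone's guess.
const nums = [1, 2, 3];

// 1. Array method + arrow function
const a = nums.map((n) => n * 2);

// 2. Array method + old-school function expression
const b = nums.map(function (n) { return n * 2; });

// 3. for...of loop
const c: number[] = [];
for (const n of nums) c.push(n * 2);

// 4. Classic C-style for loop
const d: number[] = [];
for (let i = 0; i < nums.length; i++) d.push(nums[i] * 2);

// 5. forEach with side effects
const e: number[] = [];
nums.forEach((n) => e.push(n * 2));
```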




  • I’m not convinced that humans don’t reason in a similar fashion. When I’m asked to produce pointless bullshit at work my brain puts in a similar level of reasoning to an LLM.

    Think about “normal” programming: An experienced developer (one who’s self-trained on dozens of enterprise code bases) doesn’t have to think much at all about 90% of what they’re coding. It’s all bog-standard bullshit, so they end up copying and pasting from previous work, Stack Overflow, etc., because it’s nothing special.

    The remaining 10% is “the hard stuff”. They have to read documentation, search the Internet, and then—after all that effort to avoid having to think—they sigh and actually start thinking in order to program the thing they need.

    LLMs go through similar motions behind the scenes! Probably because they were created by software developers. But they still fail at that last 10%: The stuff that requires actual thinking.

    Eventually someone is going to figure out how to auto-generate LoRAs based on test cases, combined with trial and error, that then get used by the AI model to improve itself, and that is when people are going to be like, “Oh shit! Maybe AGI really is imminent!” But again, they’ll be wrong.

    AGI won’t happen until AI models get good at retraining themselves with something better than basic reinforcement learning. In order for that to happen, you need the working memory of the model to be nearly as big as the hardware that was used to train it. That, and loads and loads of spare matrix math processors ready to go for handling that retraining.



  • Pressing down too hard breaks the pushbutton functionality. It has nothing to do with stick drift.

    But since we’re talking about what causes things… You know what actually causes potentiometer-based sticks to fail fast? Sweat. That’s right!

    The NaCl in your sweat—even the tiniest microscopic amounts—is enough to degrade the coating and the brushes on potentiometers. The more your hands sweat, the faster your sticks will degrade.

    Got sweaty palms? Best to use hall effect sticks or save up to buy new ones on the regular! 😁

    Also: If you allow your controllers to get really cold and regularly (and rapidly) warm them up with your hands while playing, that can have a negative impact too.


  • At scale a hall effect stick is about $0.25 more than a potentiometer version. That’s about $38,000,000 if they sell as many Switch 2s as they sold Switches.

    Sooooo… Nothing. That’s basically a rounding error to Nintendo. Remember: That figure is over eight years.

    If it means no lawsuits (which cost millions on their own), fewer returns, and happier customers, it most certainly would be worth losing out on that ~$5 million/year (rough math at the end of this comment).

    The part you’re missing isn’t the cost. It’s the potential sales from replacement joycons. If you’re going to make a devil’s-advocate-style capitalist argument, that’s the one to make.

    I don’t think it’s any of that, though. I think it’s just management being too strict about design constraints (which I pointed out in an earlier comment).
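    For anyone who wants the back-of-envelope behind those figures, assuming roughly 152 million consoles (about what the original Switch has sold, and what gets you to ~$38M at $0.25 apiece):

    $0.25 × 152,000,000 ≈ $38,000,000 total
    $38,000,000 ÷ 8 years ≈ $4,750,000/year, i.e. the ~$5 million/year mentioned above.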


  • I design things that use hall effect sensors… The magnets in the joycons would not have interfered. Those magnets are:

    1. Too far away from the sticks to matter.
    2. Perpendicular/orthogonal to the magnets that would be in the sticks.

    Besides, you can cram hall effect stuff super tight just by inserting a tiny piece of magnetic shielding between components. Loads of products do this (mostly to prevent outside magnets from interfering but it’s the same concept). What is this magic magnetic shielding technology? EMI tape.

    There’s a zillion types and they’re all cheap and very widely used in manufacturing. I guarantee your phone, laptop, and many other electronics you own have some sort of EMI tape inside of them.

    Just about every assembly line that exists for mass-produced electronics has at least one machine that spits out tape, a bit like a CNC machine (or they pay the cheapest worker possible to place it).


  • Note: Hall effect sticks aren’t that much more expensive than potentiometer sticks (the difference is less than a dollar at scale). However, they require more space than potentiometer sticks, and if you’re doing something custom (which Nintendo always does), it can be a great big expense to change your manufacturing processes to insert tiny magnets into injection molded parts.

    I suspect the latter is the reason why they abandoned using hall effect or TMR sticks for the Switch 2.

    My wild speculation: Nintendo probably gave their engineers some design constraints that limited their ability to use off-the-shelf HE parts (everything I’ve seen really is too big). Rather than change the constraints slightly in order to make the product usable with such parts, they stayed stubborn in the hopes that their engineers would come up with an innovative solution. This sort of thing can work to force innovation at really big companies—if they’re not super top-down in terms of decision making.

    I’m sure that the Nintendo engineers came up with their own perfectly-workable HE/TMR stick designs but had them shot down in meetings where they discussed the manufacturing costs.





  • It is true. What’s your upload speed? 😁

    Fiber connections are symmetrical, meaning the download speed is the same as the upload speed.

    A gigabit fiber connection gives you 1 gigabit down and 1 gigabit up. A “gigabit” cable connection gives you 1.something gigabit down (it allows for spikes… Usually) and like 20-50 megabits upload.

    Fiber ISPs may still limit your upload speeds but that’s not a limitation of the technology. It’s them oversubscribing their (back end) bandwidth.

    Cable Internet really can’t give you gigabit uploads without dedicating half the available channels to that purpose, and that would actually interfere with their ability to oversubscribe lines. It’s complicated… But just know that the DOCSIS standards are basically hacks (that will soon run into physical limitations preventing them from providing more than 10 Gbps down) in comparison to fiber.

    The DOCSIS 4.0 standard claims to be able to handle 10 Gbps down and 6 Gbps up, but realistically that’s never going to happen. Instead, cable companies will use it to give people 5 Gbps connections with 100 megabit uploads because they’re bastards.