
  • The legal aspect is crystal clear. It's blatantly illegal to ban entry based on gender with very few exceptions (such as toilets or domestic violence shelters). I expect the court will be angry that it even went to court at all.

    The purpose of a judge is to settle disagreements. When both sides of a court case agree with the facts, then there is nothing for the judge to do, and it should not go to court at all. It should be settled out of court.

    It's likely to be a really short case: "did you have a policy to ban men?" "yes" "case closed; moving on to damages"... but the thing is, even though the meat of the case will be over almost instantly, there will still be weeks of work done in the lead-up to the case - by both legal teams, by the court, by the judge, preparing the jury if it's a jury trial, etc (imagine how angry your boss would be if they had to give you paid time off work, delaying project schedules, over this case).

    If you want to make a political statement, the court room is not the place to be doing that. At a minimum I'd expect the court to force one side to pay all of the legal fees of the other side, and on top of that the court might charge them with abuse of the court process which could result in punitive fines and also discipline against the lawyers involved (they could even be banned from practicing their craft). Judges don't have a sense of humour and they are not interested in political debates.

  • No no no, you've got it all wrong. Think of the children! They're ruining the future of our children!

  • While I agree - the part you're missing is the vast majority of TikTok users are outside the United States.

    TikTok doesn't want to sell. They want some sort of "independent" subsidiary where ByteDance still profits from (and controls) TikTok and the subsidiary worries about compliance with US law. But the thing is, that's already the current structure.

    I wouldn't be surprised if they refuse to sell and wind up being banned. ByteDance doesn't want to lose all their US customers, but they'd likely prefer that to selling.

  • I wonder if a tablet or laptop might be more appropriate for your wife? When I think "web" I want a keyboard or touch screen and a TV typically has neither. But in any case, I think you're making a mistake trying to have one device that can fit every use case. Your TV will have multiple inputs (and it will also probably be a smart TV).

    Plug some sort of Mini PC into the TV for your wife, and let your kids use the TV's built in smart features to watch TV or buy a set top box such as an Apple TV/Nvidia Shield/etc.

    PS: I would 100% use a projector, not a TV. Just project onto the wall (assuming you don't have wallpaper/etc).

  • Sure but in an emergency? They can handle being discharged as long as you don't go too far.

  • An inverter will not let you run your fridge until the battery is "dead". It's going to have a low voltage cut off, likely somewhere around 11 Volts, specifically to avoid damaging batteries by fully discharging them.

    How many hours you'll get from the battery mostly depends on your ambient air temperature and how often you open the fridge. Fridges don't use that much power when they're idle - mine averages about 90 watts (I'm not running off grid, but I do have rooftop solar and our system produces pretty charts showing consumption). A large car battery can sustain 90 watts for quite a long time - well over 2 hours. Probably closer to 10.
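    As a sanity check on those numbers, here's a minimal sketch of the arithmetic, assuming an illustrative 60Ah, 12V battery, drawing down only half the capacity (to spare the battery), through an ~85% efficient inverter - all figures are assumptions, not measurements:

```javascript
// Rough runtime estimate for a fridge running off a car battery through
// an inverter. All parameter values below are illustrative assumptions.
function runtimeHours(capacityAh, voltage, usableFraction, inverterEff, loadWatts) {
  const usableWh = capacityAh * voltage * usableFraction * inverterEff;
  return usableWh / loadWatts;
}

// 60 Ah * 12 V battery, draining only half, ~85% inverter efficiency, 90 W load:
const hours = runtimeHours(60, 12, 0.5, 0.85, 90); // about 3.4 hours at a steady 90 W
```

    Real fridges cycle their compressor on and off, so wall-clock runtime is usually much longer than the steady-state figure - which is how you get from a few hours to something closer to ten.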

    Running a fridge off a car battery long term is a bad idea. But in an emergency? Sure I'd totally do that - especially if your "emergency" is genuine such as needing to keep your medication cold. Just don't open the fridge unless you're taking your medication.

    LiFePO4 FTW!

    Sure. Way better than lead acid. But that doesn't mean lead acid is useless. When I lived off grid, LiFePO4 didn't exist and we got close to ten years (of daily use) out of our lead acid batteries. They were bigger than car batteries, and deep cycle ones too, but in an emergency a car battery would be a fine choice if it's the best one you have.

  • Apple said EVERYBODY MAKE ARM APPS NOW

    Uh, no. What they did is make sure x86 software still works perfectly. And not just Mac software - you can run x86 Linux server software on a Mac with Docker, and you can run DirectX x86 PC games on a Mac with WINE. Those third party projects didn't do it on their own, Apple made extensive contributions to those projects.

    I'd like to go into more detail but as a third party developer (not for any of the projects I mentioned above) I signed an NDA with Apple relating to the transition process before you could even buy an ARM powered Mac. Suffice to say the fruit company helped developers far and wide with the transition.

    And yes, they wanted developers to port software over to run natively, but that was step 2 of the transition. Step 1 was (and still is) making sure software doesn't actually need to be ported at all. Apple has done major architecture switches like this several times and is very good at them. This was by far the most difficult transition Apple has ever done, but it was also the smoothest one.

    It's 2024, and I still have software running on my Mac that hasn't been ported. If that software is slow, I can't tell. It's certainly not buggy.

  • Apple is working on models, but they seem to be focusing on ones that use tens of gigabytes of RAM, compared to tens of terabytes.

    I wouldn't be surprised if Apple ships an "iPhone Pro" with 32GB of RAM dedicated to AI models. You can do a lot of really useful stuff with a model like that... but it can't compete with GPT4 or Gemini today - and those are moving targets. OpenAI/Google will have even better models (likely using even more RAM) by the time Apple enters this space.

    A split system, where some processing happens on device and some in the cloud, could work really well. For example analyse every email/message/call a user has ever sent/received with the local model, but if the user asks how many teeth a crocodile has... you send that one to the cloud.
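    A hedged sketch of what that routing decision might look like - the keyword heuristic and function name here are made up for illustration, not anything Apple has announced:

```javascript
// Hypothetical router for a split on-device/cloud assistant: queries that
// touch the user's personal data stay on the local model, everything else
// goes to the cloud. The keyword list is a toy stand-in for a real classifier.
function routeQuery(query) {
  const personalKeywords = ["my email", "my messages", "my calls", "my photos"];
  const q = query.toLowerCase();
  return personalKeywords.some(k => q.includes(k)) ? "local" : "cloud";
}

routeQuery("summarize my email from last week");     // "local"
routeQuery("how many teeth does a crocodile have?"); // "cloud"
```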

  • The article says they're talking to OpenAI as well. "Exploring" a partnership doesn't mean you're actually going to partner with them - it could just be "what's your roadmap?"

    Apple also "explored" buying Bing and DuckDuckGo.

  • I don't see how it's any different to using Google as the default search engine in Safari.

    Also - phones don't have terabytes of RAM. The idea that a (good) LLM can run on a phone is ridiculous. Yes, you can run small AI models on there - but they're about as intelligent as an ant... ants can do a lot of useful work, but they're not on the same level as Gemini or ChatGPT.

  • You don't need to "have faith". Just test the code and find out if it works.

    For example earlier today I asked ChatGPT to write some javascript to make a circle orbit around another circle, calculating the exact position it should be for a given radius/speed/time. Easy enough to verify that was working.
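    The orbit calculation itself is just basic trigonometry - something along these lines (a sketch, not the exact code ChatGPT produced):

```javascript
// Position of a body orbiting a center point, for a given orbital radius,
// angular speed (radians per second) and elapsed time.
function orbitPosition(centerX, centerY, radius, speed, t) {
  const angle = speed * t;
  return {
    x: centerX + radius * Math.cos(angle),
    y: centerY + radius * Math.sin(angle),
  };
}

// After half a revolution (speed = PI rad/s, t = 1s) the body sits on the
// opposite side of the center point.
orbitPosition(0, 0, 10, Math.PI, 1); // { x: -10, y: ~0 }
```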

    Then I asked it to draw a 2D image of the earth, to put on that circle. I know what our planet looks like, so that was easy. I did need to ask several times with different wording to get the style I was looking for... but it was a hell of a lot easier than drawing one myself.

    Then the really tricky part... I asked it how to make a CSS inner shadow that is updated in real time as the earth rotates around the sun. That would've been really difficult for me to figure out on my own, since geometry isn't my strong point and neither is CSS.

    Repeated that for every other planet and moon in our solar system, added some asteroid belts... I got a pretty sweet representation of our solar system - not exactly to scale, but close, and fully animated - in a couple of hours. It would have taken a week if I had to use Stack Overflow.

  • Every video ever created is copyrighted.

    The question is — do they need a license? Time will tell. This is obviously going to court.

  • humans can recognize their bias

    Can they? I'm not convinced.

    As far as I know ChatGPT can't do that.

    You do it with math. Measure how many women hold C-level positions at the company, then introduce deliberate bias into the hiring process (human or AI) to steer the company towards a target of 50%.

    It's not easy, but it can be done. And if you have smart people working on it you'll get it done.
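    A toy sketch of the "do it with math" part - the 50% target and the weighting formula are purely illustrative, not a recommendation of any particular scheme:

```javascript
// Measure current representation, compare it to a target, and derive a
// crude screening weight to nudge the pipeline toward that target.
function share(womenCount, totalCount) {
  return womenCount / totalCount;
}

function screeningWeight(currentShare, targetShare = 0.5) {
  // >1 boosts under-represented candidates; approaches 1 as you hit the target
  return targetShare / Math.max(currentShare, 1e-9);
}

// e.g. 2 women out of 10 C-level roles:
share(2, 10);         // 0.2
screeningWeight(0.2); // 2.5
```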

  • Game engines are a lot simpler than a web rendering engine, so I'm not sure it's a good comparison.

    Gecko (the Firefox rendering engine) dates back to 1997. And KHTML - the common ancestor shared by Blink/WebKit (Chrome/Safari) - is maybe a year or two younger; I wasn't able to find a source. An insane amount of work, by thousands of people if you include minor contributors, has gone into those rendering engines.

    Creating another one would be an insane amount of work... assuming you want it to be competitive.

  • My understanding is China's rules are pretty wide open and effectively boil down to "if we don't like your use of AI, we will shut you down"... which isn't really much of a law. China would've done exactly the same thing without that law. There is some stuff in there about oversight/etc but that's about it.

    Most of China's AI legislation is actually focused on encouraging companies to invest in AI, it's not really about regulating it. The US also has a bunch of AI laws in the same vein as China.

    The EU legislation is much more targeted, and outright prohibits AI in a number of situations. For example, they have made it illegal to use AI for face recognition in "public spaces".

  • Someday the AI will get good, and I’ll want to chat with it securely.

    GPT4 is pretty good now. I'm not convinced it will be secure until we can run it locally on our own hardware.

    As soon as we can run it locally, I plan to do so - even if that means settling for a GPT4-quality LLM when far better models exist in the cloud.

    Sure it would be nice to have something that hallucinates less than GPT-4, but I kinda feel like striving for that is making perfect the enemy of good. I'd rather stick with GPT-4 quality, and focus on usability/speed/reliability/etc and let people keep working on the fancy theoretical stuff in the background as a lower priority.

    As Steve Jobs said, Real Artists Ship. They don't keep working on something forever until they can't think of any more improvements - otherwise you'd never ship.

    The habit of sending tokens right as they generate is a dumb sales gimmick

    Seems like it would be trivial to place tokens in a buffer on the server and send output to the client in, say, 1KB chunks (a TCP packet tops out around 1.5KB on typical networks anyway, and it needs a bit of space for headers).

    And if the entire output is less than 1KB... pad it out to that length. Padding to a fixed size is standard practice anywhere you care about security - e.g. if you dump the password table from a database, the hashes are all 256 bits. That's obviously not the real password length - most will be shorter, some longer - but whatever they are, they're cryptographically expanded (or shortened) to 256 bits.
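    A rough sketch of that buffering idea, assuming Node.js - the 1KB chunk size is the illustrative figure from the comment, not a protocol constant:

```javascript
// Buffer generated tokens and emit fixed-size chunks, padding the final
// chunk with zero bytes so output length reveals less about the response.
function chunkAndPad(text, chunkSize = 1024) {
  const bytes = Buffer.from(text, "utf8");
  const chunks = [];
  for (let i = 0; i < bytes.length; i += chunkSize) {
    let chunk = bytes.subarray(i, i + chunkSize);
    if (chunk.length < chunkSize) {
      chunk = Buffer.concat([chunk, Buffer.alloc(chunkSize - chunk.length)]);
    }
    chunks.push(chunk);
  }
  if (chunks.length === 0) chunks.push(Buffer.alloc(chunkSize)); // empty output still sends one chunk
  return chunks;
}
```

    In a real deployment you'd also need some framing (e.g. a length prefix inside the chunk) so the client knows where the payload ends - padding alone isn't a complete scheme.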

  • WAN throughput limit is nearly 1Gbps

    In my experience, exactly 1Gbps. It has 1Gbps network ports, and it maintains that throughput even with "advanced buffer management" / etc enabled.

    I'm sure it slows down if you have thousands of people using it, but OP isn't planning to do that, and anyone who is should buy one with more than four LAN ports anyway. This is a $60 router. If you're working with thousands of people, you should spend more than that.

  • Start with a Ubiquiti EdgeRouter X. It's a tiny little box that's easily hidden away and forgotten about, with five Ethernet ports (one for the internet, four for your home). The web interface is extensive: it has every feature you could ever want, plus plenty more you can safely ignore.

    It does not do wifi - and that's fine. Because for wifi to work well, the antenna has to be in a central location where you probably don't want half a dozen ethernet cables, power supplies, etc etc.


    You can use it with almost any wifi access point (or even a full wifi router, configured to not do any routing), but I recommend one of these: https://ui.com/us/en/wifi/flagship

    They have five current models on that page but there are more:

    • U6 Enterprise - designed to be used by several hundred people at the same time. Forget that one.
    • U7 Pro - the latest flagship Wifi 7 model (you said you don't even care about wifi 6, so probably forget that too)
    • U6 Pro - their previous flagship, with Wifi 6. Probably overkill for you but worth considering
    • U6 Long Range - basically the same device but with a physically larger antenna, extending the range to over 2,000 feet under ideal conditions
    • U6+ - a confusingly named cheaper variant that is also smaller. I would buy this one — not because it's cheaper, but because it's the smallest one.


    They are all ceiling mounted. Ceiling mounts are the way to go. Put them in the middle of a large central room in your home. It will provide perfect 5GHz coverage within your home, and your devices will seamlessly switch to 2.4GHz when you leave the house (it'll probably cover your entire back/front yard and maybe even a bit down the street, even if you don't buy the "Long Range" model).

    If your house has walls (or floors) that make it a Faraday cage, then you will need to buy more than one access point. Often only one is needed, but they are designed to work together if you require more (potentially thousands - these access points are used in football stadiums, music festivals, skyscrapers, etc).

    If you can't drill a hole in your ceiling, buy a thin (flat profile) white ethernet cable and use 3M adhesive strips to attach the cable and the wifi access point to your ceiling - nobody will notice unless they look up. You might need to patch up the paint when you move out, but ceiling paint is dirt cheap and very forgiving (because it's matte paint).

    If you refuse to go with a ceiling mounted access point, Ubiquiti has wall mounted and bench top variants. But they're not as good - ceilings are usually made of thin flimsy material while walls are usually solid structures. That makes a big difference when it comes to real world wireless performance and reliability.

    It's a bit more than your budget, but I'd argue it's money well spent. My EdgeRouter X and old Unifi access point are approaching 7 years old and they have never even been restarted except when we've had power failures or when I've moved house... totally worth the money. The only problem I ever had is about 5 years in I forgot the password and wanted to change a setting... I had to do a factory reset. No biggie.

    But if that's too expensive, you should be able to find older models of the same hardware (especially predecessors to the U6+). Like I said, mine is 7 years old and working perfectly. I could see myself still using it in another 7 years - anything where I need really high performance is connected to the EdgeRouter X with an ethernet cable.

    PS: one of the ethernet ports on your EdgeRouter X is a "PoE OUT" port. Plug your Unifi wifi access point into that port, and you can toss the power supply that came with the access point in a drawer or just the rubbish bin. The EdgeRouter X will provide power over the ethernet cable.


    Note: some Ubiquiti hardware is garbage, and the company seems to be going downhill lately. But they still have excellent products.