  • The Walkman and other tape players were so much superior to CD players for portability and convenience. Batteries lasted a lot longer in portable tape players than in CD players. Tapes could be remixed easily, so you could bring a specific playlist (or 2 or 3) with you. Tapes were also much more resilient than CDs. The superior audio quality of CDs didn't matter as much when you were using 1980s-era headphones. And even with a boombox, a spinning disc was still susceptible to skips from bumps or movement, and the higher-speed motor and more complex audio processing drained batteries much faster. Back then, rechargeable batteries weren't really a thing, so people were just burning through single-use alkaline batteries.

    It wasn't until the 90s that decent skip protection, a few generations of miniaturization and improved battery life, and better headphones made portable CD players competitive with portable tape players.

    At the same time, cars started to get CD players, but a typical person doesn't buy a new car every year, so it took several years before a decent share of the cars on the road had them.



  • NASA funded SpaceX based on hitting milestones in its COTS program. Those milestones were just as available to Boeing and Blue Origin, but they had less success meeting them and turning a profit under fixed-price contracts (as opposed to the traditional cost-plus contracts). It's still NASA-defined standards, only with the risk and uncertainty offloaded onto the private contractors, which was great for SpaceX and terrible for Boeing.

    But ultimately it’s still just contracting.


  • NASA has always been dependent on commercial for-profit entities as contractors. The Space Shuttle was developed by Rockwell International (which was later acquired by Boeing). The Apollo Program relied heavily on Boeing, Douglas Aircraft (which later merged into McDonnell Douglas, and then into Boeing), North American Aviation (which later became Rockwell and was acquired by Boeing), and IBM. A lot of the cutting-edge work of that era came from government contracts throwing money at private corporations.

    That’s the whole military industrial complex Eisenhower was talking about.

    The only difference with today is that space companies have other customers to choose from, not just NASA (or the Air Force/Space Force).




  • Physics don’t change fundamentally between 6 meters and 120 meters

    Yes, they do. The mass-to-strength ratio of structural components changes with scale. So does the thrust-to-weight ratio of a rocket and its fuel. So does heat dissipation (governed by the ratio of surface area to mass).

    And I don’t know shit about fluid dynamics, but I’m skeptical that things scale cleanly, either.

    Scaling upward runs into challenges that aren't apparent at small sizes; the square-cube sketch below shows the basic issue. That goes for everything from engineering bridges to buildings to cars to boats to aircraft to spacecraft.
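
    To make the square-cube point concrete, here's a back-of-the-envelope sketch (toy numbers only, nothing specific to any real vehicle): scale every linear dimension by a factor k, and mass grows like k³ while cross-sectional strength and radiating surface area only grow like k².

    # Square-cube law sketch: scale a structure's linear dimensions by k and
    # watch the mass, strength, and cooling ratios diverge. Toy numbers only.

    def scaled_properties(k: float) -> dict:
        mass = k ** 3        # mass tracks volume (length^3)
        strength = k ** 2    # load-bearing capacity tracks cross-section (length^2)
        surface = k ** 2     # heat-shedding area tracks surface (length^2)
        return {
            "mass": mass,
            "strength_per_mass": strength / mass,  # falls off as 1/k
            "surface_per_mass": surface / mass,    # falls off as 1/k
        }

    for size_m in (6, 120):
        k = size_m / 6  # scale factor relative to a 6 m baseline
        p = scaled_properties(k)
        print(f"{size_m:>3} m: mass {p['mass']:6.0f}x baseline, "
              f"strength/mass {p['strength_per_mass']:.3f}, "
              f"surface/mass {p['surface_per_mass']:.3f}")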





  • Google has been quietly doing that for more than 10 years; we just didn't start calling this stuff AI until 2022. Google has had offline speech-to-text (and always-on local hotword detection for "Hey Google") since the 2013 Moto X, and added hardware support for processing images in the camera app as they were captured.

    The tasks they offloaded onto the Tensor chip starting in 2021 opened up more image-editing features (various algorithms for tuning and editing images), keyboard corrections and spelling/grammar recommendations that got better (and then worse), audio processing (better noise cancellation on calls, an always-on Shazam-like song-recognition function that worked entirely offline), etc.

    Apple went harder at applying those AI features to on-device language processing, and at making it obvious, but personally I think the tech industry as a whole has grossly overcorrected toward flashy AI, pushed beyond the limits of what the tech can competently do, instead of the quiet background stuff that just worked while using specialized hardware to handle the tensor math efficiently.


  • It's a chain of trust; you have to trust the whole chain.

    Including the entire other side of the conversation. E2EE in a group chat still exposes the group chat if one participant shares their own key (or the chats themselves) with something insecure. Obviously any participant can copy and paste things, archive/log/screenshot things. It can all be automated, too.

    Take, for example, iMessage. We have pretty good confidence that Apple can’t read your chats when you have configured it correctly: E2EE, no iCloud archiving of the chats, no backups of the keys. But do you trust that the other side of the conversation has done the exact same thing correctly?

    Or take the stupid case of senior American military officials accidentally adding a prominent journalist to their war-plans Signal chat. That wasn't a technical failure of Signal's encryption, but a mistake by one of the participants, who invited the wrong person, who then published the chat to the world.
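
    As a toy illustration of the group-chat point (a hypothetical sketch using the third-party cryptography package, not how Signal or iMessage actually work): the transport only ever sees ciphertext, but every member decrypts to plaintext, and nothing in the crypto stops one member from logging it.

    # Toy E2EE group chat: the wire carries only ciphertext, but any member
    # holding the group key gets plaintext, and E2EE can't stop them leaking it.
    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    GROUP_KEY = Fernet.generate_key()  # shared by every group member

    class Member:
        def __init__(self, name: str, leaky: bool = False):
            self.name = name
            self.cipher = Fernet(GROUP_KEY)
            self.leaky = leaky  # insecure backups, screenshots, malware, etc.

        def send(self, text: str) -> bytes:
            return self.cipher.encrypt(text.encode())  # ciphertext on the wire

        def receive(self, token: bytes) -> None:
            plaintext = self.cipher.decrypt(token).decode()
            if self.leaky:
                print(f"[{self.name} leaks] {plaintext}")  # crypto can't prevent this

    alice = Member("alice")
    bob = Member("bob", leaky=True)
    wire = alice.send("the war plans")  # opaque to the server in transit
    bob.receive(wire)                   # decrypted by a member, then leaked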



  • It wasn't the buffer itself that drew power. It was the need to physically spin the disc faster in order to read ahead and fill the buffer, so the drive drew more power even if you kept it perfectly stable. And if a read actually did skip, it had to seek back to where it left off and rebuild the buffer all over again.
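
    A rough simulation of that mechanism (made-up numbers, not the specs of any real player): the drive reads at twice playback speed to keep a read-ahead buffer full, and after a bump it re-seeks while playback drains the buffer.

    # Anti-skip buffering sketch: the drive reads at 2x playback speed to fill
    # a read-ahead buffer; a bump knocks the head off track, and playback
    # survives on the buffer while the drive re-seeks. Numbers are invented.
    READ_RATE = 2.0    # seconds of audio read per second of disc spinning
    PLAY_RATE = 1.0    # seconds of audio consumed per second
    BUFFER_CAP = 10.0  # seconds of audio the RAM buffer holds
    RESEEK_TIME = 3    # seconds the head needs to find its place again

    buffer = 0.0
    reseek_left = 0
    for t in range(16):
        if t == 8:
            reseek_left = RESEEK_TIME  # bump! the head loses its place
        if reseek_left > 0:
            reseek_left -= 1           # no new data while seeking (still burning power)
        else:
            buffer = min(BUFFER_CAP, buffer + READ_RATE)
        buffer -= PLAY_RATE            # playback keeps draining the buffer
        print(f"t={t:2d}s  buffer={max(buffer, 0.0):4.1f}s  {'SKIP!' if buffer < 0 else 'ok'}")
        buffer = max(buffer, 0.0)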




  • They’re actually only about 48% accurate, meaning that they’re more often wrong than right and you are 2% more likely to guess the right answer.

    Wait, what are the Bayesian priors? Are we assuming the baseline is 50% true and 50% false? And what is the error rate for false positives versus false negatives? All of these matter for determining, after the fact, how much probability to assign to the test being right or wrong.

    Put another way, imagine a stupid device that just says “true” literally every time. If I hook that device up to a person who never lies, then that machine is 100% accurate! If I hook that same device to a person who only lies 5% of the time, it’s still 95% accurate.

    So what do you mean by 48% accurate? That’s not enough information to do anything with.
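
    To make the point concrete (a sketch with assumed numbers, since "48% accurate" on its own is underspecified): a single accuracy figure collapses sensitivity, specificity, and the base rate into one number, and the base rate can dominate it completely.

    # Why one "accuracy" number is meaningless without priors: the same device
    # gets wildly different accuracy depending on the base rate of lying.
    # All numbers below are assumptions for illustration.
    def accuracy(sensitivity: float, specificity: float, p_lie: float) -> float:
        # P(correct) = P(flag|lie) * P(lie) + P(pass|truth) * P(truth)
        return sensitivity * p_lie + specificity * (1 - p_lie)

    def p_lie_given_flag(sensitivity: float, specificity: float, p_lie: float) -> float:
        # Bayes' rule: P(lie | flagged)
        p_flag = sensitivity * p_lie + (1 - specificity) * (1 - p_lie)
        return sensitivity * p_lie / p_flag

    # The "stupid device" that always says "truthful": it never flags a lie
    # (sensitivity 0) and always passes truth (specificity 1).
    for p_lie in (0.0, 0.05):
        print(f"always-truthful device, P(lie)={p_lie:.2f}: "
              f"accuracy={accuracy(0.0, 1.0, p_lie):.0%}")

    # A detector with assumed 60% sensitivity and 80% specificity: same device,
    # very different accuracy and posterior depending on the prior.
    for p_lie in (0.05, 0.5):
        print(f"60/80 detector, P(lie)={p_lie:.2f}: "
              f"accuracy={accuracy(0.6, 0.8, p_lie):.0%}, "
              f"P(lie|flagged)={p_lie_given_flag(0.6, 0.8, p_lie):.0%}")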


  • Yeah, from what I remember, Web 2.0 meant services that could be interactive within the browser window, without loading a whole new page each time the user submitted information through an HTTP POST. "Ajax" was the hot buzzword among web/tech companies.

    Flickr was mind-blowing in that you could edit photo captions and titles without navigating away from the page. Gmail could refresh the inbox without reloading the sidebar. Google Maps was impressive in that you could drag the map around and zoom within the window while it fetched the graphical elements it needed on demand.

    Or maybe Web 2.0 included the ability to implement state on top of the stateless HTTP protocol: you could log into a page and it would show only the new/unread items for you personally, rather than showing literally every visitor the exact same thing for the exact same URL. (The sketch at the end shows the basic trick.)

    Social networking became possible with Web 2.0 technologies, but I wouldn't define Web 2.0 as inherently social. User interaction with a service was the core, and whether the service connected user to user through that service's design was kinda beside the point.
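
    Here's a minimal sketch of that state-over-stateless trick (Python standard library only; names and numbers are invented, not any particular framework's API): the server hands each visitor a session cookie and uses it to personalize the very same URL.

    # Toy demo of state on top of stateless HTTP: each request stands alone,
    # but a Set-Cookie session id lets the server recognize returning visitors
    # and show them something personal at the same URL.
    # Run it and load http://localhost:8000/ twice in a browser.
    import uuid
    from http.server import BaseHTTPRequestHandler, HTTPServer

    sessions: dict[str, int] = {}  # session id -> visit count (server-side state)

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            cookie = self.headers.get("Cookie", "")
            sid = cookie.removeprefix("sid=") if cookie.startswith("sid=") else None
            if sid not in sessions:
                sid = uuid.uuid4().hex  # new visitor: mint a session id
                sessions[sid] = 0
            sessions[sid] += 1
            body = f"Hello, session {sid[:8]}: visit #{sessions[sid]}".encode()
            self.send_response(200)
            self.send_header("Set-Cookie", f"sid={sid}")
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8000), Handler).serve_forever()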