Posts: 1 · Comments: 547 · Joined: 3 yr. ago

  • My first Linux distro was Ubuntu in 2006, with a graphical installer on the boot CD. It was revolutionary in my eyes, because WinXP was still installed through a curses-like text interface at the time. As I remember, installing Ubuntu was significantly easier than installing WinXP (though wireless Internet support was basically shit in either OS at the time).

  • Cutting edge chip making is several different processes all stacked together. The nations that are roughly aligned with the western capitalist order have split up responsibilities across many, many different parts of this, among many different companies with global presence.

    The fabrication itself ties together several different processes controlled by different companies. TSMC in Taiwan is the current dominant fab company, but a wave of competitors is close behind (Intel in the US, Samsung in South Korea).

    There's the chip design itself. Nvidia, Intel, AMD, Apple, Qualcomm, Samsung, and a bunch of other ARM licensees design chips, sometimes with the help of ARM itself. Many of these leaders are still American companies developing their designs in American offices. ARM is British. Samsung is South Korean.

    Then there's the actual equipment used in the fabs. The Dutch company ASML is the most famous, as they have a huge lead on the competition in manufacturing photolithography machines (although old Japanese competitors like Nikon and Canon want to get back in the game). But there are a lot of other companies specializing in specific equipment found in those fabs. The Japanese company Tokyo Electron and the American companies Applied Materials and Lam Research are in almost every fab in the West.

    Once the silicon is fabricated, packaging it into the little black packages to be soldered onto boards involves a bunch of other steps, with different companies specializing in each.

    Plus, advanced logic chips aren't the only type of chip out there. There are analog, signal processing, power, and sensor chips for embedded applications, where companies like Texas Instruments dominate on less cutting-edge nodes, and memory/storage chips, where the market is split among three companies: South Korea's Samsung and SK Hynix, and America's Micron.

    TSMC is only one link in this chain, standing on a tightly integrated ecosystem that it depends on. Nor is it limited to Taiwan: it owns fabs that are starting production in the US, Japan, and Germany.

    China is trying to replace literally every part of the chain with domestic manufacturing. Some parts are easier to replace than others, but insourcing the whole thing is going to be expensive, inefficient, and risky. Time will tell whether those costs and risks are worth it, but there's by no means a guarantee they can succeed.

  • No, X-rays are too energetic.

    Photolithography is basically shining some kind of electromagnetic radiation through a stencil so that specific lines are etched into the top "photoresist" layer of a silicon wafer. The radiation causes a chemical change wherever a photon hits, and the stencil blocks the photons in a particular pattern.

    Photons are subject to interference from other photons (and even from themselves) depending on wavelength, so smaller wavelengths (which mean higher energies) can resolve smaller and finer features, which ultimately means smaller transistors, so more fit in any given area of silicon.

    But once the energy gets too high, as with X-ray photons, a secondary effect ruins things. The photons carry too much leftover energy even after exposing the photoresist: they knock loose energetic electrons that scatter around underneath and expose resist on their own, so the boundary between the photoresist that has been exposed to radiation and the stuff that hasn't becomes blurry and fuzzy, which wrecks the fine detail.

    So much of the 20 years leading up to commercialized EUV machines was about finding the wavelength that optimizes feature size: small enough to print really fine details, but low enough in energy not to trigger those secondary reactions.
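
    To put rough numbers on that tradeoff, here's a back-of-envelope sketch using the standard Rayleigh criterion (CD = k1 * λ / NA) and photon energy (E = hc / λ). The k1 and NA values are illustrative assumptions on my part, not real tool specs:

    ```python
    # Rayleigh criterion CD = k1 * lambda / NA, plus photon energy E = h*c / lambda.
    # The k1 and NA values below are illustrative assumptions, not actual tool specs.

    H = 6.626e-34   # Planck constant, J*s
    C = 2.998e8     # speed of light, m/s
    EV = 1.602e-19  # joules per electronvolt

    def feature_size_nm(wavelength_nm: float, k1: float, na: float) -> float:
        """Smallest printable feature (critical dimension) per the Rayleigh criterion."""
        return k1 * wavelength_nm / na

    def photon_energy_ev(wavelength_nm: float) -> float:
        """Photon energy in electronvolts."""
        return H * C / (wavelength_nm * 1e-9) / EV

    # (name, wavelength in nm, assumed k1, assumed NA)
    for name, wl, k1, na in [
        ("DUV ArF",    193.0, 0.35, 1.35),  # immersion lithography
        ("EUV",         13.5, 0.35, 0.33),
        ("soft X-ray",   1.0, 0.35, 0.33),  # hypothetical, for comparison
    ]:
        print(f"{name:>10}: CD ~ {feature_size_nm(wl, k1, na):5.1f} nm, "
              f"photon ~ {photon_energy_ev(wl):6.1f} eV")
    ```

    At roughly 92 eV, EUV photons are already energetic enough that secondary electrons have to be managed; keV-scale X-ray photons make the blur far worse, which is the problem described above.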

  • 2 lanes in each direction with a middle lane? That's a big chunk of Texas, especially when weighted for population.

  • How will it reduce demand for parking? Do you envision the car will drop someone off and then drive away until it finds a parking spot that's farther than the person would want to walk?

    Plenty of high demand areas use human valet parkers for this issue. The driver drops off their car at the curbside destination, and then valets take the vehicle and park it in a designated area that saves the car driver some walking.

    Valet parking areas in dense neighborhoods also pack cars in tighter, because cars are allowed to block one another in. As a result, the same amount of paved area can accommodate more cars. That's why, in a lot of dense cities, garages with attendants you leave keys with are cheaper than self-park garages.

    Automated parking can therefore achieve higher utilization of paved parking areas a little bit away from high-pedestrian areas, in the same way that human valet parking already does today in dense walkable neighborhoods.

    > and people wouldn't be happy waiting 5-10 minutes for their car to navigate back to them.

    As with the comparison to valets, it's basically a solved problem where people already do put up with this by calling ahead and making sure the car is ready for them at the time they anticipate needing it.

    > Once again reinventing buses and trains

    Yes! And trains are very efficient. Even when cargo is containerized, where a particular shipping container may go from truck to train to ship, each containerized unit wants to take advantage of scale between major hubs while keeping the flexibility to travel between a specific origin and destination along the spokes. The container essentially hitches a ride on a larger, more efficient, high-volume transport for part of its journey, and breaks off from the pack for the portions where shared routing no longer makes sense.

    The default in most other states is that opposite-direction traffic on a divided highway doesn't have to stop. The states differ in what constitutes a divided highway, but generally at least 5 feet of space or a physical barrier between the lanes would qualify. In Texas, however, there is no exception for divided highways, and the key definitions are "controlled-access highway," which requires on/off ramps and physical barriers between traffic directions, or "different roadways."

    So for a 5-lane road where there are 2 lanes going in each direction with a center lane for left turns, Texas requires opposite direction traffic to stop, while most other states do not.

  • Waymos were violating a Texas state law that requires cars to stop when a school bus stops, even on roads with 2+ lanes separated by a paved median, and even for traffic going in the opposite direction:

    https://liggettlawgroup.com/wp-content/uploads/2019/09/School-bus-laws-img-1024x657.png

    The requirement for opposite-direction traffic on multi-lane roads to stop is pretty rare and might be unique to Texas. And yes, human drivers fuck this up all the time, too, leading to a lot of PSAs in Texas, especially for new residents.

  • > It's bizarre how if you drove through twenty bus stops in three days, you would not only lose your license but be in jail on multiple charges.

    This is a Texas law, rare among states, that requires cars to stop when school buses are loading or unloading passengers, including cars on the opposite side of the road going the other direction. The self-driving companies didn't program for that special case, so it actually is a relatively easy fix in software.

    And the human drivers who move to Texas often get tripped up by this law, because many aren't aware of the requirement.

  • Paradoxically, the large scale deployment of self driving cars will improve the walkability of neighborhoods by reducing the demand for parking.

    One can also envision building on self-driving tech to electronically couple closely spaced cars so that more passengers fit in a given stretch of road, such that throughput in passenger-miles per hour increases several times over. Cars could tailgate like virtual train cars, following each other at highway speeds with very little separation; lanes could be narrowed to fit more cars side by side in traffic; etc.
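
    As a rough illustration of that throughput claim (all the numbers here are made-up assumptions, not anything from a study):

    ```python
    # Rough lane-throughput model: vehicles/hour = speed / (vehicle length + gap).
    # All parameters are illustrative assumptions.

    def vehicles_per_hour(speed_kmh: float, car_length_m: float, gap_m: float) -> float:
        return (speed_kmh * 1000.0) / (car_length_m + gap_m)

    # A ~2 second human headway at 100 km/h works out to roughly a 55 m gap.
    human   = vehicles_per_hour(100, car_length_m=4.5, gap_m=55)
    coupled = vehicles_per_hour(100, car_length_m=4.5, gap_m=5)  # tight electronic coupling

    print(f"human-driven: ~{human:.0f} cars/hour/lane")
    print(f"coupled:      ~{coupled:.0f} cars/hour/lane ({coupled / human:.1f}x)")
    ```

    Even before narrowing any lanes, shrinking the following gap alone multiplies per-lane throughput several times over.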

  • Most importantly, the projections of fusion being 30 years away depended on assumptions about funding, and political considerations meant we never came anywhere close to those assumed funding levels:

    https://commons.wikimedia.org/wiki/File:U.S._historical_fusion_budget_vs._1976_ERDA_plan.png

    Fusion was never vaporware. We had developed working weapons relying on nuclear fusion in the 1950s. Obviously using a full-blown fission reaction to "ignite" the fusion reaction was never going to be practical for power generation, but the core physical principles were always known; the engineering and materials science just needed to catch up with alternative methods of igniting those fusion reactions and harvesting their energy.

    But we never really devoted the resources to figuring it out. Only more recently has there been significant renewed interest in funding the research to make it possible, and as you note, many different projects are hitting different milestones on the frontier of that research.

  • Writing 360 TB at 4 MB/s will take over 1000 days, almost 3 years. Retrieving 360 TB at a rate of 30 MB/s is about 139 days. That capacity-to-bitrate ratio is going to be really hard to use in a practical way, and it'll be critical to get that speed up. Even their target of 500 MB/s still means more than 8 days to read or write one full storage platter.
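
    For reference, the arithmetic (assuming decimal terabytes and megabytes):

    ```python
    # Transfer time for a 360 TB platter at various sustained rates.

    TB = 1e12  # bytes (decimal terabytes assumed)

    def transfer_days(capacity_bytes: float, rate_mb_per_s: float) -> float:
        seconds = capacity_bytes / (rate_mb_per_s * 1e6)
        return seconds / 86_400  # seconds per day

    capacity = 360 * TB
    for label, rate in [("write @ 4 MB/s", 4), ("read @ 30 MB/s", 30), ("target @ 500 MB/s", 500)]:
        print(f"{label:>18}: {transfer_days(capacity, rate):7.1f} days")
    # -> 1041.7 days, 138.9 days, 8.3 days
    ```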

  • I would argue, and I'm sure many historians and librarians and archivists would agree, that "general data backups" are essential human data. Storing the data allows for later analysis, which may provide important insights. Even things that seem trivial and unimportant today can provide very important insights later.

  • Honda won't honor my 10-year powertrain warranty just because I yeeted my 2-year-old Civic off a bridge into salt water!

  • Judge hands Lambo.com to Lamborghini after ruling owner acted in bad faith

  • That's why Research in Motion (the developer of the BlackBerry) had to buy the domain "rim.jobs" when the .jobs TLD was launched.

  • I don't think it'd be that simple.

    Any given website URL could go viral at any moment. In the old days, that might look like a DDoS that brings down the site (aka the slashdot effect or hug of death), but these days many small sites are hosted on infrastructure that is protected against unexpectedly high traffic.

    So if someone hosts deceptive content on their server and it can be viewed by billions, there would be a disconnect between a website's reach and its accountability (to paraphrase Spider-Man's Uncle Ben).

  • The company describes this generator as a solid state device, but the diagrams show the reliance on fluid/flow of hydrogen between the hot side and the cold side for moving some protons around. That seems to be something in between the semiconductor-based solid state thermoelectric generators that are already commonly understood and some kind of generator with moving solid parts.

    A closed loop of hydrogen still seems like a low-maintenance solution, but relying on the chamber staying filled with hydrogen gas seems like a potential maintenance/failure point as well.

  • The inventor/founder at the center of the article, Lonnie Johnson, was on the team at JPL that designed and implemented the thermoelectric generators (heated by radioactive decay from plutonium-238 pellets) on the Galileo spacecraft sent to Jupiter. So I would expect that he's more familiar with the thermodynamic and engineering challenges than even a typical expert.

    The PR fluff put out by the company mentions that the theoretical basis for this specific type of generator was worked out a while ago but needed materials science to advance to the point where this type of generator can be thermodynamically and commercially feasible.

    Looking at how this generator is supposed to work, it's interesting: it does rely on the movement of fluid, but in a totally closed loop, making it a bit different from the pure solid-state, semiconductor-based Seebeck generators that are already well known.

    The other claim in the article is that they believe it can be effective with lower temperature differentials than any previous technology, which might make a huge difference in whether it can be deployed in more useful places, and thereby make it economically feasible more easily than prior concepts.
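
    For context on why low differentials are such a hard target: the Carnot limit, eta = 1 - Tc/Th, caps the fraction of heat that any generator can turn into electricity, regardless of technology. A quick sketch with arbitrary example temperatures:

    ```python
    # Carnot limit eta = 1 - Tc/Th: the hard ceiling on any heat engine's efficiency.
    # Example temperatures are arbitrary; real devices land well below this ceiling.

    def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
        return 1.0 - t_cold_k / t_hot_k

    for t_hot_c in (600, 150, 60):  # hot-side temperatures in Celsius
        eta = carnot_efficiency(t_hot_c + 273.15, 25 + 273.15)  # 25 C cold side
        print(f"Th = {t_hot_c:3d} C -> Carnot limit {eta:5.1%}")
    ```

    Even a perfect engine extracts barely 10% from a ~60 °C source, so a technology that's economical at low differentials would open up a lot of heat that currently just gets wasted.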

    In the end, if these generators can output some electric voltage/current, they might take on generation characteristics similar to photovoltaics, which could mean that hooking them up to the grid could draw on some of the lessons learned from the rise of grid-scale solar.

  • Specifically, desktop RAM is slabs of silicon, placed into little packages, soldered onto circuit boards in DIMM or similar form, to be plugged into a motherboard slot.

    The AI demand is for the silicon itself, which gets attached with advanced packaging techniques to the same package as complex GPUs for very high bandwidth. Those same pieces of silicon are never put into DIMMs at all, so if they fall out of use they'll stay intertwined with chips in form factors that a consumer can't easily make use of.

    There's not really an easy way to bring that memory back into the consumer market, even after the AI bubble bursts.

  • Kinda off topic, but now I'm wondering whether Europeans think of phone size (and laptops and screens) in terms of inches rather than centimeters?