Posts: 1 · Comments: 547 · Joined: 3 yr. ago

  • In terms of usage of AI, I'm thinking "doing something a million people already know how to do" is probably on more secure footing than trying to go out and pioneer something new. When you're in the realm of copying and maybe remixing things for which there are lots of examples and lots of documentation (presumably in the training data), I'd bet large language models stay within a normal framework.

  • The hot concept around the late 2000's and early 2010's was crowdsourcing: leveraging the expertise of volunteers to build consensus. Quora, Stack Overflow, Reddit, and similar sites came up in that time frame where people would freely lend their expertise on a platform because that platform had a pretty good rule set for encouraging that kind of collaboration and consensus building.

    Monetizing that goodwill didn't just ruin the look and feel of the sites: it permanently altered people's willingness to participate in those communities. Some, of course, don't mind contributing. But many do choose to sit things out when they see the whole arrangement as enriching an undeserving middleman.

  • Most Android phones with always-on displays show a grayscale screen that is mostly black. But iPhones introduced always-on with 1 Hz screen refresh and still show a less saturated, less bright version of the color wallpaper on the lock screen.

  • Joke's on him, I'm putting my website at 305.domain.tld.

  • It's actually pretty funny to think about other AI scrapers ingesting this nonsense into the training data for future models, too, where the last line isn't enough to get the model to discard the earlier false text.

  • On phones and tablets, variable refresh rates make an "always on" display feasible in terms of battery budget, where you can have something like a lock screen turned on at all times without burning through too much power.

    On laptops, this might open up some possibilities of the lock screen or some kind of static or slideshow screensaver staying on longer while idle, before turning off the display.

  • While we're at it, I never understood why the convention for domain names wasn't left-to-right: TLD, domain, subdomain. Most significant on the left is how we do almost everything else, including numbers and ISO 8601 dates.
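
    A minimal sketch of that ordering, assuming a plain label reversal (the function name is mine; Java-style reverse-domain package names already read this way):

    ```python
    def most_significant_first(domain: str) -> str:
        """Reorder domain labels so the TLD comes first, like ISO 8601 dates
        or reverse-domain identifiers such as com.example.www."""
        return ".".join(reversed(domain.split(".")))

    print(most_significant_first("www.example.com"))  # -> com.example.www
    ```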

  • It’s a fancy marketing term for when AI confidently does something in error.

    How can the AI be confident?

    We anthropomorphize the behaviors of these technologies to analogize their outputs to other phenomena observed in humans. In many cases, the analogy helps people decide how to respond to the technology itself, and that class of error.

    Describing things in terms of "hallucinations" tells users that the output shouldn't always be trusted, regardless of how "confident" the technology seems.

  • "Its a bad idea because ai doesnt 'know' in the same way humans do."

    Does that matter? From the user's perspective, it's a black box that takes inputs and produces outputs. The epistemology of what knowledge actually means is kinda irrelevant to the decisions of how to design that interface and decide what types of input are favored and which are disfavored.

    It's a big ol matrix with millions of parameters, some of which are directly controlled by the people who design and maintain the model. Yes, those parameters can be manipulated to be more or less agreeable.

    I'd argue that the current state of these models is way too deferential to the user, where it places too much weight on agreement with the user input, even when that input contradicts a bunch of the other parameters.

    Internal to the model is still a method of combining things it has seen to identify a consensus among what it has seen before, tying together certain tokens that actually do correspond to real words that carry real semantic meaning. It's just that current models obey the user a bit too much, letting user input override a real consensus, or even manufacturing consensus where none exists.

    I don't see why someone designing an LLM can't manipulate the parameters to be less deferential to the claims, or even the instructions, given by the user.
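
    A toy sketch of what I mean by a deference knob (this is not how any real LLM is implemented; all names and numbers here are made up):

    ```python
    import numpy as np

    # Toy model: a "consensus" distribution over next tokens learned from
    # training data, blended with a distribution that simply agrees with
    # whatever the user asserted, controlled by a single deference weight.
    def next_token_probs(consensus: np.ndarray, agrees_with_user: np.ndarray,
                         deference: float) -> np.ndarray:
        """deference=0.0 ignores the user entirely; deference=1.0 always agrees."""
        mixed = (1.0 - deference) * consensus + deference * agrees_with_user
        return mixed / mixed.sum()

    consensus = np.array([0.70, 0.20, 0.10])         # what the training data supports
    agrees_with_user = np.array([0.05, 0.05, 0.90])  # what the user is asserting
    print(next_token_probs(consensus, agrees_with_user, deference=0.8))
    # A high deference weight lets the user's claim swamp the real consensus.
    ```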

  • Apple supports its devices for a lot longer than most OEMs after release (a minimum of 5 years after Apple last sold the device, which might come after 2 years of sales), but the impact of dropped support is much more pronounced, as you note. Apple usually declares devices obsolete 2 years after that support ends, and stops selling parts and repair manuals, except for a few batteries that stay supported out to the 10-year mark. On the software/OS side, that usually means OS upgrades for 5-7 years, then 2 more years of security updates, for a total of 7-9 years of keeping a device reasonably up to date.

    So if you're holding onto a 5-year-old laptop, Apple support tends to be much better than a 5-year-old laptop from a Windows OEM (especially with Windows 11 upgrade requirements failing to support some devices that were on sale at the time of Windows 11's release).

    But if you've got a 10-year-old Apple laptop, it's harder to use normally than a 10-year-old Windows laptop.

    Also, don't use the Mac App Store for software on your laptop. Use a reasonable package manager like Homebrew that doesn't have the problems you describe. Or go find a mirror that hosts old macOS packages and install them yourself.

  • Most Costco-specific products, sold under their Kirkland brand, are pretty good. They're always a good value, and they're sometimes among the best in class, setting cost aside.

    I think Apple's products improved when they started designing their own silicon chips for phones, then tablets, then laptops and desktops. I have beef with their operating systems but there's no question that they're better able to squeeze battery life out of their hardware because of that tight control.

    In the restaurant world, there are plenty of examples of a restaurant having a better product because they make something in house: sauces, breads, butchery, pickling, desserts, etc. There are counterexamples, too, but sometimes that kind of vertical integration can result in a better end product.

  • Their horizontal integration is made more seamless by the vertical integration.

    On an Apple laptop, they're the OEM of the hardware product itself, while also being the manufacturer of the CPU and GPU and the operating system. For most other laptops that's 3 or 4 distinct companies.

  • Yeah, getting too close turns into an uncanny valley of sorts, where people expect all the edge cases to work the same. Making it familiar, while staying within its own design language and paradigms, strikes the right balance.

  • Even the human eye basically follows the same principle. We have 3 types of cones, each sensitive to different portions of wavelength, and our visual cortex combines each cone cell's single-dimensional inputs representing the intensity of light hitting that cell in its sensitivity range, from both eyes, plus the information from the color-blind rods, into a seamless single image.
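
    A toy sketch of that principle (the sensitivity curves below are made-up Gaussians with roughly plausible peaks, not real cone fundamentals):

    ```python
    import numpy as np

    # Each cone type integrates the incoming spectrum against its own
    # sensitivity curve and reports a single number; color vision is
    # reconstructed from just those three numbers.
    wavelengths = np.linspace(400, 700, 301)  # visible range, nm

    def sensitivity(peak_nm: float, width_nm: float) -> np.ndarray:
        return np.exp(-((wavelengths - peak_nm) ** 2) / (2 * width_nm ** 2))

    cones = {
        "S": sensitivity(430, 25),  # short-wavelength cone (approximate peak)
        "M": sensitivity(540, 35),
        "L": sensitivity(570, 40),
    }

    spectrum = sensitivity(520, 30)  # some greenish incoming light

    responses = {name: float(curve @ spectrum) for name, curve in cones.items()}
    print(responses)  # three scalars: all the color information the brain gets
    ```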

  • This write-up is really, really good. I think about these concepts whenever people dismiss astrophotography or other computation-heavy photography as fake, software-generated images, when the reality is that translating sensor data into a graphical representation for the human eye (with all the quirks of human vision, especially around brightness and color) requires conscious decisions about how the charges or voltages on a sensor should be translated into pixels in a digital file.
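
    A minimal sketch of one such decision, assuming a normalized linear sensor reading; the piecewise curve is the standard sRGB transfer function, and the exposure knob is just an illustrative stand-in for everything else in the pipeline:

    ```python
    # Map a linear sensor reading (0.0-1.0) to an 8-bit sRGB pixel value.
    def linear_to_srgb(linear: float, exposure: float = 1.0) -> int:
        v = max(0.0, min(1.0, linear * exposure))
        if v <= 0.0031308:
            v = 12.92 * v                        # linear toe for very dark values
        else:
            v = 1.055 * v ** (1 / 2.4) - 0.055   # gamma curve for the rest
        return round(v * 255)

    # The same sensor readings produce very different pixels depending on exposure.
    print([linear_to_srgb(x) for x in (0.001, 0.18, 0.5, 1.0)])
    print([linear_to_srgb(x, exposure=4.0) for x in (0.001, 0.18, 0.5, 1.0)])
    ```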

  • "my general computing as a subscription to a server."

    You say this, but I think most of us have offloaded formerly local computing to a server of some kind:

    • Email organization, including folders and attachments, has mostly shifted from a desktop client saving offline copies retrieved and then deleted from the server, to web, app, and even IMAP interfaces where the canonical organization lives on the cloud server.
    • A huge chunk of users have shifted their productivity tasks (word processing, spreadsheets, presentations, image editing and design) to web-based software.
    • A lot of math functionality is honestly just easier to plug into web-based calculators for finance, accounting, and even the higher level math that Wolfram Alpha excels at.
    • Lots of media organization, from photos to videos to music, are now in cloud-based searchable albums and playlists.

    All these things used to be local uses of computing, and they can now be accessed from low-powered smartphones. Things like Chromebooks give a user access to 50-100% of what they'd be doing on a full-fledged, high-powered desktop, depending on the individual needs and use cases.

  • Do MSI and ASUS have enough corporate/enterprise sales to offset the loss of consumer demand? With the RAM companies, the consumer crunch is caused by AI companies bidding up the price of raw memory silicon well beyond what makes financial sense to package and solder onto DIMMs (or even solder the packages directly onto boards for ultra-thin laptops).

  • The key part of the statement is, "to service a demand that doesn't exist."

    But that's basically always true of big projects. The people financing the project believe that the demand will exist in the future, and know it will take time and investment of resources to get to the point where they will meet that future demand.

    They can be wrong on their projections of future demand, but that happens all the time, too. A classic example is when a city hosts the Olympics or World Cup and builds out a lot of infrastructure to meet that anticipated demand for both that specific event and the long term needs of the resident population. Sometimes it works, like with certain mass transit systems expanded for those events, and sometimes it doesn't, like when there are vacant stadiums sitting underused for decades after.

    Or, the analogy I always draw is to the late 90's, when telecoms were building out a bunch of fiber networks for the anticipated future demand for Internet connections. Most of those companies ended up in bankruptcy, with the fiber assets sold for a fraction of the cost of building them. But the fiber still ended up being useful. Just not worth the cost.

    I think the same will happen with a lot of the data center infrastructure. Data centers will still be useful. A lot of the infrastructure for supporting those data centers (power and cooling systems, racks, network connections) will still be useful. There's just no guarantee that they'll be worth what they cost to build. And when that happens, we might see a glut in used data-center-grade computing equipment, and maybe hobbyists will score some deals at auctions to make their own frankenservers for their own purposes, and completely blow normal homelabbing out of the water.

  • "As long as the music's playing, you've got to get up and dance."

    That's Citi's former CEO, who explained that he would devote his company's resources to making money in a bubble (during the 2007 housing bubble), even when he knew it was a bubble.

    The memory chip producers are absolutely going to try to maximize production during this bubble. The normal life cycle is to run fabs on staggered cycles where, at any given time, the company has a few fabs in the planning stage, a few under construction, some in R&D, some in early "risk" production, some in high-volume production, and some being retooled for a new process.

    That means that during a bubble, it makes sense to try to accelerate the speed at which new fabs come online or old fabs get retooled. It makes sense to keep old fabs running longer at higher yields, even for previous generation product. These aren't mature businesses that were already planning on running the same factories forever. They already anticipate the cycle of multiple generations, and what that looks like is going to be more aggressive during periods where customers are throwing money at them.