
Posts: 11 · Comments: 295 · Joined: 3 yr. ago

  • Lol, plants don't need to be kept up with weekly. Maintaining a xeriscape or native landscape is less time and effort than a lawn. I've been slowly converting my lawn into larger, more native beds. I don't have to water, even during exceptional drought. I top up the mulch once a year. I weed (usually just grass) whenever I spot one. Depending on the plant, I trim it or cut it back to the ground once or twice a year.

  • This kinda lines up with propaganda I've been seeing the past couple years (from the likes of Peter Thiel and Alex Epstein). They argue that we should be extracting and using fossil fuels as fast as possible. The (stupid, fucked up, wishful-thinking) idea is that cheap energy drives human development and technological solutions to climate change.

  • I think many worked. On the farm, in mines, in factories. Farmers would intentionally have many children just for extra labor. School hours and breaks are, in part, the way they are to let children work on the farm.

  • I wish there were a license for content, like the GPL, stating that if you use this content to train generative AI, the model must be open source. Not sure that would be legally enforceable though (due to fair use).

  • OpenAI is no longer "pure." They are not open. They do not publish the details of any of the discoveries they've made (which used to be standard practice, even in the private sector). Their leadership is now in the "effective accelerationism" camp that worships capitalism, and sees developing AGI as their moral obligation, regardless of what harm it may cause to society. (They are also delusional, because it's very unlikely AGI will be developed anytime soon).

  • Is MAD not well-known or taught anymore? A lot of the comments here seem to be ignoring the fact that Russia or NATO would launch a full-scale retaliation before the first-strike even made it to its destination. It would likely result in the world human population going from 8 billion to 2 billion.

  • Those numbers will go down once everyone is driving 4wd EV Suburbans with half-inch steel plate armor (you know, just so they feel their kids are safe) :)

  • Thanks :) I've always been extremely pro-decentralization (the kind that does not use blockchains to "solve" byzantine fault tolerance and sybil vulnerabilities). I'm fine with things being somewhat less efficient if they're decentralized, and fine with creators and fans eating the costs for things they're passionate about (though it would probably turn semi-decentralized, with companies offering seeding/content-delivery services at low cost). The rise of symmetric home fiber connections further increases viability. But, I agree that it likely will never become mainstream.

  • Thankfully, my ISP informs me if someone on my network shares movies on BitTorrent without a VPN. Do ISPs typically do the same for music on the ed2k and gnutella networks?

  • I believe it works like BitTorrent (and things like Windows updates), where there is a swarm of peers that simultaneously upload to and download from each other, so the original creator, or any single user, doesn't necessarily need much bandwidth. There are some disadvantages to this, but they're manageable, and it works for many other things. If it actually became a thing, I imagine sponsored/Patreon-funded creators would pay someone to seed their videos to ensure availability and quality. Fans would probably help too. Technically, it's a viable option.

    But yeah, with how walled-garden the Internet has become, it probably won't become popular without massive amounts of marketing and doing things like signing exclusivity deals with popular creators, which needs a lot of money.
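The swarm idea above can be sketched with a toy simulation (hypothetical, not PeerTube's or BitTorrent's actual protocol): one seeder starts with all the chunks, every peer uploads whatever chunks it already holds, and the serving load ends up spread across the swarm instead of falling entirely on the seeder.

```python
import random

def simulate_swarm(num_peers=10, num_chunks=20, seed=42):
    """Toy swarm: one seeder has all chunks; peers trade chunks each round."""
    rng = random.Random(seed)
    # peers[0] is the original seeder; everyone else starts with nothing
    peers = [set(range(num_chunks))] + [set() for _ in range(num_peers - 1)]
    uploads = [0] * num_peers  # chunks served by each peer
    rounds = 0
    while any(len(p) < num_chunks for p in peers):
        rounds += 1
        for i, downloader in enumerate(peers):
            missing = set(range(num_chunks)) - downloader
            if not missing:
                continue
            # pick a random peer that holds a chunk we still need
            candidates = [j for j, p in enumerate(peers) if j != i and p & missing]
            if candidates:
                j = rng.choice(candidates)
                chunk = rng.choice(sorted(peers[j] & missing))
                downloader.add(chunk)
                uploads[j] += 1  # peer j served one chunk
    return rounds, uploads

rounds, uploads = simulate_swarm()
print(rounds, uploads)  # the seeder (index 0) serves only a fraction of the total
```

Because every non-seeder re-uploads what it already has, the seeder's share of the 180 total chunk transfers stays well below 100%, which is the point the comment makes about no single user needing much bandwidth.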

  • Not much server storage and bandwidth is needed if using P2P, like PeerTube.

  • Meta could've done a lot of things to prevent this. Internal documents show Zuckerberg repeatedly rejected suggestions to improve child safety. Meta lobbies congress to prevent any regulation. Meta controls the algorithms and knows they promote bad behavior such as dog piling, but this bad behavior increases "engagement" and revenue, so they refuse to change it. (Meta briefly changed its algorithms for a few months during the 2020 election to decrease the promotion of disinformation and hate speech, because they were under more scrutiny, but then changed it back after the election).

  • This may be true at the moment, but Amazon can control how shitty the non-prime experience is.

    Personally, I'm trying to avoid Amazon altogether. It's much worse now, and flooded with cheap defective shit. I've also been noticing that a lot of manufacturers don't sell on Amazon (guessing Amazon takes a big cut).

  • I'm guessing the batteries advertised as drop-in replacements have BMSs built in.

  • Ah, a three star programmer.

  • Yeah, those GPU estimates are probably correct.

    I specialized in ML during grad school, but only recently got back into it and started keeping up with the latest developments. Started working at a startup last year that uses some AI components (classification models, generative image models, nothing nearly as large as GPT though).

    Pessimistic about the AGI timeline :) Though I will admit GPT caught me off guard. I never thought a model simply trained to predict the next word in a sequence of text would be capable of what GPT is (that's all GPT does, BTW: it takes a sequence of text and predicts what the next token should be, repeatedly). I'm pessimistic because, AFAIK, there isn't really an ML/AI architecture, or even a good theoretical foundation, that could achieve AGI. Perhaps actual brain simulation could, but I'm guessing that would be very inefficient. My wild-ass guess is AGI in 20 years if interest and money stay consistent. Then ASI like a year after, because you could use the AGI to build ASI (the singularity concept). Then the ASI will turn us into blobs that cannot scream, because we won't have mouths :)
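The "predict the next token, repeatedly" loop can be sketched with a toy stand-in for the model (here a bigram frequency table built from a tiny corpus; GPT is a transformer, but the autoregressive decoding loop has the same shape):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each token, which tokens follow it and how often."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, prompt, max_new_tokens=5):
    """Autoregressive decoding: predict the next token, append it, repeat."""
    seq = prompt.split()
    for _ in range(max_new_tokens):
        followers = counts.get(seq[-1])
        if not followers:
            break  # no known continuation for this token
        seq.append(followers.most_common(1)[0][0])  # greedy: most likely next token
    return " ".join(seq)

model = train_bigram("the cat sat on the mat the cat ran on the mat")
print(generate(model, "the"))
```

A real LLM conditions on the whole sequence (not just the last token) and samples from a probability distribution rather than always taking the top token, but the outer loop, generating one token at a time and feeding it back in, is the same.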

  • Correct, when you talk to GPT, it doesn't learn anything. If you're having a conversation with it, every time you press "send," it sends the entire conversation back to GPT, so within a conversation it can be corrected, but it remembers nothing from previous conversations. If a conversation becomes too long, it will also start forgetting stuff (GPT has a limited input length, called the context length). OpenAI does periodically update GPT, but yeah, each update is a finished product. They are very much not "open," but they probably don't do a full training run between updates. They probably carefully do some sort of fine-tuning along with reinforcement learning from human feedback (RLHF), and probably some more tricks to massage the model a bit while preventing catastrophic forgetting.

    Oh yeah, the latency of signals in the human brain is much, much slower than the latency of semiconductors. Forgot about that. That further muddies the very rough estimates. Also, there are multiple instances of GPT running, not sure how many. It's estimated that each instance "only" requires 128 GPUs during inference (responding to chat messages), as opposed to 25k GPUs for training. During training, the model needs to process multiple training examples at the same time for various reasons, including to speed up training, so more GPUs are needed. You could also think of it as training multiple instances at the same time, but combining what's "learned" into a single model/neural network.
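The resend-the-whole-conversation behavior described two comments up can be sketched client-side (all names here are hypothetical; the real API differs, and the real limit is counted in tokens, not words): every turn, the entire transcript is sent again, and when it exceeds the context limit, the oldest messages are dropped.

```python
CONTEXT_LIMIT = 50  # toy limit, measured in words instead of tokens

def truncate_to_context(history, limit=CONTEXT_LIMIT):
    """Drop the oldest messages until the transcript fits the context window."""
    def size(msgs):
        return sum(len(m["content"].split()) for m in msgs)
    msgs = list(history)
    while len(msgs) > 1 and size(msgs) > limit:
        msgs.pop(0)  # the model "forgets" the oldest turn
    return msgs

def send_turn(history, user_message, model):
    """Append the user message, then resend the ENTIRE (truncated) history."""
    history.append({"role": "user", "content": user_message})
    prompt = truncate_to_context(history)
    reply = model(prompt)  # stateless: the model sees only what we send
    history.append({"role": "assistant", "content": reply})
    return reply

# demo with a trivial stand-in model that just reports how much it can see
history = []
echo = lambda msgs: f"I can see {len(msgs)} messages"
send_turn(history, "hello " * 60, echo)          # 60 words: over the toy limit
print(send_turn(history, "do you remember my first message?", echo))
```

Because the 60-word opening message pushes the transcript past the limit, it gets truncated away on the second turn, which is exactly the "start forgetting stuff" behavior: the model isn't remembering or forgetting anything itself, the client just stops sending the oldest turns.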