
Posts: 0 · Comments: 133 · Joined: 3 yr. ago

  • The distinction is service workers and offline mode.

    It means your PWA can preload everything it needs to run offline, and you can actually use it offline. That is different from a "cached website," which can only cache the pages you've already visited and otherwise doesn't let you update data locally.

  • Yes, although that recently changed in the EU (only) with the Digital Markets Act.

  • You're famous for being photographed shirtless on a horse. Is that because horses can't judge you, or is it a new Russian policy for reducing laundry costs?

  • I recently went through these exact pains trying to contribute to a project that exclusively ran through Discord and eventually had to give up when it was clear they would never enable issues in their GitHub repos for "reasons."

    It was impossible to discover the history behind anything. Even current information was lost within days, forcing people to rehash aspects that had already been investigated and decided.

  • Users would get bored quickly. The most engaging content on social media tends to be the posts you disagree with, or that even anger you.

    The real trick would be to sprinkle in about 40% controversy, 5% trolling and a healthy dash of nonsensical hate posting.

    You need to give the user a tribe along with an opposition to defend the tribe against.

  • "IPv4 support is required and works perfectly."

    Except it doesn't work perfectly, because its 32-bit address space is far too small. That's why IPv6 exists.
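    For scale, the gap between the two address spaces is easy to compute (a quick sketch, nothing here beyond the two well-known address widths):

    ```python
    # IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
    ipv4_addresses = 2 ** 32     # about 4.3 billion, fewer than people on Earth
    ipv6_addresses = 2 ** 128    # about 3.4e38

    print(f"IPv4: {ipv4_addresses:,}")
    print(f"IPv6: {ipv6_addresses:.3e}")
    print(f"IPv6 is 2^96 (~7.9e28) times larger")
    ```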

  • There are huge gaps in IPv6 adoption, which means most users and services must continue to support and use IPv4.

    Since everyone has to keep supporting IPv4, there's not much motivation to push general adoption of IPv6, and maintaining dual-stack support has its own costs.

    Even within AWS, many services still don't support IPv6. AWS's fees for IPv4 addresses may end up being a comparatively big driver of adoption.

  • It's not skippable as far as I can tell. It also frequently advertises shows I've already watched. Sometimes it advertises the show I'm trying to watch.

    I'm pretty sure it also has the "ad counter" showing on the screen during this as well.

    Here's what they call it in their docs:

    "You'll also see a quick preview only once per day before any show to keep you up-to-date on our original programming."

    It's not an ad, it's a "preview." /s

  • Yes, at least currently. There may be better options as multi-gigabit internet access becomes more commonplace and commodity hardware gets faster.

    The other options mentioned in this thread are basically toys in comparison (they either obtain results from existing search engines or operate at a scale of less than a few terabytes).

  • It's a really interesting question and I imagine scaling a distributed solution like that with commodity hardware and relatively high latency network connections would be problematic in several ways.

    There are several orders of magnitude between the population of people who would participate in providing the service and those who would consume the service.

    Those populations aren't local to each other. In other words, your search is likely global across such a network, especially given the size of the indexed data.

    To put some rough numbers together for perspective, for a search engine nearing Google's scale:

    • A single copy of a 100PB index would require 10,000 network participants each contributing 10TB of reliable and fast storage.
    • 100K searches/sec, if evenly distributed and resolvable by a single node, would be at least 10 req/sec/node. Realistically it's much higher than that, depending on how many copies of the index exist, how requests are routed, and how many nodes participate in a single query (probably on the order of hundreds). Of that 10TB of storage per node, a substantial amount would need to be kept in memory to sustain the hundreds of req/sec a node might see on average.
    • The index needs to be updated. Suppose the index is 1/10th the size of the crawled data and the oldest data is 30 days old (which is pretty stale for popular sites). That's at least 33PB of data to crawl per day, or roughly 3,000Gbps of sustained ingestion across the network; spread over those 10,000 nodes, each would need about 0.3Gbps of bandwidth just to index fresh data.
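    The arithmetic above is easy to sanity-check. This sketch just redoes the estimate; every constant is an assumed figure from the comment, not a measurement:

    ```python
    # Back-of-envelope for one copy of a distributed, Google-scale search index.
    INDEX_BYTES = 100e15            # assumed 100 PB index
    NODE_STORAGE_BYTES = 10e12      # assumed 10 TB of reliable storage per node
    SEARCHES_PER_SEC = 100_000      # assumed global query rate
    CRAWL_TO_INDEX_RATIO = 10       # crawled data assumed ~10x the index size
    REFRESH_DAYS = 30               # oldest acceptable data in the index

    nodes = INDEX_BYTES / NODE_STORAGE_BYTES        # participants for one full copy
    req_per_node = SEARCHES_PER_SEC / nodes         # req/s/node if perfectly balanced

    crawl_bytes_per_day = INDEX_BYTES * CRAWL_TO_INDEX_RATIO / REFRESH_DAYS
    crawl_gbps = crawl_bytes_per_day * 8 / 86_400 / 1e9   # network-wide sustained ingest

    print(f"nodes needed: {nodes:,.0f}")
    print(f"req/s per node: {req_per_node:.0f}")
    print(f"crawl ingest: {crawl_gbps:,.0f} Gbps total, "
          f"{crawl_gbps / nodes:.2f} Gbps per node")
    ```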

    These are all rough numbers but this is not something the vast majority of people would have the hardware and connection to support.

    You'd also need many copies of this setup around the world for redundancy and lower latency, plus protection against DDoS, abuse, and malicious network participants, and some form of organizational oversight to support removal of certain data.

    Probably the best way to support such a distributed system in an open manner would be to have universities and other public organizations run the hardware and support the network (at a non-trivial expense).

  • PE

  • If you don't have Prime, you might not buy things from Amazon, which would probably be a net loss compared to the potential increase in ad revenue.

  • Why?