Posts: 29 · Comments: 746 · Joined: 3 yr. ago

  • There can be, although some parts may still need to be written in assembly (which is imperative, because that's ultimately what most CPUs execute), such as a kernel's context-switching logic. But C has similar restrictions, like how it is impossible to start running a C function without first initializing the stack. Exception: some CPUs (eg Cortex M) have a specialized hardware mechanism that loads the initial stack pointer from the vector table.

    As for why C, it's a low-level language that maps well to most CPUs' native assembly languages. If instead we had stack-based CPUs -- eg Lisp Machines or a real Java Machine -- then we'd probably be using other languages to write an OS for those systems.

  • The other commenters correctly opined that encryption at rest should mean you could avoid encryption in memory.

    But I wanted to expand on this:

    I really don't see a way around this, to make the string searchable the hashing needs to be predictable.

    I mean, there are probabilistic data structures, where something like a Bloom filter will produce one of two answers: definitely not in the set, or possibly in the set. In the context of search tokens, if you had a Bloom filter, you could quickly assess if a message does not contain a search keyword, or if it might contain the keyword.

    A suitably sized Bloom filter -- possibly different lengths based on the associated message size -- would provide search coverage for that message, at least until you have to actually access and decrypt the message to fully search it. But it's certainly a valid technique to get a quick, cursory result.
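
    To make that concrete, here's a rough sketch of the idea in Python (the filter size, hash count, and tokenization are all arbitrary choices for illustration):

    ```python
    import hashlib

    class BloomFilter:
        """Answers either 'definitely not present' or 'possibly present'."""

        def __init__(self, num_bits=1024, num_hashes=4):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray((num_bits + 7) // 8)

        def _positions(self, token):
            # Derive k bit positions from salted digests of the token.
            for salt in range(self.num_hashes):
                digest = hashlib.sha256(f"{salt}:{token}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.num_bits

        def add(self, token):
            for pos in self._positions(token):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, token):
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(token))

    # Index one message's words, then screen keywords without touching the encrypted body.
    index = BloomFilter()
    for word in "meet me at the old pier at noon".split():
        index.add(word)

    print(index.might_contain("pier"))    # True: possibly present, so decrypt and search for real
    print(index.might_contain("harbor"))  # almost certainly False: skip this message entirely
    ```

    The false-positive rate is tunable via the filter size, which is why sizing the filter to the message makes sense.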

    Though I think perhaps just having the messages in memory unencrypted would be easier, so long as that's not part of the attack space.

  • If this is about that period of human history where we had long-distance transportation (ie railroads) but didn't yet have mass communication infrastructure that isn't the postal service -- so 1830s to 1860s -- then I think the answer is to just plan to meet the other person at a certain place every month.

    To use modern parlance, put a recurring meeting on their calendar.

  • It can be, although the example I've given where each counter is a discrete part is probably no longer the case. It's likely that larger ICs which encompass all the requisite functionality can do the job, at lower cost than individual parts.

    But those ICs probably can't do 4:20:69, so I didn't bother mentioning that.

  • I have a Ubiquiti EdgeRouter (old, and I'm looking into replacing it with a FreeBSD box) and I have a similar issue where the router (or maybe the ISP?) misses a DHCP renewal, resulting in the wholesale loss of connectivity. It's even more annoying because the ISP simultaneously rejects follow-up DHCP requests, on the theory that if the renewal was missed, the device cannot possibly exist anymore, at least for a few minutes.

    Since this router takes 12 minutes to manually reboot, that's usually enough time for the ISP to clear their cache and everything comes back up properly. But it's terribly annoying, hence why I'm looking to finally replace this router.

  • I should point out that for the hour counter, it's only a 5 bit counter, since the max value for hours is 23, which fits into 5 bits.

    So 566 is not quite the devil's work, but certainly very close.

  • (I'm going to take the question seriously)

    Supposing that you're asking about a digital clock as a standalone appliance -- because doing the 69th second in software would be trivial, and doing it with an analog clock is nigh impossible -- I believe it can be done.

    A run-of-the-mill digital clock uses what's known as a 7-segment display, one for each digit of the time. It's called 7-segment (or 7-seg) because there are seven distinct lines that can be lit up or darkened, which together can write out any number from 0 to 9.

    In this way, six 7seg displays and a couple of colons are sufficient to build a digital clock. However, we need to carefully consider whether the 7seg displays have all seven segments. In some commercial applications, where it's known that some numbers will never appear, manufacturers will actually remove some segments to save cost.

    For example, in the typical American digital clock, the time is displayed in 12-hour time. This means the left digit of the hour will only ever be 0 or 1. So some cheap clocks will actually choose to build that digit using just 2 segments. When the hour is 10 or greater, those 2 segments can display the number 1. When the hour is less than 10, they just don't light up that digit at all. This also makes the clock incapable of 24-hour time.

    Fortunately though, to implement your idea of the 69th second, we don't have this problem. Although it's true that the left digit of the seconds only goes from 0 to 5 inclusive, displaying those digits still collectively requires all 7 segments of a 7seg display, so that digit has to be fully populated anyway. That means we can display a 6 without issue.
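
    If you want to convince yourself of that, here's a quick sanity check (segment letters a-g follow the usual 7seg labeling):

    ```python
    # Which segments (a=top, b=top-right, c=bottom-right, d=bottom, e=bottom-left,
    # f=top-left, g=middle) each digit lights up.
    SEGMENTS = {
        "0": set("abcdef"), "1": set("bc"),     "2": set("abdeg"),  "3": set("abcdg"),
        "4": set("bcfg"),   "5": set("acdfg"),  "6": set("acdefg"), "7": set("abc"),
        "8": set("abcdefg"), "9": set("abcdfg"),
    }

    # The tens-of-seconds digit normally only ever shows 0-5, but those six digits
    # collectively light every one of the seven segments...
    needed = set().union(*(SEGMENTS[d] for d in "012345"))
    print(sorted(needed))             # ['a', 'b', 'c', 'd', 'e', 'f', 'g']

    # ...so the display has to be fully populated anyway, and a "6" needs nothing extra.
    print(SEGMENTS["6"] <= needed)    # True
    ```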

    Now, as for how to modify the digital clock circuitry, that's a bit harder but not impossible. The classic construction of a digital clock is as follows: the 60 Hz AC line frequency (or 50 Hz outside North America) is passed from the high-voltage circuitry to the low-voltage circuitry using an opto-isolator, which turns it into a square wave that oscillates 60 times per second.

    Specifically, there are 120 transitions per second, with 60 of them being a low-to-high transition and the other 60 being a high-to-low transition. Let's say we only care about the low-to-high. We now send that signal to a counter circuit, which is very similar to a mechanical odometer. For every transition of the oscillating signal, the counter advances by one. The counter counts in binary, and has six bits, because our goal is to count up to 59, to know when a full second has elapsed. We pair the counter with an AND circuit, which is checking for when the counter has the value 111011 (that's 59 in decimal). If so, the AND will force the next value of the counter to 000000, and so this counter resets every 1 second. This counter will never actually register a value of 60, because it is cut off after 59.

    Drawing from that AND circuit that triggers once per second, this new signal is a 1 Hz signal, also known as 1PPS (pulse per second). We can now feed this into another similar counter that resets at 59, which gives us a signal when a minute (60 seconds) has elapsed. And from that counter, we can feed it into yet another counter, for when 1 hour (60 minutes) has passed. And yet again, we can feed that too into a counter for either 12 hours or 24 hours.
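
    In software terms, the whole chain is just cascaded modulo counters, where each stage's reset pulse is the next stage's clock. A rough sketch:

    ```python
    class Counter:
        """A binary counter with an AND gate watching for the terminal count."""

        def __init__(self, modulus):
            self.modulus = modulus
            self.value = 0

        def tick(self):
            """Advance one pulse; return True (the carry) when the counter wraps to zero."""
            self.value += 1
            if self.value == self.modulus:
                self.value = 0
                return True
            return False

    line    = Counter(60)   # divides the 60 Hz line frequency down to 1 PPS
    seconds = Counter(60)
    minutes = Counter(60)
    hours   = Counter(24)   # or 12 for a 12-hour clock

    def on_line_edge():
        """Call once per low-to-high transition of the mains-derived square wave."""
        if line.tick():                 # one full second has elapsed
            if seconds.tick():          # one full minute
                if minutes.tick():      # one full hour
                    hours.tick()

    # Simulate ten minutes of wall time: 60 edges/second * 600 seconds.
    for _ in range(60 * 600):
        on_line_edge()
    print(f"{hours.value:02}:{minutes.value:02}:{seconds.value:02}")   # 00:10:00
    ```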

    In this way, the final three counters are recording the time in seconds, minutes, and hours, which is the whole point of a clock appliance. But these counters are in binary; how do we turn on the 7seg display to show the numbers? This final aspect is handled using dedicated chips for the task, known as 7seg drivers. Although the simplest chips will drive only a single digit, there are variants that handle two adjacent digits, which we will use. Such a chip accepts a 7 bit binary value and has a lookup table to display the correct pair of digits on the 7seg displays. Suppose the input is 0101010 (42 in decimal): the driver will illuminate four segments on the left (to make the digit 4) and five segments on the right (to make the digit 2). Note that our counter is 6 bits but the driver accepts 7 bits; this is tolerable because the left-most bit is usually forced to always be zero (more on this later).
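
    A dual-digit driver is, in essence, just a lookup table. A hypothetical software version (real driver ICs bake this into a mask ROM):

    ```python
    # Segment patterns per decimal digit, using the usual a-g labels.
    DIGIT_TO_SEGMENTS = {
        0: "abcdef", 1: "bc", 2: "abdeg", 3: "abcdg", 4: "bcfg",
        5: "acdfg", 6: "acdefg", 7: "abc", 8: "abcdefg", 9: "abcdfg",
    }

    def two_digit_driver(value):
        """Take a 7-bit binary value (0-99) and return the segments to light for each digit."""
        assert 0 <= value < 100
        tens, ones = divmod(value, 10)
        return DIGIT_TO_SEGMENTS[tens], DIGIT_TO_SEGMENTS[ones]

    left, right = two_digit_driver(0b0101010)   # 42
    print(left, right)                          # bcfg abdeg: 4 segments for the "4", 5 for the "2"
    ```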

    So that's how a simple digital clock works. Now we modify it for 69th-second operation. The first issue is that our 6-bit counter for seconds will only go from 0-59 inclusive. We can fix this by replacing it with a 7 bit counter, and then modifying the AND circuit so that it lets the counter keep advancing past 59, but only when hour=04 and minute=20. This way, the clock works as normal for all times except 4:20. And when it's actually 4:20, the seconds will advance through 59 and get to 60. And 61, 62, and so on.

    But we must make sure to stop it after 69, so we need another AND circuit to detect when the counter reaches 69. And more importantly, we can't just zero out the counter; we must force the next counter value to be 10, because otherwise the time is wrong: a display reading 4:20:69 corresponds to a real time of 4:21:09, so the next tick has to land on 4:21:10 (which also means the minutes counter takes its carry at that moment instead of at :59).

    It's very easy to zero out a counter, but it takes a bit of extra circuitry to load a specific value into the counter. But it can be done. And if we do that, we finally have counters suitable for 69th second operation. Because numbers 64 and higher require 7 bits to represent in binary, we can provide the 7th bit to the 7seg driver, and it will show the numbers correctly on the 7seg display without any further changes.
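
    Pulling those pieces together, here's a sketch of the modified seconds logic; note it also issues the carry into the minutes counter at the 69-to-10 load, per the timing above:

    ```python
    def next_seconds(seconds, hours, minutes):
        """One 1 PPS tick of the 7-bit seconds counter; returns (next_value, carry_to_minutes)."""
        its_420 = (hours == 4 and minutes == 20)
        if its_420 and seconds == 69:
            return 10, True              # parallel-load 10 and carry: 4:20:69 is followed by 4:21:10
        if not its_420 and seconds == 59:
            return 0, True               # the normal wrap for every other minute of the day
        return seconds + 1, False        # includes counting on through 60..68 during 4:20

    h, m, s = 4, 20, 57
    for _ in range(16):
        s, carry = next_seconds(s, h, m)
        if carry:
            m += 1
        print(f"{h}:{m:02}:{s:02}")      # ...4:20:59, 4:20:60 ... 4:20:69, 4:21:10, 4:21:11...
    ```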

    TL;DR: it can absolutely be done, with only a small amount of EE work.

  • Upvoting because the FAQ genuinely is worthwhile to read, and answers the question I had in mind:

    7.9 Why not just use a subset of HTTP and HTML?

    I don't agree with their answer, though. If the rough, overall Gemini experience:

    is roughly equivalent to HTTP where the only request method is "GET", the only request header is "Host" and the only response header is "Content-type", plus HTML where the only tags are <p>, <pre>, <a>, <h1> through <h3>, <ul> and <li> and <blockquote>

    then it stands to reason -- per https://xkcd.com/927/ -- to do exactly that, rather than devise new protocol, client, and server software. As it is, some of their points have few or no legs to stand on.

    The problem is that deciding upon a strictly limited subset of HTTP and HTML, slapping a label on it and calling it a day would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way.

    Initially, my reply was going to make a comparison to the impossibility of judging a book by its cover, since that's what users already do when faced with visiting a sketchy looking URL. But I actually think their assertion is a strawman, because no one has suggested that we should immediately stop right after such a protocol has been decided. Very clearly, the Gemini project also has client software, to go with their protocol.

    But the challenge of identifying a space is, quite frankly, still a problem with no general solution. Yes, sure, here on the Fediverse, we also have the ActivityPub protocol which necessarily constrains what interactions can exist, in the same way that ATProto also constrains what can exist. But even the most set-in-stone protocol (eg DICT) can be used in new and interesting ways, so I find it deeply flawed that they believe they have categorically enumerated all possible ways to use the Gemini protocol. The implication is that users will never be surprised in future about what the protocol enables, and that just sounds ahistoric.

    It's very tedious to verify that a website claiming to use only the subset actually does, as many of the features we want to avoid are invisible (but not harmless!) to the user.

    I'm failing to see how this pans out, because seeing as the web is predominantly client-side (barring server-side tracking of IP addresses, etc), it should be fairly obvious when a non-subset website is doing something that the subset protocol does not allow. Even if it's a lie-in-wait function, why would subset-compliant client software honor that?

    When it becomes obvious that a website is not compliant with the subset, a well-behaved client should stop interacting with the website, because it has violated the protocol and cannot be trusted going forward. Add it to an internal list of do-not-connect and inform the user.
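
    As a sketch of what that "halt and blocklist" behavior could look like (the allowlist here is just the tag set from their own FAQ answer, and the function names are mine):

    ```python
    from html.parser import HTMLParser

    ALLOWED_TAGS = {"p", "pre", "a", "h1", "h2", "h3", "ul", "li", "blockquote"}
    do_not_connect = set()   # servers that broke the subset; never contact them again

    class SubsetChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.violation = None

        def handle_starttag(self, tag, attrs):
            if tag not in ALLOWED_TAGS:
                self.violation = f"disallowed tag <{tag}>"

    def render_or_reject(host, body):
        checker = SubsetChecker()
        checker.feed(body)
        if checker.violation:
            do_not_connect.add(host)     # refuse future connections and tell the user why
            raise ValueError(f"{host} violated the subset: {checker.violation}")
        return body                      # hand off to the stripped-down renderer

    render_or_reject("good.example", "<h1>Hello</h1><p>plain old text</p>")
    try:
        render_or_reject("sketchy.example", "<p>hi</p><script>spy()</script>")
    except ValueError as err:
        print(err)                       # sketchy.example violated the subset: disallowed tag <script>
    print(do_not_connect)                # {'sketchy.example'}
    ```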

    It's difficult or even impossible to deactivate support for all the unwanted features in mainstream browsers, so if somebody breaks the rules you'll pay the consequences.

    And yet, Firefox forks are spawning left and right due to Mozilla's AI ambitions.

    Ok, that's a bit blithe, but I do recognize that the web engines within browsers are now incredibly complex. Even so, I can't accept the idea that we cannot extricate the unneeded sections of a rendering engine and leave behind just the functionality needed to display a subset of HTML via HTTP, not until someone shows why that is the case.

    Complexity begets complexity, whereas this would be an exercise in removing complexity. It should be easier than writing new code for a new protocol.

    Writing a dumbed down web browser which gracefully ignores all the unwanted features is much harder than writing a Gemini client from scratch.

    Once again, don't do that! If a subset browser finds even one violation of the subset protocol, it should halt. That server is being malicious. Why would any client try to continue?

    The error handling of a privacy-respecting protocol that is a subset of HTML and HTTP would -- in almost all cases -- assume the server is malicious and disconnect. It is a betrayal of the highest order. There is no such thing as a "graceful" betrayal, so we don't try to handle that situation.

    Even if you did it, you'd have a very difficult time discovering the minuscule fraction of websites it could render.

    Is this about using the subset browser to look at regular port-80 web servers? Or is this about content discovery? Only the latter has a semblance of logic behind it, but that too is an unsolved problem to this day.

    Famously, YouTube and Spotify are drivers of content discovery, based in part on algorithms that optimize for keeping users on those platforms. Whereas the Fediverse eschews centralized algorithms and instead just doesn't have one. And in spite of that, people find communities. They find people, hashtags, images, and media. Is it probably slower than if an algorithm could find these for the user's convenience? Yes, very likely.

    But that's the rub: no one knows what they don't know. They cannot discover what they don't even imagine could exist. That remains the case, whether the Gemini protocol is there or not. So I'm still not seeing why this is a disadvantage against an HTTP/HTML subset.

    Alternative, simple-by-design protocols like Gopher and Gemini create alternative, simple-by-design spaces with obvious boundaries and hard restrictions.

    ActivityPub does the same, but is constructed atop HTTP, while being extensible enough to replace, like-for-like, any existing social media platform that exists today -- and some we haven't even thought of yet -- while also creating hard and obvious boundaries which foment a unique community unlike any other social media platform.

    The assertion that only simple protocols can foster community spaces is belied by ActivityPub's success; ActivityPub is not exactly a simple protocol either. And this does not address why stripping down HTML/HTTP wouldn't also do the same.

    You can do all this with a client you wrote yourself, so you know you can trust it.

    I sure as heck do not trust the TFTP client I wrote at uni, and that didn't even have an encryption layer. The idea that every user will write their own encryption layer to implement the mandatory encryption for the Gemini protocol is farcical.

    It's a very different, much more liberating and much more empowering experience than trying to carve out a tiny, invisible sub-sub-sub-sub-space of the web.

    So too would browsing a subset of HTML/HTTP using a browser that only implements that subset. We know this because if you're reading this right now, you're either viewing this comment through a web browser frontend for Lemmy, or using an ActivityPub client of some description. And it is liberating! Here we all are, on this sub sub sub sub space of the Internet, hanging out and commenting about protocols and design.

    But that doesn't mean we can't adapt already-proven, well-defined protocols into a subset that matches an earlier vision of the internet, while achieving the same.

  • as someone has to lead

    At this particular moment, the people of Minnesota are self-organizing the resistance against the invasion of their state, with no unified leadership structure in place. So I wouldn't say it's always mandatory.

    Long live l'etoile du nord.

  • An indisputable use-case for supercomputers is the computation of next-day and next-week weather models. By definition, a next-day weather prediction is utterly useless if it takes longer than a day to compute, and it becomes progressively more useful the sooner it's ready: even an hour faster means more time to warn motorists to stay off the road, more time to plan evacuation routes, more time for farmers to adjust crop management, more time for everything. NOAA in the USA draws in sensor data from all of North America, and since weather is locally-affecting but globally-influenced, this still isn't enough for a perfect weather model. Even today, there is more data that could be consumed by the models, but isn't, because doing so would make the predictions take too long. The only solution there is to raise the bar yet again, expanding the supercomputers used.

    Supercomputers are not super because they're bigger. They are super because they can do gargantuan tasks within the required deadlines.

  • I'm going off what I remember from a decade ago when working on embedded CPUs that have an Ethernet interface. IIRC, the activity LED -- whether a separate LED from the link LED, or combined as a single LED -- is typically wired to the PHY (the chip which converts analog signals on the wire/fibre into logical bits), as part of its transceiver functions. But some transceivers use a mechanism separate from the typical interface (eg SGMII) to the MAC (the chip which understands Ethernet frames; it may be integrated into the PHY, or integrated into the CPU SoC). That auxiliary interface would allow the MAC to dictate what the LED should indicate.

    In either case, there isn't really a prescribed algorithm for what level of activity should warrant faster blinking, and certainly not any de facto standard between switch and NIC manufacturers. But generally, there will be something like 4 different "speeds" of blinking, based on whatever criteria the designers chose to use.
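
    Just to illustrate the idea (the thresholds and blink periods here are invented, precisely because there is no standard):

    ```python
    # Bucket the frame count seen in the last sampling window into one of a few blink periods.
    BLINK_PERIOD_MS = [500, 250, 125, 60]   # slowest to fastest

    def blink_period(frames_in_window):
        if frames_in_window == 0:
            return None                      # no activity: LED stays solid (link only)
        if frames_in_window < 10:
            return BLINK_PERIOD_MS[0]
        if frames_in_window < 100:
            return BLINK_PERIOD_MS[1]
        if frames_in_window < 1000:
            return BLINK_PERIOD_MS[2]
        return BLINK_PERIOD_MS[3]

    for frames in (0, 3, 42, 800, 50_000):
        print(frames, "->", blink_period(frames))
    ```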

  • I don't think you've listed what your (and your partner's?) financial timeline is. The number of years you have until needing to draw upon the nestegg is crucial for any discussion that involves retirement savings.

    Also, it may be worth posting this to !personalfinance@lemmy.ml as well.

  • Nice! Which part was the most challenging to do?

  • I don't disagree about bikes seemingly getting more complicated. But I'd counter that my 2023 ebike would immediately benefit if all the existing sensors were CAN: from the mid-drive motor+controller, I have the left and right brake sensors running to the front, plus the display, and the headlight power circuit. Branching off the display are the controls for turning on the bike.

    To the rear, I have the derailleur sensor, the speed sensor, and the taillight circuit. I've been meaning to also expose the brake light circuit, so that would be yet another set of wires.

    If I had CAN bus today, I'd shrink the wiring down to just: two CAN wires and two power wires to the front, and two CAN wires and two power wires to the rear. All sensors on the front attach to the display. All sensors at the rear are wired as a chain, with the tail/brake lights being at the very end.

    The ease of cable routing alone would be worth it. And perhaps those wires could then be armored, for improved resiliency.

  • In the spirit of c/nostupidquestion's Rule 1, asking two unrelated questions does not seem like it would accrue high-quality answers to either. And I see you've already added another post focusing on the first question.

    Since it doesn't cost 50 cents to make an additional post, I would suggest giving each question its own post. It would keep the discussion more focused, and actual answers should result.

  • Consider the following three types of monopolies:

    There are monopolies where a single entity has entrenched its position by having the categorically superior product: so far ahead of any competition that, even though no barriers are erected to prevent competitors, there simply is no hope and they will all play second fiddle. This type of monopoly doesn't really exist, except for a transient moment, for if there initially wasn't a barrier, there soon will be: as market leader, the monopolist accumulates capital that at best is unavailable to the competitors (ie zero-sum resources, like land or labor), and at worst stands in the way of free competition (eg brand recognition, legally-recognized intellectual property).

    The second type is the steady-state scenario following the first, which is a monopoly that benefits from or actively enforces barriers against their competitors. Intellectual property (eg Disney) can be viewed as akin to the conventional means of production (land, labor, capital), so the monopolist that controls the usable land or can hire the best labor will cement their position as monopolist. In economic terms, we could say that the cost to overturn the monopolist is very high, and so perhaps it's economically reasonable to be a second-tier manufacturer rather than going up against the giant. The key ingredient for the monopolist is having that structure in place, to keep everyone else at bay.

    The third type is the oddball, for it's what we might call a "natural" or "practical" monopoly. While land, labor, and capital are indeed limited, what happens when it's actually so limited that there's basically only one? It's a bit hard to conceptualize having just one plot of land (maybe an island?) or having just one Dollar, but consider a single person who has such specialized knowledge that she is the only such person in the world. Do we say she is a monopolist because she can command whatever price she wants for her labor? Is she a monopolist because she does not share her knowledge-capital? What if she physically can't, for the knowledge is actually experience, honed over a lifetime? If it took her a lifetime to develop, then she may already lack the remaining lifetime to teach someone else for their lifetime.

    I use this example to segue to the more-customary example of a natural monopoly: the local electricity distribution system, not to be confused with the electric grid at-large, which also includes long-distance power lines. The distinction is as follows: the big, long power lines can compete with each other, taking different routes over terrain, under water, or sometimes even partially conducting through the earth itself. But consider that at a local level, on a residential street, there can practically only be a single distributor circuit for the neighborhood.

    I cannot be served by Provider X's wires while Co-Op Y's wires serve my neighbor, and Corpo Z's wires serve the school down the road. Going back to the conventional means of production, we could say there is only one plot of land available to run these distributor circuits. So at most one entity can own and operate those wires.

    Laying all that background, let's look at your titular question. For monopoly types 1 and 2, it's entirely feasible to divide and collectivize those monopolies. But it's the natural monopolies that are problematic: if you divide them up (let's say geographically) and then collectivize them, there will still only ever be one "owner" of the distribution lines. You cannot have Collective A own a few meters of wire, and then Collective B owns a few meters in between, all while Collective C is connected at the end of the street. The movement of electric power is not amenable to such granular collectivization.

    To that end, the practical result is the same no matter how you examine it: a natural monopoly is one which cannot feasibly be split up, even when there's the will to do so. Generalizing quite a lot, capitalists would approach a natural monopoly with intent to exploit it for pure profit, while social democrats would seek to regulate natural monopolies (eg US states' public utility commissions), and democratic socialists would push for state ownership of all natural monopolies, while communists would seek the dissolution of the state and have the natural monopoly serve everyone "according to their need". But the monopoly still exists in all these scenarios, for it can't be done any other way.

    Other natural monopolies exist, but even things like radio spectrum are relatively plentiful compared to local power lines, for which there really is just one place to build them. We don't have wireless power yet.

  • So far as I'm aware, CAN makes a lot of sense when it's no longer just two devices talking to each other, but a bunch of devices talking amongst themselves. Using UART for the same scenario would result in a lot more signalling wires, whereas CAN only requires a single, twisted pair of data lines that are shared by all devices.

    Automobiles followed a similar progression, since CAN was a product of Bosch. Initially meant to simplify the connections between engine sensors, it later proved useful all around a car, from the switches that control power windows to the adjustment of power mirrors. Most importantly, it signals between all of those devices using just two thin wires.

    For ebikes, the mandatory data path is between the user display and the motor, so UART worked fine. But other peripherals like brake, speed, and gear sensors, those had their own wires, all having to go to a central controller somewhere. So might as well use CAN to simplify the wiring and maybe add new functionality:

    Imagine the display has a button to enable the headlights, and that sends a CAN signal to the controller to close the relay for the headlights. But maybe you have an auxiliary headlight that reads the same signal and turns on as well. And maybe the taillight also turns on, plus a wireless relay so that the lights in your pedals also turn on.
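
    As a toy illustration of that broadcast behavior, here's a sketch using the python-can library's virtual bus; the arbitration ID and payload for "lights on" are made up, since every vendor defines their own:

    ```python
    import can

    LIGHTS_ID = 0x310        # hypothetical "headlights" message ID
    LIGHTS_ON = [0x01]       # hypothetical payload

    display    = can.Bus(interface="virtual", channel="ebike")
    controller = can.Bus(interface="virtual", channel="ebike")
    aux_light  = can.Bus(interface="virtual", channel="ebike")

    # The display broadcasts once; every other node on the same bus sees the same frame.
    display.send(can.Message(arbitration_id=LIGHTS_ID, data=LIGHTS_ON, is_extended_id=False))

    for name, node in (("motor controller", controller), ("aux headlight", aux_light)):
        msg = node.recv(timeout=1.0)
        if msg and msg.arbitration_id == LIGHTS_ID and msg.data[0] == 0x01:
            print(f"{name}: lights on")   # the controller closes its relay; the aux light just turns on

    for bus in (display, controller, aux_light):
        bus.shutdown()
    ```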

    CAN is acceptable for wiring two things together, but it really shines when building a cohesive network of peripherals. Unlike modern computer data networks, all devices on a CAN bus receive the same messages and so they can all react to the same "broadcast". I have personally sniffed the CAN bus on an automobile to implement some nifty integration with a dashcam. Maybe CAN could be how a GPS bike computer continues measuring speed using the wheel sensors, even when in a tunnel.

    That said, I'd be remiss if I ignored a major downside of CAN: because it's not drop-dead easy to examine like UART, some manufacturers will implement strange, proprietary message types using CAN. This makes it harder for users to intercept or modify those signals, since there isn't any documentation. Reverse engineering is sometimes needed to deduce the meaning of certain CAN messages. Ideally, industry standardization around CAN and ebike sensors would mean they're all compatible with each other. Or at least, I hope that happens.

    Still, I'm of the opinion that CAN is light-years preferable to every manufacturer reinventing their own data bus. The electronics community has been poking and prodding CAN for decades, so using CAN means less reverse engineering overall.

  • They almost buried the lede:

    From an OEM perspective, the appeal is pretty clear. A universal mounting standard and unified CANBUS communication system can reduce tooling costs, simplify inventory, and shorten development timelines. Ananda says future M7000 derivatives will remain backward compatible, allowing brands to adapt quickly as market demands shift without starting from scratch with entirely new frame designs.

    There is a real problem with the ebike market today, when compared to the bicycle market, which is the wholesale lack of standardized parts. Sure, the bicycle in the 19th Century also didn't have standardized parts, but the difference is now very apparent: for acoustic bikes, there are just six standards for bottom brackets, but there are almost as many mid-drive ebike motor mounting patterns as there are manufacturers, of which there are many. This is just one example, and one can find incompatible ebike brake sensors, CAN vs UART data buses, headlight voltages, HUDs, and more.

    Without standardized parts, there cannot be widespread availability of parts. Without parts, there cannot be bike shops that can sustainably maintain people's ebikes, nor can riders attempt to extend the life of their ebikes on the road. Without modular replaceable parts, more e-waste and bicycle waste will be produced. Without standardization, vendor lock-in is the natural result, yielding unnecessarily higher prices for consumers.

    We need commoditization of basic ebike components, and there are no sufficiently-large players that can throw their heft around to force the change. Compare to, say, Shimano, who can basically create a new racing bike standard out of thin air, and the industry will comply.

    So I do appreciate when a manufacturer comes out with an ostensibly standards-based lineup, promising backwards compatibility. But I'm also skeptical: in computer design, some of the longest-lasting standards are the IBM PC (1980s IBM design adopted by clone manufacturers), PCI (1990s, from a consortium of PC makers), and color-coded ports for mouse/keyboard/VGA (2000s Intel-led consortium). What we see is that the most durable standards (de facto or otherwise) are multilateral in nature: it takes multiple players agreeing to standardize. Not every manufacturer has to be on board, but at a minimum, consistency within the same company would help.

    If we get to the stage where there are "format wars" over the specs for a mid-drive ebike motor, then that would be genuine progress, because a format war means we can identify actual factions that are producing those standards. HD DVD fans were certainly disappointed to lose the war to Blu Ray, but it never deprived them of their ability to watch what they already bought. Fortunately, bicycles are durable goods and can last for a lot longer than a stamped optical disk.

  • I Made This @lemmy.zip

    3D-printed PLA "dial wheel" to manually brute-force a combination lock-box

  • micromobility - Bikes, scooters, boards: Whatever floats your goat, this is micromobility @lemmy.world

    FortNine: How e-Bikes are Killing Motorcycles - Aniioki A9 Pro Max Review

  • Woodworking @lemmy.ca

    Seeking hand plane recommendations

  • micromobility - Bikes, scooters, boards: Whatever floats your goat, this is micromobility @lemmy.world

    Re-greasing a mid-drive ebike motor yields noticeable improvements

  • bike wrench @lemmy.world

    Re-greasing a mid-drive ebike motor yields noticeable improvements

  • Dullsters @dullsters.net

    Removing the stock grease inside an ebike motor

  • micromobility - Bikes, scooters, boards: Whatever floats your goat, this is micromobility @lemmy.world

    First ride of my Segway Ninebot G30LP, recommended from this community

  • micromobility - Bikes, scooters, boards: Whatever floats your goat, this is micromobility @lemmy.world

    Seeking e-scooter recommendations: slow, short range, 10-inch/25cm wheels

  • Newpipe @lemmy.ml

    Hiding 24/7 live streams from "What's New" tab

  • bike wrench @lemmy.world

    Anti-bite freehub on mid-drive ebike: long term complications?

  • I Made This (MOVED TO LEMMY.ZIP) @lemm.ee

    First attempts at cast iron restoration: Wagner skillets

  • micromobility - Bikes, scooters, boards: Whatever floats your goat, this is micromobility @lemmy.world

    I Can’t Believe I Have to Make This Video | Re: Ontario Bill 212, to destroy existing bike infra in Toronto

  • IPv6 @lemmy.world

    The realities of building an IPv6-only city | APNIC Blog

    blog.apnic.net/2024/10/29/the-realities-of-building-an-ipv6-only-city/

  • micromobility - Bikes, scooters, boards: Whatever floats your goat, this is micromobility @lemmy.world

    Survey of ebikes, escooter injuries: injured ages skew higher, not lower

    jamanetwork.com/journals/jamanetworkopen/fullarticle/2821387

  • I Made This (MOVED TO LEMMY.ZIP) @lemm.ee

    A wood bench made from scrapped pallets

  • Woodworking @lemmy.ca

    A wood bench made from scrapped pallets

  • I Made This (MOVED TO LEMMY.ZIP) @lemm.ee

    Making an 80 cm (31.5 inch) dumbbell from a Titan 15-inch adjustable dumbbell

  • Home Gym @lemmy.world

    Making an 80 cm (31.5 inch) dumbbell from a Titan 15-inch adjustable dumbbell

  • micromobility - Bikes, scooters, boards: Whatever floats your goat, this is micromobility @lemmy.world

    $1000 Honda Suitcase - Motocompacto Review

  • micromobility - Bikes, scooters, boards: Whatever floats your goat, this is micromobility @lemmy.world

    My existing mid-drive Class 3 ebike weighs 95 lbs (43 kg) loaded. What could I replace it with?