
  • From memory MTP is pretty flaky and quite slow.

    ADB push is pretty good but at that stage rsync is just as easy.

    Put SSH on the phone and you can do it all from the computer too; a rough sketch is below.
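    A rough sketch of that workflow, assuming an SSH server is running on the phone (e.g. sshd under Termux) and rsync is available on both ends; the host, port and paths are placeholders:

    ```python
    import subprocess
    from pathlib import Path

    # Pull the phone's camera roll to the computer with rsync over SSH.
    # The address, port and paths are placeholders (Termux's sshd listens on
    # 8022 by default) -- adjust everything to your own setup.
    subprocess.run(
        [
            "rsync", "-avz", "--progress",
            "-e", "ssh -p 8022",
            "phone@192.168.1.50:/sdcard/DCIM/Camera/",   # source on the phone
            str(Path.home() / "Pictures" / "phone"),     # destination on the computer
        ],
        check=True,
    )
    ```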

  • Are you able to buy unlocked directly from Google? I typically avoid the carrier when I can.

  • The whole notion of LSP has been nice.

  • An LLM is, fundamentally, an equation: map each word to a number, run the numbers through the equation, and map the result back to words. If you're curious, write a name generator using torch with an RNN (plenty of tutorials online, and there's a sketch at the end of this comment) and you'll get a good idea.

    The parameters of the equation are referred to as weights. They release the weights but may not have released:

    • source code for training
    • source code for inference / validation
    • training data
    • cleaning scripts
    • logs, git history, development notes etc.

    Open source is typically more concerned with the open nature of the code base to foster community engagement and less on the price of the resulting software.

    Curiously, open-weight LLM development has somewhat flipped this on its head: the resulting software is freely accessible and distributed, but the source code and training material are less accessible.
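    Here's a minimal sketch of that name-generator exercise with a toy list of names (an assumption; real tutorials use thousands), just to show the words-to-numbers, equation, numbers-to-words loop:

    ```python
    import torch
    import torch.nn as nn

    names = ["anna", "boris", "carla", "dmitri", "elena"]  # toy stand-in dataset
    chars = sorted(set("".join(names))) + ["."]            # "." marks end-of-name
    stoi = {c: i for i, c in enumerate(chars)}             # char -> number
    itos = {i: c for c, i in stoi.items()}                 # number -> char

    class CharRNN(nn.Module):
        def __init__(self, vocab, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x, h=None):
            out, h = self.rnn(self.embed(x), h)
            return self.head(out), h            # logits over the next character

    model = CharRNN(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(200):                    # train: predict the next character
        for name in names:
            seq = [stoi[c] for c in name] + [stoi["."]]
            x = torch.tensor(seq[:-1]).unsqueeze(0)   # input chars
            y = torch.tensor(seq[1:])                 # targets, shifted by one
            logits, _ = model(x)
            loss = loss_fn(logits.squeeze(0), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # sample a new "name" by feeding the model's own output back in
    x, h, out = torch.tensor([[stoi["a"]]]), None, "a"
    for _ in range(10):
        logits, h = model(x, h)
        nxt = torch.multinomial(logits[0, -1].softmax(-1), 1).item()
        if itos[nxt] == ".":
            break
        out += itos[nxt]
        x = torch.tensor([[nxt]])
    print(out)
    ```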

  • The energy use isn't that extreme. A forward pass on a 7B can be achieved on a MacBook (rough sketch at the end of this comment).

    If it's code and you RAG over some docs you could probably get away with a 4B tbh.

    ML models use more energy than simpler models, but not that much more.

    The reason large companies are using so much energy is that they're running absolutely massive models for everything so they can market a product. If individuals used the right model to solve the right problem (size, training, feeding it the right context, etc.) there would be no real issue.

    It's important we don't conflate the excellent progress we've made with transformers over the last decade with an unregulated market, bad company practices and limited consumer tech literacy.

    TL;DR: LLM != search engine
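    For a sense of what running a small model locally looks like, here's a rough sketch using the Hugging Face transformers library; the model ID is a placeholder, and on a laptop you'd typically reach for a quantised checkpoint:

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "some-org/some-7b-model"   # placeholder, substitute a checkpoint you have
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

    prompt = "Summarise what this function does:"
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=100)   # one forward pass per token
    print(tok.decode(out[0], skip_special_tokens=True))
    ```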

  • I use it a lot when I'm writing my notes (i.e. Joplin/Obsidian): I'll run Flux or Stable Diffusion for a few iterations until I can create an image that is consistent with what I'm writing (rough sketch below).

    It can be really convenient to recognise an image at a glance as you're browsing through notes that are otherwise just filled with code or maybe a recounting of the day.

    I'm sure most consumers consider excessive use of generative AI to be in bad form. It certainly doesn't exude professionalism.
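    A rough sketch of that iteration loop with the diffusers library; the checkpoint ID and output folder are placeholders for whatever you actually run locally:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # any SD checkpoint you have locally
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")                  # or "mps" on a Mac

    prompt = "minimalist ink sketch of a home server rack, notebook illustration"
    for i in range(4):                      # a few iterations until one fits the note
        image = pipe(prompt).images[0]
        image.save(f"notes/attachments/server-rack-{i}.png")
    ```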

  • Oh, okay, I understand what you're saying now.

    Yeah, I don't trust any of the VPN providers. There's just no evidence that they're trustworthy. I reach for Tor (or i2p sometimes).

    I typically run all the torrenting stuff in a container, I've never actually used that VPN to browse. I just spin the container up and down when I want my bandwidth back.

  • I've had a good experience with AirVPN. I mean, I only use it for torrenting, but... Is there a good reason not to go with them for torrents?

  • To be fair, wireguard is pretty painless.

  • Absolutely that's what the internet was made for!

    But keep family photos a bit more secure, particularly if they're syncing directly from your phone. I take a lot of explicit photos of my wife, but also of code I'm writing on my computer, the kids playing, etc.

  • Keepass with rsync / unison or a local git server works pretty well too.

  • AirVPN has port forwarding, I believe.

  • I don't actually dislike AI imagery; I think it can produce interesting results. However, I must concede that this is an excessive use of bog-standard, boilerplate AI imagery.

  • I don't think it would have made too much of a difference, because the state-of-the-art models still aren't a database.

    Maybe more recent models could store more information in a smaller number of parameters, but it's probably going to come down to the size of the model.

    The only exception is if there is indeed some pattern in modern history that the model is able to learn, but I really doubt that.

    What this article really brings to light is that people tend to use these models for things they're not good at, because the marketing runs contrary to what they actually are.

  • I think they all would have performed significantly better with a degree of context.

    Trying to use a large language model like a database is simply a misapplication of the technology.

    The real question: if you gave a human an entire library of history, would they be able to identify the relevant paragraphs from a passage that contains only semantic information? Probably not. That is how we need to think about using these things.

    Unfortunately, companies like OpenAI really want this to be the next Google, because there's so much money to be made by selling it as a product to businesses who don't care to roll more efficient solutions.

  • Well, that's simply not true. The LLM is simply trained on patterns. Human history doesn't really have clear rules the way programming languages do, so the model isn't going to internalise it very well. But the English language does have patterns, so if you used a semantic or hybrid search over a corpus of content and then used an LLM to synthesise well-structured summaries and responses, it would probably be fairly usable (see the sketch at the end of this comment).

    The big challenge we're facing with media today is that many authors do not have any understanding of statistics, programming or data science/ML.

    An LLM is not AI; it's simply an application of a NN over a large data set that works really well. So well, in fact, that the runtime penalty is outweighed by its utility.

    I would have killed for these a decade ago, and they're an absolute game changer with a lot of potential to do a lot of good. Unfortunately, the uninitiated among us have elected to treat them like a silver bullet because they think it's the next dot-com bubble.
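    A hedged sketch of that retrieve-then-synthesise idea using sentence-transformers for the semantic search step; the corpus, query and the final generate call are placeholders, and any local or hosted LLM could do the synthesis:

    ```python
    from sentence_transformers import SentenceTransformer, util

    corpus = [
        "Paragraph one of some history text ...",
        "Paragraph two covering a different event ...",
        "Paragraph three with more detail ...",
    ]  # stand-in documents

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    corpus_emb = embedder.encode(corpus, convert_to_tensor=True)

    query = "What happened during the event described in the question?"
    query_emb = embedder.encode(query, convert_to_tensor=True)

    hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]   # nearest paragraphs
    context = "\n\n".join(corpus[h["corpus_id"]] for h in hits)

    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    # answer = llm.generate(prompt)   # hand the grounded prompt to whatever LLM you run
    ```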

  • To play devil's advocate, LMG is a private business concerned with profit. I would be surprised if there weren't many other companies well aware of the racket that also kept their mouth shut.

    This doesn't make their behaviour justifiable in any way, but it does highlight how silly this analogy is when we're comparing an open source developer of one of the largest projects I could think of to a private media group.

    It's almost like comparing NBC News to the Salvation Army.

  • He wrote a kernel that is free, widely considered secure, and paves the way for secure computing for the general public. A tremendous accomplishment.

    He may be abrasive and unkind but the Linux kernel has been a real positive contribution to all.

    • Starting Strength book
      • The program is simplistic; don't follow all the advice in the book, but the guidance around compound lifts is good
      • Follow that basic program for a month or two (or Ice Cream Fitness); depending on your level of fitness you might like PHAT or PHUL
    • Couch to 5k
      • Once you can run a 10K, it's simply a matter of determining what your goals are

    On the software side:

    • Fitnotes
      • Although it's proprietary, it's free and it lets you export your data to SQLite. Any LLM (even a local one) will cobble together a dashboard with Plotly/Jupyter or even PyQt without much pain (rough sketch below)
    • Waistline is an application available on F-Droid; this one is open source, so send the developer a donation if you like it.
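    As a rough sketch of that dashboard idea, assuming the export lands in a SQLite file; the table and column names below are guesses at the schema, so open the database and check the real ones first:

    ```python
    import sqlite3
    import pandas as pd
    import plotly.express as px

    conn = sqlite3.connect("FitNotes_Backup.fitnotes")   # path/filename are placeholders
    df = pd.read_sql_query(
        "SELECT date, exercise, weight, reps FROM training_log",   # assumed schema
        conn,
    )
    df["volume"] = df["weight"] * df["reps"]

    # total lifted volume per week
    weekly = df.groupby(pd.to_datetime(df["date"]).dt.to_period("W"))["volume"].sum()
    plot_df = weekly.reset_index().assign(date=lambda d: d["date"].astype(str))
    fig = px.line(plot_df, x="date", y="volume", title="Weekly training volume")
    fig.show()
    ```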