Posts 3 · Comments 179 · Joined 3 yr. ago

  • There's probably a path to female-only reproduction using artificial means. I suppose it would be possible with men too, if we could synthetically gestate a child, but it's probably easier with women since they possess the evolved ability to gestate. Parthenogenesis has occurred in other organisms; with help, it's perhaps possible for humans too.

  • This already happens intrinsically in the models. The tokens are abstracted in the internal layers and only translated back into next-token predictions at the output layer. Training visual models is slightly different because you're not outputting tokens but pixel values (or possibly bounding boxes or edges, though not usually; conversely, if the model isn't generative you may be predicting labels, which could theoretically live in token space).

    The field itself is actually fairly stagnant in architecture. It's still just attention layers all the way down (minimal sketch below): more context length, more layers, wider layers, trained on more data. I personally think this approach will never achieve AGI or anything like it. It will get better at perfectly reciting its training data, but I don't expect truly emergent phenomena to occur with these architectures just because they're very big. They'll be decent chatbots, but we already have that, and they'll just consume ever more resources for vanishingly small improvements. They won't functionally improve any true logical capability beyond regurgitating, very brittlely, logical paths already trodden in their training data, because they do not actually understand the logic or why it is valid; they have no true state model of the objects described in the token space they're traversing probabilistically.
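    For concreteness, here's a minimal sketch of the pattern I mean, in PyTorch with made-up dimensions (this is not any real model's architecture, and causal masking and positional encodings are omitted for brevity). Tokens exist only at the edges; everything in between is the same attention block stacked deeper and wider:

    ```python
    import torch
    import torch.nn as nn

    class TinyLM(nn.Module):
        def __init__(self, vocab_size=32000, d_model=512, n_heads=8, n_layers=6):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)  # tokens abstracted into vectors
            block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.layers = nn.TransformerEncoder(block, n_layers)  # the "more/wider layers" knob
            self.lm_head = nn.Linear(d_model, vocab_size)  # back to token space only here

        def forward(self, token_ids):
            h = self.embed(token_ids)   # internal layers never see tokens, only vectors
            h = self.layers(h)          # attention all the way down
            return self.lm_head(h)      # logits over the vocabulary: next-token prediction

    logits = TinyLM()(torch.randint(0, 32000, (1, 16)))
    print(logits.shape)  # torch.Size([1, 16, 32000])
    ```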

  • Sorry, I'm not saying that's a good thing. It's not just the context that's expanding but the parameters of the base model. I'm saying at some point you've just saved a compressed version of the majority of the content (we're already kind of there), and you'd be able to decompress it nearly losslessly. That doesn't make it more useful for anything other than recreating copyrighted works.

  • Agreed. Continue the momentum and soon perhaps Mexico will go with a 4 day workweek.

  • Current models are speculated at 700 billion parameters plus. At 32-bit precision (a full single-precision float, 4 bytes per parameter), that's 2.8 TB of RAM per model, or about 10 of these units. There are ways to lower it, but if you're trying to run full precision (say, for training) you'd use over 2x this, something like 4x depending on how you store gradients and optimizer updates. It's possible, I suppose, that they actually train at 32-bit, but I'd be kind of surprised.

    Edit: Also, they don't release parameter counts anymore, but some folks think newer models are around 1.5 trillion parameters, so figure 2-3x the numbers above for those (back-of-the-envelope math below). The only real strategy for these guys is bigger. I think it's dumb, and the returns are diminishing rapidly, but you've got to sell the investors. If reciting nearly whole works verbatim is easy now, it's going to be exact if they keep going: they'll approach parameter counts that can just straight-up store the works themselves.
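    Back-of-the-envelope version of the math above. The parameter counts are rumors, and the 4x training multiplier is a common rule of thumb (weights + gradients + optimizer state), not a measured figure:

    ```python
    # Memory needed just to hold model weights, in decimal terabytes.
    def weight_memory_tb(n_params, bytes_per_param):
        return n_params * bytes_per_param / 1e12

    for n_params in (700e9, 1.5e12):  # speculated sizes, not published figures
        fp32 = weight_memory_tb(n_params, 4)  # 32-bit single precision
        fp16 = weight_memory_tb(n_params, 2)  # 16-bit half precision
        print(f"{n_params / 1e9:,.0f}B params: {fp32:.1f} TB fp32, {fp16:.1f} TB fp16, "
              f"~{4 * fp32:.0f} TB to train (rough 4x rule of thumb)")
    # 700B params: 2.8 TB fp32, 1.4 TB fp16, ~11 TB to train (rough 4x rule of thumb)
    # 1,500B params: 6.0 TB fp32, 3.0 TB fp16, ~24 TB to train (rough 4x rule of thumb)
    ```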

  • Thank you! This is really good info. I’ll take a look!

  • This is good to know. Can you provide a link to that court case or anything?

  • Valve states you can't sell a Steam key on another platform for cheaper than on Steam, not that you can't sell your game anywhere else at a lower price. That's slightly different from this case. Not defending it, just saying it actually is different.

  • This, and "lossy compression" is exactly right.

    Alternatively, it's a decomposition of a big matrix (think a very large Excel sheet) wherein each cell is the probability of observing some word given that you've observed the others (really it's tokens, of course, but for the sake of argument). Like, you could literally make a transformer in Excel (toy sketch below). It wouldn't run, but that's Excel's fault, not the math's.

    Aside: I'm pretty sure distributing a lossy compression and decompression algorithm is distribution, and charging for it makes it commercial distribution on top of that. Realistically, if this is allowed, anyone should be able to legally pirate anything for any reason, as long as it's passed through lossy compression and decompression first.
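    Toy version of the matrix picture, for anyone who wants it concrete. A real transformer factors this matrix and conditions on whole contexts rather than just the previous word, but a bigram table shows the shape of the idea:

    ```python
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate".split()

    # One "cell" per (row = previous word, column = next word), holding a count.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word_probs(prev):
        """Normalize a row of the matrix into conditional probabilities."""
        row = counts[prev]
        total = sum(row.values())
        return {word: c / total for word, c in row.items()}

    print(next_word_probs("the"))  # {'cat': 0.667, 'mat': 0.333} (approximately)
    ```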

  • The oxidation of sugar, which our bodies accomplish, can be directly measured. The end products are CO2 and H2O. If you fully combust sugar in a bomb calorimeter, the energy released is equivalent. So whether your body breaks something apart or fire breaks it apart, they are calorimetrically equivalent, which is why people bothered with kcals for food in the first place.

    Edit: Just to head it off at the pass, aerobic oxidation in the body is about 35% efficient. You can measure this by looking at the metabolites formed, and the remaining ~65% of the theoretical energy can be directly measured as heat (worked numbers at the end of this comment). So when I say it doesn't matter whether our bodies do it or a calorimeter does it, I mean that the total energy is conserved, because the laws of thermodynamics apply. If you eat more sugar, your body will oxidize it and yield the same products, or possibly store it as fat. If you eat protein you can make the same measurements; it just enters the metabolic pathways by a slightly different process. Fire is just a chemical reaction, after all; it's not special. Your body also does chemical reactions.

    Whether you see value in it or not, kcals describe a physical reality that holds within human cells and can be used to describe how human bodies interact with food energetically. The measure does not purport to solve all of human nutrition, and nobody sane uses it for that. It is a gross representation of energetic equivalence, and for that purpose it serves reasonably well.

    So again, if you believe your interpretation of human nutrition, biology, and chemistry is novel or more useful, then submit your recommendations to a journal. Journals even accept review papers, which are simply syntheses of existing papers; you don't even have to conduct experiments. You can literally go on PubMed, synthesize a paper, and submit it for publication.

    Don't submit to a predatory journal that uses a pay-to-publish format, though. Submit to an established, peer-reviewed journal.

    Alternatively, just publish it on bioRxiv or something and then post it here for people to go and review, though that's not as good as getting it peer reviewed.

    If you do publish something, do post another comment here and I’ll review it.
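    Worked numbers for the sugar example, since I referenced them above. The combustion enthalpy of glucose and the ~35% aerobic efficiency are textbook ballpark figures, not exact constants:

    ```python
    GLUCOSE_KJ_PER_MOL = 2805   # heat of combustion: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
    GLUCOSE_G_PER_MOL = 180.16  # molar mass of glucose
    KJ_PER_KCAL = 4.184
    EFFICIENCY = 0.35           # approximate fraction captured as ATP; the rest is heat

    kcal_per_g = GLUCOSE_KJ_PER_MOL / GLUCOSE_G_PER_MOL / KJ_PER_KCAL
    print(f"{kcal_per_g:.2f} kcal/g total")  # ~3.7 kcal/g, same in a body or a bomb
    print(f"{EFFICIENCY * kcal_per_g:.2f} kcal/g captured as ATP, "
          f"{(1 - EFFICIENCY) * kcal_per_g:.2f} kcal/g released as heat")  # energy conserved
    ```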

  • Sure, but then what is your alternative for easily assessing, in a reportable way, the energy density of a food? Bomb calorimetry doesn't say "you will get exactly this amount of energy from this food"; it simply says "a gram of this material has about this much energy density." Evolution has done a remarkable job of maximizing energetic recovery from compounds, and it's simply true that eating more energy in the form of food than the body uses will result in the body storing the excess. Kcal is a convenient relative metric that corresponds to this phenomenon. That is objectively true. Is it exact? No. Does it claim to be exact? Again no, but you can calculate the yield by looking at digestive and metabolic processes within constraints, and the relative amount is still useful as a gross measure.

    It's fine to say that kcal aren't what we eat. But then food isn't really what we eat either: food is simply compounds that our body can use to perform chemical work. You can quantify this work. You can use a word other than "work" to describe chemical reactions, but the semantic point is conserved regardless.

    A good proxy for this general capacity of the human body to extract energy from organic digestible material is the kcal. If you have an alternative that is easily calculated and easily understood, I'd recommend writing it up and submitting it to a nutrition or medical journal, where it can be peer reviewed and, if it holds merit, published, to be more broadly examined and perhaps adopted.

  • A calorie is a unit of heat energy: the amount needed to raise a gram of water by one degree Celsius. The energetic yield of a gram of protein can therefore be described by how much water it can heat when burned (arithmetic at the end of this comment).

    That energy is constant; there is no magical world where protein suddenly has more energy density in a human body than it did when burned in a calorimeter. That would break the laws of thermodynamics.

    You can question whether there is a metabolic advantage to consuming certain types of food; that is, does the human body use some foods more efficiently than others, such that the same calories in a different composition result in differential weight gain or loss? This actually is studied in nutrition research. But the laws of thermodynamics still apply there. The first says energy is conserved (kcal is fine for describing this as an upper bound), and the second concerns the free energy available to a chemical system to perform "work", which isn't literally heating a gram of water (though it sort of is; we're warm-blooded creatures) but rather the capacity of the substance to ultimately contribute to chemical processes in a cell, such as the generation or consumption of ATP.

    And yes, every gram of food has a specific amount of energy it can contribute to those chemical processes, and it's tied to the total energy in a gram of that material, which is conveniently measured in a calorimeter.
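    And the arithmetic I referenced above, using the kcal definition directly. The 5.7 kcal/g gross bomb-calorimeter yield for protein is a textbook ballpark, used here only for scale:

    ```python
    # By definition, 1 kcal raises the temperature of 1 kg of water by 1 deg C.
    PROTEIN_GROSS_KCAL_PER_G = 5.7  # assumed typical bomb-calorimeter yield

    grams_protein = 1.0
    kg_water = 1.0
    delta_t = PROTEIN_GROSS_KCAL_PER_G * grams_protein / kg_water
    print(f"Burning {grams_protein:g} g of protein can heat {kg_water:g} kg of water "
          f"by ~{delta_t:.1f} deg C")  # same energy whether a flame or a cell releases it
    ```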

  • Sort of. Thermodynamics still definitely plays a role. You cannot get more calories out than you ingest, and over time you cannot perform more work than is electrochemically possible; this is true precisely because of thermodynamic constraints.

    The laws of thermodynamics themselves aren't tongue in cheek. I took the poster who said you can't escape the laws of thermodynamics to be making a tongue-in-cheek response; in other words, they were being witty, saying that the reason this finding was observed is the fundamental laws governing energy consumption and use in the human body. That absolutely meets the rhetorical definition of tongue in cheek.

  • The one that says you cannot use more energy than your body takes in, and that to lose weight you have to burn more calories than you eat. It's just a tongue-in-cheek way of saying the amount of energy in a (closed) system is conserved.

    Of course, one question is: does intermittent fasting somehow increase your basal metabolic rate, or cause you to digest your food less effectively per unit eaten? Either would satisfy thermodynamic constraints while still producing an apparently larger effect. This study indicates that at a macro level people do not have more success with this strategy than with traditional calorie restriction, which does not support either hypothesis. It doesn't disprove them, but you don't disprove such things, you only support them; and this doesn't support them.

  • Looking through, it seems like for the most part these are very niche and/or require the user to be using SSO or enterprise recovery options and/or to change and rotate keys or resync often. I think few personal users would be interacting with that attack surface or accepting organizational invites, but it is serious for organizations (probably why they're moving quickly to address it).

    Honestly, I think a server being covertly controlled and undetected in Bitwarden's fleet while also performing these attacks is... unlikely? Certainly less likely than passwords being stolen through individual site hacks, or probably even bank breaches. At that point it would just be easier to do these kinds of manipulations directly on bank accounts or crypto wallets or email accounts. Then again, if you crack a vault like this you theoretically get the goodies for all of those too, at least for a short window (assuming those accounts weren't protected by some 2FA other than email).

    Not to minimize these issues. They need to fix them; I'm just trying to ascertain how severe they are and whether individual users have much cause for concern.

  • Man, it's good the Liquid Glass guy left, because holy crap what a steaming pile of terrible UX, but dang do they need to fix this stuff.

    Also, stop overly rounding corners. It's stupid and leaves me less space to interact with the content I actually want. Skeuomorphic design was better than this by a lot, even if it looked a little "dated".

  • Man they’re winning so hard! They now have the freedom to work more hours and thus the freedom to have less… erm… free time.

  • That's where rent control and tax breaks come in, as well as scale if you're opening multiple stores. I'm not aware of a large non-profit grocery chain.

  • "Thinking" is just an arbitrary process for generating additional prompt tokens. In building their training data, they've realized that people suck at writing prompts and that their models clearly lack causal or state models of anything; the models are simply good at word substitution into a context similar enough to the prompt they're given. So the solution to sucky prompt writing, and a way to sell people on the models' capacity (think "Full Self-Driving": it's never been full self-driving, but it's marketed that way to make people think it's super capable), is to have the model itself look up better templates within its training data that tend to produce better-looking and better-sounding answers.

    The thinking is not thinking. It's fancier probabilistic lookup (sketch below).
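    Here's the sketch: a toy of the control flow only, not any vendor's actual implementation, with `sample_tokens` standing in for an ordinary autoregressive decoder. The "thinking" pass and the answer pass are the same next-token machinery; the only difference is what ends up in the context:

    ```python
    def sample_tokens(model, prompt, stop):
        """Ordinary next-token sampling until `stop` appears; mocked below."""
        return model(prompt, stop)

    def answer_with_thinking(model, user_prompt):
        # Pass 1: the model expands the prompt with "reasoning" tokens, produced
        # by the same machinery as any other output. In effect it rewrites a
        # sloppy prompt into a template it can continue more fluently.
        thoughts = sample_tokens(model, user_prompt + "\n<think>", stop="</think>")
        # Pass 2: answer conditioned on the prompt plus the generated tokens.
        return sample_tokens(model, user_prompt + "\n<think>" + thoughts + "</think>\n", stop="<eos>")

    # Mock "model", just enough to show the flow.
    mock = lambda prompt, stop: f"(tokens sampled until {stop!r})"
    print(answer_with_thinking(mock, "Why is the sky blue?"))
    ```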

  • Steam Hardware @sopuli.xyz

    Probability that the next steam deck is arm based?

  • Selfhosted @lemmy.world

    Wayland GUI in an Unprivileged LXC container (Proxmox)

  • Selfhosted @lemmy.world

    First time software set up help