It's a buzzword in crypto but has real applications outside of that
I think it was this one from 2022 that mentions CoreWeave borrowing 29 billion backed by GPUs.
especially nice given how fast they depreciate.
I thought they already have that? At least I remember reading about CoreWeave or similar using their GPUs as collateral to buy more GPUs like a year ago.
It'd be more accurately titled Star Trek: Burnham
I always called it 'The Burnham Show, starring Michael Burnham'.
It was crazy to me how they could make every plot line revolve around her in some way, have her always be part of figuring out the solution, everyone else fawning over how great she is and wondering what they'd do without her, just the lengths the writers went to to insert her everywhere. It's just so on the nose and gets really tiring after like 3 seasons.
Compared with like DS9, where you could have whole episodes where the main character, Quark, only has like 1-2 lines and they focus more on supporting cast like Sisko, or just Bashir and Garak (sorry, I couldn't resist :) )
I don't disagree. I meant that for users it is incidental. Most users probably wouldn't buy them with spying as the main purpose (they just also don't really care that they can spy), making them much more widespread than something where spying was the main use-case, which makes the problem worse.
And as someone else mentioned, once you do get it, the temptation to use it for spying is there. That makes it worse than e.g. a spy pen imo: with that you'd need the intent to spy first and then buy it, but with this, you buy it for whatever reason and then think "oh, I could just spy now" since you already own the device, which I'd argue leads to more overall spying. Maybe you see a video online and go "oh, I can just do that, right now, no effort on my part, since I already own this device".
And for Meta it's like tracking cookies on crack
sure, but there the spying is the purpose, whereas with the glasses it's incidental.
You don't buy such gadgets if you don't intend to spy, but people would buy Meta glasses for other reasons, and Meta being able to spy on you is just a side-effect. Plus it's a matter of scale; this has the potential of being much more prominent than some spy camera.
I posted above already, but repeating for visibility. There is an initiative in the works right now to combat this: https://www.no-lobbying.ch/
Then do something about it: https://www.no-lobbying.ch/
I remember reading that hotel TVs are an option. They also have an ad platform, but one intended for the hotel owner to send ads from, not some 3rd party. Not exactly dumb but also not as bad as regular TVs.
And of course a projector or PC screen connected to some cheap small form factor PC is always an option, with Kodi or similar on it. I haven't owned a TV in like 10 years, just using a small Linux PC with a projector, and a TV tuner card in the past (nowadays my ISP offers all public channels over IPTV).
For the byte pair encoding (how those tokens get created) I think https://bpemb.h-its.org/ does a good job of giving an overview. After that, I'd say self-attention from 2017 is the seminal work that all of this is based on, and the most crucial to understand. https://jtlicardo.com/blog/self-attention-mechanism does a good job of explaining it. And https://jalammar.github.io/illustrated-transformer/ is probably the best explanation of the transformer architecture (LLMs) out there. Transformers are made up of a lot of self-attention.
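If you want to poke at the idea in code, here's a minimal sketch of scaled dot-product self-attention in numpy. Purely illustrative: the sizes, random weights, and inputs are made up, not taken from any real model.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only;
# dimensions, weights, and inputs are made up, not from any real model).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project each token vector
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # how much each token attends to every other token
    return softmax(scores) @ V                 # weighted sum of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                    # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (5, 8): one new vector per token
```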
It does help if you know how matrix multiplications work, and how the backpropagation algorithm is used to train these things. I don't know of a good easy explanation off the top of my head, but https://xnought.github.io/backprop-explainer/ looks quite good.
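As a toy example of what backprop boils down to, here's fitting a single weight with a manually computed gradient (made-up numbers, just to show the forward pass / gradient / update loop; real models chain this rule through millions of weights):

```python
# Toy backpropagation sketch: learn y = 2x with one weight via manual gradients.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                      # target the model should learn
w = 0.0                          # single trainable weight, starts at a "random" value

for step in range(100):
    pred = w * x                            # forward pass
    loss = ((pred - y) ** 2).mean()         # how wrong we are
    grad = (2 * (pred - y) * x).mean()      # d(loss)/d(w) via the chain rule
    w -= 0.05 * grad                        # nudge the weight against the gradient

print(round(w, 3))               # ends up ≈ 2.0
```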
And that's kinda it. You just make the transformers bigger, with more weights, tack on a lot of engineering around them, like being able to run code and making it run more efficiently, exploit thousands of poor workers to fine-tune it better with human feedback, and repeat that every 6-12 months forever so it can stay up to date.
Well, each token has a vector. So 'co' might be [0.8,0.3,0.7], just instead of 3 numbers it's like 100-1000 long. And each token has a different such vector. Initially, those are just randomly generated. But the training algorithm is allowed to slowly modify them during training, pulling them this way and that, whichever way yields better results. So while for us, 'th' and 'the' are obviously related, for a model no such relation is given. It just sees random vectors, and the training reorganizes them so they slowly take on some structure. So who's to say if for the model 'd', 'da' and 'co' are in the same general area (similar vectors) whereas 'de' could be in the opposite direction. Here's an example of what this actually looks like. Tokens can be quite long, depending on how common they are; here it's ones related to disease-y terms ending up close together, as similar things tend to cluster at this step. You might have a place where it's just common town name suffixes clustered close to each other.
And all of this is just what gets input into the LLM, essentially a preprocessing step. So imagine someone gave you a picture like the above, but instead of each dot having some label, it just had a unique color. And then they give you lists of different colored dots and ask you what color the next dot should be. You need to figure out the rules yourself, come up with more and more intricate rules that are correct most of the time. That's kinda what an LLM does. To it, 'da' and 'de' could be identical dots in the same location or completely different.
Plus of course that's before the LLM not actually knowing what a letter or a word or counting is. But it does know that 5.6.1.5.4.3 is most likely followed by 7.7.2.9.7 (simplified representation), which, when translated back, maps to 'there are 3 r's in strawberry'. It's actually quite amazing that they can get it halfway right given how they work, just based on 'learning' how text structure works.
But so in this example, US state-y tokens are probably close together, 'd' is somewhere else, the relation between 'd' and the different state-y tokens is not at all clear, plus other tokens making up the full state names could be who knows where. And then there's whatever the model does on top of that with the data.
For a human it's easy: just split by letters and count. For an LLM it's trying to correlate lots of different and somewhat unrelated things to their 'd-ness', so to speak.
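To make the vector thing concrete, here's a tiny made-up sketch of an untrained embedding table, where every token starts out as a random vector with no built-in notion of which tokens are related (the token list and dimensions are invented for illustration):

```python
# Sketch of the embedding lookup described above: each token gets a random
# vector at the start, and only training makes related tokens end up similar.
import numpy as np

rng = np.random.default_rng(42)
vocab = ["th", "the", "de", "da", "co"]
embeddings = {tok: rng.normal(size=100) for tok in vocab}  # untrained: pure noise

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Before training there is no reason for 'th' and 'the' to be any more
# similar than 'th' and 'co' -- structure only appears after training.
print(cosine(embeddings["th"], embeddings["the"]))
print(cosine(embeddings["th"], embeddings["co"]))
```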
Huh that actually does sound like a good use-case of LLMs. Making it easier to weed out cheaters.
They don't look at it letter by letter but in tokens, which are automatically generated separately based on occurrence. So while 'z' could be its own token, 'ne' or even 'the' could be treated as a single token vector. Of course, 'e' would still be a separate token when it occurs in isolation. You could even have 'le' and 'let' as separate tokens, afaik. And each token is just a vector of numbers, like 300 or 1000 numbers that represent that token in a vector space. So 'de' and 'e' could be completely different and dissimilar vectors.
So 'delaware' could look to an LLM more like de-la-w-are or similar.
Of course you could train it to figure out letter counts based on those tokens with a lot of training data, though that could lower performance on other tasks, and counting letters just isn't that important, I guess, compared to other stuff.
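If you want to see actual token splits, something like the tiktoken package works (assuming you have it installed; the exact splits depend on which encoding you pick, so treat the output as illustrative):

```python
# Quick way to inspect how a real tokenizer splits words.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for word in ["delaware", "strawberry", "the"]:
    ids = enc.encode(word)                    # token IDs for the word
    pieces = [enc.decode([i]) for i in ids]   # the text each ID maps back to
    print(word, "->", pieces)
```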
I'd just offer refunds immediately, for everything
You can't pay the bill? That's ok, you get a refund.
You just wanted someone to shout at because Klarna sucks? Refund.
You can't log in? Believe it or not, also a refund.
One other use case where they're helpful is 'translation'. Like, I have a docker compose file and want a helm chart/kubernetes yaml files for the same thing. It can get you like 80% there, and save you a lot of yaml typing.
Won't work well if it's more than like 5 services, or if you wanted to translate a whole code base from one language to another. But converting one kind of file to another one with a different language or technology can work ok. Anything to write less yaml…
With less cold water coming down the US east coast, temperatures will rise. With more energy (heat) in that system, hurricanes will be more severe and frequent.
I always bring that up when they ring the door. There are 144k spaces available and 8.8 million of you, what are you even doing here?
Of course there are. But I mean, women's hormones do affect mood during the menstrual cycle (my wife certainly says she's more irritable before her period), and afaik the hormone therapy involves some of the same hormones, so it didn't seem far-fetched at all to me that it could play a role. Hence me asking.
But it could just as well have been some deep-seated anger at the world or similar, or something in between. Mostly I was just trying to think of reasons why she might not be as bad as she seemed, benefit of the doubt kind of thing.
I used to work with a trans woman who was a huge bitch, at least some of the time. Like actually shouting at coworkers for tiny mistakes, all-caps shouting in company chat at people trying to help with stuff, thinking she's the smartest person in any room, that kind of stuff.
I've always wondered if she's just a bitch or if at least some of it could be a side effect of hormone therapy? I mean, completely changing the hormones in your body must have some pretty dramatic effects in many areas and might take a long time until your body adjusts.
But I definitely won't just ask 'yo, are you just a huge bitch or is it your medication' in a corporate setting.
[edit] just for clarity, she started transitioning about 1 month after she joined that team and I left after about a year and a half, in part because of the mood on the team going to shit, among other reasons. But so I couldn't compare to pre-hormone therapy or anything like that.
[edit2] Thank you for all the replies, this was really enlightening and answered a lot of questions! Especially on a topic I feel is discussed less often, or at least one I haven't come across much.
I'd honestly be most worried about them being essentially raised by an ass-kissing, sycophantic LLM. That shit is already messing with some adults; I couldn't imagine what it'd do to a child.
A whole generation of narcissists, maybe? I mean, sure, kids need encouragement. But just being told that everything you say is a great idea, you're a genius, wow, you're so amazing, over and over…
And then combine that with social media that's already messing with kids' self-esteem and social skills… shudder.