Why is explaining transformers the benchmark of interest here, rather than a concept unlikely to appear anywhere in the training data in a copyable/memorizable form?
Think of it like SETI@home or BitTorrent, but for compute. Most computers are not working at 100% capacity at all times. So, if you had a network where people ran a background service that shared a bit of their computing power, you could have a huge distributed computing network.
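To make that concrete, here's a toy sketch of what such a background worker could look like. The coordinator URL and task format are made up purely for illustration; as I understand it, Petals itself doesn't work this way (it routes transformer activations between peers rather than handing out generic work units):

    import time

    import psutil    # pip install psutil -- used to check how busy the machine is
    import requests  # pip install requests -- used to talk to the (made-up) coordinator

    # Hypothetical coordinator endpoint; not a real service.
    COORDINATOR = "https://example.org/api"


    def run_worker(idle_threshold: float = 25.0) -> None:
        """Donate spare cycles: only pull work when the CPU is mostly idle."""
        while True:
            if psutil.cpu_percent(interval=1.0) > idle_threshold:
                time.sleep(30)  # machine is busy, back off and check again later
                continue
            task = requests.get(f"{COORDINATOR}/task", timeout=10).json()
            result = sum(x * x for x in task["numbers"])  # stand-in for real work
            requests.post(
                f"{COORDINATOR}/result",
                json={"task_id": task["id"], "value": result},
                timeout=10,
            )


    if __name__ == "__main__":
        run_worker()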
Yeah, I skimmed the Petals repo. Again, I don't think there's a significant mechanical barrier to overcome, but I do see an economic one. Letting my GPU sit idle when I'm not actively using it is essentially free, but to convince me to run it for a project like Petals, I'd have to earn something valuable enough to offset the cost of powering the GPU and cooling my house. Is the only value returned by joining the Petals network the capacity to run my own distributed training/inference on that same network? Would usage be balanced by some kind of ratio system similar to private tracker groups, or have others proposed a kind of cryptocurrency? Aside: how does the network verify that the results of distributed computation are genuine and that a user isn't taking advantage of the network? Or is cheating simply self-defeating, because it would corrupt that user's own results as well?
Sorry, I have a lot of questions and not enough time to read the Petals paper linked on the repo until tomorrow. If the answers are "read the damn paper", fair enough.
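On the verification aside: one approach other volunteer-compute projects (e.g. BOINC/SETI@home) have used is redundancy — give the same piece of work to two independent peers and only credit both when their results match. I haven't checked whether Petals does anything like this; the sketch below is just the general idea:

    import hashlib
    import random
    from typing import Callable, Sequence


    def fingerprint(values: Sequence[float], decimals: int = 4) -> str:
        """Hash outputs after rounding so tiny float differences don't cause false mismatches."""
        rounded = ",".join(f"{v:.{decimals}f}" for v in values)
        return hashlib.sha256(rounded.encode()).hexdigest()


    def run_with_redundancy(task: Sequence[float],
                            peers: Sequence[Callable[[Sequence[float]], Sequence[float]]]):
        """Send the same task to two randomly chosen peers; accept only if their results match."""
        worker_a, worker_b = random.sample(list(peers), 2)
        out_a, out_b = worker_a(task), worker_b(task)
        if fingerprint(out_a) != fingerprint(out_b):
            raise RuntimeError("peer results disagree; reassign the task")
        return out_a  # results agree, so both peers get credit


    # Toy peers: both honest here, so the check passes.
    honest = lambda xs: [x * x for x in xs]
    print(run_with_redundancy([1.0, 2.0, 3.0], [honest, honest]))  # [1.0, 4.0, 9.0]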
I don't think the mechanics of coordinating the distributed computing are the barrier so much as the economics involved in getting an extremely large-scale distributed compute economy running. Is the proposal essentially a ratio system that measures balance/imbalance of use of the network?
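If it is a ratio system, the bookkeeping could be as simple as a per-peer ledger of compute contributed versus compute consumed, in the spirit of private-tracker upload/download ratios. Everything below is hypothetical and not taken from the Petals paper:

    from dataclasses import dataclass


    @dataclass
    class PeerLedger:
        """Track GPU-seconds a peer has served to others vs. consumed from the network."""
        contributed: float = 0.0
        consumed: float = 0.0

        def ratio(self) -> float:
            # Peers that haven't consumed anything yet get an effectively unlimited ratio.
            return self.contributed / self.consumed if self.consumed else float("inf")

        def may_schedule(self, min_ratio: float = 0.5) -> bool:
            """Only accept new jobs from a peer whose ratio stays above a floor."""
            return self.ratio() >= min_ratio


    ledger = PeerLedger()
    ledger.contributed += 120.0  # served 120 GPU-seconds of other people's requests
    ledger.consumed += 90.0      # ran 90 GPU-seconds of its own jobs on the network
    print(ledger.ratio(), ledger.may_schedule())  # 1.333..., True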
Why use this?