To clarify: NVIDIA does allow paging into system RAM (Unified Memory), but it's neither stable nor recommended for ML inference. AMD's ROCm supports real page-faulting and memory oversubscription, so AMD can run larger models on limited VRAM. That doesn't make AMD faster (it's usually slower), and CPU inference is almost never faster than a GPU.
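For anyone curious what "system RAM paging" looks like in practice, here's a minimal sketch of oversubscription with CUDA Unified Memory. It assumes a Linux host with a Pascal-or-newer GPU, where `cudaMallocManaged` allocations can exceed VRAM and pages migrate on demand; the buffer size and trivial kernel are just illustrations, not anything from the comment above.

```cuda
// Sketch: allocate more managed memory than the GPU has, then touch it
// from a kernel. Pages fault back and forth between system RAM and VRAM.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, size_t n, float s) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main() {
    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);

    // Ask for ~1.5x total VRAM (assumes enough system RAM to back it).
    size_t n = (size_t)(total_b * 1.5) / sizeof(float);
    float *x = nullptr;
    if (cudaMallocManaged(&x, n * sizeof(float)) != cudaSuccess) {
        fprintf(stderr, "managed alloc failed\n");
        return 1;
    }
    for (size_t i = 0; i < n; ++i) x[i] = 1.0f;  // first touch on the CPU

    scale<<<(unsigned)((n + 255) / 256), 256>>>(x, n, 2.0f);
    cudaDeviceSynchronize();

    printf("x[0] = %f (expect 2.0)\n", x[0]);
    cudaFree(x);
    return 0;
}
```

This runs, but the page thrashing it causes is exactly why people say oversubscription is a poor fit for inference: every pass over the weights re-migrates pages over PCIe.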
It's because in the real world, you have to deal with reality, not just the handful of pet possibilities people want to consider. I think AOC is the best shot that any decent person in the US has. She would if she could, but in the real world, these are not actual options.
Yes, but the sensible one has icky brown people. And my pastor told me that us blond, blue-eyed, tall people are the chosen ones. You see, that's why we are politically "right", cuz we're on the right side! Gahyuck 😛
Batteries are good now though. I buy almost all used tech where possible. No complaints. If you get a sufficiently new used phone, it's not physically possible for the battery to have undergone enough cycles to degrade noticeably. Very good value early on, I'd say.
/r/selffuck