Make flashcards of short questions + answers from your notes. You can use Anki for that (on Android it's AnkiDroid), and you might want to watch this quick tutorial by Derek Banas: https://www.youtube.com/watch?v=5urUZUWoTLo
One way to speed up making flashcards is with an AI. It doesn't have to be ChatGPT: I tried Mixtral 8x7b on a few Wikipedia pages and it also works well for this. It's open-source and there's a free demo here: https://huggingface.co/chat/.
You could ask the AI:
Extract a precise and concise answer to the question "[INSERT QUESTION]" from the following paragraph: "..."
The reverse also works:
Formulate a few short questions that are answered by the following paragraph: "..."
You customize the OS and then install a custom OS?
I admit it was definitely an awkward way of writing it 🙃, but those are simply the things I tried in chronological order as I was not really familiar with GrapheneOS in the beginning.
For instance, even if you have an old Intel integrated GPU, chances are you can still benefit from AMD's FSR just by pushing a few flags to Proton GE, even if the game doesn't officially support it, and you'll literally get a free FPS boost (tested it for fun and can confirm on an Intel UHD Graphics 620).
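For reference, a minimal sketch of what "pushing a few flags to Proton GE" looks like. These are the Wine-level FSR environment variables as I understand them from Proton GE builds; treat the exact names and value ranges as assumptions and check the Proton GE release notes for your version:

```shell
# Steam launch options for a game running under Proton GE (illustrative):
# enable Wine's FSR upscaling and set the sharpening strength (0 = sharpest, 5 = softest)
WINE_FULLSCREEN_FSR=1 WINE_FULLSCREEN_FSR_STRENGTH=2 %command%
```

The idea is to set a lower in-game resolution than your monitor's native one; FSR then upscales the frame to native, which is where the FPS headroom comes from.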
Congrats! Your laptop will be even happier with a lighter but still nice-looking desktop environment like Xfce and you even have an Ubuntu flavor around it: Xubuntu.
They're called MFAs (Made For AdSense); nothing has changed since the early AdSense days. It's unlikely that Google is knowingly boosting them. Search engines have always been easy to game with artificial link building and keyword-density maximization, and MFA owners happen to be among the people most likely to invest heavily in cheating their way to the top of the rankings. AI only made creating MFAs easier.
With how MS Teams and now CNN have been reported here to be blocking Firefox, you know that Firefox is doing things right. If web giants are ganging up against it, it's all the more reason to switch to it to make a statement and prevent big tech from making privacy violation the norm.
Hard to tell as it's really dependent on your use. I'm mostly writing my own kernels (so, as if you're doing CUDA basically), and doing "scientific ML" (SciML) stuff that doesn't need anything beyond doing backprop on stuff with matrix multiplications and elementwise nonlinearities and some convolutions, and so far everything works. If you want some specific simple examples from computer vision: ResNet18 and VGG19 work fine.
Works out of the box on my laptop (the export below is to force ROCm to accept my APU since it's not officially supported yet, but the 7900XTX should have official support):
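The author's exact export isn't reproduced here, so this is an assumption rather than their actual line. The usual pattern for coaxing ROCm into accepting an unsupported APU is the `HSA_OVERRIDE_GFX_VERSION` environment variable; the right value depends on your APU's gfx target:

```shell
# Illustrative, not the author's exact line: make the ROCm runtime treat an
# unsupported APU as a supported gfx target, e.g. report an RDNA2 APU as
# gfx1030 (RX 6000 class). Adjust the version to match your own hardware.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```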
Last year, only compiling and running your own kernels with hipcc worked on this same laptop; the AMD devs are really doing god's work here.
Yup, it's definitely about the "open-source" part. That's in contrast with Nvidia's ecosystem: CUDA and the drivers are proprietary, and the drivers' EULA prohibits you from using your gaming GPU for datacenter workloads.
That's true, but ROCm does get better very quickly. Before last summer it was impossible for me to compile and run HIP code on my laptop, and then after one magic update everything worked. I can't speak for rendering as that's not my field, but I've done plenty of computational code with HIP and the performance was really good.
But my point was more about coding in HIP, not really about using stuff other people made with HIP. If you write your code with HIP in mind from the start, the results are usually good and you get good intuition about the hardware differences (warps for instance are of size 32 on NVidia but can be 32 or 64 on AMD and that makes a difference if your code makes use of warp intrinsics). If however you just use AMD's CUDA-to-HIP porting tool, then yeah chances are things won't work on the first run and you need to refine by hand, starting with all the implicit assumptions you made about how the NVidia hardware works.
HIP is amazing. For everyone saying "nah it can't be the same, CUDA rulez", just try it. It works on Nvidia GPUs too (there are basically macros and stuff that remap everything to CUDA API calls), so if you code for HIP you're targeting at least two GPU vendors. ROCm is the only framework that lets me do GPGPU programming in CUDA style on a thin laptop sporting an AMD APU while still enjoying 6 to 8 hours of battery life when I'm not doing GPU stuff. With CUDA, in terms of mobility, the only choices you get are a beefy and expensive gaming laptop with a pathetic battery life and heating issues, or a light laptop plus SSHing into a server with an Nvidia GPU.
This. I don't think people here realize that HR doesn't really have a say in this, they aren't the ones deciding on the firing and they aren't the ones who can undo it since they aren't the ones providing the team's budget.
HR's job in these situations is to do the dirty part: handle the announcement to each employee and damage control if necessary.
The woman in the video says her manager was "pleased" with her work and that she didn't understand why strangers from HR were making the announcement to her. That's the whole point: it's very likely that this "nice" manager is the one who threw her under the bus when he had to choose which people to keep after top management told him to downsize his team, and he didn't have the guts to tell her personally.
It depends. I'm working in the quant department of a bank and we work on pricing libraries that the traders then use. Since traders often use Excel and expect add-ins, we have a mostly Windows environment. Our head of CI, a huge Windows and PowerShell fan, then decided to add a few Linux (RHEL) servers to run automated Valgrind checks and gcc/clang builds, continuously testing our code for warnings, undefined behavior (gcc with -O3 does catch a few cases), and the like.
I thought cool, at least Linux is making it into this department. Then I logged into one of those servers.
The fucker didn't like the default filesystem hierarchy, so he did stuff like `/Applications` and `/Temp`, and he installs programs by manually downloading binaries and extracting them there.