Yoko, to Shinobu, um... 🤔
The people of Israel live (עַם יִשְׂרָאֵל חַי), Glory to Ukraine (Slava Ukraini) 🇺🇦 ❤️ 🇮🇱
Source: Google working on two Pixel Watch 3 sizes
The quackening
Google's Pixels are nearly ready to convert your SIM card to eSIM
Dark Gathering - Chapter 56 - Rain
Google Photos reveals how the Pixel 8 Pro's 'Video Boost' feature will work, including export options and more
Firefox for Android is getting 400 new browser extensions - and you can try some now
OnePlus 10 Pro now seeing stable Oxygen OS 14, based on Android 14
A guide to a longer-lasting smartphone
Please, accept my pull request, senpai
OpenAI investors' race to reinstate Sam Altman makes tech expert Gary Marcus feel 'sick to his stomach'
Shitty situation
Taking self-appreciation to the next level
Lost Piece
At least 200 desktop extensions to be available on Firefox for Android in December
For Europe’s Jews, a World of Fear
Paris graffiti recall 1930s antisemitism, says mayor
Antisemitism rampant online since Hamas’ attack, report finds
What are some of the best optimizations you applied to your code?
Turkey’s Erdogan says Hamas is a ‘liberation’ not a ‘terrorist’ group
Russia spread bedbug panic in France, intelligence services suspect
PSA: give open-source LLMs a try, folks. If you're on Linux or macOS, ollama makes it incredibly easy to run most of the popular open-source LLMs like Mistral 7B, Mixtral 8x7B, and CodeLlama. It's obviously faster with a CUDA- or ROCm-capable GPU, but it works in CPU mode too (albeit slowly if the model is huge), provided you have enough RAM.
You can combine that with a UI like ollama-webui or a text-based UI like oterm.
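For the curious, here's a minimal sketch of what talking to a locally running ollama server looks like from Python, using only the standard library. It assumes you've already installed ollama and pulled a model (e.g. `ollama pull mistral`); the URL and port are ollama's defaults, and the `generate` helper name, model choice, and prompt are just illustrative:

    import json
    import urllib.request

    # ollama's default local REST endpoint (it listens on port 11434).
    OLLAMA_URL = "http://localhost:11434/api/generate"

    def generate(model: str, prompt: str) -> str:
        """Send one non-streaming generation request to ollama."""
        payload = json.dumps({
            "model": model,    # any model you've pulled, e.g. "mistral"
            "prompt": prompt,
            "stream": False,   # one JSON reply instead of a token stream
        }).encode("utf-8")
        req = urllib.request.Request(
            OLLAMA_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(generate("mistral", "Explain eSIM in one sentence."))

ollama streams tokens by default, so setting "stream" to false trades responsiveness for a simpler example: the call blocks until the full completion is ready, then returns it in the "response" field.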