




  • I use it for little Python projects where it’s really really useful.

    I’ve used it for Linux problems where it gave me solutions I hadn’t been able to find with a Google search alone.

    I use it as a kickstarter for writing texts by telling it roughly what my text needs to say, then tweaking the result it gives me. Sometimes I just use the first sentence, but it’s enough to give me a starting point and make life easier.

    I use it when I need to understand texts about a topic I’m not familiar with. It can usually give me an idea of what the terminology means and how things are connected, which helps a lot with further research and ultimately with understanding the text.

    I use it for everyday problems too. When I needed a new tube for my bike but wasn’t sure what size it was, I told it what was written on the tyre, showed it a picture of the tube packaging while I was in the shop, and asked if it was the right one. It could tell me that it was the correct one and why. The explanation was easy to fact-check.

    I use Photoshop AI a lot to remove unwanted parts in photos I took or to expand photos where I’m not happy with the crop.

    Honestly, I absolutely love the new AI tools and I think people here are way too negative about it in general.






  • based on weighted averages of ‘what people are saying’ with a little randomization to spice things up

    That is massively oversimplified and not really how neural networks work. Training a neural network is not just calculating averages. It adjusts a very complex network of nodes in such a way that certain inputs generate certain outputs.

    It is entirely possible that during that training process, abstract mechanisms like logic get trained into the system as well, because a good NN can produce meaningful output even on input that is unlike anything it has ever seen before. Arguably that is the case with ChatGPT too: it has been shown to solve maths and calculation tasks that never appeared in its training data. Give it a poem that you wrote yourself and ask it for an analysis and interpretation - it will do it, and it will probably be very good.

    I really don’t subscribe to the “statistical parrot” narrative that many people seem to believe. Just because it’s not good at the same tasks that humans are good at doesn’t mean it’s not intelligent. Of course it is different from a human brain, so differences in capabilities are to be expected. It has no idea of the physical world, and it is not trained to tell truth from lies, so of course it’s not good at those things. That doesn’t mean it’s crap or “not intelligent”. You don’t call a person “not intelligent” just because they’re bad at specific tasks or don’t know some facts.

    There’s certainly room for improvement with these LLMs, but they’ve only been around in a really usable state for about two years. Have some patience, and in the meantime use them for all the wonderful stuff they’re capable of.
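    To make the “not just averages” point a bit more concrete, here is a minimal sketch (plain NumPy, a deliberately tiny made-up example, nothing like ChatGPT’s actual training setup): a small network learns XOR by gradient descent, i.e. by repeatedly adjusting its weights against an error signal rather than by averaging its training data.

    ```python
    # Minimal sketch, assuming plain NumPy and a made-up toy task (XOR) -
    # not how ChatGPT itself is trained, just an illustration that "training"
    # means nudging weights against an error gradient, not averaging anything.
    import numpy as np

    rng = np.random.default_rng(0)

    # XOR inputs and targets
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer of 8 units with randomly initialised weights
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(10000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)      # hidden activations
        out = sigmoid(h @ W2 + b2)    # network output

        # Backward pass: gradient of the squared error w.r.t. each weight
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent update: every weight moves a little against the
        # error gradient - this is the "adjustment" step, not an average.
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    # Outputs typically end up close to [0, 1, 1, 0] - behaviour the network
    # was never given as an explicit rule, only shaped into its weights.
    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
    ```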








  • I keep telling myself I need to start using a password manager, but I’m worried I won’t be able to log into things on my phone or other devices, like my work computer, when I need to, because I won’t know the passwords. Is that a legitimate worry, or is there a solution for this? How do you sync passwords between computer and phone?