17 days without AI
I said one week, but it's actually been 17 days. I didn't even notice.
- I see too many people becoming dependent on AI, and it makes me angry; it's as if they don't even use their brains for decision-making anymore.
- After quitting AI, I care much more about sources. Before, I was using AI too much, to be honest. I didn't care that much about where information came from. I used to think: there's information here; maybe it's not entirely accurate, but it's probably mostly correct. I didn't care much about the source because it looked good enough and sounded like the truth. But now I hate it when someone shares information without citing a source. When they say their source is "AI" or "Google", God, I hate that so much.
- Now I care so much about sources that I always verify things myself, even for simple matters.
- Because I could no longer summarize content (like articles or videos), my consumption decreased significantly. I didn't watch most of the videos I was curious about because, although I was curious at first, they usually weren't necessary, just a waste of time. To give some numbers: before, I was "reading" 30 articles a day and "watching" 10 videos a day; now I read just 5 articles and watch 2 videos.
- Before my experiment, I had listened to a podcast, and yesterday I re-listened to it. I felt like I had missed half of its meaning the first time. It was as if I had listened but hadn't really heard. This was a major realization for me.
- I spend less time on unnecessary things. Previously, I used AI to understand tools, summarize commits, and write bash scripts from scratch for tools that already existed. Now I don't waste my time on that. There are already bash or Python scripts for most tasks, but because of trust issues, I used to think recreating them with AI would be more reliable. This was a serious problem.
- The trust problem: I used to trust AI for code, information, and so on. The problem is that in a field or topic I don't know well, when AI gives me information, it may be complete nonsense. Because I lack expertise in that area, I can't detect when it's wrong. And because the AI is so confident, I become confident in that information too. This is a significant issue.
- The confidence problem: In reality, truth is complex and deep. In many topics, there is no single truth. Sometimes truths may seem illogical, unreasonable, or even incomprehensible. But when you use AI too much, you start to believe that truths exist and that they are understandable, when often they aren't. To grasp truth, you need background knowledge and cumulative information, but using AI skips that foundation and the entire information tree. You start to believe you can learn things without climbing that whole mountain of information.
- I have become less dependent on definite conclusions. I have started to accept uncertainty and unanswered questions. I now understand that some topics cannot be grasped without deep background knowledge. I have also started to care less about many things: news, videos, decisions.
- My money stays in my pocket. I don't use expensive models, but AI is still costly, and paying $0.30 for a PDF summary felt wrong. Now I don't need most of the content I thought I needed, and when I really do need something, I simply open it and read.
- Let's talk about the biggest benefit: time. I was really surprised by this because I used to think AI made me faster, but objectively, it wasted too much of my time.
- I could say the reason was token speed, but I was using fast models, not slow reasoning ones, so that's not the cause.
- Maybe the reason is that I wasted too much time on things that weren't my responsibility and on unnecessary tasks.
- Or perhaps it was perfectionism: always doing 2–3 iterations, changing the structure, asking follow-up questions. It cost so much time.