To give some perspective on what our code looks like: there are very few tests (which may or may not pass), no formatter or linter for most of the code, no pipelines to block PRs, no gates on PRs whatsoever, and the code is somewhat typed, sometimes (the Python, anyway). Our infrastructure was created ad hoc, it's not reproducible, there's only one environment shared between dev and prod, etc.
I've been in multiple meetings with coworkers and my manager talking about how embarrassing it is that this is what we're shipping. For context, I haven't been on this project for very long, but multiple projects we're working on are like this.
Two years ago, this would have been unacceptable. Our team has worked on and shipped products used by millions of people. Today the management is just chasing the hype, and we can barely get one customer to stay with us.
The issue lies with the priorities from the top down. They want new stuff. They don't care if it works, how maintainable it is, or even what the cost is. All they care about is "AI this" and "look at our velocity" and so on. Nobody cares if they're shipping something that works, or even shipping the right thing.
Because if I spent my whole day reviewing AI-generated PRs and walking through the codebase with them only for the next PR to be AI-generated unreviewed shit again, I'd never get my job done.
I'd love to help people learn, but nobody will use anything they learn because they're just going to ask an LLM to do their task for them anyway.
This is a people problem, and primarily at a high level. The incentive is to churn out slop rather than do things right, so that's what people do.
This is what happens to us. People put out a high volume of AI-generated PRs, nobody has time to review them, and the code becomes an amalgamation of mixed paradigms, dependency spaghetti, and partially tested (and horribly tested) code.
Also, the people putting out the AI-generated PRs are the same people rubber stamping the other PRs, which means PRs merge quickly, but nobody actually does a review.
And before people ask about sniffing it, here's the article's second paragraph:
People often recognize spoiled meat through a characteristic rotting odor caused by chemical compounds called biogenic amines or BAs. Food quality inspectors quantify these compounds using procedures that involve direct meat sampling and time-consuming laboratory analysis. However, once meat is sealed and distributed for commercial retail, such testing becomes impractical, making spoilage difficult to detect.
You can't sniff it through the packaging. Even when opened, your nose isn't accurate enough to know if something has just started to spoil, or if only a little bit of it has. And not everyone has good (or any) sense of smell.
I keep trying to manually write code that I'm proud of, but I can't. Everything always needs to be shipped fast and I need to move on to the next thing. I can't even catch my breath. The only thing allowing me to keep up with the team is Cursor, because they all use it as well. The last guy who refused to use AI was just excluded from the team.
This is the problem. It's not new that a company rushes its devs to deliver new features at a pace that results in garbage code. What's new is that devs who are willing to can deliver those features fast using an LLM. This obviously looks great to the imbecilic C-suites. Deliver features fast, get to market quickly, and spend less on devs!
This is just short-term thinking, and it looks like you've noticed this. The team you're on won't change because the culture at your company is to deliver the next feature ASAP and focus on the short term. This is common with startups, for example, because it's a constant race to get more funding. However, it always results in some half-assed product that inevitably needs to be rewritten at some point. With LLMs now, you'll also have a team of people who don't even understand their own code, making it take even longer to fix things or rewrite it later.
Anyway, if you hate it, start applying places now. At least in the US (where I am), the job market is ass. The more time you give yourself to search, the better the chance is that you'll find an option you like.
Infinite scroll is scarcely ever used in a good way
Just to clarify, we're only talking about mainstream social media here, right? Those are the only platforms they're considering here, and more specifically, only TikTok right now.
"Infinite scroll" is also how you can scroll up in your chat log and see more messages. It's how you can open logs for a VM online and see logs going further and further back. It's how you can search for a video on YouTube and keep scrolling down (past the inevitable pile of shit) until you find it.
On social media platforms specifically (as opposed to, say, a chat interface), it can be toxic.
You can't really ban dark patterns even though we all agree they suck.
I think the point I was getting at was that a lot of the things dark patterns do are individually things that have the potential for good or bad. Infinite scroll is one example. There are also modals, sale banners, and so on.
What makes a dark pattern dark isn't the specific, individual tools at use. It's the sum of those, plus the intent.
Doesn't look like this extends beyond TikTok, or at least mainstream social media as a whole.
Infinite scroll itself isn't really a problem. It's just one of the many tools used to keep users engaged on these platforms specifically by removing an interruption from the experience, but isn't sufficient on its own to create that unhealthy behavior. It's also used in healthier ways, like search results, chat logs, and so on.
The EU attempting to rein in these platforms' control over their users will be interesting to watch. These companies have decades of research on user psychology aimed at maximizing their capture of the user's attention. Forcing them not to use all the tools they developed might result in people breaking out of the cycle of endless scrolling. Or it might just annoy users. I don't know which will happen.
My favorite meeting is my 8am standup that's scheduled for 30 minutes and averages an hour. It really makes sure I have no energy to do anything else that day. Except it's every day. And most updates are "no updates" with extra words to make it sound like people are doing stuff.
If you're writing a script that's more than 269 lines long, you shouldn't be using Bash.
Jokes aside, the point isn't the lines of code. It's complexity. Higher-level languages can reduce the complexity of tasks by providing better tools for complex logic. What could be one line of code in Python can be dozens in Bash (or a long, convoluted pipeline of awk and sed, at which point my eyes usually glaze over). Using other languages also means better access to dev tooling for testing, linting, and formatting the scripts.
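As a toy illustration of the point (the log format and function name here are made up, not from any real script): the kind of aggregation that turns into an awk/sort/uniq pipeline in Bash is a plain dictionary and loop in Python, and you can actually unit-test it.

```python
from collections import defaultdict

def bytes_per_status(lines):
    """Sum the bytes column per status code.

    Each line is assumed to look like '<status> <bytes>',
    e.g. '200 512' (hypothetical log format).
    """
    totals = defaultdict(int)
    for line in lines:
        status, nbytes = line.split()
        totals[status] += int(nbytes)
    return dict(totals)

log = ["200 512", "404 128", "200 1024"]
print(bytes_per_status(log))  # {'200': 1536, '404': 128}
```

The Bash equivalent works too, but the moment you need to handle a malformed line or add a second aggregation, the Python version grows gracefully and the pipeline doesn't.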
While I'm not really a fan of hostility, it annoys me a lot when I see these massive Bash scripts at work. I know nobody's maintaining them, and no single person understands them from start to finish. When they inevitably start to fail, debugging them is a nightmare, and trying to add to them means constantly looking up the syntax each command/program wants. Using a higher-level language at least makes the scripts more maintainable later on.
EDIT: briefly searching indicates it's common in South Korea too. Not sure if it's just more common in Asia right now or what. Seems like most of the articles about the West I'm finding are about cameras in bathrooms and shit (not that that's better...).
It can (and does) happen in all countries. For whatever reason though, it seems to be more common in eastern Asia, from what I've seen anyway.
If this bothers you (and it probably should), then you should really check every hotel room you stay in, regardless of country. It usually just takes a few minutes when you get into the room, plus you can check for bed bugs while you're at it.
You can learn Rust whenever you want. There's no rule that you must learn anything (including C) before learning Rust. Of course, knowing C will make the basic concepts, especially around memory management, a lot simpler.
If your goal is to eventually learn Rust, your next rabbit hole should ideally be the book. If you prefer a video format, I don't have any specific suggestions since I don't usually learn through videos, but I know there are some good video resources on YouTube.
I'm not following. Which part of this is nondeterministic?
The language being complicated to write and the compiler being confusing to use isn't an indicator of determinism. If GCC were truly nondeterministic, that'd be a pretty major bug.
Also, note that I mentioned that the output behavior is deterministic. I'm not referring to reproducible builds, just that it always produces code that does what the source specifies (in this case according to a spec).
Their justification was to improve security for extensions, and while it did do that, it also crippled adblockers in their first iterations, and it was clear that was a goal initially.
The library is two text files (code) that are processed by an LLM (interpreter) to generate code of another type. This is not that new in terms of workflow.
I think what makes this worse is that the author admits you can't be sure the library will work until you generate the code and test it. Even then, you can't guarantee the security of the generated code, and since you don't understand the code, you also can't support or patch it.
I've tried explaining in the past why LLMs are not equatable to compilers/interpreters, usually to people who aren't in software roles. What it usually comes down to is determinism. A compiler or interpreter deterministically produces code with a defined behavior (specified by the source code). They're often developed to a spec, and the output doing the wrong thing is a bug. An LLM producing the wrong output is a feature. It's not something you try to fix, and often something you can't fix.
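A toy sketch of the distinction (nothing here is a real compiler or LLM; both functions and the "prompt" are made up for illustration):

```python
import random

def compile_expr(src):
    """Compiler-like: a pure function of its input. Same source in,
    same behavior out, every single time. (Toy: sums '+'-joined ints.)"""
    return sum(int(term) for term in src.split("+"))

def llm_like(prompt, rng):
    """LLM-like: samples from a distribution over outputs. The same
    prompt can yield a different answer on every call, by design."""
    return rng.choice([prompt.upper(), prompt.lower(), prompt[::-1]])

# The "compiler" is deterministic: repeated runs always agree.
assert compile_expr("2+2") == compile_expr("2+2") == 4

# The sampler isn't: two differently seeded runs may disagree,
# and that's not a bug to file against it.
print(llm_like("Hello", random.Random(1)))
print(llm_like("Hello", random.Random(2)))
```

If `compile_expr` ever returned different results for the same input, that would be a reportable bug. For the sampler, variation is the whole mechanism.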
This, of course, ignores a lot of "lower level" optimizations someone can make about specific algorithms or data structures. I use "lower level" in quotes, of course, because those are some of the most important decisions you can make while writing code, but turning off your brain and letting a LLM do it for you "abstracts" those decisions away to a random number generator.