Not exactly. Thinking models just inflate the context window to point the model closer to your target. GANs have two models which compete against each other, both training each other, with the goal of one (or both) of those models being improved over time.
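For anyone who hasn't seen the adversarial loop written out, here's a toy 1-D sketch of the idea in plain Python. This is a caricature, not a real GAN (a real one uses neural nets and a framework like PyTorch); all names, learning rates, and distributions here are made up for illustration:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

REAL_MEAN = 4.0  # the "real" data: samples drawn near 4

# Discriminator: D(x) = sigmoid(w*x + b), its guess that x is real.
w, b = 0.1, 0.0
# Generator: emits gauss(g, 1); g is its only parameter.
g = -4.0

lr = 0.05
for _ in range(5000):
    x_real = random.gauss(REAL_MEAN, 1.0)
    x_fake = random.gauss(g, 1.0)

    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)

    # Discriminator step: gradient ascent on
    # log D(real) + log(1 - D(fake)) -- learn to spot fakes.
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: gradient ascent on log D(fake) -- learn to
    # fool the discriminator. d/dg log D(g + noise) = (1 - D) * w.
    g += lr * (1 - d_fake) * w
```

Both players train against each other's current state, and the generator's output distribution drifts toward the real one, which is the point: the competition itself is the training signal.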
Which outputs are accurate, and which ones are inaccurate? How could you tell? What steps did you take to verify accuracy? Was verifying it a manual process?
Shh you'll pop the bubble if you start talking sensibly. It's not an ASIC—it's a specialized piece of hardware optimized to execute a model with unparalleled performance. Now buy my entire stock of them and all the supply for the next two years please.
(Figuring out the compose combination for an emdash took longer than I'd like to admit lol)
Can't speak for Git, but caching responses is a common enough problem that it's built into the standard HTTP headers.
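Concretely, the standard mechanism is freshness via `Cache-Control: max-age` plus revalidation via `ETag`/`If-None-Match` (RFC 9111). A rough sketch of the client side, where the `origin` callback stands in for a real HTTP request (the function names and the 60-second max-age are my own placeholders):

```python
import time

cache = {}  # url -> {"body", "etag", "stored_at", "max_age"}

def is_fresh(entry, now=None):
    # Freshness per Cache-Control: max-age (simplified RFC 9111 semantics).
    now = time.time() if now is None else now
    return (now - entry["stored_at"]) < entry["max_age"]

def fetch(url, origin):
    """origin(url, etag) stands in for the network call and returns
    (status, etag, body); status 304 means "Not Modified"."""
    entry = cache.get(url)
    if entry and is_fresh(entry):
        return entry["body"]                    # fresh hit: no request at all
    etag = entry["etag"] if entry else None
    status, new_etag, body = origin(url, etag)  # sends If-None-Match: etag
    if status == 304:                           # server says reuse cached body
        entry["stored_at"] = time.time()
        return entry["body"]
    cache[url] = {"body": body, "etag": new_etag,
                  "stored_at": time.time(), "max_age": 60}
    return body
```

A real client would parse `max-age` and the ETag out of the response headers rather than hardcoding them; libraries like `requests-cache` or a caching proxy handle all of this for you.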
As for building a cache, you'd want to know a few things:
What is a cache entry? In your case, seems to be an API response.
How long do cache entries live? Do they live for a fixed time (TTL cache)? Do you have a max number of cached entries before you evict entries to make space? How do you determine which entries to evict if so?
What will store the cache entries? It seems like you chose Git, but I don't see any reason you couldn't start simple just by using the filesystem (and depending on the complexity, optionally a SQL DB).
You seem locked into using Git, and if that's the case, you still need to consider the second point there. Do you plan to evict cache entries? Git repos can grow unbounded in size, and it doesn't give you many options for determining what entries to keep.
For what it's worth, open source dev can also work. If you can commit some time to a project you care deeply about and make regular contributions, that's another form of experience, and I see no reason you couldn't add that as a line to your resume alongside any other work experience.
This has always been an issue. From my experience, the best way to get in was through internships, co-ops, and other kinds of programs. Those tend to have lower requirements and count as experience.
Of course, today, things are a lot different. It's a lot more competitive, and people don't care anymore about actual software dev skills, just who can churn out SLOC the fastest.
To put some perspective into what our code looks like, there are very few tests (which may or may not pass), no formatter or linter for most of the code, no pipelines to block PRs, no gates whatsoever on PRs, and the code is somewhat typed sometimes (the Python, anyway). Our infrastructure was created ad-hoc, it's not reproducible, there's only one environment shared between dev and prod, etc.
I've been in multiple meetings with coworkers and my manager talking about how embarrassing it is that this is what we're shipping. For context, I haven't been on this project for very long, but multiple projects we're working on are like this.
Two years ago, this would have been unacceptable. Our team has worked on and shipped products used by millions of people. Today the management is just chasing the hype, and we can barely get one customer to stay with us.
The issue lies with the priorities from the top down. They want new stuff. They don't care if it works, how maintainable it is, or even what the cost is. All they care about is "AI this" and "look at our velocity" and so on. Nobody cares if they're shipping something that works, or even shipping the right thing.
Because if I spent my whole day reviewing AI-generated PRs and walking through the codebase with them only for the next PR to be AI-generated unreviewed shit again, I'd never get my job done.
I'd love to help people learn, but nobody will use anything they learn because they're just going to ask an LLM to do their task for them anyway.
This is a people problem, and primarily at a high level. The incentive is to churn out slop rather than do things right, so that's what people do.
This is what happens to us. People put out a high volume of AI-generated PRs, nobody has time to review them, and the code becomes an amalgamation of mixed paradigms, dependency spaghetti, and partially tested (and horribly tested) code.
Also, the people putting out the AI-generated PRs are the same people rubber stamping the other PRs, which means PRs merge quickly, but nobody actually does a review.
And before people ask about sniffing it, the second paragraph:
People often recognize spoiled meat through a characteristic rotting odor caused by chemical compounds called biogenic amines or BAs. Food quality inspectors quantify these compounds using procedures that involve direct meat sampling and time-consuming laboratory analysis. However, once meat is sealed and distributed for commercial retail, such testing becomes impractical, making spoilage difficult to detect.
You can't sniff it through the packaging. Even when opened, your nose isn't accurate enough to know if something has just started to spoil, or if only a little bit of it has. And not everyone has good (or any) sense of smell.
I keep trying to manually write code that I'm proud of, but I can't. Everything always needs to be shipped fast and I need to move on to the next thing. I can't even catch my breath. The only thing allowing me to keep up with the team is Cursor, because they all use it as well. The last guy that refused to use AI was just excluded from the team.
This is the problem. It's not new that a company rushes its devs to deliver new features at a pace that results in garbage code. What's new is that devs who are willing to can deliver those features fast using an LLM. This obviously looks great to the imbecilic C-suites. Deliver features fast, get to market quickly, and spend less on devs!
This is just short-term thinking, and it looks like you've noticed this. The team you're on won't change because the culture at your company is to deliver the next feature ASAP and focus on the short term. This is common with startups, for example, because it's a constant race to get more funding. However, it always results in some half-assed product that inevitably needs to be rewritten at some point. With LLMs now, you'll also have a team of people who don't even understand their own code, making it take even longer to fix things or rewrite it later.
Anyway, if you hate it, start applying places now. At least in the US (where I am), the job market is ass. The more time you give yourself to search, the better the chance is that you'll find an option you like.
Infinite scroll is scarcely ever used in a good way
Just to clarify, we're only talking about mainstream social media here, right? Those are the only platforms under consideration, and more specifically, only TikTok right now.
"Infinite scroll" is also how you can scroll up in your chat log and see more messages. It's how you can open logs for a VM online and see logs going further and further back. It's how you can search for a video on YouTube and keep scrolling down (past the inevitable pile of shit) until you find it.
On social media platforms, as opposed to a chat interface, it can be toxic.