So which is it? Are developers 55% more productive, or are they losing 20% of their time to inefficiencies and burning out at record rates?
The answer: executives are measuring—and reporting—what makes their stock price rise, not what’s actually happening on the ground.
Or if you want to get slightly more conspiratorial: the execs are all buying shares in OpenAI, Nvidia, and the like - so now they're more interested in ordering people to use LLM tools so that these stocks rise in price, even if it means sabotaging their own company.
Hi, not OP, but: that's known as frontmatter; it's somewhat widespread, so I suspect it would be much harder to have it live at the end of your markdown files than in a separate file or db altogether - unless OP is already rolling their own markdown parser.
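To illustrate why the conventional top-of-file placement is the easy case: the parser only has to peel off one `---`-delimited block before handing the rest to a markdown renderer. A minimal sketch in Rust (the `split_frontmatter` helper is illustrative, not from any particular library):

```rust
// Minimal sketch: split YAML-style frontmatter ("---" fences at the very top)
// from the markdown body. Helper name and exact behavior are assumptions for
// illustration, not a real library's API.
fn split_frontmatter(doc: &str) -> (Option<&str>, &str) {
    // Frontmatter must start at the very first byte of the file.
    let Some(rest) = doc.strip_prefix("---\n") else {
        return (None, doc);
    };
    // The closing fence is a line containing only "---".
    match rest.find("\n---\n") {
        Some(end) => (Some(&rest[..end]), &rest[end + 5..]),
        None => (None, doc), // unterminated fence: treat the whole file as body
    }
}

fn main() {
    let doc = "---\ntitle: Hello\n---\n# Body\n";
    let (fm, body) = split_frontmatter(doc);
    assert_eq!(fm, Some("title: Hello"));
    assert_eq!(body, "# Body\n");
}
```

Metadata at the *end* of the file is harder precisely because nothing anchors it: you'd have to scan backwards past arbitrary markdown (which may itself contain `---` thematic breaks), which is why most tooling standardized on the top-of-file convention.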
If it's really for those convicted of such, then the National Rally (Rassemblement National) has, like, ten times as many ineligible candidates as any other party.
The only thing I dislike about sneaking this into every page of my personal websites is the sinking feeling that I'll be helping OpenAI claw back market share from Anthropic. I wish someone would disclose an equivalent for ChatGPT and Gemini.
I have the same preference for personal projects, but when I was working on a corporate team it was really useful to have the "run configs" for IntelliJ checked in, so that each new team member didn't need to set them up themselves. Some of the setup needed to get the Python debugger properly connected to the project could get quite gnarly.
I agree with the many others who say you're more than ready to start learning rust. I would add that if you've brushed up against manual memory management in C, then you might find the following a great introduction to rust and the borrow-checker: https://rust-unofficial.github.io/too-many-lists/
As usual for discussions about starting to learn rust, I would also recommend the "special"/experimental version of The Book maintained by Brown University: https://rust-book.cs.brown.edu/title-page.html . It has little interactive quizzes that help check your understanding, and some fancy diagrams in the sections on pointers and the borrow-checker.
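As a taste of what those resources cover: a minimal sketch of the ownership rules the borrow-checker enforces, which map closely onto the discipline you already follow by hand in C (free exactly once, never use after free), except checked at compile time.

```rust
// Ownership in Rust: each value has exactly one owner, and it is freed
// (dropped) when that owner goes out of scope -- the compile-time analogue
// of C's "call free() exactly once" discipline.
fn consume(s: String) -> usize {
    s.len()
} // `s` is dropped here; no manual free() needed

fn main() {
    let owned = String::from("hello");

    // Borrowing: read access without transferring ownership.
    let borrowed_len = owned.len();
    assert_eq!(borrowed_len, 5);

    // Moving: ownership transfers into `consume`, which drops the value.
    let moved_len = consume(owned);
    assert_eq!(moved_len, 5);

    // Use-after-move is a compile error, not a runtime use-after-free:
    // println!("{owned}"); // error[E0382]: borrow of moved value: `owned`
}
```

The commented-out last line is exactly the kind of bug that would compile fine in C and crash (or worse) at runtime; in Rust the borrow-checker rejects it before the program ever runs.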
I think this part references it, though only kinda in passing:
Production evaluations can elicit entirely new forms of misalignment before deployment. More importantly, despite being entirely derived from GPT-5 traffic, our evaluation shows the rise of a novel form of model misalignment in GPT-5.1 – dubbed “Calculator Hacking” internally. This behavior arose from a training-time bug that inadvertently rewarded superficial web-tool use, leading the model to use the browser tool as a calculator while behaving as if it had searched. This ultimately constituted the majority of GPT-5.1’s deceptive behaviors at deployment.
He readily admitted as much when I asked him on Mastodon how to approach the subject with friends and family who are using the "it is what it is" rationalization. He then mentioned cult deprogramming research as something he's not as well versed in as he would like.