While your use case may not suffer from the problem depicted in the post[^1], I don't think that's worth weakening the proposed etiquette over. If a system that reduces the generated garbage one person can inflict upon another comes at the cost of slightly worse-worded texts - that's a price I'm willing to pay.
[^1]: While it does exhibit other generative AI issues - like the environmental impact, or how it makes you reliant on companies just waiting to start enshittifying the field - it does not suffer from the issue of forcing humans to read meaningless slop that no one bothered to write.
Unlike Dell, Asus did mention a few more details - the system will pack DDR5 memory, HDMI, USB-A, USB-C, WiFi 6E, Bluetooth 5.3, and 2.5G Ethernet. Exact details regarding the USB and HDMI ports were not offered, however.
Isn't the amount of memory kind of a tiny bit more important than which generation it is?
Science isn't about WHY. It's about WHY NOT. Why is so much of our science dangerous? Why not marry safe science if you love it so much. In fact, why not invent a special safety door that won't hit you on the butt on the way out, because you are fired.
$20k is what it would cost you or me, but it’s just free for them.
No, it isn't. This is not regular software, where the bulk of the price is the licensing. With slop-as-a-service, the bulk of the price is the data center operation cost - which Anthropic is certainly not getting for free.
Around the turn of the decade there was a big movement to rename the master branch to main. GitHub made that switch too: when you create a new repository, the default branch is now called main. The original flowchart is from 2010, when that branch was still called master, so it's called master there.
The AI-generated flowchart, of course, is not a plagiarism machine - it's exactly like a human being who is merely inspired by the source material. Surely it used the up-to-date name for that branch?
99% pass rate? Maybe that’s super impressive because it’s a stress test, but if 1% of my code fails to compile I think I’d be in deep shit.
Also - one of the main arguments of vibe coding advocates is that you just need to check the result a few times and tell the AI assistant what needs fixing. Isn't a compiler test suite ideal for such a workflow? Why couldn't they just feed the test failures back to the model and tell it to fix them, iterating again and again until they get it to work 100%?
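The loop I mean is trivial to script. A minimal sketch, where `run_test_suite` and `ask_assistant_to_fix` are hypothetical stand-ins (stubbed out here - a real setup would shell out to the compiler's test suite and to an actual AI assistant):

```python
# Sketch of the feed-failures-back-until-green loop described above.
# Both helpers are hypothetical stubs, not real APIs.

def run_test_suite(code: set) -> list:
    """Stub: a test 'fails' until its fix is present in the code."""
    return [t for t in ("test_parser", "test_codegen") if t not in code]

def ask_assistant_to_fix(code: set, failures: list) -> set:
    """Stub: pretend the assistant patches each reported failure."""
    return code | set(failures)

code = set()
while failures := run_test_suite(code):
    code = ask_assistant_to_fix(code, failures)

print(run_test_suite(code))  # → []
```

That's the entire workflow they advocate, so stopping at 99% is the curious part.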
But your first paragraph presents no data about the support for Israel's actions. The two numbers you wrote there are: