You are correct that it is the deserializer's choice. You are incorrect when you imply that it is a good idea to rely on behavior that isn't enforced in the spec. A lot of people have been surprised when that assumption turns out to be wrong.
There are really good uses for XML, mostly for making things similar to HTML, like the layout markup for Android UIs or XAML for WPF. For pretty much everything else, the complexity only brings headaches.
The information set isn't a description of XML documents, but a description of the data you have that you can write to XML, or the data you'd get back when you parse XML.
This is the key part from the document you linked:
The information set of an XML document is defined to be the one obtained by parsing it according to the rules of the specification whose version corresponds to that of the document.
This is also a great example of the complexity of the XML specifications. Most people do not fully understand them, which is a negative aspect for a tool.
As an aside, you can have an enforced order in XML, but you also have to use XSD so you can specify xsd:sequence, which adds complexity and precludes ordered arrays in arbitrary documents.
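To illustrate, here's a minimal XSD fragment (the element names are made up, and the xsd: prefix is assumed to be bound to the XML Schema namespace) that forces children to appear in a fixed order:

```xml
<!-- hypothetical schema: <name> must come before <price> inside <item> -->
<xsd:element name="item">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="name" type="xsd:string"/>
      <xsd:element name="price" type="xsd:decimal"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```

Without a schema like this, nothing in plain XML tells a consumer that sibling order carries meaning.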
In HTML, which things are attributes and which things are tags is part of the spec. With XML that is being used for something arbitrary, someone is making the choice every time. They might have a different opinion than you do, or even the same opinion, but make different judgments on occasion. In JSON, there are fewer choices, so fewer chances for people to be surprised by other people's choices.
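For example, the same record (field names here are hypothetical) can be encoded either way, and both are perfectly valid XML, so two authors can reasonably disagree:

```xml
<!-- one author's choice: attributes -->
<user id="42" name="alice"/>

<!-- another author's choice: child elements -->
<user>
  <id>42</id>
  <name>alice</name>
</user>
```

A JSON encoding of the same record has essentially one obvious shape, which is the point about fewer surprising choices.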
That's correct, but the order of tags in XML is not meaningful, and if you parse a document and then write it back out, the order can change while still conforming to the spec. Hence, what you posted would need to be something like the following if it was intended to represent an array.
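A minimal sketch of what I mean, with made-up tag names: repeating a single element name makes the "this is an ordered list" intent explicit, rather than relying on the relative order of differently named siblings:

```xml
<items>
  <item>first</item>
  <item>second</item>
  <item>third</item>
</items>
```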
Honestly, anyone pining for all the features of XML probably didn't live through the time when XML was used for everything. It was actually a fucking nightmare to account for the existence of all those features because the fact they existed meant someone could use them and feed them into your system. They were also the source of a lot of security flaws.
This article looks like it was written by someone who wasn't there, and they're calling the people telling them the truth liars because they think the features they found on W3Schools look cool.
I'm not even sure you can install without an MS account anymore if you don't use Rufus. Rufus requires literacy for sure, and even if you can still do it without it, the installer is designed to make it impossible to know that from within the installer itself.
I use VS Code (or VSCodium when not at work) these days because it's a one-stop shop for every language, and every feature I could ever need is available with a plugin. I have used Visual Studio, IntelliJ, and others in the past, and I fail to see the distinction from a usage perspective.
Why would "all this talk of AI not being profitable" be what triggers discussion about LLM use in games? I would think making games fun and interesting should be what triggers any discussion about using anything in games. Are you Satya Nadella trying to find some way to make LLMs profitable? All this ignores that people have actually been talking about exactly what you described for 2 years already.