

Cars do have that, in what amounts to a TCU, or Telematics Control Unit. The main problem here isn't whether cars have the technology. It's about the relevant government agency forcing companies like Tesla (and other automakers) to produce that data not just when there's a crash, but as a matter of course.
I have a lot of questions about why Teslas are allowed on public roads when some of the models haven't been crash-tested. I have a lot of questions about why a company wouldn't hand over data in the event of a crash without the requirement of a court order. I don't necessarily agree that cars should be able to track us (if I buy it, I own it, and nobody should have that kind of data without my say-so). But since we already have cars that phone this data home, local, state, and federal governments should have access to it. Especially when insurance companies are happy to use it to assign blame after a crash so they don't have to pay out on a policy.
Here's a question. I'm gonna preface it with some details. One of the things I used to do for the US Navy was develop security briefs. Writing a brief essentially means pulling information from several sources (some of which might be classified in some way) to provide detail for briefing a person or group on mission parameters.
Collating that data is important, and it has to be not only correct but also up to date and ready in a timely manner. I'm sure ChatGPT or a similar tool could do that to a degree (minus the part about it being completely correct).
There are people sitting in degree programs as we speak who are using ChatGPT or another LLM to take shortcuts, not just in learning but in doing coursework. Some of those people are in counterintelligence programs and similar fields. Those people may inadvertently put classified information into these models. I would bet it has already happened.
The same can be said for trade secrets. There are lots of companies out there building code bases that are themselves trade secrets or that deal with trade-secret-protected information.
Are you suggesting that they add such tools to their arsenal to make their output faster? What happens when they do, and the results are collected by whatever model they use and fed back into the training data?
Do you admit that there are dangers here that people may not be aware of, or may not even realize could become a problem in a field they one day work in? I wonder about this all the time. People only seem to be thinking about the here and now, how quickly something can be done, and not about the consequences of doing it quickly or more "efficiently" using an LLM. I wonder why people don't think about it the other way around.