The political bias of AI will be set by those tuning the models. Now mix in a bunch of voters asking LLMs who they should vote for, because people will outsource their thinking any chance they get. The result is model owners being able to sway elections with very little effort.
I can't fault that logic. I just think the general public get guilt-tripped a lot of the time for things that are really the fault of corporations, which continue to escape responsibility.
I just meant that it's not a common turn of phrase in English, and the author possibly isn't a native English speaker, so they're trying to make a Ukrainian idiom fit the headline when it doesn't.
There's a difference between driving fast and being ready to move when an opportunity appears. It mainly comes down to watching traffic far enough down the road that you can anticipate where the gap will be, which then lets you merge into it smoothly.
The encryption of streaming media is annoying, but it's not what I fear. What worries me is the ability to lock the software I run on my hardware to "approved vendors" only, and that's what TPM promises: a security model in which the one party that isn't trusted is the person who owns the hardware.
Do we have any evidence of them acting "on instructions from the ruling Chinese Communist Party"? I think we know they're full of security holes, sure. I always put that down to the designers not caring about doing a good job.
Journalists who interview LLMs need a slap. It's not an admission of wrongdoing if the LLM is the source.