Since Flock's CEO wants to give this movement some press:
Here's Benn Jordan. He's done a series of videos on the cameras, demonstrating their vulnerabilities and discussing how Flock has been deploying them quietly by co-opting local municipalities to subsidize its national rollout.
First video, the one that seems to have started the major anti-Flock push: https://www.youtube.com/watch?v=Pp9MwZkHiMQ
Follow-up showing how easy they are to hack: https://www.youtube.com/watch?v=uB0gr7Fh6lY
More live demonstrated vulnerabilities: https://www.youtube.com/watch?v=vU1-uiUlHTo
Not as directly related, but he discusses a way to use generative AI models to create noise masks for your specific plate that disrupt the OCR process ALPRs use (key term: Adversarial Noise): https://www.youtube.com/watch?v=W_F4rEaRduk
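To make the adversarial-noise idea concrete, here's a minimal toy sketch in Python. It uses a hypothetical linear "character detector" and the fast gradient sign method (FGSM), one standard way to craft adversarial perturbations; the weights and inputs are made up for illustration and have nothing to do with Flock's actual OCR pipeline.

```python
# Toy FGSM demo: a small, gradient-crafted perturbation flips a
# classifier's decision. All numbers below are illustrative, not any
# real ALPR model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights of a linear "is this character read
# correctly?" model.
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.0

x = np.array([0.8, -0.5, 0.3, 0.6])  # clean input features
y = 1.0                              # true label: plate reads correctly

p_clean = sigmoid(w @ x + b)         # model is confident on clean input

# FGSM: the gradient of the logistic loss w.r.t. the input is
# (p - y) * w; step a small amount in the sign of that gradient.
eps = 0.7
grad = (p_clean - y) * w
x_adv = x + eps * np.sign(grad)      # bounded, sign-only perturbation

p_adv = sigmoid(w @ x_adv + b)
print(round(p_clean, 3), round(p_adv, 3))  # confidence collapses below 0.5
```

The noise mask in the video works on the same principle, but against a deep OCR network and rendered as a printable pattern rather than raw feature noise.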
The big danger here, which these steps mitigate but do not solve, is:
#1 Algorithmically curated content
On the various social media platforms, automated content-moderation systems remove or suppress content, ostensibly to protect users from illegal or disturbing material. In addition, there are systems for recommending content to a user, combining metrics about the content with metrics about the user via machine-learning models and other controls: a system that both restricts and promotes content based on criteria set by the owner. We commonly call this, abstractly, 'The Algorithm.' Meta has theirs, X has theirs, TikTok has theirs. Originally these were used to recommend ads and products, but the companies have since discovered that selling political opinions for cash is a far more lucrative business. That shift, from advertiser to for-hire propagandist, is the heart of the problem.
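A stripped-down sketch of what such a ranking system looks like: content metrics and user metrics get combined into one score, and an owner-controlled multiplier can silently promote or suppress whole topics. Every name and weight here is invented for illustration; no platform publishes its real scoring function.

```python
# Hypothetical feed-ranking sketch. The owner_boost table is the part
# users never see: it can be changed (or sold) at any time.
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    engagement: float   # predicted likes/shares (content metric)
    affinity: float     # predicted interest for this user (user metric)

# Criteria set by the owner, invisible to the user.
owner_boost = {"ads": 2.0, "politics_a": 1.5, "politics_b": 0.2}

def score(post: Post) -> float:
    base = 0.6 * post.engagement + 0.4 * post.affinity
    return base * owner_boost.get(post.topic, 1.0)

feed = [
    Post("politics_b", engagement=0.9, affinity=0.9),  # user strongly prefers this...
    Post("politics_a", engagement=0.4, affinity=0.3),  # ...but the boost wins
]
ranked = sorted(feed, key=score, reverse=True)
print([p.topic for p in ranked])  # ['politics_a', 'politics_b']
```

The point of the sketch: nothing in the user-facing metrics changed, yet the owner's preferred topic outranks the one the user actually wants to see.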
The personal metrics these systems use are built from every bit of information the company can extract from you via your smartphone, linked identity, ad-network data, and other data brokers. The data available on the average consumer is pretty comprehensive, right down to the user's rough or exact location in real time.
The Algorithms used by social media companies are black boxes, so we don't know how they are designed, nor how they are being used at any given moment. There are things they are required to do (like block illegal content), but there are few, if any, restrictions on what they can block or promote otherwise. There are also no reporting requirements for changes to these systems, and no restrictions on selling the use of The Algorithm for any reason whatsoever.
There have been many public examples of the owners of that box restricting speech by de-prioritizing videos or suppressing content containing specific terms, in a way that imposes a specific viewpoint through manufactured consensus. We have no idea whether this was done by accident (as the companies claim when they operate too brazenly and are discovered), because the owner held a specific viewpoint, or because the owner was paid to impose one.
This means our entire online public discourse is controllable. That means of control is essentially unregulated, and it is increasingly being used and sold for what cannot be called anything but propaganda.
#2 - There is no #2. The Algorithms are dangerous cyberweapons; their usage should be heavily regulated, with severe restrictions on their use against people.