European Commission Proposes Taking Away the Cops' Big Boy Surveillance Machine

Photo: Dan Kitwood (Getty Images)

The EU is giving the U.S. a run for its money on privacy regulation, and now it’s upped the ante with a dynamo of a proposal: banning AI systems that violate “fundamental rights,” with a special place in hell for law enforcement’s use of real-time biometric identification. That last clause is more of a personal interpretation, but the gist is that it’s time to end the free-for-all.

The sweeping list of protected freedoms in the proposal includes the right to human dignity, respect for privacy, non-discrimination, gender equality, freedom of expression (infringed by the “chilling effect” of surveillance), freedom of assembly, the right to an effective remedy and to a fair trial, the rights of defense and the presumption of innocence, fair and just working conditions, consumer protections, the rights of the child, the integration of persons with disabilities, and environmental protection, insofar as health and safety are impacted.


The proposed regulation is over 100 pages long, so here’s a summary of the bans.


BANNED:

  1. An AI system that “deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.” A companion provision extends this to systems that exploit vulnerabilities tied to a person’s age or mental condition. In a speech, European Commission Executive Vice-President Margrethe Vestager offered an example: “a toy that uses voice assistance to manipulate a child into doing something dangerous. Such uses have no place in Europe,” Vestager continued. “We, therefore, propose to ban them.”
  2. Social scoring by governments: “evaluation or classification of the trustworthiness of natural persons” in a way that leads to “detrimental or unfavorable treatment” in an unrelated social context and/or harms people or groups in a way that “is unjustified or disproportionate to their social behaviour or its gravity.” This implicitly calls out the Chinese Communist Party, which designed a social credit system to score citizens’ “trustworthiness,” a system that has reportedly already blocked tens of millions of travel-ticket purchases over unpaid debt.
  3. Real-time biometric identification by law enforcement in public spaces that infringes on the public’s rights and freedoms. Exceptions are made for searches for missing children, imminent threats to life, active terrorist attacks, and identifying suspects of serious crimes, but even then law enforcement will need prior approval, barring a dire emergency.
  4. One blanket carve-out: military uses fall outside the regulation entirely.

In other words, law enforcement would have to hand over its spy toys for inspection and cut out the kind of abuse that’s now rampant in the United States. Cops have abused face recognition software to make willy-nilly suspect identifications. Baltimore PD was caught using face recognition to scan Freddie Gray protesters and pick them off for outstanding warrants. Predictive policing algorithms intensify targeting of Black communities and perpetuate the cycle of disproportionate arrests. Recidivism-prediction algorithms have likely lengthened prison sentences. Whenever we get a mere glimpse of this secretive technology, its scope turns out to be more terrifying than imagined.


Consumer uses, too, have wildly violated civil rights. Algorithms that assess mortgage eligibility have levied higher interest rates on Black and Latinx borrowers, and risk-scoring algorithms have limited access to healthcare.

Such tools would all likely fall under the European Commission’s broad definition of an “AI system,” which covers machine learning, “knowledge representation,” statistical “approaches,” and search methods, among other techniques. Generally, software qualifies if it can, for a “given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
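
To get a sense of just how broad that definition is, here’s a minimal, hypothetical sketch (the function, thresholds, and numbers are all invented for illustration): even a few lines of rule-based code that spit out a “decision” about a person would plausibly qualify as an “AI system” under the proposal.

```python
# Hypothetical illustration: no machine learning required. Under the
# proposal's broad definition, even this rule-based scorer -- a
# "statistical approach" generating a "decision" from human-defined
# objectives -- would plausibly count as an "AI system."

def loan_decision(income: float, existing_debt: float) -> str:
    """Toy creditworthiness check; the threshold is invented."""
    debt_ratio = existing_debt / max(income, 1.0)
    # Human-defined objective: deny applicants above a debt-to-income cap.
    return "approve" if debt_ratio < 0.4 else "deny"

print(loan_decision(income=52_000, existing_debt=18_000))  # -> approve
```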


The European Commission also proposes strict regulations on AI systems that it deems “high-risk.” (The commission notes that overall this represents a very small proportion of systems in use.) “High-risk” uses include:

  • Real-time and “post” biometric identification, by anyone, outside the banned law enforcement uses
  • Safety management for critical infrastructure like traffic and gas supply
  • Educational access and standardized test assessment
  • Job recruitment, candidate evaluation, promotion or termination
  • Allocation of government benefits
  • Determining creditworthiness
  • Emergency responder dispatch
  • Law enforcement’s assessment of individuals’ likelihood of committing crimes
  • Law enforcement’s prediction of recidivism rates
  • Law enforcement’s profiling of individuals
  • Law enforcement’s detection of a person’s emotional state and ersatz polygraph tests
  • Law enforcement’s detection of deep fakes
  • Law enforcement’s evaluation of evidence
  • Law enforcement’s use of unrelated data sets to “identify unknown patterns or discover hidden relationships in the data”
  • At the border: detection of a person’s emotional state and ersatz polygraph tests
  • At the border: assessment of a visitor’s potential “security risk, a risk of irregular immigration, or a health risk”
  • At the border: verification of travel documents
  • At the border: examination of eligibility for asylum, visa and residence permits
  • Judicial system’s research and interpretation of the law

Providers of all of the above would have to regularly monitor their technology and report back to the European Commission. Developers are expected to create a risk management system to regularly eliminate and mitigate risk. Dealers are expected to provide information and training to users, taking into account the end user’s level of technical knowledge (read: cops). They would be expected to keep records of who used the technology and how, including input data (i.e., cops would have to admit they used Woody Harrelson’s photo to make a suspect ID). They’d also need to inform authorities when they’re aware of a risk.
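
That record-keeping duty is concrete enough to sketch. Here’s a minimal, hypothetical example (the schema and field names are my own invention, not anything in the proposal) of the kind of usage log a provider might keep, down to the input data behind each identification:

```python
# Hypothetical sketch of the record-keeping duty: log who ran the system,
# when, on what input data, and what it produced. Field names are invented
# for illustration; the proposal specifies obligations, not a schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class UsageRecord:
    operator_id: str   # who used the system
    purpose: str       # why it was run
    input_data: str    # e.g., the source of the probe image
    output: str        # what the system produced
    timestamp: str

def log_use(operator_id: str, purpose: str, input_data: str, output: str) -> None:
    record = UsageRecord(
        operator_id=operator_id,
        purpose=purpose,
        input_data=input_data,
        output=output,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Append-only log a regulator could audit later.
    with open("usage_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_use(
    operator_id="detective-042",
    purpose="suspect identification",
    input_data="probe photo: Woody Harrelson lookalike",
    output="candidate list of 3 matches",
)
```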


Government officials would still be free to use biometric identification in ways that don’t cause harm, Vestager added in the speech. The commission considers fingerprint or face scans by border control or customs agents to be harmless.

While some have complained that this will stifle innovation, the commission has added protections for that, too. It would encourage member states to set up “regulatory sandboxes,” supervised by member state authorities or the European Data Protection Supervisor. That sounds like a crackdown, but it’s more like an optional incubator that gives start-ups priority access.


And the European Commission reminds us that the “vast majority” of AI systems don’t fall into the above risk categories: think AI systems that don’t drive human interaction or involve identification. The commission aims to encourage things like smart sensors and algorithms that help farmers maximize food production and sustainability while cutting costs. So, no to barbaric policing and yes to sustaining life on Earth.

Great, let’s go right ahead and copy-paste this.
