The Ninth Circuit Court of Appeals has put police on notice: an automatic license plate reader (ALPR) alert, without human verification, is not enough to pull someone over.
Last week, the appellate court issued an important opinion in Green v. City & County of San Francisco, a civil rights lawsuit that calls into question whether technology alone can provide the basis for reasonable suspicion under the Fourth Amendment. The panel overturned a lower court ruling in favor of San Francisco and its police department, allowing the case to go to trial.
A case of computer error
Late one night in 2009, San Francisco cops pulled over Denise Green, an African-American city worker driving her own car. At gunpoint, they handcuffed her, forced her to her knees, and then searched both her and her car — all because an automatic license plate reader had misread her plate and identified her car as stolen.
The first error was technological: the ALPR unit misread Ms. Green’s plate, recording a “3” as a “7.” The next errors were human. The officer driving the ALPR-equipped squad car never visually verified that Green’s plate matched the ALPR “hit”—despite an SFPD policy requiring exactly that—even after he called the hit in and dispatch matched the plate to a vehicle that looked nothing like Green’s. The officer who later pulled Green over also failed to verify the plate, even though he had ample opportunity to do so while stopped directly behind Green’s car at a red light.
Based solely on the ALPR “hit” and dispatch confirmation that the falsely read plate was linked to a stolen vehicle, the second officer called for backup and initiated a “high-risk” or “felony” traffic stop. With at least four officers pointing their guns at her, Green was ordered from her car and forced to spend at least 10 minutes handcuffed and on her knees (a position so physically taxing for Green that the officers later had to help her up).
A search of Green and her car turned up nothing, and she had no criminal record. Although at this point the officers should have realized they had pulled over the wrong car—and, more critically, that Green’s license plate did not match the plate of the reported stolen vehicle—it still took them another 10 minutes to figure out their mistake and send Green on her way.
Green sued the city and the officers, claiming they violated her Fourth Amendment right to be free from unreasonable search and seizure. The district court judge granted summary judgment in favor of the defendants, finding it was reasonable for the officer who pulled Green over to assume the first officer had confirmed the ALPR hit, and further holding that it was a reasonable, good-faith “mistake” for both officers to assume, without verifying, that Green’s plate matched the hit.
The Ninth Circuit disagreed, ruling that a jury could find that an officer’s unverified reliance on an ALPR hit is an insufficient basis for a traffic stop, and that the subsequent search and seizure of Green could therefore violate the Fourth Amendment. Importantly, the appeals court made clear that an unconfirmed ALPR hit alone could not provide a legal basis to pull Green over.
Ironically, SFPD already had a policy requiring cops to visually confirm that the plate on the car matched the plate identified by the ALPR system. The International Association of Chiefs of Police (“IACP”) has described this as one of the “essential components” of training on ALPR use, and several of the state policies mentioned by IACP also require this verification. But in Green’s case, none of the officers followed that policy.
False positives and the danger of over-reliance on technology
This case clearly shows the risks of blind reliance on technology to identify suspects in criminal investigations. If the ALPR camera had not misread Green’s license plate and alerted the first officer, Green never would have been stopped, and the entire ordeal could easily have been avoided.
More and more frequently, cops are looking to technology to do initial identification of suspected criminals—whether it’s ALPR for traffic stops, face recognition for mug shots, or DNA for crime scene forensics. Yet these technologies are fallible.
Just last month, TechDirt reported that a driver in a Kansas City suburb found himself surrounded by cops with guns drawn after a license plate camera misread his plate. Similar mistakes are possible with DNA and face recognition. For example, a man was misidentified as the perpetrator of a brutal home invasion and murder in San Jose based solely on his DNA. Even though he had a solid alibi — he was inebriated and in the hospital the night of the murder — he still spent five months in jail. Researchers at NYU note that face recognition poses false-positive risks as well, especially when databases like the FBI’s Next Generation Identification (NGI) system include many millions of face images. And even the FBI admits NGI fails to provide accurate results in at least 15 percent of identifications.
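To put that error rate in perspective, here is a minimal back-of-the-envelope sketch in Python. It simply multiplies a per-search error rate by a search volume; the 15 percent figure is the floor cited above, while the search volumes are purely illustrative assumptions, not figures from the court record or the FBI.

```python
# Back-of-the-envelope estimate: how many searches return an inaccurate
# result at a given per-search error rate. The search volumes below are
# illustrative assumptions; only the 15 percent floor comes from the
# figure cited above.

def expected_inaccurate_results(searches: int, error_rate: float) -> float:
    """Expected number of searches that return an inaccurate result."""
    return searches * error_rate

if __name__ == "__main__":
    for searches in (10_000, 100_000, 1_000_000):
        bad = expected_inaccurate_results(searches, 0.15)
        print(f"{searches:>9,} searches -> ~{bad:,.0f} inaccurate results")
```

Even under these hypothetical volumes, the arithmetic makes the point: a "small" error rate applied at scale produces tens of thousands of wrong results, each one a potential innocent person treated as a suspect.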
We have noted before that false positives attributable to tech-based identification systems pose a risk for democratic societies because they turn innocent people into suspects, requiring them to prove their innocence rather than requiring law enforcement to prove their guilt. This completely upends our system of justice. In the United States, there is a presumption of innocence and a Fourth Amendment right to be free from unreasonable searches and seizures. This must mean—at a minimum—that law enforcement officers need reasonable suspicion of criminal activity before they can stop you.
Just as officers can make human mistakes that undermine whether there is in fact reasonable suspicion for a stop, this case shows that new technologies can and do make mistakes too. Innocent people shouldn’t have to pay the price for these mistakes.
Source: Electronic Frontier Foundation (EFF) – eff.org