Document Type

Article

Publication Date

2013

Abstract

Courts are proudly resigned to the fact that the probable cause inquiry is “nontechnical.” In order to conduct a search or make an arrest, police need to satisfy the probable cause standard, which the Supreme Court has deemed “incapable of precise definition or quantification into percentages.” The flexibility of this standard enables courts to defer to police officers’ reasonable judgments and expert intuitions in unique situations. However, police officers are increasingly using investigative techniques that replace their own observational skills with test results from some other source, such as drug-sniffing dogs, facial recognition technology, and DNA matching. The reliability of such practices can and should be quantified, but the vagueness of the probable cause standard makes it impossible for judges to determine which error rates are inconsistent with probable cause.

This article confronts the intersection between quantifiable evidence and the relentlessly fuzzy probable cause standard. It proposes that the probable cause standard be assigned a numerical value as a minimum threshold in cases where probable cause is based on mechanistic techniques that essentially replace a police officer’s own judgment. The article begins by exploring how the police and courts currently apply the probable cause standard, including courts’ confrontations with probabilities. It then explains why certain evidence should require quantified error rates to establish probable cause and how to properly calculate these error rates. In the final section, the article argues that assigning a minimum percentage to probable cause in appropriate circumstances would add much-needed clarity to the law and protect against systemic abuses.
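As a purely illustrative sketch (not drawn from the article itself), the stakes of how an error rate is calculated can be seen in a simple Bayesian calculation for a hypothetical drug-sniffing dog. All figures below are assumed for illustration only; the point is that the probability an alert is correct depends on the base rate of contraband, not just on the dog's accuracy in controlled tests.

# Illustrative only: hypothetical numbers for a drug-sniffing dog, showing
# why the quantity relevant to probable cause is P(contraband | alert),
# not simply the dog's test accuracy.

sensitivity = 0.90          # assumed: P(alert | contraband present)
false_positive_rate = 0.10  # assumed: P(alert | no contraband)
base_rate = 0.01            # assumed: fraction of sniffed vehicles carrying contraband

# Bayes' theorem: probability that a given alert is correct.
p_alert = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_contraband_given_alert = sensitivity * base_rate / p_alert

print(f"P(alert) = {p_alert:.3f}")
print(f"P(contraband | alert) = {p_contraband_given_alert:.3f}")  # about 0.083 under these assumptions

Under these assumed figures, only about 8 percent of alerts would be correct, which is the kind of quantity a minimum numerical threshold for probable cause would have to confront.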

Comments

Publication Information: Forthcoming in Lewis & Clark Law Review.

Available at SSRN: http://ssrn.com/abstract=2116389
