The Internet today is full of fake people and fake information. Trust in both technology and institutions is in a downward spiral. This Article offers a novel, comprehensive framework for calibrating a legal response to technology “fakery” through the lens of information security. Introducing the problems of Internet “MIST”—manipulation, impersonation, sequestering, and toxicity—it argues that these MIST challenges threaten the future viability of the Internet through two newly morphed dynamics destructive to trust. First, the arrival of the Internet-enabled “long con” has combined traditional con artistry with enhanced technological capability for data collection. Second, the risk of a new “PSYOP industrial complex” now looms. This chimera fuses techniques and personnel from military psychological operations with marketing technologies of message hyper-personalization to target civilian audiences for behavioral modification. To address these two problematic dynamics through law, the Article constructs a broader theory of Internet untrustworthiness and fakery regulation.
Legal scholarship currently conflates two materially different forms of trust—trust in code and trust in people. To disentangle them, the Article first imports from computer science theory the distinction between “trusted” and “trustworthy” systems. Next, engaging with the work of marketing theorist Edward Bernays, philosopher Jacques Ellul, and the theory of illusionism, this Article explains that determinations of intent/knowledge and context can serve as guideposts for legal paradigms of “untrustworthiness.” This recognition offers a path forward for legal analysis of technology fakery and Internet harms. Engaging with multiple threads of First Amendment jurisprudence and scholarship, the Article next sketches the First Amendment bounds for regulation of untrustworthy technology content and conduct. Finally, this Article presents a novel framework inspired by the philosophy of deception to frame future legal discussions of untrustworthy technology fakery—the NICE framework. NICE evaluates three variables—the legal nature of the technology content or conduct, the intent and knowledge of the faker, and the sensitivity of the context. NICE thus leverages traditional legal models of regulation to the greatest extent possible in addressing technology fakery. This Article concludes with examples of regulatory approaches to technology fakery informed by the NICE framework.
Andrea M. Matwyshyn and Miranda Mowbray, Fake, 43 Cardozo L. Rev. 643 (2021).