Maryland Law Review

Wastewater Monitoring and Informed Consent: Interrogating the Research/Surveillance Binary Under the Common Rule

Morgan Cole

According to the U.S. Centers for Disease Control and Prevention (“CDC”), “[s]urveillance systems can be either research or nonresearch, depending whether the purpose is to identify and control a health problem or to contribute to knowledge beyond the system’s participants, to society.” However, subpart A of the Federal Policy for the Protection of Human Subjects—otherwise known as the Common Rule—uniformly defines public health surveillance as nonresearch for the purpose of deciding when investigators must solicit informed consent from human research subjects.

Wastewater surveillance regimes share characteristics with both research and surveillance as defined under the Common Rule, and illustrate why treating those concepts as binary may be increasingly problematic. Though once a relatively straightforward way to anticipate and prevent infectious disease outbreaks, wastewater monitoring has evolved into a multifaceted surveillance tool aimed at a diverse suite of public health endpoints. The rapid growth of this technology during and after the height of the COVID-19 pandemic has made it an invaluable component of population-based disease prevention efforts, and, simultaneously, raised the following question: At what point does surveilling wastewater absent informed consent become unethical?

I’m Going to Need to See Some ID: Free Speech Coalition v. Paxton and Online Age Verification

Kyle Korte

In Free Speech Coalition, Inc. v. Paxton, the Supreme Court considered a First Amendment challenge to a Texas statute that requires websites hosting material that is obscene to minors to verify visitors’ ages before granting access. The Free Speech Coalition (“FSC”), a trade association for the adult entertainment industry, along with a group of companies that operate pornographic websites and an adult entertainer, alleged that the statute unconstitutionally burdened adults’ First Amendment rights by requiring age verification to access material that, while obscene to minors, is not obscene to adults and is therefore protected speech. The Court held that the statute’s burden on adults’ access was only incidental and that, under intermediate scrutiny, implementing age verification is “appropriately tailored” to advance Texas’s “important interest in shielding children from sexually explicit content.” The Court erred in its decision because the statute directly burdens adults’ free speech rights and thus requires strict scrutiny. In deciding to apply intermediate scrutiny, the Court ignored the nuances of age verification online, the significant data privacy issues it poses, and the substantial burden it places on the protected speech of millions of adults. By applying intermediate scrutiny, the Court permits broad content-based restrictions and affords legislatures a degree of deference that puts adults’ access to protected speech, beyond access to pornography, at risk.

For You or Forbidden: TikTok v. Garland and the TikTok “Ban”

Kyra Wisneski

In TikTok Inc. v. Garland, the Supreme Court considered whether two provisions of the Protecting Americans from Foreign Adversary Controlled Applications Act (the “Act,” or “PAFACA”) violated the petitioners’ First Amendment rights. The petitioners included a group of U.S.-based TikTok users (“creator-petitioners”), TikTok itself, and its parent company, ByteDance. The Court first assumed, without deciding, that the Act implicated the First Amendment. The Court then concluded that the Act was facially content-neutral and therefore subject to intermediate scrutiny. Finally, the Court held that the Act, as applied to the petitioners, satisfied intermediate scrutiny because it was sufficiently tailored to serve the government’s important interest in preventing China, a foreign adversary, from accessing U.S. user data.

The Court should have held that the challenged provisions were expressly subject to First Amendment scrutiny because they imposed a disproportionate burden on the petitioners’ First Amendment rights. The Court correctly concluded that the Act was facially content-neutral and justified by a content-agnostic purpose: preventing a foreign adversary from exploiting sensitive data from 170 million U.S. users. On that basis, intermediate scrutiny was the appropriate standard of review, and the Act satisfied this standard. However, the Court failed to meaningfully engage with the government’s second asserted justification: preventing a foreign adversary from covertly manipulating content on TikTok. This justification is inherently content-based and, as an independent rationale, warrants strict scrutiny. By declining to recognize it as such, the Court risks permitting impermissible regulations of speech to evade exacting judicial review under the guise of preventing content manipulation.
