Remote Repossession
Ford’s February 2023 patent application raises a new possibility: that after a default, an internet-connected vehicle might autonomously drive itself off the owner’s premises—to a public space, to the repossession agency, or even to a junkyard. But while this “remote repossession” would minimize the risks of harm that attend in-person repossessions, it creates at least three new risks. First, a danger of bodily injury and property damage to the owner. Second, an increased likelihood of physical harm to a third party with no obviously responsible entity. And third, most invisibly but also perhaps most importantly: further erosion of consumers’ current structural rights—which might include a right against intrusions, a right to a certain amount of due process and human engagement before a repossession, and a right to be free from foreseeable harms associated with corporate remote interference.
I employ a techlaw methodology to explore what legal changes would better protect us—as potential defaulting owners, as possibly harmed third parties, and as consumers who must increasingly rely on corporations to take reasonable care when engaging in digital self-help.
Cyborg Justice and the Risk of Technological-Legal Lock-In
Although Artificial Intelligence (AI) is already of use to litigants and legal practitioners, we must be cautious and deliberate in incorporating AI into the common law judicial process. Human beings and machine systems process information and reach conclusions in fundamentally different ways, with AI being particularly ill-suited for the rule application and value balancing required of human judges. Nor will “cyborg justice”—hybrid human/AI judicial systems that attempt to marry the best of human and machine decisionmaking and minimize the drawbacks of both—be a panacea. While such systems would ideally maximize the strengths of human and machine intelligence, they might also magnify the drawbacks of both. They also raise distinct teaming risks associated with overtrust, undertrust, and interface design errors, as well as second-order structural side effects.

One such side effect is “technological–legal lock-in.” Translating rules and decisionmaking procedures into algorithms grants them a new kind of permanency, which creates an additional barrier to legal evolution. In augmenting the common law’s extant conservative bent, hybrid human/AI judicial systems risk fostering legal stagnation and an attendant loss of judicial legitimacy.
The Killer Robots Are Here: Legal and Policy Implications
In little over a year, the possibility of a complete ban on autonomous weapon systems—known colloquially as “killer robots”—has evolved from a proposal in an NGO report to the subject of an international meeting with representatives from over eighty states. However, no one has yet put forward a coherent definition of autonomy in weapon systems from a law of armed conflict perspective, which often results in the conflation of legal, ethical, policy, and political arguments. This Article therefore proposes that an “autonomous weapon system” be defined as “a weapon system that, based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets.”
Applying this definition, and contrary to the nearly universal consensus, it quickly becomes apparent that autonomous weapon systems are not weapons of the future: they exist and have already been integrated into states’ armed forces. The fact that such weaponry is currently being used with little critique has a number of profound implications. First, it undermines pro-ban arguments based on the premise that autonomous weapon systems are inherently unlawful. Second, it significantly reduces the likelihood that a complete ban would be successful, as states will be unwilling to voluntarily relinquish otherwise lawful and uniquely effective weaponry.
But law is not doomed to follow technology: if used proactively, law can channel the development and use of autonomous weapon systems. This Article concludes that intentional international regulation is needed now, and it suggests how such regulation may be designed to incorporate beneficial legal limitations and humanitarian protections.
Constitutional Convergence and Customary International Law
In Getting to Rights: Treaty Ratification, Constitutional Convergence, and Human Rights Practice, Zachary Elkins, Tom Ginsburg, and Beth Simmons study the effects of post-World War II human rights texts on domestic constitutions, with a particular focus on the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights (ICCPR). After analyzing 680 constitutional systems compiled by the Comparative Constitutions Project to create a list of seventy-four constitutionally protected rights, the authors evaluate whether countries incorporate internationally codified human rights into their domestic constitutions, whether ratification of international agreements affects the probability of rights incorporation, and whether such incorporation increases the likelihood that countries enforce rights in practice.
After tabulating the data and running random-effects models, the authors find “a significant upward shift in the similarity to the [Universal Declaration] among constitutions written after 1948,” leading them to conclude that the Universal Declaration acted as a “template” from which constitutional drafters could select rights. They also demonstrate—after controlling for the era and a state’s prior constitutional tradition—that post-1966 constitutions from states that ratified the ICCPR are more likely than those from non-ratifying states to include its codified rights. Finally, relying on Freedom House’s civil liberties index, the authors conclude that human rights agreement ratification and constitutional incorporation are correlated with improved human rights practice on the ground. […]
The Internet of Torts: Expanding Civil Liability Standards to Address Corporate Remote Interference
Thanks to the proliferation of internet-connected devices that constitute the “Internet of Things” (“IoT”), companies can now remotely and automatically alter or deactivate household items. In addition to empowering industry at the expense of individuals, this remote interference can cause property damage and bodily injury when an otherwise operational car, alarm system, or implanted medical device abruptly ceases to function.
Even as the potential for harm escalates, contract and tort law work in tandem to shield IoT companies from liability. Exculpatory clauses limit civil remedies, IoT devices’ bundled object/service nature thwarts implied warranty claims, and contractual notice of remote interference precludes common law tort suits. Meanwhile, absent a better understanding of how IoT-enabled injuries operate and propagate, judges are likely to apply products liability and negligence standards narrowly, in ways that curtail corporate liability.
But this is hardly the first time a new technology has altered social and power relations between industries and individuals, creating a potential liability inflection point. As before, we must decide what to incentivize and whom to protect, with an awareness that the choices we make now will shape future assumptions about IoT companies’ obligations and consumer rights. Accordingly, this Article proposes reforms to contract and tort law to expand corporate liability and minimize foreseeable consumer injury.
A Meaningful Floor for Meaningful Human Control
To the extent there is any consensus among States, ban advocates, and ban skeptics regarding the regulation of autonomous weapon systems (AWS), it is grounded in the idea that all weaponry should be subject to meaningful human control. This intuitively appealing principle is immensely popular, and numerous States have explicitly declared their support for it or questioned the lawfulness of weapons that operate without such control. Lack of opposition has led some to conclude that it is either a newly developed customary norm or a preexisting, recently exposed rule of customary international law, already binding on all States.
But this broad support comes at a familiar legislative cost: there is no consensus as to what meaningful human control actually requires. State X might define meaningful human control to require informed human approval of each possible action of a given weapon system (maintaining a human being “in the loop”); State Y might understand it as the ability of a human operator to oversee and veto a weapon system’s actions (having a human being “on the loop”); and State Z might view the original programming alone as providing sufficiently meaningful human control (allowing human beings to be “off the loop”). As the Czech Republic noted, in voicing its belief that the decision to end somebody’s life must remain under meaningful human control, “[t]he challenging part is to establish what precisely ‘meaningful human control’ would entail.”
This paper describes attempts to clarify what factors are relevant to meaningful human control, discusses benefits associated with retaining imprecision in a standard intended to regulate new technology through international consensus, and argues that the standard’s vagueness should be limited by an interpretive floor. Meaningful human control as a regulatory concept can usefully augment existing humanitarian norms governing targeting—namely, that all attacks meet the treaty and customary international law requirements of distinction, proportionality, and feasible precautions. However, it should not be interpreted to conflict with these norms nor be prioritized in a way that undermines existing humanitarian protections. […]
War Torts: Accountability for Autonomous Weapons
Unlike conventional weapons or remotely operated drones, autonomous weapon systems can independently select and engage targets. As a result, they may take actions that look like war crimes—the sinking of a cruise ship, the destruction of a village, the downing of a passenger jet—without any individual acting intentionally or recklessly. Absent such willful action, no one can be held criminally liable under existing international law.
Criminal law aims to prohibit certain actions, and individual criminal liability allows for the evaluation of whether someone is guilty of a moral wrong. Given that a successful ban on autonomous weapon systems is unlikely (and possibly even detrimental), what is needed is a complementary legal regime that holds states accountable for the injurious wrongs that are the side effects of employing these uniquely effective but inherently unpredictable and dangerous weapons. Just as the Industrial Revolution fostered the development of modern tort law, autonomous weapon systems highlight the need for “war torts”: serious violations of international humanitarian law that give rise to state responsibility.
Judicious Influence: Non-Self-Executing Treaties and the Charming Betsy Canon
Despite their seeming impotence, non-self-executing treaties play an important role in domestic jurisprudence. When a statute permits more than one construction, judges have a number of interpretive tools at their disposal. One of these is the Charming Betsy canon, which encourages judges to select an interpretation of an ambiguous statute that accords with U.S. international obligations, including those expressed in non-self-executing treaties. This Note concludes that the judicial practice of giving indirect force to all treaties through the Charming Betsy canon is both justified and beneficial.
