A History of Online Gatekeeping
The brief but intense history of American judicial and legislative confrontation with problems caused by the online world has demonstrated a certain wisdom: a reluctance to intervene in ways that dramatically alter online architectures; a solicitude for the collateral damage that interventions might wreak upon innocent activity; and, in the balance, a refusal to allow unambiguously damaging activities to remain unchecked if there is a way to curtail them.
The ability to regulate lightly while still curtailing the worst online harms that might arise has sprung from the presence of gatekeepers. These are intermediaries of various kinds - generally those who carry, host, or index others' content - whose natural business models and corresponding technology architectures have permitted regulators to conscript them to eliminate access to objectionable material or to identify wrongdoers in many instances.
The bulk of this Article puts together the pieces of that history most relevant to an understanding of the law's historical forbearance, describing a trajectory of gatekeeping beginning with defamation and continuing to copyright infringement, including shifts in technology toward peer-to-peer networks, that has so far failed to provoke a significant regulatory intrusion. I argue that the U.S. Supreme Court's Grokster decision upholds this tradition of light-touch regulation that has allowed the Internet to thrive. The decision thus is not a landmark so much as a milestone, ratifying a continuing detente between those who build on the Internet and those in a position to regulate the builders.
Grokster may have achieved such a fit with its ancestors by avoiding a set of now-pressing issues about gatekeepers. This avoidance is revealed by looking at Grokster's outcome: a loss for Grokster Ltd. that has no practical impact on the distribution and use of the sort of PC software that got Grokster Ltd. in trouble. The most recent peer-to-peer technologies eliminate a layer of intermediation from the networks they create; there are often no longer central websites or services that can be blamed, and then shut down or modified, to dampen the objectionable activities that they enable. Even decentralized Internet service providers may prove unable to intercede much, as new overlay networks cloak users' network identities in addition to their personal ones. The loss of these natural points of control will cause those with challenged interests to foreground a new and less palatable set of intermediaries: software authors. These authors may be asked to write their software in such a way that it can be recalled or modified after it has been obtained by a user and then put to an undesirable purpose. They may even be asked to program their software to disable the installed software of others. Control over software - and the ability of PC users to run it - rather than control over the network, will be a future battleground for Internet regulation, a battleground primed by an independently motivated movement by consumers away from open, generative PCs and toward more highly regulable endpoint platforms.
The Rise and Fall of Sysopdom
"Sysop" has gone from a term of art known only to the bleeding-edge few to a dusty anachronism known only to the bleeding-gums few, without the usual years-long general linguistic acceptance and respect in between. In case the reader is not among the bleeders: sysops (from "system operators") run electronic areas accessible by typing furiously on one’s networked computer, through which one can meet, talk to (well, at least type at), and develop nuanced social relationships with other people similarly typing and reading. Few know what a sysop is because these electronic areas — aspirationally, and sometimes accurately, known as "online communities" — have never quite flourished and today are in decline.
Indeed, "online community" joins "sysop" in the oversize dustbin of trite or hopelessly esoteric, hence generally meaningless, cyberspace vernacular. Not that "online community" is obscure, like "sysop"; rather, the term's emptiness results from its abuse. "Online community" is used by Internet companies the way a motivational speaker uses "excellence," an academic uses "new paradigm," or a lawyer uses "justice": it represents something once craved and still invoked (if only as a linguistic placeholder) even as it is believed by all but the most naïve to be laughably beyond reach. Since it's applied to almost anything, it now means vague warm fuzzies and nothing more. The craft of sysoping and the phenomenon of online community (non-hollowly defined) have gone down together even as the Internet has burgeoned, and I want to explain what has happened to sysops as a way of explaining what has happened to the truly great and transformative promise of online communities. Law has played a major role in two distinct ways. First, sysops and the members of the communities they lead have struggled through lawlike reflection to arrive at just solutions to the disputes that inevitably arise in the course of their interactions. This struggle is a large part of what has made the communities so interesting. Second, fear of the formalistic application of the machinery of the real-world legal system is threatening to drive the amateur sysop to extinction and thereby to destroy what's left of online community.
Book Review: What's in a Name?
In the spring of 1998, the U.S. government told the Internet: Govern yourself. This unfocused order - a blandishment, really, expressed as an awkward "statement of policy" by the Department of Commerce, carrying no direct force of law - came about because the management of obscure but critical centralized Internet functions was at a political crossroads.
This essay reviews Milton Mueller's book Ruling the Root and the ways in which it accounts for what happened both before and after that crossroads.
The Generative Internet
The generative capacity for unrelated and unaccredited audiences to build and distribute code and content through the Internet to its tens of millions of attached personal computers has ignited growth and innovation in information technology and has facilitated new creative endeavors. It has also given rise to regulatory and entrepreneurial backlashes. A further backlash among consumers is developing in response to security threats that exploit the openness of the Internet and of PCs to third-party contribution. A shift in consumer priorities from generativity to stability will compel undesirable responses from regulators and markets and, if unaddressed, could prove decisive in closing today's open computing environments. This Article explains why PC openness is as important as network openness, as well as why today's open network might give rise to unduly closed endpoints. It argues that the Internet is better conceptualized as a generative grid that includes both PCs and networks rather than as an open network indifferent to the configuration of its endpoints. Applying this framework, the Article explores ways - some of them bound to be unpopular among advocates of an open Internet represented by uncompromising end-to-end neutrality - in which the Internet can be made to satisfy genuine and pressing security concerns while retaining the most important generative aspects of today's networked technology.
Searches and Seizures in a Networked World
This essay responds to Orin S. Kerr, Searches and Seizures in a Digital World, 119 Harv. L. Rev. 531 (2005), http://ssrn.com/abstract=697541.
Professor Kerr has published a thorough and careful article on the application of the Fourth Amendment to searches of computers in private hands - a treatment that has previously escaped the attentions of legal academia. Such a treatment is perhaps so overdue that it has been overtaken by two phenomena: first, the emergence of an overriding concern within the United States about terrorism; and second, changes in the way people engage in and store their most private digital communications and artifacts.
The first phenomenon has foregrounded a challenge by the President to the very notion that certain kinds of searches and seizures may be proscribed or regulated by Congress or the judiciary. The second phenomenon, grounded in the mass public availability of always-on Internet broadband, is leading to the routine entrustment of most private data to the custody of third parties - something orthogonal to a doctrinal framework in which the custodian of matter searched, rather than the person who is the real target of interest of a search, is typically the only one capable of meaningfully asserting Fourth Amendment rights to prevent a search or the use of its fruits.
Together, these phenomena make the application of the Fourth Amendment to the standard searches of home computers - searches that, to be sure, are still conducted regularly by national and local law enforcement - an interesting exercise that is yet overshadowed by greatly increased government hunger for private information of all sorts, both individual and aggregate, and by rapid developments in networked technology that will be used to satisfy that hunger.
Perhaps most important, these factors transform Professor Kerr's view that a search occurs for Fourth Amendment purposes only when its results are exposed to human eyes: such a notion goes from unremarkably unobjectionable - police are permitted to mirror entirely a suspect's hard drive and then are constitutionally limited as they perform searches on the copy - to dangerous to any notion of limited government powers. Professor Kerr appreciates this as a troublesome result - indeed, downright creepy - but does not dwell upon it beyond suggesting that the copying of data might be viewed as a seizure if not a search, at least so long as it involves some physical touching or temporary commandeering of the machine. This view should be amplified: If remote vacuum cleaner approaches are used to record and store potentially all Internet and telephone communications for later searching, with no Fourth Amendment barrier to the initial information-gathering activity in the field, the government will be in a position to perform comprehensive secret surveillance of the public without any structurally enforceable barrier, because it will no longer have to demand information in individual cases from third parties or intrude upon the physical premises or possessions of a search target in order to gather information of interest. The acts of intruding upon a suspect's demesnes or compelling cooperation from a third party are natural triggers for judicial process or public objection. If the government has all necessary information for a search already in its possession, then we rely only upon its self-restraint in choosing the scope and depth of otherwise unmonitorable searching. This is precisely the self-restraint that the Fourth Amendment eschews for intrusive government searches by requiring outside monitoring by disinterested magistrates - or individually exigent circumstances in which such monitoring can be bypassed.
Taken together, the current areas of expansion of surveillance appear permanent rather than exigent, and sweeping rather than focused, causing the justifications behind special needs exceptions to swamp the baseline protections established for criminal investigations. This expansion stands to remove the structural safeguards designed to forestall the abuse of power by a government that knows our secrets.
The Un-Microsoft Un-Remedy: Law Can Prevent the Problem That It Can't Patch Later
Microsoft has brilliantly exploited its current control of the personal computer operating system (OS) market to grant itself advantages towards controlling tomorrow's operating system market as well. This is made possible by the control Microsoft has asserted over user "defaults," a power Microsoft possesses thanks to a combination of (1) Windows' high market share, (2) the "network effects" that make switching to an alternative so difficult for any given consumer or computer manufacturer, and (3) software copyright, which largely prevents competitors from generating software that defeats network effects. The author suggests a much-reduced term of copyright for computer software - from 95 years to around five years - as a means of preventing antitrust problems before they arise.
Evaluating the Costs and Benefits of Taxing Internet Commerce
Current tax law - and the current technical architecture of the Internet - make it difficult to enforce sales taxes on most Internet commerce. This has generated considerable policy debate. In this paper, we analyze the costs and benefits of enforcing such taxes, including revenue losses, competition with retail, externalities, distribution, and compliance costs. The results suggest that the costs of not enforcing taxes are quite modest and will remain so for several years. At the same time, compliance costs are also likely to be low as Internet infrastructure evolves to make enforcement easier and states coordinate to harmonize their statutes. There are benefits to nurturing/subsidizing the Internet, but they tend to diminish over time. When tax costs and benefits take this form, a moratorium provides a natural compromise.
Ubiquitous Human Computing
Ubiquitous computing means network connectivity everywhere, linking devices and systems as small as a thumbtack and as large as a worldwide product distribution chain. What could happen when people are so readily networked? This short essay explores issues arising from two possible emerging models of ubiquitous human computing: fungible networked brainpower and collective personal vital sign monitoring.
Normative Principles for Evaluating Free and Proprietary Software
The production of most mass-market software can be grouped roughly according to free and proprietary development models. These models differ greatly from one another, and their associated licenses tend to insist that new software inherit the characteristics of older software from which it may be derived. Thus the success of one model or another can become self-perpetuating, as older free software is incorporated into later free software and proprietary software is embedded within successive proprietary versions. The competition between the two models is fierce, and the battle between them is no longer simply confined to the market. Claims of improper use of proprietary code within the free GNU/Linux operating system have resulted in multi-billion dollar litigation. This article explains the ways in which free and proprietary software are at odds, and offers a framework by which to assess their value - a prerequisite to determining the extent to which the legal system should take more than a passing, mechanical interest in the doctrinal claims now being pressed against GNU/Linux specifically and free software generally.
