Time to experiment: A response
It is with some pleasure that we were given the opportunity to offer this paper for commentary, and we are grateful for the efforts made by readers to help us refine our thinking. Given the constraints of space, we will respond to the main comments in turn. We plan to submit a more considered and elegant paper to a future edition when we have worked more on our model.
Memory protection
Accidental overwriting of files or of memory regions belonging to other programs, browsing of personal files by superusers, Trojan horses, and viruses are examples of breakdowns in workstations and personal computers that would be significantly reduced by memory protection. Memory protection is the capability of an operating system and supporting hardware to delimit segments of memory, to control whether segments can be read from or written into, and to confine accesses of a program to its segments alone. The absence of memory protection in many operating systems today is the result of a bias toward a narrow definition of performance as maximum instruction-execution rate. A broader definition, including the time to get the job done, makes clear that the cost of recovery from memory interference errors reduces expected performance. The mechanisms of memory protection are well understood, powerful, efficient, and elegant. They add to performance in the broad sense without reducing instruction execution rate.
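As a rough illustration of what such protection looks like at the programming interface, the sketch below uses the POSIX mmap and mprotect calls to delimit a segment and then mark it read-only; the page size, mapping flags, and the commented-out faulting store are illustrative assumptions, not details taken from the abstract above.

```c
/* Minimal sketch of hardware-backed memory protection on a POSIX system. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* Obtain one private page that is initially readable and writable. */
    char *seg = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (seg == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(seg, "data written while the segment is writable");

    /* Ask the operating system to delimit the segment as read-only.
     * Any later store into it is confined: the hardware raises a fault
     * (SIGSEGV) instead of silently overwriting the memory. */
    if (mprotect(seg, page, PROT_READ) != 0) { perror("mprotect"); return 1; }

    printf("still readable: %s\n", seg);
    /* seg[0] = 'X';   <- would now be trapped rather than corrupting data */

    munmap(seg, page);
    return 0;
}
```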
The ARPANET after twenty years
The ARPANET began operations in 1969 with four nodes as an experiment in resource sharing among computers. It has evolved into a worldwide research network of over 60,000 nodes, influencing the design of other networks in business, education, and government. It demonstrated the speed and reliability of packet-switching networks. Its protocols have served as the models for international standards. And yet the significance of the ARPANET lies not in its technology, but in the profound alterations networking has produced in human practices. Network designers must now turn their attention to the discourses of scientific technology, business, education, and government that are being mixed together in the milieux of networking, and in particular the conflicts and misunderstandings that arise from the different world views of these discourses.
Information technologies for astrophysics circa 2001
It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large datasets. Three limiting paradigms are saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage, and retrieval off the shelf; and the linear mode of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.
Speeding up parallel processing
In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024-node hypercube of over 500 for three fixed-size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
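For reference, the fixed-size (Amdahl) and scaled-size speedup expressions behind this debate can be written as follows; the serial fraction used in the numerical example is illustrative, not a figure from the Sandia report.

```latex
% Fixed-size (Amdahl) speedup of a program with serial fraction s on N processors,
% and scaled-size speedup when the problem grows with N (serial fraction s'
% measured on the parallel run):
\[
  S_{\mathrm{fixed}}(N) \;=\; \frac{1}{\,s + (1-s)/N\,},
  \qquad
  S_{\mathrm{scaled}}(N) \;=\; s' + (1 - s')\,N .
\]
% Illustrative serial fractions (assumed, not reported figures):
% with s = s' = 0.001 and N = 1024,
%   S_fixed(1024)  ~ 1 / (0.001 + 0.999/1024) ~ 506
%   S_scaled(1024) ~ 0.001 + 0.999 * 1024     ~ 1023
% the same order of magnitude as the reported speedups.
```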
Modeling reality
Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used, not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, systems that offer advice about possible actions in a domain, systems that gather information from the networks, and systems that track and support work flows in organizations.
Blindness in designing intelligent systems
New investigations of the foundations of artificial intelligence are challenging the hypothesis that problem solving is the cornerstone of intelligence. New distinctions among three domains of concern for humans--description, action, and commitment--have revealed that the design process for programmable machines, such as expert systems, is based on descriptions of actions and induces blindness to nonanalytic action and commitment. Design processes focusing in the domain of description are likely to yield programs like bureaucracies: rigid, obtuse, impersonal, and unable to adapt to changing circumstances. Systems that learn from their past actions, and systems that organize information for interpretation by human experts, are more likely to be successful in areas where expert systems have failed.
The internet worm
In November 1988 a worm program invaded several thousand UNIX-operated Sun workstations and VAX computers attached to the Research Internet, seriously disrupting service for several days but damaging no files. An analysis of the worm's decompiled code revealed a battery of attacks by a knowledgeable insider, and demonstrated a number of security weaknesses. The attack occurred in an open network, and little can be inferred about the vulnerabilities of closed networks used for critical operations. The attack showed that password protection procedures need review and strengthening. It showed that sets of mutually trusting computers need to be carefully controlled. Sharp public reaction crystallized into a demand for user awareness and accountability in a networked world.
About time
Time has historically been a measure of progress of recurrent physical processes. Coordination of future actions, prediction of future events, and assigning order to events are three practical reasons for implementing clocks and signalling mechanisms. In large networks of computers, these needs lead to the problem of synchronizing the clocks throughout the network. Recent methods allow this to be done in large networks with precision around 1 millisecond despite mean message exchange times near 5 milliseconds. These methods are discussed.
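A minimal sketch of the round-trip offset estimate that underlies such synchronization methods is given below; the timestamp values are invented for illustration, and the calculation assumes roughly symmetric network delays, which is what lets the estimated offset be much more precise than the message exchange time itself.

```c
/* Minimal sketch of NTP-style clock offset estimation from four timestamps. */
#include <stdio.h>

int main(void) {
    /* t1: client sends request, t2: server receives it, t3: server replies,
     * t4: client receives the reply -- milliseconds on the respective clocks.
     * These values are illustrative. */
    double t1 = 1000.0, t2 = 1012.4, t3 = 1013.0, t4 = 1005.4;

    /* Offset of the server clock relative to the client, assuming the
     * outbound and return delays are roughly equal; averaging the two
     * one-way differences cancels the symmetric part of the delay. */
    double offset = ((t2 - t1) + (t3 - t4)) / 2.0;

    /* Round-trip delay, excluding the server's processing time. */
    double delay = (t4 - t1) - (t3 - t2);

    printf("estimated offset: %.1f ms, round-trip delay: %.1f ms\n",
           offset, delay);
    return 0;
}
```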
Massive parallelism in the future of science
Massive parallelism appears in three domains of action of concern to scientists, where it produces collective action that is not possible from any individual agent's behavior. In the domain of data parallelism, computers comprising very large numbers of processing agents, one for each data item in the result, will be designed. These agents collectively can solve problems thousands of times faster than current supercomputers. In the domain of distributed parallelism, computations comprising large numbers of resources attached to the world network will be designed. The network will support computations far beyond the power of any one machine. In the domain of people parallelism, collaborations will be designed among large groups of scientists around the world who participate in projects that endure well past the sojourns of individuals within them. Computing and telecommunications technology will support the large, long projects that will characterize big science by the turn of the century. Scientists must become masters in these three domains during the coming decade.
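As a small illustration of the data-parallel style described above, the sketch below applies the same operation independently to every element of an array, conceptually one agent per data item; the OpenMP pragma is an assumption chosen for illustration, not a mechanism mentioned in the abstract.

```c
/* Minimal sketch of data parallelism: the same operation applied to every
 * data item, with as many agents as the hardware provides (up to one per
 * element). Compile with -fopenmp to parallelize; without it the loop
 * simply runs serially. */
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];

    for (int i = 0; i < N; i++) a[i] = (double)i;

    /* Each iteration is independent of the others, so the items can be
     * processed collectively rather than one after another. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        b[i] = 2.0 * a[i] + 1.0;

    printf("b[N-1] = %f\n", b[N - 1]);
    return 0;
}
```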
