File Access Performance of Diskless Workstations
This paper studies the performance of single-user workstations that access files remotely over a local area network. From the environmental, economic, and administrative points of view, workstations that are diskless or that have limited secondary storage are desirable at the present time. Even with changing technology, access to shared data will continue to be important. It is likely that some performance penalty must be paid for remote rather than local file access. Our objectives are to assess this penalty and to explore a number of design alternatives that can serve to minimize it. Our approach is to use the results of measurement experiments to parameterize queuing network performance models. These models are then used to assess performance under load and to evaluate design alternatives. The major conclusions of our study are: (1) A system of diskless workstations with a shared file server can have satisfactory performance. By this, we mean performance comparable to that of a local disk in the lightly loaded case, and the ability to support substantial numbers of client workstations without significant degradation. As with any shared facility, good design is necessary to minimize queuing delays under high load. (2) The key to efficiency is protocols that allow volume transfers at every interface (e.g., between client and server, and between disk and memory at the server) and at every level (e.g., between client and server at the level of logical request/response and at the level of local area network packet size). However, the benefits of volume transfers are limited to moderate sizes (8-16 kbytes) by several factors. (3) From a performance point of view, augmenting the capabilities of the shared file server may be more cost effective than augmenting the capabilities of the client workstations. (4) Network contention should not be a performance problem for a 10-Mbit network and 100 active workstations in a software development environment.
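As a rough illustration of the modeling approach this abstract describes (not the authors' actual queuing network models), the following sketch treats a shared file server as a single M/M/1 queue and computes its mean response time as the number of client workstations grows; the service time and per-workstation request rate are invented parameters standing in for the measured values the paper would use.

/* Illustrative sketch only: a single-queue (M/M/1) approximation of a shared
 * file server, parameterized by hypothetical measurements. The paper's actual
 * queuing network models are more detailed than this. */
#include <stdio.h>

int main(void) {
    double service_ms = 30.0;    /* assumed mean service time per file request (ms) */
    double req_per_sec = 0.5;    /* assumed request rate per active workstation */

    for (int clients = 10; clients <= 100; clients += 10) {
        double arrival_rate = clients * req_per_sec;            /* requests/sec */
        double utilization = arrival_rate * (service_ms / 1000.0);
        if (utilization >= 1.0) {
            printf("%3d clients: server saturated (utilization %.2f)\n",
                   clients, utilization);
            continue;
        }
        /* M/M/1 mean response time: R = S / (1 - U) */
        double response_ms = service_ms / (1.0 - utilization);
        printf("%3d clients: utilization %.2f, mean response %.1f ms\n",
               clients, utilization, response_ms);
    }
    return 0;
}

The output shows the qualitative point of conclusion (1): response time stays close to the raw service time at light load and degrades only as the shared server approaches saturation.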
A Practical Comparison Between the TAO Real-Time Event Service and the Maestro/Ensemble Group Communication System
Distributed Process Groups in the V Kernel
The V kernel supports an abstraction of processes, with operations for interprocess communication, process management, and memory management. This abstraction is used as a software base for constructing distributed systems. As a distributed kernel, the V kernel makes intermachine boundaries largely transparent. In this environment of many cooperating processes on different machines, there are many logical groups of processes. Examples include the group of file servers, a group of processes executing a particular job, and a group of processes executing a distributed parallel computation. In this paper we describe the extension of the V kernel to support process groups. Operations on groups include group interprocess communication, which provides an application-level abstraction of network multicast. Aspects of the implementation and performance, as well as initial experience with applications, are discussed.
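The abstract does not spell out the group operations themselves; the toy program below only suggests what an application-level "send to a process group" might look like, with every name, type, and behavior invented for illustration. A kernel-level implementation such as the one the paper describes would map the group to a network multicast rather than a per-member loop.

/* Toy illustration only: an application-level "group send" that fans a request
 * out to each member of a process group. All names and types are invented and
 * are not the V kernel's interface. */
#include <stdio.h>
#include <stddef.h>

typedef unsigned int process_id_t;

/* Stand-in for a kernel send primitive; here it just logs the delivery. */
static int send_to_process(process_id_t pid, const char *request) {
    printf("deliver \"%s\" to process %u\n", request, pid);
    return 0;
}

/* Group send as a simple fan-out loop. A kernel implementation would instead
 * translate the group into a single network multicast packet. */
static int group_send(const process_id_t *members, size_t count,
                      const char *request) {
    int failures = 0;
    for (size_t i = 0; i < count; i++)
        if (send_to_process(members[i], request) != 0)
            failures++;
    return failures;
}

int main(void) {
    process_id_t file_servers[] = { 101, 102, 103 };   /* hypothetical group */
    group_send(file_servers, 3, "locate file foo");
    return 0;
}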
The Distributed V Kernel and its Performance for Diskless Workstations
The distributed V kernel is a message-oriented kernel that provides uniform local and network interprocess communication. It is primarily being used in an environment of diskless workstations connected by a high-speed local network to a set of file servers. We describe a performance evaluation of the kernel, with particular emphasis on the cost of network file access. Our results show that over a local network: 1. Diskless workstations can access remote files with minimal performance penalty. 2. The V message facility can be used to access remote files at comparable cost to any well-tuned specialized file access protocol. We conclude that it is feasible to build a distributed system with all network communication using the V message facility even when most of the network nodes have no secondary storage.
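To make the idea of file access over a uniform send/reply message facility concrete, here is a hypothetical sketch of a "read one block" exchange. The message layout, names, and sizes are invented for illustration and are not the V kernel's actual API; the stand-in send primitive fabricates a reply locally, whereas a real kernel would transparently forward the request over the network when the file server is on another machine.

/* Hypothetical sketch of remote file access over a synchronous send/reply
 * message facility, in the style described above; not the V kernel's API. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 1024               /* assumed file block size */

struct read_request {
    unsigned int file_id;
    unsigned int block_number;
};

struct read_reply {
    int status;                       /* 0 on success */
    char data[BLOCK_SIZE];
};

/* Stand-in for a kernel "send and await reply" primitive. */
static void send_request(unsigned int server_pid,
                         const struct read_request *req,
                         struct read_reply *rep) {
    (void)server_pid;                 /* hypothetical file-server process id */
    rep->status = 0;
    memset(rep->data, 0, sizeof rep->data);
    snprintf(rep->data, sizeof rep->data,
             "contents of file %u, block %u", req->file_id, req->block_number);
}

int main(void) {
    struct read_request req = { .file_id = 42, .block_number = 0 };
    struct read_reply rep;
    send_request(7, &req, &rep);
    if (rep.status == 0)
        printf("read ok: %s\n", rep.data);
    return 0;
}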
One-to-many Interprocess Communication in the V-System
The author compares the measured performance of pipes implemented by a pipe server process on top of the V message passing transport protocol versus the calculated performance of pipes implemented by an operating system kernel and supported by a dedicated protocol. He describes the implementation of pipes in the V system and presents measurements of their performance. He then calculates the performance of pipes when implemented in the kernel and supported by a dedicated protocol. The performance loss as a result of using the pipe server is shown to be about 8% for network pipes and about 25% for local pipes. Given these figures and given the fact that messages and not pipes are the principal means of interprocess communication in V, it is concluded that it is quite practical to implement pipes by a process using message passing, thereby avoiding the need for additional kernel and protocol complexity.
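The 8% and 25% figures above are relative overheads of the pipe-server implementation against the calculated kernel implementation. The toy calculation below only shows how such a percentage is formed from two per-transfer times; both numbers are invented and are not the paper's measurements.

/* Toy calculation of a relative overhead figure from two hypothetical
 * per-transfer times; the values are invented, not the paper's data. */
#include <stdio.h>

int main(void) {
    double kernel_pipe_ms = 4.0;   /* assumed cost of a kernel-implemented pipe transfer */
    double server_pipe_ms = 4.5;   /* assumed cost via a pipe server process */

    double overhead_pct = 100.0 * (server_pipe_ms - kernel_pipe_ms) / kernel_pipe_ms;
    printf("pipe server overhead: %.1f%%\n", overhead_pct);
    return 0;
}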
