A Detailed Analysis of Contemporary ARM and x86 Architectures
RISC vs. CISC wars raged in the 1980s when chip area and processor design complexity were the primary constraints and desktops and servers exclusively dominated the computing landscape. Today, energy and power are the primary design constraints and the computing landscape is significantly different: growth in tablets and smartphones running ARM (a RISC ISA) is surpassing that of desktops and laptops running x86 (a CISC ISA). Further, the traditionally low-power ARM ISA is entering the high-performance server market, while the traditionally high-performance x86 ISA is entering the mobile low-power device market. Thus, the question of whether ISA plays an intrinsic role in performance or energy efficiency is becoming important, and we seek to answer this question through a detailed measurement-based study on real hardware running real applications. We analyze measurements on the ARM Cortex-A8 and Cortex-A9 and Intel Atom and Sandybridge i7 microprocessors over workloads spanning mobile, desktop, and server computing. Our methodical investigation demonstrates the role of ISA in modern microprocessors' performance and energy efficiency. We find that ARM and x86 processors are simply engineering design points optimized for different levels of performance, and there is nothing fundamentally more energy efficient in one ISA class or the other. The ISA being RISC or CISC seems irrelevant.
A Monte Carlo study of organ and effective doses of cone beam computed tomography (CBCT) scans in radiotherapy
Cone-beam CT (CBCT) scans utilized for image guided radiation therapy (IGRT) procedures have become an essential part of radiotherapy. The aim of this study was to assess organ and effective doses resulting from new CBCT scan protocols (head, thorax, and pelvis) released with a software upgrade of the kV on-board-imager (OBI) system. The influence on patient dose of the scan parameters that were changed in the new protocols was also investigated. Organ and effective doses for protocols of the new software (V2.5) and a previous version (V1.6) were assessed using Monte Carlo (MC) simulations for the International Commission on Radiological Protection (ICRP) adult male and female reference computational phantoms. The number of projections and the mAs values were increased and the size of the scan field was extended in the new protocols. The influence of these changes on organ and effective doses of the scans was investigated. The OBI system was modelled in EGSnrc/BEAMnrc, and organ doses were estimated using EGSnrc/DOSXYZnrc. The MC model was benchmarked against experimental measurements. Organ doses resulting from the V2.5 protocols were higher than those of V1.6 for organs that were partially or fully inside the scan fields, and increased by (3 to 13)%, (10 to 77)%, and (13 to 21)% for the head, thorax, and pelvis protocols for both phantoms, respectively. As a result, effective doses rose by 14%, 17%, and 16% for the male phantom, and 13%, 18%, and 17% for the female phantom for the three scan protocols, respectively. The scan field extension for the V2.5 protocols contributed significantly to the dose increases, especially for organs that were partially irradiated, such as the thyroid in head and thorax scans and the colon in the pelvic scan. The contribution of the increased mAs values and projection numbers to the dose increases was minimal, up to 2.5%.
The field size extension plays a major role in improving the treatment outcome by including more markers in the field of view to match between CBCT and CT images and hence set up the patient precisely. Therefore, a trade-off between the risks and benefits of CBCT scans should be considered, and the dose increases should be monitored. Several recommendations have been made for optimizing the patient dose involved in IGRT procedures
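The effective-dose figures above follow the standard ICRP 103 definition, E = Σ_T w_T · H_T, a tissue-weighted sum of organ equivalent doses. A minimal sketch of that calculation is below; the weighting factors are the published ICRP 103 values, but the organ-dose inputs are made-up illustrative numbers, not results from this study.

```python
# Sketch of the ICRP 103 effective-dose calculation: E = sum_T w_T * H_T,
# where H_T is the equivalent dose to tissue T and w_T its tissue
# weighting factor. Weighting factors are the ICRP 103 values (they sum
# to 1.0); the organ doses below are hypothetical, for illustration only.
ICRP103_WT = {
    "red_bone_marrow": 0.12, "colon": 0.12, "lung": 0.12,
    "stomach": 0.12, "breast": 0.12, "remainder": 0.12,
    "gonads": 0.08,
    "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
    "bone_surface": 0.01, "brain": 0.01, "salivary_glands": 0.01,
    "skin": 0.01,
}

def effective_dose(organ_doses_msv):
    """Tissue-weighted sum; organs not listed contribute zero."""
    return sum(ICRP103_WT[t] * organ_doses_msv.get(t, 0.0)
               for t in ICRP103_WT)

# Hypothetical pelvis-scan organ doses (mSv), illustrative only.
doses = {"colon": 20.0, "bladder": 25.0, "gonads": 15.0, "remainder": 10.0}
print(effective_dose(doses))  # 0.12*20 + 0.04*25 + 0.08*15 + 0.12*10 = 5.8
```

This weighted-sum structure is why a partially irradiated organ with a large weighting factor (e.g. the colon in a pelvic scan) can dominate the change in effective dose between protocol versions.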
On a swarm of Sergestid shrimps near Chennai
A swarm of Acetes sp. was observed during 20-30 April 2002 near shore along the south Chennai coast. The 'Aeru vallai' or mosquito net was operated to catch the shrimps, and the overall catch comprised Acetes species. The dominant species was Acetes indicus (90%), followed by A. japonicus (6%) and A. erythraeus
Constraint Centric Scheduling Guide
The advent of architectures with software-exposed resources (Spatial Architectures) has created a demand for universally applicable scheduling techniques. This paper describes our generalized spatial scheduling framework, formulated with Integer Linear Programming, and specifically accomplishes two goals. First, using the "Simple" architecture, it illustrates how to use our open-source tool to create a customized scheduler and covers problem formulation with ILP and GAMS. Second, it summarizes results on the application to three real architectures (TRIPS, DySER, PLUG), demonstrating the technique's practicality and competitiveness with existing schedulers
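The core of such a formulation is an assignment problem: place computation nodes on hardware resources to minimize a cost objective subject to capacity constraints. The toy sketch below enumerates assignments by brute force instead of invoking an ILP solver (the paper uses ILP via GAMS); the operation names, costs, and capacities are invented for illustration.

```python
from itertools import product

# Toy instance of spatial scheduling as constrained assignment: map 3
# operations onto 2 functional units, minimizing total placement cost
# subject to a per-unit capacity constraint. A real formulation would
# express the same objective and constraints as an ILP and hand it to a
# solver; brute-force enumeration is only feasible for tiny instances.
ops = ["ld", "add", "st"]
units = [0, 1]
cost = {("ld", 0): 1, ("ld", 1): 3,     # hypothetical placement costs,
        ("add", 0): 2, ("add", 1): 1,   # e.g. modelling routing distance
        ("st", 0): 3, ("st", 1): 1}
capacity = {0: 2, 1: 2}                 # max ops per functional unit

def schedule():
    best, best_cost = None, float("inf")
    for assign in product(units, repeat=len(ops)):
        # capacity constraint: no unit holds more ops than it can
        load = {u: assign.count(u) for u in units}
        if any(load[u] > capacity[u] for u in units):
            continue
        c = sum(cost[(op, u)] for op, u in zip(ops, assign))
        if c < best_cost:
            best, best_cost = dict(zip(ops, assign)), c
    return best, best_cost

print(schedule())  # ({'ld': 0, 'add': 1, 'st': 1}, 3)
```

Swapping the enumeration loop for an ILP model is what makes the approach scale and, as the abstract argues, what lets one framework retarget to architectures as different as TRIPS, DySER, and PLUG by changing only the constraints.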
Fast Distributed PageRank Computation
Over the last decade, PageRank has gained importance in a wide range of applications and domains, ever since it first proved to be effective in determining node importance in large graphs (and was a pioneering idea behind Google's search engine). In distributed computing alone, the PageRank vector, or more generally random-walk-based quantities, have been used for several different applications ranging from determining important nodes and load balancing to search and identifying connectivity structures. Surprisingly, however, there has been little work towards designing provably efficient fully-distributed algorithms for computing PageRank. The difficulty is that traditional matrix-vector multiplication style iterative methods may not always adapt well to the distributed setting owing to communication bandwidth restrictions and convergence rates.
In this paper, we present fast random-walk-based distributed algorithms for computing PageRank in general graphs and prove strong bounds on the round complexity. We first present a distributed algorithm that takes O(log n/ε) rounds with high probability on any graph (directed or undirected), where n is the network size and ε is the reset probability used in the PageRank computation (typically ε is a fixed constant). We then present a faster algorithm that takes O(√(log n)/ε) rounds in undirected graphs. Both of the above algorithms are scalable, as each node sends only a small (polylog n) number of bits over each edge per round. To the best of our knowledge, these are the first fully distributed algorithms for computing the PageRank vector with provably efficient running time.
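The random-walk view of PageRank that these algorithms build on can be sketched sequentially: a walk terminates with probability ε at each step, otherwise moves to a uniform random out-neighbour, and the fraction of walks ending at a node estimates its PageRank. The sketch below is a toy sequential Monte Carlo estimator, not the paper's distributed algorithm, and the graph is invented for illustration.

```python
import random

# Toy Monte Carlo estimator for the random-walk formulation of PageRank:
# from every node, run many walks; each walk stops with probability eps
# per step, else moves to a uniformly random out-neighbour. The fraction
# of walks terminating at v converges to PageRank(v) with reset
# probability eps. (The paper's contribution is doing this in a
# distributed network in few rounds; this sketch is purely sequential.)
def pagerank_mc(graph, eps=0.15, walks_per_node=2000, seed=0):
    rng = random.Random(seed)
    counts = {v: 0 for v in graph}
    for start in graph:
        for _ in range(walks_per_node):
            v = start
            while rng.random() >= eps:
                nbrs = graph[v]
                if not nbrs:          # dangling node: stop the walk here
                    break
                v = rng.choice(nbrs)
            counts[v] += 1
    total = len(graph) * walks_per_node
    return {v: c / total for v, c in counts.items()}

# Tiny illustrative graph: both "a" and "b" link to "c".
est = pagerank_mc({"a": ["c"], "b": ["c"], "c": ["a", "b"]})
```

Since "c" is linked by both other nodes, its estimate comes out highest, matching the stationary PageRank ordering; the estimates form a probability distribution over nodes by construction.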
An unusual congregation of organisms in the catches off Kovalam, Madras
The fishermen of Kovalam were engaged in hectic activity harvesting huge quantities of fish from Kovalam bay from 26-8-'87 to 4-9-'87. The fishermen employed all available gears for catching the fish and prawns. According to them, this was due to the appearance of 'Vandal thanneer', or turbid water, close to the shore. The present account embodies the results of the observations made on this unusual phenomenon
Marine fisheries of the south-east coast of India during 2008
The south-east coast of India, comprising the states of Andhra Pradesh, Tamil Nadu and Pondicherry, has a total coastline of 2050 km, which is 34% of the total coastline of the country. This region is more diverse with respect to the number of species that are landed. In 2007, it was observed that 499 species were landed in Tamil Nadu, 294 in Andhra Pradesh and 115 in Pondicherry
Implementing Fine/Medium Grained TLP Support in a Many-Core Architecture
We believe that future many-core architectures should support a simple and scalable way to execute the many threads that are generated by parallel programs. A good candidate for implementing an efficient and scalable execution of threads is the DTA (Decoupled Threaded Architecture), which is designed to exploit fine/medium grained Thread Level Parallelism (TLP) by using a hardware scheduling unit and relying on existing simple cores. In this paper, we present an initial implementation of the DTA concept in a many-core architecture, where it interacts with other architectural components designed from scratch in order to address the problem of scalability. We present initial results, obtained using a many-core simulator written in SARCSim (a variant of UNISIM) with DTA support, that show the scalability of the solution
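The organizing idea of a central scheduling unit dispatching fine-grained threads to simple cores can be illustrated with a small software analogy (this is not the DTA hardware, just a sketch of the scheduling pattern; all names are invented): a shared queue stands in for the hardware scheduler, and worker threads stand in for the cores.

```python
import queue
import threading

# Software analogy of a central scheduling unit feeding many simple
# cores: fine-grained tasks are pushed to one shared queue (the
# "scheduler") and pulled by worker threads (the "cores"). This sketches
# the dispatch pattern only; real DTA scheduling is done in hardware.
def run_on_cores(tasks, n_cores=4):
    work = queue.Queue()
    results = []
    lock = threading.Lock()

    def core():
        while True:
            task = work.get()
            if task is None:          # poison pill: this core shuts down
                break
            r = task()                # execute one fine-grained thread
            with lock:
                results.append(r)

    cores = [threading.Thread(target=core) for _ in range(n_cores)]
    for c in cores:
        c.start()
    for t in tasks:                   # enqueue all work first...
        work.put(t)
    for _ in cores:                   # ...then one pill per core
        work.put(None)
    for c in cores:
        c.join()
    return results

# Example: square eight numbers across four "cores".
squares = run_on_cores([lambda i=i: i * i for i in range(8)])
```

The appeal of the hardware version is that enqueue/dispatch costs drop to a few cycles, which is what makes thread granularities this fine profitable at all.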
