31 research outputs found

    Software Tools for High-Performance Computing: Survey and Recommendations

    Applications programming for high-performance computing is notoriously difficult. Although parallel programming is intrinsically complex, the principal reason why high-performance computing is difficult is the lack of effective software tools. We believe that the lack of tools is in turn largely due to market forces rather than to our inability to design and build such tools. Unfortunately, the poor availability and utilization of parallel tools hurt the entire supercomputing industry and the U.S. high-performance computing initiative, which is focused on applications. A disproportionate amount of resources is being spent on faster hardware and architectures, while tools are neglected. This article introduces a taxonomy of tools, analyzes the major factors that contribute to this situation, suggests ways the imbalance could be redressed, and outlines the likely evolution of tools.

    Teaching Compiler Development

    No full text

    Para computing for distributed data analysis in terra scale

    No full text
    It is well known that one of the major constraints in distributed data mining is network traffic, and here we propose a solution to that problem using a repeated parameter estimation method. Para Computing in distributed data mining is a new concept in which partial analyses of the raw data are mined from the network so that network traffic is minimized, within a framework of real-time data entry and 'commutative analysis'. We refer to commutative analysis as a technique in which partial data analysis is carried out within small segments of a micro-network in real time, with the expected mining performed at a later stage in a large terra-scale network. The procedure built on commutative analysis is referred to as Para Computing, and it is made possible by repeated parameter estimation methods. This possibility was investigated, resulting in a new paradigm for terra-scale data mining.
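    The abstract gives no algorithmic details, but the core idea it describes — each network segment reducing its raw data to a small partial summary that can be merged in any order — can be sketched with a familiar example: computing a global mean and variance from per-segment sufficient statistics. The function names and the choice of statistics are mine, purely for illustration.

```python
from functools import reduce

def partial_stats(segment):
    # Each micro-network segment reduces its raw data to a small
    # summary: (count, sum, sum of squares). Only this tuple
    # travels over the network, not the raw data.
    n = len(segment)
    s = sum(segment)
    ss = sum(x * x for x in segment)
    return (n, s, ss)

def merge(a, b):
    # Merging summaries is commutative and associative, so segments
    # can be combined in any order and at any stage of the network.
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def global_mean_var(summaries):
    # The final analysis runs on merged summaries, never on raw data.
    n, s, ss = reduce(merge, summaries)
    mean = s / n
    var = ss / n - mean * mean
    return mean, var
```

    For example, `global_mean_var([partial_stats([1, 2, 3]), partial_stats([4, 5, 6])])` yields the same result as analyzing all six values centrally, which is what makes the partial analyses "commutative" in the abstract's sense.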

    Interactive conversion of sequential to multitasking FORTRAN

    No full text

    A new algorithm for global optimization for parallelism and locality

    No full text

    Program transformation for locality using affinity regions


    Data Distribution Models and Algorithms

    No full text
    Data distribution, and its interaction with parallelism and load balancing, is the key unsolved problem in compiling for parallelism on distributed-memory computers. Many different techniques and algorithms have been proposed or implemented, suited to different programming environments, target architectures, and applications. However, there is little uniformity, no common model, and no obvious approach to classifying and comparing such techniques and algorithms. This paper provides a general framework for comparing different methods based upon their generality and underlying models. We show the equivalence of several methods, discuss the major unsolved problems, and suggest avenues that might lead to solutions.
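    The abstract does not name specific distribution models, but the two classic mappings any such comparative framework must cover are block and cyclic distribution of an array across processors. As a minimal sketch (the function names are mine, not the paper's), each model is just a rule mapping a global index to an owning processor:

```python
def block_owner(i, n, p):
    # Block distribution: the n elements are split into p contiguous
    # chunks of ceil(n / p) elements; element i lives in chunk i // b.
    b = -(-n // p)  # ceiling division without importing math
    return i // b

def cyclic_owner(i, p):
    # Cyclic distribution: elements are dealt out round-robin,
    # so element i belongs to processor i mod p.
    return i % p
```

    Block distribution favors locality for stencil-like access patterns, while cyclic distribution favors load balance when work per element is uneven; comparing such trade-offs is the kind of question the paper's framework is meant to make systematic.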