212 research outputs found

    Hardware Accelerated Scalable Parallel Random Number Generation

    The Scalable Parallel Random Number Generators library (SPRNG) is widely used due to its speed, quality, and scalability. Monte Carlo (MC) simulations often employ SPRNG to generate large quantities of random numbers. Thanks to rapid advances in Field-Programmable Gate Array (FPGA) technology, this thesis presents Hardware Accelerated SPRNG (HASPRNG) for the Virtex-II Pro XC2VP30 FPGA. HASPRNG includes the full set of SPRNG generators and provides programming interfaces that hide detailed internal behavior from users. HASPRNG produces results identical to SPRNG, verified with over 1 million consecutive random numbers for each type of generator. The programming interface allows a developer to use HASPRNG in the same way as SPRNG. HASPRNG achieves 4-70 times faster execution than the original SPRNG. This thesis describes the implementation of HASPRNG, the verification platform, the programming interface, and its performance.
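
The element-for-element verification described above (checking the hardware output against the software SPRNG stream) can be sketched as follows. The function name and the idea of passing the two streams as iterables are illustrative assumptions, not part of HASPRNG's actual interface.

```python
def verify_identical(hw_stream, sw_stream, n=1_000_000):
    """Compare two random-number streams element for element.

    Returns None if the first n values agree, otherwise the index
    of the first mismatch. This mirrors the kind of check used to
    validate a hardware generator against its software reference.
    """
    for i, (h, s) in enumerate(zip(hw_stream, sw_stream)):
        if i >= n:
            break
        if h != s:
            return i  # index of first mismatch
    return None  # streams agree on the compared prefix
```

In practice the two iterables would wrap the FPGA read-out and the software SPRNG generator for the same generator type and seed.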

    A case study on latency, bandwidth and energy efficiency of mobile 5G and YouTube Edge service in London. Why the 5G ecosystem and energy efficiency matter?

    The advancements in 5G mobile networks and Edge computing offer great potential for services like augmented reality and Cloud gaming, thanks to their low latency and high bandwidth capabilities. However, the practical limits of achieving optimal latency in real applications remain uncertain. This paper investigates the actual latency and bandwidth provided by 5G networks and the YouTube Edge service in London, UK. We analyze how latency and bandwidth differ between 4G LTE and 5G networks and how the location of YouTube Edge servers impacts these metrics. Our research reveals over 10 significant observations and implications, indicating that the primary constraints on 4G LTE and 5G capabilities are the ecosystem and the energy efficiency of mobile devices down-streaming data. Our study demonstrates that to fully unlock the potential of 5G and its applications, it is crucial to prioritize efforts aimed at improving the ecosystem and enhancing the energy efficiency of mobile devices.

    Learning Styles and L2 Vocabulary Learning: Do Referential Preference Learners Gain More Vocabulary Than Expressive Preference Learners?

    This study investigates the relationship between learning styles and second language vocabulary learning among young learners. The learning styles were operationalized in accordance with Nelson (1973), in which referential learning occurs when learners prefer to acquire a language through learning single words, whereas expressive learning happens when learners learn a language through entire phrases. After classifying students' learning styles, the present study explored the relationship between learning style (referential vs. expressive) and task type (word vs. idiom) in vocabulary learning. Results indicated that while no interaction was found for single items, there was a significant interaction between referential learning and multi-word expressions (idioms) in vocabulary learning. The results suggest that the Korean students' learning style was related to their learning environments, including word-based lessons provided by schools or institutes in Korea. This work was supported by the Hankuk University of Foreign Studies Research Fund of 2018.

    Some Orders Are Important: Partially Preserving Orders in Top-Quality Planning

    The ability to generate multiple plans is central to using planning in real-life applications. Top-quality planners generate sets of such top-cost plans, allowing flexibility in determining which are equivalent. In terms of the order between actions in a plan, the literature considers only two extremes: either all orders are important, making each plan unique, or all orders are unimportant, treating two plans that differ only in the order of actions as equivalent. To allow flexibility in selecting important orders, we propose specifying a subset of actions the orders between which are important, interpolating between the top-quality and unordered top-quality planning problems. We explore ways of adapting partial order reduction search pruning techniques to address this new computational problem and present experimental evaluations demonstrating the benefits of exploiting such techniques in this setting. Comment: To appear at SoCS 202
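
The interpolated equivalence notion described above can be given a minimal sketch, assuming plans are sequences of action names: two plans count as equivalent when they use the same multiset of actions and agree on the relative order of the designated important actions. This is one illustrative reading of the idea, not the paper's implementation.

```python
from collections import Counter

def equivalent(plan_a, plan_b, important):
    """Plan equivalence with a chosen subset of important actions.

    important == set of all actions  -> ordinary top-quality equality
    important == empty set           -> unordered top-quality equality
    Anything in between interpolates the two extremes.
    """
    if Counter(plan_a) != Counter(plan_b):
        return False  # plans must use the same multiset of actions
    # project each plan onto the important actions; order must match there
    project = lambda plan: [a for a in plan if a in important]
    return project(plan_a) == project(plan_b)
```

For example, swapping two unimportant actions preserves equivalence, while reordering important ones does not.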

    Green Investment Banks: Unleashing The Potential of National Development Banks to Finance a Green and Just Transition

    This publication explores how green investment banks can help finance a green and just transition, making a case for strengthening their role and showing how international development finance institutions can help unleash their potential. It is critical that these institutions emerge as powerful and cost-effective vehicles to overcome investment barriers, leverage available resources, and help localize the Sustainable Development Goals. The Asian Development Bank and other multilaterals can share knowledge and provide innovative technical assistance to support governments in considering the green investment bank option.

    Capital Mobility, Financial Risk, Institutions and Redistributive Spending

    As democracy spreads, the importance of redistribution policies is believed to increase, bringing with it the threat of weakening incentives and slowing growth. Yet, to date, the determinants of redistribution policies have rarely been investigated outside a few OECD countries and outside the context of narrowly defined transfer payments. This paper examines the determinants of a broader class of redistribution policies, namely, the share of public spending on health, education, and welfare in total government spending, across a larger set of countries (a panel data set of 105 countries) over the period 1988-2000. In particular, the paper views redistributive spending as emanating from two global trends: the deregulation of international capital movements and the spread of democratic institutions. Our basic hypothesis is that because of the risks involved in international capital mobility, and because their use of standard macroeconomic policies is increasingly limited by international rules of the game, governments find redistributive spending policies convenient tools for dealing with the distributive effects inherent in these risks, especially when financial crises actually occur. The results, with both fixed and random effects models, support most of the hypotheses, several of them quite strongly.

    Large Language Models as Planning Domain Generators

    Developing domain models is one of the few remaining places that require manual human labor in AI planning. Thus, to make planning more accessible, it is desirable to automate the process of domain model generation. To this end, we investigate whether large language models (LLMs) can be used to generate planning domain models from simple textual descriptions. Specifically, we introduce a framework for automated evaluation of LLM-generated domains by comparing the sets of plans for domain instances. Finally, we perform an empirical analysis of 7 large language models, including coding and chat models, across 9 different planning domains and under three classes of natural language domain descriptions. Our results indicate that LLMs, particularly those with high parameter counts, exhibit a moderate level of proficiency in generating correct planning domains from natural language descriptions. Our code is available at https://github.com/IBM/NL2PDDL. Comment: Published at ICAPS 202
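
The evaluation framework above compares the sets of plans that two domains admit for matched instances. The abstract does not state the exact comparison metric, so the Jaccard similarity over plan sets below is purely an illustrative assumption of how such a comparison could be scored.

```python
def plan_set_similarity(plans_a, plans_b):
    """Jaccard similarity between two sets of plans.

    Each plan is a sequence of ground action names; plans are compared
    as tuples, so action order within a plan matters. Returns 1.0 when
    the two domains admit exactly the same plans, 0.0 when disjoint.
    """
    set_a = set(map(tuple, plans_a))
    set_b = set(map(tuple, plans_b))
    if not set_a and not set_b:
        return 1.0  # both domains admit no plans: vacuously identical
    return len(set_a & set_b) / len(set_a | set_b)
```

A generated domain scoring 1.0 against the reference domain on every instance would be behaviorally indistinguishable from it under this measure.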

    Software-Defined Number Formats for High-Speed Belief Propagation

    This paper presents the design and implementation of Software-Defined Floating-Point (SDF) number formats for high-speed implementation of the Belief Propagation (BP) algorithm. SDF formats are designed specifically to meet the numeric needs of the computation and are more compact representations of the data. They reduce memory footprint and memory bandwidth requirements without sacrificing accuracy, given that BP for loopy graphs inherently involves algorithmic errors. This paper designs several SDF formats for sum-product BP applications through careful analysis of the computation. Our theoretical analysis leads to the design of 16-bit (half-precision) and 8-bit (mini-precision) widths. We moreover present a highly efficient software implementation of the proposed SDF formats, centered around conversion to hardware-supported single-precision arithmetic. Our solution demonstrates negligible conversion overhead on commercially available CPUs. For Ising grids with sizes from 100×100 to 500×500, the 16- and 8-bit SDF formats, along with our conversion module, produce accuracy equivalent to the double-precision floating-point format with speedups of 2.86× on average on an Intel Xeon processor. Notably, increasing the grid size results in higher speedups. For example, the proposed half-precision format with a 3-bit exponent and 13-bit mantissa achieved minimum and maximum speedups of 1.30× and 1.39× over single precision, and 2.55× and 3.40× over double precision, as the grid size increased from 100×100 to 500×500.
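
The conversion path described above can be illustrated with a toy decoder for a 16-bit format with a 3-bit exponent and 13-bit mantissa. The layout assumed below (no sign bit, since sum-product BP messages are non-negative; a standard exponent bias of 3; subnormals omitted) is a sketch for illustration only; the paper's actual encoding may differ.

```python
import struct

EXP_BITS = 3
MAN_BITS = 13
BIAS = (1 << (EXP_BITS - 1)) - 1  # = 3, the usual 2^(e-1)-1 bias

def sdf16_to_float32(bits: int) -> float:
    """Decode a hypothetical 16-bit SDF value to IEEE-754 single precision.

    The custom format is widened by re-biasing the exponent to the
    single-precision bias (127) and left-aligning the mantissa into
    the 23-bit single-precision mantissa field. Subnormal handling is
    omitted for brevity.
    """
    exp = (bits >> MAN_BITS) & ((1 << EXP_BITS) - 1)
    man = bits & ((1 << MAN_BITS) - 1)
    if exp == 0 and man == 0:
        return 0.0
    ieee_exp = exp - BIAS + 127          # re-bias exponent
    ieee_man = man << (23 - MAN_BITS)    # align mantissa to 23 bits
    word = (ieee_exp << 23) | ieee_man
    return struct.unpack("<f", struct.pack("<I", word))[0]
```

Because the conversion is a shift, an add, and an OR, it maps well onto the kind of negligible-overhead software path the paper describes before handing the value to native single-precision hardware.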

    Creation of a Chronic Transfusion Database

    The purpose of this project was to identify the chronic transfusion recipients in the Edmonton Zone and to develop a registry to track and monitor blood product use in this group of patients.

    Fast, Scalable, Energy-Efficient Non-element-wise Matrix Multiplication on FPGA

    Modern Neural Network (NN) architectures heavily rely on vast numbers of multiply-accumulate arithmetic operations, which constitute the predominant computational cost. This paper therefore proposes a high-throughput, scalable, and energy-efficient non-element-wise matrix multiplication unit on FPGAs as a basic component of NNs. We first streamline the inter-layer and intra-layer redundancies of the MADDNESS algorithm, a LUT-based approximate matrix multiplication method, to design a fast, efficient, and scalable approximate matrix multiplication module termed the Approximate Multiplication Unit (AMU). The AMU further optimizes LUT-based matrix multiplication through dedicated memory management and access design, decoupling computational overhead from input resolution and significantly boosting FPGA-based NN accelerator efficiency. Experimental results show that the AMU achieves up to 9× higher throughput and 112× higher energy efficiency than state-of-the-art solutions for FPGA-based Quantised Neural Network (QNN) accelerators.
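
The LUT-based idea behind MADDNESS-style approximate matrix multiplication can be sketched in NumPy: split the inner dimension into subspaces, quantize each input subvector to one of K prototypes, precompute each prototype's partial products with the weight matrix, and then replace multiplies at query time with table lookups and adds. This toy version uses exhaustive nearest-prototype encoding rather than MADDNESS's learned hash trees, so it shows the data flow, not the paper's design.

```python
import numpy as np

def fit_lut(protos, B):
    """Precompute partial products prototype @ B-slice per subspace.

    protos: list over subspaces, each an array of shape (K, d_s)
    B:      weight matrix of shape (D, N), with D = sum of d_s
    Returns one (K, N) lookup table per subspace.
    """
    luts, start = [], 0
    for P in protos:
        d = P.shape[1]
        luts.append(P @ B[start:start + d])  # (K, N) partial products
        start += d
    return luts

def encode(A, protos):
    """Quantize each row's subvector to its nearest prototype index."""
    codes, start = [], 0
    for P in protos:
        d = P.shape[1]
        sub = A[:, start:start + d]
        # squared Euclidean distance to every prototype, argmin per row
        dists = ((sub[:, None, :] - P[None, :, :]) ** 2).sum(-1)
        codes.append(dists.argmin(1))
        start += d
    return codes

def approx_matmul(codes, luts):
    """Approximate A @ B using only lookups and additions."""
    return sum(lut[c] for c, lut in zip(codes, luts))
```

When every input subvector coincides with a prototype the result is exact; otherwise accuracy degrades gracefully with quantization error, which is the trade-off the AMU's hardware design optimizes.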