
    Characterization of Power-to-Phase Conversion in High-Speed P-I-N Photodiodes

    Fluctuations of the optical power incident on a photodiode can be converted into phase fluctuations of the resulting electronic signal due to nonlinear saturation in the semiconductor. This impacts the overall timing stability (phase noise) of microwave signals generated from a photodetected optical pulse train. In this paper, we describe and utilize techniques to characterize this conversion of amplitude noise to phase noise for several high-speed (>10 GHz) InGaAs P-I-N photodiodes operated at 900 nm. We focus on the impact of this effect on the photonic generation of low phase noise 10 GHz microwave signals and show that a combination of low laser amplitude noise, appropriate photodiode design, and optimum average photocurrent is required to achieve phase noise at or below -100 dBc/Hz at 1 Hz offset from a 10 GHz carrier. In some photodiodes we find specific photocurrents where the power-to-phase conversion factor is observed to go to zero.
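The coupling the abstract describes can be sketched as a simple noise-propagation calculation: if a power-to-phase coefficient C (rad per fractional power change) maps laser intensity noise onto phase, then S_phi(f) = C^2 * S_RIN(f). The numbers below are hypothetical, chosen only to illustrate the conversion; they are not measurements from the paper.

```python
import math

def phase_noise_dbc(rin_db_per_hz, conv_rad_per_fractional_power):
    """Convert laser relative intensity noise (RIN, dB/Hz) into the
    single-sideband phase noise (dBc/Hz) it induces via power-to-phase
    conversion.  The coefficient C is in rad per (dP/P);
    S_phi = C^2 * S_RIN and L(f) = 10*log10(S_phi / 2)."""
    s_rin = 10 ** (rin_db_per_hz / 10)            # linear RIN PSD, 1/Hz
    s_phi = conv_rad_per_fractional_power ** 2 * s_rin
    return 10 * math.log10(s_phi / 2)

# Hypothetical values: RIN of -160 dB/Hz and C = 0.1 rad
print(phase_noise_dbc(-160.0, 0.1))  # about -183 dBc/Hz
```

A photodiode biased at one of the zero-conversion photocurrents the abstract mentions would correspond to C approaching 0, driving the induced phase noise arbitrarily low.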

    Designing of a prototype heat-sealer to manufacture solar water sterilization pouches for use in developing nations

    Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005. Includes bibliographical references (leaf 23). Water purification proves to be a difficult task in many developing nations. The SODIS (SOlar water DISinfection) process is a method that improves the microbiological quality of water, using the UV-A rays and heat from the sun to make it safer for drinking and cooking. Even simple processes such as this require components that are not easily attainable in many rural areas; in this case, the recommended two-liter bottle. Amy Smith, an instructor in MIT's Edgerton Center, researched and tested the effectiveness of polypropylene collapsible water pouches in the SODIS process. Thus, a heat-sealing device that can be used in developing nations to manufacture collapsible water pouches is needed. This device is intended to allow individuals in developing countries to take advantage of the SODIS water purification process. The approximately 60-watt prototype of the heat-sealing device is powered by a 12-volt solar deep-cycle battery and is made of simple materials so that it can be used and maintained in a variety of developing nations. A 20-inch nickel chromium strip is used as the heating element, and Teflon forms a barrier between the heating element and the material to be sealed. A 4-mil polypropylene sheet is the pouch material of choice. It is placed on top of the Teflon strip before a lever arm is lowered, the device is turned 'on', and the sheet is sealed via the heated nickel chromium strip. Although the alpha prototype presented in this thesis has a number of positive attributes, such as using easily accessible or shippable components and making use of available power sources and/or batteries, there are areas for improvement. Making the device more robust, user friendly, and versatile, and making the seal strength more consistent and accurate, are important characteristics that should be considered when designing a beta prototype. by Saundra S. Quinlan. S.B.
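The stated power budget fixes the electrical design of the heating element. A back-of-envelope check, assuming only the 12 V supply and roughly 60 W figures given above (steady-state Ohm's-law sizing, ignoring the strip's temperature coefficient):

```python
def element_sizing(voltage_v, power_w):
    """Back-of-envelope sizing of a resistive heating element:
    R = V^2 / P and I = P / V (Ohm's law, steady state)."""
    resistance = voltage_v ** 2 / power_w   # ohms
    current = power_w / voltage_v           # amps
    return resistance, current

# For a ~60 W element on a 12 V deep-cycle battery:
r, i = element_sizing(12.0, 60.0)
print(r, i)  # 2.4 ohms, 5.0 amps
```

So the 20-inch nichrome strip would need a total resistance near 2.4 ohms and would draw about 5 A, a load a deep-cycle battery handles comfortably.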

    Learning a Static Analyzer from Data

    To be practically useful, modern static analyzers must precisely model the effects of both the statements of the programming language and the frameworks used by the program under analysis. While important, manually addressing these challenges is difficult for at least two reasons: (i) the effects on the overall analysis can be non-trivial, and (ii) as the size and complexity of modern libraries increase, so does the number of cases the analysis must handle. In this paper we present a new, automated approach for creating static analyzers: instead of manually providing the various inference rules of the analyzer, the key idea is to learn these rules from a dataset of programs. Our method consists of two ingredients: (i) a synthesis algorithm capable of learning a candidate analyzer from a given dataset, and (ii) a counter-example guided learning procedure which generates new programs beyond those in the initial dataset, critical for discovering corner cases and ensuring the learned analysis generalizes to unseen programs. We implemented and instantiated our approach to the task of learning JavaScript static analysis rules for a subset of points-to analysis and for allocation sites analysis. These are challenging yet important problems that have received significant research attention. We show that our approach is effective: our system automatically discovered practical and useful inference rules for many cases that are tricky to manually identify and are missed by state-of-the-art, manually tuned analyzers.
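The two-ingredient loop the abstract describes follows the familiar counter-example guided shape: synthesize from the current dataset, search for a program the candidate gets wrong, and fold any failure back into the data. The skeleton below is an illustrative sketch of that loop with caller-supplied `synthesize` and `check` functions, not the authors' implementation; the toy instance at the bottom is made up.

```python
def cegis_learn(synthesize, check, dataset, max_rounds=10):
    """Counter-example guided learning: synthesize a candidate from
    the current dataset, search for an input on which it is wrong,
    and if one is found add it to the dataset and repeat."""
    data = list(dataset)
    for _ in range(max_rounds):
        candidate = synthesize(data)
        counterexample = check(candidate)   # None means no failure found
        if counterexample is None:
            return candidate, data
        data.append(counterexample)
    return candidate, data

# Toy instance: the "analyzer" is just a threshold, and each
# counter-example pushes it one step higher until it generalizes.
synth = lambda d: max(d)
check = lambda t: t + 1 if t < 3 else None
model, data = cegis_learn(synth, check, [0])
print(model)  # 3
```

The key property, visible even in the toy run, is that the final dataset contains exactly the corner cases needed to make the learned candidate survive the checker.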

    The Merging History of Massive Black Holes

    We investigate a hierarchical structure formation scenario describing the evolution of a supermassive black hole (SMBH) population. The seeds of the local SMBHs are assumed to be 'pregalactic' black holes, remnants of the first Pop III stars. As these pregalactic holes become incorporated through a series of mergers into larger and larger halos, they sink to the center owing to dynamical friction, accrete a fraction of the gas in the merger remnant to become supermassive, form a binary system, and eventually coalesce. A simple model in which the damage done to stellar cusps by decaying BH pairs is cumulative is able to reproduce the observed scaling relation between galaxy luminosity and core size. An accretion model connecting quasar activity with major mergers and the observed BH mass-velocity dispersion correlation reproduces remarkably well the observed luminosity function of optically-selected quasars in the redshift range 1<z<5. We finally assess the potential observability of the gravitational wave background generated by the cosmic evolution of SMBH binaries by the planned space-borne interferometer LISA. Comment: 4 pages, 2 figures, Contribution to "Multiwavelength Cosmology", Mykonos, Greece, June 17-20, 200

    The Potential for Student Performance Prediction in Small Cohorts with Minimal Available Attributes

    The measurement of student performance during their progress through university study provides academic leadership with critical information on each student's likelihood of success. Academics have traditionally used their interactions with individual students through class activities and interim assessments to identify those "at risk" of failure/withdrawal. However, modern university environments, offering easy on-line availability of course material, may see reduced lecture/tutorial attendance, making such identification more challenging. Modern data mining and machine learning techniques provide increasingly accurate predictions of student examination assessment marks, although these approaches have focussed upon large student populations and wide ranges of data attributes per student. However, many university modules comprise relatively small student cohorts, with institutional protocols limiting the student attributes available for analysis. It appears that very little research attention has been devoted to this area of analysis and prediction. We describe an experiment conducted on a final-year university module student cohort of 23, where individual student data are limited to lecture/tutorial attendance, virtual learning environment accesses and intermediate assessments. We found potential for predicting individual student interim and final assessment marks in small student cohorts with very limited attributes, and that these predictions could be useful to support module leaders in identifying students potentially "at risk." Peer reviewed.
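With only three attributes per student and a cohort of 23, even a very simple similarity-based predictor is feasible. As an illustrative sketch only (the abstract does not name the study's model, and the records below are invented), a k-nearest-neighbours mean over attendance, VLE accesses and interim marks might look like:

```python
def knn_predict(train, query, k=3):
    """Predict a student's final mark as the mean mark of the k most
    similar students, using Euclidean distance over the few attributes
    the abstract mentions (attendance, VLE accesses, interim mark)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbours = sorted(train, key=lambda r: dist(r[0], query))[:k]
    return sum(mark for _, mark in neighbours) / k

# Invented records: (attendance %, VLE accesses, interim mark) -> final mark
cohort = [((90, 120, 70), 72), ((40, 15, 45), 38),
          ((85, 100, 65), 68), ((30, 10, 40), 35),
          ((95, 140, 80), 82), ((50, 30, 50), 48)]
print(knn_predict(cohort, (88, 110, 68)))  # 74.0
```

In a cohort this small, leave-one-out evaluation (predict each student from the other 22) gives an honest error estimate without sacrificing training data.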

    A survey of cost-sensitive decision tree induction algorithms

    The past decade has seen significant interest in the problem of inducing decision trees that take account of the costs of misclassification and of acquiring the features used for decision making. This survey identifies over 50 algorithms, including approaches that are direct adaptations of accuracy-based methods, use genetic algorithms, use anytime methods, and utilize boosting and bagging. The survey brings together these different studies and novel approaches to cost-sensitive decision tree learning, provides a taxonomy and a historical timeline of how the field has developed, and should serve as a useful reference point for future research in this field.

    An intelligent assistant for exploratory data analysis

    In this paper we present an account of the main features of SNOUT, an intelligent assistant for exploratory data analysis (EDA) of social science survey data that incorporates a range of data mining techniques. EDA has much in common with existing data mining techniques: its main objective is to help an investigator reach an understanding of the important relationships in a data set rather than simply develop predictive models for selected variables. Brief descriptions of a number of novel techniques developed for use in SNOUT are presented. These include heuristic variable level inference and classification, automatic category formation, the use of similarity trees to identify groups of related variables, interactive decision tree construction and model selection using a genetic algorithm.
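Of the techniques listed, the grouping of related variables is the easiest to illustrate. The sketch below is a crude stand-in for SNOUT's similarity trees (which the abstract does not specify): it clusters survey variables whose pairwise absolute correlation exceeds a threshold, using a simple union-find. The survey data are invented.

```python
def group_variables(data, threshold=0.8):
    """Group variables whose pairwise |Pearson correlation| exceeds
    `threshold`.  `data` maps variable name -> list of numeric
    responses; returns sorted groups of related variables."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)

    names = list(data)
    parent = {v: v for v in names}          # union-find forest
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(corr(data[a], data[b])) >= threshold:
                parent[find(b)] = find(a)   # merge the two groups
    groups = {}
    for v in names:
        groups.setdefault(find(v), []).append(v)
    return sorted(map(sorted, groups.values()))

survey = {"income": [1, 2, 3, 4], "spend": [2, 4, 6, 8],
          "age": [5, 1, 4, 2]}
print(group_variables(survey))  # [['age'], ['income', 'spend']]
```

For an EDA assistant, surfacing such groups lets the investigator pick one representative per group instead of wading through many near-duplicate survey items.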

    Robust Machine Learning Applied to Astronomical Datasets I: Star-Galaxy Classification of the SDSS DR3 Using Decision Trees

    We provide classifications for all 143 million non-repeat photometric objects in the Third Data Release of the Sloan Digital Sky Survey (SDSS) using decision trees trained on 477,068 objects with SDSS spectroscopic data. We demonstrate that these star/galaxy classifications are expected to be reliable for approximately 22 million objects with r < ~20. The general machine learning environment Data-to-Knowledge and supercomputing resources enabled extensive investigation of the decision tree parameter space. This work presents the first public release of objects classified in this way for an entire SDSS data release. The objects are classified as galaxy, star, or nsng (neither star nor galaxy), with an associated probability for each class. To demonstrate how to effectively make use of these classifications, we perform several important tests. First, we detail selection criteria within the probability space defined by the three classes to extract samples of stars and galaxies to a given completeness and efficiency. Second, we investigate the efficacy of the classifications and the effect of extrapolating from the spectroscopic regime by performing blind tests on objects in the SDSS, 2dF Galaxy Redshift and 2dF QSO Redshift (2QZ) surveys. Given the photometric limits of our spectroscopic training data, we effectively begin to extrapolate past our star-galaxy training set at r ~ 18. By comparing the number counts of our training sample with the classified sources, however, we find that our efficiencies appear to remain robust to r ~ 20. As a result, we expect our classifications to be accurate for 900,000 galaxies and 6.7 million stars, and to remain robust via extrapolation for a total of 8.0 million galaxies and 13.9 million stars. [Abridged] Comment: 27 pages, 12 figures, to be published in ApJ, uses emulateapj.cls
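The probability-space selection the abstract describes amounts to cutting on the per-class probabilities and then measuring completeness (fraction of true members recovered) and efficiency (fraction of the selected sample that is genuine). A minimal sketch of such a cut, on an invented five-object catalog rather than SDSS data:

```python
def select_galaxies(objects, p_min=0.8):
    """Select a galaxy sample by thresholding the galaxy probability
    in a three-class (galaxy, star, nsng) probability vector, then
    report completeness and efficiency against known labels."""
    selected = [o for o in objects if o["p"][0] >= p_min]
    true_gal = [o for o in objects if o["label"] == "galaxy"]
    hits = [o for o in selected if o["label"] == "galaxy"]
    completeness = len(hits) / len(true_gal)   # recovered / all true
    efficiency = len(hits) / len(selected)     # genuine / selected
    return selected, completeness, efficiency

catalog = [
    {"p": (0.90, 0.05, 0.05), "label": "galaxy"},
    {"p": (0.85, 0.10, 0.05), "label": "galaxy"},
    {"p": (0.82, 0.15, 0.03), "label": "star"},
    {"p": (0.30, 0.65, 0.05), "label": "galaxy"},
    {"p": (0.10, 0.85, 0.05), "label": "star"},
]
_, c, e = select_galaxies(catalog)
print(c, e)  # completeness 2/3, efficiency 2/3
```

Raising `p_min` trades completeness for efficiency, which is exactly the tuning knob the paper's selection criteria expose to downstream users of the catalog.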

    Porting Decision Tree Algorithms to Multicore using FastFlow

    The whole computer hardware industry has embraced multicores. On these machines, extreme optimisation of sequential algorithms is no longer sufficient to extract the full machine power, which can only be exploited via thread-level parallelism. Decision tree algorithms exhibit natural concurrency that makes them suitable for parallelisation. This paper presents an approach for easy-yet-efficient porting of an implementation of the C4.5 algorithm to multicores. The parallel port requires minimal changes to the original sequential code, and it is able to achieve up to a 7x speedup on an Intel dual quad-core machine. Comment: 18 pages + cover
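The "natural concurrency" the abstract refers to is structural: after a split, the child subtrees are independent and can be built as separate tasks. The toy below shows that shape in Python rather than the paper's C++/FastFlow, with a trivial median-threshold split instead of C4.5's gain ratio, so it illustrates the task decomposition only (and, in CPython, the GIL means threads add no real speedup for pure-Python work; the structure, not the timing, is the point).

```python
from concurrent.futures import ThreadPoolExecutor

def build_tree(rows, depth=0, max_depth=3, pool=None):
    """Build a tiny decision tree; at the root, the two child
    subtrees are submitted to a worker pool as independent tasks,
    mirroring the node-level parallelism of parallel C4.5 ports."""
    if depth == max_depth or len(rows) < 2:
        return {"leaf": rows}
    values = sorted(r[0] for r in rows)
    pivot = values[len(values) // 2]        # trivial median split
    left = [r for r in rows if r[0] < pivot]
    right = [r for r in rows if r[0] >= pivot]
    if not left or not right:
        return {"leaf": rows}
    if pool is not None and depth == 0:
        # fork: each subtree is an independent unit of work
        fl = pool.submit(build_tree, left, depth + 1, max_depth)
        fr = pool.submit(build_tree, right, depth + 1, max_depth)
        return {"pivot": pivot, "l": fl.result(), "r": fr.result()}
    return {"pivot": pivot,
            "l": build_tree(left, depth + 1, max_depth),
            "r": build_tree(right, depth + 1, max_depth)}

rows = [(x,) for x in range(16)]
with ThreadPoolExecutor(max_workers=2) as pool:
    tree = build_tree(rows, pool=pool)
print(tree["pivot"])  # 8
```

A real multicore port (as with FastFlow's farm/divide-and-conquer patterns) would fork at every internal node with a cutoff on subtree size, so that task-creation overhead does not swamp the work per task.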