94 research outputs found

    DEVELOPMENT OF A MICROFLUIDIC GAS GENERATOR FROM AN EFFICIENT FILM-BASED MICROFABRICATION METHOD

    poster abstract. Recently, tape/film-based microfabrication methods have been studied for rapid prototyping of microfluidic devices due to their low cost and ease of fabrication [1]. However, most reported film-based microfluidic devices are simple single-layer patterned 2-dimensional (2D) designs, whose potential applications are limited. In this paper, we present the design, fabrication, and testing results of a 3-dimensional (3D) structured microfluidic gas generator prototype. This gas generator serves as an example to introduce our new film-based fabrication approach for lab-use microfluidic research, which usually requires frequent design changes and favors low fabrication cost and short fabrication time. The prototype is a comprehensive film-based microfluidic gas generator that integrates self-circulation, self-regulation, catalytic reaction, and gas/liquid separation. Time and cost efficiency are the greatest merits of this method: the only facility required during the whole process is a digital craft-cutter. The working principle of the device is illustrated in Fig.1 [2]. The film-based prototype is an alternate version of the silicon-based self-circulating, self-regulating gas generator developed by Meng [2]. Fig.2 shows the schematic of the film-based prototype. It consists of 15 layers of films, tapes, a glass slide, tubing connectors, and a supporting cube. As shown in Fig.3, the prototype device was obtained by sequentially aligning and stacking multiple layers of patterned films and double-sided Kapton tape. The patterns were cut by a digital craft-cutter from CAD drawings. The 3D structure arises from both the pattern and the thickness of each layer material, as shown in Fig.4. In addition, functional features can easily be added to the device.
For instance, Pt-black was selectively sprayed onto the tape layer through a shadow mask to form the catalytic reaction site, and a nanoporous membrane was cut to the desired shape and stacked in position as the gas/liquid separator. The self-circulating and self-regulating functions were achieved by the capillary force difference between channels, as shown in Fig.4, obtained by fabricating channels of different depths and treating the surfaces of certain channels to be hydrophilic while leaving others hydrophobic. The treatment of the polystyrene (PS) film was achieved by spraying Lotus Leaf® hydrophilic coating or by oxygen plasma exposure [3]. The fabricated device was tested with H2O2 solutions (for O2) and NH3BH3 solutions (for H2) at different concentrations (Fig.5). A pressure difference (1 psi) was applied across the gas/liquid separation membrane to improve venting. The gas generation profiles are shown in Fig.6, and the characteristics are summarized in Table 1. The generated gas flow rate was measured by a gas flow meter, and the liquid pumping rate was measured by monitoring the movement of a liquid/gas meniscus. Fig.6 shows that a higher reactant concentration yields a higher gas generation rate. The fluctuation of the gas generation rate is due to the pulsatile pumping of this self-pumping mechanism. It is expected that designs with multiple parallel channels can smooth the gas generation profile through interactions among the channels. Detailed characterization results and discussion of the reaction kinetics and pumping dynamics in the microfluidic reactor will be reported.
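The capillary force difference described above follows standard microfluidics theory; as a sketch (the Young-Laplace relation for a rectangular channel, not stated in the abstract itself), the capillary pressure across a meniscus in a channel of width $w$ and depth $h$ is

```latex
\Delta P = 2\gamma\cos\theta\left(\frac{1}{w} + \frac{1}{h}\right)
```

where $\gamma$ is the liquid surface tension and $\theta$ the contact angle. Hydrophilic walls ($\theta < 90^{\circ}$) give a positive $\Delta P$ that draws liquid in, while hydrophobic walls ($\theta > 90^{\circ}$) resist filling, so varying the channel depth $h$ and the surface treatment between channels sets the pressure imbalance that drives self-circulation.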

    Understanding Aesthetic Evaluation using Deep Learning

    A bottleneck in any evolutionary art system is aesthetic evaluation. Many different methods have been proposed to automate the evaluation of aesthetics, including measures of symmetry, coherence, complexity, contrast, and grouping. The interactive genetic algorithm (IGA) relies on human-in-the-loop, subjective evaluation of aesthetics, but limits the possibilities for large-scale search due to user fatigue and small population sizes. In this paper we look at how recent advances in deep learning can assist in automating personal aesthetic judgement. Using a leading artist's computer art dataset, we use dimensionality reduction methods to visualise both genotype and phenotype space in order to support the exploration of new territory in any generative system. Convolutional neural networks trained on the user's prior aesthetic evaluations are used to suggest new possibilities similar to, or between, known high-quality genotype-phenotype mappings.
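The abstract's core idea of training a CNN on a user's prior ratings can be sketched as a scalar regression network. This is a minimal hypothetical illustration, not the paper's architecture; the layer sizes and input resolution are assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a small CNN that regresses a scalar aesthetic
# score from an RGB image, to be trained on a user's prior ratings.
class AestheticCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # pool to a 32-dim descriptor
        )
        self.head = nn.Linear(32, 1)  # predicted aesthetic score

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h).squeeze(1)

model = AestheticCNN()
scores = model(torch.randn(4, 3, 64, 64))  # batch of 4 dummy images
print(scores.shape)
```

Once trained (e.g. with a mean-squared-error loss against the user's ratings), such a network can score candidate phenotypes cheaply, replacing the fatiguing human-in-the-loop step of an IGA.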

    Predicting Player Experience Without the Player. An Exploratory Study

    A key challenge of procedural content generation (PCG) is to evoke a certain player experience (PX) when we have no direct control over the content which gives rise to that experience. We argue that neither the rigorous methods to assess PX in HCI, nor the specialised methods in PCG, are sufficient, because they rely on a human in the loop. We propose to address this shortcoming by means of computational models of intrinsic motivation and AI game-playing agents. We hypothesise that our approach could be used to automatically predict PX across games and content types without relying on a human player or designer. We conduct an exploratory study in level generation based on empowerment, a specific model of intrinsic motivation. Based on a thematic analysis, we find that empowerment can be used to create levels with qualitatively different PX. We relate the identified experiences to established theories of PX in HCI and game design, and discuss next steps.
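Empowerment, the intrinsic-motivation model named above, is formally the channel capacity between an agent's action sequences and resulting states; in a deterministic level it reduces to the log of the number of distinct states reachable in n steps. A hypothetical sketch on a tiny grid level (the grid encoding and action set are assumptions, not from the paper):

```python
from itertools import product
from math import log2

# Deterministic grid world: '.' is floor, '#' is wall.
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(level, pos, action):
    r, c = pos[0] + action[0], pos[1] + action[1]
    if 0 <= r < len(level) and 0 <= c < len(level[0]) and level[r][c] == '.':
        return (r, c)
    return pos  # blocked by a wall or the edge: stay put

def empowerment(level, pos, n=2):
    """n-step empowerment: log2 of distinct end states over all
    n-step action sequences (deterministic simplification of the
    channel-capacity definition)."""
    reachable = set()
    for seq in product(ACTIONS, repeat=n):
        p = pos
        for a in seq:
            p = step(level, p, a)
        reachable.add(p)
    return log2(len(reachable))

level = ["...",
         ".#.",
         "..."]
print(empowerment(level, (0, 0), n=2))
```

A level generator can then score candidate layouts by the empowerment profile over their cells: open rooms yield high values, corridors and dead ends low ones, which is one way to produce levels with qualitatively different PX.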

    Identifying predictors of translocation success in rare plant species

    The fundamental goal of a rare plant translocation is to create self-sustaining populations with the evolutionary resilience to persist in the long term. Yet, most plant translocation syntheses focus on a few factors influencing short-term benchmarks of success (e.g., survival and reproduction). Short-term benchmarks can be misleading when trying to infer future growth and viability because the factors that promote establishment may differ from those required for long-term persistence. We assembled a large (n = 275) and broadly representative data set of well-documented and monitored (7.9 years on average) at-risk plant translocations to identify the most important site attributes, management techniques, and species' traits for six life-cycle benchmarks and population metrics of translocation success. We used the random forest algorithm to quantify the relative importance of 29 predictor variables for each metric of success. Drivers of translocation outcomes varied across time frames and success metrics. Management techniques had the greatest relative influence on the attainment of life-cycle benchmarks and short-term population trends, whereas site attributes and species' traits were more important for population persistence and long-term trends. Specifically, large founder sizes increased the potential for reproduction and recruitment into the next generation, whereas declining habitat quality and the outplanting of species with low seed production led to increased extinction risks and a reduction in potential reproductive output in the long term, respectively. We also detected novel interactions between some of the most important drivers, such as an increased probability of next-generation recruitment in species with greater seed production rates, but only when coupled with large founder sizes.
Because most significant barriers to plant translocation success can be overcome by improving techniques or resolving site-level issues through early intervention and management, we suggest that by combining long-term monitoring with adaptive management, translocation programs can enhance the prospects of achieving long-term success.
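The random-forest variable-importance analysis described above can be sketched as follows. This is an illustrative toy with simulated data (the variable names, simulated outcome, and interaction are assumptions mirroring the abstract, not the study's actual data or 29 predictors):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Simulate a binary success metric (e.g. next-generation recruitment)
# driven by founder size coupled with seed production, as the abstract
# reports; habitat_quality here is pure noise for contrast.
rng = np.random.default_rng(0)
n = 275  # matches the number of translocations in the data set
founder_size = rng.integers(10, 500, n)
habitat_quality = rng.random(n)
seed_production = rng.random(n)
X = np.column_stack([founder_size, habitat_quality, seed_production])
y = ((founder_size > 100) & (seed_production > 0.4)).astype(int)

# Fit a forest and rank predictors by impurity-based importance,
# analogous to quantifying the relative importance of each predictor.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["founder_size", "habitat_quality", "seed_production"],
                     rf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

Because tree splits condition on earlier splits, a random forest picks up exactly the kind of interaction the study reports (seed production mattering only at large founder sizes) without the interaction being specified in advance.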

    SNAPSHOT USA 2020: A second coordinated national camera trap survey of the United States during the COVID-19 pandemic

    Managing wildlife populations in the face of global change requires regular data on the abundance and distribution of wild animals, but acquiring these over appropriate spatial scales in a sustainable way has proven challenging. Here we present the data from Snapshot USA 2020, a second annual national mammal survey of the USA. This project involved 152 scientists setting camera traps in a standardized protocol at 1485 locations across 103 arrays in 43 states, for a total of 52,710 trap-nights of survey effort. Most (58) of these arrays were also sampled during the same months (September and October) in 2019, providing a direct comparison of animal populations in 2 years that includes data from both during and before the COVID-19 pandemic. All data were managed by the eMammal system, with all species identifications checked by at least two reviewers. In total, we recorded 117,415 detections of 78 species of wild mammals, 9236 detections of at least 43 species of birds, 15,851 detections of six domestic animals, and 23,825 detections of humans or their vehicles. Spatial differences across arrays explained more variation in relative abundance than temporal variation across years for all 38 species modeled, although there are examples of significant site-level differences among years for many species. Temporal results show how species allocate their time and can be used to study species interactions, including between humans and wildlife. These data provide a snapshot of the mammal community of the USA for 2020 and will be useful for exploring the drivers of spatial and temporal changes in relative abundance and distribution, and the impacts of species interactions on daily activity patterns. There are no copyright restrictions; please cite this paper when using these data, or a subset of these data, for publication.
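The relative-abundance comparisons described above typically rest on the standard camera-trap index of detections per 100 trap-nights, computed per array so that spatial variation across arrays can be compared with variation across years. A minimal sketch; the numbers and array names are illustrative, not drawn from the data set:

```python
# Relative abundance index (RAI): detections per 100 trap-nights,
# the standard camera-trap effort-corrected rate.
def rai(detections, trap_nights):
    return 100 * detections / trap_nights

# Hypothetical per-array, per-year counts for one species:
# (detections, trap-nights) keyed by (array, year).
surveys = {
    ("array_A", 2019): (42, 600),
    ("array_A", 2020): (39, 580),
    ("array_B", 2019): (7, 610),
    ("array_B", 2020): (9, 590),
}
for (array, year), (d, tn) in surveys.items():
    print(array, year, round(rai(d, tn), 2))
```

Dividing by effort is what makes arrays with different numbers of cameras and deployment lengths comparable; in this toy example, the between-array difference dwarfs the between-year difference, the pattern the survey reports for all modeled species.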

    SNAPSHOT USA 2019: a coordinated national camera trap survey of the United States

    With the accelerating pace of global change, it is imperative that we obtain rapid inventories of the status and distribution of wildlife for ecological inferences and conservation planning. To address this challenge, we launched the SNAPSHOT USA project, a collaborative survey of terrestrial wildlife populations using camera traps across the United States. For our first annual survey, we compiled data across all 50 states during a 14-week period (17 August-24 November of 2019). We sampled wildlife at 1,509 camera trap sites from 110 camera trap arrays covering 12 different ecoregions across four development zones. This effort resulted in 166,036 unique detections of 83 species of mammals and 17 species of birds. All images were processed through the Smithsonian's eMammal camera trap data repository and included an expert review phase to ensure taxonomic accuracy of data, resulting in each picture being reviewed at least twice. The results represent a timely and standardized camera trap survey of the United States. All of the 2019 survey data are made available herein. We are currently repeating surveys in fall 2020, opening up the opportunity to other institutions and cooperators to expand coverage of all the urban-wild gradients and ecophysiographic regions of the country. Future data will be available as the database is updated at eMammal.si.edu/snapshot-usa, as will future data paper submissions. These data will be useful for local and macroecological research including the examination of community assembly, effects of environmental and anthropogenic landscape variables, effects of fragmentation and extinction debt dynamics, as well as species-specific population dynamics and conservation action plans. There are no copyright restrictions; please cite this paper when using the data for publication.
