
    Increasing the power density and reducing the levelized cost of electricity of a reverse electrodialysis stack through blending

    We increase the power density of a reverse electrodialysis (RED) stack by blending the low-salinity feed with a higher-salinity stream before the stack entrance. This lowers the capital cost of the system and the resulting levelized cost of electricity, enhancing the viability of RED renewable energy generation. Blending increases the power density by decreasing the dominant electrical resistance in the diluate channel, as well as the effective resistance caused by concentration polarization, but not without sacrificing some driving potential. To quantify this trade-off and to evaluate the power density improvement blending can provide, a one-dimensional RED stack model is employed and validated against experimental results from the literature. For a typical stack configured with a feed velocity of 1 cm/s, power density improvements of over 20% and levelized cost of electricity reductions of over 40% are achievable, provided the salinity of the available river water is below 200 ppm. Additional cost reductions are realized through back-end blending, whereby the diluate exit stream is used as the higher-salinity blend stream. The improvements from blending also increase for higher feed velocities, shorter stack lengths, and larger channel heights.
    King Fahd University of Petroleum and Minerals (Center for Clean Water and Clean Energy at MIT and KFUPM, project number R15-CW-11); Massachusetts Life Sciences Center (Hugh Hampton Memorial Fellowship)
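The blending trade-off described in this abstract can be illustrated with a toy model. The sketch below is not the authors' validated one-dimensional stack model; every constant is a hypothetical placeholder. It only shows why an interior optimum diluate concentration exists: raising the diluate concentration lowers the channel resistance (roughly as 1/c) but shrinks the Nernst-type driving potential (roughly as ln(c_s/c_d)).

```python
import numpy as np

# Illustrative only (not the paper's validated 1-D model): matched-load power
# density of a RED cell pair, P = E^2 / (4 R). Blending raises the diluate
# concentration c_d, which lowers the diluate-channel resistance (~1/c_d) but
# also shrinks the driving potential (~ln(c_s/c_d)).
# All constants below are hypothetical placeholders.

def power_density(c_d, c_s=600.0, r_fixed=1.0, k=50.0, a=1.0):
    """Toy matched-load power density vs. diluate concentration c_d."""
    E = a * np.log(c_s / c_d)          # driving potential (arbitrary units)
    R = r_fixed + k / c_d              # membrane + diluate-channel resistance
    return E**2 / (4.0 * R)

c = np.linspace(1.0, 590.0, 1000)
P = power_density(c)
c_opt = c[np.argmax(P)]
# The maximum sits strictly between the unblended feed and the concentrate
# salinity, mirroring the trade-off described in the abstract.
```

Because the power vanishes at both extremes (high resistance at very low c_d, no driving potential as c_d approaches c_s), some intermediate blend always maximizes this toy power density.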

    A new reverse electrodialysis design strategy which significantly reduces the levelized cost of electricity

    We develop a framework for choosing the optimal load resistance, feed velocity, and residence time for a reverse electrodialysis stack based on minimizing the levelized cost of electricity. The optimal load resistance maximizes the gross stack power density and results from a trade-off between stack voltage and stack current. The primary trade-off governing the optimal feed velocity is between stack pumping power losses, which reduce the net power density, and concentration polarization losses, which reduce the gross stack power density. Lastly, the primary trade-off governing the optimal residence time is between the capital costs of the stack and of the pretreatment system. Implementing our strategy, we show that a smaller load resistance, a smaller feed velocity, and a larger residence time than are currently proposed in the literature reduce costs by over 40%. Despite these reductions, reverse electrodialysis remains more expensive than other renewable technologies.
    King Fahd University of Petroleum and Minerals (Center for Clean Water and Clean Energy at MIT and KFUPM, project number R15-CW-11); Massachusetts Institute of Technology (Hugh Hampton Memorial Fellowship)
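The voltage-current trade-off behind the optimal load resistance can be sketched with the classic matched-load result, treating the stack as an idealized Thevenin source. This is an assumption for illustration, not the paper's full model, and the EMF and resistance values below are hypothetical.

```python
import numpy as np

# Sketch of the load-resistance trade-off, modeling the stack as a Thevenin
# source (EMF E_stack, internal resistance R_stack) -- an idealization, not
# the paper's model. Load power: P = E^2 * R_L / (R_L + R_s)^2, which is
# maximized at R_L = R_s (matched load).

E_stack, R_stack = 1.5, 2.0            # hypothetical volts, ohms

def load_power(R_L, E=E_stack, R_s=R_stack):
    I = E / (R_s + R_L)                # stack current falls as R_L grows
    V = I * R_L                        # load voltage rises as R_L grows
    return I * V                       # the trade-off: P = I^2 * R_L

R_L = np.linspace(0.01, 10.0, 10_000)
R_opt = R_L[np.argmax(load_power(R_L))]
# R_opt lands at ~R_stack = 2.0 ohms, the matched-load optimum.
```

Sweeping the load resistance makes the trade-off concrete: below the internal resistance the stack voltage is throttled, above it the current is, and gross power peaks in between.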

    An overview of the planned CCAT software system

    CCAT will be a 25 m diameter submillimeter telescope capable of operating in the 0.2 to 2.1 mm wavelength range. It will be located at an altitude of 5600 m on Cerro Chajnantor in northern Chile, near the ALMA site. The anticipated first-generation instruments include large-format (60,000 pixel) kinetic inductance detector (KID) cameras, a large-format heterodyne array, and a direct-detection multi-object spectrometer. The paper describes the architecture of the CCAT software and the development strategy.
    Comment: 17 pages, 6 figures, to appear in Software and Cyberinfrastructure for Astronomy III, Chiozzi & Radziwill (eds.), Proc. SPIE 9152, paper ID 9152-10

    Design and field evaluation of REMPAD: a recommender system supporting group reminiscence therapy

    This paper describes a semi-automated web-based system to facilitate digital reminiscence therapy for patients with mild-to-moderate dementia, enacted in a group setting. The system, REMPAD, uses proactive recommendation technology to profile participants and groups, and offers interactive multimedia content from the Internet to match these profiles. In this paper, we focus on the design of the system to deliver an innovative personalized group reminiscence experience. We take a user-centered design approach to discover and address the design challenges and considerations. A combination of methodologies is used throughout this research study, including exploratory interviews, prototype use case walkthroughs, and field evaluations. The results of the field evaluation indicate high user satisfaction when using the system, and a strong tendency towards repeated use in the future. These studies provide an insight into the current practices and challenges of group reminiscence therapy, and inform the design of a multimedia recommender system to support facilitators and group therapy participants.

    Automatically recommending multimedia content for use in group reminiscence therapy

    This paper presents and evaluates a novel approach for automatically recommending multimedia content for use in group reminiscence therapy for people with Alzheimer's and other dementias. In recent years, recommender systems have gained popularity for providing a personalised experience in information discovery tasks. This personalisation approach is naturally suited to tasks in healthcare, such as reminiscence therapy, where there has been a trend towards an increased emphasis on person-centred care. Building on recent work which has shown benefits to reminiscence therapy in a group setting, we develop and evaluate a system, REMPAD, which profiles people with Alzheimer's and other dementias, and provides multimedia content tailored to a given group context. In this paper we present our system and approach, and report on a user trial in residential care settings. In our evaluation we examine the potential to use early aggregation and late aggregation of group member preferences using case-based reasoning combined with a content-based method. We evaluate with respect to accuracy, utility, and perceived usefulness. The results overall are positive, and we find that our best-performing approach uses early-aggregation CBR combined with a content-based method. We also note different performance under different evaluation criteria, with certain configurations of our approach providing better accuracy and others providing better utility.
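The early- versus late-aggregation distinction evaluated here can be sketched in a few lines. The example below is a generic illustration of the two group-recommendation strategies, not REMPAD's case-based reasoning pipeline; the score matrix, the mean-profile merge, and the least-misery merge are all hypothetical choices.

```python
import numpy as np

# Generic sketch of early vs. late aggregation for group recommendation
# (not REMPAD's CBR pipeline; all profiles and items are made up).
# Rows: group members; columns: predicted relevance of each candidate item.
scores = np.array([
    [1.0, 0.6, 0.3],   # member A loves item 0
    [0.9, 0.6, 0.2],   # member B loves item 0
    [0.1, 0.7, 0.3],   # member C strongly dislikes item 0
])

# Early aggregation: merge member profiles first, then rank once for the
# merged "group profile" (here, simply the mean score per item).
early_rank = np.argsort(-scores.mean(axis=0))

# Late aggregation: score per member, then merge individual preferences
# (here, a least-misery merge: an item is only as good as its worst score).
late_rank = np.argsort(-scores.min(axis=0))
```

On this toy data the two strategies disagree: the polarizing item tops the early-aggregation ranking because its mean is high, while the least-misery merge prefers the safe item every member tolerates, echoing the abstract's observation that different configurations do better under different evaluation criteria.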

    On the cost of electrodialysis for the desalination of high salinity feeds

    We propose the use of electrodialysis to desalinate produced waters from shale formations in order to facilitate water reuse in subsequent hydraulic fracturing processes. We focus on establishing the energy and equipment size required for the desalination of feed waters containing total dissolved solids of up to 192,000 ppm, and we do this by experimentally replicating the performance of a 10-stage electrodialysis system. We find that energy requirements are similar to current vapour compression desalination processes for feedwaters ranging between roughly 40,000 and 90,000 ppm TDS, but we project water costs to potentially be lower. We also find that the cost per unit salt removed is significantly lower when removed from a high-salinity stream as opposed to a low-salinity stream, pointing towards the potential of ED to operate as a partial desalination process for high-salinity waters. We then develop a numerical model for the system, validate it against experimental results, and use this model to minimise salt removal costs by optimising the stack voltage. We find that the higher the salinity of the water from which salt is removed, the smaller the ratio of the electrical current to its limiting value should be. We conclude, on the basis of energy and equipment costs, that electrodialysis processes are potentially feasible for the desalination of high-salinity waters but require further investigation of robustness to fouling under field conditions.
    Massachusetts Institute of Technology, Office of the Dean for Graduate Education (Hugh Hampton Young Memorial Fellowship); MIT Energy Initiative
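The finding that salt is cheaper to remove from a high-salinity stream can be motivated with a back-of-envelope scaling argument. The sketch below is not the paper's validated model: it only assumes that the limiting current density scales with the diluate concentration (i_lim ~ k·c), so at higher salinity more current, and hence more salt flux, can be driven through each unit of membrane area. The constants are hypothetical.

```python
# Back-of-envelope sketch (not the paper's validated model): membrane area,
# a proxy for capital cost, needed per unit rate of salt removal, assuming
# the limiting current density scales as i_lim = k * c and the stack runs
# at a fixed fraction of it. All constants are hypothetical placeholders.

F = 96485.0                            # Faraday constant, C/mol

def membrane_area_per_mol(c, ratio=0.7, k=0.05):
    """Area (m^2) to remove 1 mol/s of salt at diluate concentration c.

    Operating current i = ratio * i_lim with i_lim = k * c (toy scaling).
    Salt flux per unit area = i / F, so the required area is F / i.
    """
    i = ratio * k * c
    return F / i

area_high = membrane_area_per_mol(3000.0)   # ~high-salinity produced water
area_low = membrane_area_per_mol(100.0)     # ~brackish stream
# Under this scaling the low-salinity stream needs 30x the membrane area
# per unit of salt removed, consistent with the cost trend in the abstract.
```

The inverse-concentration scaling of required area is the core of the argument; the paper's optimization of the current-to-limiting-current ratio refines where on that curve each stage should operate.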

    Absolut “copper catalyzation perfected”; robust living polymerization of NIPAM : Guinness is good for SET-LRP

    The controlled polymerization of N-isopropylacrylamide (NIPAM) is reported in a range of international beers, wines, ciders, and spirits utilizing Cu(0)-mediated living radical polymerization (SET-LRP). Highly active Cu(0) is first formed in situ by the rapid disproportionation of [Cu(I)(Me6-Tren)Br] in the commercial water-alcohol mixtures. Rapid, yet highly controlled, radical polymerization follows (Đ values as low as 1.05) despite the numerous chemicals of diverse functionality present in these solvents, e.g., alpha acids, sugars, phenols, terpenoids, flavonoids, tannins, metallo-complexes, and anethole. The results herein demonstrate the robust nature of the aqueous SET-LRP protocol, underlining its ability to operate efficiently in a wide range of complex chemical environments.

    Rojava’s ‘war of education’ : the role of education in building a revolutionary political community in North and East Syria

    Acknowledgements: We are indebted to our many friends across Kurdistan, including the interpreters we worked with and our research participants. Further, we are grateful to Professor Christian Lund (University of Copenhagen) for his supervision of this research, and Dr Rachel Shanks, Professor Pamela Abbott, and Dr Hanifi Barış (University of Aberdeen) for their feedback in the early stages of this article.
    Peer reviewed. Publisher PDF

    SPYGLASS. IV. New Stellar Survey of Recent Star Formation within 1 kpc

    Young stellar populations provide a powerful record that traces millions of years of star formation history in the solar neighborhood. Using a revised form of the SPYGLASS young star identification methodology, we produce an expanded census of nearby young stars (age < 50 Myr). We then use the HDBSCAN clustering algorithm to produce a new SPYGLASS Catalog of Young Associations (SCYA), which reveals 116 young associations within 1 kpc. More than 25% of these groups are largely new discoveries, as 20 are substantively different from any previous definition and 10 have no equivalent in the literature. The new associations reveal a previously undiscovered demographic of small associations with little connection to larger structures. Some of the groups we identify are especially notable for their high transverse velocities, which can differ from the solar velocity by 30-50 km s^-1, and for their positions, which can reach up to 300 pc above the Galactic plane. These features may suggest a unique origin, matching existing evidence of infalling gas parcels interacting with the disk ISM. Our clustering also suggests links between often-separated populations, hinting at direct structural connections between the Orion Complex and Perseus OB2, and between the subregions of Vela. The ~30 Myr old Cepheus-Hercules association is another emerging large-scale structure, with a size and population comparable to Sco-Cen. Cep-Her and other similarly aged structures are also found clustered along extended structures perpendicular to the known spiral arm structure, suggesting that arm-aligned star formation patterns have only recently become dominant in the solar neighborhood.
    Comment: Accepted to ApJ. 34 pages, 14 figures, 4 tables in AASTeX63 format. Online-only catalog files and interactive figures are available in the ancillary data

    Posterior samples of source galaxies in strong gravitational lenses with score-based priors

    Inferring accurate posteriors for high-dimensional representations of the brightness of gravitationally lensed sources is a major challenge, in part due to the difficulty of accurately quantifying the priors. Here, we report the use of a score-based model to encode the prior for the inference of undistorted images of background galaxies. This model is trained on a set of high-resolution images of undistorted galaxies. By adding the likelihood score to the prior score and using a reverse-time stochastic differential equation solver, we obtain samples from the posterior. Our method produces independent posterior samples and models the data almost down to the noise level. We show how the balance between the likelihood and the prior meets our expectations in an experiment with out-of-distribution data.
    Comment: 5+6 pages, 3 figures. Accepted (poster + contributed talk) for the Machine Learning and the Physical Sciences Workshop at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022); corrected style file and added authors checklist
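The central identity used here, that the posterior score is the sum of the likelihood score and the prior score, can be checked in a 1-D toy problem where both are Gaussian and the exact posterior is known. The sketch below uses unadjusted Langevin dynamics as a simpler stand-in for the paper's reverse-time SDE solver, and the observation value and noise level are hypothetical.

```python
import numpy as np

# 1-D toy of the score-addition idea: posterior score = prior score +
# likelihood score. With Gaussian prior N(0, 1) and likelihood N(y; x, s2),
# the exact posterior is N(y / (1 + s2), s2 / (1 + s2)). Sampling here uses
# unadjusted Langevin dynamics, a simpler stand-in for the reverse-time SDE
# solver used in the paper. Numbers are hypothetical.
rng = np.random.default_rng(1)

s2 = 0.25            # observation noise variance
y = 1.0              # observed datum

prior_score = lambda x: -x                   # d/dx log N(x; 0, 1)
lik_score = lambda x: (y - x) / s2           # d/dx log N(y; x, s2)
post_score = lambda x: prior_score(x) + lik_score(x)

x = rng.normal(size=5000)                    # start chains from the prior
eps = 0.01
for _ in range(4000):                        # x += (eps/2) * score + sqrt(eps) * noise
    x = x + 0.5 * eps * post_score(x) + np.sqrt(eps) * rng.normal(size=x.size)

# Exact posterior here: mean y/(1 + s2) = 0.8, variance s2/(1 + s2) = 0.2,
# which the Langevin samples should closely reproduce.
```

Running many chains in parallel mirrors the paper's claim of independent posterior samples: each chain ends at an independent draw once the dynamics have mixed, with no correlation between samples as in a single MCMC trace.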