
    The impact of freeze-drying infant fecal samples on measures of their bacterial community profiles and milk-derived oligosaccharide content.

    Infant fecal samples are commonly studied to investigate the impacts of breastfeeding on the development of the microbiota and subsequent health effects. Comparisons of infants living in different geographic regions and environmental contexts are needed to aid our understanding of evolutionarily selected milk adaptations. However, preserving fecal samples from individuals in remote locales until they can be processed can be a challenge. Freeze-drying (lyophilization) offers a cost-effective way to preserve some biological samples for transport and analysis at a later date. Currently, it is unknown what biases, if any, are introduced into various analyses by the freeze-drying process. Here, we investigated how freeze-drying affected the analysis of two relevant and intertwined aspects of infant fecal samples: marker gene amplicon sequencing of the bacterial community and the fecal oligosaccharide profile (undigested human milk oligosaccharides). No differences were discovered between the fecal oligosaccharide profiles of wet and freeze-dried samples. The marker gene sequencing data showed an increase in the proportional representation of Bacteroides and a decrease in detection of bifidobacteria and members of class Bacilli after freeze-drying. This sample treatment bias may be related to the cell morphology (Gram status) of these different taxa. However, these effects did not overwhelm the natural variation among individuals, as the community data still grouped strongly by subject rather than by freeze-drying status. We also found that compensating for sample concentration during freeze-drying, while not necessary, was also not detrimental. Freeze-drying may therefore be an acceptable method of sample preservation and mass reduction for some studies of microbial ecology and milk glycan analysis.
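The abstract above compares the proportional representation (relative abundance) of taxa between wet and freeze-dried samples. A minimal sketch of that comparison, using invented read counts (not the study's data) whose direction merely mirrors the reported shift:

```python
# Hypothetical amplicon read counts per taxon; values are illustrative only.
def relative_abundance(counts):
    """Convert raw read counts to proportions of the sample total."""
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

wet   = {"Bacteroides": 300, "Bifidobacterium": 500, "Bacilli": 200}
dried = {"Bacteroides": 450, "Bifidobacterium": 400, "Bacilli": 150}

for label, sample in (("wet", wet), ("freeze-dried", dried)):
    ra = relative_abundance(sample)
    print(label, {t: round(p, 2) for t, p in ra.items()})
```

With these made-up counts, Bacteroides rises (0.30 to 0.45) while Bifidobacterium and Bacilli fall after freeze-drying, the same qualitative pattern the abstract reports.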

    Automated algorithm for CBCT-based dose calculations of prostate radiotherapy with bilateral hip prostheses

    Objective: Cone beam CT (CBCT) images contain more scatter than a conventional CT image and therefore provide inaccurate Hounsfield units (HUs). Consequently, CBCT images cannot be used directly for radiotherapy dose calculation. The aim of this study was to enable dose calculations to be performed with CBCT images taken during radiotherapy and to evaluate the necessity of replanning. Methods: A patient with prostate cancer and bilateral metallic prosthetic hip replacements was imaged using both CT and CBCT. The multilevel threshold (MLT) algorithm was used to categorize pixel values in the CBCT images into segments of homogeneous HU, taking into account the variation of HU with position in the CBCT images. This segmentation method relies on the operator dividing the CBCT data into a set of volumes within which the relationship between pixel values and HUs varies little. An automated MLT algorithm was developed to reduce the operator time associated with the process. An intensity-modulated radiation therapy plan was generated from CT images of the patient, copied to the segmented CBCT (sCBCT) data sets with identical settings, and the doses were recalculated and compared. Results: Gamma evaluation showed that the percentages of points in the rectum with γ < 1 (3%/3 mm) were 98.7% and 97.7% in the sCBCT using the MLT and the automated MLT algorithms, respectively. Compared with the planning CT (pCT) plan, the MLT algorithm showed a −0.46% dose difference with 8 h of operator time, while the automated MLT algorithm showed −1.3%; both are considered clinically acceptable when using the collapsed cone algorithm. Conclusion: Segmentation of CBCT images using the method in this study can be used for dose calculation. For a patient with prostate cancer, bilateral hip prostheses and the associated issues with CT imaging, the MLT algorithms achieved dose calculation accuracy that is clinically acceptable. The automated MLT algorithm reduced the operator time needed to achieve this accuracy, making it easier to implement in the clinical setting. Advances in knowledge: The MLT algorithm has been extended to the complex example of a patient with bilateral hip prostheses and, with the introduction of automation, is feasible for use in adaptive radiotherapy as an alternative to obtaining a new pCT and re-outlining the structures.
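The core idea of multilevel-threshold (MLT) segmentation, as described above, is to bucket CBCT pixel values into a few classes and assign each class a single representative HU for dose calculation. A minimal sketch of that mapping; the thresholds and HU assignments below are invented for illustration, not the paper's calibrated values, and a real implementation would vary them by position within the CBCT volume:

```python
# (upper pixel-value threshold, assigned HU) pairs, ordered low to high:
# hypothetically air, fat, soft tissue, and bone/metal classes.
SEGMENTS = [(-400, -1000), (-50, -100), (200, 0), (float("inf"), 700)]

def segment_to_hu(pixel):
    """Map a raw CBCT pixel value to its segment's representative HU."""
    for upper, hu_value in SEGMENTS:
        if pixel <= upper:
            return hu_value

print([segment_to_hu(p) for p in (-900, -120, 80, 950)])
# [-1000, -100, 0, 700]
```

The automated variant in the paper removes the need for an operator to choose these volumes and thresholds by hand, which is where the 8 h of operator time was spent.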

    Melting of a 2D Quantum Electron Solid in High Magnetic Field

    The melting temperature (T_m) of a solid is generally determined by the pressure applied to it, or indirectly by its density (n) through the equation of state. This remains true even for helium solids [wilk:67], where quantum effects often lead to unusual properties [ekim:04]. In this Letter we present experimental evidence to show that for a two-dimensional (2D) solid formed by electrons in a semiconductor sample under a strong perpendicular magnetic field (B) [shay:97], T_m is not controlled by n, but effectively by the quantum correlation between the electrons through the Landau level filling factor ν = nh/eB. Such melting behavior, different from that of all other known solids (including a classical 2D electron solid at zero magnetic field [grim:79]), attests to the quantum nature of the magnetic-field-induced electron solid. Moreover, we found T_m to increase with the strength of the sample-dependent disorder that pins the electron solid. Comment: Some typos corrected and 2 references added. Final version with minor editorial revisions published in Nature Physics.
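The filling factor ν = nh/eB that the abstract says controls melting is a simple combination of physical constants with the electron density and field. A short numerical sketch; the density and field values are hypothetical example inputs, not data from the paper:

```python
# Landau level filling factor nu = n*h/(e*B) for a 2D electron system.
H_PLANCK = 6.62607015e-34   # Planck constant, J*s (exact, SI 2019)
E_CHARGE = 1.602176634e-19  # elementary charge, C (exact, SI 2019)

def filling_factor(n_2d, b_field):
    """nu = n*h/(e*B).

    n_2d    -- 2D electron density in m^-2
    b_field -- perpendicular magnetic field in T
    """
    return n_2d * H_PLANCK / (E_CHARGE * b_field)

# Example: a typical semiconductor 2D electron density under a strong field.
nu = filling_factor(n_2d=1.0e15, b_field=20.0)
print(f"nu = {nu:.3f}")  # nu = 0.207
```

Small ν (strong field at fixed density) is the regime where the field-induced electron solid forms, so tuning B at fixed n sweeps ν, which is the point of the abstract: T_m follows ν rather than n alone.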

    A geometric network model of intrinsic grey-matter connectivity of the human brain

    Network science provides a general framework for analysing the large-scale brain networks that naturally arise from modern neuroimaging studies, and a key goal in theoretical neuroscience is to understand the extent to which these neural architectures influence the dynamical processes they sustain. To date, brain network modelling has largely been conducted at the macroscale level (i.e. white-matter tracts), despite growing evidence of the role that local grey-matter architecture plays in a variety of brain disorders. Here, we present a new model of intrinsic grey-matter connectivity of the human connectome. Importantly, the new model incorporates detailed information on cortical geometry to construct ‘shortcuts’ through the thickness of the cortex, thus enabling brain regions that are spatially distant, as measured along the cortical surface, to communicate. Our study indicates that structures based on human brain surface information differ significantly, both in their topological network characteristics and their activity propagation properties, from a variety of alternative geometries and generative algorithms. In particular, this might help explain histological patterns of grey-matter connectivity, highlighting that observed connection distances may have arisen to maximise information-processing ability, and that such gains are consistent with (and enhanced by) the presence of short-cut connections.
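The effect of such 'shortcuts' on surface-distant regions can be illustrated with a toy graph: nodes connected along a one-dimensional "surface" (a ring), plus a single shortcut edge through the "thickness". The graph and the shortcut placement are invented for illustration and are not the paper's model:

```python
from collections import deque

def shortest_path_len(adj, src, dst):
    """Breadth-first search: shortest hop count in an unweighted graph."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable

N = 20
ring = {i: {(i - 1) % N, (i + 1) % N} for i in range(N)}
print(shortest_path_len(ring, 0, 10))  # 10: halfway around the surface

# Add one shortcut "through the thickness" between opposing folds.
ring[0].add(10); ring[10].add(0)
print(shortest_path_len(ring, 0, 10))  # 1: direct via the shortcut
```

Even a few such edges sharply reduce characteristic path length, which is the mechanism the abstract invokes for gains in information-processing ability.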

    Measuring Relations Between Concepts In Conceptual Spaces

    The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points in a high-dimensional space and concepts are represented by regions in this space. Our recent mathematical formalization of this framework is capable of representing correlations between different domains in a geometric way. In this paper, we extend our formalization by providing quantitative mathematical definitions for the notions of concept size, subsethood, implication, similarity, and betweenness. This considerably increases the representational power of our formalization by introducing measurable ways of describing relations between concepts. Comment: Accepted at SGAI 2017 (http://www.bcs-sgai.org/ai2017/). The final publication is available at Springer via https://doi.org/10.1007/978-3-319-71078-5_7. arXiv admin note: substantial text overlap with arXiv:1707.05165, arXiv:1706.0636
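Two of the relations named above, subsethood and betweenness, can be sketched in a heavily simplified crisp setting where concepts are axis-aligned boxes (one interval per dimension) and instances are points. The paper's actual formalization is fuzzy and richer; this reduction is only meant to make the relations concrete:

```python
def subsethood(inner, outer):
    """Fraction of box `inner` lying inside box `outer` (1.0 = fully inside).

    Boxes are lists of per-dimension (lo, hi) intervals.
    """
    overlap, volume = 1.0, 1.0
    for (lo1, hi1), (lo2, hi2) in zip(inner, outer):
        overlap *= max(0.0, min(hi1, hi2) - max(lo1, lo2))
        volume *= hi1 - lo1
    return overlap / volume if volume else 0.0

def between(a, b, c, tol=1e-9):
    """Crisp betweenness: is point b on the straight segment from a to c?"""
    dist = lambda p, q: sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    return abs(dist(a, b) + dist(b, c) - dist(a, c)) < tol

# Hypothetical 2D space with dimensions (sweetness, size); regions invented.
apple = [(0.2, 0.4), (0.5, 0.7)]
fruit = [(0.0, 1.0), (0.0, 1.0)]
print(subsethood(apple, fruit))         # 1.0: apple lies fully within fruit
print(between((0, 0), (1, 1), (2, 2)))  # True
```

Graded subsethood (values strictly between 0 and 1) is what lets the framework express implication as a matter of degree rather than a boolean.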

    The central image of a gravitationally lensed quasar

    A galaxy can act as a gravitational lens, producing multiple images of a background object. Theory predicts that there should be an odd number of images but, paradoxically, almost all observed lenses have two or four images. The missing image should be faint and appear near the galaxy's center. These "central images" have long been sought as probes of galactic cores too distant to resolve with ordinary observations. There are five candidates, but in one case the third image is not necessarily a central image, and in the others the central component might be a foreground source rather than a lensed image. Here we report the most secure identification of a central image, based on radio observations of PMN J1632-0033, one of the latter candidates. Lens models incorporating the central image show that the mass of the lens galaxy's central black hole is less than 2 x 10^8 M_sun, and that the galaxy's surface density at the location of the central image is more than 20,000 M_sun per square parsec, in agreement with expectations based on observations of galaxies hundreds of times closer to the Earth. Comment: Nature, in press [7 pp, 2 figs]. Standard media embargo applies before publication.

    Incidence of community-acquired lower respiratory tract infections and pneumonia among older adults in the United Kingdom: a population-based study.

    Community-acquired lower respiratory tract infections (LRTI) and pneumonia (CAP) are common causes of morbidity and mortality among those aged ≥65 years, a growing population in many countries. Detailed incidence estimates for these infections among older adults in the United Kingdom (UK) are lacking. We used electronic general practice records from the Clinical Practice Research Datalink, linked to Hospital Episode Statistics inpatient data, to estimate the incidence of community-acquired LRTI and CAP among UK older adults between April 1997 and March 2011, by age, sex, region and deprivation quintile. Levels of antibiotic prescribing were also assessed. LRTI incidence increased with fluctuations over time, was higher in men than women aged ≥70 years, and increased with age from 92.21 episodes/1000 person-years (65-69 years) to 187.91/1000 (85-89 years). CAP incidence increased more markedly with age, from 2.81 to 21.81 episodes/1000 person-years respectively, and was higher among men. For both infection groups, increases over time were attenuated after age-standardisation, indicating that these rises were largely due to population ageing. Rates among those in the most deprived quintile were around 70% higher than in the least deprived, and rates were generally higher in the North of England. GP antibiotic prescribing rates were high for LRTI but lower for CAP (mostly due to immediate hospitalisation). This is the first study to provide long-term detailed incidence estimates of community-acquired LRTI and CAP in UK older individuals, taking person-time at risk into account. The summary incidence commonly presented for the ≥65 age group considerably underestimates LRTI/CAP rates, particularly among older individuals within this group. Our methodology and findings are likely to be highly relevant to health planners and researchers in other countries with ageing populations.
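The incidence measure used throughout the abstract, episodes per 1,000 person-years, divides the episode count by the total follow-up time at risk rather than by the head count. A minimal sketch with invented numbers (not the study's data):

```python
def incidence_per_1000_py(episodes, person_years):
    """Incidence rate: episodes per 1,000 person-years of follow-up."""
    return 1000.0 * episodes / person_years

# Hypothetical example: 461 LRTI episodes over 5,000 person-years of follow-up.
rate = incidence_per_1000_py(episodes=461, person_years=5000)
print(f"{rate:.2f} episodes/1000 person-years")  # 92.20 episodes/1000 person-years
```

Using person-time as the denominator is what the abstract means by "taking person-time at risk into account": people who die, move away, or enter the cohort mid-period contribute only their observed follow-up, so the rate is not diluted by partial observation.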

    Beyond Volume: The Impact of Complex Healthcare Data on the Machine Learning Pipeline

    From medical charts to national censuses, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, healthcare today generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, deriving accurate and informative insights requires more than the ability to execute machine learning models; rather, a deeper understanding of the data on which the models are run is imperative for their success. While significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion of the data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques. Comment: Healthcare Informatics, Machine Learning, Knowledge Discovery: 20 Pages, 1 Figure.