Low myo-inositol and high glutamine levels in brain are associated with neuropsychological deterioration after induced hyperammonemia
The neuropsychological effect of hyperammonemia is variable. This study tests the hypothesis that the effect of ammonia on neuropsychological function in patients with cirrhosis is determined by the ability of the brain to buffer the ammonia-induced increase in glutamine within the astrocyte by losing osmolytes such as myo-inositol (mI), and not by the magnitude of the induced hyperammonemia. Fourteen cirrhotic patients with no evidence of overt hepatic encephalopathy were given a 75-g amino acid (aa) solution mimicking the hemoglobin molecule to induce hyperammonemia. A battery of neuropsychological function tests (including immediate memory), plasma ammonia, aa concentrations, and short-echo-time proton magnetic resonance spectroscopy were measured before and 4 h after administration of the aa solution. Eight patients showed deterioration in the Immediate Memory Test at 4 h. Demographic factors, severity of liver disease, change in plasma ammonia, and aa profiles after the aa solution were similar in those who showed a deterioration compared with those who did not. In patients who showed deterioration in the memory test, the mI-to-creatine ratio (mI/Cr) was significantly lower at baseline than in those who did not deteriorate. In contrast, the glutamate/glutamine-to-Cr ratio was significantly greater in the patients who deteriorated. The observation that deterioration in the memory test scores was greater in those with lower mI/Cr supports the hypothesis that the neuropsychological effects of induced hyperammonemia are determined by the capacity of the brain to handle the ammonia-induced increase in glutamine.
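Purely as an illustration of the kind of group comparison described above (baseline mI/Cr in patients who did versus did not deteriorate), here is a minimal sketch using hypothetical metabolite ratios and a nonparametric test; the study's actual statistical methods are not specified here.

```python
# Minimal sketch: comparing baseline metabolite ratios between two groups.
# The values below are hypothetical, not data from the study.
from scipy.stats import mannwhitneyu

# Baseline mI/Cr ratios (hypothetical) for patients whose Immediate Memory
# Test score deteriorated 4 h after the amino acid load, and those whose did not.
mi_cr_deteriorated = [0.42, 0.38, 0.45, 0.40, 0.36, 0.44, 0.41, 0.39]
mi_cr_stable = [0.61, 0.58, 0.65, 0.59, 0.63, 0.60]

# Two-sided Mann-Whitney U test: do the baseline mI/Cr distributions differ?
stat, p_value = mannwhitneyu(mi_cr_deteriorated, mi_cr_stable, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```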
De novo large rare copy-number variations contribute to conotruncal heart disease in Chinese patients
Microfluidic systems for the analysis of the viscoelastic fluid flow phenomena in porous media
In this study, two microfluidic devices are proposed as simplified 1-D microfluidic analogues of a porous medium. The objectives are twofold: first, to assess the usefulness of the microchannels to mimic the porous medium in a controlled and simplified manner, and second, to obtain better insight into the flow characteristics of viscoelastic fluids flowing through a packed bed. For these purposes, flow visualizations and pressure drop measurements are conducted with Newtonian and viscoelastic fluids. The 1-D microfluidic analogues of a porous medium consist of microchannels with a sequence of contractions/expansions disposed in symmetric and asymmetric arrangements. A real porous medium is, in fact, a complex combination of the two arrangements of particles simulated with the microchannels, which can be considered as limiting ideal configurations. The results show that both configurations are able to mimic well the variation of pressure drop with flow rate for Newtonian fluids. However, due to the intrinsic differences in the deformation rate profiles associated with each microgeometry, the symmetric configuration is more suitable for studying the flow of viscoelastic fluids at low De values, while the asymmetric configuration provides better results at high De values. In this way, both microgeometries seem to be complementary and could be useful tools for obtaining better insight into the flow of viscoelastic fluids through a porous medium. Such model systems could be very useful in polymer-flood processes for enhanced oil recovery, for instance, as a tool for selecting the most suitable viscoelastic fluid to be used in a specific formation. The selection of the fluid properties of a detergent for cleaning oil-contaminated soil, sand, and, in general, any porous material is another possible application.
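A minimal sketch of the dimensionless-number bookkeeping behind the De-based comparison above, assuming a single relaxation time and a simple characteristic shear rate in the contraction throat; the geometry, flow rate, and fluid values are illustrative, not those of the study.

```python
# Minimal sketch: estimating the Deborah number for a contraction/expansion
# microchannel. Dimensions, flow rate, and relaxation time are illustrative only.

def deborah_number(flow_rate_m3s: float, throat_width_m: float,
                   depth_m: float, relaxation_time_s: float) -> float:
    """De = lambda * (characteristic deformation rate in the throat).

    The characteristic rate is taken here as U_throat / throat_width,
    one common (but not unique) choice for contraction flows.
    """
    u_throat = flow_rate_m3s / (throat_width_m * depth_m)  # mean throat velocity, m/s
    char_rate = u_throat / throat_width_m                  # ~ deformation rate, 1/s
    return relaxation_time_s * char_rate

# Example: 1 uL/min through a 50 um x 100 um throat, fluid with lambda = 10 ms.
q = 1e-9 / 60.0          # 1 uL/min in m^3/s
de = deborah_number(q, throat_width_m=50e-6, depth_m=100e-6, relaxation_time_s=0.01)
print(f"De = {de:.2f}")
```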
Toward optimal implementation of cancer prevention and control programs in public health: A study protocol on mis-implementation
Background:
Much of the cancer burden in the USA is preventable through application of existing knowledge. State-level funders and public health practitioners are in ideal positions to affect programs and policies related to cancer control. Mis-implementation refers to ending effective programs and policies prematurely or continuing ineffective ones. Greater attention to mis-implementation should lead to the use of effective interventions and more efficient expenditure of resources, which, in the long term, will lead to more positive cancer outcomes.
Methods:
This is a three-phase study that takes a comprehensive approach, leading to the elucidation of tactics for addressing mis-implementation. Phase 1: We assess the extent to which mis-implementation is occurring among state cancer control programs in public health. This initial phase will involve a survey of 800 practitioners representing all states. The programs represented will span the full continuum of cancer control, from primary prevention to survivorship. Phase 2: Using data from phase 1 to identify organizations in which mis-implementation is particularly high or low, the team will conduct eight comparative case studies to gain a richer understanding of mis-implementation and to understand contextual differences. These case studies will highlight lessons learned about mis-implementation and identify hypothesized drivers. Phase 3: Agent-based modeling will be used to identify dynamic interactions between individual capacity, organizational capacity, use of evidence, funding, and external factors driving mis-implementation. The team will then translate and disseminate findings from phases 1 to 3 to practitioners and practice-related stakeholders to support the reduction of mis-implementation.
Discussion:
This study is innovative and significant because it will (1) be the first to refine and further develop reliable and valid measures of mis-implementation of public health programs; (2) bring together a strong, transdisciplinary team with significant expertise in practice-based research; (3) use agent-based modeling to address cancer control implementation; and (4) use a participatory, evidence-based, stakeholder-driven approach that will identify key leverage points for addressing mis-implementation among state public health programs. This research is expected to provide replicable computational simulation models that can identify leverage points and public health system dynamics to reduce mis-implementation in cancer control, and may be of interest to other health areas.
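Purely as an illustration of the agent-based modeling approach mentioned in Phase 3 (not the study's actual model), a minimal sketch in which program agents decide each year whether to continue, with a continuation probability driven by hypothetical effectiveness and funding variables.

```python
# Toy agent-based sketch of mis-implementation dynamics. All parameters and
# behavioral rules are hypothetical illustrations, not the study's model.
import random

class ProgramAgent:
    def __init__(self, effective: bool, funding: float):
        self.effective = effective      # is the program actually effective?
        self.funding = funding          # 0..1, available funding
        self.active = True

    def step(self):
        """Each year the program may be continued or ended."""
        if not self.active:
            return
        # Hypothetical rule: effective programs are more likely to continue,
        # but low funding can end them prematurely (one form of mis-implementation).
        p_continue = 0.55 + 0.3 * self.effective + 0.15 * self.funding
        self.active = random.random() < p_continue

def simulate(n_programs: int = 200, years: int = 10, seed: int = 1) -> float:
    random.seed(seed)
    agents = [ProgramAgent(effective=random.random() < 0.5,
                           funding=random.random()) for _ in range(n_programs)]
    for _ in range(years):
        for a in agents:
            a.step()
    # Mis-implementation rate here: effective programs that were ended early.
    ended_effective = sum(1 for a in agents if a.effective and not a.active)
    return ended_effective / sum(1 for a in agents if a.effective)

print(f"Share of effective programs ended early: {simulate():.0%}")
```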
Measuring and explaining mortality in Dutch hospitals; The Hospital Standardized Mortality Rate between 2003 and 2005
Background:
Indicators of hospital quality, such as the hospital standardized mortality ratio (HSMR), have been used increasingly to assess and improve hospital quality. Our aim was to describe and explain variation in new HSMRs for the Netherlands.
Methods:
HSMRs were estimated using data from the complete population of patients discharged during 2003 to 2005. We used binary logistic regression to indirectly standardize for differences in case-mix. Of a total of 101 hospitals, 89 remained in our explanatory analysis, in which we explored the association between HSMRs and determinants that can and cannot be influenced by hospitals. For this analysis we used a two-level hierarchical linear regression model to explain variation in yearly HSMRs.
Results:
The average HSMR decreased yearly by more than eight
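A minimal sketch of indirect standardization of the kind described above, assuming a hypothetical patient-level dataset: a logistic regression of in-hospital death on case-mix variables gives expected deaths per hospital, and HSMR = 100 × observed / expected. The variable names and model specification are illustrative, not the paper's.

```python
# Minimal sketch of an HSMR via indirect standardization (illustrative only).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data: one row per admission.
df = pd.DataFrame({
    "hospital": ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "age":      [72, 65, 80, 55, 60, 78, 83, 70, 68, 75],
    "urgent":   [1, 0, 1, 0, 0, 1, 1, 0, 1, 1],   # urgent admission (case-mix)
    "died":     [1, 0, 1, 0, 0, 0, 1, 0, 0, 1],   # in-hospital death
})

# Case-mix model fitted on all hospitals pooled (the reference experience).
model = smf.logit("died ~ age + urgent", data=df).fit(disp=0)
df["expected"] = model.predict(df)

# HSMR per hospital: 100 * observed deaths / expected deaths.
hsmr = (df.groupby("hospital")
          .apply(lambda g: 100 * g["died"].sum() / g["expected"].sum()))
print(hsmr.round(1))
```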
A multi-gene signature predicts outcome in patients with pancreatic ductal adenocarcinoma.
Improved usage of the repertoires of pancreatic ductal adenocarcinoma (PDAC) profiles is crucially needed to guide the development of predictive and prognostic tools that could inform the selection of treatment options.
Search For Heavy Pointlike Dirac Monopoles
We have searched for central production of a pair of photons with high transverse energies in p-pbar collisions at sqrt(s) = 1.8 TeV, using data collected with the DØ detector at the Fermilab Tevatron in 1994-1996. If they exist, virtual heavy pointlike Dirac monopoles could rescatter pairs of nearly real photons into this final state via a box diagram. We observe no excess of events above background, and set lower 95% C.L. limits on the mass of a spin 0, 1/2, or 1 Dirac monopole.
Comment: 12 pages, 4 figures
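A minimal sketch of the counting-experiment logic behind statements like "no excess above background" in the abstract above, assuming hypothetical observed and expected-background counts and a simple Bayesian 95% C.L. upper limit on the signal yield with a flat prior; this is not the collaboration's actual statistical procedure.

```python
# Minimal sketch: 95% C.L. upper limit on a signal yield in a counting
# experiment, given observed events and expected background (hypothetical
# numbers). Flat prior in s >= 0; not the experiment's actual method.
from scipy.stats import poisson
from scipy.optimize import brentq
from scipy.integrate import quad

def signal_upper_limit(n_obs: int, b_exp: float, cl: float = 0.95) -> float:
    # Posterior density (unnormalized) for signal mean s, flat prior on s >= 0.
    post = lambda s: poisson.pmf(n_obs, s + b_exp)
    norm, _ = quad(post, 0, 200)                      # normalization constant
    cdf = lambda s_up: quad(post, 0, s_up)[0] / norm  # posterior CDF
    return brentq(lambda s: cdf(s) - cl, 0, 200)      # solve CDF(s_up) = cl

# Example: 3 events observed with 2.5 expected from background.
print(f"95% C.L. upper limit on signal: {signal_upper_limit(3, 2.5):.2f} events")
```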
Search for High Mass Photon Pairs in p-pbar --> gamma-gamma-jet-jet Events at sqrt(s)=1.8 TeV
A search has been carried out for events in the channel p-bar p --> gamma gamma jet jet. Such a signature can characterize the production of a non-standard Higgs boson together with a W or Z boson. We refer to this non-standard Higgs, having standard model couplings to vector bosons but no coupling to fermions, as a "bosonic Higgs." With the requirement of two high transverse energy photons and two jets, the diphoton mass (m(gamma gamma)) distribution is consistent with expected background. A 90(95)% C.L. upper limit on the cross section as a function of mass is calculated, ranging from 0.60(0.80) pb for m(gamma gamma) = 65 GeV/c^2 to 0.26(0.34) pb for m(gamma gamma) = 150 GeV/c^2, corresponding to a 95% C.L. lower limit on the mass of a bosonic Higgs of 78.5 GeV/c^2.
Comment: 9 pages, 3 figures. Replacement has new H->gamma gamma branching ratios and corresponding new mass limit
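To illustrate how a cross-section upper limit versus mass translates into a mass limit (as in the bosonic Higgs abstract above), here is a minimal sketch that interpolates an observed limit curve and a hypothetical predicted cross-section curve, and reports the mass at which the prediction drops below the limit. Only the quoted 95% C.L. limit endpoints are taken from the abstract; everything else is invented for illustration.

```python
# Minimal sketch: turning a cross-section upper limit vs mass into a lower
# mass limit. The predicted cross sections below are hypothetical; only the
# 95% C.L. limit endpoints (0.80 pb at 65 GeV, 0.34 pb at 150 GeV) come from
# the abstract, with simple linear interpolation in between.
import numpy as np

masses = np.linspace(65, 150, 200)                        # GeV/c^2
sigma_limit = np.interp(masses, [65, 150], [0.80, 0.34])  # 95% C.L. limit, pb

# Hypothetical falling theory prediction for sigma x BR (pb).
sigma_theory = 5.0 * np.exp(-(masses - 65) / 20.0)

# Masses are excluded where the prediction exceeds the observed limit;
# the lower mass limit is the highest excluded mass.
excluded = sigma_theory > sigma_limit
mass_limit = masses[excluded].max() if excluded.any() else None
print(f"95% C.L. lower mass limit (illustrative): {mass_limit:.1f} GeV/c^2")
```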
Framework, principles and recommendations for utilising participatory methodologies in the co-creation and evaluation of public health interventions
Background:
Due to the burden of chronic disease on society, there is a need for preventive public health interventions that stimulate healthier lifestyles. To deal with the complex variability between individual lifestyles and settings, collaborating with end-users to develop interventions tailored to their unique circumstances has been suggested as a potential way to improve effectiveness and adherence. Co-creation of public health interventions using participatory methodologies has shown promise but lacks a framework to make this process systematic. The aim of this paper was to identify and set key principles and recommendations for systematically applying participatory methodologies to co-create and evaluate public health interventions.
Methods:
These principles and recommendations were derived through an iterative reflection process, combining key learning from the published literature with critical reflection on three case studies conducted by research groups in three European institutions, all of which have expertise in co-creating public health interventions using different participatory methodologies.
Results:
Key principles and recommendations for using participatory methodologies in public health intervention co-creation are presented for the stages of: Planning (framing the aim of the study and identifying the appropriate sampling strategy); Conducting (defining the procedure, in addition to manifesting ownership); Evaluating (the process and the effectiveness); and Reporting (providing guidelines to report the findings). Three scaling models are proposed to demonstrate how to scale locally developed interventions to a population level.
Conclusions:
These recommendations aim to facilitate public health intervention co-creation and evaluation utilising participatory methodologies by ensuring the process is systematic and reproducible.
Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at sqrt(s) = 7 TeV with the ATLAS detector
The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of sqrt(s) = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb^-1. The b-jets are identified using either a lifetime-based method, where secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method, where the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bbbar-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, the azimuthal angle difference between the two jets, and the angular variable chi in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bbbar-dijet cross-section. However, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta.
Comment: 10 pages plus author list (21 pages total), 8 figures, 1 table, final version published in European Physical Journal
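As a rough illustration of how a differential cross-section such as dσ/dpT is obtained from event counts (not the ATLAS analysis itself, which includes unfolding and systematic uncertainties), a minimal sketch assuming hypothetical per-bin b-jet counts, purity, and tagging efficiency; only the 34 pb^-1 luminosity is taken from the abstract.

```python
# Minimal sketch: differential cross-section dsigma/dpT from binned counts.
# Counts, purity, and efficiency are hypothetical; only the 34 pb^-1 luminosity
# comes from the abstract. The real measurement also unfolds detector effects.
import numpy as np

lumi_pb = 34.0                                            # integrated luminosity [pb^-1]
pt_edges = np.array([20., 40., 60., 110., 200., 400.])    # GeV
n_tagged = np.array([120000., 30000., 9000., 900., 40.])  # b-tagged jets per bin
purity   = 0.9                                            # hypothetical fraction of true b-jets
eff_btag = 0.5                                            # hypothetical tagging efficiency

bin_width = np.diff(pt_edges)                                       # GeV
dsigma_dpt = n_tagged * purity / (eff_btag * lumi_pb * bin_width)   # pb / GeV

for lo, hi, val in zip(pt_edges[:-1], pt_edges[1:], dsigma_dpt):
    print(f"{lo:>5.0f}-{hi:<5.0f} GeV : dsigma/dpT = {val:10.2f} pb/GeV")
```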
