Initial experience of a large, self-expanding, and fully recapturable transcatheter aortic valve: The UK & Ireland Implanters' registry.
OBJECTIVES: The UK & Ireland Implanters' registry is a multicenter registry which reports on real-world experience with novel transcatheter heart valves. BACKGROUND: The 34 mm Evolut R transcatheter aortic valve is a self-expanding and fully recapturable transcatheter aortic valve, designed to treat patients with a large aortic annulus. METHODS: Between January 2017 and April 2018, clinical, procedural, and 30-day outcome data were prospectively collected from all patients receiving the 34 mm Evolut R valve across 17 participating centers in the United Kingdom and Ireland. The primary efficacy outcome was the Valve Academic Research Consortium-2 (VARC-2)-defined endpoint of device success. The primary safety outcome was the VARC-2-defined composite endpoint of early safety at 30 days. RESULTS: A total of 217 patients underwent attempted implantation. Mean age was 79.5 ± 8.8 years, and the mean Society of Thoracic Surgeons Predicted Risk of Mortality score was 5.2% ± 3.4%. Iliofemoral access was used in 91.2% of patients. Device success was 79.7%. Mean gradient was 7.0 ± 4.6 mmHg and effective orifice area 2.0 ± 0.6 cm². Paravalvular regurgitation was more than mild in 7.2%. A new permanent pacemaker was implanted in 15.7%. Early safety was demonstrated in 91.2%. At 30 days, all-cause mortality was 3.2%, stroke 3.7%, and major vascular complication 2.3%. CONCLUSIONS: Real-world experience of the 34 mm Evolut R transcatheter aortic valve demonstrated acceptable procedural success, safety, valve function, and incidence of new permanent pacemaker implantation.
Cloudy, increasingly FAIR; Revisiting the FAIR Data guiding principles for the European Open Science Cloud
The FAIR Data Principles propose that all scholarly output should be Findable, Accessible, Interoperable, and Reusable. Because the Principles express only the kinds of behaviours that researchers should expect from contemporary data resources, how they should manifest in practice was largely left open to interpretation. As support for the Principles has spread, so has the breadth of these interpretations. In observing this creeping spread of interpretation, several of the original authors felt it was now appropriate to revisit the Principles, to clarify both what FAIRness is, and is not.
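Because the Principles describe expected behaviours rather than a concrete test, any machine check of "FAIRness" is necessarily an interpretation. The sketch below illustrates one such coarse interpretation: mapping each principle to a simple proxy check on a metadata record. The field names (`identifier`, `access_url`, `format`, `license`) are assumptions made for illustration and are not part of any FAIR specification.

```python
# Toy proxy checks for the four FAIR principles on a metadata record.
# Field names are illustrative assumptions, not a standard schema.

def fair_report(record: dict) -> dict:
    """Map each FAIR principle to a boolean proxy check."""
    return {
        "Findable": bool(record.get("identifier")),       # persistent identifier present
        "Accessible": bool(record.get("access_url")),     # retrievable via a URL
        "Interoperable": record.get("format") in {"CSV", "JSON", "RDF"},  # open format
        "Reusable": bool(record.get("license")),          # clear usage licence
    }

record = {
    "identifier": "doi:10.1234/example",
    "access_url": "https://example.org/data/1",
    "format": "CSV",
    "license": "CC-BY-4.0",
}
print(fair_report(record))
```

Such heuristics capture only a shallow reading of the Principles; the divergence between checks like these and the authors' intent is exactly the kind of interpretive drift the paper revisits.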
Trends in Population-Based Studies of Human Genetics in Infectious Diseases
Pathogen genetics is already a mainstay of public health investigation and control efforts; now advances in technology make it possible to investigate the role of human genetic variation in the epidemiology of infectious diseases. To describe trends in this field, we analyzed articles that were published from 2001 through 2010 and indexed by the HuGE Navigator, a curated online database of PubMed abstracts in human genome epidemiology. We extracted the principal findings from all meta-analyses and genome-wide association studies (GWAS) with an infectious disease-related outcome. Finally, we compared the representation of diseases in the HuGE Navigator with their contributions to morbidity worldwide. We identified 3,730 articles on infectious diseases, including 27 meta-analyses and 23 GWAS. The number published each year increased from 148 in 2001 to 543 in 2010 but remained a small fraction (about 7%) of all studies in human genome epidemiology. Most articles were by authors from developed countries, but the percentage by authors from resource-limited countries increased from 9% to 25% during the period studied. The most commonly studied diseases were HIV/AIDS, tuberculosis, hepatitis B infection, hepatitis C infection, sepsis, and malaria. As genomic research methods become more affordable and accessible, population-based research on infectious diseases will be able to examine the role of variation in human as well as pathogen genomes. This approach offers new opportunities for understanding infectious disease susceptibility, severity, treatment, control, and prevention.
A multi-disciplinary perspective on emergent and future innovations in peer review [version 2; referees: 2 approved]
Peer review of research articles is a core part of our scholarly communication system. In spite of its importance, the status and purpose of peer review is often contested. What is its role in our modern digital research and communications infrastructure? Does it perform to the high standards with which it is generally regarded? Studies of peer review have shown that it is prone to bias and abuse in numerous dimensions, frequently unreliable, and can fail to detect even fraudulent research. With the advent of Web technologies, we are now witnessing a phase of innovation and experimentation in our approaches to peer review. These developments prompted us to examine emerging models of peer review from a range of disciplines and venues, and to ask how they might address some of the issues with our current systems of peer review. We examine the functionality of a range of social Web platforms, and compare these with the traits underlying a viable peer review system: quality control, quantified performance metrics as engagement incentives, and certification and reputation. Ideally, any new systems will demonstrate that they out-perform and reduce the biases of existing models as much as possible. We conclude that there is considerable scope for new peer review initiatives to be developed, each with its own potential issues and advantages. We also propose a novel hybrid platform model that could, at least partially, resolve many of the socio-technical issues associated with peer review, and potentially disrupt the entire scholarly communication system. Success for any such development relies on reaching a critical threshold of research community engagement with both the process and the platform, and therefore cannot be achieved without a significant change of incentives in research environments.
Theoretical and technological building blocks for an innovation accelerator
The scientific system that we use today was devised centuries ago and is inadequate for our current ICT-based society: the peer review system encourages conservatism, journal publications are monolithic and slow, data are often not available to other scientists, and the independent validation of results is limited. Building on the Innovation Accelerator paper by Helbing and Balietti (2011), this paper takes the initial global vision and reviews the theoretical and technological building blocks that can be used to implement an innovation (in the first place: science) accelerator platform driven by re-imagining the science system. The envisioned platform would rest on four pillars: (i) redesign the incentive scheme to reduce behaviour such as conservatism, herding, and hyping; (ii) advance scientific publications by breaking up the monolithic paper unit and introducing other building blocks such as data, tools, experiment workflows, and resources; (iii) use machine-readable semantics for publications, debate structures, provenance, etc., in order to include the computer as a partner in the scientific process; and (iv) build an online platform for collaboration, including a network of trust and reputation among the different types of stakeholders in the scientific system: scientists, educators, funding agencies, policy makers, students, and industrial innovators, among others. Any such improvements to the scientific system must support the entire scientific process (unlike current tools, which chop the scientific process into disconnected pieces), must facilitate and encourage collaboration and interdisciplinarity (again unlike current tools), must facilitate the inclusion of intelligent computing in the scientific process, and must accommodate not only the core scientific process but also other stakeholders such as science policy makers, industrial innovators, and the general public.
Broken replication forks trigger heritable DNA breaks in the terminus of a circular chromosome
<p><u>(A) Circular map of the <i>E</i>. <i>coli</i> chromosome:</u> <i>oriC</i>, <i>dif</i> and <i>terD</i> to <i>terB</i> sites are indicated. Numbers refer to the chromosome coordinates (in kb) of MG1655. <u>(B) Linear map of the terminus region:</u> chromosome coordinates are shown increasing from left to right, as in the marker frequency panels (see Figure 1C for an example), therefore in the opposite direction to the circular map. In addition to <i>dif</i> and <i>ter</i> sites, the positions of the <i>parS</i><sub>pMT1</sub> sites used for microscopy experiments are indicated. <u>(C) MFA analysis of terminus DNA loss in the <i>recB</i> mutant:</u> sequence read frequencies of exponential-phase cells, normalized to the total number of reads, were calculated for each strain. Ratios of normalized reads in isogenic wild-type and <i>recB</i> mutant cells are plotted against chromosomal coordinates (in kb). The profile ratio of the terminus region is enlarged, and the profile of the corresponding entire chromosome is shown in the inset. Original normalized profiles used to calculate ratios are shown in <a href="http://www.plosgenetics.org/article/info:doi/10.1371/journal.pgen.1007256#pgen.1007256.s005" target="_blank">S1 Fig</a>. The position of <i>dif</i> is indicated by a red arrow. The <i>ter</i> sites that arrest clockwise forks (<i>terC</i>, <i>terB</i>, green arrow) and counter-clockwise forks (<i>terA</i>, <i>terD</i>, blue arrow) are shown. <u>(D) Schematic representation of focus loss in the <i>recB</i> mutant:</u> time-lapse microscopy experiments showed that loss of a focus in the <i>recB</i> mutant occurs concomitantly with cell division in one of the two daughter cells, and that the cell that keeps the focus then generates a focus-less cell at each generation. The percentage of initial events was calculated as the percentage of cell divisions that generate a focus-less cell, not counting the following generations.
In this schematic representation, two initial events occurred (generations #2 and #7) out of 9 generations, and focus loss at generation #2 is heritable. Panels shown in this figure were previously published in [<a href="http://www.plosgenetics.org/article/info:doi/10.1371/journal.pgen.1007256#pgen.1007256.ref019" target="_blank">19</a>] and are reproduced here to introduce the phenomenon.</p>
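The MFA normalization described in panel (C) can be sketched in a few lines: read counts per chromosomal bin are normalized to each strain's total read count, and a per-bin ratio between strains is then computed. The toy counts below are invented for illustration, and reporting the ratio as mutant-over-wild-type (so that terminus DNA loss appears as a dip below 1) is an arbitrary choice; the published figure plots the analogous ratio over real sequencing data.

```python
# Minimal sketch of marker frequency analysis (MFA) ratio computation:
# normalize binned read counts to each strain's total, then take
# the per-bin mutant/wild-type ratio.

def mfa_ratio(wt_counts, mut_counts):
    """Return per-bin ratios of total-normalized read frequencies."""
    wt_total = sum(wt_counts)
    mut_total = sum(mut_counts)
    return [
        (m / mut_total) / (w / wt_total)
        for w, m in zip(wt_counts, mut_counts)
    ]

# Toy example: a dip in the mutant's central bins mimics terminus DNA loss.
wt  = [100, 100, 100, 100, 100]
mut = [100, 100,  60,  60, 100]
ratios = mfa_ratio(wt, mut)
print([round(r, 2) for r in ratios])  # central bins fall below 1
```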
Removal of PCR Error Products and Unincorporated Primers by Metal-Chelate Affinity Chromatography
Immobilized Metal Affinity Chromatography (IMAC) has been used for decades to purify proteins on the basis of amino acid content, especially surface-exposed histidines and "histidine tags" genetically added to recombinant proteins. We and others have extended the use of IMAC to purification of nucleic acids via interactions with the nucleotide bases, especially purines, of single-stranded RNA and DNA. We also have demonstrated the purification of plasmid DNA from contaminating genomic DNA by IMAC capture of selectively denatured genomic DNA. Here we describe an efficient method of purifying PCR products by specifically removing error products, excess primers, and unincorporated dNTPs from PCR product mixtures using flow-through metal-chelate affinity adsorption. By flowing a PCR product mixture through a Cu2+-iminodiacetic acid (IDA) agarose spin column, 94–99% of the dNTPs and nearly all the primers can be removed. Many of the error products commonly formed by Taq polymerase also are removed. Sequencing of the IMAC-processed PCR product gave base-calling accuracy comparable to that obtained with a commercial PCR product purification method. The results show that IMAC matrices (specifically Cu2+-IDA agarose) can be used for the purification of PCR products. Due to the generality of the base-specific mechanism of adsorption, IMAC matrices may also be used in the purification of oligonucleotides, cDNA, mRNA, and microRNAs.
An analysis of the feasibility of short read sequencing
Several methods for ultra-high-throughput DNA sequencing are currently under investigation. Many of these methods yield very short blocks of sequence information (reads). Here we report on an analysis showing the level of genome sequencing possible as a function of read length. It is shown that re-sequencing and de novo sequencing of the majority of a bacterial genome is possible with read lengths of 20–30 nt, and that reads of 50 nt can provide reconstructed contigs (contiguous fragments of sequence data) of 1000 nt and greater that cover 80% of human chromosome 1.
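One simple way to probe the read-length question above, sketched here under assumptions not drawn from the paper: for a given read length L, the fraction of positions in a reference sequence whose L-mer occurs exactly once is a crude proxy for how much of a genome reads of that length can place unambiguously. Real genomes contain long repeats that depress this fraction far more than the random toy sequence used below.

```python
# Crude proxy for short-read mappability: the fraction of L-mers in a
# reference that occur exactly once. The random "genome" is illustrative;
# real genomes have repeat structure that lowers this fraction.
import random
from collections import Counter

def unique_kmer_fraction(genome: str, read_len: int) -> float:
    kmers = [genome[i:i + read_len] for i in range(len(genome) - read_len + 1)]
    counts = Counter(kmers)
    unique = sum(1 for k in kmers if counts[k] == 1)
    return unique / len(kmers)

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(10_000))
for L in (10, 20, 30):
    print(L, round(unique_kmer_fraction(genome, L), 3))
```

On repeat-free random sequence even short reads are mostly unique, which is why feasibility analyses like the one above must be run against real chromosomes, where repeats dominate the outcome.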
A multi-disciplinary perspective on emergent and future innovations in peer review [version 1; peer review: 2 approved with reservations]
Peer review of research articles is a core part of our scholarly communication system. In spite of its importance, the status and purpose of peer review is often contested. What is its role in our modern digital research and communications infrastructure? Does it perform to the high standards with which it is generally regarded? Studies of peer review have shown that it is prone to bias and abuse in numerous dimensions, frequently unreliable, and can fail to detect even fraudulent research. With the advent of Web technologies, we are now witnessing a phase of innovation and experimentation in our approaches to peer review. These developments prompted us to examine emerging models of peer review from a range of disciplines and venues, and to ask how they might address some of the issues with our current systems of peer review. We examine the functionality of a range of social Web platforms, and compare these with the traits underlying a viable peer review system: quality control, quantified performance metrics as engagement incentives, and certification and reputation. Ideally, any new systems will demonstrate that they out-perform current models while avoiding as many of the biases of existing systems as possible. We conclude that there is considerable scope for new peer review initiatives to be developed, each with its own potential issues and advantages. We also propose a novel hybrid platform model that, at least partially, resolves many of the technical and social issues associated with peer review, and can potentially disrupt the entire scholarly communication system. Success for any such development relies on reaching a critical threshold of research community engagement with both the process and the platform, and therefore cannot be achieved without a significant change of incentives in research environments.
