Impact of an interatrial shunt device on survival and heart failure hospitalization in patients with preserved ejection fraction
Aims:
Impaired left ventricular diastolic function leading to elevated left atrial pressures, particularly during exertion, is a key driver of symptoms and outcomes in heart failure with preserved ejection fraction (HFpEF). Insertion of an interatrial shunt device (IASD) to reduce left atrial pressure in HFpEF has been shown to be associated with short‐term haemodynamic and symptomatic benefit. We aimed to investigate the potential effects of IASD placement on HFpEF survival and heart failure hospitalization (HFH).
Methods and results:
Heart failure with preserved ejection fraction patients participating in the Reduce Elevated Left Atrial Pressure in Patients with Heart Failure study (Corvia Medical) of an IASD were followed for a median duration of 739 days. The theoretical impact of IASD implantation on HFpEF mortality was investigated by comparing the observed survival of the study cohort with the survival predicted from baseline data using the Meta-analysis Global Group in Chronic Heart Failure (MAGGIC) heart failure risk survival score. Baseline and post-IASD implant parameters associated with HFH were also investigated. Based upon the individual baseline demographic and cardiovascular profiles of the study cohort, the MAGGIC score-predicted mortality was 10.2/100 patient-years. The observed mortality rate of the IASD-treated cohort was 3.4/100 patient-years, approximately one third of the predicted rate (P = 0.02). By Kaplan–Meier analysis, the observed survival in IASD patients was greater than predicted (P = 0.014). Baseline parameters were not predictive of future HFH events; however, poorer exercise tolerance and a higher workload-corrected exercise pulmonary capillary wedge pressure at the 6-month post-IASD study were associated with HFH.
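As a rough illustration of how an observed event rate can be compared against a risk-score prediction (a sketch only, not the authors' actual analysis; the event count and follow-up below are hypothetical placeholders chosen to reproduce rates near 3.4 and 10.2 per 100 patient-years), an exact one-sample Poisson test can be used:

```python
# Illustrative sketch only: comparing an observed mortality rate against a
# model-predicted rate with a one-sample exact Poisson test.
# The counts and follow-up are hypothetical placeholders.
from scipy import stats

observed_deaths = 4          # hypothetical event count
person_years = 118.0         # hypothetical total follow-up
predicted_rate = 10.2 / 100  # MAGGIC-predicted deaths per patient-year

observed_rate = observed_deaths / person_years
expected_deaths = predicted_rate * person_years

# Exact one-sided Poisson test: probability of observing this few events
# or fewer if the predicted rate were the true rate.
p_value = stats.poisson.cdf(observed_deaths, expected_deaths)

print(f"observed rate : {100 * observed_rate:.1f} per 100 patient-years")
print(f"predicted rate: {100 * predicted_rate:.1f} per 100 patient-years")
print(f"rate ratio    : {observed_rate / predicted_rate:.2f}")
print(f"one-sided exact Poisson p-value: {p_value:.3f}")
```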
Conclusions:
The current study suggests that IASD implantation may be associated with a reduction in mortality in HFpEF. Large-scale ongoing randomized studies are required to confirm the potential benefit of this therapy.
One-year outcomes after transcatheter insertion of an interatrial shunt device for the management of heart failure with preserved ejection fraction
Background—Heart failure with preserved ejection fraction has a complex pathophysiology and remains a therapeutic challenge. Elevated left atrial pressure, particularly during exercise, is a key contributor to morbidity and mortality. Preliminary analyses have demonstrated that a novel interatrial septal shunt device that allows shunting to reduce the left atrial pressure provides clinical and hemodynamic benefit at 6 months. Given the chronicity of heart failure with preserved ejection fraction, evidence of longer-term benefit is required.
Methods and Results—Patients (n=64) with left ventricular ejection fraction ≥40%, New York Heart Association class II–IV, elevated pulmonary capillary wedge pressure (≥15 mm Hg at rest or ≥25 mm Hg during supine bicycle exercise) participated in the open-label study of the interatrial septal shunt device. One year after interatrial septal shunt device implantation, there were sustained improvements in New York Heart Association class (P<0.001), quality of life (Minnesota Living with Heart Failure score, P<0.001), and 6-minute walk distance (P<0.01). Echocardiography showed a small, stable reduction in left ventricular end-diastolic volume index (P<0.001), with a concomitant small stable increase in the right ventricular end-diastolic volume index (P<0.001). Invasive hemodynamic studies performed in a subset of patients demonstrated a sustained reduction in the workload corrected exercise pulmonary capillary wedge pressure (P<0.01). Survival at 1 year was 95%, and there was no evidence of device-related complications.
Conclusions—These results provide evidence of safety and sustained clinical benefit in heart failure with preserved ejection fraction patients 1 year after interatrial septal shunt device implantation. Randomized, blinded studies are underway to confirm these observations.
Optimization models for sustainable reverse logistics network planning under uncertainty
Evolving toward sustainable operations in supply chains is a critical need for the near future and for the well-being of coming generations. The term sustainability commonly refers to the interactions between the economic, environmental, and social dimensions of development. Sustainable development is usually defined as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs". Practitioners and academics all over the world have been working toward this goal for the last three decades, and this thesis complements that effort toward achieving sustainability in supply chain operations.
My dissertation, entitled "Quantitative models for sustainable reverse logistics network design under uncertainty", focuses on the importance of developing decision-making models that incorporate the critical uncertainties inherent to reverse logistics operations in industry. More specifically, it studies the trade-offs necessary to design efficient reverse logistics networks while considering various environmental aspects, thus improving our chances of taking this step toward sustainability. In the three articles presented below, we use the construction, renovation and demolition (CRD) industry as a reference to validate our models through several case studies.
The first article, titled "Reverse logistics network redesign under uncertainty for wood waste in the CRD industry", presents a detailed case study of the challenges related to wood building-material waste management in Quebec, Canada. Its main objective is to determine the locations and capacities of the sorting facilities that ensure compliance with regulation and prevent the wood from being massively landfilled. We formulate the problem as a mixed-integer linear programming (MILP) model that minimizes the total cost of recycling the wood collected from CRD sites. Starting from the actual collection-center locations in the Quebec CRD industry, we propose a scenario-based approach that redesigns the reverse logistics network under various realizations of the uncertain parameters. The results demonstrate that efforts to obtain accurate information about the locations of the supply sources and the quantity and quality of the collected wood would guarantee a more efficient reverse logistics network redesign. Although environmental and social considerations are not addressed in this article, it represents a first step toward sustainability by optimizing waste management operations in a sector that is among the biggest waste generators worldwide.
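To make the model class concrete, here is a minimal facility-location MILP sketch in the spirit of the problem described above, written with PuLP; the sets, costs, and capacities are invented for illustration and do not come from the thesis:

```python
# Minimal facility-location MILP sketch; all data and names are illustrative.
import pulp

sites = ["s1", "s2", "s3"]            # candidate sorting-facility locations
sources = ["crd1", "crd2"]            # CRD waste-generation sites
supply = {"crd1": 120.0, "crd2": 80.0}         # tonnes of wood collected
capacity = {"s1": 150.0, "s2": 100.0, "s3": 90.0}
open_cost = {"s1": 500.0, "s2": 400.0, "s3": 350.0}
ship_cost = {(i, j): 1.0 + 0.5 * (sources.index(i) + sites.index(j))
             for i in sources for j in sites}  # $/tonne, made up

m = pulp.LpProblem("rl_network_design", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", sites, cat="Binary")
x = pulp.LpVariable.dicts("flow", (sources, sites), lowBound=0)

# Objective: fixed opening costs plus transportation costs.
m += (pulp.lpSum(open_cost[j] * y[j] for j in sites)
      + pulp.lpSum(ship_cost[i, j] * x[i][j] for i in sources for j in sites))

# All collected wood must be routed to some facility (no landfilling here).
for i in sources:
    m += pulp.lpSum(x[i][j] for j in sites) == supply[i]

# Flows only into open facilities, within capacity.
for j in sites:
    m += pulp.lpSum(x[i][j] for i in sources) <= capacity[j] * y[j]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("opened:", [j for j in sites if y[j].value() > 0.5])
```

The binary opening variables together with the capacity-linking constraints capture the core redesign decision; everything else (regulatory landfilling limits, sorting quality) would enter as additional constraints.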
Thus, in the second article, titled "A two-stage stochastic optimization model for reverse logistics network design under dynamic suppliers' locations", we introduce a more advanced formulation that addresses multiple scenarios simultaneously in order to cope with uncertainty over a multi-period planning horizon. The availability of each material collected from the supply sources and the recycling rates at the collection centers are the main sources of uncertainty considered in this study. This time, not only do we optimize the reverse logistics network design (RLND), but we also evaluate the integration of logistics platforms called source-separation centers (SSC), used to separate materials at the source before shipping them to the main collection centers. We perform a sensitivity analysis on the number of supply sources (i.e. waste generators) to compare low-density rural collection zones with high-density urban areas, where waste collection activities are often more challenging. Although the SSC improve network performance in both rural and urban zones, the flexibility provided by these dynamic platforms is most effective in high-density urban areas. The results suggest significant RLND adjustments that increase both the average expected profit and the amount of materials recycled through the reverse logistics channel.
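The two-stage structure can be sketched as follows (again with invented data, not the article's formulation): facility-opening decisions are first-stage and shared across scenarios, while flows adapt to each scenario's realized supply, and the objective weights recourse costs by scenario probability:

```python
# Two-stage stochastic sketch: open/close decisions y are first-stage
# (scenario-independent); flows x adapt per scenario. Data are illustrative.
import pulp

sites = ["s1", "s2"]
sources = ["crd1", "crd2"]
scenarios = {"low": 0.5, "high": 0.5}              # scenario probabilities
supply = {"low":  {"crd1": 80.0,  "crd2": 60.0},   # realized tonnes
          "high": {"crd1": 140.0, "crd2": 90.0}}
capacity = {"s1": 160.0, "s2": 120.0}
open_cost = {"s1": 500.0, "s2": 400.0}
ship_cost = 1.5                                     # flat $/tonne, made up

m = pulp.LpProblem("two_stage_rlnd", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", sites, cat="Binary")          # stage 1
x = pulp.LpVariable.dicts("flow", (scenarios, sources, sites), lowBound=0)

# Minimize fixed cost + expected (probability-weighted) recourse cost.
m += (pulp.lpSum(open_cost[j] * y[j] for j in sites)
      + pulp.lpSum(p * ship_cost * x[s][i][j]
                   for s, p in scenarios.items()
                   for i in sources for j in sites))

for s in scenarios:
    for i in sources:
        m += pulp.lpSum(x[s][i][j] for j in sites) == supply[s][i]
    for j in sites:
        m += pulp.lpSum(x[s][i][j] for i in sources) <= capacity[j] * y[j]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("first-stage opening plan:", {j: int(y[j].value()) for j in sites})
```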
Finally, in the third article, titled "A carbon-constrained stochastic model for eco-efficient reverse logistics network design under environmental regulations in the CRD industry", we adapt the stochastic model of the previous article to include environmental considerations by adding a second objective function. In this research, we evaluate the optimal eco-efficient reverse logistics network design for wood waste recycling in the CRD industry under both landfilling restrictions and emission control through a cap-and-trade system, such as the one currently in effect in Quebec. We emphasize the importance of the source-separation strategy in addressing the challenge posed by the unpredictable quality of the collected wood and its impact on the efficiency of the recycling processes. Indeed, once the emissions released by the various recycling processes are accounted for, landfilling may turn out to be the best option depending on the quality level of the collected waste. Finally, we establish the relation between the uncertainty in the quality of the collected materials and the difficulty of complying with governmental recycling targets.
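Purely for illustration (the symbols below are assumptions, not the article's notation), a cap-and-trade mechanism typically enters such a model through an emissions balance in which allowances can be bought or sold at the carbon price:

```latex
% Illustrative cap-and-trade constraint; symbols are assumptions.
\begin{align}
  \min \quad & \sum_{p} c_p \, q_p \;+\; \pi \,(e^{+} - e^{-})
  && \text{processing cost plus net allowance purchases} \\
  \text{s.t.} \quad & \sum_{p} g_p \, q_p \;\le\; \mathrm{Cap} + e^{+} - e^{-}
  && \text{emissions within cap, adjusted by trading} \\
  & e^{+},\, e^{-} \ge 0
\end{align}
```

Here q_p is the quantity treated by process p, g_p its emission factor, pi the allowance price, and e+ and e- the allowances bought and sold. Tightening Cap shifts flow toward low-emission processes or, as noted above, toward landfilling when recycling low-quality wood is emission-intensive.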
Overall, the scenario-based approach of the first article establishes the problem of designing an optimal reverse logistics network that performs well under each of several uncertain scenarios. Building on this finding, the second article develops a two-stage stochastic model to find the best expected RLND over a large number of possible scenarios in a multi-period planning horizon. Lastly, the third article adapts this model to the reality of the wood waste recycling case study in the CRD industry; these adaptations involve accounting for the emissions of the wood recycling processes and complying with the legal framework on recycling targets.
A tactical planning model for a supply chain subject to environmental regulations
With environmental concerns coming to the fore in recent decades, supply chain management has become increasingly complex. Because large logistics networks are major contributors to global warming and to the damage inflicted on our environment, the companies involved find themselves at the center of the controversy and are increasingly accountable to governments as well as to their consumers. The term sustainable supply chain is commonly used when traditional economic considerations are broadened to include the environmental and social dimensions, representing a balance that allows an industrial activity to endure over time without harming its surroundings. This research work follows that logic, targeting in particular two environmental regulations of current relevance to Quebec companies: limits on greenhouse gas emissions and the compensation principle for collection services.
In this work, we developed a linear optimization model for analyzing tactical decisions about sourcing eco-designed versus standard components from different suppliers. The mathematical formulation also evaluates the optimal production plan at the manufacturing plants using two distinct technologies: standard versus clean. The reasoning follows three main steps. First, a preliminary solution establishes the optimal sourcing and production plan in the case where the company under study is not yet subject to the environmental laws. Simulations then test this optimal plan when it is subjected first to one of the two regulations mentioned above, and then to both simultaneously. While we initially use the real data currently in force in the province of Quebec, the remainder of our study assumes that these recent regulations will be tightened in the coming years. We therefore analyze the potential consequences of stricter legislation on the optimal tactical sourcing and production plan of our supply chain.
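A minimal mathematical sketch of this trade-off might look as follows (all symbols are illustrative assumptions, not the thesis notation): x denotes component purchases (standard or eco-designed), y production volumes by standard or clean technology, c and h the corresponding unit costs, g the emission factors, d_t the demand, and Cap the emissions allowance:

```latex
% Minimal sketch of the sourcing/production trade-off; symbols are assumed.
\begin{align}
  \min \quad & \sum_{t} \left( c^{\mathrm{std}} x^{\mathrm{std}}_{t}
      + c^{\mathrm{eco}} x^{\mathrm{eco}}_{t}
      + h^{\mathrm{std}} y^{\mathrm{std}}_{t}
      + h^{\mathrm{cln}} y^{\mathrm{cln}}_{t} \right)
  && \text{sourcing and production costs} \\
  \text{s.t.} \quad
  & y^{\mathrm{std}}_{t} + y^{\mathrm{cln}}_{t} = d_{t}
  && \text{demand in period } t \\
  & x^{\mathrm{std}}_{t} + x^{\mathrm{eco}}_{t}
      = y^{\mathrm{std}}_{t} + y^{\mathrm{cln}}_{t}
  && \text{one component per unit produced} \\
  & \sum_{t} \left( g^{\mathrm{std}} y^{\mathrm{std}}_{t}
      + g^{\mathrm{cln}} y^{\mathrm{cln}}_{t} \right) \le \mathrm{Cap}
  && \text{GHG emission cap}
\end{align}
```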
On a stochastically grain-discretised model for 2D/3D temperature mapping prediction in grinding
Excessive grinding heat can cause unwanted thermal damage to workpiece materials. Most previous studies of grinding heat/temperature, however, assumed the wheel–workpiece contact zone to be a moving band heat source, which may not adequately capture the real situation in grinding. To address this, this paper theoretically models the grinding temperature field using a stochastically grain-discretised temperature model (SGDTM) that accounts for grain–workpiece micro-interactions (i.e. rubbing, ploughing and cutting), and presents for the first time full 2D/3D temperature maps with highly localised thermal information, down to the grain scale (i.e. the thermal impact induced by each individual grain). To validate the theoretical maps, a new methodological approach for capturing 2D/3D temperature maps based on an array of sacrificial thermocouples is also proposed. Experimental validation indicates that the grinding temperature calculated by SGDTM agrees reasonably with experiment in terms of both 1D temperature signals (i.e. signals captured at a specific location within the grinding zone) and 2D/3D temperature maps of the grinding zone, demonstrating the feasibility and accuracy of SGDTM. The study also shows that, as expected, the heat fluxes are neither uniformly distributed along the wheel-width direction nor continuous along the workpiece feed direction. The proposed SGDTM and temperature-measurement technique are anticipated not only to provide a basis for preventing grinding thermal damage (e.g. grinding burns, annealing and rehardening) but also to advance the existing understanding of grinding heat/temperature beyond the common approach that relies on a single thermocouple.
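As a loose numerical illustration of the grain-discretised idea (a sketch under stated assumptions, not SGDTM itself), one can superpose classical quasi-steady moving point-source solutions for many randomly placed grains to obtain a 2D surface temperature map; every parameter below is a placeholder:

```python
# Illustrative sketch (not the paper's SGDTM): superpose quasi-steady moving
# point-source solutions for randomly placed "grains" to build a 2D surface
# temperature map. All parameters are invented placeholders.
import numpy as np

k = 40.0        # thermal conductivity, W/(m K)  (placeholder)
alpha = 1.2e-5  # thermal diffusivity, m^2/s     (placeholder)
v = 0.05        # relative feed speed, m/s       (placeholder)

rng = np.random.default_rng(0)
n_grains = 200
# Random grain contact positions in a 2 mm x 2 mm contact zone, and a random
# heat input per grain (rubbing/ploughing/cutting lumped together).
gx = rng.uniform(0.0, 2e-3, n_grains)
gy = rng.uniform(0.0, 2e-3, n_grains)
q = rng.uniform(0.05, 0.5, n_grains)   # W per grain, placeholder

# Evaluation grid on the workpiece surface.
X, Y = np.meshgrid(np.linspace(-1e-3, 3e-3, 200),
                   np.linspace(-1e-3, 3e-3, 200))

T = np.zeros_like(X)
for xg, yg, qg in zip(gx, gy, q):
    xi = X - xg                               # coordinate along the feed
    R = np.sqrt(xi**2 + (Y - yg)**2) + 1e-9   # avoid division by zero
    # Quasi-steady moving point source on a semi-infinite body (Jaeger form).
    T += qg / (2.0 * np.pi * k * R) * np.exp(-v * (R + xi) / (2.0 * alpha))

print(f"peak temperature rise: {T.max():.1f} K")
```

The resulting map is intrinsically non-uniform across the wheel width and discontinuous along the feed direction, which is the qualitative behaviour the abstract reports.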
A Study of Nanoclay Reinforcement of Biocomposites Made by Liquid Composite Molding
Liquid composite molding (LCM) processes are widely used to manufacture composite parts for the automotive industry. An appropriate selection of materials and proper optimization of the manufacturing parameters are key to producing parts with improved mechanical properties. This paper reports on a study of biobased composites reinforced with nanoclay particles. A soy-based unsaturated polyester resin was used as the matrix, and glass and flax fiber fabrics were used as reinforcement. The aim is to improve the mechanical and flammability properties of the reinforced composites by introducing nanoclay particles into the unsaturated polyester resin. Four different mixing techniques were investigated to improve the dispersion of the nanoclay particles in the bioresin and obtain intercalated or exfoliated structures. An experimental study was carried out to define adequate combinations of vacuum pressure, filling time, and resin viscosity. Two manufacturing methods were investigated and compared: RTM and SCRIMP. Mechanical properties, such as flexural modulus and ultimate strength, were evaluated and compared for conventional glass fiber composites (GFC) and flax fiber biocomposites (GFBiores-C). Finally, smoke density analysis was performed to demonstrate the effects and advantages of using an environment-friendly resin combined with nanoclay particles.
Measurement of the in-plane thermal conductivity of long fiber composites by inverse analysis
ABSTRACT: In the present work, an inverse thermal analysis of heat conduction is carried out to estimate the in-plane thermal conductivity of composites. Numerical simulations were performed to determine the optimal configuration of the heating system that ensures unidirectional heat transfer in the composite sample. Composite plates made of unsaturated polyester resin and unidirectional glass fibers were fabricated by injection to validate the methodology. A heating and cooling cycle is applied to the bottom and top surfaces of the sample, and the thermal conductivity is deduced from transient temperature measurements given by thermocouples positioned at three chosen locations along the fiber direction. The inverse analysis algorithm is initiated by solving the direct problem, defined by the one-dimensional transient heat conduction equation, using a first estimate of the thermal conductivity. The criterion minimized by the algorithm is the integral over time of the squared distance between the measured and predicted values. Finally, the evolution of the in-plane composite thermal conductivity can be deduced from the experimental results by the rule of mixtures.
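A compact sketch of this inverse loop, with synthetic data standing in for the thermocouple readings (all values are placeholders rather than the paper's setup), could look like this:

```python
# Sketch of the inverse-analysis loop described above: a 1D transient
# conduction solver serves as the direct problem, and the conductivity is
# recovered by minimizing the time-integrated squared mismatch with
# (here, synthetic) thermocouple readings. All values are placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

L, nx, nt, dt = 0.01, 51, 2000, 1e-3   # sample length, grid, time stepping
rho_cp = 1.6e6                          # volumetric heat capacity, J/(m^3 K)
x = np.linspace(0.0, L, nx)
probe = [10, 25, 40]                    # indices of the three thermocouples

def direct(k):
    """Explicit FD solution of rho*cp*dT/dt = k*d2T/dx2 with a heated,
    then cooled, bottom face (x=0) and a cooled top face (x=L)."""
    T = np.zeros(nx)
    dx = x[1] - x[0]
    r = k * dt / (rho_cp * dx**2)       # stability requires r <= 0.5
    rec = []
    for n in range(nt):
        T[0] = 50.0 if n * dt < 1.0 else 0.0   # heating/cooling cycle
        T[-1] = 0.0                             # cooled top surface
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
        rec.append(T[probe].copy())
    return np.array(rec)

k_true = 0.8                                    # "unknown" conductivity
data = direct(k_true) + np.random.default_rng(1).normal(0, 0.1, (nt, 3))

# Criterion: integral over time of the squared measurement/prediction gap.
cost = lambda k: np.sum((direct(k) - data) ** 2) * dt
res = minimize_scalar(cost, bounds=(0.1, 5.0), method="bounded")
print(f"recovered k = {res.x:.3f} W/(m K) (true {k_true})")
```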
Progress in experimental and theoretical evaluation methods for textile permeability
ABSTRACT: A great deal of attention has been given to the evaluation of the permeability tensor, and several methods have been implemented for this purpose: experimental, numerical, and analytical. Numerical simulation tools for evaluating permeability are under active development, but their results are still far from matching reality. On the other hand, many problems still affect the experimental measurement of permeability, since it depends on several factors including operator skill, specimen preparation, equipment accuracy, and measurement technique. Errors in these factors may explain the inconsistent measurements that make experimental evaluation of permeability unreliable. Nevertheless, good progress was made in the second international benchmark, in which a method to measure in-plane permeability was agreed upon by 12 institutes and universities. Considerable work has been done on analytical methods, and different empirical and analytical models have emerged, though most still need improvement: some are based on the Kozeny–Carman equation, while others rely on numerical simulation or experiment to predict the macroscopic permeability. Moreover, the modeling of the permeability of unidirectional fiber beds has received the most attention, whereas prediction of fiber-bundle permeability remains limited. This paper presents a review of the available methods for evaluating the permeability of unidirectional fiber bundles and engineering fabrics, showing the progress of each method to clarify the current state of the art.
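For reference, the Kozeny–Carman relation mentioned above is commonly written for fibrous media in the following textbook form (this is the standard form from the permeability literature, not anything specific to this review), where r_f is the fiber radius, V_f the fiber volume fraction, and c an empirical Kozeny constant:

```latex
% Standard Kozeny--Carman form for a fibrous medium.
\begin{equation}
  K \;=\; \frac{r_f^{\,2}}{4c}\,\frac{(1 - V_f)^{3}}{V_f^{2}}
\end{equation}
```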
A dimensionless characteristic number for process selection and mold design in composites manufacturing: Part I — theory
ABSTRACT: The present article introduces a dimensionless number devised to assist composite engineers in the fabrication of continuous fiber composites by Liquid Composite Molding (LCM), i.e., by injecting a liquid polymer resin through a fibrous reinforcement contained in a closed mold. This dimensionless number is calculated by integrating the ratio of the injection pressure to the liquid viscosity over the cavity filling time. It is hereby called the "injectability number" and provides an evaluation of the difficulty of injecting a liquid into a porous material for a given part geometry, permeability distribution, and position of the inlet gate. The theoretical aspects behind this new concept are analyzed in Part I of the article, which demonstrates the invariance of the injectability number with respect to process parameters like constant or varying injection pressure or flow rate. Part I also details how process engineers can use the injectability number to address challenges in composite fabrication, such as process selection, mold design, and parameter optimization. Thanks to the injectability number, the optimal position of the inlet gate can be assessed and injection parameters scaled to speed up mold design. Part II of the article completes the demonstration of the novel concept by applying it to a series of LCM process examples of increasing complexity.
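From the description above, the definition can be written as follows (the notation is assumed, since the abstract does not fix symbols):

```latex
% Injectability number as described in the text; notation is assumed.
\begin{equation}
  I \;=\; \int_{0}^{t_{\mathrm{fill}}}
      \frac{P_{\mathrm{inj}}(t)}{\mu(t)}\,\mathrm{d}t
\end{equation}
```

For a constant injection pressure and constant viscosity the integral reduces to I = P_inj t_fill / mu, which makes the invariance claims straightforward to check in simple cases.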
Influence of Streptococcus pneumoniae Within-Strain Population Diversity on Virulence and Pathogenesis
The short generation time of many bacterial pathogens allows the accumulation of de novo mutations during routine culture procedures used for the preparation and propagation of bacterial stocks. Taking the major human pathogen Streptococcus pneumoniae as an example, we sought to determine the influence of standard laboratory handling of microbes on within-strain genetic diversity and explore how these changes influence virulence characteristics and experimental outcomes. A single culture of S. pneumoniae D39 grown overnight resulted in the enrichment of previously rare genotypes present in bacterial freezer stocks and the introduction of new variation to the bacterial population through the acquisition of mutations. A comparison of D39 stocks from different laboratories demonstrated how changes in bacterial population structure taking place during individual culture events can cumulatively lead to fixed, divergent change that profoundly alters virulence characteristics. The passage of D39 through mouse models of infection, a process used to standardize virulence, resulted in the enrichment of high-fitness genotypes that were originally rare (<2% frequency) in D39 culture collection stocks and the loss of previously dominant genotypes. In the most striking example, the selection of a <2%-frequency genotype carrying a mutation in sdhB, a gene thought to be essential for the establishment of lung infection, was associated with enhanced systemic virulence. Three separately passaged D39 cultures originating from the same frozen stocks showed considerable genetic divergence despite comparable virulence.
IMPORTANCE: Laboratory bacteriology involves the use of high-density cultures that we often assume to be clonal but that in reality are populations consisting of multiple genotypes at various abundances. We have demonstrated that the genetic structure of a single population of a widely used Streptococcus pneumoniae strain can be substantially altered by even short-term laboratory handling and culture and that, over time, this can lead to changes in virulence characteristics. Our findings suggest that caution should be applied when comparing data generated in different laboratories using the same strain, but also when comparing data within laboratories over time. Given the dramatic reductions in the cost of next-generation sequencing technology in recent years, we advocate for the frequent sampling and sequencing of bacterial isolate collections.
