
    Construction and analysis of causally dynamic hybrid bond graphs

    Engineering systems are frequently abstracted to models with discontinuous behaviour (such as a switch or contact); a hybrid model is one that contains both continuous and discontinuous behaviours. Bond graphs are an established physical modelling method, but several methods exist for constructing switched or ‘hybrid’ bond graphs, developed either for qualitative ‘structural’ analysis or for efficient numerical simulation of engineering systems. This article proposes a general hybrid bond graph suitable for both. The controlled junction is adopted as an intuitive way of modelling a discontinuity in the model structure. This element gives rise to ‘dynamic causality’, which is facilitated by a new bond graph notation. From this model, the junction structure and state equations are derived and compared with those obtained by existing methods. The proposed model includes all possible modes of operation and can be represented by a single set of equations. The controlled junctions manifest as Boolean variables in the coefficient matrices. The method is more compact and intuitive than existing methods and dispenses with the need to derive each mode of operation from a given reference representation. The result is a method that can reach common usage and form a platform for further study.
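    The central idea of the abstract — controlled junctions appearing as Boolean variables in the coefficient matrices, so that all modes live in one set of equations — can be illustrated with a minimal sketch. This is not the paper's model: it is a switched RC circuit in which a single mode variable lam multiplies a term of the state equation, and all parameter values are invented.

```python
def simulate(lam_schedule, dt=1e-3, R=1.0, C=1.0, u=1.0):
    """Forward-Euler simulation of a one-state switched system.

    lam_schedule: per-step Boolean mode variable (1 = junction active).
    The same equation covers both modes: dx/dt = lam * (u - x) / (R*C),
    i.e. the Boolean sits inside the coefficient, not in separate models.
    """
    x = 0.0          # capacitor voltage (state)
    trace = []
    for lam in lam_schedule:
        dx = lam * (u - x) / (R * C)   # A(lam) x + B(lam) u in one line
        x += dt * dx
        trace.append(x)
    return trace

# switch closed for 1 s, then open: charges, then holds its state
trace = simulate([1] * 1000 + [0] * 1000)
```

With lam = 0 the derivative vanishes, so the state is held exactly; there is no need to swap in a second model for the open-switch mode, which is the compactness argument the abstract makes.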

    An Optimal Number-Dependent Preventive Maintenance Strategy for Offshore Wind Turbine Blades Considering Logistics

    In offshore wind turbines, the blades are among the most critical and expensive components and suffer from different types of damage due to the harsh maritime environment and high loads. Blade damage can be categorized into two types: minor damage, which only causes a loss in wind capture without stopping the turbine, and major (catastrophic) damage, which stops the wind turbine and can only be corrected by replacement. In this paper, we propose an optimal number-dependent preventive maintenance (NDPM) strategy, in which a maintenance team is transported with an ordinary or expedited lead time to the offshore platform at the occurrence of the Nth minor damage or the first major damage, whichever comes first. The long-run expected cost of the maintenance strategy is derived, and the necessary conditions for an optimal solution are obtained. Finally, the proposed model is tested on real data collected from an offshore wind farm database, and a sensitivity analysis is conducted to evaluate the effect of changes in the model parameters on the optimal solution.
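    The long-run expected cost of a number-dependent policy like this can be approximated by renewal-reward simulation: average cost per cycle divided by average cycle length. The sketch below is a simplified illustration only — Poisson minor damages, exponential time to major damage, and made-up rates and costs — and omits the paper's distinction between ordinary and expedited lead times.

```python
import random

def cycle(N, lam_minor=1.0, lam_major=0.2,
          c_minor=1.0, c_major=10.0, c_dispatch=5.0):
    """One renewal cycle of the N-dependent policy (invented parameters).
    Ends at the Nth minor damage or the first major damage, whichever
    comes first. Returns (cycle cost, cycle length)."""
    t, cost, minors = 0.0, 0.0, 0
    while True:
        t_minor = random.expovariate(lam_minor)
        t_major = random.expovariate(lam_major)
        if t_major < t_minor:                  # major damage ends the cycle
            return cost + c_major + c_dispatch, t + t_major
        t += t_minor
        minors += 1
        cost += c_minor                        # lost wind capture / repair
        if minors == N:                        # Nth minor triggers maintenance
            return cost + c_dispatch, t

def long_run_cost(N, runs=20000, seed=0):
    """Renewal-reward estimate of the long-run cost rate for threshold N."""
    random.seed(seed)
    tot_c = tot_t = 0.0
    for _ in range(runs):
        c, t = cycle(N)
        tot_c += c
        tot_t += t
    return tot_c / tot_t
```

The optimal threshold under these toy parameters would then be, e.g., `min(range(1, 10), key=long_run_cost)`; the paper instead derives the cost function analytically and gives necessary optimality conditions.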

    An Experimental Investigation of Colonel Blotto Games

    "This article examines behavior in the two-player, constant-sum Colonel Blotto game with asymmetric resources, in which players maximize the expected number of battlefields won. The experimental results support all major theoretical predictions. In the auction treatment, where winning a battlefield is deterministic, disadvantaged players use a 'guerilla warfare' strategy which stochastically allocates zero resources to a subset of battlefields. Advantaged players employ a 'stochastic complete coverage' strategy, allocating random, but positive, resource levels across the battlefields. In the lottery treatment, where winning a battlefield is probabilistic, both players divide their resources equally across all battlefields." (author's abstract)
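    The lottery-treatment prediction (equal division by both players) has a simple closed form: under the lottery contest success function, a battlefield receiving x against y is won with probability x/(x+y), so with equal division the expected number of battlefields won is n·X/(X+Y). A quick Monte Carlo check with invented resource levels:

```python
import random

def expected_wins(X, Y, n):
    """Closed form: equal division puts X/n against Y/n on each of the n
    battlefields, each won with probability X/(X+Y); linearity of
    expectation gives n * X / (X + Y)."""
    return n * X / (X + Y)

def monte_carlo_wins(X, Y, n, rounds=20000, seed=1):
    """Simulate the lottery treatment under equal division."""
    random.seed(seed)
    x, y = X / n, Y / n
    wins = 0
    for _ in range(rounds):
        for _ in range(n):
            if random.random() < x / (x + y):
                wins += 1
    return wins / rounds

# advantaged player with 120 units vs 80 units over 8 battlefields
# closed form gives 8 * 120 / 200 = 4.8 expected battlefields won
```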

    Is Evolution of Blind Mole Rats Determined by Climate Oscillations?

    The concept of climate variability facilitating adaptive radiation, supported by the "Court Jester" hypothesis, is disputed by the "Red Queen" one, but the prevalence of one or the other might be scale-dependent. We report on a detailed, comprehensive phylogeographic study on the ~4 kb mtDNA sequence in underground blind mole rats of the family Spalacidae (or subfamily Spalacinae) from the East Mediterranean steppes. Our study aimed at testing the presence of periodicities in branching patterns on a constructed phylogenetic tree and at searching for congruence between branching events, tectonic history and paleoclimates. In contrast to the strong support for the majority of the branching events on the tree, the absence of support in a few instances indicates that network-like evolution could exist in spalacids. In our tree, robust support was given, in concordance with paleontological data, for the separation of spalacids from muroid rodents during the first half of the Miocene, when open, grass-dominated habitats were established. Marine barriers formed between Anatolia and the Balkans could have facilitated the separation of the lineage "Spalax" from the lineage "Nannospalax" and of the clade "leucodon" from the clade "xanthodon". The separation of the clade "ehrenbergi" occurred during the late stages of the tectonically induced uplift of the Anatolian high plateaus and mountains, whereas the separation of the clade "vasvarii" took place when the rapidly uplifting Taurus mountain range prevented the Mediterranean rainfalls from reaching the Central Anatolian Plateau. The separation of Spalax antiquus and S. graecus occurred when the southeastern Carpathians were uplifted. Despite the role played by tectonic events, branching events that show periodicity corresponding to 400-kyr and 100-kyr eccentricity bands illuminate the important role of orbital fluctuations on adaptive radiation in spalacids. At the given scale, our results support the "Court Jester" hypothesis over the "Red Queen" one.

    Time-dynamic effects on the global temperature when harvesting logging residues for bioenergy

    The climate mitigation potential of using logging residues (tree tops and branches) for bioenergy has been debated. In this study, a time-dependent life cycle assessment (LCA) was performed using a single-stand perspective. Three forest stands located in different Swedish climate zones were studied in order to assess the global temperature change when using logging residues for producing district heating. These systems were compared with two fossil reference systems in which the logging residues were assumed to remain in the forest to decompose over time, while coal or natural gas was used for energy. The results showed that replacing coal with logging residues gave a direct climate benefit from a single-stand perspective, while replacing natural gas gave a delayed climate benefit of around 8-12 years depending on climate zone. A sensitivity analysis showed that this time was strongly dependent on the assumptions for extraction and combustion of natural gas. The LCA showed that from a single-stand perspective, harvesting logging residues for bioenergy in the south of Sweden would give the highest temperature change mitigation potential per energy unit. However, the differences between the three climate zones studied per energy unit were relatively small. On a hectare basis, the southern forest stand would generate more biomass than the central and northern locations, and could thereby replace more fossil fuel and give larger climate benefits.

    A new approach for developing continuous age-depth models from dispersed chronologic data: applications to the Miocene Santa Cruz formation, Argentina

    Traditional methods (linear regression, spline fitting) of age-depth modeling generate overly optimistic confidence intervals. Originally developed for ¹⁴C dating, Bayesian models allow the incorporation of prior information — about superposition of dated horizons, stratigraphic position of undated points, and variations in sedimentology and sedimentation rate — into model fitting. We modified the methodology of two Bayesian age-depth models, Bchron (Haslett and Parnell, 2008) and OxCal (Ramsey, 2008), for use with U-Pb dates. Some practical implications of this approach include: a) model age uncertainties increase in intervals that lack closely spaced age constraints; b) models do not assume normal distributions, allowing for the non-symmetric uncertainties of sometimes complex crystal age probability functions in volcanic tuffs; c) superpositional constraints can objectively reject some cases of zircon inheritance and mitigate apparent age complexities. We use this model to produce an age-depth model with continuous and realistic uncertainties for the early Miocene Santa Cruz Formation (SCF), Argentina.
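    A toy version of the superposition constraint (not Bchron or OxCal themselves) can be written as rejection sampling: draw an age for each dated horizon from its own, possibly asymmetric, distribution and keep only joint draws in stratigraphic order. The horizon ages and uncertainties below are invented; real crystal age probability functions would replace the Gaussian samplers.

```python
import random

def sample_ordered_ages(age_draws, n=2000, seed=2):
    """Rejection sampler enforcing superposition.

    age_draws: one age sampler per dated horizon, ordered shallowest to
    deepest. Joint draws are kept only if age strictly increases
    down-section, which is how stratigraphic order prunes outliers
    (e.g. inherited zircon ages) from the posterior.
    """
    random.seed(seed)
    kept = []
    while len(kept) < n:
        ages = [draw() for draw in age_draws]
        if all(a < b for a, b in zip(ages, ages[1:])):
            kept.append(ages)
    return kept

# three dated tuffs, shallowest first (ages in Ma, invented);
# asymmetric samplers could be substituted for gauss here
draws = [lambda: random.gauss(17.0, 0.2),
         lambda: random.gauss(17.5, 0.2),
         lambda: random.gauss(18.1, 0.3)]
posterior = sample_ordered_ages(draws)
```

Ages for undated depths would then be interpolated within each kept draw, so intervals far from dated horizons naturally inherit wider uncertainties — implication (a) in the abstract.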

    Efficiency of two-phase methods with focus on a planned population-based case-control study on air pollution and stroke

    Background: We plan to conduct a case-control study to investigate whether exposure to nitrogen dioxide (NO₂) increases the risk of stroke. In case-control studies, selective participation can lead to bias and loss of efficiency. A two-phase design can reduce bias and improve efficiency by combining information on the non-participating subjects with information from the participating subjects. In our planned study, we will have access to individual disease status and data on NO₂ exposure at group (area) level for a large population sample of Scania, southern Sweden. A smaller sub-sample will be selected for the second phase for individual-level assessment of exposure and covariables. In this paper, we simulate a case-control study based on our planned study, develop a two-phase method for it, and compare the performance of our method with that of other two-phase methods.
    Methods: A two-phase case-control study was simulated with a varying number of first- and second-phase subjects, and four estimation methods were compared. Method 1: effect estimation with second-phase data only. Method 2: effect estimation by adjusting the first-phase estimate with the difference between the adjusted and unadjusted second-phase estimates; the first-phase estimate is based on individual disease status and residential address for all study subjects, linked to register data on NO₂ exposure for each geographical area. Method 3: effect estimation using the expectation-maximization (EM) algorithm without taking area-level register data on exposure into account. Method 4: effect estimation using the EM algorithm and incorporating group-level register data on NO₂ exposure.
    Results: The simulated scenarios were such that unbiased or marginally biased (< 7%) odds ratio (OR) estimates were obtained with all methods. The efficiency of method 4 was generally higher than that of methods 1 and 2. The standard errors in method 4 decreased further when the case/control ratio was above one in the second phase. For all methods, the standard errors were not substantially reduced when the number of first-phase controls was increased.
    Conclusion: In the setting described here, method 4 performed best at improving efficiency while adjusting for varying participation rates across areas.
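    One reading of the Method 2 correction described above is on the log-odds scale: shift the crude phase-1 estimate by the confounding correction observed in phase 2, log OR = log OR₁ + (log OR₂,adj − log OR₂,crude). The sketch below is an assumption-laden illustration, not the paper's estimator: it uses made-up 2×2 tables and Mantel-Haenszel as the phase-2 adjusted estimator.

```python
import math

def odds_ratio(a, b, c, d):
    """Crude odds ratio of a 2x2 table (a,b = exposed/unexposed cases;
    c,d = exposed/unexposed controls)."""
    return (a * d) / (b * c)

def mantel_haenszel(strata):
    """Mantel-Haenszel covariate-adjusted OR over a list of 2x2 strata."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def method2(or_phase1, strata_phase2):
    """Adjust the crude phase-1 OR by the adjusted-vs-crude difference
    seen in the phase-2 sub-sample (log-OR scale)."""
    collapsed = [sum(x) for x in zip(*strata_phase2)]   # pool the strata
    or2_crude = odds_ratio(*collapsed)
    or2_adj = mantel_haenszel(strata_phase2)
    return math.exp(math.log(or_phase1)
                    + math.log(or2_adj) - math.log(or2_crude))
```

When the phase-2 strata show no confounding (adjusted equals crude), the correction vanishes and the phase-1 estimate passes through unchanged.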

    Measures and models for causal inference in cross-sectional studies: arguments for the appropriateness of the prevalence odds ratio and related logistic regression

    Background: Several papers have discussed which effect measures are appropriate to capture the contrast between exposure groups in cross-sectional studies, and which related multivariate models are suitable. Although some have favored the Prevalence Ratio over the Prevalence Odds Ratio -- thus suggesting the use of log-binomial or robust Poisson models instead of logistic regression -- this debate is still far from settled and requires close scrutiny.
    Discussion: In order to evaluate how accurately true causal parameters such as the Incidence Density Ratio (IDR) or the Cumulative Incidence Ratio (CIR) are estimated, this paper presents a series of scenarios in which a researcher happens to find a preset ratio of prevalences in a given cross-sectional study. Results show that, provided essential and non-waivable conditions for causal inference are met, the CIR is most often inestimable, whether through the Prevalence Ratio or the Prevalence Odds Ratio, and that the latter is the measure that consistently yields an appropriate estimate of the Incidence Density Ratio.
    Summary: Multivariate regression models should be avoided when the assumptions for causal inference from cross-sectional data do not hold. Nevertheless, if these assumptions are met, it is the logistic regression model that is best suited for this task, as it provides a suitable estimate of the Incidence Density Ratio.
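    The steady-state argument behind this conclusion is short: prevalence odds = incidence density × mean disease duration, so if duration does not differ by exposure, the Prevalence Odds Ratio equals the IDR, while the Prevalence Ratio does not. A worked example with invented numbers:

```python
def prevalence_odds(incidence_density, mean_duration):
    # steady-state identity: P / (1 - P) = ID * D
    return incidence_density * mean_duration

def prevalence(odds):
    # convert odds back to a proportion
    return odds / (1 + odds)

id_exp, id_unexp = 0.02, 0.01   # cases per person-year (invented)
duration = 5.0                  # mean duration in years, equal by assumption

por = (prevalence_odds(id_exp, duration)
       / prevalence_odds(id_unexp, duration))
idr = id_exp / id_unexp         # true causal parameter: 2.0

pr = (prevalence(prevalence_odds(id_exp, duration))
      / prevalence(prevalence_odds(id_unexp, duration)))
```

Here the POR recovers the IDR exactly, while the PR is attenuated toward 1 — the durations in the odds cancel, but the 1/(1+odds) terms in the prevalences do not.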