
    Motorcycle helmets: what about their coating?

    In traffic accidents involving motorcycles, paint traces can be transferred from the rider's helmet or smeared onto its surface. These traces are usually in the form of chips or smears and are frequently collected for comparison purposes. This research investigates the physical and chemical characteristics of the coatings found on motorcycle helmets. An evaluation of the similarities between helmet and automotive coating systems was also performed. Twenty-seven helmet coatings from 15 different brands and 22 models were considered. One sample per helmet was collected and observed using optical microscopy. FTIR spectroscopy was then used, with seven replicate measurements per layer, to study the variability of each coating system (intravariability). Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were also performed on the infrared spectra of the clearcoats and basecoats of the data set. The most common systems were composed of two or three layers, consistently involving a clearcoat and a basecoat. The coating systems of helmets with composite shells systematically contained a minimum of three layers. FTIR spectroscopy showed that acrylic urethane and alkyd urethane were the most frequent binders used for clearcoats and basecoats. A high proportion of the coatings (more than 95%) were differentiated based on microscopic examinations. The chemical and physical characteristics of the coatings allowed the differentiation of all but one pair of helmets of the same brand, model and color. Chemometrics (PCA and HCA) corroborated the classification based on visual comparisons of the spectra and allowed the whole data set (i.e., all spectra of the same layer) to be studied at once. Thus, the intravariability of each helmet and its proximity to the others (intervariability) could be more readily assessed. It was also possible to determine the most discriminative chemical variables from the PCA loadings.
Chemometrics could therefore be used as a complementary decision-making tool when many spectra and replicates have to be taken into account. Similarities between automotive and helmet coating systems were highlighted, in particular with regard to automotive coating systems on plastic substrates (microscopy and FTIR). However, the primer layer of helmet coatings was shown to differ from automotive primers. If the paint trace contains this layer, the risk of misclassification (i.e., helmet versus vehicle) is reduced. Nevertheless, a paint examiner should pay close attention to these similarities when analyzing paint traces, especially smears or paint chips presenting an incomplete layer system.
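
The PCA-then-HCA workflow described above can be sketched in a few lines. Everything here is synthetic and illustrative: the spectrum shapes, the two simulated binder classes, and the seven-replicate design are assumptions standing in for the paper's FTIR data, not the data itself.

```python
# Illustrative chemometric workflow: PCA on replicate IR spectra, then
# hierarchical clustering on the PCA scores. All spectra are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_wavenumbers = 200

# Two hypothetical coating chemistries, seven replicate spectra each,
# simulated as a base spectrum plus small measurement noise.
base_a = rng.random(n_wavenumbers)
base_b = rng.random(n_wavenumbers)
spectra = np.vstack(
    [base_a + 0.01 * rng.standard_normal(n_wavenumbers) for _ in range(7)]
    + [base_b + 0.01 * rng.standard_normal(n_wavenumbers) for _ in range(7)]
)

# PCA compresses the spectra; the loadings indicate which wavenumbers
# (chemical variables) drive the separation between coatings.
pca = PCA(n_components=3)
scores = pca.fit_transform(spectra)

# HCA on the scores groups replicates of one coating (intravariability)
# and separates different coatings (intervariability).
tree = linkage(scores, method="ward")
clusters = fcluster(tree, t=2, criterion="maxclust")
print(clusters)  # expected: replicates of the same simulated coating share a label
```

Inspecting `pca.components_` (the loadings) would then point to the most discriminative wavenumbers, mirroring the loading analysis mentioned in the abstract.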

    Discussion on how to implement a verbal scale in a forensic laboratory: benefits, pitfalls and suggestions to avoid misunderstandings

    In a recently published guideline for evaluative reporting in forensic science, the European Network of Forensic Science Institutes (ENFSI) recommended the use of the likelihood ratio for measuring the value of forensic results. As a device to communicate probative value, the ENFSI guideline mentions the possibility of defining and using a verbal scale, which should be unified within a forensic institution. This paper summarizes discussions held between scientists of our institution to develop and implement such a verbal scale. It intends to contribute to the general discussions likely to be faced by any forensic institution that engages in continuously monitoring and improving its evaluation and reporting format. We first present published arguments in favour of the use of such verbal qualifiers. We emphasize that verbal qualifiers do not replace the use of numbers to evaluate forensic findings, but are useful for communicating probative value, since weights of evidence expressed as likelihood ratios are still apprehended with difficulty both by forensic scientists, especially in the absence of hard data, and by recipients of information. We further present arguments that support the development of the verbal scale we propose. Recognising the limits of such a verbal scale, we then discuss its disadvantages: it may lead to the spurious view that the value of the observations made in a given case is relative to other cases. Verbal qualifiers are also prone to misunderstandings and cannot be coherently combined with other evidence. We therefore recommend not using the verbal qualifier alone in a written statement. While scientists should only report on the probability of the findings - and not on the probability of the propositions, which is the duty of the Court - we suggest showing examples to let the recipient of information understand how the scientific evidence affects the probabilities of the propositions.
To avoid misunderstandings, we also advise mentioning in the statement what the results do not mean. Finally, we are of the opinion that if experts were able to coherently articulate numbers, and if recipients of information could properly handle such numbers, then verbal qualifiers could be abandoned completely. At that point, numerical expressions of probative value would be appropriately understood, just like other numerical measures that most of us understand without further explanation, such as expressions of length or temperature.
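
A verbal scale of the kind discussed above is, mechanically, just a mapping from likelihood-ratio ranges to phrases. The sketch below is purely illustrative: the band edges and wording are assumptions, not the scale adopted by the authors' institution or by the ENFSI guideline, and (as the abstract stresses) the phrase accompanies the number rather than replacing it.

```python
# Illustrative mapping from likelihood-ratio ranges to verbal qualifiers.
# Band edges and phrases are hypothetical, not an endorsed scale.
def verbal_qualifier(lr: float) -> str:
    """Map a likelihood ratio (support for the first proposition over the
    alternative) to an illustrative verbal phrase."""
    if lr <= 0:
        raise ValueError("a likelihood ratio must be positive")
    if lr < 1:
        # LRs below 1 support the alternative proposition; report the
        # reciprocal in favour of that proposition instead.
        return verbal_qualifier(1 / lr) + " (for the alternative proposition)"
    bands = [
        (10, "weak support"),
        (100, "moderate support"),
        (10_000, "strong support"),
        (float("inf"), "very strong support"),
    ]
    for upper, phrase in bands:
        if lr < upper:
            return phrase

# The verbal phrase accompanies the number; it never replaces it.
lr = 2500.0
print(f"LR = {lr:g}: {verbal_qualifier(lr)} for the first proposition")
```

Keeping both the number and the phrase in the statement, plus an explicit note on what the result does not mean, follows the reporting advice in the abstract.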

    Toward Forecasting Volcanic Eruptions using Seismic Noise

    During inter-eruption periods, magma pressurization yields subtle changes in the elastic properties of volcanic edifices. We use the reproducibility properties of the ambient seismic noise recorded on the Piton de la Fournaise volcano to measure relative seismic velocity variations of less than 0.1% with a temporal resolution of one day. Our results show that the five studied volcanic eruptions were preceded by clearly detectable seismic velocity decreases within the zone of magma injection. These precursors reflect the edifice dilatation induced by magma pressurization and can be useful indicators for improving the forecasting of volcanic eruptions.
    Comment: Supplementary information: http://www-lgit.obs.ujf-grenoble.fr/~fbrengui/brenguier_SI.pdf Supplementary video: http://www-lgit.obs.ujf-grenoble.fr/~fbrengui/brenguierMovieVolcano.av
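
The measurement principle behind such sub-0.1% velocity variations can be illustrated with the standard "stretching" idea: a small relative velocity change dv/v appears as a uniform time dilation of the noise-correlation coda and is recovered by grid-searching the stretch factor that best matches a current trace to a reference. The waveform, noise-free setup, and the 0.05% perturbation below are synthetic assumptions, not the Piton de la Fournaise data or the authors' exact processing.

```python
# Toy stretching measurement of dv/v on a synthetic correlation coda.
import numpy as np

t = np.linspace(0.0, 60.0, 3001)                      # lapse time [s]
ref = np.sin(2 * np.pi * 0.5 * t) * np.exp(-t / 30.0)  # reference coda

true_dvv = -5e-4  # a -0.05 % velocity change (decrease before eruption)
# dv/v = -dt/t: a velocity decrease delays arrivals, i.e. the current
# trace equals the reference evaluated at t * (1 + dv/v).
cur = np.interp(t * (1 + true_dvv), t, ref)

# Grid search over candidate dv/v values; the best candidate maximizes
# the correlation coefficient between stretched reference and trace.
candidates = np.linspace(-2e-3, 2e-3, 401)
cc = [np.corrcoef(np.interp(t * (1 + e), t, ref), cur)[0, 1] for e in candidates]
est_dvv = candidates[int(np.argmax(cc))]
print(f"estimated dv/v = {est_dvv * 100:.3f} %")
```

With daily correlation functions, repeating this measurement day by day yields the one-day temporal resolution quoted in the abstract.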

    Arctic sea-ice-free season projected to extend into autumn

    The recent Arctic sea ice reduction comes with an increase in the ice-free season duration, with comparable contributions from earlier ice retreat and later advance. CMIP5 models all project that the trend towards later advance should progressively exceed, and ultimately double, the trend towards earlier retreat, causing the ice-free season to shift into autumn. We show that such a shift is a basic feature of the thermodynamic response of seasonal ice to warming. A detailed analysis of an idealised thermodynamic ice–ocean model stresses the role of two seasonal amplifying feedbacks. The summer feedback generates a 1.6-day-later advance in response to a 1-day-earlier retreat. The underlying physics is that, right before ice advance, the upper ocean absorbs solar radiation more efficiently than it can release heat. The winter feedback is comparatively weak, prompting a 0.3-day-earlier retreat in response to a 1-day shift towards later advance. This is because a shorter growth season implies thinner ice, which subsequently melts away faster. However, the winter feedback is dampened by the relatively long ice growth period and by the inverse relationship between ice growth rate and thickness. At inter-annual timescales, the thermodynamic response of ice seasonality to warming is obscured by inter-annual variability. Nevertheless, in the long term, because all feedback mechanisms relate to basic and stable elements of the Arctic climate system, there is little inter-model uncertainty in the projected long-term shift of the ice-free season into autumn.
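
The two quoted gains can be composed in a back-of-envelope way, under the illustrative assumption that the linear feedbacks iterate: each day of earlier retreat causes 1.6 days of later advance, each day of later advance causes 0.3 days of earlier retreat, and the loop converges as a geometric series because the loop gain is below one. This is arithmetic on the abstract's numbers, not the idealised model itself.

```python
# Geometric-series composition of the two seasonal feedbacks (illustrative).
summer_gain = 1.6  # days of later advance per day of earlier retreat
winter_gain = 0.3  # days of earlier retreat per day of later advance
loop_gain = summer_gain * winter_gain  # 0.48 < 1, so the loop converges

# A 1-day forced shift towards earlier retreat amplifies as a geometric
# series: total retreat shift = 1/(1 - g), total advance shift = 1.6/(1 - g).
total_retreat = 1 / (1 - loop_gain)
total_advance = summer_gain / (1 - loop_gain)
print(f"retreat shift: {total_retreat:.2f} d, advance shift: {total_advance:.2f} d")
```

The advance shift ends up summer_gain times the retreat shift, consistent with the projected asymmetry between later advance and earlier retreat.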

    The relationships between regional Quaternary uplift, deformation across active normal faults and historical seismicity in the upper plate of subduction zones: The Capo D’Orlando Fault, NE Sicily

    In order to investigate deformation within the upper plate of the Calabrian subduction zone we have mapped and modelled a sequence of Late Quaternary palaeoshorelines tectonically deformed by the Capo D’Orlando normal fault, NE Sicily, which forms part of the actively deforming Calabrian Arc. In addition to the 1908 Messina Strait earthquake (Mw 7.1), this region has experienced damaging earthquakes, possibly on the Capo D’Orlando Fault; nevertheless, some do not consider the fault a potential seismogenic source. Uplifted Quaternary palaeoshorelines are preserved on the hangingwall of the Capo D’Orlando Fault, indicating that hangingwall subsidence is counteracted by regional uplift, likely because of deformation associated with subduction/collision. We attempt to constrain the relationship between regional uplift, crustal extensional processes and historical seismicity, and we quantify both the normal-faulting and regional deformation signals. We report uplift variations along the strike of the fault and use a synchronous correlation technique to assign ages to palaeoshorelines, facilitating calculation of uplift rates and the fault throw-rate. Uplift rates in the hangingwall increase from 0.4 mm/yr in the centre of the fault to 0.89 mm/yr beyond its SW fault tip, suggesting 0.5 mm/yr of fault-related subsidence, which implies a throw-rate of 0.63 ± 0.02 mm/yr, and significant seismic hazard. Overall, we emphasise that upper plate extension and related vertical motions complicate the process of deriving information on the subduction/collision process, such as coupling and slip distribution on the subduction interface, parameters that are commonly inferred for other subduction zones without considering upper plate deformation.
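
The decomposition of the vertical-motion signal reduces to simple arithmetic on the abstract's numbers. The one stated assumption, made explicit below, is that the uplift rate beyond the fault tip represents the regional signal unperturbed by the fault.

```python
# Decomposing hangingwall uplift into regional uplift minus fault-related
# subsidence, using only the rates quoted in the abstract (illustrative).
regional_uplift = 0.89    # mm/yr, uplift beyond the SW fault tip (assumed regional)
hangingwall_uplift = 0.4  # mm/yr, uplift at the fault centre

# Fault-related subsidence is what the fault removes from regional uplift.
fault_subsidence = regional_uplift - hangingwall_uplift
print(f"fault-related subsidence ~ {fault_subsidence:.2f} mm/yr")  # ~0.5 mm/yr
```

The quoted throw-rate (0.63 ± 0.02 mm/yr) exceeds this subsidence because fault throw also includes footwall motion relative to the hangingwall.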

    Asperities and barriers on the seismogenic zone in North Chile: state-of-the-art after the 2007 Mw 7.7 Tocopilla earthquake inferred by GPS and InSAR data

    The Mw 7.7 2007 November 14 earthquake had an epicentre located close to the city of Tocopilla, at the southern end of a known seismic gap in North Chile. Through modelling of Global Positioning System (GPS) and radar interferometry (InSAR) data, we show that this event ruptured the deeper part of the seismogenic interface (30–50 km) and did not reach the surface. The rupture propagated from the hypocentre and was arrested ~150 km south, beneath the Mejillones Peninsula, an area already identified as an important structural barrier between two segments of the Peru–Chile subduction zone. Our preferred models for the Tocopilla main shock show slip concentrated in two main asperities, consistent with previous inversions of seismological data. Slip appears to have propagated towards relatively shallow depths at its southern extremity, under the Mejillones Peninsula. Our analysis of post-seismic deformation suggests that small but still significant post-seismic slip occurred within the first 10 days after the main shock, and that it was mostly concentrated at the southern end of the rupture. The post-seismic deformation occurring in this period represents ~12–19 per cent of the coseismic deformation, of which ~30–55 per cent was released aseismically. Post-seismic slip appears to concentrate within regions that exhibit low coseismic slip, suggesting that the afterslip distribution during the first month of the post-seismic interval complements the coseismic slip. The 2007 Tocopilla earthquake released only ~2.5 per cent of the moment deficit accumulated on the interface during the past 130 yr and may be regarded as a possible precursor of a larger subduction earthquake partially or completely rupturing the 500-km-long North Chile seismic gap.
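
The "~2.5 per cent of the moment deficit" figure can be put in perspective with the standard moment-magnitude relation M0 = 10^(1.5 Mw + 9.1) N·m. The implied gap-filling magnitude below is only a back-of-envelope illustration of the abstract's numbers, not a result from the paper.

```python
# Convert Mw 7.7 to seismic moment, scale up by the quoted 2.5 %, and
# convert the implied moment deficit back to a magnitude (illustrative).
import math

def moment_from_mw(mw: float) -> float:
    """Seismic moment in N·m from moment magnitude (standard relation)."""
    return 10 ** (1.5 * mw + 9.1)

def mw_from_moment(m0: float) -> float:
    """Moment magnitude from seismic moment in N·m."""
    return (math.log10(m0) - 9.1) / 1.5

m0_tocopilla = moment_from_mw(7.7)
deficit = m0_tocopilla / 0.025  # Tocopilla released ~2.5 % of the deficit
print(f"implied gap-filling magnitude ~ Mw {mw_from_moment(deficit):.1f}")
```

An event filling the remaining deficit would thus be in the great-earthquake range, consistent with the abstract's reading of Tocopilla as a possible precursor.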

    The gravitational wave detector VIRGO

    International audience.

    The Virgo data acquisition system

    International audience.

    Contrasting responses of mean and extreme snowfall to climate change

    Snowfall is an important element of the climate system, and one that is expected to change in a warming climate. Both mean snowfall and the intensity distribution of snowfall are important, with heavy snowfall events having particularly large economic and human impacts. Simulations with climate models indicate that annual mean snowfall declines with warming in most regions but increases in regions with very low surface temperatures. The response of heavy snowfall events to a changing climate, however, is unclear. Here I show that in simulations with climate models under a scenario of high emissions of greenhouse gases, by the late twenty-first century there are smaller fractional changes in the intensities of daily snowfall extremes than in mean snowfall over many Northern Hemisphere land regions. For example, for monthly climatological temperatures just below freezing and surface elevations below 1,000 metres, the 99.99th percentile of daily snowfall decreases by 8% in the multimodel median, compared to a 65% reduction in mean snowfall. Both mean and extreme snowfall must decrease for a sufficiently large warming, but the climatological temperature above which snowfall extremes decrease with warming in the simulations is as high as −9 °C, compared to −14 °C for mean snowfall. These results are supported by a physically based theory that is consistent with the observed rain–snow transition. According to the theory, snowfall extremes occur near an optimal temperature that is insensitive to climate warming, and this results in smaller fractional changes for higher percentiles of daily snowfall. The simulated changes in snowfall that I find would influence surface snow and its hazards; these changes also suggest that it may be difficult to detect a regional climate-change signal in snowfall extremes.
    Funding: National Science Foundation (U.S.) (Grant AGS-1148594); United States. National Aeronautics and Space Administration (ROSES Grant 09-IDS09-0049)
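
The core comparison, fractional change in the mean versus fractional change in a high percentile of daily snowfall, can be sketched directly. The gamma-distributed daily values and the imposed warming response below are synthetic assumptions chosen only to mimic the quoted magnitudes (extremes down ~8%, mean down ~65%); they are not model output.

```python
# Fractional change of mean vs. 99.99th-percentile daily snowfall between a
# synthetic control climate and a synthetic warmed climate (illustrative).
import numpy as np

rng = np.random.default_rng(1)
n_days = 200_000

# Control climate: daily snowfall drawn from a gamma distribution.
control = rng.gamma(shape=0.5, scale=4.0, size=n_days)
# Warmed climate: the bulk of snowfall shrinks strongly (driving the mean
# down), while the largest events shrink much less, mimicking extremes
# that occur near a warming-insensitive optimal temperature.
warmed = np.where(control > np.quantile(control, 0.999),
                  control * 0.92,   # extremes reduced by ~8 %
                  control * 0.35)   # mean-dominating bulk reduced by ~65 %

def frac_change(a, b, q=None):
    """Fractional change of b relative to a, for the mean or quantile q."""
    if q is None:
        return b.mean() / a.mean() - 1
    return np.quantile(b, q) / np.quantile(a, q) - 1

print(f"mean change:  {frac_change(control, warmed) * 100:.0f} %")
print(f"99.99th pct.: {frac_change(control, warmed, 0.9999) * 100:.0f} %")
```

The high percentile changes far less than the mean, which is the signature the abstract reports across the Northern Hemisphere simulations.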