Oxygen Deficiency Hazard (ODH) Monitoring System in the LHC Tunnel
The Large Hadron Collider (LHC), presently under construction at CERN, will contain about 100 tons of helium, mostly located in equipment in the underground tunnel and in caverns. Potential failure modes of the accelerator, which may be followed by helium discharge into the tunnel, have been identified and the corresponding helium flows calculated [1, 2, 3]. In case of a helium discharge in the tunnel causing oxygen deficiency, personnel working in the tunnel must be warned so that they can evacuate safely. This paper describes an oxygen deficiency monitoring system designed around the limited visibility imposed by the LHC tunnel curvature and the acceptable delay time between the failure and the activation of the system.
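As a rough, back-of-the-envelope illustration of the visibility constraint (the numbers and the helper below are ours, not taken from the paper), the maximum line of sight along a circularly curved tunnel follows from the sagitta relation: a straight sight line of length d over an arc of bending radius R deviates from the tunnel axis by roughly d^2/(8R), so it remains unobstructed while this deviation stays below the usable clear width w, giving d ≈ sqrt(8 R w).

import math

def line_of_sight(bend_radius_m: float, clear_width_m: float) -> float:
    """Approximate maximum sight distance along a circularly curved tunnel.

    Sagitta approximation: a chord of length d deviates from the arc by about
    d^2 / (8 R), so the sight line stays clear while that offset is below the
    usable clear width of the tunnel.
    """
    return math.sqrt(8.0 * bend_radius_m * clear_width_m)

# Illustrative, assumed values only (not from the paper): a bending radius of
# ~2800 m and ~2 m of usable width give a sight distance of roughly 210 m,
# which bounds the useful spacing of visual warning devices along the arc.
print(round(line_of_sight(2800.0, 2.0)))  # ~212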
Cryogenic and vacuum sectorisation of the LHC arcs
Following the recommendation of the LHC TC of June 20th, 1995 to introduce a separate cryogenic distribution line (QRL), which opened the possibility of a finer cryogenic and vacuum sectorisation of the LHC machine than the original 8-arc scheme, a working group was set up to study the implications: technical feasibility, advantages and drawbacks, as well as the cost of such a sectorisation (DG/DI/LE/dl, 26 July 1995). This report presents the conclusions of the Working Group. In the LHC Conceptual Design Report, ref. CERN/AC/95-05 (LHC), 20 October 1995, the so-called "Yellow Book", a complete cryostat arc (~ 2.9 km) would have to be warmed up in order to replace a defective cryomagnet. Even by coupling the two large refrigerators feeding adjacent arcs at even points to speed up the warm-up and cool-down of one arc, the minimum down-time of the machine needed to replace a cryomagnet would be more than a full month (and even 52 days with only one cryoplant). Cryogenic and vacuum sectorisation of an arc into smaller sectors is technically feasible and would reduce the down-times considerably (by one to three weeks with four sectors of 750 m in length, with respectively two or one cryoplants). In addition, sectorisation of the arcs may permit more flexible quality control and commissioning of the main machine systems, including cold testing of small magnet strings. Sectorisation, described in detail in the following paragraphs, consists essentially of installing several additional cryogenic and vacuum valves as well as some insulation vacuum barriers. Additional cryogenic valves are needed in the return lines of the circuits feeding each half-cell in order to complete the isolation of the cryoline QRL from the machine, allowing intervention (i.e. venting to atmospheric pressure) on machine sectors without affecting the rest of an arc. Secondly, and for the same purpose, special vacuum and cryogenic valves must be installed at the boundaries of machine sectors for the circuits not passing through the cryoline QRL. Finally, some additional vacuum barriers must be installed around the magnet cold masses to divide the insulation vacuum of the magnet cryostats into independent sub-sectors, making it possible to keep the cryogenically floating cold masses under insulating vacuum while a sector (or part of it) is warmed up and opened to atmosphere. A reasonable scenario of sectorisation, namely with four 650-750 m long sectors per arc, each consisting of 3 or 4 insulation vacuum sub-sectors with two to four half-cells, would represent an additional total cost of about 6.6 MCHF for the machine. It is estimated that this capital investment would be paid off by the time savings in fewer than three long unscheduled interventions such as the change of a cryomagnet.
An improved model for joint segmentation and registration based on linear curvature smoother
Image segmentation and registration are two of the most challenging tasks in medical imaging. They are closely related because both tasks are often required simultaneously. In this article, we present an improved variational model for joint segmentation and registration based on active contours without edges and the linear curvature model. The proposed model allows large deformation to occur, thereby overcoming the difficulties that other jointly performed segmentation and registration models face when encountering multiple objects in an image, their strong dependence on the initialisation, or the need for a pre-registration step, which has an impact on the segmentation results. Through different numerical results, we show that the proposed model gives correct registration results when there are different features inside the object to be segmented, or features that have clear boundaries but no fine details, cases with which the old model would not be able to cope.
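Purely as an illustration of the kind of functional involved (the exact model is not reproduced here; the weights \nu and \alpha are hypothetical), a joint model of this type can be written as a single energy over the displacement u, the level-set function \phi and the region averages c_1, c_2, combining a Chan-Vese style fitting term on the deformed template with a linear curvature regulariser:

E(u, \phi, c_1, c_2) =
    \int_\Omega \big( T(x + u(x)) - c_1 \big)^2 H(\phi(x)) \, dx
  + \int_\Omega \big( T(x + u(x)) - c_2 \big)^2 \big( 1 - H(\phi(x)) \big) \, dx
  + \nu \int_\Omega \lvert \nabla H(\phi(x)) \rvert \, dx
  + \frac{\alpha}{2} \int_\Omega \lvert \Delta u(x) \rvert^2 \, dx

Here T is the template image and H a regularised Heaviside function; the last term is the linear curvature smoother, a second-order regulariser whose kernel contains affine transformations, which is why such models can accommodate large deformations without a pre-registration step.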
A Simplified Cryogenic Distribution Scheme for the Large Hadron Collider
The Large Hadron Collider (LHC), currently under construction at CERN, will make use of superconducting magnets operating in superfluid helium below 2 K. The reference cryogenic distribution scheme was based, in each 3.3 km sector served by a cryogenic plant, on a separate cryogenic distribution line feeding elementary cooling loops corresponding to the length of a half-cell (53 m). In order to decrease the number of active components, cryogenic modules and jumper connections between the distribution line and the magnet strings, a simplified cryogenic scheme is now implemented, based on cooling loops corresponding to the length of a full-cell (107 m) and compatible with the LHC requirements. Performance and redundancy limitations are discussed with respect to the previous scheme and balanced against potential cost savings.
Mechanical design and layout of the LHC standard half-cell
The LHC Conceptual Design Report issued on 20th October 1995 [1] introduced significant changes to some fundamental features of the LHC standard half-cell, composed of one quadrupole, three dipoles and a set of corrector magnets. A separate cryogenic distribution line has been adopted, containing most of the distribution lines previously installed inside the main cryostat. The dipole length has been increased from 10 to 15 m, and independent powering of the focusing and defocusing quadrupole magnets has been chosen. Individual quench protection diodes were introduced in the magnet interconnects, and many auxiliary bus bars were added to feed in series the various families of superconducting corrector magnets. The various highly intricate basic systems, such as the cryostats and cryogenic feeders, the superconducting magnets and their electrical powering and protection, the vacuum beam screen and its cooling, and the support and alignment devices, have been redesigned, taking into account the very tight space available. These space constraints are imposed by the desire to have maximum integral bending field strength, for maximum LHC energy, in the existing LEP tunnel. Finally, cryogenic and vacuum sectorisation have been introduced to reduce down-times and facilitate commissioning.
Visualisation of KPIs in zero emission neighbourhoods for improved stakeholder participation using Virtual Reality
This paper examines the role of virtual reality in addressing the specific challenge of increasing complexity and decreasing usability when dealing with the level of detail required to model a zero emission neighbourhood (ZEN) [1]. In such neighbourhoods, there is a need to combine 'top down' neighbourhood-level data with 'bottom up' building- and material-level data. This can quickly become overwhelming, particularly when dealing with non-expert users such as planners, architects, researchers and citizens, who play a key part in the design process of future ZENs. Visualisation is an invaluable means to communicate complex data in an interactive way that makes it easier for diverse stakeholders to engage in decision making early and throughout the design process. The main purpose of this work has been to make ZEN key performance indicators (KPIs) more easily comprehensible to a diverse set of stakeholders who need to be involved in the early design phase. The paper investigates how existing extended reality (XR) technologies, such as virtual reality, can be integrated with an existing dynamic LCA method in order to provide visual feedback on KPIs in early-phase design of sustainable neighbourhoods. This existing method provides a dynamic link between the Revit BIM model and the ZEB Tool using a Dynamo plugin [2]. The results presented in this paper demonstrate how virtual reality can help to improve stakeholder participation in the early design phase and more easily integrate science-based knowledge on GHG emissions and other KPIs into the further development of the user-centered architectural and urban ZEN toolbox for the design and planning, operation and monitoring of ZENs [3].
A variational joint segmentation and registration framework for multimodal images
Image segmentation and registration are closely related image processing techniques and are often required as simultaneous tasks. In this work, we introduce an optimization-based approach to a joint registration and segmentation model for multimodal image deformation. The model combines an active contour variational term with a smoothed mutual information (MI) fitting term, and in this way resolves the difficulties of simultaneously performed segmentation and registration models for multimodal images. This combination takes into account the image structure boundaries and the movement of the objects, leading to a robust dynamic scheme that links the object boundary information as it changes over time. Comparison of our model with the state of the art shows that our method leads to more consistent registrations and more accurate results.
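Again only as a hedged sketch (notation and weights are ours, not the authors'), replacing the intensity-based fitting of the previous model by mutual information yields an energy of the form

E(u, \phi) =
    -\lambda \, \mathrm{MI}\big( R, \, T \circ (\mathrm{id} + u) \big)
  + \mu \int_\Omega \lvert \nabla H(\phi(x)) \rvert \, dx
  + \frac{\alpha}{2} \int_\Omega \lvert \nabla u(x) \rvert^2 \, dx

where MI(.,.) denotes the (smoothed) mutual information between the reference R and the deformed template, the middle term is the active-contour term on the level-set function \phi, and the last term is a generic first-order regulariser standing in for whatever smoother the model actually uses.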
Appearance-driven conversion of polygon soup building models with level of detail control for 3D geospatial applications
In many 3D applications, building models in polygon-soup representation are commonly used for visualization purposes, for example in movies and games. Their appearance is fine; geometry-wise, however, they may have limited connectivity information and internal intersections between their parts. They are therefore not well-suited to direct use in 3D geospatial applications, which usually require geometric analysis. For an input building model in polygon-soup representation, we propose a novel appearance-driven approach to interactively convert it to a two-manifold model, which is better suited for 3D geospatial applications. In addition, the level of detail (LOD) can be controlled interactively during the conversion. Because a model in polygon-soup representation is not well-suited for geometric analysis, the main idea of the proposed method is to extract the visual appearance of the input building model and use it to facilitate the conversion and LOD generation. The silhouettes are extracted and used to identify the features of the building. After this, according to the locations of these features, horizontal cross-sections are generated. We then connect two adjacent horizontal cross-sections to reconstruct the building. We control the LOD by processing the features on the silhouettes and horizontal cross-sections using a 2D approach. We also propose facilitating the conversion and LOD control by integrating a variety of rasterization methods. The results of our experiments demonstrate the effectiveness of our method.
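To make the cross-section reconstruction step concrete, the following is a minimal, hypothetical sketch (the function name and the equal-vertex-count assumption are ours, not details from the paper) of how two adjacent horizontal cross-sections can be connected into triangular side faces of a two-manifold shell:

from typing import List, Tuple

Point2 = Tuple[float, float]
Point3 = Tuple[float, float, float]

def loft_between_sections(lower: List[Point2], upper: List[Point2],
                          z_lower: float, z_upper: float) -> List[List[Point3]]:
    """Connect two horizontal cross-sections into triangular side faces.

    Simplifying assumption (not from the paper): both cross-sections are simple
    polygons with the same number of vertices in the same winding order, so
    vertex i of one section corresponds to vertex i of the other.
    """
    assert len(lower) == len(upper), "sections must have matching vertex counts"
    n = len(lower)
    faces: List[List[Point3]] = []
    for i in range(n):
        j = (i + 1) % n
        a = (*lower[i], z_lower)
        b = (*lower[j], z_lower)
        c = (*upper[j], z_upper)
        d = (*upper[i], z_upper)
        # Split the side quad (a, b, c, d) into two triangles.
        faces.append([a, b, c])
        faces.append([a, c, d])
    return faces

# Illustrative use: a square footprint extruded between two cross-sections.
square = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
walls = loft_between_sections(square, square, 0.0, 3.0)
print(len(walls))  # 8 triangles for the four side walls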
One class based feature learning approach for defect detection using deep autoencoders
Detecting defects is an integral part of any manufacturing process. Most works still utilize traditional image processing algorithms to detect defects, owing to the complexity and variety of products and manufacturing environments. In this paper, we propose an approach based on deep learning which uses autoencoders for the extraction of discriminative features. It can detect different defects without using any defect samples during training. This method, where samples of only one class (i.e. defect-free samples) are available for training, is called One Class Classification (OCC). This OCC method can also be used to train a neural network when only one golden sample is available, by generating many copies of the reference image through data augmentation. The trained model is then able to generate a descriptor, a unique feature vector of an input image. A test image captured by an Automatic Optical Inspection (AOI) camera is sent to the trained model to generate a test descriptor, which is compared with a reference descriptor to obtain a similarity score. After comparing the results of this method with a popular traditional similarity-matching method, SIFT, we find that in most cases this approach is more effective and more flexible than traditional image processing-based methods, and it can be used to detect different types of defects with minimal customization.
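The following PyTorch sketch illustrates the general idea under assumptions of ours (the architecture, image size, descriptor definition and the cosine-similarity comparison are illustrative choices, not details from the paper):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder; the bottleneck acts as the descriptor."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def descriptor(model, image):
    # Flattened bottleneck activation used as the feature vector of the image.
    with torch.no_grad():
        _, z = model(image)
    return z.flatten(1)

model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on defect-free (one-class) images only; random tensors stand in here
# for augmented copies of the golden sample.
defect_free = torch.rand(32, 1, 64, 64)
for _ in range(10):
    reconstruction, _ = model(defect_free)
    loss = F.mse_loss(reconstruction, defect_free)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At inspection time, compare the test descriptor against the reference one.
reference_image = torch.rand(1, 1, 64, 64)
test_image = torch.rand(1, 1, 64, 64)
score = F.cosine_similarity(descriptor(model, test_image),
                            descriptor(model, reference_image)).item()
print(f"similarity score: {score:.3f}")  # a low score would suggest a defect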
Towards automatic optical inspection of soldering defects
This paper proposes a method for automatic image-based classification of solder joint defects in the context of Automatic Optical Inspection (AOI) of Printed Circuit Boards (PCBs). Machine learning-based approaches are frequently used for image-based inspection. However, a main challenge is to manually create sufficiently large labeled training databases to allow for high accuracy of defect detection. Creating such large training databases is time-consuming, expensive, and often unfeasible in industrial production settings. In order to address this problem, an active learning framework is proposed which starts with only a small labeled subset of the training data. The labeled dataset is then enlarged step by step by combining K-means clustering with active user input to provide representative samples for the training of an SVM classifier. Evaluations on two databases of insufficient and shifted solder joint samples have shown that the proposed method achieves high accuracy while requiring only minimal user input. The results also demonstrate that the proposed method outperforms random and representative sampling by ~ 3.2% and ~ 2.7%, respectively, and it outperforms the uncertainty sampling method by ~ 0.5%.
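A minimal scikit-learn sketch of the cluster-then-query loop (the feature vectors, batch sizes, kernel and the simulated annotator are assumptions of ours, not details from the paper):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: feature vectors for solder-joint images plus hidden labels
# that play the role of the human annotator being queried.
X = rng.normal(size=(500, 32))
hidden_labels = (X[:, 0] > 0).astype(int)

labeled_idx = list(rng.choice(len(X), size=10, replace=False))  # small seed set

for _ in range(5):  # a few active-learning rounds
    unlabeled_idx = np.setdiff1d(np.arange(len(X)), labeled_idx)

    # Cluster the unlabeled pool and query the sample closest to each centroid,
    # i.e. one representative sample per cluster.
    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X[unlabeled_idx])
    closest, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_,
                                               X[unlabeled_idx])
    queried = unlabeled_idx[closest]

    labeled_idx.extend(queried.tolist())  # the user labels the queried samples
    classifier = SVC(kernel="rbf").fit(X[labeled_idx], hidden_labels[labeled_idx])

print("accuracy on the full pool:", classifier.score(X, hidden_labels))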
