Improving droplet sizing methodology for spray dynamics investigation
Spray modelling is one of the most useful techniques to characterize engine performance, efficiency and emissions. Droplet size is one of the key variables that govern the combustion efficiency of the liquid fuel. This study aims to develop an interactive MATLAB tool that identifies droplets and their sizes from images taken with a long-distance microscope in a spray chamber setup. In the developed method, the background of the image was first removed, and the image processing operations of dilation and erosion were then applied to refine the image files. Subsequently, a circle detection method based on the Hough transform algorithm, implemented with the imfindcircles function, was applied. This function allows the user to identify droplets and their sizes from the image files. A statistical study was conducted on the results generated automatically by the MATLAB program using different sets of black-and-white contrast threshold values. The results showed an optimal range for the black-and-white threshold values between 40 and 70, established by weighing correct against incorrect identification of the droplets. The results indicate that the program is able to identify the droplets and report their sizes and numbers. The program was built with the MATLAB Compiler and can be used on different workstations.
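The pipeline described above can be approximated outside MATLAB as well. The following is a minimal Python/OpenCV sketch of the same sequence (background removal, black-and-white thresholding, erosion/dilation, Hough-transform circle detection); the file name, default threshold and radius limits are illustrative assumptions rather than values from the study, whose tool is built around MATLAB's imfindcircles.

```python
# Minimal Python/OpenCV sketch of the droplet-sizing pipeline described above.
# File name, threshold default and radius limits are illustrative assumptions.
import cv2
import numpy as np

def detect_droplets(image_path, bw_threshold=55, min_radius=3, max_radius=50):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Background removal: subtract a heavily blurred copy of the image.
    background = cv2.GaussianBlur(gray, (51, 51), 0)
    foreground = cv2.absdiff(gray, background)

    # Black-and-white thresholding (the study's optimal range is roughly 40-70).
    _, bw = cv2.threshold(foreground, bw_threshold, 255, cv2.THRESH_BINARY)

    # Morphological refinement: erosion removes noise, dilation restores droplet area.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    bw = cv2.dilate(cv2.erode(bw, kernel), kernel)

    # Circle detection via the Hough transform (OpenCV analogue of imfindcircles).
    circles = cv2.HoughCircles(bw, cv2.HOUGH_GRADIENT, dp=1, minDist=2 * min_radius,
                               param1=100, param2=15,
                               minRadius=min_radius, maxRadius=max_radius)
    if circles is None:
        return np.empty((0, 3))
    return circles[0]                  # each row: (x, y, radius) in pixels

droplets = detect_droplets("spray_frame.png")   # hypothetical image file
print(f"{len(droplets)} droplets detected")
if len(droplets):
    print(f"mean radius: {droplets[:, 2].mean():.1f} px")
```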
Accidental release of Liquefied Natural Gas in a processing facility: Effect of equipment congestion level on dispersion behaviour of the flammable vapour
An accidental leakage of Liquefied Natural Gas (LNG) can occur during production, storage and transportation. LNG has a complex dispersion characteristic after release into the atmosphere. This complex behaviour demands a detailed description of the scientific phenomena involved in the dispersion of the released LNG. Moreover, a fugitive LNG leakage may remain undetected in complex geometry, usually in semi-confined or confined areas, and is prone to fire and explosion events. To identify the location of potential fire and/or explosion events resulting from accidental leakage and dispersion of LNG, dispersion modelling of the leakage is essential. This study proposes a methodology comprising release scenarios, credible leak size, simulation, and comparison of congestion level and mass of flammable vapour for modelling the dispersion of a small leakage of LNG and its vapour in a typical layout using a Computational Fluid Dynamics (CFD) approach. The methodology is applied to a case study considering a small leakage of LNG at three levels of equipment congestion. The potential fire and/or explosion hazard of small leaks is assessed considering both time-dependent concentration analysis and an area-based model. The mass of flammable vapour is estimated in each case, and the effect of equipment congestion on source terms and dispersion characteristics is analysed. The results demonstrate that a small leak of LNG can create hazardous scenarios for a fire and/or explosion event. It is also revealed that a higher degree of equipment congestion increases the retention time of the vapour and intensifies the formation of pockets of isolated vapour cloud. This study would help in designing appropriate leak and dispersion detection systems, effective monitoring procedures and risk assessment.
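The dispersion fields themselves require a CFD solver, but the flammable-vapour-mass metric used to compare congestion levels reduces to a simple post-processing step over the resolved concentration field. The sketch below illustrates that step under stated assumptions (ideal-gas mixture, methane flammability limits of roughly 5-15% v/v, and a synthetic concentration field and cell size in place of exported CFD data).

```python
# Sketch of the flammable-vapour-mass estimate: sum the methane mass in every
# CFD cell whose mole fraction lies between the flammability limits.
# The concentration field and cell volume below are synthetic placeholders.
import numpy as np

LFL, UFL = 0.05, 0.15                 # methane flammability limits (mole fraction)
M_CH4 = 16.04e-3                      # molar mass of methane, kg/mol
P, T, R = 101325.0, 288.0, 8.314      # ambient pressure (Pa), temperature (K), gas constant

def flammable_mass(mole_fraction, cell_volume):
    """Total mass (kg) of methane held in cells within the flammable range."""
    flammable = (mole_fraction >= LFL) & (mole_fraction <= UFL)
    molar_density = P / (R * T)                       # mol of mixture per m^3 (ideal gas)
    ch4_moles = mole_fraction * molar_density * cell_volume
    return np.sum(ch4_moles[flammable]) * M_CH4

# Synthetic example: a 3-D mole-fraction field on a uniform grid of 0.5 m cells.
field = np.random.default_rng(0).uniform(0.0, 0.2, size=(40, 40, 20))
print(f"flammable vapour mass: {flammable_mass(field, 0.5**3):.2f} kg")
```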
Reliability Assessment of Main Engine Subsystems Considering Turbocharger Failure as a Case Study
Safe operation of a merchant vessel is dependent on the reliability of the vessel's main propulsion engine. The reliability of the main propulsion engine is in turn dependent on the reliability of several subsystems, including the lubricating oil system, fuel oil system, cooling water system and scavenge air system. Turbochargers form part of the scavenge air subsystem and play a vital role in the operation of the main engine. Failure of turbochargers can lead to disastrous consequences and immobilisation of the main engine. Hence, due consideration needs to be given to the reliability assessment of the scavenge system while assessing the reliability of the main engine. This paper presents an integration of a Markov model (for constant-failure-rate components) and a Weibull failure model (for wearing-out components) to estimate the reliability of the main propulsion engine. This integrated model provides a more realistic and practical analysis. It will serve as a useful tool to estimate the reliability of the vessel's main propulsion engine and to make efficient and effective maintenance decisions. A case study of turbocharger failure and its impact on the main engine is also discussed.
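A minimal sketch of the integration idea is given below: subsystems with constant failure rates are represented by exponential reliability functions (the limiting case of the Markov model), the wearing-out turbocharger by a Weibull reliability function, and the main engine by their series product. All failure rates and Weibull parameters are illustrative assumptions, not values from the paper.

```python
# Sketch: combine exponential (constant-failure-rate) subsystem models with a
# Weibull model for a wearing-out component (the turbocharger), in series.
# All parameter values below are illustrative assumptions.
import numpy as np

# Constant-failure-rate subsystems (failures per hour)
constant_rate_subsystems = {
    "lubricating oil system": 2.0e-5,
    "fuel oil system":        3.0e-5,
    "cooling water system":   1.5e-5,
}

# Wearing-out component: Weibull shape (beta > 1 implies wear-out) and scale (hours)
turbocharger = {"beta": 2.3, "eta": 60_000.0}

def engine_reliability(t_hours):
    """Series-system reliability of the main engine at time t (hours)."""
    r_constant = np.exp(-sum(constant_rate_subsystems.values()) * t_hours)
    r_weibull = np.exp(-(t_hours / turbocharger["eta"]) ** turbocharger["beta"])
    return r_constant * r_weibull

for t in (5_000, 10_000, 20_000):
    print(f"R({t} h) = {engine_reliability(t):.3f}")
```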
Extensive chemical characterization of a heavy fuel oil
This paper presents procedures for determining the fractions, chemical compositions and combustion characteristics of Heavy Fuel Oil (HFO). This chemical characterization is requisite for better prediction of the thermodynamic behaviour of a multicomponent fuel such as HFO, which consists of thousands of different components. Detailed chemical and physical compositions, the molecular weight range and the mean molecular weight of individual fractions of the fuel enable the use of more advanced approaches, such as continuous thermodynamics, for simulation and modelling. Sequential elution solvent chromatography was used to separate an HFO into Saturates, Aromatics, Resins and Asphaltenes (SARA), as gas chromatographic analysis proved unsatisfactory for revealing the overall composition of the HFO due to the insufficient volatility of most of the heavy compounds. Subsequent mass spectrometric and elemental analysis showed a wide range of molecular weight distributions for the fractions. The results also indicate that the saturates fraction contains cyclic structures with aliphatic side chains, while the aromatics fraction contains tetracyclic aromatic rings with aliphatic side chains. The difference between the Thermo-Gravimetric Analysis (TGA) scans of the fractions in inert and oxidizing atmospheres observed at high temperatures also increases with the degree of functionality of the fractions, presumably due to the greater extent of free radical chemistry occurring in an oxidizing environment. The infrared spectra of the fractions are consistent with what would be expected from a consideration of the solvents used to elute them in column chromatography and support the classification of the fractions.
Parametric analysis of pyrolysis process on the product yields in a bubbling fluidized bed reactor
This paper presents a numerical study of the effect of operating factors on the product yields of a fast pyrolysis process in a 2-D standard lab-scale bubbling fluidized bed reactor. In a fast pyrolysis process, oxygen-free thermal decomposition of biomass occurs to produce solid biochar, condensable vapours and non-condensable gases. The process involves complex transport phenomena, and therefore the Euler-Euler approach with a multi-fluid model is applied. The eleven species taking part in the process are grouped into a solid reacting phase, a condensable/non-condensable phase, and a non-reacting solid phase (the heat carrier). The biomass decomposition is simplified to ten reaction mechanisms based on the thermal decomposition of lignocellulosic biomass. The time-splitting method is used to couple the multi-fluid model and the reaction rates. The developed model is first validated against available experimental data and is then employed to conduct the parametric study. Based on the simulation results, the impact of different operating factors on the product yields is presented. The results for operating temperature (both sidewall and carrier gas temperature) show that the optimum temperature for the production of bio-oil is in the range of 500–525 °C. The higher the nitrogen velocity, the lower the residence time, giving less opportunity for secondary cracking of condensable vapours into non-condensable gases and consequently a higher bio-oil yield. Similarly, when the height of the biomass injector was raised, the condensable yields increased and the non-condensable yields decreased due to the lower residence time of the biomass. A biomass flow rate of 1.3 kg/h produces favourable results. When larger biomass particle sizes are used, the intraparticle temperature gradient increases, leading to more accumulated unreacted biomass inside the reactor, and the product yields decrease accordingly. The simulations indicate that larger sand particles accompanied by a higher carrier gas velocity are favourable for bio-oil production. Providing a net heat equivalent of 6.52 W to the virgin biomass prior to entering the reactor bed leads to 7.5% higher bio-oil yields, whereas the other products' yields stay steady. Results from different feedstock materials show that a higher combined cellulose and hemicellulose content is favourable for the production of bio-oil, whereas the biochar yield is directly related to the lignin content.
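The full Euler-Euler multi-fluid model requires a dedicated CFD code, but the time-splitting coupling can be illustrated compactly: within each transport step, a lumped kinetic scheme is integrated separately in every cell. The sketch below uses a reduced three-reaction scheme with placeholder Arrhenius parameters rather than the paper's ten-reaction mechanism.

```python
# Sketch of the time-splitting (operator-splitting) coupling between the
# hydrodynamics and the pyrolysis kinetics: within each transport step, a
# reduced lumped reaction scheme is integrated separately in each cell.
# The scheme and Arrhenius parameters below are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J/(mol K)

def arrhenius(A, E, T):
    return A * np.exp(-E / (R * T))

def pyrolysis_rhs(t, y, T):
    biomass, tar, gas, char = y
    k_tar   = arrhenius(1.0e8, 1.3e5, T) * biomass   # biomass -> condensable tar
    k_gas   = arrhenius(4.0e7, 1.4e5, T) * biomass   # biomass -> non-condensable gas
    k_char  = arrhenius(3.0e6, 1.1e5, T) * biomass   # biomass -> char
    k_crack = arrhenius(4.0e4, 9.0e4, T) * tar       # secondary cracking: tar -> gas
    return [-(k_tar + k_gas + k_char),
            k_tar - k_crack,
            k_gas + k_crack,
            k_char]

def react_step(y, T, dt):
    """Reaction half of the time-splitting scheme for one cell over one step dt."""
    sol = solve_ivp(pyrolysis_rhs, (0.0, dt), y, args=(T,), method="LSODA")
    return sol.y[:, -1]

# Example: integrate the kinetics in a single cell at 773 K (about 500 degrees C).
y = np.array([1.0, 0.0, 0.0, 0.0])    # mass fractions: biomass, tar, gas, char
for _ in range(200):                   # 200 transport steps of 10 ms each
    # (the transport/hydrodynamic update of y would happen here in a full solver)
    y = react_step(y, T=773.0, dt=0.01)
print(dict(zip(["biomass", "tar", "gas", "char"], np.round(y, 3))))
```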
Modelling an integrated impact of fire, explosion and combustion products during transitional events caused by an accidental release of LNG
In a complex processing facility, there is a likelihood of cascading scenarios, i.e. hydrocarbon release, fire, explosion and dispersion of combustion products. The consequences of such scenarios, when combined, can be more severe than their individual impacts. Hence, the actual impact can only be represented by integrating the above-mentioned events. A novel methodology is proposed to model an evolving accident scenario during an accidental release of LNG in a complex processing facility. The methodology is applied to a case study considering transitional scenarios, namely spill, pool formation and evaporation of LNG, dispersion of natural gas, and the consequent fire, explosion and dispersion of combustion products, using Computational Fluid Dynamics (CFD). Probit functions are employed to analyze individual impacts, and a ranking method is used to combine the various impacts to identify risk during the transitional events. The results confirm that in a large and complex facility, an LNG fire can transition to a vapor cloud explosion if the necessary conditions are met, i.e. a mixture within the flammable range, an ignition source with enough energy and a sufficient congestion/confinement level. The integrated consequences are therefore more severe than the individual ones and need to be properly assessed. This study provides insight for an effective analysis of the potential consequences of an LNG spill in any LNG processing facility and can be useful for the design of safety measures in process facilities.
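The probit step of the methodology is straightforward to sketch: a dose computed from the CFD results is converted into a harm probability through Pr = a + b ln(D) and P = Φ(Pr − 5). The example below uses a thermal-radiation probit of the commonly quoted Eisenberg form purely as an illustration; the paper applies probit functions to each transitional event and then combines the impacts with a ranking method.

```python
# Sketch of the probit-based impact analysis: a dose computed from the CFD
# results is converted into a harm probability via Pr = a + b*ln(D) and
# P = Phi(Pr - 5). The thermal-radiation probit here is only illustrative.
from math import log
from scipy.stats import norm

def probit_probability(dose, a, b):
    """Harm probability for dose D under the probit Pr = a + b ln(D)."""
    return norm.cdf(a + b * log(dose) - 5.0)

# Illustrative thermal-radiation example: dose V = t * I**(4/3) / 1e4 with
# exposure time t in s and radiation intensity I in W/m^2 (Eisenberg-type probit).
t_exp, intensity = 30.0, 12_000.0
dose = t_exp * intensity ** (4.0 / 3.0) / 1.0e4
print(f"fatality probability: {probit_probability(dose, a=-14.9, b=2.56):.2%}")
```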
Review and analysis of fire and explosion accidents in maritime transportation
The globally expanding shipping industry faces several hazards, such as collision, capsizing, foundering, grounding, stranding, fire and explosion. Accidents are often caused by more than one contributing factor through complex interaction. It is crucial to identify root causes and their interactions to understand and prevent such accidents. This study presents a detailed review and analysis of fire and explosion accidents that occurred in the maritime transportation industry during 1990–2015. The underlying causes of fire and explosion accidents are identified and analysed. This study also reviews potential preventative measures against such accidents. Additionally, it compares the properties of alternative fuels and analyses their effectiveness in mitigating fire and explosion hazards. It is observed that Cryogenic Natural Gas (CrNG), Liquefied Natural Gas (LNG) and methanol have properties more suitable than those of traditional fuels for mitigating fire risk, and appropriate management of their hazards could make them a safer option than traditional fuels. However, for commercial use at this stage, several uncertainties exist due to inadequate studies and technological immaturity. This study provides insight into fire and explosion accident causation and prevention, including the prospect of using alternative fuels to mitigate fire and explosion risks in maritime transportation.
Risk-based fault detection using Self-Organizing Map
The complexity of modern systems is increasing rapidly and the dominating relationships among system variables have become highly non-linear. This results in difficulty in identifying a system's operating states. In turn, this difficulty affects the sensitivity of fault detection and poses a challenge to ensuring safe operation. In recent years, the Self-Organizing Map has gained popularity in system monitoring as a robust non-linear dimensionality reduction tool. The Self-Organizing Map is able to capture non-linear variations of the system. It is therefore sensitive to changes in a system's states, leading to early detection of faults. In this paper, a new approach based on Self-Organizing Maps is proposed to detect faults and assess their risk. In addition, probabilistic analysis is applied to characterize the risk of a fault into different levels according to its hazard potential, enabling refined monitoring of the system. The proposed approach is applied to two experimental systems. The results from both systems show the high sensitivity of the proposed approach in detecting and identifying the root cause of faults. The refined monitoring facilitates the determination of the risk of a fault and the early deployment of remedial actions and safety measures to minimize its potential impact.
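A minimal sketch of the detection step is given below using the third-party MiniSom library (an assumption; the paper does not prescribe a particular implementation): a map is trained on normal operating data and each incoming sample's quantization error is compared against a threshold derived from that data. The probabilistic characterization of fault risk into levels is not reproduced here.

```python
# Minimal SOM-based fault detection sketch using the third-party MiniSom library.
# The data, map size and detection threshold below are illustrative assumptions.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(1)
normal_data = rng.normal(0.0, 1.0, size=(500, 4))      # synthetic normal operation
faulty_sample = np.array([4.0, -3.5, 2.8, 0.1])         # synthetic abnormal sample

som = MiniSom(8, 8, input_len=4, sigma=1.5, learning_rate=0.5, random_seed=1)
som.random_weights_init(normal_data)
som.train_random(normal_data, 5000)

def quantization_error(sample):
    """Distance from a sample to its best-matching unit on the trained map."""
    return np.linalg.norm(sample - som.get_weights()[som.winner(sample)])

# Detection threshold: e.g. the 99th percentile of errors over the normal data.
threshold = np.percentile([quantization_error(x) for x in normal_data], 99)
print("fault detected" if quantization_error(faulty_sample) > threshold else "normal")
```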
A probabilistic multivariate method for fault diagnosis of industrial processes
A probabilistic multivariate fault diagnosis technique is proposed for industrial processes. The joint probability density function containing the essential features of normal operation is constructed considering the dependency among the process variables. The dependence structures are modelled using a Gaussian copula. The Gaussian copula uses rank correlation coefficients to capture the nonlinear relationships between process variables. For real-time monitoring, the probability of each online data sample is computed under the joint probability density function. Samples whose probabilities violate a predetermined control limit are classified as faulty. For fault diagnosis, the reference dependence structures of the process variables are first determined from normal process data. These reference structures are then compared with those obtained from the faulty data samples. This assists in identifying the root-cause variable(s). The proposed technique is tested on two case studies: a nonlinear numerical example and an industrial case. The performance of the proposed technique is observed to be superior to that of conventional statistical methods such as PCA and MICA.
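A compact sketch of the monitoring step is given below: marginals are mapped to uniform scores through empirical CDFs, the dependence structure is captured by a Gaussian copula whose correlation matrix is derived from Spearman rank correlations, and online samples whose density under the fitted model falls below a control limit are flagged as faulty. The data, the choice of control limit and the omission of the dependence-structure comparison used for diagnosis are simplifying assumptions.

```python
# Gaussian-copula monitoring sketch: empirical-CDF marginals, Spearman-derived
# copula correlation, and a density-based control limit fitted on normal data.
import numpy as np
from scipy.stats import norm, spearmanr, multivariate_normal

rng = np.random.default_rng(2)
n, d = 1000, 3
latent = rng.normal(size=(n, 1))
train = np.hstack([latent + 0.3 * rng.normal(size=(n, 1)) for _ in range(d)])

# Gaussian-copula correlation matrix from the Spearman rank correlation matrix.
rho_s, _ = spearmanr(train)
R = 2.0 * np.sin(np.pi * rho_s / 6.0)

sorted_train = np.sort(train, axis=0)

def normal_scores(x):
    """Map a sample to Gaussian copula scores via the training ECDFs."""
    u = (np.array([np.searchsorted(sorted_train[:, j], x[j], side="right")
                   for j in range(d)]) + 0.5) / (n + 1)
    return norm.ppf(np.clip(u, 1e-4, 1 - 1e-4))

def density(x):
    return multivariate_normal.pdf(normal_scores(x), mean=np.zeros(d), cov=R)

# Control limit: e.g. the 1st percentile of densities over the normal data.
limit = np.percentile([density(x) for x in train], 1)
test_sample = np.array([2.0, -2.0, 2.0])   # breaks the learned dependence structure
print("faulty" if density(test_sample) < limit else "normal")
```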
Nonlinear Gaussian Belief Network based fault diagnosis for industrial processes
A Nonlinear Gaussian Belief Network (NLGBN) based fault diagnosis technique is proposed for industrial processes. In this study, a three-layer NLGBN is constructed and trained to extract useful features from noisy process data. The nonlinear relationships between the process variables and the latent variables are modelled by a set of sigmoidal functions. To take into account the noisy nature of the data, model variances are also introduced for both the process variables and the latent variables. The three-layer NLGBN is first trained with normal process data using a variational Expectation-Maximization algorithm. During real-time monitoring, the online process data samples are used to update the posterior mean of the top-layer latent variable. The absolute gradient used to update the posterior mean, denoted the G-index, is monitored for fault detection. A multivariate contribution plot is also generated based on the G-index for fault diagnosis. The NLGBN-based technique is verified using two case studies. The results demonstrate that the proposed technique outperforms conventional nonlinear techniques such as KPCA, KICA, SPA, and Moving Window KPCA.
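The G-index idea can be illustrated with a deliberately simplified toy model: a single latent variable observed through sigmoidal functions, whose posterior mean is updated by gradient steps as samples arrive, with the absolute update gradient serving as the detection index and its per-variable components as contributions. The weights, noise variance, single-latent simplification and fault magnitude below are assumptions; the paper trains a three-layer network with a variational EM algorithm.

```python
# Toy sketch of the G-index idea: a single latent variable with sigmoidal
# observations; the posterior mean is updated by gradient steps per sample, and
# the absolute update gradient (G-index) and its per-variable components are
# monitored. All model parameters below are illustrative assumptions.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

w, b, sigma2 = np.array([1.2, -0.8, 0.6, 1.0]), np.zeros(4), 0.05   # toy model

def monitor(samples, steps=3, lr=0.1):
    """Carry the posterior mean across samples; return per-sample G-index and contributions."""
    z, g_idx, contrib = 0.0, [], []
    for x in samples:
        for _ in range(steps):
            s = sigmoid(w * z + b)
            per_var = (x - s) * s * (1.0 - s) * w / sigma2   # gradient share of each variable
            grad = per_var.sum() - z                         # N(0, 1) prior contributes -z
            z += lr * grad
        g_idx.append(abs(grad))
        contrib.append(np.abs(per_var))
    return np.array(g_idx), np.array(contrib)

rng = np.random.default_rng(4)
z_true = 0.4 * np.ones(60)
X = sigmoid(np.outer(z_true, w) + b) + 0.03 * rng.normal(size=(60, 4))
X[30:, 2] += 0.5                       # step fault injected into variable 3 at sample 30

g, c = monitor(X)
print("mean G-index before fault:", round(float(g[:30].mean()), 3),
      "| at fault onset:", round(float(g[30]), 3))
print("contributions at onset:", np.round(c[30], 2))
```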
