Performance optimization of a leagility inspired supply chain model: a CFGTSA algorithm based approach
Lean and agile principles have attracted considerable interest over the past few decades. Industrial sectors throughout the world are adopting these principles to enhance their performance, since they have proven efficient in handling supply chains. However, the present market trend demands a more robust strategy incorporating the salient features of both lean and agile principles. Inspired by this, the leagility principle has emerged, encapsulating both lean and agile features. The present work proposes a leagile supply chain based model for manufacturing industries. The paper emphasizes the various aspects of leagile supply chain modeling and implementation and proposes a new Hybrid Chaos-based Fast Genetic Tabu Simulated Annealing (CFGTSA) algorithm to solve the complex scheduling problems prevailing in the leagile environment. The proposed CFGTSA algorithm is compared with the GA, SA, TS and hybrid Tabu-SA algorithms to demonstrate its efficacy in handling complex scheduling problems.
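The abstract does not spell out how CFGTSA works; the following is only a rough, hypothetical sketch of the kind of hybrid it describes: a simulated-annealing acceptance rule combined with a tabu list and a chaotic (logistic-map) perturbation, applied to a permutation-encoded scheduling problem. All names and parameters are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def logistic(x, r=4.0):
    """Chaotic logistic map; here it only modulates which job gets perturbed (illustrative)."""
    return r * x * (1.0 - x)

def chaotic_sa_with_tabu(jobs, makespan, iterations=5000, t0=100.0, alpha=0.995, tabu_size=50):
    """Hypothetical hybrid of simulated annealing, a tabu list and chaotic perturbation.

    jobs     -- initial permutation of job indices
    makespan -- callable returning the makespan of a permutation
    """
    current, best = list(jobs), list(jobs)
    temp, chaos = t0, random.random()
    tabu = []  # recently applied swap moves

    for _ in range(iterations):
        chaos = logistic(chaos)
        i = int(chaos * len(current)) % len(current)   # chaos-driven choice of one position
        j = random.randrange(len(current))             # random swap partner
        if i == j or (i, j) in tabu:
            continue
        candidate = list(current)
        candidate[i], candidate[j] = candidate[j], candidate[i]

        delta = makespan(candidate) - makespan(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            tabu.append((i, j))
            if len(tabu) > tabu_size:
                tabu.pop(0)
            if makespan(current) < makespan(best):
                best = list(current)
        temp *= alpha  # geometric cooling schedule

    return best
```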
A broad distribution of the alternative oxidase in microsporidian parasites
Microsporidia are a group of obligate intracellular parasitic eukaryotes that were considered to be amitochondriate until the recent discovery of highly reduced mitochondrial organelles called mitosomes. Analysis of the complete genome of Encephalitozoon cuniculi revealed a highly reduced set of proteins in the organelle, mostly related to the assembly of iron-sulphur clusters. Oxidative phosphorylation and Krebs cycle proteins were absent, in keeping with the notion that the microsporidia and their mitosomes are anaerobic, as is the case for other mitosome-bearing eukaryotes, such as Giardia. Here we provide evidence opening the possibility that mitosomes in a number of microsporidian lineages are not completely anaerobic. Specifically, we have identified and characterized a gene encoding the alternative oxidase (AOX), a typically mitochondrial terminal oxidase in eukaryotes, in the genomes of several distantly related microsporidian species, even though this gene is absent from the complete genome of E. cuniculi. In order to confirm that these genes encode functional proteins, AOX genes from both A. locustae and T. hominis were over-expressed in E. coli and AOX activity measured spectrophotometrically using ubiquinol-1 (UQ-1) as substrate. Both A. locustae and T. hominis AOX proteins reduced UQ-1 in a cyanide- and antimycin-resistant manner that was sensitive to ascofuranone, a potent inhibitor of the trypanosomal AOX. The physiological role of AOX in microsporidia may be to reoxidise reducing equivalents produced by glycolysis, in a manner comparable to that observed in trypanosomes.
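The abstract mentions measuring AOX activity spectrophotometrically with ubiquinol-1 but gives no assay details; the small helper below only illustrates the standard Beer-Lambert conversion from an absorbance change to an enzymatic rate. The function, the example absorbance change and the extinction coefficient are placeholders, not values from the study.

```python
def activity_from_absorbance(delta_a_per_min, extinction_mM_per_cm, path_cm=1.0, volume_ml=1.0):
    """Convert an absorbance change (per minute) into nmol of substrate turned over
    per minute via Beer-Lambert (illustrative; the coefficient is assay-specific).
    """
    # concentration change in mM per minute
    delta_conc_mM = delta_a_per_min / (extinction_mM_per_cm * path_cm)
    # mM * mL = umol, so multiply by 1000 for nmol per minute in the cuvette
    return delta_conc_mM * volume_ml * 1000.0

# e.g. 0.05 A/min with a placeholder extinction coefficient of 12 mM^-1 cm^-1
print(activity_from_absorbance(0.05, 12.0))
```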
The relevance of outsourcing and leagile strategies in performance optimization of an integrated process planning and scheduling
Over the past few years, growing global competition has forced manufacturing industries to upgrade their old production strategies with modern-day approaches. As a result, interest has recently developed in finding an appropriate policy that could enable them to compete with others and facilitate their emergence as market winners. Keeping the above facts in mind, in this paper the authors propose an integrated process planning and scheduling model inheriting the salient features of outsourcing and leagile principles to compete in the existing market scenario. The paper also proposes a model based on leagile principles, where integrated planning management has been practiced. In the present work a scheduling problem has been considered, with the overall minimization of makespan as the objective. The paper shows the relevance of both strategies to performance enhancement of the industries, in terms of their reduced makespan. The authors also propose a new hybrid Enhanced Swift Converging Simulated Annealing (ESCSA) algorithm to solve complex real-time scheduling problems. The proposed algorithm inherits the prominent features of the Genetic Algorithm (GA), Simulated Annealing (SA), and the Fuzzy Logic Controller (FLC). The ESCSA algorithm reduces the makespan significantly in less computational time and fewer iterations. The efficacy of the proposed algorithm is shown by comparing its results with GA, SA, Tabu, and hybrid Tabu-SA optimization methods.
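Both this and the previous entry take makespan as the objective but do not define it; as a small, generic illustration (the abstracts do not specify the shop model, so a permutation flow shop is assumed here), the function below evaluates the makespan that such a metaheuristic would minimize.

```python
def flow_shop_makespan(order, proc_times):
    """Makespan of a permutation flow-shop schedule (assumed formulation).

    order      -- sequence of job indices
    proc_times -- proc_times[job][machine] processing times
    """
    n_machines = len(proc_times[0])
    completion = [0.0] * n_machines  # rolling completion time on each machine
    for job in order:
        for m in range(n_machines):
            start = max(completion[m], completion[m - 1] if m > 0 else 0.0)
            completion[m] = start + proc_times[job][m]
    return completion[-1]

# Example: 3 jobs on 2 machines (placeholder data)
times = [[3, 2], [1, 4], [2, 2]]
print(flow_shop_makespan([0, 1, 2], times))  # -> 11
```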
Shell we cook it? An experimental approach to the microarchaeological record of shellfish roasting
In this paper, we investigate the microarchaeological traces and archaeological visibility of shellfish cooking activities through a series of experimental procedures with direct roasting using wood-fueled fires and controlled heating in a muffle furnace. An interdisciplinary geoarchaeological approach, combining micromorphology, FTIR (in transmission and ATR collection modes), TGA and XRD, was used to establish a baseline on the mineralogical transformation of heated shells from aragonite to calcite and the diagnostic sedimentary traces produced by roasting fire features. Our experimental design focused on three main types of roasting procedures: the construction of shallow depressions with heated rocks (pebble cuvette experiments), placing shellfish on top of hot embers and ashes (fire below experiments), and kindling short-lived fires on top of shellfish (fire above experiments). Our results suggest that similar shellfish roasting procedures will largely create microstratigraphic signatures of anthropogenically reworked combusted material spatially "disconnected" from the actual combustion locus. The construction of shallow earth ovens might entail an increased archaeological visibility, and some diagnostic signatures of in situ hearths can be obtained from fire below roasting activities. We also show that macroscopic visual modifications and mineralogical characterization of discarded shellfish might be indicative of specific cooking activities versus secondary burning.
Long-term TNT and DNT contamination: 1-D modeling of natural attenuation in the vadose zone: case study, Portugal
The vadose zone of a trinitrotoluene (TNT) and dinitrotoluene (DNT) contaminated site was investigated to assess the mobility of those explosives under natural conditions. Located on the left margin of the River Tejo Basin, Portugal, the site sits on unconsolidated sediments. Wastewaters associated with 50 years of explosives production were disposed of in excavated ponds, from where water would infiltrate and pollute the unsaturated and saturated parts of the local aquifers. Two boreholes were drilled to 9 m depth in one such former waste pond to investigate the contaminants' fate in the vadose zone. Sediment samples were taken every 1-2 m for analysis of polynitroaromatics (p-NACs) and volatile organic compounds, pH, organic carbon content, cation exchange capacity and grain size. The main contaminant was TNT, representing >70 % of the total p-NACs concentration, which peaked at approximately 7 mg/kg in one borehole, even though the median in both boreholes was ~1 mg/kg. DNT made up 4-30 % of the total p-NACs and nitrotoluene (NT) up to 5 %. No other (volatile) organic compound was detected. The predominance of TNT as the main contaminant implies that any natural mass reduction has been insufficient to clean the site. Several 1-D model simulations of p-NACs removal from the vadose zone under natural conditions indicated that the most probable scenario of combined advection and partitioning will only remove TNT after tens of years, whereas DNT and NT will hardly be removed. Such low concentrations and long removal times suggest that by now those compounds have been washed out to a level below standard limits.
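The abstract does not reproduce the transport equations; one common way to express combined advection and linear equilibrium partitioning, which may or may not match the authors' exact 1-D formulation, is through a retardation factor, as sketched below. All parameter values are placeholders, not data from the study.

```python
def retarded_travel_time(depth_m, darcy_flux_m_per_yr, water_content,
                         bulk_density_kg_per_L, kd_L_per_kg):
    """Travel time of a sorbing solute through the vadose zone under steady
    advection with linear equilibrium partitioning (illustrative sketch).
    """
    pore_velocity = darcy_flux_m_per_yr / water_content                       # m/yr
    retardation = 1.0 + bulk_density_kg_per_L * kd_L_per_kg / water_content   # R = 1 + rho_b*Kd/theta
    return depth_m * retardation / pore_velocity                              # years

# Placeholder values for a 9 m profile (not taken from the study)
print(retarded_travel_time(depth_m=9.0, darcy_flux_m_per_yr=0.3,
                           water_content=0.15, bulk_density_kg_per_L=1.6,
                           kd_L_per_kg=2.0))
```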
Search for astronomical neutrinos from blazar TXS 0506+056 in Super-Kamiokande
We report a search for astronomical neutrinos in the energy region from several GeV to TeV in the direction of the blazar TXS 0506+056 using the Super-Kamiokande detector, following the detection of a 100 TeV neutrino from the same location by the IceCube collaboration. Using Super-Kamiokande neutrino data across several data samples observed from 1996 April to 2018 February, we have searched both for a total excess above known backgrounds across the entire period and for localized excesses on smaller timescales within that interval. No significant excess nor significant variation in the observed event rate is found in the blazar direction. Upper limits are placed on the electron- and muon-neutrino fluxes at the 90% confidence level of 6.0 × 10⁻⁷ and 4.5 × 10⁻⁷–9.3 × 10⁻¹⁰ erg cm⁻² s⁻¹, respectively.
Extremely metal-poor gas at a redshift of 7
In typical astrophysical environments, the abundance of heavy elements ranges from 0.001 to 2 times the solar value. Lower abundances have been seen in selected stars in the Milky Way’s halo and in two quasar absorption systems at redshift z = 3 (ref. 4). These are widely interpreted as relics from the early Universe, when all gas possessed a primordial chemistry. Before now there have been no direct abundance measurements from the first billion years after the Big Bang, when the earliest stars began synthesizing elements. Here we report observations of hydrogen and heavy-element absorption in a spectrum of a quasar at z = 7.04, when the Universe was just 772 million years old (5.6 per cent of its present age). We detect a large column of neutral hydrogen but no corresponding metals (defined as elements heavier than helium), limiting the chemical abundance to less than 1/10,000 times the solar level if the gas is in a gravitationally bound proto-galaxy, or to less than 1/1,000 times the solar value if it is diffuse and unbound. If the absorption is truly intergalactic, it would imply that the Universe was neither ionized by starlight nor chemically enriched in this neighbourhood at z ≈ 7. If it is gravitationally bound, the inferred abundance is too low to promote efficient cooling, and the system would be a viable site to form the predicted but as yet unobserved massive population III stars
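For reference, limits quoted as fractions of the solar level correspond to the standard logarithmic abundance notation (assumed here; the abstract itself does not write it out):

```latex
[\mathrm{X}/\mathrm{H}] \equiv
  \log_{10}\!\left(\frac{N_\mathrm{X}}{N_\mathrm{H}}\right)
  - \log_{10}\!\left(\frac{N_\mathrm{X}}{N_\mathrm{H}}\right)_{\odot},
\qquad
10^{-4}\ \text{of solar} \;\Leftrightarrow\; [\mathrm{X}/\mathrm{H}] = -4 .
```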
Saccharomyces cerevisiae mutants affected in vacuole assembly or vacuolar H+-ATPase are hypersensitive to lead (Pb) toxicity
Lead is an important environmental pollutant. The role of the vacuole in Pb detoxification was studied using a vacuolar protein sorting mutant strain (vps16Δ), belonging to the class C mutants. Cells disrupted in the VPS16 gene did not display a detectable vacuolar-like structure. Based on the loss of cell proliferation capacity, it was found that cells from the vps16Δ mutant exhibited hypersensitivity to Pb-induced toxicity compared to the wild-type (WT) strain. The function of the vacuolar H+-ATPase (V-ATPase) in Pb detoxification was evaluated using mutants with structurally normal vacuoles but defective in subunits of the catalytic (vma1Δ or vma2Δ) or membrane (vph1Δ or vma3Δ) domain of the V-ATPase. All mutants tested lacking a functional V-ATPase displayed an increased susceptibility to Pb compared to cells from the WT strain. Modification of vacuolar morphology in Pb-exposed cells was visualized using a Vma2p-GFP strain. Treatment of yeast cells with Pb caused the fusion of medium-sized vacuolar lobes into one enlarged vacuole. In conclusion, the vacuole plays an important role in the detoxification of Pb in Saccharomyces cerevisiae; in addition, a functional V-ATPase is required for Pb compartmentalization. The authors thank the Fundacao para a Ciencia e a Tecnologia (FCT) through the Portuguese Government for their financial support of this work through the grant PEST-OE/EQB/LA0023/2011 to IBB.
Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at sqrt(s) = 7 TeV with the ATLAS detector
The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of sqrt(s) = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb^-1. The b-jets are identified using either a lifetime-based method, where secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method, where the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bbbar-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, the azimuthal angle difference between the two jets, and the angular variable chi in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bbbar-dijet cross-section. However, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta.
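As a generic illustration of how such a differential cross-section is typically built from event counts (this is not the paper's specific unfolding procedure, and all symbols are placeholders), one common form is

```latex
\frac{d\sigma}{dp_\mathrm{T}} \simeq
  \frac{N_{b\text{-jet}}\, U}
       {\varepsilon_{b\text{-tag}}\; \mathcal{L}_{\mathrm{int}}\; \Delta p_\mathrm{T}},
```

where N_{b-jet} is the number of tagged jets in a pT bin, U a correction unfolding detector effects to particle level, epsilon_{b-tag} the tagging efficiency, L_int the integrated luminosity (34 pb^-1 here), and Delta pT the bin width.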
