
    A Study of the Merger History of the Galaxy Group HCG 62 Based on X-Ray Observations and SPH Simulations

    We choose the bright compact group HCG 62, which was found to exhibit both excess X-ray emission and high Fe abundance to the southwest of its core, as an example to study the impact of mergers on chemical enrichment in the intragroup medium. We first reanalyze the high-quality Chandra and XMM-Newton archival data to search for evidence of additional SN II yields, which are expected as a direct result of the possible merger-induced starburst. We reveal that, similar to the Fe abundance, the Mg abundance also shows a high value in both the innermost region and the southwest substructure, forming a high-abundance plateau, while all the SN Ia and SN II yields show rather flat distributions at r > 0.1r_{200}, in favor of an early enrichment. Then we carry out a series of idealized numerical simulations to model the collision of two initially isolated galaxy groups using the TreePM-SPH GADGET-3 code. We find that the observed X-ray emission and metal distributions, as well as the relative positions of the two bright central galaxies with respect to the X-ray peak, can be well reproduced in a major merger with a mass ratio of 3 when a merger-induced starburst is assumed. The 'best-match' snapshot is pinpointed after the third pericentric passage, when the southwest substructure is formed by gas sloshing. By following the evolution of the simulated merging system, we conclude that the effects of such a major merger on chemical enrichment are mostly restricted to the core region once the final relaxed state is reached.
    Comment: Accepted for publication in the Astrophysical Journal

    Economic Effects of Bridal Ceremonies on Small Business Enterprises in USA: An exploratory Study

    The wedding industry plays a significant role in U.S. society and the economy. Internal and external threats remain among the major issues the industry is encountering. Intense competition and declining marriage rates are vital challenges that must be well managed to stimulate industry growth. This research investigates the motives behind these challenges in order to propose efficient solutions and overcome these underperformances. Our research data reveal that the rapid development and utilization of technology have increased wedding costs, inspired labor-market changes, and raised moral laxity in society. Hence, collaboration between small business enterprises and the U.S. government is imperative to instigate industry expansion.
    Keywords: wedding, bridal ceremonies, small business, vertical integration, instigat

    Implicit Chain Particle Model for Polymer Grafted Nanoparticles

    Matrix-free nanocomposites made from polymer grafted nanoparticles (PGN) represent a paradigm shift in materials science because they greatly improve nanoparticle dispersion and offer greater tunability over rheological and mechanical properties in comparison to neat polymers. Utilizing the full potential of PGNs requires a deeper understanding of how polymer graft length, density, and chemistry influence interfacial interactions between particles. There has been great progress in describing these effects with molecular dynamics (MD). However, the limitations of the length and time scales of MD make it prohibitively costly to study systems involving more than a few PGNs. Here, we address some of these challenges by proposing a new modeling paradigm for PGNs using a strain-energy mapping framework involving potential of mean force (PMF) calculations. In this approach, each nanoparticle is coarse-grained into a representative particle with chains treated implicitly, namely, the implicit chain particle model (ICPM). Using a chemistry-specific CG-MD model of PMMA as a testbed, we derive the effective interaction between particles arranged in a close-packed lattice configuration by matching bulk dilation/compression strain energy densities. The strain-rate dependence of the mechanical work in ICPM is also discussed. Overall, the ICPM model increases the computational speed by approximately 5-6 orders of magnitude compared to the CG-MD models. This novel framework is foundational for particle-based simulations of PGNs and their blends and accelerates the understanding and predictions of emergent properties of PGN materials.
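The strain-energy mapping idea can be illustrated with a toy calculation. The sketch below is not the paper's method in detail: the harmonic energy density and all numbers are hypothetical stand-ins for CG-MD data, and `fcc_volume_per_particle` and `pair_energy_from_density` are illustrative names. It shows how a bulk strain-energy density can be folded down onto an effective per-pair energy for representative particles on a close-packed (FCC) lattice:

```python
import math

def fcc_volume_per_particle(r):
    # FCC: nearest-neighbor distance r, lattice constant a = r*sqrt(2),
    # 4 atoms per cubic cell -> volume per particle = a^3 / 4 = r^3 / sqrt(2)
    return r**3 / math.sqrt(2)

def pair_energy_from_density(u_of_strain, r, r0, z=12):
    # Map a bulk strain-energy density u(eps) [energy/volume] onto an
    # effective pair energy for one coarse particle; with coordination
    # number z, each particle owns z/2 pairs.
    eps = r / r0 - 1.0                       # dilation/compression strain
    e_particle = u_of_strain(eps) * fcc_volume_per_particle(r)
    return e_particle / (z / 2)

# hypothetical harmonic energy density standing in for CG-MD data
K = 50.0                                     # assumed bulk stiffness, energy/volume
u = lambda eps: 0.5 * K * eps**2
r0 = 1.0                                     # assumed equilibrium separation

for r in (0.9, 1.0, 1.1):
    print(f"r = {r:.2f}  e_pair = {pair_energy_from_density(u, r, r0):.4f}")
```

By construction the pair energy vanishes at the equilibrium separation and rises under both dilation and compression, which is the qualitative shape the ICPM effective interaction must reproduce.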

    Learning Pair Potentials using Differentiable Simulations

    Learning pair interactions from experimental or simulation data is of great interest for molecular simulations. We propose a general stochastic method for learning pair interactions from data using differentiable simulations (DiffSim). DiffSim defines a loss function based on structural observables, such as the radial distribution function, through molecular dynamics (MD) simulations. The interaction potentials are then learned directly by stochastic gradient descent, using backpropagation to calculate the gradient of the structural loss metric with respect to the interaction potential through the MD simulation. This gradient-based method is flexible and can be configured to simulate and optimize multiple systems simultaneously. For example, it is possible to simultaneously learn potentials for different temperatures or for different compositions. We demonstrate the approach by recovering simple pair potentials, such as the Lennard-Jones potential, from radial distribution functions. We find that DiffSim can be used to probe a wider functional space of pair potentials compared to traditional methods like Iterative Boltzmann Inversion. We show that our methods can be used to simultaneously fit potentials for simulations at different compositions and temperatures to improve the transferability of the learned potentials.
    Comment: 12 pages, 10 figures
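A minimal sketch of the loop, with two deliberate simplifications relative to the paper: the MD simulation is replaced by a low-density closed form g(r) ≈ exp(−βU(r)), and backpropagation through the simulation is replaced by finite-difference gradients. All parameter values and function names here are illustrative, not the paper's code:

```python
import math

def lj(r, eps, sig):
    # Lennard-Jones pair potential
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def rdf_proxy(params, rs, beta=1.0):
    # low-density approximation g(r) ~ exp(-beta * U(r)); stands in for
    # the full MD simulation that DiffSim differentiates through
    eps, sig = params
    return [math.exp(-beta * lj(r, eps, sig)) for r in rs]

def loss(params, rs, g_target):
    # structural loss: mean squared error against the target RDF
    g = rdf_proxy(params, rs)
    return sum((a - b) ** 2 for a, b in zip(g, g_target)) / len(rs)

def grad_fd(params, rs, g_target, h=1e-5):
    # central finite differences standing in for backpropagation
    out = []
    for i in range(len(params)):
        hi = list(params); hi[i] += h
        lo = list(params); lo[i] -= h
        out.append((loss(hi, rs, g_target) - loss(lo, rs, g_target)) / (2 * h))
    return out

rs = [0.95 + 0.05 * i for i in range(12)]        # sampled separations
g_target = rdf_proxy((1.0, 1.0), rs)             # "data" from eps=1, sigma=1

params = [0.6, 1.1]                              # deliberately poor initial guess
for _ in range(4000):
    g = grad_fd(params, rs, g_target)
    params = [p - 0.02 * gi for p, gi in zip(params, g)]

print(f"learned eps ~ {params[0]:.3f}, sigma ~ {params[1]:.3f}")
```

Gradient descent drives the proxy RDF toward the target; in the actual method the same loop runs with a real MD trajectory and exact gradients from automatic differentiation.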

    Dataset Quantization with Active Learning based Adaptive Sampling

    Deep learning has made remarkable progress recently, largely due to the availability of large, well-labeled datasets. However, training on such datasets elevates costs and computational demands. To address this, various techniques like coreset selection, dataset distillation, and dataset quantization have been explored in the literature. Unlike traditional techniques that depend on uniform sample distributions across different classes, our research demonstrates that maintaining performance is feasible even with uneven distributions. We find that for certain classes, the variation in sample quantity has a minimal impact on performance. Inspired by this observation, an intuitive idea is to reduce the number of samples for stable classes and increase the number of samples for sensitive classes to achieve better performance at the same sampling ratio. The question then arises: how can we adaptively select samples from a dataset to achieve optimal performance? In this paper, we propose a novel active learning based adaptive sampling strategy, Dataset Quantization with Active Learning based Adaptive Sampling (DQAS), to optimize the sample selection. In addition, we introduce a novel pipeline for dataset quantization, utilizing the feature space from the final stage of dataset quantization to generate more precise dataset bins. Our comprehensive evaluations on multiple datasets show that our approach outperforms state-of-the-art dataset compression methods.
    Comment: Accepted to ECCV 202
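The intuition of shifting budget from stable to sensitive classes can be sketched as a proportional allocation. The sensitivity scores below are hypothetical (in DQAS they would come from the active-learning signal), and `adaptive_budget` is an illustrative helper, not the paper's code:

```python
def adaptive_budget(sensitivity, total_budget, floor=1):
    # Allocate a sampling budget across classes in proportion to a
    # per-class sensitivity score (e.g. how much accuracy drops when the
    # class is subsampled), instead of a uniform split.
    s = sum(sensitivity.values())
    alloc = {c: max(floor, round(total_budget * v / s))
             for c, v in sensitivity.items()}
    # absorb rounding drift into the most sensitive class so the
    # allocations sum exactly to the budget
    drift = total_budget - sum(alloc.values())
    top = max(sensitivity, key=sensitivity.get)
    alloc[top] += drift
    return alloc

sens = {"cat": 0.30, "dog": 0.05, "truck": 0.15}   # hypothetical scores
print(adaptive_budget(sens, total_budget=100))
```

Stable classes ("dog" here) give up samples to sensitive ones ("cat"), while the overall sampling ratio is unchanged.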

    Accuracy of a novel calibratable real-time continuous glucose monitoring device based on FreeStyle libre in- and out-of-hospital

    Objectives: Based on FreeStyle Libre, we designed QT AIR, an advanced real-time, calibratable continuous glucose monitoring (CGM) device. This study aims to validate the consistency and clinical accuracy of the product by comparing capillary blood glucose (CBG) with CGM data in both in-hospital and outpatient scenarios.
    Methods: Results of the CGM devices were compared with random capillary glucose values from users in both in-hospital and outpatient settings. The accuracy of the CGMs was assessed through consistency analysis, Bland-Altman analysis, calculation of the mean absolute relative difference (MARD) and mean absolute difference (MAD), Consensus Error Grids, and Continuous Glucose Deviation Interval and Variability Analysis (CG-DIVA).
    Results: In the outpatient setting, 1907 values from 138 users were analyzed. FreeStyle Libre data and both calibrated and uncalibrated QT AIR data showed strong positive correlations with capillary blood glucose values. The MARD values for the FreeStyle Libre, uncalibrated QT AIR, and calibrated QT AIR groups were 18.33%, 20.63%, and 12.39%, respectively. In the Consensus Error Grid, the proportions of reference values in Zone A were 69.75% for FreeStyle Libre, 67.80% for uncalibrated QT AIR, and 87.62% for calibrated QT AIR. The Bland-Altman analysis suggests that FreeStyle Libre exhibited a systematic underestimation of blood glucose levels, while QT AIR largely rectified this bias. In the in-hospital setting, the MARD of QT AIR after calibration was reduced to 7.24%. Consensus Error Grid analyses of the in-hospital data revealed that 95% of the calibrated QT AIR values fell within Zone A, a significantly higher proportion than for the other two groups. The CG-DIVA analysis of the calibrated QT AIR device showed a median bias of -0.49% and a between-sensor variability of 26.65%, both significantly lower than the corresponding values for the FreeStyle Libre device.
    Conclusions: We successfully transformed a retrospective CGM system into a real-time monitoring device. The monitoring accuracy of the device can be improved by calibration.
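For reference, the MARD and MAD metrics used throughout the evaluation are straightforward to compute from paired CGM/reference readings. The readings below are hypothetical, purely to show the arithmetic:

```python
def mard(cgm, ref):
    # mean absolute relative difference (%) between CGM and reference
    # capillary readings, the headline CGM accuracy metric
    assert len(cgm) == len(ref)
    return 100.0 * sum(abs(c - r) / r for c, r in zip(cgm, ref)) / len(cgm)

def mad(cgm, ref):
    # mean absolute difference, in the glucose unit of the inputs
    return sum(abs(c - r) for c, r in zip(cgm, ref)) / len(cgm)

# hypothetical paired readings (mmol/L)
ref = [5.0, 7.5, 10.0, 4.0]
cgm = [5.5, 7.0, 11.0, 4.2]
print(f"MARD = {mard(cgm, ref):.2f}%  MAD = {mad(cgm, ref):.2f}")
```

A lower MARD means the sensor tracks the reference more closely; the calibrated QT AIR's drop from 20.63% to 12.39% (outpatient) corresponds to exactly this quantity.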

    PoinTramba: A Hybrid Transformer-Mamba Framework for Point Cloud Analysis

    Point cloud analysis has seen substantial advancements due to deep learning. Although previous Transformer-based methods excel at modeling long-range dependencies on this task, their computational demands are substantial. Conversely, Mamba offers greater efficiency but shows limited potential compared with Transformer-based methods. In this study, we introduce PoinTramba, a pioneering hybrid framework that synergizes the analytical power of the Transformer with the remarkable computational efficiency of Mamba for enhanced point cloud analysis. Specifically, our approach first segments point clouds into groups, where the Transformer meticulously captures intricate intra-group dependencies and produces group embeddings, whose inter-group relationships are then adeptly captured by the efficient Mamba architecture, ensuring comprehensive analysis. Unlike previous Mamba approaches, we introduce a bi-directional importance-aware ordering (BIO) strategy to tackle the challenges of random ordering effects. This strategy reorders group embeddings based on their calculated importance scores, significantly enhancing Mamba's performance and optimizing the overall analytical process. Our framework achieves a superior balance between computational efficiency and analytical performance by seamlessly integrating these advanced techniques, marking a substantial leap forward in point cloud analysis. Extensive experiments on datasets such as ScanObjectNN, ModelNet40, and ShapeNetPart demonstrate the effectiveness of our approach, establishing a new state-of-the-art benchmark in point cloud recognition. For the first time, this paradigm leverages the combined strengths of both Transformer and Mamba architectures, setting a new standard in the field. The code is available at https://github.com/xiaoyao3302/PoinTramba.
    Comment: 14 pages, 4 figures, 6 tables
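The ordering step of BIO can be sketched as follows. Here the importance scores are hypothetical stand-ins for the learned scores in the paper, and `bio_order` is an illustrative name; the point is only that the group sequence fed to Mamba is determined by importance rather than by an arbitrary scan order, and that both directions of that sequence are produced:

```python
def bio_order(embeddings, scores):
    # Sort group embeddings by (descending) importance score, then also
    # emit the reversed sequence so a bi-directional Mamba pass sees
    # both orderings instead of one random scan order.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    fwd = [embeddings[i] for i in order]
    bwd = fwd[::-1]
    return fwd, bwd

emb = ["g0", "g1", "g2", "g3"]        # stand-ins for group embeddings
scores = [0.2, 0.9, 0.5, 0.1]         # hypothetical importance scores
fwd, bwd = bio_order(emb, scores)
print(fwd)                            # most important group first
```

Because state-space models process their input sequentially, fixing the order by importance removes the variance a random ordering would otherwise inject into the inter-group pass.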