Leveraging viscous Hamilton-Jacobi PDEs for uncertainty quantification in scientific machine learning
Uncertainty quantification (UQ) in scientific machine learning (SciML)
combines the powerful predictive capabilities of SciML with methods for quantifying
the reliability of the learned models. However, two major challenges remain:
limited interpretability and expensive training procedures. We provide a new
interpretation for UQ problems by establishing a new theoretical connection
between some Bayesian inference problems arising in SciML and viscous
Hamilton-Jacobi partial differential equations (HJ PDEs). Namely, we show that
the posterior mean and covariance can be recovered from the spatial gradient
and Hessian of the solution to a viscous HJ PDE. As a first exploration of this
connection, we specialize to Bayesian inference problems with linear models,
Gaussian likelihoods, and Gaussian priors. In this case, the associated viscous
HJ PDEs can be solved using Riccati ODEs, and we develop a new Riccati-based
methodology that provides computational advantages when continuously updating
the model predictions. Specifically, our Riccati-based approach can efficiently
add or remove data points from the training set, invariant to the order of the
data, and continuously tune hyperparameters. Moreover, neither update requires
retraining on or access to previously incorporated data. We provide several
examples from SciML involving noisy data and \textit{epistemic uncertainty} to
illustrate the potential advantages of our approach. In particular, this
approach's amenability to data streaming applications demonstrates its
potential for real-time inferences, which, in turn, allows for applications in
which the predicted uncertainty is used to dynamically alter the learning
process.
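In the Gaussian special case the abstract describes, the posterior updates reduce to rank-one modifications of the posterior precision, which is what makes order-invariant addition and removal of data possible. The sketch below illustrates that property under stated assumptions (linear model, Gaussian likelihood and prior); the variable names and the precision-based bookkeeping are illustrative, not the paper's Riccati solver itself.

```python
import numpy as np

def make_prior(d, prior_var=1.0):
    """Gaussian prior N(0, prior_var * I), stored as (precision, shift)."""
    return np.eye(d) / prior_var, np.zeros(d)

def add_point(P, s, x, y, noise_var=0.1):
    """Incorporate one observation y = x @ w + noise via a rank-one update."""
    return P + np.outer(x, x) / noise_var, s + y * x / noise_var

def remove_point(P, s, x, y, noise_var=0.1):
    """Downdate: forget a previously incorporated observation."""
    return P - np.outer(x, x) / noise_var, s - y * x / noise_var

def posterior(P, s):
    """Posterior mean and covariance recovered from precision P and shift s."""
    cov = np.linalg.inv(P)
    return cov @ s, cov
```

Because each observation contributes an additive term to (P, s), the posterior is invariant to the order in which points are added, and removing a point requires no access to the rest of the data, mirroring the update properties claimed in the abstract.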
Leveraging Hamilton-Jacobi PDEs with time-dependent Hamiltonians for continual scientific machine learning
We address two major challenges in scientific machine learning (SciML):
interpretability and computational efficiency. We increase the interpretability
of certain learning processes by establishing a new theoretical connection
between optimization problems arising from SciML and a generalized Hopf
formula, which represents the viscosity solution to a Hamilton-Jacobi partial
differential equation (HJ PDE) with time-dependent Hamiltonian. Namely, we show
that when we solve certain regularized learning problems with integral-type
losses, we actually solve an optimal control problem and its associated HJ PDE
with time-dependent Hamiltonian. This connection allows us to reinterpret
incremental updates to learned models as the evolution of an associated HJ PDE
and optimal control problem in time, where all of the previous information is
intrinsically encoded in the solution to the HJ PDE. As a result, existing HJ
PDE solvers and optimal control algorithms can be reused to design new
efficient training approaches for SciML that naturally coincide with the
continual learning framework, while avoiding catastrophic forgetting. As a
first exploration of this connection, we consider the special case of linear
regression and leverage our connection to develop a new Riccati-based
methodology for solving these learning problems that is amenable to continual
learning applications. We also provide some corresponding numerical examples
that demonstrate the potential computational and memory advantages our
Riccati-based approach can provide.
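The continual-learning property described above, that all previous information is intrinsically encoded in the evolving solution, can be illustrated in the linear-regression special case: the model is refreshed from running sufficient statistics, so earlier batches never need to be revisited. This is a minimal sketch of that property, not the paper's HJ PDE or Riccati machinery; the ridge regularizer and class interface are illustrative assumptions.

```python
import numpy as np

class ContinualRidge:
    """Ridge regression that absorbs data in batches without storing them."""

    def __init__(self, d, reg=1e-2):
        self.A = reg * np.eye(d)   # running X^T X plus regularizer
        self.b = np.zeros(d)       # running X^T y

    def update(self, X, y):
        """Absorb a new batch; past data survives only inside (A, b)."""
        self.A += X.T @ X
        self.b += X.T @ y

    def weights(self):
        return np.linalg.solve(self.A, self.b)
```

Fitting on all data at once and fitting sequentially on batches give identical weights, which is the sense in which this avoids catastrophic forgetting without retraining on previous data.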
Coffee: Cost-Effective Edge Caching for 360 Degree Live Video Streaming
While live 360 degree video streaming delivers immersive viewing experience,
it poses significant bandwidth and latency challenges for content delivery
networks. Edge servers are expected to play an important role in facilitating
live streaming of 360 degree videos. In this paper, we propose a novel
predictive edge caching algorithm (Coffee) for live 360 degree video that
employs collaborative FoV prediction and predictive tile prefetching to reduce
bandwidth consumption and streaming cost while improving streaming quality and
robustness. Our light-weight caching algorithms exploit the unique tile
consumption patterns of live 360 degree video streaming to achieve high tile
caching gains. Through extensive experiments driven by real 360 degree video
streaming traces, we demonstrate that edge caching algorithms specifically
designed for live 360 degree video streaming can achieve high streaming cost
savings with small edge cache space consumption. Coffee, guided by viewer FoV
predictions, significantly reduces back-haul traffic by up to 76% compared to
state-of-the-art edge caching algorithms. Furthermore, we develop a
transcoding-aware variant (TransCoffee) and evaluate it using comprehensive
experiments, which demonstrate that TransCoffee can achieve 63% lower cost
compared to state-of-the-art transcoding-aware approaches.
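The core idea, using predicted viewing probabilities to decide which tiles the edge server keeps, can be sketched as follows. The scoring rule and cache policy here are illustrative placeholders under the assumption that a collaborative FoV predictor supplies per-tile probabilities; they are not Coffee's actual design.

```python
def prefetch_set(fov_probs, cache_slots):
    """Keep the tiles most likely to be viewed, up to the cache budget.

    fov_probs: dict tile_id -> predicted viewing probability (assumed to
               come from a collaborative FoV predictor).
    """
    ranked = sorted(fov_probs, key=fov_probs.get, reverse=True)
    return set(ranked[:cache_slots])

def backhaul_fetches(requests, cached):
    """Count requests that miss the edge cache and must hit the back-haul."""
    return sum(1 for tile in requests if tile not in cached)
```

Tiles covered by accurate FoV predictions are served from the edge, so back-haul traffic falls roughly in proportion to the predictor's hit rate, which is the mechanism behind the savings reported above.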
Modelling and Performance Analysis of the Over-the-Air Computing in Cellular IoT Networks
Ultra-fast wireless data aggregation (WDA) of distributed data has emerged as
a critical design challenge in the ultra-densely deployed cellular internet of
things network (CITN) due to limited spectral resources. Over-the-air computing
(AirComp) has been proposed as an effective solution for ultra-fast WDA by
exploiting the superposition property of wireless channels. However, the effect
of the access point (AP) access radius on the AirComp performance has not been
investigated yet. Therefore, in this work, the mean square error (MSE)
performance of AirComp in the ultra-densely deployed CITN is analyzed as a
function of the AP access radius. By modelling the spatial locations of internet of things
devices as a Poisson point process, the expression of MSE is derived in an
analytical form, which is validated by Monte Carlo simulations. Based on the
analytical MSE, we investigate the effect of AP access radius on the MSE of
AirComp numerically. The results show that there exists an optimal AP access
radius for AirComp, which can decrease the MSE by up to 12.7%. It indicates
that the AP access radius should be carefully chosen to improve the AirComp
performance in the ultra-densely deployed CITN.
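The kind of Monte Carlo simulation used to validate the analytical MSE can be sketched as below: devices form a Poisson point process, only those within the access radius participate, and the common receive scaling of channel-inversion AirComp is limited by the weakest admitted device. The path-loss channel model, the noise-only error expression, and all parameter values are illustrative assumptions, not the paper's exact system model.

```python
import numpy as np

def aircomp_mse(radius, density=2.0, alpha=4.0, power=1.0, noise=0.1,
                trials=2000, rng=None):
    """Monte Carlo MSE of an AirComp mean estimate for one AP access radius."""
    rng = rng or np.random.default_rng(0)
    mses = []
    for _ in range(trials):
        # Number of PPP devices falling inside the access radius.
        n = rng.poisson(density * np.pi * radius ** 2)
        if n == 0:
            continue  # no device admitted in this realization
        dist = radius * np.sqrt(rng.random(n))     # uniform in the disk
        gain = np.maximum(dist, 0.05) ** (-alpha)  # path-loss power gain
        # Channel inversion: the common receive scaling eta is capped by the
        # weakest admitted device, leaving only receiver-noise error.
        eta = power * gain.min()
        mses.append(noise / (n ** 2 * eta))
    return float(np.mean(mses))
```

Sweeping `radius` in such a simulation exposes the tradeoff the abstract analyzes: a larger radius admits more devices to average over but weakens the worst channel that caps the receive scaling, which is why an optimal access radius can exist under richer models.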
Modelling the 5G Energy Consumption using Real-world Data: Energy Fingerprint is All You Need
The introduction of fifth-generation (5G) radio technology has revolutionized
communications, bringing unprecedented automation, capacity, connectivity, and
ultra-fast, reliable communications. However, this technological leap comes
with a substantial increase in energy consumption, presenting a significant
challenge. To improve the energy efficiency of 5G networks, it is imperative to
develop sophisticated models that accurately reflect the influence of base
station (BS) attributes and operational conditions on energy usage. Importantly,
addressing the complexity and interdependencies of these diverse features is
particularly challenging, both in terms of data processing and model
architecture design.
This paper proposes a novel 5G base station energy consumption modelling
method by learning from a real-world dataset used in the ITU 5G Base Station
Energy Consumption Modelling Challenge in which our model ranked second. Unlike
existing methods that omit the Base Station Identifier (BSID) information and
thus fail to capture the unique energy fingerprint in different base stations,
we incorporate the BSID into the input features and encode it with an
embedding layer for precise representation. Additionally, we introduce a novel
masked training method alongside an attention mechanism to further boost the
model's generalization capabilities and accuracy. After evaluation, our method
demonstrates significant improvements over existing models, reducing Mean
Absolute Percentage Error (MAPE) from 12.75% to 4.98%, leading to a performance
gain of more than 60%.
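The BSID-embedding idea above can be sketched minimally: each base station identifier indexes a learned vector (its "energy fingerprint") that is concatenated with the numeric features before the regression layers. The dimensions, random initialization, and class interface are illustrative; the full model's masked training and attention mechanism are omitted here.

```python
import numpy as np

class BSIDEmbedding:
    """Lookup table mapping each base station ID to a learnable vector."""

    def __init__(self, num_stations, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        # In training, this table would be updated by gradient descent.
        self.table = rng.normal(scale=0.1, size=(num_stations, dim))

    def __call__(self, bsid, features):
        """Concatenate the station's embedding with its numeric features."""
        return np.concatenate([self.table[bsid], features])
```

Because two rows of the table can differ even when the numeric features coincide, the downstream regressor can assign different energy profiles to otherwise similar-looking base stations, which is what methods that drop the BSID cannot do.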
A Neural Network Model for K(λ) Retrieval and Application to Global Kpar Monitoring
Accurate estimation of diffuse attenuation coefficients in the visible wavelengths Kd(λ) from remotely sensed data is particularly challenging in global oceanic and coastal waters. The objectives of the present study are to evaluate the applicability of a semi-analytical Kd(λ) retrieval model (SAKM) and Jamet’s neural network model (JNNM), and then develop a new neural network Kd(λ) retrieval model (NNKM). Based on a comparison of the Kd(λ) predicted by these models with in situ measurements taken from global oceanic and coastal waters, all of the NNKM, SAKM, and JNNM models perform well in Kd(λ) retrievals, but the NNKM model performs more stably and accurately than both the SAKM and JNNM models. A combined near-infrared and shortwave infrared band-based model is used to remove atmospheric effects from MODIS data. Kd(λ) was determined from the atmospherically corrected MODIS data using the NNKM, JNNM, and SAKM models. The results show that the NNKM model produces <30% uncertainty in deriving Kd(λ) from global oceanic and coastal waters, which is 4.88-17.18% more accurate than the SAKM and JNNM models. Furthermore, we employ an empirical approach to calculate Kpar from the NNKM model-derived diffuse attenuation coefficients at visible bands (443, 488, 555, and 667 nm). The results show that our model performs satisfactorily in deriving Kpar from global oceanic and coastal waters, with 20.2% uncertainty. Kpar was then quantified from atmospherically corrected MODIS data using our model. Compared with field measurements, our model produces ~31.0% uncertainty in deriving Kpar from the Bohai Sea. Finally, the applicability of our model for general oceanographic studies is briefly illustrated by applying it to climatological monthly mean remote sensing reflectance for the period from July 2002 to July 2014 at the global scale.
The results indicate that high Kd(λ) and Kpar values are usually found around coastal zones in high-latitude regions, while low Kd(λ) and Kpar values are usually found in the open oceans in low-latitude regions. These results could improve our knowledge of the underwater light field at either the global or basin scale, and could potentially be incorporated into general circulation models to estimate the heat flux between the atmosphere and ocean.
Milk processing as a tool to reduce cow’s milk allergenicity: a mini-review
Milk processing technologies for the control of cow’s milk protein allergens are reviewed in this paper. Cow’s milk is a highly nutritious food; however, it is also one of the most common food allergens. The major allergens in cow’s milk have been found to be β-lactoglobulin, α-lactalbumin and caseins. Strategies for destroying or modifying these allergens to eliminate milk allergy are being sought by scientists all over the world. In this paper, the main processing technologies used to prevent and eliminate cow’s milk allergy are presented and discussed, including heat treatment, glycation reaction, high pressure, enzymatic hydrolysis and lactic acid fermentation. Additionally, how regulating and optimizing processing conditions can help reduce cow’s milk protein allergenicity is examined. These strategies should provide valuable support for the development of hypoallergenic milk products in the future.
Adverse renal outcomes following targeted therapies in renal cell carcinoma: a systematic review and meta-analysis
Introduction: To clarify the prevalence of adverse renal outcomes following targeted therapies in renal cell carcinoma (RCC). Methods: A systematic search was performed in MEDLINE, EMBASE, and the Cochrane Central Library. Studies that reported adverse renal outcomes following targeted therapies in RCC were eligible. Outcomes included adverse renal outcomes defined as either renal dysfunction, as evidenced by elevated serum creatinine levels or a diagnosis of acute kidney injury, or proteinuria, as indicated by abnormal urine findings. The risk of bias was assessed according to Cochrane handbook guidelines. Publication bias was assessed using funnel plot analysis and Egger's test. Results: The occurrences of the examined outcomes, along with their corresponding 95% confidence intervals (CIs), were combined using a random-effects model. In all, 23 studies, including 10 RCTs and 13 observational cohort studies, were included. The pooled incidences of renal dysfunction and proteinuria following targeted therapies in RCC were 17% (95% CI: 12%–22%; I2 = 88.5%, p < 0.01) and 29% (95% CI: 21%–38%; I2 = 93.2%, p < 0.01), respectively. The pooled incidence of both types of adverse events varied substantially across different regimens, and both occurred more often with polytherapy than with monotherapy. The majority of adverse events were rated as CTCAE grade 1 or 2 events. Four studies were assessed as having a low risk of bias. Conclusion: Adverse renal outcomes, reflected by renal dysfunction and proteinuria, following targeted therapies in RCC are not uncommon and are observed more often with polytherapy than with monotherapy. The majority of the adverse events were of mild severity. Systematic Review Registration: Identifier CRD42023441979.
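The random-effects pooling named above is conventionally done with the DerSimonian-Laird estimator, sketched below for per-study effect estimates and variances. This is the standard textbook method, not a reproduction of the review's computation, and the per-study inputs would come from the included studies rather than being shown here.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of study-level estimates."""
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                               # fixed-effect weights
    fe = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled estimate
    q = np.sum(w * (effects - fe) ** 2)       # Cochran's Q heterogeneity
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # I^2 statistic
    return pooled, ci, i2
```

The I^2 values reported in the review (88.5% and 92-93%) correspond to the `i2` output here; values that high indicate that most of the observed variability reflects genuine between-study heterogeneity rather than sampling error, which is why a random-effects rather than fixed-effect model was appropriate.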
Genomewide meta-analysis identifies loci associated with IGF-I and IGFBP-3 levels with impact on age-related traits
The growth hormone/insulin-like growth factor (IGF) axis can be manipulated in animal models to promote longevity, and IGF-related proteins including IGF-I and IGF-binding protein-3 (IGFBP-3) have also been implicated in risk of human diseases including cardiovascular diseases, diabetes, and cancer. Throug
Indian Summer Monsoon variations and competing influences between hemispheres since ~35 ka recorded in Tengchongqinghai Lake, southwest China
The southwestern Yunnan Province of China, which is located at the southeastern margin of the Tibetan Plateau and close to the Bay of Bengal, is significantly influenced by the Indian Summer Monsoon (ISM). In this study, we reconstruct proxies for the ISM from 35 to 1 ka through detailed analysis of grain-size distribution, geochemical composition and environmental magnetism from a 7.96 m sediment core from Tengchongqinghai Lake, Yunnan Province, China. Globally recognized abrupt climatic events, including Heinrich Events 0–3 (H0−H3) and the Bølling-Allerød (B/A) warm period, are identified in most of our proxies, and the long-term trend is consistent with other published records such as stalagmite oxygen isotopes (δ18O) from Sangxing Cave. Northern Hemisphere (NH) temperature, which is influenced by NH solar insolation, is commonly suggested to play a dominant role in controlling the ISM. A comparison of our record with the δ18O variations of ice cores from Greenland and Antarctica, a sea surface temperature (SST) record from the Bay of Bengal, and summer solar insolation at 25°N latitude demonstrates that the general pattern of ISM change does follow variations in summer insolation; however, the ISM lags summer insolation by thousands of years. While the ISM fluctuations are highly correlated with NH temperature on shorter (centennial-millennial) timescales, the gradually weakened ISM from 22.5 ka until the Last Glacial Maximum (LGM) indicates a close relationship with the rise of Southern Hemisphere (SH) temperature and the relatively cold background of the SH. Our record expands on the findings of ISM records from the Heqing paleolake basin in southwestern China and Arabian Sea sediments, suggesting that the NH and SH have competing influences on the ISM by controlling the cross-equatorial pressure gradient. This relationship means that when NH temperatures are relatively high, the NH has a stronger influence on the ISM than the SH does. In contrast, when SH temperatures are relatively low, the SH has a dominant influence on the ISM. In addition, we speculate that changes in SH temperature not only influence the cross-equatorial pressure gradient directly, but also likely modulate the circulation of ocean energy by influencing the Atlantic Meridional Overturning Circulation (AMOC).
