Challenges and Opportunities for Proton Batteries: From Electrodes, Electrolytes to Full‐Cell Applications
Proton batteries have emerged as a promising solution for grid-scale energy storage, benefiting from their high safety and abundant raw materials. Battery chemistry based on protons is intrinsically advantageous in combining fast diffusion kinetics with high capacities, thus offering great potential to break through the energy limit of capacitors and the power limit of traditional batteries. Significant efforts have been dedicated to advancing proton batteries, leading to successive milestones in recent years. Herein, the recent progress of proton batteries is summarized, and insights are provided into the challenges in electrodes and electrolytes, as well as future opportunities for enhancing full-cell applications. The fundamentals of electrochemical proton storage and representative faradaic electrodes are discussed, delving into their current limitations in mechanism studies and electrochemical performance. Subsequently, the classification, challenges, and strategies for improving protonic electrolytes are addressed. Finally, state-of-the-art proton full cells are explored, and views on the rational design of proton battery devices for achieving high-performance aqueous energy storage are offered.
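As a concrete illustration of the proton-storage fundamentals the review covers (this example is not drawn from the paper itself), a representative faradaic proton-insertion couple such as MnO2/MnOOH and its Nernst potential can be sketched as:

```latex
% Illustrative proton-insertion redox couple (MnO2 is a common proton-battery
% electrode; the review's specific electrode systems may differ):
\mathrm{MnO_2} + \mathrm{H^+} + e^- \rightleftharpoons \mathrm{MnOOH}
% Nernst potential of the couple; with unit solid-phase activities this
% reduces to the familiar ~59 mV-per-pH shift at 25 degrees C:
E = E^{\circ} - \frac{RT}{F}\,
    \ln\frac{a_{\mathrm{MnOOH}}}{a_{\mathrm{MnO_2}}\, a_{\mathrm{H^+}}}
  \;\approx\; E^{\circ} - 0.059\,\mathrm{pH}
```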
Estimation of scrub typhus incidence and spatiotemporal multicomponent characteristics from 2016 to 2023 in Zhejiang Province, China
Background: China is one of the main epidemic areas of scrub typhus, and Zhejiang Province, located in the coastal area of southeastern China, is considered a key region for the disease. However, there may be significant bias in the number of reported cases of scrub typhus, to the extent that its epidemiological patterns are not clearly understood. The purpose of this study was to estimate the possible incidence of scrub typhus and to identify the main driving components affecting its occurrence at the county level.

Methods: Data on patients with scrub typhus diagnosed at medical institutions between January 2016 and December 2023 were collected from the China Disease Control and Prevention Information System (CDCPIS). The kriging interpolation method was used to estimate the possible incidence of scrub typhus. Additionally, a multivariate time series model was applied to identify the main driving components affecting the occurrence of scrub typhus in different regions.

Results: From January 2016 to September 2023, 2,678 cases of scrub typhus were reported in Zhejiang Province, including 1 reported death, for an overall case fatality rate of 0.04%. The seasonality of scrub typhus in Zhejiang Province followed a single annual peak, and the month of peak onset differed across cities. The estimated area with case occurrence was wider than the area with reported cases. There were 41 counties in Zhejiang Province with an annual reported case count of less than 1, whereas based on the estimated annual incidence, the number of counties with less than 1 case decreased to 21. The average annual number of cases in most regions fluctuated between 0 and 15. The numbers of cases in the central urban areas of Hangzhou, Jiaxing and Huzhou did not exceed 5. The estimated random effect variance parameters σ_λ², σ_φ², and σ_ν² were 0.48, 1.03 and 3.48, respectively. The counties with the top 10 endemic component values were Suichang, Cangnan, Chun'an, Xinchang, Pingyang, Xianju, Longquan, Dongyang, Yueqing and Qingyuan. The counties with the top 10 spatiotemporal component values were Pujiang, Anji, Pan'an, Dongyang, Jinyun, Ninghai, Yongjia, Xiaoshan, Yiwu and Shengzhou. The counties with the top 10 autoregressive component values were Lin'an, Cangnan, Chun'an, Yiwu, Pujiang, Longquan, Xinchang, Luqiao, Sanmen and Fuyang.

Conclusion: The estimated incidence was higher than the currently reported number of cases, and the possible impact area of the epidemic was also wider than the areas with reported cases. The main driving factors of the scrub typhus epidemic in Zhejiang included endemic components such as natural factors, but there was significant heterogeneity in the composition of driving factors across regions. Some regions were driven by spatiotemporal spread across regions, and the temporal autoregressive effect in individual regions could not be ignored. These results suggest that monitoring of scrub typhus cases, vectors, and pathogens should be strengthened. Furthermore, each region should take targeted prevention and control measures based on the main driving factors of the local epidemic to improve the accuracy of prevention and control.
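The endemic, spatiotemporal, and autoregressive decomposition reported above matches the standard endemic-epidemic (hhh4-type) multivariate time series formulation; assuming the study follows that usual form (the exact specification is not given in the abstract), the model for counts Y_it in county i and month t reads:

```latex
% Endemic-epidemic (hhh4-type) model sketch; an assumed standard form, not
% quoted from the paper:
Y_{it} \mid \text{past} \sim \operatorname{NegBin}(\mu_{it}, \psi), \qquad
\mu_{it} = \underbrace{\lambda_i\, Y_{i,t-1}}_{\text{autoregressive}}
         + \underbrace{\phi_i \sum_{j \neq i} w_{ji}\, Y_{j,t-1}}_{\text{spatiotemporal}}
         + \underbrace{e_{it}\, \nu_{it}}_{\text{endemic}}
% County-level random effects, whose variances correspond to the reported
% parameters \sigma_\lambda^2, \sigma_\phi^2, \sigma_\nu^2:
\log \lambda_i \sim \mathcal{N}(\alpha_\lambda, \sigma_\lambda^2), \quad
\log \phi_i \sim \mathcal{N}(\alpha_\phi, \sigma_\phi^2), \quad
\log \nu_i \sim \mathcal{N}(\alpha_\nu, \sigma_\nu^2)
```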
DPL: Decoupled Prompt Learning for Vision-Language Models
Prompt learning has emerged as an efficient and effective approach for
transferring foundational Vision-Language Models (e.g., CLIP) to downstream
tasks. However, current methods tend to overfit to seen categories, thereby
limiting their generalization ability for unseen classes. In this paper, we
propose a new method, Decoupled Prompt Learning (DPL), which reformulates the
attention in prompt learning to alleviate this problem. Specifically, we
theoretically investigate the collaborative process between prompts and
instances (i.e., image patches/text tokens) by reformulating the original
self-attention into four separate sub-processes. Through detailed analysis, we
observe that certain sub-processes can be strengthened through approximation
techniques to bolster robustness and generalizability. Furthermore, we
introduce language-conditioned textual prompting based on decoupled attention
to naturally preserve the generalization of text input. Our approach is
flexible for both visual and textual modalities, making it easily extendable to
multi-modal prompt learning. By combining the proposed techniques, our approach
achieves state-of-the-art performance on three representative benchmarks
encompassing 15 image recognition datasets, while maintaining
parameter efficiency. Moreover, our DPL does not rely on any auxiliary
regularization task or extra training data, further demonstrating its
remarkable generalization ability. Comment: 11 pages, 5 figures, 8 tables
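To make the decoupling concrete, here is a minimal sketch (not the authors' code) of how self-attention over a joint sequence of prompts and instance tokens splits into the four sub-processes the abstract refers to; the specific approximations DPL applies to individual sub-processes are in the paper:

```python
# Split self-attention over [prompts; instance tokens] into its four
# sub-processes: prompt->prompt, prompt->instance, instance->prompt,
# and instance->instance. Illustrative sketch only.
import torch
import torch.nn.functional as F

def decoupled_attention(prompts, tokens, d):
    """prompts: (P, d) learnable prompts; tokens: (N, d) image patches or text tokens."""
    z = torch.cat([prompts, tokens], dim=0)      # (P+N, d) joint sequence
    scores = z @ z.T / d ** 0.5                  # raw attention logits
    P = prompts.shape[0]
    pp, pi = scores[:P, :P], scores[:P, P:]      # prompt->prompt, prompt->instance
    ip, ii = scores[P:, :P], scores[P:, P:]      # instance->prompt, instance->instance
    out = F.softmax(scores, dim=-1) @ z          # standard joint self-attention output
    return out, (pp, pi, ip, ii)

prompts, tokens = torch.randn(4, 64), torch.randn(196, 64)
out, blocks = decoupled_attention(prompts, tokens, d=64)
```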
TexRO: Generating Delicate Textures of 3D Models by Recursive Optimization
This paper presents TexRO, a novel method for generating delicate textures of
a known 3D mesh by optimizing its UV texture. The key contributions are
two-fold. We propose an optimal viewpoint selection strategy that finds the
smallest set of viewpoints covering all the faces of a mesh. Our
viewpoint selection strategy guarantees the completeness of a generated result.
We propose a recursive optimization pipeline that optimizes a UV texture at
increasing resolutions, with an adaptive denoising method that re-uses existing
textures for new texture generation. Through extensive experimentation, we
demonstrate the superior performance of TexRO in terms of texture quality,
detail preservation, visual consistency, and, notably, runtime speed,
outperforming other current methods. The broad applicability of TexRO is
further confirmed through its successful use on diverse 3D models. Comment: Technical report. Project page: https://3d-aigc.github.io/TexR
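The viewpoint-selection step is essentially a set-cover problem: pick the fewest viewpoints whose visible faces jointly cover every mesh face. A greedy approximation is one standard way to tackle it; the sketch below assumes precomputed visibility sets and is not TexRO's exact criterion:

```python
# Greedy set-cover sketch for viewpoint selection; `visible_faces` maps each
# candidate viewpoint id to the set of mesh face ids it sees.
def select_viewpoints(visible_faces, num_faces):
    uncovered = set(range(num_faces))
    chosen = []
    while uncovered:
        # Pick the viewpoint covering the most still-uncovered faces.
        best = max(visible_faces, key=lambda v: len(visible_faces[v] & uncovered))
        gain = visible_faces[best] & uncovered
        if not gain:          # remaining faces visible from no candidate viewpoint
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

views = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}}
print(select_viewpoints(views, num_faces=6))  # -> [0, 2]
```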
Semantics-aware Motion Retargeting with Vision-Language Models
Capturing and preserving motion semantics is essential to motion retargeting
between animation characters. However, most previous works either neglect the
semantic information or rely on human-designed joint-level representations.
Here, we present a novel Semantics-aware Motion reTargeting (SMT) method that
leverages vision-language models to extract and maintain meaningful
motion semantics. We utilize a differentiable module to render 3D motions. Then
the high-level motion semantics are incorporated into the motion retargeting
process by feeding the vision-language model with the rendered images and
aligning the extracted semantic embeddings. To ensure the preservation of
fine-grained motion details and high-level semantics, we adopt a two-stage
pipeline consisting of skeleton-aware pre-training and fine-tuning with
semantics and geometry constraints. Experimental results show the effectiveness
of the proposed method in producing high-quality motion retargeting results
while accurately preserving motion semantics. Comment: Accepted in CVPR202
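A minimal sketch of the fine-tuning objective described above, with assumptions flagged: `encode_image` is a stand-in stub for a frozen vision-language image encoder, and a simple joint-position term stands in for the paper's geometry constraints:

```python
# Combine a geometry constraint with alignment of vision-language embeddings
# of the rendered source and retargeted motions. Illustrative sketch only.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
_PROJ = torch.randn(3 * 224 * 224, 512)     # frozen stand-in for a VLM encoder

def encode_image(images):                    # hypothetical encoder stub
    return images.flatten(1) @ _PROJ         # (B, 512) pseudo-embeddings

def smt_style_loss(src_render, tgt_render, src_joints, tgt_joints, w_sem=1.0):
    e_src = F.normalize(encode_image(src_render), dim=-1)
    e_tgt = F.normalize(encode_image(tgt_render), dim=-1)
    sem = 1.0 - (e_src * e_tgt).sum(-1).mean()   # semantic embedding alignment
    geo = F.mse_loss(tgt_joints, src_joints)      # stand-in geometry constraint
    return geo + w_sem * sem
```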
GGRt: Towards Pose-free Generalizable 3D Gaussian Splatting in Real-time
This paper presents GGRt, a novel approach to generalizable novel view
synthesis that alleviates the need for real camera poses, the complexity of
processing high-resolution images, and lengthy optimization processes, thus
facilitating stronger applicability of 3D Gaussian Splatting (3D-GS) in
real-world scenarios. Specifically, we design a novel joint learning framework
that consists of an Iterative Pose Optimization Network (IPO-Net) and a
Generalizable 3D-Gaussians (G-3DG) model. With the joint learning mechanism,
the proposed framework can inherently estimate robust relative pose information
from the image observations, thus largely eliminating the requirement for
real camera poses. Moreover, we implement a deferred back-propagation mechanism
that enables high-resolution training and inference, overcoming the resolution
constraints of previous methods. To enhance the speed and efficiency, we
further introduce a progressive Gaussian cache module that dynamically adjusts
during training and inference. As the first pose-free generalizable 3D-GS
framework, GGRt achieves inference at 5 FPS and real-time rendering at
100 FPS. Through extensive experimentation, we demonstrate that our
method outperforms existing NeRF-based pose-free techniques in terms of
inference speed and effectiveness, and approaches the performance of 3D-GS
methods that use real camera poses. Our contributions provide a significant leap forward for the
integration of computer vision and computer graphics into practical
applications, offering state-of-the-art results on LLFF, KITTI, and Waymo Open
datasets and enabling real-time rendering for immersive experiences. Comment: Project page: https://3d-aigc.github.io/GGR
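The deferred back-propagation mechanism can be sketched as follows: render the full image without building a graph, compute the loss gradient with respect to the pixels once, then re-render patch by patch with gradients enabled and inject the cached pixel gradient. `render` and its crop argument are hypothetical placeholders, not GGRt's API:

```python
# Deferred back-propagation sketch: full-resolution loss, patch-wise gradients.
import torch
import torch.nn.functional as F

def deferred_backprop(render, target, patch=64):
    with torch.no_grad():
        full = render(None)                      # full-res render, no graph kept
    img = full.detach().requires_grad_(True)
    loss = F.mse_loss(img, target)
    loss.backward()                              # cheap: d(loss)/d(pixels) only
    H, W = target.shape[-2:]
    for y in range(0, H, patch):
        for x in range(0, W, patch):             # re-render each tile with graph
            tile = render((y, x, patch))         # hypothetical crop argument
            tile.backward(gradient=img.grad[..., y:y + patch, x:x + patch])
    return loss.item()
```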
XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis
Thoroughly testing autonomy systems is crucial in the pursuit of safe
autonomous driving vehicles. It necessitates creating safety-critical scenarios
that go beyond what can be safely collected from real-world data, as many of
these scenarios occur infrequently on public roads. However, the evaluation of
most existing NVS methods relies on sporadic sampling of image frames from the
training data, comparing the rendered images with ground-truth images using
standard image-quality metrics. Unfortunately, this evaluation protocol falls short of meeting the
actual requirements in closed-loop simulations. Specifically, the true
application demands the capability to render novel views that extend beyond the
original trajectory (such as cross-lane views), which are challenging to
capture in the real world. To address this, this paper presents a novel driving
view synthesis dataset and benchmark specifically designed for autonomous
driving simulations. This dataset is unique as it includes testing images
captured by deviating from the training trajectory by 1-4 meters. It comprises
six sequences encompassing various time and weather conditions. Each sequence
contains 450 training images, 150 testing images, and their corresponding
camera poses and intrinsic parameters. Leveraging this novel dataset, we
establish the first realistic benchmark for evaluating existing NVS approaches
under front-only and multi-camera settings. The experimental findings
underscore the significant gap that exists in current approaches, revealing
their inadequate ability to fulfill the demanding prerequisites of cross-lane
or closed-loop simulation. Our dataset is released publicly at the project
page: https://3d-aigc.github.io/XLD/. Comment: project page: https://3d-aigc.github.io/XLD
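Evaluating an NVS method on the cross-lane test split reduces to standard image-quality metrics over the 150 held-out views per sequence; a minimal PSNR computation (the dataset's loader and file layout are not specified here, so the pair iterator is left abstract) might look like:

```python
# PSNR over (rendered, ground-truth) uint8 image pairs; the iterable of pairs
# would come from a dataset-specific loader, which is hypothetical here.
import numpy as np

def psnr(rendered, gt, max_val=255.0):
    mse = np.mean((rendered.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def evaluate(pairs):
    scores = [psnr(r, g) for r, g in pairs]
    return sum(scores) / len(scores)     # mean PSNR over the test split
```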
VDG: Vision-Only Dynamic Gaussian for Driving Simulation
Dynamic Gaussian splatting has led to impressive advances in scene
reconstruction and novel-view image synthesis. Existing methods, however, heavily
rely on pre-computed poses and Gaussian initialization by Structure from Motion
(SfM) algorithms or expensive sensors. For the first time, this paper addresses
this issue by integrating self-supervised visual odometry (VO) into our pose-free dynamic
Gaussian method (VDG) to boost pose and depth initialization and static-dynamic
decomposition. Moreover, VDG works with RGB image input only and constructs
dynamic scenes faster and at larger scale than existing pose-free
dynamic view-synthesis methods. We demonstrate the robustness of our approach
via extensive quantitative and qualitative experiments. Our results show
favorable performance over the state-of-the-art dynamic view synthesis methods.
Additional video and source code will be posted on our project page at
https://3d-aigc.github.io/VDG
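One common way to realize the static-dynamic decomposition that VO-based initialization enables (a hedged sketch, not VDG's exact procedure) is to warp the previous frame into the current view using the estimated pose and depth, then flag pixels with large photometric residual as dynamic:

```python
# Photometric-residual dynamic mask; `warp_to_current` is a hypothetical
# helper standing in for a standard depth- and pose-based inverse warp.
import torch

def dynamic_mask(prev_img, cur_img, warp_to_current, thresh=0.1):
    warped = warp_to_current(prev_img)           # (3, H, W), aligned to cur_img
    residual = (warped - cur_img).abs().mean(0)  # per-pixel photometric error
    return residual > thresh                     # True where likely dynamic
```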
