Experimental study of needle-tissue interaction forces: effect of needle geometries, insertion methods and tissue characteristics.
A thorough understanding of needle-tissue interaction mechanics is necessary to optimize needle design, achieve robotic needle steering, and build surgical simulation systems. The interaction is influenced by numerous parameters, which fall into three categories: needle geometry, insertion method, and tissue characteristics. A series of experiments was performed to explore the effect of these factors (n=5 material samples per factor) on the insertion force. Data were collected from different biological tissues and from a tissue-equivalent phantom with similar mechanical properties, using a 1-DOF mechanical testing system instrumented with a 6-DOF force/torque (F/T) sensor. The results indicate that needle penetration comprises three basic phases: deformation, insertion, and extraction. Needle diameter (0.7-3.2 mm), tip geometry (blunt, diamond, conical, and beveled) and bevel angle (10-85°) were found to have a strong influence on the insertion force, as did insertion velocity (0.5-10 mm/s), drive mode (robot-assisted vs. hand-held), and insertion process (interrupted vs. continuous). Different tissues such as skin, muscle, fat, liver capsule and vessel were shown to produce distinct force curves, which can aid in judging the needle position and in devising efficient insertion strategies.
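As a rough illustration of how the three reported phases could be identified in recorded force data, the following Python sketch segments a force-depth trace into deformation, insertion, and extraction phases. The heuristic, threshold, and function names are assumptions made for illustration, not the authors' processing pipeline.

```python
import numpy as np

def segment_phases(depth_mm, force_n, puncture_drop=0.3):
    """Label each sample of a needle insertion trial as 'deformation',
    'insertion', or 'extraction' (hypothetical heuristic, not the
    authors' method).

    - deformation: before the first puncture event, i.e. before the
      first sharp force drop while the needle still advances;
    - insertion:   after puncture while depth keeps increasing;
    - extraction:  once depth starts decreasing (needle retracted).
    """
    depth = np.asarray(depth_mm, dtype=float)
    force = np.asarray(force_n, dtype=float)

    # Extraction starts where the depth begins to decrease.
    velocity = np.gradient(depth)
    retracting = np.where(velocity < 0)[0]
    extraction_start = retracting[0] if retracting.size else len(depth)

    # Puncture: first force drop larger than `puncture_drop` newtons
    # between consecutive samples, before extraction begins.
    dforce = np.diff(force[:extraction_start])
    drops = np.where(dforce < -puncture_drop)[0]
    puncture = drops[0] + 1 if drops.size else extraction_start

    labels = np.empty(len(depth), dtype=object)
    labels[:puncture] = "deformation"
    labels[puncture:extraction_start] = "insertion"
    labels[extraction_start:] = "extraction"
    return labels
```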
Cathode materials for high performance lithium-sulfur batteries
Since the late 20th century, energy crises have attracted worldwide attention. In the last two decades many renewable energy sources have been developed and deployed, including solar, wind, and tidal energy. However, the application of these energy sources is hindered by time and space restrictions; for example, solar energy can only be harvested during the daytime under relatively clear weather. To make full use of these resources, a variety of energy storage devices have been developed. Among them, lithium-ion batteries (LIBs) are the most successful commercialized energy storage devices and are widely used in daily life, in phones, computers, electric vehicles and so on. However, the energy density of LIBs is limited by the theoretical specific capacity of the lithium transition metal oxide cathode. Lithium-sulfur batteries (LSBs), with a theoretical specific capacity of 1675 mA h g-1, are regarded as the most promising next-generation energy storage devices. But several obstacles, including the low conductivity of S and Li2S, the large volume change of S during charge and discharge, and the notorious shuttle effect, stand in the way of the commercialization of LSBs.
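The quoted theoretical capacity follows from Faraday's law for the two-electron reduction of sulfur (S + 2Li+ + 2e- -> Li2S); with n = 2 electrons per sulfur atom and F = 96485 C mol-1, the arithmetic is:

```latex
Q_{\mathrm{th}} \;=\; \frac{nF}{3600\,M_{\mathrm{S}}}
            \;=\; \frac{2 \times 96485\ \mathrm{C\,mol^{-1}}}{3600\ \mathrm{s\,h^{-1}} \times 32.06\ \mathrm{g\,mol^{-1}}}
            \;\approx\; 1672\ \mathrm{mA\,h\,g^{-1}}
```

Rounding the molar mass of sulfur to 32 g mol-1 gives the commonly quoted value of about 1675 mA h g-1.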
In this thesis, two different strategies have been applied to address these problems. First, ZIF-67, a metal-organic framework (MOF), was used as a template to synthesize porous carbon frameworks. The carbon frameworks served as a sulfur host to accommodate the volume change of S and to improve the conductivity of the electrode. Moreover, depending on the experimental conditions, the Co centers in ZIF-67 were converted into cobalt phosphide or cobalt sulphides. These phases have high catalytic activity, accelerating the reactions in the electrodes, alleviating the shuttle effect and thus improving the electrochemical performance.
Second, sulfurized polyacrylonitrile (SPAN) was used as the sulfur source for LSBs. The covalent C-S bonds in SPAN alleviate the shuttle effect by reducing the formation of lithium polysulfides. Carbon nanotubes (CNTs) and Se doping further improved the electrochemical performance of SPAN by increasing the conductivity and accelerating the reactions. Samples with different levels of Se doping were synthesized and characterized to find the optimal conditions. Meanwhile, the structure of the as-synthesized SPAN samples was characterized by a variety of methods to gain insight into the structure of SPAN, which remains a subject of debate among researchers.
Through these two strategies, the shuttle effect in LSBs was reduced and the performance of LSBs was improved: a higher specific capacity and better cycling stability were achieved. At the same time, a better understanding of the working mechanism of LSBs was gained.
On the biography of Yamaoka Tesshū compiled by Ushiyama Eiji
Table S8. Comparison of GD in different studies. MICN is an abbreviation of Modified introduction in China; TS is an abbreviation of Tropical/Subtropical; SS is an abbreviation of Stiff Stalk; NSS is an abbreviation of non-Stiff Stalk; HZS is an abbreviation of Huangzaosi. (XLSX 11 kb)
Contrastive Disentanglement in Generative Adversarial Networks
Disentanglement is defined as the problem of learning a representation that can separate the distinct, informative factors of variation of data. Learning such a representation may be critical for developing explainable and human-controllable Deep Generative Models (DGMs) in artificial intelligence. However, disentanglement in GANs is not a trivial task, as the absence of sample likelihood and posterior inference for latent variables seems to prohibit the forward step. Inspired by contrastive learning (CL), this paper proposes, from a new perspective, contrastive disentanglement in generative adversarial networks (CD-GAN). It aims at disentangling the factors of inter-class variation of visual data through contrasting image features, since the same factor values produce images in the same class. More importantly, we probe a novel way to make use of a limited amount of supervision to the largest extent, to promote inter-class disentanglement performance. Extensive experimental results on many well-known datasets demonstrate the efficacy of CD-GAN for disentangling inter-class variation.
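For readers unfamiliar with the contrastive ingredient, below is a minimal sketch of an InfoNCE-style loss over image features, in which samples sharing a class (i.e. the same factor value) are pulled together and others pushed apart. It illustrates the general mechanism only; the function name, temperature, and the exact CD-GAN objective are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(features, labels, temperature=0.1):
    """Supervised InfoNCE-style contrastive loss (illustrative only).

    features: (N, D) tensor of image feature embeddings
    labels:   (N,)   tensor of class / factor-value ids
    """
    z = F.normalize(features, dim=1)          # work in cosine-similarity space
    sim = z @ z.t() / temperature             # (N, N) similarity logits
    n = z.size(0)

    # Mask out self-similarity on the diagonal.
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))

    # Positives: pairs sharing a label (excluding self-pairs).
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count
    return loss.mean()
```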
What you don't know... can't hurt you? A natural field experiment on relative performance feedback in higher education
This paper studies the effect of providing feedback to college students on their position in the grade distribution, using a natural field experiment. This information was updated every six months during a three-year period. We find that greater grade transparency decreases educational performance, as measured by the number of examinations passed and grade point average (GPA). However, self-reported satisfaction, as measured by surveys conducted after feedback is provided but before students take their examinations, increases. We provide a theoretical framework to understand these results, focusing on the role of prior beliefs and using out-of-trial surveys to test the model. In the absence of treatment, a majority of students underestimate their position in the grade distribution, suggesting that the updated information is “good news” for many students. Moreover, the negative effect on performance is driven by those students who underestimate their position in the absence of feedback. Students who initially overestimate their position, if anything, respond positively. The performance effects are short-lived: by the time students graduate, they have similar accumulated GPA and graduation rates.
Terrain Diffusion Network: Climatic-Aware Terrain Generation with Geological Sketch Guidance
Sketch-based terrain generation seeks to create realistic landscapes for
virtual environments in various applications such as computer games, animation
and virtual reality. Recently, deep learning based terrain generation has
emerged, notably the ones based on generative adversarial networks (GAN).
However, these methods often struggle to satisfy the requirements of flexible user control while maintaining generative diversity for realistic terrain.
Therefore, we propose a novel diffusion-based method, namely terrain diffusion
network (TDN), which actively incorporates user guidance for enhanced
controllability, taking into account terrain features like rivers, ridges,
basins, and peaks. Instead of adhering to a conventional monolithic denoising
process, which often compromises the fidelity of terrain details or the
alignment with user control, a multi-level denoising scheme is proposed to
generate more realistic terrains by taking into account fine-grained details,
particularly those related to climatic patterns influenced by erosion and
tectonic activities. Specifically, three terrain synthesisers are designed for structural, intermediate, and fine-grained levels of denoising, which allows each synthesiser to concentrate on a distinct terrain aspect. Moreover, to maximise the efficiency of our TDN, we further introduce terrain and sketch latent spaces for the synthesisers, obtained with pre-trained terrain autoencoders.
Comprehensive experiments on a new dataset constructed from NASA Topology Images clearly demonstrate the effectiveness of our proposed method, achieving state-of-the-art performance. Our code and dataset will be publicly available.
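A minimal sketch of the coarse-to-fine idea, three conditional synthesisers applied in sequence, is given below. The module names, the simplified refinement loop standing in for full diffusion sampling, and the latent shapes are illustrative assumptions, not the published TDN architecture.

```python
import torch
import torch.nn as nn

class LevelSynthesiser(nn.Module):
    """Placeholder denoiser for one level of detail (hypothetical)."""
    def __init__(self, channels=8):
        super().__init__()
        # Input: noisy latent + sketch latent + previous-level output.
        self.net = nn.Sequential(
            nn.Conv2d(3 * channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, noisy, sketch, coarse):
        return self.net(torch.cat([noisy, sketch, coarse], dim=1))

def generate_terrain(sketch_latent, synthesisers, steps=(50, 30, 20)):
    """Run three synthesisers from structural to fine-grained level,
    each conditioned on the user sketch and the previous level's output.
    The inner loop is a crude iterative refinement standing in for a
    proper diffusion sampler."""
    out = torch.zeros_like(sketch_latent)
    for synth, n_steps in zip(synthesisers, steps):
        x = torch.randn_like(sketch_latent)        # start from noise
        for _ in range(n_steps):
            x = x - 0.1 * (x - synth(x, sketch_latent, out))
        out = x                                    # feed into the next level
    return out
```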
LightGrad: Lightweight Diffusion Probabilistic Model for Text-to-Speech
Recent advances in neural text-to-speech (TTS) models have brought thousands of TTS applications into daily life, where models are deployed in the cloud to provide services for customers. Among these models are diffusion probabilistic models (DPMs), which can be stably trained and are more parameter-efficient than other generative models. As transmitting data between customers and the cloud introduces high latency and the risk of exposing private data, deploying TTS models on edge devices is preferred. When deploying DPMs on edge devices, there are two practical problems. First, current DPMs are not lightweight enough for resource-constrained devices. Second, DPMs require many denoising steps at inference time, which increases latency. In this work, we present
LightGrad, a lightweight DPM for TTS. LightGrad is equipped with a lightweight
U-Net diffusion decoder and a training-free fast sampling technique, reducing
both model parameters and inference latency. Streaming inference is also
implemented in LightGrad to further reduce latency. Compared with Grad-TTS, LightGrad achieves a 62.2% reduction in parameters and a 65.7% reduction in latency, while preserving comparable speech quality on both Chinese Mandarin and English with 4 denoising steps.
Comment: Accepted by ICASSP 202
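The kind of training-free fast sampling referred to above can be illustrated with a generic few-step, deterministic DDIM-style sampler. The update rule below is the standard DDIM step for any noise-prediction network, shown here only as background; it is not LightGrad's actual sampler.

```python
import torch

@torch.no_grad()
def ddim_sample(model, shape, alphas_cumprod, step_indices, device="cpu"):
    """Deterministic DDIM-style sampling over a handful of steps
    (e.g. 4), illustrating how few-step inference cuts latency.

    model:          eps-prediction network, called as model(x_t, t)
    alphas_cumprod: 1-D tensor of cumulative alphas from training
    step_indices:   decreasing list of integer timesteps, e.g. [999, 749, 499, 249]
    """
    x = torch.randn(shape, device=device)
    steps = list(step_indices)
    for i, t in enumerate(steps):
        a_t = alphas_cumprod[t]
        if i + 1 < len(steps):
            a_prev = alphas_cumprod[steps[i + 1]]
        else:
            a_prev = torch.tensor(1.0, device=device)   # final step -> clean sample

        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = model(x, t_batch)                          # predicted noise
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean sample
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps  # jump to next timestep
    return x
```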
Contrast-augmented Diffusion Model with Fine-grained Sequence Alignment for Markup-to-Image Generation
The recently rising markup-to-image generation poses greater challenges as
compared to natural image generation, due to its low tolerance for errors as
well as the complex sequence and context correlations between markup and
rendered image. This paper proposes a novel model named "Contrast-augmented
Diffusion Model with Fine-grained Sequence Alignment" (FSA-CDM), which
introduces contrastive positive/negative samples into the diffusion model to
boost performance for markup-to-image generation. Technically, we design a fine-grained cross-modal alignment module that thoroughly explores the sequence similarity between the two modalities to learn robust feature
representations. To improve the generalization ability, we propose a
contrast-augmented diffusion model to explicitly explore positive and negative
samples by maximizing a novel contrastive variational objective, which is
mathematically inferred to provide a tighter bound for the model's
optimization. Moreover, the context-aware cross attention module is developed
to capture the contextual information within markup language during the
denoising process, yielding better noise prediction results. Extensive
experiments are conducted on four benchmark datasets from different domains,
and the experimental results demonstrate the effectiveness of the proposed components in FSA-CDM, significantly exceeding state-of-the-art performance by about 2%-12% in terms of DTW. The code will be released at https://github.com/zgj77/FSACDM.
Comment: Accepted to ACM MM 2023.
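As background for the context-aware cross-attention idea, a generic cross-attention block in which image (denoising) features attend to encoded markup tokens might look as follows. The class name, dimensions, and residual placement are illustrative assumptions, not FSA-CDM's released module.

```python
import torch
import torch.nn as nn

class MarkupCrossAttention(nn.Module):
    """Generic cross-attention from image features (queries) to markup
    token embeddings (keys/values); illustrative of the mechanism only."""
    def __init__(self, img_dim, txt_dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(img_dim, heads,
                                          kdim=txt_dim, vdim=txt_dim,
                                          batch_first=True)
        self.norm = nn.LayerNorm(img_dim)

    def forward(self, img_tokens, markup_tokens):
        # img_tokens:    (B, N_img, img_dim)  flattened feature-map tokens
        # markup_tokens: (B, N_txt, txt_dim)  encoded markup sequence
        attended, _ = self.attn(img_tokens, markup_tokens, markup_tokens)
        return self.norm(img_tokens + attended)   # residual connection
```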
Don't Chase Your Tail! Missing Key Aspects Augmentation in Textual Vulnerability Descriptions of Long-tail Software through Feature Inference
Augmenting missing key aspects in Textual Vulnerability Descriptions (TVDs)
for software with a large user base (referred to as non-long-tail software) has
greatly advanced vulnerability analysis and software security research.
However, existing methods often overlook software instances that have a limited
user base (referred to as long-tail software) due to limited TVDs, variations
in software features, and domain-specific jargon, which hinders vulnerability
analysis and software repairs. In this paper, we introduce a novel software
feature inference framework designed to augment the missing key aspects of TVDs
for long-tail software. Firstly, we tackle the issue of non-standard software
names found in community-maintained vulnerability databases by
cross-referencing government databases with Common Vulnerabilities and
Exposures (CVEs). Next, we employ Large Language Models (LLMs) to generate the
missing key aspects. However, the limited availability of historical TVDs
restricts the variety of examples. To overcome this limitation, we utilize the
Common Weakness Enumeration (CWE) to classify all TVDs and select cluster
centers as representative examples. To ensure accuracy, we present Natural
Language Inference (NLI) models specifically designed for long-tail software.
These models identify and eliminate incorrect responses. Additionally, we use a
wiki repository to provide explanations for proprietary terms. Our evaluations demonstrate that our approach significantly improves the accuracy of augmenting missing key aspects of TVDs for long-tail software, from 0.27 to 0.56 (+107%). Interestingly, the accuracy for non-long-tail software also increases, from 64% to 71%. As a result, our approach can be useful in various downstream tasks that require complete TVD information.
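A hypothetical end-to-end outline of the pipeline described above (name normalisation, CWE-cluster exemplars, LLM generation, NLI filtering) is sketched below. Every object and method name is a placeholder standing in for a component the paper describes, not the authors' released code.

```python
def augment_tvd(tvd, cve_db, gov_db, cwe_clusters, llm, nli_model,
                wiki_glossary, threshold=0.7):
    """Hypothetical outline of the augmentation pipeline; all arguments
    are placeholders for components described in the abstract above."""
    # 1. Normalise non-standard software names by cross-referencing a
    #    government vulnerability database with CVE entries.
    name = gov_db.resolve_alias(tvd.software_name, cve_db)

    # 2. Few-shot exemplars: centres of the CWE cluster this TVD falls in.
    exemplars = cwe_clusters[tvd.cwe_id].centers

    # 3. LLM generates candidate text for the missing key aspects, with
    #    proprietary terms explained from a wiki glossary.
    glossary = wiki_glossary.lookup(tvd.text)
    prompt = (
        f"Software: {name}\nGlossary: {glossary}\n"
        f"Examples: {exemplars}\n"
        f"Description: {tvd.text}\n"
        "Fill in the missing key aspects of this vulnerability description:"
    )
    candidates = llm.generate(prompt, n=5)

    # 4. NLI filter: keep only aspects entailed by the original TVD,
    #    discarding hallucinated responses.
    return [c for c in candidates
            if nli_model.entailment_score(premise=tvd.text,
                                          hypothesis=c) >= threshold]
```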
