Modified T-F Function Method for Finding Global Minimizer on Unconstrained Optimization
This paper shows that the filled function that appeared in one of the papers by Y. L. Shang et al. (2007) is also a tunneling function; that is, we prove that under some general assumptions this function has the characteristics of both a tunneling function and a filled function. A solution algorithm based on this T-F function is given, and numerical tests on benchmark functions show that our T-F function method is very effective in finding better minima.
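To make the two-phase idea concrete, here is a minimal Python sketch of alternating local descent on f with minimization of a tunneling/filled-style auxiliary function that encourages escape from the current basin. The auxiliary form, the Rastrigin test function, and all parameters are illustrative stand-ins, not the T-F function analyzed in the paper.

```python
# Minimal sketch of a tunneling/filled-function style global search.
# The auxiliary function below is an illustrative stand-in, not the
# paper's T-F function.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Illustrative multimodal test function (Rastrigin), not from the paper.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def tf_search(f, x0, rounds=5, rho=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Phase 1: descend to a local minimizer of f.
    best = minimize(f, x0, method="Nelder-Mead").x
    for _ in range(rounds):
        fstar = f(best)
        def aux(x):
            # Penalize staying near the current minimizer while rewarding
            # points that are no worse than f(best): escape the basin.
            d2 = np.sum((x - best) ** 2)
            return (f(x) - fstar) * np.exp(-d2 / rho**2) - d2
        # Phase 2: minimize the auxiliary function from a perturbed start,
        # then re-minimize f from the escape point.
        xesc = minimize(aux, best + rng.normal(scale=rho, size=best.size),
                        method="Nelder-Mead").x
        cand = minimize(f, xesc, method="Nelder-Mead").x
        if f(cand) < f(best):
            best = cand
    return best

x = tf_search(f, np.array([3.0, -2.5]))
print("approximate global minimizer:", x, "f =", f(x))
```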
Multi-Source Multi-View Clustering via Discrepancy Penalty
With the advance of technology, entities can be observed in multiple views.
Multiple views containing different types of features can be used for
clustering. Although multi-view clustering has been successfully applied in many applications, previous methods usually assume a complete instance mapping between different views. In many real-world applications, information
can be gathered from multiple sources, while each source can contain multiple
views, which are more cohesive for learning. The views under the same source
are usually fully mapped, but they can be very heterogeneous. Moreover, the
mappings between different sources are usually incomplete and partially
observed, which makes it more difficult to integrate all the views across
different sources. In this paper, we propose MMC (Multi-source Multi-view
Clustering), which is a framework based on collective spectral clustering with
a discrepancy penalty across sources, to tackle these challenges. MMC has
several advantages compared with other existing methods. First, MMC can deal with incomplete mappings between sources. Second, it considers the disagreements between sources while treating the views within the same source as a cohesive set.
Third, MMC also tries to infer the instance similarities across sources to
enhance the clustering performance. Extensive experiments conducted on real-world data demonstrate the effectiveness of the proposed approach.
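To illustrate the collective spectral ingredient, the sketch below implements spectral clustering with a cross-view discrepancy penalty in the co-regularization spirit: each view's embedding is pulled toward the subspaces spanned by the other views. It assumes a complete mapping between views, so MMC's multi-source structure and partially observed mappings are deliberately not reproduced.

```python
# Minimal sketch of spectral clustering with a cross-view discrepancy
# penalty (co-regularization style); assumes fully mapped views, unlike MMC.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def coregularized_spectral(views, k, lam=0.5, iters=10):
    # One similarity (kernel) matrix per view over the same n instances.
    Ks = [rbf_kernel(X) for X in views]
    # Initial spectral embeddings: top-k eigenvectors of each kernel.
    Us = [eigh(K)[1][:, -k:] for K in Ks]
    for _ in range(iters):
        for v in range(len(Ks)):
            # Discrepancy penalty: pull view v's embedding toward the
            # subspaces spanned by the other views' embeddings.
            M = Ks[v] + lam * sum(U @ U.T for w, U in enumerate(Us) if w != v)
            Us[v] = eigh(M)[1][:, -k:]
    # Cluster the concatenated per-view embeddings.
    Z = np.hstack(Us)
    return KMeans(n_clusters=k, n_init=10).fit_predict(Z)

# Toy usage: two fully mapped views of the same 60 instances.
rng = np.random.default_rng(0)
X1 = np.vstack([rng.normal(c, 0.3, (20, 2)) for c in (0, 3, 6)])
X2 = X1 + rng.normal(0, 0.2, X1.shape)
print(coregularized_spectral([X1, X2], k=3))
```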
The Analysis and Calculation Method of Urban Rail Transit Carrying Capacity Based on Express-Slow Mode
Urban railway transport that connects suburbs and city areas is characterized by uneven temporal and spatial distribution of passenger flow and by underutilized carrying capacity. This paper aims to develop methodologies to measure the carrying capacity of the urban railway by introducing the concept of the express-slow mode. We first explore factors influencing the carrying capacity under the express-slow mode and the interactive relationships among these factors. Then we establish seven different scenarios to measure the carrying capacity, considering the ratio of express trains to slow trains, the station where overtaking takes place, and the number of overtaking maneuvers. Taking Shanghai Metro Line 16 as an empirical study, the proposed methods to measure the carrying capacity under different express-slow modes are shown to be valid. This paper contributes to the literature by modifying the traditional methods of measuring carrying capacity when different express-slow modes are applied to improve the carrying capacity of the suburban railway.
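For intuition about how the express-slow mix erodes capacity, the following back-of-the-envelope sketch amortizes the extra track occupation caused by each overtaking manoeuvre over the express share of the timetable; the formula and the numbers are a common simplification, not the paper's model.

```python
# Back-of-the-envelope carrying capacity under a mixed express-slow
# timetable; a common simplification, not the paper's model.
def carrying_capacity(headway_s, express_ratio, overtake_loss_s, n_overtakes):
    """Trains per hour on one track.

    headway_s       -- minimum headway between successive trains (seconds)
    express_ratio   -- express trains as a fraction of all trains (0..1)
    overtake_loss_s -- extra track occupation per overtaking manoeuvre (s)
    n_overtakes     -- overtaking manoeuvres per express run
    """
    # Average occupation per train: base headway plus overtaking losses
    # amortized over the express share of the timetable.
    avg_occupation = headway_s + express_ratio * n_overtakes * overtake_loss_s
    return 3600 / avg_occupation

print(carrying_capacity(120, 0.0, 0, 0))    # all-slow service: 30 trains/h
print(carrying_capacity(120, 0.25, 90, 1))  # 1:3 express-slow mix: ~25 trains/h
```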
Enhancing Generation through Summarization Duality and Explicit Outline Control
Automatic open-ended long text generation poses significant challenges due to semantic incoherence and plot implausibility. Previous works usually alleviate this problem through outlines, in the form of short phrases or abstractive signals obtained by designing unsupervised tasks, which tend to be unstable and weakly interpretable.
Assuming that a summary serves as a mature outline, we introduce a two-stage,
summary-enhanced outline supervised generation framework. This framework
leverages the dual characteristics of the summarization task to improve outline
prediction, resulting in more explicit and plausible outlines. Furthermore, we
identify an underutilization issue in outline-based generation with both
standard pretrained language models (e.g., GPT-2, BART) and large language
models (e.g., Vicuna, ChatGPT). To address this, we propose a novel explicit
outline control method for more effective utilization of generated outlines.
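A loose, prompt-level analogue of the two-stage framework can be sketched with off-the-shelf Hugging Face checkpoints; here a generic summarizer stands in for the paper's summary-supervised outline predictor, and plain prompt conditioning stands in for the explicit outline control method.

```python
# Loose two-stage analogue: summarize-as-outline, then generate conditioned
# on the outline. Checkpoints and prompt format are assumptions, not the
# paper's trained models or explicit outline control mechanism.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
generator = pipeline("text-generation", model="gpt2")

def outline_then_generate(context, max_new_tokens=120):
    # Stage 1: a summary of the context stands in for the predicted outline
    # (the summarization-duality intuition).
    outline = summarizer(context, max_length=60, min_length=10)[0]["summary_text"]
    # Stage 2: condition the continuation on the explicit outline.
    prompt = f"Outline: {outline}\nStory so far: {context}\nContinuation:"
    out = generator(prompt, max_new_tokens=max_new_tokens, do_sample=True)
    return out[0]["generated_text"][len(prompt):]

print(outline_then_generate("The expedition reached the ridge at dawn, "
                            "low on water and short one climber."))
```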
Tuning-Free Visual Customization via View Iterative Self-Attention Control
Fine-tuning diffusion models enables a wide range of personalized generation and editing applications across diverse visual modalities. While Low-Rank
Adaptation (LoRA) accelerates the fine-tuning process, it still requires
multiple reference images and time-consuming training, which constrains its
scalability for large-scale and real-time applications. In this paper, we
propose \textit{View Iterative Self-Attention Control (VisCtrl)} to tackle this
challenge. Specifically, VisCtrl is a training-free method that injects the
appearance and structure of a user-specified subject into another subject in
the target image, unlike previous approaches that require fine-tuning the
model. Initially, we obtain the initial noise for both the reference and target
images through DDIM inversion. Then, during the denoising phase, features from
the reference image are injected into the target image via the self-attention
mechanism. Notably, by iteratively performing this feature injection process,
we ensure that the reference image features are gradually integrated into the
target image. This approach results in consistent and harmonious editing with
only one reference image in a few denoising steps. Moreover, benefiting from
our plug-and-play architecture design and the proposed Feature Gradual Sampling
strategy for multi-view editing, our method can be easily extended to edit in
complex visual domains. Extensive experiments show the efficacy of VisCtrl
across a spectrum of tasks, including personalized editing of images, videos,
and 3D scenes.
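The core injection step can be illustrated in a few lines of PyTorch: queries come from the target image's features while keys and values are taken from the reference, so reference appearance flows into the target. The toy feature maps and identity projections below are assumptions for brevity; DDIM inversion, the denoising loop, and the iterative multi-step injection around a real diffusion model are not reproduced.

```python
# Minimal sketch of self-attention feature injection on toy tensors.
# Real VisCtrl hooks this into a diffusion U-Net's self-attention blocks
# across denoising steps; none of that machinery is reproduced here.
import torch

def inject_self_attention(q_tgt, k_ref, v_ref):
    # Queries from the target image's features, keys/values from the
    # reference image, so reference appearance flows into the target.
    d = q_tgt.shape[-1]
    attn = torch.softmax(q_tgt @ k_ref.transpose(-1, -2) / d**0.5, dim=-1)
    return attn @ v_ref

# Toy shapes: batch 1, 256 tokens (a 16x16 latent), 64 channels.
tgt_feats = torch.randn(1, 256, 64)
ref_feats = torch.randn(1, 256, 64)
# In a real block q, k, v would be learned linear projections; identity
# projections keep the sketch short.
edited = inject_self_attention(tgt_feats, ref_feats, ref_feats)
print(edited.shape)  # torch.Size([1, 256, 64])
```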
Application of Machine Learning Optimization in Cloud Computing Resource Scheduling and Management
In recent years, cloud computing has been widely adopted. Cloud computing centralizes computing resources: users access these centralized resources to perform their computations, and the cloud computing center returns the processed results to them. Cloud computing serves not only individual users but also enterprise users. By purchasing
a cloud server, users do not have to buy a large number of computers, saving
computing costs. According to a report by China Economic News Network, the
scale of cloud computing in China has reached 209.1 billion yuan. At present,
the more mature cloud service providers in China are Ali Cloud, Baidu Cloud,
Huawei Cloud and so on. Therefore, this paper proposes an innovative approach
to solve complex problems in cloud computing resource scheduling and management
using machine learning optimization techniques. Through an in-depth study of
challenges such as low resource utilization and unbalanced load in the cloud
environment, this study proposes a comprehensive solution, including
optimization methods such as deep learning and genetic algorithms, to improve system performance and efficiency, and thus bring new breakthroughs and progress to the field of cloud computing resource management.

Rational allocation of resources plays a crucial role in cloud computing. In the resource allocation of cloud computing, the cloud computing center has limited cloud resources, and users arrive in sequence. Each user requests the cloud computing center to use a certain number of cloud resources at a specific time.
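As one concrete instance of the genetic-algorithm ingredient, the sketch below evolves task-to-server assignments to minimize the load of the most-loaded server; the toy demands, fitness function, and hyperparameters are illustrative, not the paper's setup.

```python
# Minimal genetic-algorithm sketch for cloud task placement: minimize the
# load of the most-loaded server. Demands and hyperparameters are toys.
import random

TASKS = [4, 8, 2, 7, 5, 1, 9, 3]   # CPU demand of each task (illustrative)
SERVERS = 3

def makespan(assign):
    # Fitness: the heaviest server's load; lower means better balance.
    load = [0] * SERVERS
    for demand, srv in zip(TASKS, assign):
        load[srv] += demand
    return max(load)

def evolve(pop_size=40, gens=100, mut=0.1):
    pop = [[random.randrange(SERVERS) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)            # rank by fitness
        elite = pop[: pop_size // 2]      # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(TASKS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:               # point mutation
                child[random.randrange(len(TASKS))] = random.randrange(SERVERS)
            children.append(child)
        pop = elite + children
    best = min(pop, key=makespan)
    return best, makespan(best)

print(evolve())  # e.g. ([1, 0, 2, ...], 13)
```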
HAFFormer: A Hierarchical Attention-Free Framework for Alzheimer's Disease Detection From Spontaneous Speech
Automatically detecting Alzheimer's Disease (AD) from spontaneous speech
plays an important role in its early diagnosis. Recent approaches rely heavily on Transformer architectures due to their efficiency in modelling long-range context dependencies. However, the computational complexity of self-attention, which grows quadratically with audio length, poses a challenge when deploying such models on edge devices. In this context, we
construct a novel framework, namely Hierarchical Attention-Free Transformer
(HAFFormer), to better deal with long speech for AD detection. Specifically, we
employ an attention-free module of Multi-Scale Depthwise Convolution to replace
the self-attention and thus avoid the expensive computation, and a GELU-based
Gated Linear Unit to replace the feedforward layer, aiming to automatically
filter out redundant information. Moreover, we design a hierarchical structure that forces the model to learn information at a variety of granularities, from the frame level to the dialogue level. By conducting extensive experiments on the
ADReSS-M dataset, the introduced HAFFormer achieves results (82.6% accuracy) competitive with other recent work, but with a significant reduction in computational complexity and model size compared to the standard Transformer. This shows the efficiency of HAFFormer in dealing with long audio for AD detection.
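A minimal PyTorch sketch of such an attention-free block follows: multi-scale depthwise convolutions mix information along time in place of self-attention, and a GELU-gated linear unit replaces the feed-forward layer. The kernel sizes, width, and residual arrangement are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of an attention-free block in the spirit of HAFFormer:
# multi-scale depthwise convolutions replace self-attention, a GELU-gated
# linear unit replaces the feed-forward layer. Sizes are illustrative.
import torch
import torch.nn as nn

class AttentionFreeBlock(nn.Module):
    def __init__(self, dim, kernels=(3, 7, 15)):
        super().__init__()
        # One depthwise conv per scale; groups=dim keeps cost linear in length.
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, k, padding=k // 2, groups=dim) for k in kernels
        )
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.proj = nn.Linear(dim, 2 * dim)  # produces gate and value branches
        self.out = nn.Linear(dim, dim)
        self.act = nn.GELU()

    def forward(self, x):                            # x: (batch, time, dim)
        h = self.norm1(x).transpose(1, 2)            # Conv1d wants (B, C, T)
        h = sum(conv(h) for conv in self.convs).transpose(1, 2)
        x = x + h                                    # token mixing, no attention
        g, v = self.proj(self.norm2(x)).chunk(2, dim=-1)
        return x + self.out(self.act(g) * v)         # GELU-gated channel mixing

frames = torch.randn(2, 1000, 128)                   # 2 clips, 1000 frames
print(AttentionFreeBlock(128)(frames).shape)         # torch.Size([2, 1000, 128])
```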
