Magnification Control in Self-Organizing Maps and Neural Gas
We consider different ways to control the magnification in self-organizing
maps (SOM) and neural gas (NG). Starting from early approaches to magnification
control in vector quantization, we then concentrate on different approaches for
SOM and NG. We show that three structurally similar approaches can be applied
to both algorithms: localized learning, concave-convex learning, and winner
relaxing learning. The approach of concave-convex learning in SOM is
extended to a more general description, whereas concave-convex learning for
NG is new. In general, the control mechanisms generate only slightly different
behavior in the two neural algorithms. However, we emphasize that the NG
results are valid for any data dimension, whereas in the SOM case the results
hold only for the one-dimensional case.
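As a point of reference for the learning rules being compared, the following is a minimal sketch of a one-dimensional SOM step with a concave-convex-style distortion of the update. The function and parameter names are our own, and the exact placement of the exponent is an illustrative assumption rather than a transcription of the rule analyzed in the paper.

```python
import numpy as np

def som_step_concave_convex(w, x, eps=0.05, sigma=1.0, kappa=1.0):
    """One update of a 1-D SOM chain w for a scalar stimulus x.

    kappa = 1 recovers the standard SOM rule; kappa != 1 warps the
    update by a power of the winner distance, in the spirit of
    concave-convex learning (illustrative form, an assumption).
    """
    s = np.argmin(np.abs(w - x))                    # winner index
    r = np.arange(len(w))
    h = np.exp(-((r - s) ** 2) / (2 * sigma**2))    # neighborhood function
    d = x - w
    step = np.sign(d) * np.abs(d) ** kappa          # warped distance term
    return w + eps * h * step
```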
Magnification Control in Winner Relaxing Neural Gas
An important goal in neural map learning, which can conveniently be
accomplished by magnification control, is to achieve information optimal coding
in the sense of information theory. In the present contribution we consider the
winner relaxing approach for the neural gas network. Originally, winner
relaxing learning is a slight modification of the self-organizing map learning
rule that allows for adjustment of the magnification behavior by an a priori
chosen control parameter. We transfer this approach to the neural gas
algorithm. The magnification exponent can be calculated analytically for
arbitrary dimension from a continuum theory, and the entropy of the resulting
map is studied numerically, confirming the theoretical prediction. The
influence of a diagonal term, which can be added without impacting the
magnification, is studied numerically. This approach to maps of maximal mutual
information is interesting for applications, as the winner relaxing term only
adds computational cost of the same order and is easy to implement. In
particular, it is not necessary to estimate the generally unknown data
probability density, as in other magnification control approaches.
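To make the transfer concrete, here is a minimal sketch of one winner-relaxing neural gas step. The standard rank-based NG update is well established; the sign and normalization of the winner-relaxing term follow the general idea described above and are assumptions, not the paper's exact rule.

```python
import numpy as np

def wrng_step(w, x, eps=0.05, lam=1.0, mu=0.5):
    """One winner-relaxing neural gas step (illustrative sketch).

    w: (N, d) codebook, x: (d,) stimulus. Standard NG updates every
    unit by its distance rank; the relaxing term (scaled by mu) is
    added to the winner only, so the extra cost per step is of the
    same order as plain NG.
    """
    dists = np.linalg.norm(w - x, axis=1)
    ranks = np.argsort(np.argsort(dists))      # k_r: 0 for the winner
    h = np.exp(-ranks / lam)                   # neighborhood in rank space
    delta = eps * h[:, None] * (x - w)         # standard NG update
    s = np.argmin(dists)                       # winner index
    relax = delta.sum(axis=0) - delta[s]       # sum over non-winners
    delta[s] -= mu * relax                     # winner-relaxing term
    return w + delta
```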
The Intracellular Loop of the Glycine Receptor: It’s not all about the Size
The family of Cys-loop receptors (CLRs) shares a high degree of homology and sequence identity. The overall structural elements are highly conserved, with a large extracellular domain (ECD) harboring an α-helix and 10 β-sheets. Following the ECD, four transmembrane domains (TMDs) are connected by intracellular and extracellular loop structures. Except for the TM3-4 loop, these loops comprise 7-14 residues. The TM3-4 loop forms the largest part of the intracellular domain (ICD) and is the most variable region among all CLRs. The ICD is defined by the TM3-4 loop together with the TM1-2 loop preceding the ion channel pore. During the last decade, crystallization approaches were successful for some members of the CLR family. To allow crystallization, the intracellular loop was in most structures replaced by a short linker present in prokaryotic CLRs. Therefore, no structural information about the large TM3-4 loop of CLRs, including the glycine receptors (GlyRs), is available except for some basic stretches close to TM3 and TM4. The intracellular loop has been intensively studied with regard to functional aspects including desensitization, modulation of channel physiology by pharmacological substances, posttranslational modifications, and motifs important for trafficking. Furthermore, the ICD interacts with scaffold proteins enabling inhibitory synapse formation. This review focuses on attempts to define structural and functional elements within the ICD of GlyRs, discussed against the background of protein-protein interactions and functional channel formation in the absence of the TM3-4 loop.
Batch and median neural gas
Neural Gas (NG) constitutes a very robust clustering algorithm for
Euclidean data which does not suffer from the problem of local minima, like
simple vector quantization, or from topological restrictions, like the
self-organizing map. Based on the cost function of NG, we introduce a batch
variant of NG which shows much faster convergence and which can be interpreted
as an optimization of the cost function by the Newton method. This formulation
has the additional benefit that, based on the notion of the generalized median
in analogy to Median SOM, a variant for non-vectorial proximity data can be
introduced. We prove convergence of batch and median versions of NG, SOM, and
k-means in a unified formulation, and we investigate the behavior of the
algorithms in several experiments.
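A minimal sketch of the batch scheme described above: each epoch alternates a closed-form rank assignment with a recomputation of every prototype as an h-weighted mean of all data. The annealing schedule and parameter names are assumptions for illustration.

```python
import numpy as np

def batch_neural_gas(X, n_proto=10, n_iter=50, lam0=5.0, seed=0):
    """Batch NG sketch: alternate rank assignment and weighted means.

    X: (n, d) data. Prototypes are recomputed in closed form each
    epoch, which is the batch analogue of the online rank-based NG
    update and converges in far fewer passes.
    """
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_proto, replace=False)]
    for t in range(n_iter):
        lam = lam0 * (0.01 / lam0) ** (t / max(n_iter - 1, 1))  # anneal
        d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
        ranks = np.argsort(np.argsort(d, axis=1), axis=1)   # k_ij per datum
        h = np.exp(-ranks / lam)                             # (n, n_proto)
        W = (h.T @ X) / h.sum(axis=0)[:, None]               # weighted means
    return W
```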
Regularization in Relevance Learning Vector Quantization Using l1 Norms
We propose in this contribution a method for l1 regularization in prototype-based relevance learning vector quantization (LVQ) to obtain sparse relevance profiles. Sparse relevance profiles in hyperspectral data analysis suppress those spectral bands which are not necessary for classification. In particular, we consider the sparsity in the relevance profile enforced by LASSO optimization. The latter is obtained by a gradient learning scheme using a differentiable parametrized approximation of the l1-norm, which has an upper error bound. We extend this regularization idea also to the matrix learning variant of LVQ as the natural generalization of relevance learning.
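For illustration, here is one common differentiable l1 surrogate and the corresponding gradient step on a relevance profile. The sqrt-based surrogate, the step sizes, and the clipping/renormalization are assumptions standing in for the paper's parametrized approximation.

```python
import numpy as np

def l1_penalty_grad(lmbda, beta=1e-4):
    """Gradient of the smooth l1 surrogate sum(sqrt(l_i^2 + beta)),
    one standard differentiable approximation of the l1 norm."""
    return lmbda / np.sqrt(lmbda ** 2 + beta)

def update_relevances(lmbda, data_grad, eta=0.01, alpha=0.1):
    """One gradient step on an LVQ relevance profile with the smooth
    l1 term added to the data-term gradient (illustrative sketch)."""
    lmbda = lmbda - eta * (data_grad + alpha * l1_penalty_grad(lmbda))
    lmbda = np.clip(lmbda, 0.0, None)   # keep relevances nonnegative
    return lmbda / lmbda.sum()          # normalize to sum one
```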
Investigation of topographical stability of the concave and convex Self-Organizing Map variant
We investigate, in a systematic numerical study, the parameter dependence of
the stability of the Kohonen Self-Organizing Map and of the Zheng and Greenleaf
concave and convex learning rule with respect to different input distributions,
input dimensions, and output dimensions.
Winner-relaxing and winner-enhancing Kohonen maps: Maximal mutual information from enhancing the winner
The magnification behaviour of a generalized family of self-organizing
feature maps, the Winner-Relaxing and Winner-Enhancing Kohonen algorithms, is
analyzed by the magnification law in the one-dimensional case, which can be
obtained analytically. The winner-enhancing case allows one to achieve a
magnification exponent of one and therefore provides an optimal mapping in the
sense of information theory. A numerical verification of the magnification law
is included, and the ordering behaviour is analyzed. Compared to the original
Self-Organizing Map and some other approaches, the generalized Winner-Enhancing
Algorithm requires minimal extra computation per learning step and is easy to
implement. For an extended version refer to cond-mat/0208414
(Neural Computation 17, 996-1009).
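For orientation: the classical Ritter-Schulten result gives a magnification exponent of 2/3 for the standard one-dimensional SOM, and the winner-relaxing parameter shifts this exponent. The λ-dependence sketched below is our reading, chosen to be consistent with the special cases stated in the abstract, and should be checked against the extended version.

```latex
% Stationary codebook density versus stimulus density in 1-D:
\rho(w) \propto p(w)^{\alpha}, \qquad
\alpha_{\mathrm{SOM}} = \tfrac{2}{3}, \qquad
\alpha_{\mathrm{WRK}}(\lambda) = \frac{2}{3+\lambda},
% so that \lambda = 0 recovers 2/3, while \lambda = -1
% (winner-enhancing) yields \alpha = 1, an information-optimal map.
```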
Time-dependent Moisture Distribution in Drying Cement Mortars - Results of Neutron Radiography and Inverse Analysis of Drying Tests
09081 Abstracts Collection -- Similarity-based learning on structures
From 15.02. to 20.02.2009, the Dagstuhl Seminar 09081 "Similarity-based learning on structures" was held in Schloss Dagstuhl - Leibniz Center for Informatics.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
