Perceptron learning with random coordinate descent
A perceptron is a linear threshold classifier that separates examples with a hyperplane. It is perhaps the simplest learning model that is used standalone. In this paper, we propose a family of random coordinate descent algorithms for perceptron learning on binary classification problems. Unlike most perceptron learning algorithms, which require smooth cost functions, our algorithms directly minimize the training error, and usually achieve the lowest training error compared with other algorithms. The algorithms are also computationally efficient. Such advantages make them favorable for both standalone use and ensemble learning, on problems that are not linearly separable. Experiments show that our algorithms work very well with AdaBoost, and achieve the lowest test errors for half of the datasets.
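The abstract does not spell out the update rule, so the sketch below is only a plausible rendering of the idea: repeatedly pick a random coordinate of the (bias-augmented) weight vector and search candidate values for it, keeping whichever value yields the lowest training error. The candidate grid and every parameter choice here are assumptions, not the paper's actual procedure.

```python
import numpy as np

def zero_one_loss(w, X, y):
    # Fraction of examples misclassified by the hyperplane w.
    # X already has a bias column appended; labels y are in {-1, +1}.
    return np.mean(np.sign(X @ w) != y)

def random_coordinate_descent(X, y, n_iters=1000, n_candidates=50, seed=0):
    # Sketch: at each step, pick one coordinate of w at random, try a
    # handful of candidate values for it, and keep whichever value gives
    # the lowest training error (0/1 loss) seen so far.
    rng = np.random.default_rng(seed)
    X = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = rng.standard_normal(X.shape[1])
    best = zero_one_loss(w, X, y)
    for _ in range(n_iters):
        j = rng.integers(X.shape[1])           # random coordinate
        keep = w[j]
        for c in rng.standard_normal(n_candidates) * 3.0:  # candidate values
            w[j] = c
            err = zero_one_loss(w, X, y)
            if err < best:
                best, keep = err, c
        w[j] = keep                            # restore best value found
    return w, best
```

Because the search criterion is the 0/1 loss itself rather than a smooth surrogate, the sketch tracks the training error directly, which is the property the abstract highlights for nonseparable data.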
Spontaneous Symmetry Breaking and Chiral Symmetry
In this introductory lecture, some basic features of spontaneous symmetry breaking are discussed. More specifically, the σ-model, non-linear realization, and some examples of spontaneous symmetry breaking in non-relativistic systems are discussed in detail. The approach here is more pedagogical than rigorous, and the purpose is to give a simple explanation of some useful topics in this rather wide area.
Comment: Lecture delivered at the VII Mexico Workshop on Particles and Fields, Merida, Yucatan, Mexico, Nov 10-17, 199
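For orientation (this is standard textbook material, not quoted from the lecture), the linear σ-model illustrates the mechanism the abstract refers to: a potential whose minimum sits at a nonzero field value, so that picking a vacuum breaks the symmetry and produces massless Goldstone modes.

```latex
% Linear sigma-model: an O(N)-symmetric Lagrangian (standard form,
% assumed here rather than taken from the lecture notes).
\begin{align}
  \mathcal{L} = \tfrac{1}{2}\,\partial_\mu \phi^a \,\partial^\mu \phi^a
  - \tfrac{\lambda}{4}\bigl(\phi^a \phi^a - v^2\bigr)^2
\end{align}
% The potential is minimized on the sphere \phi^a \phi^a = v^2; choosing
% a vacuum, e.g. \phi = (0,\dots,0,v), breaks O(N) down to O(N-1),
% leaving N-1 massless Goldstone bosons (the pions, in the chiral case).
```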
Disclosure and Cross-listing: Evidence from Asia-Pacific Firms
Purpose – The purpose of this paper is to examine whether both the country-level disclosure environment and firm-level disclosures are associated with cross-listing in the USA, in London, or neither.
Design/methodology/approach – The authors test the association using a sample of Asia-Pacific firms covered in the Standard and Poor's 2001/2002 disclosure survey, capturing country-level disclosure with the Center for International Financial Analysis and Research (CIFAR) score. Firm-level disclosure is measured using the S&P disclosure score. The authors conduct a logistic regression analysis and a two-stage least squares analysis to examine whether the outcome, cross-listing or not, is associated with the country disclosure environment and firm-level disclosures.
Findings – The authors find that Asia-Pacific firms from weak disclosure environments that have higher firm-level disclosure scores are more likely to seek listing in the USA. Further, the paper provides initial evidence that these Asia-Pacific firms are as likely to seek listing in London as in the USA. No significant difference was found in S&P scores between US and London cross-listings after controlling for the effects of other variables, suggesting that firms that cross-list in London present disclosure levels similar to those of firms that cross-list in the USA.
Originality/value – The paper's findings contribute to the cross-listing literature on disclosure by showing that the interaction between firm-level and country-level disclosure has an impact on whether a firm cross-lists in the USA or London or not at all. The authors' comparison of US cross-listings versus London cross-listings provides the first evidence that disclosures of US and London cross-listings are not significantly different.
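The abstract names the estimation strategy but not the specification. As a rough illustration only, a logistic regression of the cross-listing outcome on the two disclosure scores and their interaction might look as follows; the data file and every column name are hypothetical stand-ins, and the paper's actual model includes controls and a two-stage least squares step not sketched here.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset and column names; the paper's variable set
# (firm size, leverage, controls, etc.) is richer than this.
df = pd.read_csv("asia_pacific_firms.csv")

# Outcome: 1 if the firm cross-lists in the USA or London, else 0.
y = df["cross_listed"]

# Firm-level S&P score, country-level CIFAR score, and their
# interaction, mirroring the interaction effect the abstract describes.
X = df[["sp_score", "cifar_score"]].copy()
X["sp_x_cifar"] = X["sp_score"] * X["cifar_score"]
X = sm.add_constant(X)

model = sm.Logit(y, X).fit()
print(model.summary())
```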
Two problems related to the Smarandache function
The main purpose of this paper is to study the solvability of some equations involving the pseudo-Smarandache function Z(n) and the Smarandache reciprocal function Sc(n), and to propose some interesting conjectures.
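The abstract assumes these functions are known. For reference, the pseudo-Smarandache function is standardly defined as Z(n) = min{m ≥ 1 : n divides m(m+1)/2}; a direct, unoptimized sketch of that definition follows. Sc(n) is not sketched, since its definition is less widely standardized.

```python
def pseudo_smarandache(n: int) -> int:
    # Z(n): the smallest positive integer m such that n divides the
    # triangular number 1 + 2 + ... + m = m(m+1)/2 (standard definition;
    # brute force, adequate for small n).
    m = 1
    while (m * (m + 1) // 2) % n != 0:
        m += 1
    return m

# Example: Z(10) = 4, since 1 + 2 + 3 + 4 = 10 is divisible by 10.
assert pseudo_smarandache(10) == 4
```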
Optimizing 0/1 Loss for Perceptrons by Random Coordinate Descent
The 0/1 loss is an important cost function for perceptrons. Nevertheless, it cannot be easily minimized by most existing perceptron learning algorithms. In this paper, we propose a family of random coordinate descent algorithms to directly minimize the 0/1 loss for perceptrons, and prove their convergence. Our algorithms are computationally efficient, and usually achieve the lowest 0/1 loss compared with other algorithms. Such advantages make them favorable for nonseparable real-world problems. Experiments show that our algorithms are especially useful for ensemble learning, and could achieve the lowest test error for many complex data sets when coupled with AdaBoost.
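No reference implementation of the paper's algorithms is assumed here. As a minimal sketch of the AdaBoost pairing the abstract describes, the snippet below boosts scikit-learn's ordinary Perceptron, a stand-in weak learner trained by mistake-driven updates rather than by random coordinate descent.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

# Synthetic, deliberately noisy (nonseparable) data as a placeholder
# for the paper's benchmark datasets.
X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Perceptron.fit accepts sample_weight, so it can serve as an AdaBoost
# base learner; "SAMME" only needs hard predictions (no predict_proba).
# (The keyword is base_estimator in scikit-learn < 1.2.)
booster = AdaBoostClassifier(estimator=Perceptron(max_iter=50),
                             algorithm="SAMME", n_estimators=100)
booster.fit(X_tr, y_tr)
print("ensemble test error:", 1.0 - booster.score(X_te, y_te))
```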
