Neurons and symbols: a manifesto
We discuss the purpose of neural-symbolic integration, including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty.
The connectionist inductive learning and logic programming system
First-order logic learning in artificial neural networks
Artificial Neural Networks have previously been applied in neuro-symbolic learning to learn ground logic program rules. However, there are few results on learning relations with neuro-symbolic methods. This paper presents the system PAN, which can learn relations. The inputs to PAN are one or more atoms, representing the conditions of a logic rule, and the output is the conclusion of the rule. The symbolic inputs may include functional terms of arbitrary depth and arity, and the output may include terms constructed from the input functors. Symbolic inputs are encoded as integers using an invertible encoding function, which is applied in reverse to extract the output terms. The main advance of this system is a convention that allows the construction of Artificial Neural Networks able to learn rules with the same expressive power as first-order definite clauses. The system is tested on three examples and the results are discussed.
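The abstract does not specify PAN's actual encoding function; as an illustration of how functional terms of arbitrary depth can be mapped to integers and back, here is a minimal sketch built on a Cantor pairing function. The term representation, functor ids and helper names are illustrative assumptions, not PAN's scheme:

```python
import math

def pair(a: int, b: int) -> int:
    """Cantor pairing: a bijection from N x N to N."""
    return (a + b) * (a + b + 1) // 2 + b

def unpair(z: int):
    """Inverse of pair(), recovering (a, b) from z."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    b = z - w * (w + 1) // 2
    return w - b, b

# Terms: a constant i is ("c", i); a unary application f(t) is ("app", f_id, t).
def encode(term) -> int:
    if term[0] == "c":
        return 2 * term[1]                       # constants get even codes
    f_id, sub = term[1], term[2]
    return 2 * pair(f_id, encode(sub)) + 1       # applications get odd codes

def decode(z: int):
    if z % 2 == 0:
        return ("c", z // 2)
    f_id, sub = unpair((z - 1) // 2)
    return ("app", f_id, decode(sub))

term = ("app", 1, ("app", 1, ("c", 0)))          # e.g. s(s(0)), with s as functor 1
z = encode(term)
assert decode(z) == term                         # the encoding is invertible
```

Because the parity bit distinguishes constants from applications and the pairing function is bijective, decoding always recovers the original term, which is the property the abstract relies on for extracting output terms.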
The epidemiology of canine leishmaniasis: transmission rates estimated from a cohort study in Amazonian Brazil
We estimate the incidence rate, serological conversion rate and basic case reproduction number (R0) of Leishmania infantum from a cohort study of 126 domestic dogs exposed to natural infection rates over 2 years on Marajó Island, Pará State, Brazil. The analysis includes new methods for (1) determining the number of seropositives in cross-sectional serological data, (2) identifying seroconversions in longitudinal studies, based on both the number of antibody units and their rate of change through time, (3) estimating incidence and serological pre-patent periods and (4) calculating R0 for a potentially fatal, vector-borne disease under seasonal transmission. Longitudinal and cross-sectional serological (ELISA) analyses gave similar estimates of the proportion of dogs positive. However, longitudinal analysis allowed the calculation of pre-patent periods, and hence the more accurate estimation of incidence: an infection–conversion model fitted by maximum likelihood to serological data yielded seasonally varying per capita incidence rates with a mean of 8·66×10⁻³/day (mean time to infection 115 days, 95% C.L. 107–126 days), and a median pre-patent period of 94 (95% C.L. 82–111) days. These results were used in conjunction with theory and dog demographic data to estimate the basic reproduction number, R0, as 5·9 (95% C.L. 4·4–7·4). R0 is a determinant of the scale of the leishmaniasis control problem, and we comment on the options for control.
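As a quick arithmetic check of the reported figures: under a constant-hazard approximation (an assumption of this sketch; the paper fits seasonally varying rates), the mean time to infection is the reciprocal of the per capita incidence rate:

```python
# Reported mean per capita incidence rate: 8.66e-3 infections per dog per day.
rate = 8.66e-3

# Constant-hazard approximation: mean waiting time = 1 / rate.
mean_time = 1 / rate      # in days

print(round(mean_time))   # → 115, matching the abstract's "mean time to infection 115 days"
```

The agreement with the reported 115 days simply confirms that the two quoted figures are consistent with each other; the confidence limits come from the fitted model, not from this calculation.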
Using inductive types for ensuring correctness of neuro-symbolic computations
Learning Distributed Representations for Multiple-Viewpoint Melodic Prediction
The analysis of sequences is important for extracting information from music owing to its fundamentally temporal nature. In this paper, we present a distributed model based on the Restricted Boltzmann Machine (RBM) for learning melodic sequences. The model is similar to a previous successful neural network model for natural language [2]. It is first trained to predict the next pitch in a given pitch sequence, and then extended to also make use of information in sequences of note durations in monophonic melodies on the same task. In doing so, we also propose an efficient way of representing this additional information that takes advantage of the RBM's structure. Results show that this RBM-based prediction model performs comparably to previously evaluated n-gram models and outperforms them in certain cases. It is able to make use of information present in longer sequences more effectively than n-gram models, while scaling linearly in the number of free parameters required.
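The abstract does not give the model's architecture; as a generic illustration of the RBM machinery involved, here is a minimal binary RBM trained with one-step contrastive divergence (CD-1) on a one-hot "pitch" vector, sketched in NumPy. The dimensions, learning rate and variable names are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 12 one-hot "pitch" classes, 8 hidden feature detectors.
n_visible, n_hidden = 12, 8
W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One contrastive-divergence (CD-1) step on a binary visible vector."""
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)                       # hidden probabilities
    h0 = (rng.random(n_hidden) < ph0).astype(float)   # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + b_v)                     # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + b_h)
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b_v += lr * (v0 - pv1)
    b_h += lr * (ph0 - ph1)

v = np.zeros(n_visible)
v[3] = 1.0                  # train on a single one-hot "pitch"
for _ in range(200):
    cd1_update(v)

# Deterministic reconstruction using hidden probabilities instead of samples:
recon = sigmoid(sigmoid(v @ W + b_h) @ W.T + b_v)
print(recon.argmax())       # the reconstruction peaks at the trained index, 3
```

After training, the reconstruction concentrates probability on the visible unit the model was shown, which is the basic mechanism a next-pitch predictor would build on; the paper's model conditions on preceding context, which this single-vector sketch omits.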
