115 research outputs found
The processing cost of Downward Entailingness: the representation and verification of comparative constructions
We bring experimental considerations to bear on the structure of comparatives and on our understanding of how quantifiers are processed. At issue are mismatches between the standard view of quantifier processing cost and results from speeded verification experiments with comparative quantifiers. We build our case in several steps: 1. We show that the standard view, which attributes processing cost to the verification process, accounts for some aspects of the data but fails to cover the main effect of monotonicity on measured behavior. We derive a prediction of this view for comparatives and show that it is not borne out. 2. We consider potential reasons – experimental and theoretical – for this theory-data mismatch. 3. We describe a new processing experiment with comparative quantifiers, designed to address the experimental concerns. Its results still point to the inadequacy of the standard view. 4. We review the semantics of comparative constructions and their potential processing implications. 5. We revise the definition of quantifier processing cost and tie it to the number of Downward Entailing (DE) operators at Logical Form (LF). We show how this definition successfully reconciles the theory-data mismatch. 6. The emerging picture calls for a distinction between the complexity of verified representations and the complexity of the verification process itself.
Democratization in a passive dendritic tree: an analytical investigation
One way to achieve amplification of distal synaptic inputs on a dendritic tree is to scale the amplitude and/or duration of the synaptic conductance with its distance from the soma. This is an example of what is often referred to as “dendritic democracy”. Although well studied experimentally, to date this phenomenon has not been thoroughly explored from a mathematical perspective. In this paper we adopt a passive model of a dendritic tree with distributed excitatory synaptic conductances and analyze a number of key measures of democracy. In particular, via moment methods we derive laws for the transport, from synapse to soma, of strength, characteristic time, and dispersion. These laws lead immediately to synaptic scalings that overcome attenuation with distance. We follow this with a Neumann approximation of Green’s representation that readily produces the synaptic scaling that democratizes the peak somatic voltage response. Results are obtained for both idealized geometries and for the more realistic geometry of a rat CA1 pyramidal cell. For each measure of democratization we produce and contrast the synaptic scaling associated with treating the synapse as either a conductance change or a current injection. We find that our respective scalings agree up to a critical distance from the soma and we reveal how this critical distance decreases with decreasing branch radius
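The flavor of such a scaling can be illustrated with a far simpler setting than the paper's (an idealized semi-infinite passive cable with a hypothetical space constant, treating the synapse as a current injection; this is not the authors' derivation): steady-state attenuation from input site to soma goes as e^(-x/λ), so boosting the injected synaptic current by e^(x/λ) equalizes the somatic response across distances.

```python
import math

def democratic_scaling(x_um, lam_um=1000.0):
    """Current-injection scaling that cancels the steady-state attenuation
    exp(-x/lambda) of an idealized semi-infinite passive cable.
    lam_um is a hypothetical space constant, not a fitted value."""
    return math.exp(x_um / lam_um)

def soma_response(x_um, i_syn=1.0, lam_um=1000.0):
    # Steady-state somatic voltage (arbitrary units) for input at distance x.
    return i_syn * math.exp(-x_um / lam_um)

# With the scaling applied, distal and proximal synapses produce
# (numerically) equal somatic responses.
for x in (0.0, 250.0, 500.0):
    v = soma_response(x, i_syn=democratic_scaling(x))
    print(f"x = {x:6.1f} um -> somatic response {v:.3f}")
```

The conductance-change case treated in the paper deviates from this current-injection picture beyond a critical distance, which is exactly the contrast the abstract highlights.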
How spiking neurons give rise to a temporal-feature map
A temporal-feature map is a topographic neuronal representation of temporal attributes of phenomena or objects that occur in the outside world. We explain the evolution of such maps by means of a spike-based Hebbian learning rule in conjunction with a presynaptically unspecific contribution in that, if a synapse changes, then all other synapses connected to the same axon change by a small fraction as well. The learning equation is solved for the case of an array of Poisson neurons. We discuss the evolution of a temporal-feature map and the synchronization of the single cells’ synaptic structures, in dependence upon the strength of presynaptic unspecific learning. We also give an upper bound for the magnitude of the presynaptic interaction by estimating its impact on the noise level of synaptic growth. Finally, we compare the results with those obtained from a learning equation for nonlinear neurons and show that synaptic structure formation may profit from the nonlinearity.
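The presynaptically unspecific contribution can be sketched in a few lines (a toy version of the rule described above; the spread fraction eps and the assumption that all listed synapses share one axon are illustrative, not the paper's parameters):

```python
def apply_unspecific_spread(weights, dw, eps=0.05):
    """Hebbian update with a presynaptically unspecific contribution:
    when synapse i changes by dw[i], every other synapse on the same
    axon also changes by the small fraction eps of dw[i]."""
    total = sum(dw)
    # Each synapse gets its own change plus eps times the others' changes.
    return [w + dwi + eps * (total - dwi) for w, dwi in zip(weights, dw)]

# One synapse potentiates; its neighbours on the same axon inherit a
# small fraction of that change.
print(apply_unspecific_spread([1.0, 1.0, 1.0], [0.2, 0.0, 0.0], eps=0.1))
```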
Dendritic Morphology Predicts Pattern Recognition Performance in Multi-compartmental Model Neurons with and without Active Conductances
In this paper we examine how a neuron’s dendritic morphology can affect its pattern recognition performance. We use two different algorithms to systematically explore the space of dendritic morphologies: an algorithm that generates all possible dendritic trees with 22 terminal points, and one that creates representative samples of trees with 128 terminal points. Based on these trees, we construct multi-compartmental models. To assess the performance of the resulting neuronal models, we quantify their ability to discriminate learnt and novel input patterns. We find that the dendritic morphology does have a considerable effect on pattern recognition performance and that the neuronal performance is inversely correlated with the mean depth of the dendritic tree. The results also reveal that the asymmetry index of the dendritic tree does not correlate with the performance for the full range of tree morphologies. The performance of neurons with dendritic tapering is best predicted by the mean and variance of the electrotonic distance of their synapses to the soma. All relationships found for passive neuron models also hold, even in more accentuated form, for neurons with active membranes.
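The mean-depth statistic used as a predictor is easy to compute on a toy tree representation (nested tuples, where a terminal point is None; this is an illustrative encoding, not the paper's tree format):

```python
def terminal_depths(tree, depth=0):
    """Depths of all terminal points of a binary tree.
    A terminal is None; an internal node is a (left, right) tuple."""
    if tree is None:
        return [depth]
    left, right = tree
    return terminal_depths(left, depth + 1) + terminal_depths(right, depth + 1)

def mean_depth(tree):
    depths = terminal_depths(tree)
    return sum(depths) / len(depths)

# Fully asymmetric vs. fully symmetric tree, each with 4 terminal points.
chain = (((None, None), None), None)      # terminal depths 3, 3, 2, 1
balanced = ((None, None), (None, None))   # terminal depths 2, 2, 2, 2
print(mean_depth(chain), mean_depth(balanced))   # 2.25 2.0
```

By the abstract's finding, the balanced tree (smaller mean depth) would be expected to perform better at pattern recognition than the chain-like one.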
What we talk about when we talk about capacitance measured with the voltage-clamp step method
Capacitance is a fundamental neuronal property. One common way to measure capacitance is to deliver a small voltage-clamp step that is long enough for the clamp current to come to steady state, and then to divide the integrated transient charge by the voltage-clamp step size. In an isopotential neuron, this method is known to measure the total cell capacitance. However, in a cell that is not isopotential, this measures only a fraction of the total capacitance. This has generally been thought of as measuring the capacitance of the “well-clamped” part of the membrane, but the exact meaning of this has been unclear. Here, we show that the capacitance measured in this way is a weighted sum of the total capacitance, where the weight for a given small patch of membrane is determined by the voltage deflection at that patch, as a fraction of the voltage-clamp step size. This quantifies precisely what it means to measure the capacitance of the “well-clamped” part of the neuron. Furthermore, it reveals that the voltage-clamp step method measures a well-defined quantity, one that may be more useful than the total cell capacitance for normalizing conductances measured in voltage-clamp in nonisopotential cells
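The weighted-sum result lends itself to a toy illustration (the patch capacitances and deflections below are hypothetical; each weight is the local steady-state voltage deflection as a fraction of the step):

```python
def measured_capacitance(patch_caps, patch_deflections, v_step):
    """Capacitance reported by the voltage-clamp step method:
    a weighted sum of patch capacitances, where each patch's weight is
    its local voltage deflection divided by the clamp step size."""
    return sum(c * dv / v_step for c, dv in zip(patch_caps, patch_deflections))

caps = [10.0, 20.0, 30.0]   # pF per membrane patch (hypothetical values)

# Isopotential cell: every patch sees the full 5 mV step, so the method
# recovers the total capacitance.
print(measured_capacitance(caps, [5.0, 5.0, 5.0], v_step=5.0))   # 60.0

# Non-isopotential cell: distal patches see less of the step and are
# down-weighted, so only a fraction of the total is reported.
print(measured_capacitance(caps, [5.0, 3.0, 1.0], v_step=5.0))   # 28.0
```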
Structural Homeostasis: Compensatory Adjustments of Dendritic Arbor Geometry in Response to Variations of Synaptic Input
As the nervous system develops, there is an inherent variability in the connections formed between differentiating neurons. Despite this variability, neural circuits form that are functional and remarkably robust. One way in which neurons deal with variability in their inputs is through compensatory, homeostatic changes in their electrical properties. Here, we show that neurons also make compensatory adjustments to their structure. We analysed the development of dendrites on an identified central neuron (aCC) in the late Drosophila embryo at the stage when it receives its first connections and first becomes electrically active. At the same time, we charted the distribution of presynaptic sites on the developing postsynaptic arbor. Genetic manipulations of the presynaptic partners demonstrate that the postsynaptic dendritic arbor adjusts its growth to compensate for changes in the activity and density of synaptic sites. Blocking the synthesis or evoked release of presynaptic neurotransmitter results in greater dendritic extension. Conversely, an increase in the density of presynaptic release sites induces a reduction in the extent of the dendritic arbor. These growth adjustments occur locally in the arbor and are the result of the promotion or inhibition of growth of neurites in the proximity of presynaptic sites. We provide evidence that suggests a role for the postsynaptic activity state of protein kinase A in mediating this structural adjustment, which modifies dendritic growth in response to synaptic activity. These findings suggest that the dendritic arbor, at least during early stages of connectivity, behaves as a homeostatic device that adjusts its size and geometry to the level and the distribution of input received. The growing arbor thus counterbalances naturally occurring variations in synaptic density and activity so as to ensure that an appropriate level of input is achieved.
Asymmetric Excitatory Synaptic Dynamics Underlie Interaural Time Difference Processing in the Auditory System
In order to localize sounds in the environment, the auditory system detects and encodes differences in signals between each ear. The exquisite sensitivity of auditory brain stem neurons to the differences in rise time of the excitation signals from the two ears allows for neuronal encoding of microsecond interaural time differences
Biophysical Basis for Three Distinct Dynamical Mechanisms of Action Potential Initiation
Transduction of graded synaptic input into trains of all-or-none action potentials (spikes) is a crucial step in neural coding. Hodgkin identified three classes of neurons with qualitatively different analog-to-digital transduction properties. Despite widespread use of this classification scheme, a generalizable explanation of its biophysical basis has not been described. We recorded from spinal sensory neurons representing each class and reproduced their transduction properties in a minimal model. With phase plane and bifurcation analysis, each class of excitability was shown to derive from distinct spike-initiating dynamics. Excitability could be converted between all three classes by varying single parameters; moreover, several parameters, when varied one at a time, had functionally equivalent effects on excitability. From this, we conclude that the spike-initiating dynamics associated with each of Hodgkin's classes represent different outcomes in a nonlinear competition between oppositely directed, kinetically mismatched currents. Class 1 excitability occurs through a saddle node on invariant circle bifurcation when net current at perithreshold potentials is inward (depolarizing) at steady state. Class 2 excitability occurs through a Hopf bifurcation when, despite net current being outward (hyperpolarizing) at steady state, spike initiation occurs because inward current activates faster than outward current. Class 3 excitability occurs through a quasi-separatrix crossing when fast-activating inward current overpowers slow-activating outward current during a stimulus transient, although slow-activating outward current dominates during constant stimulation. Experiments confirmed that different classes of spinal lamina I neurons express the subthreshold currents predicted by our simulations and, further, that those currents are necessary for the excitability in each cell class. Thus, our results demonstrate that all three classes of excitability arise from a continuum in the direction and magnitude of subthreshold currents. Through detailed analysis of the spike-initiating process, we have explained a fundamental link between biophysical properties and qualitative differences in how neurons encode sensory input.
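The stated continuum can be caricatured as a decision rule over subthreshold currents (a toy summary of the abstract's conclusions, expressed with boolean inputs; this is illustrative only and is not the authors' minimal model):

```python
def hodgkin_class(net_ss_current_is_inward, inward_activates_faster):
    """Toy decision rule summarizing the three outcomes described above.

    Class 1: net perithreshold current inward at steady state
             -> saddle node on invariant circle (SNIC) bifurcation.
    Class 2: net current outward, but inward current activates faster
             -> Hopf bifurcation.
    Class 3: slow outward current dominates except during transients
             -> quasi-separatrix crossing.
    """
    if net_ss_current_is_inward:
        return 1
    return 2 if inward_activates_faster else 3

print(hodgkin_class(True, False))   # 1: repetitive spiking via SNIC
print(hodgkin_class(False, True))   # 2: spiking via Hopf bifurcation
print(hodgkin_class(False, False))  # 3: transient spiking only
```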
