    Adsorption-desorption noise can be used for improving selectivity

    Small chemical sensors are subject to adsorption-desorption fluctuations, which are usually considered noise contaminating the useful signal. Based on the temporal properties of this noise, it is shown that the noise can be made useful if properly processed. Namely, the signal, which characterizes the total amount of adsorbed analyte, should be subjected to a kind of amplitude discrimination (or level-crossing discrimination) with a certain threshold. When the amount is equal to or above the threshold, the result of discrimination is a standard dc signal; otherwise it is zero. Analytes are applied at low concentration: the mean adsorbed amount is below the threshold, and the threshold is reached from time to time thanks to the fluctuations. The signal after discrimination is averaged over a time window and used as the output of the whole device. The selectivity of this device is compared with that of its primary adsorbing sites, based on an explicit description of the threshold-crossing statistics. It is concluded that the whole sensor may have much better selectivity than do its individual adsorbing sites. Comment: 10 pages, 3 figures, 2 tables
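The threshold-discrimination scheme described above can be illustrated with a minimal Monte-Carlo sketch. All numbers below (rates, number of sites, threshold) are hypothetical and chosen only for illustration; the paper's explicit threshold-crossing statistics are not reproduced here, only sampled. The idea: occupancy fluctuates around a sub-threshold mean, and the averaged discriminated signal depends on the tail of the occupancy distribution, which is far more sensitive to the desorption rate than the mean itself.

```python
import random

def simulate_occupancy(k_ads, k_des, n_sites, steps, dt=0.01, seed=0):
    """Discrete-time birth-death sketch of adsorption-desorption noise:
    returns the trajectory of the number of occupied sites."""
    rng = random.Random(seed)
    n = 0
    traj = []
    for _ in range(steps):
        # adsorption onto an empty site, desorption from an occupied one
        p_up = k_ads * (n_sites - n) * dt
        p_down = k_des * n * dt
        r = rng.random()
        if r < p_up:
            n += 1
        elif r < p_up + p_down:
            n -= 1
        traj.append(n)
    return traj

def discriminated_output(traj, threshold):
    """Amplitude discrimination: dc signal (1) while occupancy >= threshold,
    zero otherwise, averaged over the whole observation window."""
    return sum(1 for n in traj if n >= threshold) / len(traj)

# Two hypothetical analytes differing only in desorption rate; both have
# mean occupancy below the threshold of 10 (roughly 6.7 and 4.0 sites).
out_a = discriminated_output(simulate_occupancy(1.0, 2.0, 20, 50000), threshold=10)
out_b = discriminated_output(simulate_occupancy(1.0, 4.0, 20, 50000), threshold=10)
```

Although the mean adsorbed amounts differ only by a factor of about 1.7, the threshold-crossing outputs `out_a` and `out_b` differ by more than an order of magnitude, which is the selectivity gain the abstract refers to.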

    Testing of information condensation in a model reverberating spiking neural network

    Information about the external world is delivered to the brain in the form of spike trains structured in time. During further processing in higher areas, this information is subjected to a certain condensation process, which results in the formation of abstract conceptual images of the external world, apparently represented as certain uniform spiking activity partially independent of the details of the input spike trains. A possible physical mechanism of condensation at the level of an individual neuron was discussed recently. In a reverberating spiking neural network, due to this mechanism, the dynamics should settle down to the same uniform/periodic activity in response to a set of various inputs. Since the same periodic activity may correspond to different input spike trains, we interpret this as a possible candidate for an information condensation mechanism in a network. Our purpose is to test this possibility in a network model consisting of five fully connected neurons, in particular, the influence of the geometric size of the network on its ability to condense information. The dynamics of 20 spiking neural networks of different geometric sizes are modelled by means of computer simulation. Each network was propelled into reverberating dynamics by applying various initial input spike trains, and the dynamics were run until they became periodic. Shannon's formula is used to calculate the amount of information in any input spike train and in any periodic state found. As a result, we obtain an explicit estimate of the degree of information condensation in the networks, and conclude that it depends strongly on the network's geometric size. Comment: 12 pages, 9 figures, 40 references. The content of this work was partially published in abstract form in the abstract book of the 2nd International Biophysics Congress and Biotechnology at GAP & 21st National Biophysics Congress (5-9 Oct. 2009), Diyarbakir, Turkey, http://www.ibc2009.org/. In v2 the ancillary file movie.pdf is added, which offers examples of neuronal network dynamics.
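The notion of a condensation degree can be sketched as follows, under a simplifying assumption: if distinct inputs (and distinct periodic states) are treated as equiprobable, Shannon's formula reduces to the base-2 logarithm of the number of distinct outcomes. The mapping below (8 input trains collapsing onto 2 periodic states) is hypothetical; the paper's actual per-spike-train computation may differ.

```python
import math

def information_bits(outcomes):
    """Shannon information of an equiprobable set: log2 of the number of
    distinct outcomes (the uniform special case of Shannon's formula)."""
    return math.log2(len(set(outcomes)))

def condensation_degree(inputs, final_states):
    """Fraction of input information lost when many input spike trains
    settle onto fewer periodic states."""
    return 1.0 - information_bits(final_states) / information_bits(inputs)

# Hypothetical example: 8 distinct initial spike trains, each driving the
# network into one of only 2 distinct periodic states.
inputs = ["train%d" % i for i in range(8)]
final_states = ["periodic_A"] * 5 + ["periodic_B"] * 3
degree = condensation_degree(inputs, final_states)  # 1 - 1/3, about 0.667
```

Here 3 bits of input information are condensed to 1 bit of output information, giving a condensation degree of two thirds.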

    Output Stream of Binding Neuron with Feedback

    The binding neuron model is inspired by numerical simulation of a Hodgkin-Huxley-type point neuron, as well as by the leaky integrate-and-fire model. In the binding neuron, the trace of an input is remembered for a fixed period of time, after which it disappears completely. This is in contrast with the above two models, where the postsynaptic potentials decay exponentially and can be forgotten only after triggering. The finiteness of memory in the binding neuron allows one to construct fast recurrent networks for computer modeling. Recently, this finiteness was utilized for an exact mathematical description of the output stochastic process when the binding neuron is driven with a Poissonian input stream. In this paper, the simplest networking is considered for the binding neuron: namely, every output spike of the single neuron is immediately fed back into its input. For this construction, externally fed with a Poissonian stream, the output stream is characterized in terms of the interspike interval probability density distribution if the binding neuron has threshold 2. For higher thresholds, the distribution is calculated numerically. The distributions are compared with those found for the binding neuron without feedback, and for the leaky integrator. Sample distributions for the leaky integrator with feedback are calculated numerically as well. It is concluded that even the simplest networking can radically alter spiking statistics. Information condensation at the level of a single neuron is discussed. Comment: Version #1: 4 pages, 5 figures, manuscript submitted to Biological Cybernetics. Version #2 (this version): added 3 pages of new text with additional analytical and numerical calculations, 2 more figures, 11 more references, and a Discussion section.
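The feedback construction described above can be sampled with a short Monte-Carlo sketch. The input rate, memory time tau, and spike count below are hypothetical, and the sketch only samples interspike intervals rather than reproducing the paper's analytical density; the model rules assumed are the ones the abstract states: each stored impulse is forgotten exactly tau after arrival, reaching the threshold triggers a spike and clears the store, and every output spike is immediately fed back as one stored impulse.

```python
import random

def bn_feedback_isi(rate, tau, threshold=2, n_spikes=20000, seed=1):
    """Sample interspike intervals of a binding neuron with instantaneous
    feedback, driven externally by a Poisson stream of the given rate."""
    rng = random.Random(seed)
    isis = []
    t = 0.0
    last_spike = 0.0
    stored = [0.0]          # feedback impulse stored at the last spike time
    while len(isis) < n_spikes:
        t += rng.expovariate(rate)                   # next Poisson input
        stored = [s for s in stored if t - s < tau]  # forget expired traces
        stored.append(t)
        if len(stored) >= threshold:                 # threshold reached
            isis.append(t - last_spike)              # record the interval
            last_spike = t
            stored = [t]                             # immediate feedback
    return isis

isis = bn_feedback_isi(rate=1.0, tau=1.0, threshold=2)
mean_isi = sum(isis) / len(isis)
```

For threshold 2 this sketch makes the feedback effect tangible: the fed-back impulse means the neuron only ever waits for one more input while the feedback trace survives, so the interspike-interval statistics differ markedly from the feedback-free neuron.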