
    Next Generation Simulation Tools: The Systems Biology Workbench and BioSPICE Integration

    Researchers in quantitative systems biology make use of a large number of different software packages for modelling, analysis, visualization, and general data manipulation. In this paper, we describe the Systems Biology Workbench (SBW), a software framework that allows heterogeneous application components—written in diverse programming languages and running on different platforms—to communicate and use each other's capabilities via a fast, binary-encoded message system. Our goal was to create a simple, high-performance, open-source software infrastructure which is easy to implement and understand. SBW enables applications (potentially running on separate, distributed computers) to communicate via a simple network protocol. The interfaces to the system are encapsulated in client-side libraries that we provide for different programming languages. We describe in this paper the SBW architecture, a selection of current modules, including Jarnac, JDesigner, and SBW Metatool, and the close integration of SBW into BioSPICE, which enables both frameworks to share tools and complement and strengthen each other's capabilities.
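    The broker-mediated dispatch at the heart of SBW can be illustrated with a toy sketch. This is not the actual SBW API or wire protocol (which is binary and socket-based, with per-language client libraries); the service name used below is purely hypothetical.

```python
# Toy sketch of broker-mediated module calls, loosely inspired by the SBW
# architecture described above. The real SBW uses a binary message protocol
# over sockets; this only illustrates the dispatch idea, and the service
# name registered here is hypothetical.

class Broker:
    """Registers module services and routes calls between them."""

    def __init__(self):
        self._services = {}

    def register(self, module, service, handler):
        # A module exposes a named service the broker can route to.
        self._services[(module, service)] = handler

    def call(self, module, service, *args):
        # Any client invokes a service by (module, service) name, without
        # knowing where, or in what language, the provider runs.
        return self._services[(module, service)](*args)

broker = Broker()
# A simulator module exposing one numeric service (name is illustrative).
broker.register("Jarnac", "sumConcentrations", lambda values: sum(values))
total = broker.call("Jarnac", "sumConcentrations", [0.5, 1.25, 0.25])
```

    In the real framework the handler would live in a separate process, possibly on another machine, with the broker marshalling arguments over the network.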

    Variability in Singing and in Song in the Zebra Finch

    Variability is a defining feature of the oscine song learning process, reflected in song and in the neural pathways involved in song learning. For the zebra finch, juveniles learning to sing typically exhibit a high degree of vocal variability, and this variability appears to be driven by a key brain nucleus. It has been suggested that this variability is a necessary part of a trial-and-error learning process in which the bird must search for possible improvements to its song. Our work examines the role this variability plays in learning in two ways: through behavioral experiments with juvenile zebra finches, and through a computational model of parts of the oscine brain. Previous studies have shown that some finches exhibit less variability during the learning process than others by producing repetitive vocalizations. A constantly changing song model was played to juvenile zebra finches to determine whether auditory stimuli can affect this behavior. This stimulus was shown to cause an overall increase in repetitiveness; furthermore, there was a correlation between repetitiveness at an early stage in the learning process and the length of time a bird is repetitive overall, and birds that were repetitive tended to repeat the same thing over an extended period of time. The role of a key brain nucleus involved in song learning was examined through computational modeling. Previous studies have shown that this nucleus produces variability in song, but can also bias the song of a bird in such a way as to reduce errors while singing. Activity within this nucleus during singing is predominantly uncorrelated with the timing of the song; however, a portion of this activity is correlated in this manner. The modeling experiments consider the possibility that this persistent signal is part of a trial-and-error search and contrast this with the possibility that the persistent signal is the product of some mechanism to directly improve song. Simulation results show that a mixture of timing-dependent and timing-independent activity in this nucleus produces optimal learning results for the case where the persistent signal is a key component of a trial-and-error search, but not in the case where this signal directly improves song. Although a mixture of timing-locked and timing-independent activity produces optimal results, the ratio found to be optimal within the model differs from what has been observed in vivo. Finally, novel methods for the analysis of birdsong, motivated by the high variability of juvenile song, are presented. These methods are designed to work with sets of song samples rather than through pairwise comparison. The utility of these methods is demonstrated, along with results illustrating how such methods can be used as the basis for aggregate measures of song such as repertoire complexity.
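    The trial-and-error scenario with mixed timing-dependent and timing-independent perturbations can be sketched as a simple stochastic hill climb. This is a deliberately minimal stand-in, not the paper's actual model; the song representation, noise mixture, and all parameter values are illustrative assumptions.

```python
import random

# Toy hill-climbing model of trial-and-error song learning. A song is a
# sequence of pitch values; each trial perturbs it with a mixture of
# timing-dependent noise (a fresh value in every time bin) and
# timing-independent noise (one offset shared across all bins), keeping
# the perturbed song only if it reduces error against the tutor template.
# Parameters are illustrative, not fitted to zebra finch data.

def learn_song(target, n_trials=2000, step=0.1, mix=0.5, seed=0):
    rng = random.Random(seed)
    song = [0.0] * len(target)

    def error(s):
        return sum((a - b) ** 2 for a, b in zip(s, target))

    for _ in range(n_trials):
        shared = rng.gauss(0, step)  # timing-independent component
        trial = [p
                 + mix * rng.gauss(0, step)   # timing-dependent component
                 + (1 - mix) * shared
                 for p in song]
        if error(trial) < error(song):        # keep improvements only
            song = trial
    return song

target = [1.0, -0.5, 0.25, 0.8]   # hypothetical tutor template
learned = learn_song(target)
```

    Varying `mix` between 0 and 1 is the kind of manipulation the modeling experiments describe: how much of the exploratory signal is locked to song timing versus independent of it.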

    Signal Fidelity and Coordination Collapse: A Neural-Centric Model of Performance Failure

    Traditional fatigue models in elite sport emphasize energy depletion or metabolic thresholds, but these do not explain why athletes with preserved physiological capacity often experience technical breakdown. This paper introduces the Neural-Centric Model (NCM), a theoretical framework that reframes fatigue as a degradation in signal fidelity: the nervous system's ability to generate precise, timely motor commands under load. The model outlines a five-stage failure cascade: signal instability, recruitment drift, coordination variability, skill degradation, and performance collapse. Rather than focusing solely on output loss, it emphasizes loss of control as the initiating failure mechanism. Coordination-based proxies such as CMJ timing drift, stride variability, and movement disruption are proposed as early indicators of neural fatigue. The paper integrates motor control, neuromechanics, and training theory, and is positioned as a Stage 1–2 conceptual model (per ARMSS). A case example demonstrates feasibility, and a proposed study design outlines future testing. The model invites researchers to interrogate coordination as a control-layer constraint in performance and challenges systems to prioritize neural stability over volume accumulation.
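    One proposed proxy, CMJ timing drift, reduces to simple longitudinal statistics over repeated trials. A minimal sketch, using hypothetical contraction-time data; the NCM itself does not prescribe a specific formula, so both the metric and the numbers here are illustrative.

```python
import statistics

# Illustrative coordination-based fatigue proxy: drift and variability in
# countermovement-jump (CMJ) contraction timing across repeated trials.
# The data and the two summary statistics are hypothetical examples of
# the kind of early indicator the NCM proposes, not a validated measure.

def timing_drift(contraction_times_ms):
    """Return (coefficient of variation, late-minus-early mean shift in ms)."""
    cv = statistics.stdev(contraction_times_ms) / statistics.mean(contraction_times_ms)
    half = len(contraction_times_ms) // 2
    drift = (statistics.mean(contraction_times_ms[half:])
             - statistics.mean(contraction_times_ms[:half]))
    return cv, drift

# Ten CMJ trials: timing lengthens and grows noisier as the set proceeds.
times = [252, 249, 255, 251, 258, 263, 270, 266, 278, 281]
cv, drift = timing_drift(times)
```

    A rising drift with preserved jump height would be exactly the dissociation the model highlights: control degrading before output does.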

    Applications and developments of chemometric methods for process analytical chemistry

    Traditional process monitoring methods of off-line analysis involve removing a sample from the process and taking it to a centralised analytical laboratory. It takes time for the analytical result to be achieved, and the result is used retrospectively to determine the yield or quality of a batch, not to control the process. This leads to batches being produced that do not meet specifications, which may require re-working, wasting time and money. The process should be monitored to allow control of the batch to ensure it meets specifications first time, and every time. The use of at-line or on-line analysis, such as near infrared spectroscopy, provides quicker process analysis and allows the results to be used to monitor and control the process. These techniques are usually non-destructive, so less waste is produced, and they are safer as they can be located away from the process environment.

    Within the analysis of processes, sampling is a key issue. The sample must be representative of the process to ensure the analysis gives a true indication of the batch. This is a problem when the process is heterogeneous, as a sample taken from one region of the process may give a different analytical result from a sample taken from another region.

    Guided microwave spectroscopy (GMS) has been investigated for its use as an on-line process analyser. The GMS instrument has a sample chamber in which a process can be carried out, and this whole chamber is analysed, which removes the sampling issue. This method is not well understood or widely used in process analysis due to the complicated MW spectra. Near infrared (NIR) spectroscopy is a tried and tested method of process analysis, and many examples exist of its application in industry. The spectra are easy to interpret and relate to the process. The main problem with NIR is that a probe must be used for on-line analysis. This introduces sampling issues, and any process variation, such as a process upset, must be in the vicinity of the probe to be detected.

    In this work, a new process analysis technique, GMS, has been compared to an established technique, NIR, to determine their effectiveness within process analysis. NIR is used as a reference method for the GMS to aid interpretation of the spectra and relate them to the process.

    Various processes have been investigated to determine the effectiveness of NIR and GMS for monitoring them. A drying process has been monitored which has a sampling problem due to the huge cakes of several tonnes of material that are dried. The drying process was first simulated by adding solvent to a material to determine if the process can be monitored and the limits of solvent that can be detected. NIR data were collected using a diffuse reflectance probe. The spectra were found to be unrepresentative of the process, as they relied on the added solvent being in the vicinity of the probe. GMS was used to monitor the process as it provides a representative measurement. Three different systems were analysed: the addition of water to sand, propanol to ascorbic acid, and ethanol to salicylic acid. Simple partial least squares (PLS) models were built to predict the amount of solvent present in the solid sample from the MW spectra. Various pre-processing techniques were examined to produce the best model. The models were built using auto-scaled followed by Box-Cox logarithmically transformed data, and allow prediction of the amount of water in sand, and the amount of propanol in ascorbic acid, down to 1% w/w with relative errors below 5%. The calibration models can predict up to 30% solvent, so the technique was shown to be very useful for monitoring the drying of a solid. The model for the addition of ethanol to salicylic acid gave relative errors of 32%, so this seems to be an unsuitable method. However, models built using above 2% ethanol gave relative errors of only 2%, suggesting the MW spectra are not sensitive to levels of ethanol below this.

    Propanol was then removed from ascorbic acid by drying to prove that the actual drying method can be monitored. The use of principal component analysis (PCA) scores plotted against time and the residuals (process spectra minus the reference dry spectra) shows that the drying process can potentially be monitored in a representative way using MW spectroscopy.

    An esterification reaction has been monitored and various aspects of this process have been investigated. Traditionally, calibration models are built using reference concentration spectra. Ideally, process samples should be used to build the model, which means a reference method such as GC must be used to give concentration data. These methods take time to develop, and within this work it was found difficult to get reproducible results. Calibration-free techniques have been used to extract the concentration profiles of the reaction to allow the rate constants of the reaction to be determined. A calibration-free technique has also been used to determine the endpoint of the process and to detect process upsets. During these processes, it is desirable to be able to predict the endpoint of a reaction, instead of waiting for it to be reached, which may waste time. It is also advantageous to be able to detect process upsets to allow the batch to be corrected. Multivariate curve resolution (MCR) was used to extract the concentration profiles from the MW and NIR spectra, and these profiles were used to calculate the rate constants, k, of the reaction. The MW- and NIR-calculated k values do not agree, suggesting the two techniques do not capture the same process variation. The rate constants have also been calculated using GC measurements as a comparison. These values also do not agree with the spectroscopic methods, but it is unknown which method provides the correct determination of the rate constant. However, it has been found that the use of MW and NIR spectroscopy provides a much more reproducible method of monitoring esterification reactions than GC.

    An adaptive algorithm called caterpillar has been used to determine the endpoint of an esterification reaction and to detect a variety of process upsets. This allows the reaction to be monitored to ensure it proceeds as expected without the need for building a calibration model. The endpoint was detected reproducibly in MW spectra taken for repeat reactions, showing the spectra are suitable for monitoring the reaction. The same endpoint was not detected in the corresponding NIR spectra, so this does not appear to be as reproducible a method. MW spectroscopy was found to detect process upsets of addition of incorrect catalyst, addition of water, addition of an interferant, and incorrect charging of reactants. The NIR was found to pick up only the addition of water and incorrect charging of reactants. It has been found that the MW spectra are more sensitive to small disturbances in the process variation, and MW is a better technique for endpoint determination and process upset detection. The NIR spectra do not appear to be as representative of the process, possibly due to the limitations of sampling with the probe used.
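    The pre-processing combination named above (a Box-Cox logarithmic transform with auto-scaling) can be sketched in a few lines. The ordering shown here (log first, then column-wise auto-scaling, since a log of mean-centred data would be undefined for negative values) and the toy spectra are illustrative assumptions, not the thesis's actual pipeline.

```python
import math
import statistics

# Sketch of log (Box-Cox, lambda = 0) transformation followed by
# column-wise auto-scaling (mean-centre, unit variance), the two
# pre-processing steps named in the abstract. The tiny example matrix
# stands in for real MW spectra and is purely illustrative.

def preprocess(spectra):
    """spectra: list of sample rows of positive intensities."""
    logged = [[math.log(v) for v in row] for row in spectra]
    cols = list(zip(*logged))
    means = [statistics.mean(c) for c in cols]
    stds = [statistics.stdev(c) for c in cols]
    # After scaling, every wavelength channel contributes comparably
    # to a subsequent PLS or PCA model.
    return [[(v - m) / s for v, m, s in zip(row, means, stds)]
            for row in logged]

X = preprocess([[1.0, 10.0], [2.0, 20.0], [4.0, 40.0]])
```

    In practice the transformed matrix `X` would then be passed to the PLS calibration or the PCA/MCR analyses described above.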

    Modelling the effects of environmental stressors on pig performance

    The performance of pigs reared commercially is often considerably below their potential as seen under good experimental conditions. At least some of this decrease in performance can be attributed to environmental stressors. The aims and corresponding chapters of this thesis were to: (1) choose a suitable predictor of potential pig growth; (2) develop a deterministic dynamic model to predict the effects of genotype and the nutritional and thermal environments on the voluntary feed intake, growth and body composition of growing pigs; (3) test and evaluate the model developed in chapter 2 against experimental data from the literature; (4) quantify the effects of social stressors on the performance of growing pigs and incorporate these into the previously developed model, including variation in ability to cope with encountered social stressors; (5) extend the model to deal with individual pig variation; (6) compare the variation predicted by the population model with that observed under experimental conditions.

    The Gompertz function was chosen as a predictor of potential pig growth and as the starting point for model simulation, i.e., to provide an upper limit to growth. It uses few parameters, holds over a wide degree of maturity, and the values of its parameters can be estimated simply. Unconstrained voluntary feed intake, predicted from the current state of the pig and the composition of the feed, is that required to achieve potential growth. Actual food intake and the consequent gain were predicted taking into account the capacity of the animal to consume bulk and its ability to maintain thermoneutrality. The physical environment, described by the ambient temperature, wind speed, floor type and humidity, sets the maximum and minimum heat the pig is able to lose and determines whether the environment is hot, cold or thermoneutral. Model predictions were generally in good quantitative agreement with the observed data over the wide range of treatments tested, supporting the model's value and accuracy.

    The social environment was described by group size, space allowance, feeder space allowance and the occurrence or not of mixing. All of these factors may act as stressors, and it is assumed in the model that they decrease performance by lowering the capacity of the animal to attain its potential. The parameter EX accounts for differences in ability to cope when exposed to social stressors. The introduction of individual variation in growth potential, initial state and EX allowed the mean population response to be compared with that of the average individual. Whether these responses differed depended in part upon the social stressors encountered. The addition of variation in initial state and EX allowed better estimates of the phenotypic variation observed in real experiments to be achieved.

    The developed simulation framework is able to explore and, at least in principle, predict the performance of both individuals and populations differing in growth potential, initial state and ability to cope when raised under given dietary, physical and social environmental conditions. One of the main advantages of simulation models is that they allow the effects of multiple factors on animal performance to be considered simultaneously, including any interactions that may exist, in a way that cannot be done by direct experimentation. These interactions may be crucial in decision-making processes, as different individuals and populations may react differently in response to the same environmental stressors.
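    The Gompertz upper limit to growth has a standard closed form. A minimal sketch in one common parameterisation; the parameter values (initial weight, mature weight, rate constant) are illustrative, not fitted genotype estimates from the thesis.

```python
import math

# Gompertz growth curve, used in the model as the upper limit to growth:
# W(t) = Wf * (W0 / Wf) ** exp(-B * t), so W(0) = W0 and W(t) -> Wf.
# The values of w0, wf and b below are illustrative, not fitted pig
# genotype parameters.

def gompertz_weight(t, w0=1.5, wf=250.0, b=0.011):
    """Potential body weight (kg) at age t (days)."""
    return wf * (w0 / wf) ** math.exp(-b * t)

w_birth = gompertz_weight(0)      # equals w0
w_mature = gompertz_weight(1000)  # approaches wf asymptotically
```

    In the simulation framework this curve supplies the target gain from which unconstrained voluntary feed intake is derived, with actual intake then limited by bulk capacity and thermoneutrality.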

    Quantitative Tools for Examining the Vocalizations of Juvenile Songbirds

    The singing of juvenile songbirds is highly variable and not well stereotyped, a feature that makes it difficult to analyze with existing computational techniques. We present here a method suitable for analyzing such vocalizations: windowed spectral pattern recognition (WSPR). Rather than performing pairwise sample comparisons, WSPR measures the typicality of a sample against a large sample set. We also illustrate how WSPR can be used to perform a variety of tasks, such as sample classification, song ontogeny measurement, and song variability measurement. Finally, we present a novel measure, based on WSPR, for quantifying the apparent complexity of a bird's singing.
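    The shift from pairwise comparison to set-based typicality can be illustrated with a simple stand-in score. This z-score sketch is not WSPR's actual windowed spectral procedure; the feature vectors and the scoring rule are invented for illustration only.

```python
import statistics

# Stand-in for the set-based idea behind WSPR: rather than comparing two
# samples, score how typical a new sample is of a reference set. Here
# typicality is the negated mean absolute z-score of the sample's
# features against the set's per-feature mean and spread. The feature
# vectors are hypothetical, not real spectral measurements.

def typicality(sample, reference_set):
    cols = list(zip(*reference_set))
    z = [abs(v - statistics.mean(c)) / statistics.stdev(c)
         for v, c in zip(sample, cols)]
    return -sum(z) / len(z)   # closer to zero = more typical

songs = [[1.0, 2.0], [1.1, 2.2], [0.9, 1.9], [1.0, 2.1]]
typical_score = typicality([1.0, 2.05], songs)
atypical_score = typicality([3.0, 0.5], songs)
```

    Classification and variability measurement then fall out naturally: assign a sample to the set it is most typical of, or track how the spread of typicality scores changes over song ontogeny.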

    The importance of sub-peat carbon storage as shown by data from Dartmoor, UK

    Peatlands are highly valued for their range of ecosystem services, including distinctive biodiversity, agricultural uses, recreational amenities, water provision, river flow regulation and their capacity to store carbon. There have been a range of estimates of carbon stored in peatlands in the United Kingdom, but uncertainties remain, in particular with regard to the depth and bulk density of peat. In addition, very few studies consider the full profile with depth in carbon auditing. The importance of sub-peat soils within peatland carbon stores has been recognized, but remains poorly understood and is rarely included within peatland carbon audits. This study examines the importance of this carbon store based on a study of blanket peat on Dartmoor, UK, by estimating peat depths in a 4 × 1 km survey area using ground penetrating radar (GPR), extraction of 43 cores across a range of peat depths, and estimation of carbon densities based on measures of loss-on-ignition and bulk density. Comparison of GPR estimates of peat depth with core depths shows excellent agreement, providing the basis for a detailed understanding of the distribution of peat depths within the survey area. Carbon densities average 78 kg C/m3 for the sub-peat soils and 53 kg C/m3 for the overlying blanket peat. There is considerable spatial variability in the estimates of total carbon from each core across the survey area, with values ranging between 56.5 kg C/m2 (1.01 m total depth of peat and soil) and 524 kg C/m2 (6.63 m total depth). Sub-peat soil carbon represents between 4 and 28 per cent (mean 13.5 per cent) of the total carbon stored, with greater values for shallower peat. The results indicate a significant and previously unaccounted store of carbon within blanket peat regions which should be included in future calculations of overall carbon storage. It is argued that this store needs to be considered in carbon audits. © 2013 British Society of Soil Science
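    The per-core carbon stock arithmetic is straightforward: carbon density (kg C/m3) times layer thickness (m) gives a stock per unit area (kg C/m2). The densities below are the survey's reported means; the layer depths are invented example values, not data from an actual core.

```python
# Per-core carbon stock from layer thickness and carbon density.
# The two density values are the mean figures reported in the study;
# the example core depths are made up for illustration.

PEAT_C_DENSITY = 53.0      # kg C/m3, overlying blanket peat (survey mean)
SUBSOIL_C_DENSITY = 78.0   # kg C/m3, sub-peat soil (survey mean)

def core_carbon(peat_depth_m, subsoil_depth_m):
    """Return (total stock in kg C/m2, sub-peat fraction of the total)."""
    peat_c = PEAT_C_DENSITY * peat_depth_m
    subsoil_c = SUBSOIL_C_DENSITY * subsoil_depth_m
    total = peat_c + subsoil_c
    return total, subsoil_c / total

# Hypothetical core: 2.0 m of peat over 0.4 m of sub-peat soil.
total, subsoil_fraction = core_carbon(2.0, 0.4)
```

    Because the sub-peat soil is denser in carbon per unit volume than the peat, even a thin sub-peat layer contributes a disproportionate share of the stock, which is why the fraction rises for shallower peat.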

    The Dickey Bird Scientists Take Charge: Science, Policy, and the Spotted Owl

    In 1992, the Forest Service adopted a new operating policy, Ecosystem Management, which minimized the agency's timber production goals in favor of a more ecologically balanced view of its responsibilities. In explaining this shift, scholars have dismissed the possibility of internal reform, arguing that the Service could not change without irresistible external pressure from environmental activists and new public values supporting biodiversity. Viewing the Service's shift through the lens of the spotted owl controversy, however, demonstrates the important role agency culture played in instigating bureaucratic change. The Service's evolution stemmed from the rising influence of its scientists in policy formation. Their research in support of protecting the owl and the biodiversity of old-growth forests thrived in an agency that nurtured scientific independence, and it thrust them into leadership positions. Forest Service science legitimized the arguments of environmentalists and crystallized public values favoring biodiversity into a new policy.