Physiological Correlates of Volunteering
We review research on physiological correlates of volunteering, a neglected but promising research field. Some of these correlates appear to be causal factors influencing volunteering. Volunteers tend to have better physical health, both self-reported and expert-assessed, better mental health, and better performance on cognitive tasks. Research thus far has rarely examined neurological, neurochemical, hormonal, or genetic correlates of volunteering, especially while controlling for other factors as potential confounds. Evolutionary theory and behavioral genetic research suggest that such physiological factors are important in humans. More broadly, many aspects of social relationships and social activities affect health (e.g., Newman and Roberts 2013; Uchino 2004), as the widely used biopsychosocial (BPS) model suggests (Institute of Medicine 2001). Studies of formal volunteering (FV), charitable giving, and altruistic behavior suggest that physiological characteristics are related to volunteering, including specific genes (such as oxytocin receptor [OXTR] genes, arginine vasopressin receptor [AVPR] genes, dopamine D4 receptor [DRD4] genes, and 5-HTTLPR). We recommend that future research on physiological factors be extended to non-Western populations, focus specifically on volunteering, and differentiate between different forms and types of volunteering and civic participation.
Optimized mixed Markov models for motif identification
BACKGROUND: Identifying functional elements, such as transcription factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains challenging, largely because of the limited availability of training samples. RESULTS: We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods that allow the model complexity to be adjusted for different motifs. Compared with other leading methods, OMiMa can incorporate more than NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to that of the other leading methods in terms of prediction accuracy, required size of training data, and computational time. Our OMiMa system is, to our knowledge, the only motif-finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. CONCLUSION: Our optimized mixture of Markov models represents an alternative to existing methods for modeling dependence structures within a biological motif. The model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods.
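The abstract describes mixing Markov models so that model complexity can be adjusted per motif. As a rough illustration of the general idea only (not OMiMa's actual algorithm, which is not specified here), the sketch below scores a DNA motif candidate under a weighted mixture of a position-specific order-0 model (a PWM) and an order-1 Markov model; the function names and the fixed mixture weight `w` are assumptions.

```python
import math

BASES = "ACGT"

def train_pwm(seqs):
    """Order-0 model: independent per-position base frequencies (+1 pseudocount)."""
    length = len(seqs[0])
    pwm = []
    for i in range(length):
        counts = {b: 1.0 for b in BASES}
        for s in seqs:
            counts[s[i]] += 1.0
        total = sum(counts.values())
        pwm.append({b: counts[b] / total for b in BASES})
    return pwm

def train_markov1(seqs):
    """Order-1 model: per-position transition probabilities P(x_i | x_{i-1})."""
    length = len(seqs[0])
    trans = []
    for i in range(1, length):
        counts = {a: {b: 1.0 for b in BASES} for a in BASES}
        for s in seqs:
            counts[s[i - 1]][s[i]] += 1.0
        trans.append({a: {b: counts[a][b] / sum(counts[a].values()) for b in BASES}
                      for a in BASES})
    return trans

def loglik_mixture(seq, pwm, trans, w=0.5):
    """Log-likelihood under a w : (1-w) mixture of the order-0 and order-1 models."""
    p0 = 1.0
    for i, b in enumerate(seq):
        p0 *= pwm[i][b]
    p1 = pwm[0][seq[0]]
    for i in range(1, len(seq)):
        p1 *= trans[i - 1][seq[i - 1]][seq[i]]
    return math.log(w * p0 + (1.0 - w) * p1)
```

In practice one would train on known binding sites, score each candidate window, and compare the mixture score against a background model; here a true site should score above an unrelated sequence.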
Re-sampling strategy to improve the estimation of number of null hypotheses in FDR control under strong correlation structures
BACKGROUND: When conducting multiple hypothesis tests, it is important to control the number of false positives, or the False Discovery Rate (FDR). However, there is a tradeoff between controlling the FDR and maximizing power. Several methods, such as the q-value method, have been proposed to estimate the proportion of true null hypotheses among the tested hypotheses and to use this estimate in the control of the FDR. These methods usually depend on the assumption that the test statistics are independent (or only weakly correlated). However, many types of data, for example microarray data, often contain large-scale correlation structures. Our objective was to develop methods that control the FDR while maintaining greater power in highly correlated datasets by improving the estimation of the proportion of null hypotheses. RESULTS: We showed that when strong correlation exists in the data, as is common in microarray datasets, the estimate of the proportion of null hypotheses can be highly variable, resulting in a high level of variation in the FDR. We therefore developed a re-sampling strategy that reduces this variation by breaking the correlations between gene expression values, and then applies a conservative rule, selecting the upper quartile of the re-sampling estimates, to obtain strong control of the FDR. CONCLUSION: In simulation studies and perturbations of actual microarray datasets, our method, compared with competing methods such as the q-value, generated slightly biased estimates of the proportion of null hypotheses but with lower mean squared errors. When selecting genes under the same FDR level, our method on average achieves a significantly lower false discovery rate in exchange for a minor reduction in power.
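The re-sampling strategy described above can be sketched loosely as follows; this is one plausible reading, not the authors' exact procedure. Each gene's samples are bootstrapped independently within groups (which breaks gene-gene correlation while roughly preserving each gene's own signal), a Storey-type estimate of the null proportion is computed per round, and the upper quartile of the estimates is taken as the conservative value. The function names, the normal approximation for p-values, and the bootstrap scheme are all assumptions.

```python
import numpy as np
from math import erf, sqrt

def pvalues_two_group(X, n1):
    """Per-gene two-sample (Welch) t statistics with a normal approximation
    for two-sided p-values (adequate for a sketch).
    X: genes x samples; the first n1 columns are group 1."""
    a, b = X[:, :n1], X[:, n1:]
    t = (a.mean(1) - b.mean(1)) / np.sqrt(
        a.var(1, ddof=1) / a.shape[1] + b.var(1, ddof=1) / b.shape[1])
    z = np.abs(t)
    return 1.0 - np.vectorize(erf)(z / sqrt(2.0))  # equals 2 * (1 - Phi(z))

def pi0_estimate(pvals, lam=0.5):
    """Storey-type estimate of the proportion of true null hypotheses."""
    return min(1.0, float(np.mean(pvals > lam)) / (1.0 - lam))

def pi0_resampled(X, n1, B=100, q=0.75, seed=None):
    """Bootstrap each gene's samples independently within groups to break
    gene-gene correlation, then take the upper quantile q of the B
    resulting pi0 estimates as a conservative estimate."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    ests = []
    for _ in range(B):
        i1 = rng.integers(0, n1, size=(m, n1))          # per-gene bootstrap, group 1
        i2 = rng.integers(0, n - n1, size=(m, n - n1))  # per-gene bootstrap, group 2
        Xb = np.concatenate(
            [np.take_along_axis(X[:, :n1], i1, axis=1),
             np.take_along_axis(X[:, n1:], i2, axis=1)], axis=1)
        ests.append(pi0_estimate(pvalues_two_group(Xb, n1)))
    return float(np.quantile(ests, q))
```

On fully null data the upper-quartile estimate should sit near 1, which is the conservative behavior the abstract describes: a slightly biased but much less variable estimate than a single-shot computation on correlated data.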
Individual and societal dimensions of security
Despite the prevalence of state-based approaches to security studies during the Cold War, alternative ways of thinking about security, focusing on the individual and society, also developed during this period. In the post-Cold War era, however, the primacy of the state in considerations of security has come under increasing challenge from a variety of perspectives. This essay traces and discusses the development of the study of individual and societal dimensions of security against the background of the end of the Cold War. The first part examines the evolution of thinking about individual and societal dimensions of security during the Cold War. The second part focuses on the post-Cold War revival in thinking about these aspects of security. The essay concludes by considering the future of world politics conceived of as a "risk society" and the implications for individual and societal dimensions of security.
BioTIME 2.0: expanding and improving a database of biodiversity time series
Funding: H2020 European Research Council (Grant Numbers GA 101044975, GA 101098020). Motivation: Here we make available a second version of the BioTIME database, which compiles records of abundance estimates for species in sample events of ecological assemblages through time. The updated version expands version 1.0 of the database by doubling the number of studies and includes substantial additional curation of the taxonomic accuracy of the records, as well as of the metadata. Moreover, we now provide an R package (BioTIMEr) to facilitate use of the database. Main Types of Variables Included: The database is composed of one main data table containing the abundance records and 11 metadata tables. The data are organised in a hierarchy of scales in which 11,989,233 records are nested in 1,603,067 sample events, from 553,253 sampling locations, which are nested in 708 studies. A study is defined as a sampling methodology applied to an assemblage for a minimum of 2 years. Spatial Location and Grain: Sampling locations in BioTIME are distributed across the planet, including marine, terrestrial and freshwater realms. Spatial grain size and extent vary across studies depending on sampling methodology. We recommend gridding sampling locations into areas of consistent size. Time Period and Grain: The earliest time series in BioTIME start in 1874, and the most recent records are from 2023. Temporal grain and duration vary across studies. We recommend sample-level rarefaction to ensure consistent sampling effort through time before calculating any diversity metric. Major Taxa and Level of Measurement: The database includes any eukaryotic taxa, with a combined total of 56,400 taxa. Software Format: csv and SQL.
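The sample-level rarefaction recommended above can be sketched as follows, assuming a hypothetical layout in which each year maps to a list of sample events, each recorded as a set of species. All names are illustrative; this is not the BioTIMEr package API.

```python
import random

def rarefy_richness(events_by_year, n_draws=200, seed=0):
    """Sample-level rarefaction: for every year, repeatedly draw the minimum
    number of sample events observed in any year, and report the mean species
    richness over the draws, so that sampling effort is equalised across years.
    events_by_year: {year: [set_of_species_per_sample_event, ...]}"""
    rng = random.Random(seed)
    n_min = min(len(events) for events in events_by_year.values())
    mean_richness = {}
    for year, events in events_by_year.items():
        totals = []
        for _ in range(n_draws):
            subset = rng.sample(events, n_min)       # draw n_min events without replacement
            totals.append(len(set().union(*subset)))  # pooled species richness of the draw
        mean_richness[year] = sum(totals) / n_draws
    return mean_richness
```

A year with many sample events is thus compared to a sparsely sampled year at the same effort, which is the point of the recommendation: diversity metrics computed afterwards are not inflated simply because one year was sampled more intensively.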
Methodological nationalism and the domestic analogy: classical resources for their critique
