30 research outputs found
Team medical care for the pediatric spinal cord injury patient
Through our experience nursing a pediatric spinal cord injury patient, we were able to work with many co-medical professionals and volunteers in order to improve the patient's treatment outcomes. In the process, specialized knowledge and skills were brought together, care interventions became possible, and we came to appreciate the importance of team medical care, which we report here. Article. 信州大学医学部附属病院看護研究集録 39(1): 79-82 (2011). Departmental bulletin paper.
laevis
Fagonia chilensis Hooker et Arnott. 4 miles n. of Ogilby, along road to Glamis. Plants prostrate.
IGF-I instructs multipotent adult neural progenitor cells to become oligodendrocytes.
Adult multipotent neural progenitor cells can differentiate into neurons, astrocytes, and oligodendrocytes in the mammalian central nervous system, but the molecular mechanisms that control their differentiation are not yet well understood. Insulin-like growth factor I (IGF-I) can promote the differentiation of cells already committed to an oligodendroglial lineage during development. However, it is unclear whether IGF-I affects multipotent neural progenitor cells. Here, we show that IGF-I stimulates the differentiation of multipotent adult rat hippocampus-derived neural progenitor cells into oligodendrocytes. Modeling analysis indicates that the actions of IGF-I are instructive. Oligodendrocyte differentiation by IGF-I appears to be mediated through an inhibition of bone morphogenetic protein signaling. Furthermore, overexpression of IGF-I in the hippocampus leads to an increase in oligodendrocyte markers. These data demonstrate the existence of a single molecule, IGF-I, that can influence the fate choice of multipotent adult neural progenitor cells to an oligodendroglial lineage. Journal article.
Penalty terms and loss functions.
(A) Penalty terms: The L0-norm imposes the most explicit constraint on model complexity, as it effectively counts the number of nonzero entries in the model parameter vector. While it is possible to train prediction models with an L0 penalty using, e.g., greedy or other discrete optimization methods, the problem becomes mathematically challenging due to the nonconvexity of the constraint, especially when a loss other than the squared loss is used. The convexity of the L1 and L2 norms makes them easier to optimize. While the L2 norm has good regularization properties, it must be used together with either the L0 or the L1 norm to perform feature selection. (B) Loss functions: The plain classification error is difficult to minimize due to its nonconvex and discontinuous nature, so one often resorts to its better-behaved surrogates, including the hinge loss used with SVMs, the cross-entropy used with logistic regression, and the squared error used with regularized least-squares classification and regression. These surrogates in turn differ both in how well they approximate the classification error and in the optimization machinery with which they can be minimized (Text S1: http://www.plosgenetics.org/article/info:doi/10.1371/journal.pgen.1004754#pgen.1004754.s001).
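To make the contrast concrete, here is a minimal numerical sketch of these penalty terms and surrogate losses in Python/NumPy; the parameter vector w and margin values m are illustrative placeholders, not quantities from the study.

import numpy as np

# Illustrative (hypothetical) model parameter vector.
w = np.array([0.0, 0.5, -1.5, 0.0, 2.0])

# (A) Penalty terms.
l0 = np.count_nonzero(w)       # L0: number of nonzero entries; nonconvex
l1 = np.abs(w).sum()           # L1: convex, encourages sparsity (Lasso)
l2 = np.sqrt((w ** 2).sum())   # L2: convex, shrinks weights but rarely zeroes them

# (B) Surrogate losses, written as functions of the margin m = y * f(x),
# with labels y in {-1, +1}.
m = np.linspace(-2.0, 2.0, 9)
classification_error = (m <= 0).astype(float)   # discontinuous, nonconvex
hinge = np.maximum(0.0, 1.0 - m)                # used with SVMs
cross_entropy = np.log1p(np.exp(-m))            # used with logistic regression
squared_error = (1.0 - m) ** 2                  # used with regularized least squares

Note that the hinge loss upper-bounds the classification error at every margin value, which is one reason it is a convenient convex surrogate.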
Performance of regularized machine learning models.
Upper panel: Behavior of the learning approaches in terms of their predictive accuracy (y-axis) as a function of the number of selected variants (x-axis). Differences can be attributed to genotypic and phenotypic heterogeneity as well as genotyping density and quality. (A) The area under the receiver operating characteristic curve (AUC) for the prediction of Type 1 diabetes (T1D) cases in SNP data from WTCCC [118], representing ca. one million genetic features and ca. 5,000 individuals in a case-control setup. (B) Coefficient of determination (R²) for the prediction of a continuous trait (Tunicamycin) in SNP data from a cross between two yeast strains (Y2C) [44], representing ca. 12,000 variants and ca. 1,000 segregants in a controlled laboratory setup. The peak prediction accuracy and the number of most predictive variants are listed in the legend. Model validation was implemented using nested 3-fold cross-validation (CV) [5]. Prior to any analysis, the data were split into three folds. On each outer round of CV, two of the folds were combined to form a training set, and the remaining one was used as an independent test set. On each round, all feature and parameter selection was done using a further internal 3-fold CV on the training set, and the predictive performance of the learned models was evaluated on the independent test set. The final performance estimates were calculated as the average over these three iterations of the experiment. In learning approaches where internal CV was not needed to select model parameters (e.g., log odds), this is equivalent to a standard 3-fold CV. T1D data: the L2-regularized (ridge) regression was based on selecting the top 500 variants according to the χ² filter. For wrappers, we used our greedy L2-regularized least-squares (RLS) implementation [30], while the embedded methods, Lasso, Elastic Net, and L1-logistic regression, were implemented through Scikit-Learn [119], interpolated across various regularization parameters up to the maximal number of variants (500 or 1,000). As a baseline model, we implemented a log odds-ratio weighted sum of the minor allele dosage in the 500 selected variants within each individual [25]. Y2C: the filter method was based on the top 1,000 variants selected according to R², followed by L2-regularization within greedy RLS using nested CV. As a baseline model, we implemented a greedy version of least squares (LS), which is similar to the stepwise forward regression used in the original work [44]; the greedy LS differs from the greedy RLS in that it implements regularization through optimization of the L0 norm instead of the L2 norm. Note that the performance of the greedy LS method drops around the point where the number of selected variants exceeds the number of training examples (here, 400).
Lower panel: Overlap in the genetic features selected by the different approaches. (C) The numbers of selected variants within the major histocompatibility complex (MHC) are shown in parentheses for the T1D data. (D) The overlap among the maximally predictive variants in the Y2C data. Note: these results should be considered merely illustrative examples. Differing results may be obtained when other prediction models are applied to other genetic datasets or other prediction applications.
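As an illustration of the nested 3-fold cross-validation scheme described above, the following sketch tunes an L1-penalized logistic regression with an inner 3-fold CV and estimates AUC on outer held-out folds, using Scikit-Learn (which the caption cites). The synthetic data and the grid of regularization strengths C are placeholders, not the WTCCC or Y2C data.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

# Placeholder case-control data; in practice X would hold minor-allele dosages.
X, y = make_classification(n_samples=300, n_features=1000,
                           n_informative=20, random_state=0)

# Inner 3-fold CV selects the regularization strength (and thereby how many
# coefficients remain nonzero) for the L1-penalized logistic regression.
inner = GridSearchCV(
    LogisticRegression(penalty="l1", solver="liblinear"),
    param_grid={"C": np.logspace(-2, 1, 8)},
    cv=3, scoring="roc_auc",
)

# Outer 3-fold CV evaluates each tuned model on its held-out fold; the mean
# AUC over the outer folds is the final performance estimate.
outer_auc = cross_val_score(inner, X, y, cv=3, scoring="roc_auc")
print(outer_auc.mean())

Because all feature and parameter selection happens inside the inner CV on the training folds only, the outer estimate is not biased by the selection procedure.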
In cases where the reason why an interaction was annotated could not be identified, the supplemental type was assigned.
Empty cells represent a zero count. Copyright information: taken from "Comparative analysis of five protein-protein interaction corpora", BMC Bioinformatics 2008;9(Suppl 3):S6 (http://www.biomedcentral.com/1471-2105/9/S3/S6). Published online 11 Apr 2008. PMCID: PMC2349296.
Lowry quantitation of the soluble protein content of the BRB-TMV plants and of the corresponding wild-type (wt) control plants.
Standard error of the mean is shown as error bars above the columns (three biological replicates). Significance was determined by Student's t-test: a confidence level higher than 95% is indicated by *, higher than 99% by **, and higher than 99.9% (p < 0.001) by ***.
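As a small illustration of this significance convention, here is a sketch that assigns the stars from a two-sample Student's t-test with SciPy, assuming the usual thresholds p < 0.05, p < 0.01, and p < 0.001 for *, **, and ***; the protein values are invented placeholders, not data from the figure.

import numpy as np
from scipy.stats import ttest_ind

# Hypothetical soluble-protein measurements, three biological replicates each.
brb_tmv = np.array([4.1, 4.4, 3.9])
wt = np.array([3.0, 3.2, 2.9])

t_stat, p_value = ttest_ind(brb_tmv, wt)  # Student's t-test (equal variances)
stars = ("***" if p_value < 0.001 else
         "**" if p_value < 0.01 else
         "*" if p_value < 0.05 else "ns")
print(f"t = {t_stat:.2f}, p = {p_value:.4f} -> {stars}")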
