
    Factors influencing in vivo transduction by recombinant adeno-associated viral vectors expressing the human factor IX cDNA.

    Long-term expression of coagulation factor IX (FIX) has been observed in murine and canine models following administration of recombinant adeno-associated viral (rAAV) vectors into either the portal vein or muscle. These studies were designed to evaluate factors that influence rAAV-mediated FIX expression. Stable and persistent human FIX (hFIX) expression (> 22 weeks) was observed from 4 vectors after injection into the portal circulation of immunodeficient mice. The level of expression was dependent on the promoter, with the highest expression (10% of physiologic levels) observed with a vector containing the cytomegalovirus (CMV) enhancer/beta-actin promoter complex (CAGG). The kinetics of expression after injection of vector particles into muscle, tail vein, or portal vein were similar, with hFIX detectable at 2 weeks and reaching a plateau by 8 weeks. For a given dose, intraportal administration of rAAV CAGG-FIX resulted in a 1.5-fold or 4-fold higher level of hFIX compared to tail vein or intramuscular injections, respectively. Polymerase chain reaction analysis demonstrated predominant localization of the rAAV FIX genome in liver and spleen after tail vein injection, with a higher proportion in liver after portal vein injection. Therapeutic levels of hFIX were detected in the majority of immunocompetent mice (21 of 22) following intravenous administration of rAAV vector without the development of anti-hFIX antibodies, but hFIX was not detected in 14 immunocompetent mice following intramuscular administration, irrespective of strain. Instead, neutralizing anti-hFIX antibodies were detected in all the mice. These observations may have important implications for hemophilia B gene therapy with rAAV vectors.

    Annuities and Individual Welfare

    This paper advances the theory of annuity demand. First, we derive sufficient conditions under which complete annuitization is optimal, showing that this well-known result holds true in a more general setting than in Yaari (1965). Specifically, when markets are complete, sufficient conditions need not impose exponential discounting, intertemporal separability or the expected utility axioms; nor need annuities be actuarially fair, nor longevity risk be the only source of consumption uncertainty. All that is required is that consumers have no bequest motive and that annuities pay a rate of return for survivors greater than that of otherwise matching conventional assets, net of administrative costs. Second, we show that full annuitization may not be optimal when markets are incomplete. Some annuitization is optimal as long as conventional asset markets are complete. The incompleteness of markets can lead to zero annuitization, but the conditions on both annuity and bond markets are stringent. Third, we extend the simulation literature that calculates the utility gains from annuitization by considering consumers whose utility depends both on present consumption and on a standard of living to which they have become accustomed. The value of annuitization hinges critically on the size of the initial standard of living relative to wealth.
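The survivor-return condition in the abstract can be illustrated with a short numerical sketch. For an actuarially fair annuity, the pooled premiums of non-survivors are shared among survivors, so a survivor earns the conventional gross return divided by the survival probability. The interest rate and survival probability below are hypothetical, not taken from the paper.

```python
# Sketch of the survivor-return condition (illustrative numbers only).

def bond_return(r):
    """Gross return on a conventional asset."""
    return 1.0 + r

def fair_annuity_return(r, p):
    """Gross return to a survivor on an actuarially fair annuity:
    premiums of non-survivors are redistributed to survivors."""
    return (1.0 + r) / p

r = 0.03   # one-period interest rate (assumed)
p = 0.95   # one-period survival probability (assumed)

# With no bequest motive, any p < 1 makes the annuity dominate the bond.
print(bond_return(r), fair_annuity_return(r, p))
```

With r = 0.03 and p = 0.95 the annuity pays survivors a gross return of about 1.084 versus 1.03 on the bond; this "mortality credit" wedge is exactly what the full-annuitization result relies on.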

    Modeling the emergence of universality in color naming patterns

    The empirical evidence that human color categorization exhibits some universal patterns beyond superficial discrepancies across different cultures is a major breakthrough in cognitive science. As observed in the World Color Survey (WCS), indeed, any two groups of individuals develop quite different categorization patterns, but some universal properties can be identified by a statistical analysis over a large number of populations. Here, we reproduce the WCS in a numerical model in which different populations independently develop their own categorization systems by playing elementary language games. We find that a simple perceptual constraint shared by all humans, namely the human Just Noticeable Difference (JND), is sufficient to trigger the emergence of universal patterns that unconstrained cultural interaction fails to produce. We test the results of our experiment against real data by performing the same statistical analysis proposed to quantify the universal tendencies shown in the WCS [Kay P and Regier T. (2003) Proc. Natl. Acad. Sci. USA 100: 9085-9089], and obtain an excellent quantitative agreement. This work confirms that synthetic modeling has nowadays reached the maturity to contribute significantly to the ongoing debate in cognitive science. Comment: Supplementary Information available here http://www.pnas.org/content/107/6/2403/suppl/DCSupplementa

    Entangled networks, synchronization, and optimal network topology

    A new family of graphs, {\it entangled networks}, with optimal properties in many respects, is introduced. By definition, their topology is such that it optimizes synchronizability for many dynamical processes. These networks are shown to have an extremely homogeneous structure: degree, node-distance, betweenness, and loop distributions are all very narrow. They are also characterized by a very interwoven (entangled) structure with short average distances, large loops, and no well-defined community structure. This family of networks exhibits excellent performance with respect to other flow properties such as robustness against errors and attacks, minimal first-passage time of random walks, efficient communication, etc. These remarkable features make entangled networks a useful concept, optimal or almost optimal in many senses, with plenty of potential applications in computer science or neuroscience. Comment: Slightly modified version, as accepted in Phys. Rev. Let

    Network synchronization: Optimal and Pessimal Scale-Free Topologies

    By employing a recently introduced optimization algorithm, we explicitly design optimally synchronizable (unweighted) networks for any given scale-free degree distribution. We explore how the optimization process affects degree-degree correlations and observe a generic tendency towards disassortativity. Still, we show that there is not a one-to-one correspondence between synchronizability and disassortativity. On the other hand, we study the nature of optimally un-synchronizable networks, that is, networks whose topology minimizes the range of stability of the synchronous state. The resulting "pessimal networks" turn out to have a highly assortative string-like structure. We also derive a rigorous lower bound for the Laplacian eigenvalue ratio controlling synchronizability, which helps in understanding the impact of degree correlations on network synchronizability. Comment: 11 pages, 4 figs, submitted to J. Phys. A (proceedings of Complex Networks 2007)
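The Laplacian eigenvalue ratio mentioned in the abstract can be computed in closed form for simple topologies. The sketch below (an illustration, not code from the paper) compares a string-like ring, whose eigenratio grows with size, against the maximally homogeneous complete graph; by the usual convention, synchronizability improves as Q = lambda_max / lambda_2 shrinks.

```python
import math

def ring_eigenratio(n):
    """Eigenratio Q = lambda_max / lambda_2 of the n-cycle's Laplacian;
    its eigenvalues are 2 - 2*cos(2*pi*k/n) for k = 0..n-1."""
    eig = sorted(2.0 - 2.0 * math.cos(2.0 * math.pi * k / n) for k in range(n))
    nonzero = eig[1:]          # drop the trivial zero eigenvalue
    return nonzero[-1] / nonzero[0]

def complete_eigenratio(n):
    """All nonzero Laplacian eigenvalues of K_n equal n, so Q = 1."""
    return 1.0

# The densely interwoven complete graph is far more synchronizable
# (smaller Q) than a string-like ring, and the gap widens with size.
print(complete_eigenratio(10))         # 1.0
print(round(ring_eigenratio(10), 2))   # 10.47
print(round(ring_eigenratio(100), 1))  # ~1013
```

The string-like ring is in the spirit of the "pessimal" assortative structures in the abstract: its eigenratio diverges with n, so its synchronous state has a vanishing stability range.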

    Sustained high-level expression of human factor IX (hFIX) after liver-targeted delivery of recombinant adeno-associated virus encoding the hFIX gene in rhesus macaques

    The feasibility, safety, and efficacy of liver-directed gene transfer was evaluated in 5 male macaques (aged 2.5 to 6.5 years) by using a recombinant adeno-associated viral (rAAV) vector (rAAV-2 CAGG-hFIX) that had previously mediated persistent therapeutic expression of human factor IX (hFIX; 6%-10% of physiologic levels) in murine models. A dose of 4 × 10^12 vector genomes (vgs)/kg of body weight was administered through the hepatic artery or portal vein. Persistence of the rAAV vgs as circular monomers and dimers and high-molecular-weight concatamers was documented in liver tissue by Southern blot analysis for periods of up to 1 year. Vector particles were present in plasma, urine, or saliva for several days after infusion (as shown by polymerase chain reaction analysis), and the vgs were detected in spleen tissue at low copy numbers. An enzyme-linked immunosorbent assay capable of detecting between 1% and 25% of normal levels of hFIX in rhesus plasma was developed by using hyperimmune serum from a rhesus monkey that had received an adenoviral vector encoding hFIX. Two macaques having 3 and 40 rAAV genome equivalents/cell, respectively, in liver tissue had 4% and 8% of normal physiologic plasma levels of hFIX, respectively. A level of hFIX that was 3% of normal levels was transiently detected in one other macaque, which had a genome copy number of 25, before abrogation by a neutralizing antibody (inhibitor) to hFIX. This nonhuman-primate model will be useful in further evaluation and development of rAAV vectors for gene therapy of hemophilia B. © 2002 by The American Society of Hematology

    Optimal network topologies: Expanders, Cages, Ramanujan graphs, Entangled networks and all that

    We report on some recent developments in the search for optimal network topologies. First we review some basic concepts of spectral graph theory, including adjacency and Laplacian matrices, paying special attention to the topological implications of having large spectral gaps. We also introduce related concepts such as "expanders", Ramanujan, and Cage graphs. Afterwards, we discuss two different dynamical features of networks, synchronizability and the flow of random walkers, and show that both are optimized if the corresponding Laplacian matrix has a large spectral gap. From this, we show, by developing a numerical optimization algorithm, that maximum synchronizability and fast random walk spreading are obtained for a particular type of extremely homogeneous regular networks, with long loops and poor modular structure, that we call entangled networks. These turn out to be related to Ramanujan and Cage graphs. We also argue that these graphs are very good finite-size approximations to Bethe lattices, and provide optimal or almost optimal solutions to many other problems, such as searchability in the presence of congestion or the performance of neural networks. We then study how these results are modified for dynamical processes controlled by a normalized (weighted and directed) dynamics; much more heterogeneous graphs are optimal in this case. Finally, a critical discussion of the limitations and possible extensions of this work is presented. Comment: 17 pages. 11 figures. Small corrections and a new reference. Accepted for pub. in JSTA

    Gestational age at delivery and special educational need: retrospective cohort study of 407,503 schoolchildren

    Background: Previous studies have demonstrated an association between preterm delivery and increased risk of special educational need (SEN). The aim of our study was to examine the risk of SEN across the full range of gestation. Methods and Findings: We conducted a population-based, retrospective study by linking school census data on the 407,503 eligible school-aged children resident in 19 Scottish Local Authority areas (total population 3.8 million) to their routine birth data. SEN was recorded in 17,784 (4.9%) children; 1,565 (8.4%) of those born preterm and 16,219 (4.7%) of those born at term. The risk of SEN increased across the whole range of gestation from 40 to 24 wk: 37–39 wk adjusted odds ratio (OR) 1.16, 95% confidence interval (CI) 1.12–1.20; 33–36 wk adjusted OR 1.53, 95% CI 1.43–1.63; 28–32 wk adjusted OR 2.66, 95% CI 2.38–2.97; 24–27 wk adjusted OR 6.92, 95% CI 5.58–8.58. There was no interaction between elective versus spontaneous delivery. Overall, gestation at delivery accounted for 10% of the adjusted population attributable fraction of SEN. Because of their high frequency, early term deliveries (37–39 wk) accounted for 5.5% of cases of SEN, compared with preterm deliveries (<37 wk), which accounted for only 3.6% of cases. Conclusions: Gestation at delivery had a strong, dose-dependent relationship with SEN that was apparent across the whole range of gestation. Because early term delivery is more common than preterm delivery, the former accounts for a higher percentage of SEN cases. Our findings have important implications for clinical practice in relation to the timing of elective delivery.
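The attributable-fraction comparison in the abstract can be reproduced with the standard prevalence-based formula PAF = p(OR - 1) / (1 + p(OR - 1)), treating the odds ratio as an approximation to the relative risk (reasonable for an outcome prevalence near 5%). The exposure prevalences below are assumed for illustration and are not taken from the study.

```python
def paf(p_exposed, rr):
    """Prevalence-based population attributable fraction: the share of
    all cases that would be avoided if the exposed group had the
    baseline risk (rr approximates the relative risk)."""
    excess = p_exposed * (rr - 1.0)
    return excess / (1.0 + excess)

# Hypothetical prevalences (not from the study): a common exposure with
# a modest OR can account for more cases than a rare exposure with a
# large OR, which is the study's point about early term delivery.
common_mild = paf(p_exposed=0.40, rr=1.16)   # e.g. 37-39 wk, OR 1.16
rare_strong = paf(p_exposed=0.02, rr=2.66)   # e.g. 28-32 wk, OR 2.66
print(round(common_mild, 3), round(rare_strong, 3))
```

With these assumed prevalences the common, mildly risky exposure attributes roughly twice the fraction of cases of the rare, strongly risky one, mirroring the 5.5% versus 3.6% comparison in the abstract.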

    Bayes and health care research.

    Bayes’ rule shows how one might rationally change one’s beliefs in the light of evidence. It is the foundation of a statistical method called Bayesianism. In health care research, Bayesianism has its advocates but the dominant statistical method is frequentism. There are at least two important philosophical differences between these methods. First, Bayesianism takes a subjectivist view of probability (i.e. that probability scores are statements of subjective belief, not objective fact) whilst frequentism takes an objectivist view. Second, Bayesianism is explicitly inductive (i.e. it shows how we may induce views about the world based on partial data from it) whereas frequentism is at least compatible with non-inductive views of scientific method, particularly the critical realism of Popper. Popper and others detail significant problems with induction. Frequentism’s apparent ability to avoid these, plus its ability to give a seemingly more scientific and objective take on probability, lies behind its philosophical appeal to health care researchers. However, there are also significant problems with frequentism, particularly its inability to assign probability scores to single events. Popper thus proposed an alternative objectivist view of probability, called propensity theory, which he allies to a theory of corroboration; but this too has significant problems, in particular, it may not successfully avoid induction. If this is so then Bayesianism might be philosophically the strongest of the statistical approaches. The article sets out a number of its philosophical and methodological attractions. Finally, it outlines a way in which critical realism and Bayesianism might work together.
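Bayes' rule, as described in the first sentence, can be made concrete with the standard diagnostic-test example; the prevalence, sensitivity, and false-positive rate below are hypothetical.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule for a positive test result:
    P(D | +) = P(+ | D) P(D) / P(+), with P(+) from total probability."""
    p_pos = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / p_pos

# Hypothetical numbers: 1% prevalence, 90% sensitivity, 5% false positives.
# A positive result raises the belief from 1% to about 15%: evidence
# shifts belief rationally, but a low prior keeps the posterior modest.
print(round(posterior(0.01, 0.90, 0.05), 3))  # 0.154
```

The prior here is exactly the subjective ingredient the article discusses: a frequentist has no agreed way to assign a probability to this single patient having the disease, whereas the Bayesian updates a stated prior belief.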

    Lithium distribution across the membrane of motoneurons in the isolated frog spinal cord

    Lithium-sensitive microelectrodes were used to investigate the transmembrane distribution of lithium ions (Li+) in motoneurons of the isolated frog spinal cord. After addition of 5 mmol·l⁻¹ LiCl to the bathing solution, the extracellular diffusion of Li+ was measured. At a depth of 500 µm, about 60 min elapsed before the extracellular Li+ concentration approached that of the bathing solution. Intracellular measurements revealed that Li+ started to enter the cells soon after reaching the motoneuron pool, and after up to 120 min of superfusion an intra- to extracellular concentration ratio of about 0.7 was obtained. The resting membrane potential and the height of antidromically evoked action potentials were not altered by 5 mmol·l⁻¹ Li+.
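For context, the measured intra- to extracellular ratio of about 0.7 can be compared with the ratio expected if Li+ were passively distributed at electrochemical equilibrium, given by the Nernst relation. The resting potential and temperature below are assumed values (the abstract does not report them), so this is only an order-of-magnitude sketch.

```python
import math

# Nernst prediction for a passively distributed monovalent cation.
# Assumed values (not reported in the abstract): Vm = -70 mV, T = 20 C.
R = 8.314      # gas constant, J / (mol K)
F = 96485.0    # Faraday constant, C / mol
T = 293.15     # absolute temperature, K
Vm = -0.070    # resting membrane potential, V (assumed)

# At equilibrium Vm = (R*T/F) * ln(c_out / c_in), so the passive
# intra- to extracellular ratio would be:
equilibrium_ratio = math.exp(-Vm * F / (R * T))
print(round(equilibrium_ratio, 1))  # ~16

# A measured ratio of about 0.7 is far below this passive prediction,
# i.e. after 120 min Li+ was still far from electrochemical equilibrium.
```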