90 research outputs found

    Algorithmic Randomness and Capacity of Closed Sets

    We investigate the connection between measure, capacity and algorithmic randomness for the space of closed sets. For any computable measure m, a computable capacity T may be defined by letting T(Q) be the measure of the family of closed sets K which have nonempty intersection with Q. We prove an effective version of Choquet's capacity theorem by showing that every computable capacity may be obtained from a computable measure in this way. We establish conditions on the measure m that characterize when the capacity of an m-random closed set equals zero. This includes new results in classical probability theory as well as results for algorithmic randomness. For certain computable measures, we construct effectively closed sets with positive capacity and with Lebesgue measure zero. We show that for computable measures, a real q is upper semi-computable if and only if there is an effectively closed set with capacity q.
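
    As a compact restatement of the capacity construction described in this abstract (the symbols m, T, Q and K follow the abstract; the display itself is a paraphrase, not the paper's own formula):

```latex
% Capacity induced on the space of closed sets by a computable measure m,
% as described in the abstract above. Paraphrased notation; the paper's own
% formulation may differ in detail.
\[
  T(Q) \;=\; m\bigl(\{\, K \ \text{closed} \;:\; K \cap Q \neq \emptyset \,\}\bigr).
\]
```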

    Boundary-crossing identities for diffusions having the time-inversion property

    We review and study a one-parameter family of functional transformations, denoted by (S^(β))_{β∈ℝ}, which, in the case β<0, provides a path realization of bridges associated to the family of diffusion processes enjoying the time-inversion property. This family includes Brownian motions, Bessel processes with a positive dimension and their conservative h-transforms. By means of these transformations, we derive an explicit and simple expression which relates the law of the boundary-crossing times for these diffusions over a given function f to those over the image of f by the mapping S^(β), for some fixed β∈ℝ. We give some new examples of boundary-crossing problems for the Brownian motion and the family of Bessel processes. We also provide, in the Brownian case, an interpretation of the results obtained by the standard method of images and establish connections between the exact asymptotics for large time of the densities corresponding to various curves of each family.
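
    For orientation, the time-inversion property mentioned above can be illustrated in its simplest instance, standard Brownian motion (this display is a textbook fact, not the paper's general statement about the transformations S^(β)):

```latex
% Time-inversion property in its simplest case: if (B_t) is a standard Brownian
% motion, then the time-inverted process below is again a standard Brownian motion.
\[
  \widetilde{B}_t \;=\; t\, B_{1/t} \quad (t > 0), \qquad \widetilde{B}_0 := 0 .
\]
```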

    Cokriging for evaluating agricultural pollution

    Agricultural irrigation is a major non-point source polluter. Evaluating the extent of this type of non-point source pollution requires sampling and analysis of drainage waters. To reduce costs, sampling efficiency is important. Cokriging can be used as a tool for interpolating between sampling times or locations. In this experiment, subsurface drainage data from irrigated lands near Twin Falls, Idaho were used. Total Dissolved Solids and NO3-N were selected as variables. The objective was to determine whether 50 and 65 percent of the measured data could be removed (creating two new data sets) and accurately estimated via cokriging using a variogram model based on the remaining data. Cokriging models were developed using statistical information obtained from variograms of the remaining data. Once accurate models were developed for both the 50 and 65 percent removal cases, estimations were made for the missing data values. One-way analysis of variance and t-tests were used to test whether the means and variances of the estimated values were significantly different from those of the measured values. At the 65 percent removal level, there were significant differences in the means and variances of the estimated and measured values for NO3-N. One-way analysis of variance and similarity-of-variance tests were used to test whether differences between the error values of the modeled and removed data were significant. When the unedited, full set of measured data was used for variogram modeling, none of the tests produced rejections.
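
    As a rough illustration of the interpolation idea only (not the study's actual cokriging code), the sketch below implements ordinary kriging, the single-variable special case, with an assumed exponential variogram model and made-up sampling data:

```python
import numpy as np

def exp_variogram(h, nugget=0.1, sill=1.0, rng=5.0):
    """Assumed exponential semivariogram: gamma(0) = 0, approaching `sill` at large lags."""
    return np.where(h == 0, 0.0, nugget + (sill - nugget) * (1.0 - np.exp(-h / rng)))

def ordinary_krige(xs, zs, x0, gamma=exp_variogram):
    """Estimate the value at x0 from samples (xs, zs) by ordinary kriging."""
    n = len(xs)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(np.abs(xs[:, None] - xs[None, :]))   # data-to-data semivariances
    A[n, :n] = 1.0                                          # unbiasedness constraint
    A[:n, n] = 1.0
    b = np.append(gamma(np.abs(xs - x0)), 1.0)              # data-to-target semivariances
    w = np.linalg.solve(A, b)[:n]                           # kriging weights
    return float(w @ zs)

# Toy usage: interpolate a drainage-water reading at a missing sampling time.
times = np.array([0.0, 1.0, 2.0, 4.0, 5.0])          # sampling times (assumed units)
tds = np.array([610.0, 640.0, 655.0, 700.0, 690.0])  # made-up TDS readings, mg/L
print(ordinary_krige(times, tds, x0=3.0))
```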

    Quantitative trait locus mapping associated with earliness and fruit weight in tomato

    Flowering time is regarded as an important factor affecting yield in various crops. To understand the molecular basis controlling the main components of earliness in tomato (Solanum lycopersicum L.), and to determine whether the correlations among fruit weight, days to flowering and seed weight are caused by pleiotropic effects or by genetic linkage, a QTL analysis was carried out using an F2 interspecific population derived from the cross of S. lycopersicum and S. pimpinellifolium. The analysis revealed that most of the components related to earliness were independent, given the absence of phenotypic correlation and the lack of co-localization of their QTLs. QTLs affecting flowering time showed considerable variation over time in the explained phenotypic variation and average effects, suggesting that dominance becomes more evident over time. The path analysis showed that traits such as days to flowering, seed weight and length of the first leaf had a significant effect on the expression of fruit weight, confirming that their correlations were due to linkage. This result was also confirmed in two genomic regions located on chromosomes 1 and 4, where, despite high co-localization of QTLs associated with days to flowering, seed weight and fruit weight, the presence and absence of epistasis in dfft1.1 × dftt4.1 and fw1.1 × fw4.1 suggested that linkage was the main cause of the co-localization.
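
    For intuition only, the minimal sketch below runs a naive single-marker scan (one-way ANOVA of a trait across F2 genotype classes); the study itself would rely on interval or composite mapping with a genetic map, and all marker names and data here are invented:

```python
import numpy as np
from scipy.stats import f_oneway

def single_marker_scan(genotypes, phenotype):
    """One-way ANOVA of a trait across genotype classes at each marker.

    genotypes: dict of marker name -> array of F2 genotype codes (0, 1, 2)
    phenotype: array of trait values (e.g. days to flowering)
    Returns a dict of marker name -> ANOVA p-value.
    """
    pvals = {}
    for marker, g in genotypes.items():
        groups = [phenotype[g == k] for k in np.unique(g)]
        pvals[marker] = f_oneway(*groups).pvalue
    return pvals

# Toy usage with two hypothetical markers and a simulated trait.
rng = np.random.default_rng(0)
n = 120
geno = {"m_chr1": rng.integers(0, 3, n), "m_chr4": rng.integers(0, 3, n)}
dtf = 30 + 2.0 * geno["m_chr1"] + rng.normal(0, 3, n)  # simulated days to flowering
print(single_marker_scan(geno, dtf))
```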

    Momentum Strategies in Shari’ah-Compliant Stocks: The Role of Debt

    This article addresses a puzzle: why dividend yield (DY) has lost its predictive ability since the 1990s. Campbell and Shiller [1988]'s dynamic Gordon model provides a theoretical foundation for DY's ability to predict stock returns; however, when the transversality condition fails to hold (that is, when a bubble is present), the model implies that DY cannot predict stock returns. Using a recursive test procedure developed by Phillips et al. [2011] to detect bubbles in the New York Stock Exchange Index, we find that stock price bubbles occurred from the end of 1991 until September 2008, the starting date of the financial turmoil triggered by the subprime crisis. Together with major real-world events that influenced financial markets and the sharp drop in DY in the early 1990s, the empirical evidence coincides with our inference (based on Campbell and Shiller's model) that DY is indeed a useful variable for predicting future stock returns during no-bubble periods, but that it loses its predictive ability when bubbles are present.
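
    To illustrate the flavour of the recursive bubble-detection procedure of Phillips et al. [2011] (not the article's actual implementation), the sketch below computes a supremum of forward-expanding-window ADF statistics; the window setting is arbitrary and right-tail critical values would have to be obtained by simulation:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def sadf_statistic(log_price, min_window=40):
    """Supremum of ADF statistics over expanding windows anchored at the start."""
    stats = []
    for end in range(min_window, len(log_price) + 1):
        adf_stat = adfuller(log_price[:end], regression="c", autolag="AIC")[0]
        stats.append(adf_stat)
    return max(stats)

# Toy usage on a simulated random walk (no bubble expected); the resulting
# statistic would be compared against simulated right-tail critical values.
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0.0, 1.0, 300))
print(sadf_statistic(y))
```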