
    Estimating global warming potential for agricultural landscapes with minimal field data and cost

    Greenhouse gas (GHG) emissions from agriculture comprise 10-12% of anthropogenic global emissions, and 76% of agricultural emissions are generated in the developing world. Landscape-scale GHG accounting is an effective way to efficiently develop baseline emissions estimates and appropriate mitigation approaches. In a 9,736-hectare case study area dominated by rice and wheat in the Karnal district of Haryana state, India, we applied a low-cost landscape agricultural GHG accounting method combining limited fieldwork, remote sensing, and biogeochemical modeling. We used the DeNitrification-DeComposition (DNDC) model to simulate crop growth and carbon and nitrogen cycling and estimate net GHG emissions, with inputs based on mapping cropping patterns over time from multi-resolution, multi-temporal optical remote sensing imagery. We estimated a mean net emission of 78,620 tCO2e/yr (tons of carbon dioxide equivalent per year) with a 95% confidence interval of 51,212-106,028 tCO2e/yr, based on uncertainties in our crop mapping and soil data. A modeling sensitivity analysis showed soil clay fraction, soil organic carbon fraction, soil density, and nitrogen amendments to be among the most sensitive factors, and therefore critical to capture in field surveys. We recommend a multi-phase approach to increase efficiency and reduce cost in GHG accounting: field campaigns and remote sensing image characteristics can be optimized for targeted landscapes through solid background research, and an appropriate modeling approach can be selected based on crop and soil characteristics. Soil data in developing-world landscapes remain a significant source of uncertainty for studies like these and should remain a key focus of research and data development.
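    The uncertainty propagation described above can be illustrated with a minimal Monte Carlo sketch. The landscape area and the mean/CI figures come from the abstract; the per-hectare emission factor and its spread are hypothetical placeholders chosen so the totals land in a similar range, not values from the study.

```python
import random
import statistics

random.seed(42)

AREA_HA = 9_736  # case-study landscape area from the abstract (hectares)

def simulate_net_emissions(n_draws: int = 10_000) -> list[float]:
    """Monte Carlo draws of landscape net GHG emissions (tCO2e/yr).

    The per-hectare emission factor and its standard deviation are
    illustrative placeholders, not values from the study.
    """
    mean_factor = 8.07   # tCO2e/ha/yr, hypothetical (~78,620 / 9,736)
    sd_factor = 1.4      # hypothetical spread from crop-map and soil errors
    return [AREA_HA * random.gauss(mean_factor, sd_factor)
            for _ in range(n_draws)]

draws = simulate_net_emissions()
draws.sort()
mean = statistics.fmean(draws)
# Empirical 2.5th and 97.5th percentiles give the 95% interval.
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"mean {mean:,.0f} tCO2e/yr, 95% CI [{lo:,.0f}, {hi:,.0f}]")
```

    In the study itself the spread comes from rerunning DNDC over the plausible range of crop-map and soil inputs rather than a single Gaussian factor, but the percentile bookkeeping is the same.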

    Modeling and predicting the shape of the far-infrared to submillimeter emission in ultra-compact HII regions and cold clumps

    Dust properties are very likely affected by the environment in which dust grains evolve. For instance, some analyses of cold clumps (7 K-17 K) indicate that the aggregation process is favored in dense environments. However, studies of warm (30 K-40 K) dust emission at long wavelengths (λ > 300 μm) have been limited because it is difficult to combine far-infrared-to-millimeter (FIR-to-mm) spectral coverage and high angular resolution in observations of warm dust grains. Using Herschel data from 70 to 500 μm from the Herschel infrared Galactic (Hi-GAL) survey, combined with 1.1 mm data from the Bolocam Galactic Plane Survey (BGPS), we compared emission in two types of environments: ultra-compact HII (UCHII) regions and cold molecular clumps (denoted cold clumps). With this comparison we tested dust emission models in the FIR-to-mm domain that reproduce emission in the diffuse medium against these two environments, and investigated their ability to predict the dust emission in our Galaxy. We determined the emission spectra of twelve UCHII regions and twelve cold clumps and derived the dust temperature (T) using the recent two-level system (TLS) model with three sets of parameters, as well as the so-called T-β (temperature-dust emissivity index) phenomenological model with β set to 1.5, 2, and 2.5. We tested the applicability of the TLS model in warm regions for the first time. This analysis indicates distinct trends in the dust emission between cold and warm environments, visible through changes in the dust emissivity index. However, with standard parameters the TLS model is able to reproduce the spectral behavior observed in both cold and warm regions from the change in dust temperature alone, whereas a T-β model requires β to be known. Comment: Accepted for publication in A&A. 19 pages, 8 figures, 7 tables.
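    The T-β phenomenological model mentioned above is an optically thin modified blackbody, S_ν ∝ ν^β B_ν(T). A minimal sketch, with an illustrative optical-depth normalisation and reference wavelength (not values from the paper), showing how warm UCHII-like dust peaks near the shortest Herschel band while a cold clump peaks near 250 μm:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
K_B = 1.380649e-23   # Boltzmann constant (J/K)
C = 2.99792458e8     # speed of light (m/s)

def planck(nu: float, T: float) -> float:
    """Planck function B_nu(T) in SI units (W m^-2 Hz^-1 sr^-1)."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (K_B * T))

def modified_blackbody(wavelength_um: float, T: float, beta: float,
                       tau0: float = 1e-3, lam0_um: float = 250.0) -> float:
    """T-beta model: tau0 * (lam0/lam)**beta * B_nu(T), optically thin.

    tau0 and the reference wavelength lam0 are illustrative choices.
    """
    nu = C / (wavelength_um * 1e-6)
    tau = tau0 * (lam0_um / wavelength_um) ** beta
    return tau * planck(nu, T)

bands = [70, 160, 250, 350, 500]  # Hi-GAL Herschel bands, micron
warm = [modified_blackbody(lam, T=35.0, beta=2.0) for lam in bands]
cold = [modified_blackbody(lam, T=12.0, beta=2.0) for lam in bands]
# warm peaks at the 70 micron band; cold peaks near 250 micron.
```

    The 1.1 mm BGPS point extends the same curve beyond 500 μm, which is what constrains β on the Rayleigh-Jeans side.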

    Multiwavelength study of the high-latitude cloud L1642: chain of star formation

    L1642 is one of the two high Galactic latitude (|b| > 30 deg) clouds confirmed to have active star formation. We examine the properties of this cloud, especially its large-scale structure, dust properties, and compact sources in different stages of star formation. We present high-resolution far-infrared and submillimeter observations with the Herschel and AKARI satellites and mm observations with the AzTEC/ASTE telescope, which we combined with archival data from the near- and mid-infrared (2MASS, WISE) to mm wavelengths (Planck). The Herschel observations, combined with other data, show a sequence of objects from a cold clump to young stellar objects at different evolutionary stages. Source B-3 (2MASS J04351455-1414468) appears to be a YSO forming inside the L1642 cloud, rather than a foreground brown dwarf as previously classified. Herschel data reveal striation in the diffuse dust emission around L1642. The western region shows striation towards the northeast and has a steeper column density gradient on its southern side. The densest central region has a bow-shock-like structure showing compression from the west and a filamentary tail extending towards the east. These differences suggest that they may be spatially distinct structures, aligned only in projection. We derive values of the dust emission cross-section per H nucleon for different regions of the cloud. Modified blackbody fits to the spectral energy distributions of the Herschel and Planck data give emissivity spectral index β values of 1.8-2.0 for the different regions. The compact sources have lower β values and show an anticorrelation between T and β. Markov chain Monte Carlo calculations demonstrate the strong anticorrelation between the β and T errors and the importance of the mm Planck data in constraining the estimates.
    L1642 reveals a more complex structure and sequence of star formation than previously known. Comment: 22 pages, 18 figures, accepted to Astronomy & Astrophysics; abstract shortened and figures reduced for astro-ph.
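    The T-β error anticorrelation in modified blackbody fitting can be demonstrated with a brute-force fit of a noiseless synthetic spectrum: forcing a steeper β drags the best-fit temperature down, and vice versa. The bands and the true (T, β) = (15 K, 2.0) below are illustrative, not taken from the paper.

```python
import math

def mbb(lam_um: float, T: float, beta: float) -> float:
    """Optically thin modified blackbody, arbitrary normalisation.

    flux ~ lam**-(3+beta) / (exp(h*c / (lam*k_B*T)) - 1),
    with h*c/k_B = 14387.77 micron*K.
    """
    x = 14387.77 / (lam_um * T)
    return lam_um ** (-(3 + beta)) / math.expm1(x)

bands = [100, 160, 250, 350, 500, 850]  # micron, FIR-to-mm coverage
T_TRUE, BETA_TRUE = 15.0, 2.0
obs = [mbb(lam, T_TRUE, BETA_TRUE) for lam in bands]

def best_fit_T(beta: float) -> float:
    """Grid search over T, with the amplitude solved analytically."""
    best_T, best_chi2 = 0.0, float("inf")
    for i in range(81, 251):            # T from 8.1 to 25.0 K in 0.1 K steps
        T = i / 10
        m = [mbb(lam, T, beta) for lam in bands]
        a = sum(f * mi for f, mi in zip(obs, m)) / sum(mi * mi for mi in m)
        chi2 = sum((f - a * mi) ** 2 for f, mi in zip(obs, m))
        if chi2 < best_chi2:
            best_T, best_chi2 = T, chi2
    return best_T

# Forcing a steeper beta pulls the fitted temperature down, and a
# shallower beta pushes it up: the T-beta degeneracy.
fits = {b: best_fit_T(b) for b in (1.5, 2.0, 2.5)}
```

    The mm-wavelength Planck point in the paper plays the role of the 850 μm band here: without long-wavelength leverage the chi-square valley along this degeneracy direction is much flatter.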

    Plasma Physics

    Contains reports on six research projects. United States Atomic Energy Commission (Contract AT(30-1)-1842).

    Chaos in Time Dependent Variational Approximations to Quantum Dynamics

    Dynamical chaos has recently been shown to exist in the Gaussian approximation in quantum mechanics and in the self-consistent mean field approach to studying the dynamics of quantum fields. In this study, we first show that any variational approximation to the dynamics of a quantum system based on the Dirac action principle leads to classical Hamiltonian dynamics for the variational parameters. Since this Hamiltonian is generically nonlinear and nonintegrable, the dynamics thus generated can be chaotic, in distinction to the exact quantum evolution. We then restrict attention to a system of two biquadratically coupled quantum oscillators and study two variational schemes: the leading-order large-N (four canonical variables) and Hartree (six canonical variables) approximations. The chaos seen in the approximate dynamics is an artifact of the approximations: this is demonstrated by the fact that its onset occurs on the same characteristic time scale as the breakdown of the approximations when compared to numerical solutions of the time-dependent Schrödinger equation. Comment: 10 pages (12 figures), RevTeX (plus macro), uses epsf, minor typos corrected.
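    A classical toy version of the biquadratic coupling is the two-degree-of-freedom Hamiltonian H = (p1² + p2²)/2 + (x1² + x2²)/2 + g·x1²x2², whose flow is nonlinear and can be chaotic. The sketch below integrates it with a symplectic leapfrog scheme; the coupling constant and initial conditions are illustrative, not the paper's variational equations.

```python
def force(x1: float, x2: float, g: float) -> tuple[float, float]:
    """-dV/dx for V = (x1^2 + x2^2)/2 + g * x1^2 * x2^2."""
    return -x1 - 2 * g * x1 * x2**2, -x2 - 2 * g * x2 * x1**2

def energy(x1: float, x2: float, p1: float, p2: float, g: float) -> float:
    return 0.5 * (p1 * p1 + p2 * p2) + 0.5 * (x1 * x1 + x2 * x2) \
        + g * x1 * x1 * x2 * x2

def leapfrog(x1, x2, p1, p2, g=1.0, dt=1e-3, steps=20_000):
    """Kick-drift-kick leapfrog; symplectic, so energy error stays bounded."""
    f1, f2 = force(x1, x2, g)
    for _ in range(steps):
        p1 += 0.5 * dt * f1; p2 += 0.5 * dt * f2   # half kick
        x1 += dt * p1; x2 += dt * p2               # drift
        f1, f2 = force(x1, x2, g)
        p1 += 0.5 * dt * f1; p2 += 0.5 * dt * f2   # half kick
    return x1, x2, p1, p2

g = 1.0
x0 = (1.0, 0.5, 0.0, 0.0)      # illustrative initial condition
E0 = energy(*x0, g)
xf = leapfrog(*x0, g=g)
Ef = energy(*xf, g)
# Energy is conserved to high accuracy even when trajectories are chaotic.
```

    Running two copies from nearby initial conditions and watching their separation grow is the usual way to see the chaotic regime of this potential; energy conservation is the cheap correctness check on the integrator itself.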

    License prices for financially constrained firms

    It is often alleged that high auction prices inhibit service deployment. We investigate this claim in the extreme case of financially constrained bidders. If demand is just slightly elastic, auctions maximize consumer surplus if consumer surplus is a convex function of quantity (a common assumption), or if consumer surplus is concave and the proportion of expenditure spent on deployment is greater than one over the elasticity of demand. The latter condition appears to hold for most of the large telecom auctions in the US and Europe. Thus, even if high auction prices inhibit service deployment, auctions appear to be optimal from the consumers’ point of view.
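    The sufficient condition stated above reduces to a one-line check. The elasticity and expenditure-share numbers below are hypothetical, chosen only to illustrate the comparison, not figures from the paper.

```python
def auctions_preferred(elasticity: float, deployment_share: float,
                       surplus_convex: bool) -> bool:
    """Sufficient condition from the abstract: auctions maximize consumer
    surplus when surplus is convex in quantity, or when it is concave and
    the share of expenditure spent on deployment exceeds 1/elasticity."""
    if surplus_convex:
        return True
    return deployment_share > 1.0 / elasticity

# Hypothetical numbers: demand elasticity 2.5, and 60% of winners'
# expenditure goes to network deployment, so 0.60 > 1/2.5 = 0.40.
print(auctions_preferred(2.5, 0.60, surplus_convex=False))  # True
```

    The paper's empirical claim is that the large US and European telecom auctions sit on the `True` side of this inequality.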