4,783 research outputs found

    An Agent-Based Model of Multifunctional Agricultural Landscape Using Genetic Algorithms

    Landowner characteristics influence their willingness to change land use practices to provide more or fewer environmental benefits. However, most studies of agricultural/environmental policies treat landowners as homogeneous, and the primary cause of failure of many environmental and other policies is a lack of knowledge about how humans may respond to policies by changing their behavior (Stern, 1993). From socioeconomic theory and empirical research, landowners can be identified as individuals who make agricultural land use decisions independently based on their objectives. Identifying possible classes of landowners, assessing how each would potentially respond to policy alternatives, and mapping the resulting pattern of land uses in a watershed or riparian corridor would be very useful to policy makers as they evaluate alternatives. Agricultural landscapes are important producers of ecosystem services. The mix of ecosystem services and commodity outputs of an agricultural landscape depends on the spatial pattern of land uses emerging from individual land use decisions. However, many empirical studies show that the production of ecosystem services from agricultural landscapes is declining. This is consistent with research conducted over the last few decades showing that there is a narrow range of social circumstances under which landowners are willing to invest in the present, through natural capital, to achieve public benefits in the future; such public goods are frequently produced as ecosystem services. In this study, an agent-based model within a watershed planning context is used to analyze the tradeoffs involved in producing a number of ecosystem services and agricultural commodities under price and policy scenarios, assuming three different types of agents in terms of their goals.
    The agents represent landowners who have been divided into groups based on their goals and the size of their farm operations. The multi-agent model is developed using a heuristic search and optimization technique called the genetic algorithm (GA) (Holland), which belongs to the broader class of evolutionary algorithms. GAs exhibit three properties: (1) they start with a population of solutions, (2) they explore the solution space through recombination and mutation, and (3) they evaluate individual solutions based on appropriate fitness value(s); for profit-maximizing agents, for example, this would be gross margin. A GA is a heuristic stochastic search and optimization method that works by mimicking the evolutionary principles and chromosomal processing of natural genetics. The three economic agents modeled are distinguished by variations in their objective functions and constraints. This study will help identify the tradeoffs associated with the various agents in the provision of ecosystem services and agricultural commodities. The agent model developed here will help policy and decision makers identify the various agents within the watershed and assess policy options based on that information. The study will also help in understanding the interaction and feedback between the agents and their environment under various policy initiatives. The results indicate that the agent model correctly predicts 75 percent of the actual land use/land cover map.

    Keywords: Multifunctional agriculture, Agent-based modeling, Genetic algorithm, Environmental Economics and Policy, Land Economics/Use
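    The three GA properties above (a population of solutions, exploration by recombination and mutation, evaluation by fitness) can be sketched for a single profit-maximizing agent choosing a land use for each parcel. Everything below — the gross margins, the parcel count, and the GA parameters — is a hypothetical illustration, not the study's model:

    ```python
    import random

    random.seed(42)  # reproducible toy run

    # Hypothetical gross margins ($/ha) for three land uses; in the study
    # these would come from farm budgets and the watershed GIS data.
    GROSS_MARGIN = {"corn": 420.0, "soy": 380.0, "buffer": 150.0}
    LAND_USES = list(GROSS_MARGIN)
    N_PARCELS = 20

    def fitness(plan):
        """Profit-maximizing agent: fitness is total gross margin."""
        return sum(GROSS_MARGIN[use] for use in plan)

    def crossover(a, b):
        """Single-point recombination of two land-use plans."""
        cut = random.randrange(1, N_PARCELS)
        return a[:cut] + b[cut:]

    def mutate(plan, rate=0.05):
        """Randomly reassign each parcel's land use with small probability."""
        return [random.choice(LAND_USES) if random.random() < rate else use
                for use in plan]

    def evolve(pop_size=30, generations=50):
        # (1) start with a population of candidate solutions
        pop = [[random.choice(LAND_USES) for _ in range(N_PARCELS)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            elite = pop[: pop_size // 2]
            # (2) explore the space through recombination and mutation
            pop = elite + [mutate(crossover(random.choice(elite),
                                            random.choice(elite)))
                           for _ in range(pop_size - len(elite))]
        # (3) pick the individual with the best fitness value
        return max(pop, key=fitness)

    best = evolve()
    print(fitness(best))
    ```

    With elitist selection the plan converges toward the highest-margin land use; the agents in the study differ precisely in having different objective functions and constraints in place of this single-objective fitness.
    
    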

    On EM Reconstruction of a Multi Channel Shielded Applicator for Cervical Cancer Brachytherapy: A Feasibility Study

    Electromagnetic tracking (EMT) is a promising technology for automated catheter and applicator reconstructions in brachytherapy. In this work, a proof-of-concept is presented for reconstruction of the individual channels of a shielded tandem applicator dedicated to intensity modulated brachytherapy. All six channels of a straight prototype were reconstructed, and the distance between two opposite channels was measured. A study was also conducted on the influence of the shield on the data fluctuation of the EMT system. The differences from the CAD-specified dimensions are under 2 mm. The pair of channels with one channel more distant from the generator shows a higher inter-channel distance with higher variability. In the first 110 cm of the reconstruction, all inter-channel distances are within the geometrical tolerances. According to a paired Student t-test, the data given by the EM system with and without the shielded applicator tip are not significantly different. This study shows that reconstruction of the channel paths within the mechanical accuracy of the applicator is possible.

    Comment: 3 pages, 3 figures
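    The shield-influence comparison rests on a paired Student t-test over repeated position readings taken with and without the shielded tip. A minimal sketch of that test, using hypothetical sensor readings (in mm) rather than the study's data:

    ```python
    import math
    import statistics

    # Hypothetical paired position readings (mm) from the EMT sensor,
    # measured without and with the shielded applicator tip present.
    without_shield = [10.02, 10.11, 9.95, 10.08, 9.99, 10.05, 10.01, 9.97]
    with_shield    = [10.04, 10.09, 9.98, 10.06, 10.02, 10.03, 9.99, 10.00]

    def paired_t(a, b):
        """Paired Student t statistic computed on per-sample differences."""
        diffs = [x - y for x, y in zip(a, b)]
        n = len(diffs)
        mean = statistics.fmean(diffs)
        sd = statistics.stdev(diffs)        # sample standard deviation
        return mean / (sd / math.sqrt(n))

    t = paired_t(without_shield, with_shield)
    # Compare |t| against the two-sided critical value for n-1 = 7 degrees
    # of freedom at alpha = 0.05 (about 2.365): smaller means "not
    # significantly different", as the abstract concludes for the shield.
    print(round(t, 3), abs(t) < 2.365)
    ```
    
    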

    Should we mine the deep seafloor?

    As land-based mineral resources become increasingly difficult and expensive to acquire, the potential for mining resources from the deep seafloor has become widely discussed and debated. Exploration leases are being granted, and technologies are under development. However, the quantity and quality of the resources are uncertain, and many worry about risks to vulnerable deep-sea ecosystems. Deep-sea mining has become part of the discussion of the United Nations Sustainable Development Goals. In this article we provide a summary of the benefits, costs, and uncertainties that surround this potentially attractive but contentious topic.

    ECOLOGICAL-ECONOMIC MODELING ON A WATERSHED BASIS: A CASE STUDY OF THE CACHE RIVER OF SOUTHERN ILLINOIS

    A digitally represented watershed landscape (ARC/INFO GIS) is merged with farm optimization (linear programming) and sediment and chemical transport (AGNPS) models. The result is enhanced targeting of non-point source pollution for remedial policy and management initiatives, the implications of which are linked back to farm income and forward to the managed ecosystem.

    Keywords: Environmental Economics and Policy, Research and Development/Tech Change/Emerging Technologies
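    The farm-optimization component is a linear program: maximize gross margin subject to a land constraint and a transport-model-derived sediment cap. A toy two-crop sketch, exploiting the fact that an LP optimum lies at a vertex of the feasible polygon; all coefficients are hypothetical, not values from the study:

    ```python
    from itertools import combinations

    # Constraints as a*x + b*y <= c, for x = ha of corn, y = ha of soybeans.
    constraints = [
        (1.0, 1.0, 100.0),   # land: x + y <= 100 ha
        (5.0, 2.0, 320.0),   # AGNPS-style sediment cap: 5x + 2y <= 320 t/yr
        (-1.0, 0.0, 0.0),    # x >= 0
        (0.0, -1.0, 0.0),    # y >= 0
    ]
    margin = (450.0, 380.0)  # hypothetical gross margins, $/ha

    def intersect(c1, c2):
        """Solve the 2x2 system where two constraint boundaries meet."""
        a1, b1, d1 = c1
        a2, b2, d2 = c2
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            return None          # parallel boundaries, no vertex
        return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

    def feasible(p):
        return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

    # An LP optimum lies at a vertex of the feasible region, so enumerate
    # all pairwise boundary intersections and keep the feasible ones.
    vertices = [p for c1, c2 in combinations(constraints, 2)
                if (p := intersect(c1, c2)) is not None and feasible(p)]
    best = max(vertices, key=lambda p: margin[0] * p[0] + margin[1] * p[1])
    income = margin[0] * best[0] + margin[1] * best[1]
    print(best, income)
    ```

    With these numbers the sediment cap binds before the land runs out, pushing part of the farm into the lower-sediment crop; that tradeoff between farm income and pollutant load is exactly what the merged model targets.
    
    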

    Large Magellanic Cloud Microlensing Optical Depth with Imperfect Event Selection

    I present a new analysis of the MACHO Project 5.7 year Large Magellanic Cloud (LMC) microlensing data set that incorporates the effects of contamination of the microlensing event sample by variable stars. Photometric monitoring of MACHO LMC microlensing event candidates by the EROS and OGLE groups has revealed that one of these events is likely to be a variable star, while additional data have confirmed that many of the other events are very likely to be microlensing. This additional data on the nature of the MACHO microlensing candidates is incorporated into a simple likelihood analysis to derive a probability distribution for the number of MACHO microlens candidates that are true microlensing events. This analysis shows that 10-12 of the 13 events that passed the MACHO selection criteria are likely to be microlensing events, with the other 1-3 being variable stars. The likelihood analysis is also used to show that the main conclusions of the MACHO LMC analysis are unchanged by the variable star contamination. The microlensing optical depth toward the LMC is tau = (1.0 +/- 0.3) x 10^{-7}. If this is due to microlensing by known stellar populations, plus an additional population of lens objects in the Galactic halo, then the new halo population would account for 16% of the mass of a standard Galactic halo. The MACHO detection exceeds the background of 2 events expected from ordinary stars in standard models of the Milky Way and LMC at the 99.98% confidence level. The background prediction increases to 3 events if maximal disk models are assumed for both the Milky Way and LMC, but this model still fails to account for the full signal seen by MACHO at the 99.8% confidence level.

    Comment: 20 pages, 2 postscript figures, accepted by Ap
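    The likelihood idea behind "10-12 of the 13 events" can be illustrated as a Poisson-binomial calculation: assign each candidate a probability of being genuine microlensing and convolve the per-event Bernoulli distributions to get the distribution of the number of true events. The per-event probabilities below are hypothetical placeholders, not the paper's values:

    ```python
    # Hypothetical probabilities that each of 13 candidates is genuine
    # microlensing (the paper derives these from follow-up photometry).
    probs = [0.95] * 10 + [0.7, 0.5, 0.1]

    # Convolve the Bernoulli distributions: dist[n] = P(n true events).
    dist = [1.0]                        # before any candidate: P(0) = 1
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for n, q in enumerate(dist):
            new[n]     += q * (1 - p)   # this candidate is a variable star
            new[n + 1] += q * p         # this candidate is true microlensing
        dist = new

    mode = max(range(len(dist)), key=lambda n: dist[n])
    print(mode, round(sum(dist), 6))
    ```

    The mode and the spread of `dist` play the role of the paper's "10-12 of 13" statement; the actual analysis additionally propagates this distribution into the optical depth.
    
    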

    Tracking the phase-transition energy in disassembly of hot nuclei

    In efforts to determine phase transitions in the disintegration of highly excited heavy nuclei, a popular practice is to parametrise the yields of isotopes as a function of temperature in the form $Y(z)=z^{-\tau}f(z^{\sigma}(T-T_0))$, where the $Y(z)$'s are the measured yields and $\tau$, $\sigma$ and $T_0$ are fitted to the yields. Here $T_0$ would be interpreted as the phase transition temperature. For finite systems such as those obtained in nuclear collisions, this parametrisation is only approximate and hence allows for extraction of $T_0$ in more than one way. In this work we look in detail at how values of $T_0$ differ depending on the method of extraction. It should be mentioned that for finite systems this approximate parametrisation works not only at the critical point but also for first-order phase transitions (at least in some models); thus the approximate fit is no guarantee that one is seeing a critical phenomenon. A different but more conventional search for the nuclear phase transition would look for a maximum in the specific heat as a function of temperature, at $T_2$. In this case $T_2$ is interpreted as the phase transition temperature. Ideally $T_0$ and $T_2$ would coincide. We investigate this possibility, both in theory and from the ISiS data, performing both canonical (in $T$) and microcanonical (in $e=E^*/A$) calculations. Although more than one value of $T_0$ can be extracted from the approximate parametrisation, the work here points to the best value among the choices. Several interesting results seen in theoretical calculations are borne out in experiment.

    Comment: Revtex, 10 pages including 8 figures and 2 tables
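    One simple way to extract $T_0$ from the scaling ansatz is to scan candidate values and pick the one whose rescaled yields $z^{\tau}Y(z)$ best match a single curve $f$. A toy sketch with synthetic yields; the parameter values and the Gaussian form of $f$ are assumptions for illustration, not the fitted quantities from the paper:

    ```python
    import math

    TAU, SIGMA, TRUE_T0 = 2.2, 0.64, 6.0   # hypothetical, not fitted values

    def f(x):
        """Assumed scaling function used to generate the synthetic data."""
        return math.exp(-x * x)

    # Synthetic yields following Y(z) = z^{-tau} f(z^{sigma} (T - T0)).
    data = [(z, T, z ** -TAU * f(z ** SIGMA * (T - TRUE_T0)))
            for z in (4, 6, 9, 12, 16)
            for T in [5.0 + 0.1 * i for i in range(21)]]

    def misfit(t0):
        """Sum of squared residuals of z^tau * Y against f at candidate t0."""
        return sum((z ** TAU * y - f(z ** SIGMA * (T - t0))) ** 2
                   for z, T, y in data)

    # Scan candidate transition temperatures; the minimum recovers T0 for
    # this idealized data, while real finite-system yields only fit the
    # ansatz approximately, which is why extractions of T0 can disagree.
    candidates = [5.0 + 0.05 * i for i in range(41)]
    best_t0 = min(candidates, key=misfit)
    print(round(best_t0, 2))
    ```
    
    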

    Direct imaging constraints on planet populations detected by microlensing

    Results from gravitational microlensing have suggested the existence of a large population of free-floating planetary-mass objects. The main conclusion from this work was partly based on constraints from a direct imaging survey, which determined upper limits for the frequency of stars that harbor giant exoplanets at large orbital separations. We want to verify to what extent upper limits from direct imaging do indeed constrain the microlensing results. We examine the current derivation of the upper limits used in the microlensing study and re-analyze the data from the corresponding imaging survey. We focus on the mass and semi-major axis ranges that are most relevant in the context of the microlensing results. We also consider new results from a recent M-dwarf imaging survey, as these objects are typically the host stars for planets detected by microlensing. We find that the upper limits currently applied in the context of the microlensing results are probably underestimated, meaning that a larger fraction of stars than assumed may harbor gas giant planets at larger orbital separations. Also, the way the upper limit is currently used to estimate the fraction of free-floating objects is not strictly correct. If the planetary surface density of giant planets around M-dwarfs is described as df_Planet ~ a^beta da, we find that beta ~ 0.5 - 0.6 is consistent with results from different observational studies probing semi-major axes between ~0.03 and 30 AU. A higher upper limit on the fraction of stars that may have gas giant planets at the orbital separations probed by the microlensing data implies that more of the planets detected in the microlensing study are potentially bound to stars rather than free-floating. The current observational data are consistent with a rising planetary surface density for giant exoplanets around M-dwarfs out to ~30 AU.

    Comment: Accepted for publication in A&A as a Research Note, 3 pages
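    The quoted surface-density law df_Planet ~ a^beta da has a direct consequence that can be checked by integration: for a rising density (beta > 0), most planets sit at wide separations. A sketch using beta = 0.5 from the abstract; the normalization range and the specific separations below are illustrative assumptions:

    ```python
    # Under df ~ a^beta da, the number of planets with semi-major axis in
    # [a1, a2] is proportional to the integral of a^beta over that range.
    BETA = 0.5   # value quoted in the abstract (beta ~ 0.5 - 0.6)

    def fraction_between(a1, a2, a_min=0.03, a_max=30.0):
        """Fraction of planets with a in [a1, a2], normalized over the
        ~0.03 - 30 AU range the abstract says observations probe."""
        def F(a):                         # antiderivative of a^beta
            return a ** (BETA + 1) / (BETA + 1)
        return (F(a2) - F(a1)) / (F(a_max) - F(a_min))

    # Fraction of planets beyond 10 AU, where direct imaging constrains
    # the population that microlensing might otherwise call free-floating.
    wide = fraction_between(10.0, 30.0)
    print(round(wide, 3))
    ```

    With a rising density most of the population lies at separations where imaging surveys, not microlensing, set the limits, which is why an underestimated imaging upper limit propagates into the free-floating-planet estimate.
    
    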

    Effect of interleukin-6 receptor blockade on surrogates of vascular risk in rheumatoid arthritis: MEASURE, a randomised, placebo-controlled study

    Objectives The interleukin-6 receptor (IL-6R) blocker tocilizumab (TCZ) reduces inflammatory disease activity in rheumatoid arthritis (RA) but elevates lipid concentrations in some patients. We aimed to characterise the impact of IL-6R inhibition on established and novel risk factors in active RA. Methods A randomised, multicentre, two-part, phase III trial (24-week double-blind, 80-week open-label), MEASURE, evaluated lipid and lipoprotein levels, high-density lipoprotein (HDL) particle composition, markers of coagulation, thrombosis and vascular function by pulse wave velocity (PWV) in 132 patients with RA who received TCZ or placebo. Results Median total cholesterol, low-density lipoprotein cholesterol (LDL-C) and triglyceride levels increased in TCZ versus placebo recipients by week 12 (12.6% vs 1.7%, 28.1% vs 2.2%, 10.6% vs −1.9%, respectively; all p<0.01). There were no significant differences in mean small LDL, mean oxidised LDL or total HDL-C concentrations. However, HDL-associated serum amyloid A content decreased in TCZ recipients. TCZ also induced reductions (<30%) in secretory phospholipase A2-IIA, lipoprotein(a), fibrinogen and D-dimers and elevation of paraoxonase (all p<0.0001 vs placebo). The ApoB/ApoA1 ratio remained stable over time in both groups. PWV decreases were greater with placebo than TCZ at 12 weeks (adjusted mean difference 0.79 m/s, 95% CI 0.22 to 1.35; p=0.0067). Conclusions These data provide the first detailed evidence for the modulation of lipoprotein particles and other surrogates of vascular risk with IL-6R inhibition. Compared with placebo, TCZ induced elevations in LDL-C but altered HDL particles towards an anti-inflammatory composition and favourably modified most, but not all, measured vascular risk surrogates. The net effect of such changes for cardiovascular risk requires determination.