Economic Incentives Versus Command and Control: What's the Best Approach for Solving Environmental Problems?
Now, decades after the first environmental laws were passed in this country, policymakers face many choices when seeking to solve environmental problems. Will taxing polluters for their discharges be more effective than fining them for not meeting certain emissions standards? Will a regulatory agency find it less costly to enforce a ban or to oversee a system of tradable permits? Which strategy will reduce a pollutant the quickest? Clearly, there are no "one-size-fits-all" answers. Many factors enter into the decision to favor either policies that lean more toward economic incentives (EI) or policies that rely more on direct regulation, commonly referred to as command-and-control (CAC) policy. Underlying determinants include a country's governmental and regulatory infrastructure, along with the nature of the environmental problem itself. Even with these contextual factors to consider, we thought it would be useful to compare EI and CAC policies and their outcomes in a real-world setting. To do this, we looked at six environmental problems that the United States and at least one European country dealt with differently (see box on page 14). For each problem, one approach was more of an EI measure, while the other relied more on CAC. For example, to reduce point-source industrial water pollution, the Netherlands implemented a system of fees for organic pollutants (EI), while the United States established a system of guidelines and permits (CAC). In fact, it turned out that most policies had at least some elements of both approaches, but we categorized them as EI or CAC based on their dominant features. We then asked researchers who had previously studied these policies on either side of the Atlantic to update or prepare new case studies. We analyzed the 12 case studies (two for each of the six environmental problems) against a list of hypotheses frequently made for or against EI and CAC, such as which instrument is more effective or imposes less administrative burden.
Recognition map analysis and crop acreage estimation using Skylab EREP data
There are no author-identified significant results in this report
Investigation of LANDSAT follow-on thematic mapper spatial, radiometric and spectral resolution
The author has identified the following significant results. Fine-resolution M7 multispectral scanner data collected during the Corn Blight Watch Experiment in 1971 served as the basis for this study. Different locations and times of year were studied. Using crop area mensuration as the measure, a definite improvement was observed for 30-40 meter spatial resolution over both the present LANDSAT 1 resolution and 50-60 meter resolution. Simulation studies carried out to extrapolate the empirical results to a range of field size distributions confirmed this effect, showing the improvement to be most pronounced for field sizes of 1-4 hectares. The radiometric sensitivity study showed significant degradation of crop classification accuracy immediately upon relaxation from the nominally specified value of 0.5% noise-equivalent reflectance. This was especially the case for spectrally similar data, such as those collected early in the growing season, and when attempting crop stress detection.
Economic evaluation of crop acreage estimation by multispectral remote sensing
The author has identified the following significant results. Photointerpretation of S190A and S190B imagery showed significantly better resolution with the S190B system. A small tendency to underestimate acreage was observed; this averaged 6 percent and varied with field size. The S190B system had adequate resolution for acreage measurement, but the color film did not provide adequate contrast to allow detailed classification of ground cover from imagery of a single date. In total, 78 percent of the fields were correctly classified, but only 56 percent for the major crop, corn.
Recognition map analysis and crop acreage estimation
There are no author-identified significant results in this report
Nishimori point in the 2D +/- J random-bond Ising model
We study the universality class of the Nishimori point in the 2D +/- J random-bond Ising model by means of the numerical transfer-matrix method. Using the domain-wall free energy, we locate the position of the fixed point along the Nishimori line at the critical concentration value p_c = 0.1094 +/- 0.0002 and estimate nu = 1.33 +/- 0.03. We then obtain the exponents for the moments of the spin-spin correlation functions, as well as the value of the central charge, c = 0.464 +/- 0.004. The main qualitative result is that percolation is now excluded as a candidate for describing the universality class of this fixed point.
Comment: 4 pages REVTeX, 3 PostScript figures; final version to appear in Phys. Rev. Lett.; several small changes and extended explanation.
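The critical concentration quoted above fixes the location of the multicritical point on the Nishimori line. As a quick illustrative sketch (not taken from the paper), the standard Nishimori-line condition for the ±J model with J = 1 and k_B = 1, exp(-2β) = p/(1 - p), converts p_c directly into an inverse temperature:

```python
import math

def nishimori_beta(p):
    """Inverse temperature on the Nishimori line of the +/- J model,
    where the fraction p of antiferromagnetic bonds satisfies
    exp(-2*beta*J) = p / (1 - p), taking J = 1."""
    return 0.5 * math.log((1.0 - p) / p)

# Critical bond concentration reported in the abstract
p_c = 0.1094
beta_c = nishimori_beta(p_c)
T_c = 1.0 / beta_c
print(f"beta_c = {beta_c:.4f}, T_c = {T_c:.4f} (units of J/k_B)")
```

Only p_c itself is taken from the abstract; the closed-form relation is the textbook Nishimori-line condition. With p_c = 0.1094 it gives β_c ≈ 1.048, i.e. T_c ≈ 0.954 in units of J/k_B.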
Dynamics and chemistry of vortex remnants in late Arctic spring 1997 and 2000: Simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS)
High-resolution simulations of the chemical composition of the Arctic stratosphere during late spring 1997 and 2000 were performed with the Chemical Lagrangian Model of the Stratosphere (CLaMS). The simulations were performed for the entire northern hemisphere on two isentropic levels, 450 K (~18 km) and 585 K (~24 km).

The spatial distribution and the lifetime of the vortex remnants formed after the vortex breakup in May 1997 display different behavior above and below 20 km. Above 20 km, vortex remnants propagate southward (up to 40°N) and are "frozen in" in the summer circulation without significant mixing. Below 20 km, the southward propagation of the remnants is bounded by the subtropical jet, and their lifetime is shorter by a factor of 2 than that above 20 km, owing to significant stirring below this altitude. The behavior of the vortex remnants formed in March 2000 is similar but, due to an earlier vortex breakup, is dominated during the first 6 weeks after the breakup by westerly winds, even above 20 km.

Vortex remnants formed in May 1997 are characterized by large mixing ratios of HCl, indicating negligible halogen-induced ozone loss. In contrast, mid-latitude ozone loss in late boreal spring 2000 is dominated, until mid-April, by halogen-induced ozone destruction within the vortex remnants and subsequent transport (dilution) of the ozone-depleted polar air masses into the mid-latitudes. By varying the intensity of mixing in CLaMS, the impact of mixing on the formation of ClONO2 and on ozone depletion is investigated. We find that the photochemical decomposition of HNO3, and not mixing with NOx-rich mid-latitude air, is the main source of NOx within the vortex remnants in March and April 2000. Ozone depletion in the remnants is driven by ClOx photolytically formed from ClONO2. At the end of May 1997, the halogen-induced ozone deficit at 450 K poleward of 30°N amounts to ~12%, with ~10% in the polar vortex and ~2% in well-isolated vortex remnants after the vortex breakup.
Large tunable valley splitting in edge-free graphene quantum dots on boron nitride
Coherent manipulation of binary degrees of freedom is at the heart of modern quantum technologies. Graphene offers two binary degrees of freedom: the electron spin and the valley. Efficient spin control has been demonstrated in many solid-state systems, while exploitation of the valley has only recently begun, and not yet with control at the single-electron level. Here, we show that van der Waals stacking of graphene onto hexagonal boron nitride offers a natural platform for valley control. We use a graphene quantum dot induced by the tip of a scanning tunneling microscope and demonstrate a valley splitting that is tunable from -5 to +10 meV (including valley inversion) by sub-10-nm displacements of the quantum dot position. This boosts the range of controlled valley splitting by about one order of magnitude. The tunable inversion of spin and valley states should enable coherent superposition of these degrees of freedom as a first step towards graphene-based qubits.
Are Individuals Fickle-Minded?
Game theory has been used to model large-scale social events — such as constitutional law, democratic stability, standard setting, gender roles, social movements, communication, markets, the selection of officials by means of elections, coalition formation, resource allocation, distribution of goods, and war — as the aggregate result of individual choices in interdependent decision-making. Game theory in this way assumes methodological individualism. The widespread observation that game-theoretic predictions do not in general match observation has led to many attempts to repair game theory by creating behavioral game theory, which adds corrective terms to the game-theoretic predictions in the hope of making predictions that better match observations. But for game theory to be useful in making predictions, we must be able to generalize from an individual's behavior in one situation to that individual's behavior in closely similar situations. In other words, behavioral game theory needs individuals to be reasonably consistent in action if the theory is to have predictive power. We argue, on the basis of experimental evidence, that the assumption of such consistency is unwarranted. More realistic models of individual agents must be developed that acknowledge the variance in behavior for a given individual.
Nuclear alpha-clustering, superdeformation, and molecular resonances
Nuclear alpha-clustering has been the subject of intense study since the advent of heavy-ion accelerators. Looking back over more than 40 years, we are today able to see the connection between quasimolecular resonances in heavy-ion collisions and extremely deformed states in light nuclei. For example, superdeformed bands have recently been discovered by γ-ray spectroscopy in light N=Z nuclei such as Ar, Ca, Cr, and Ni.
The search for strongly deformed shapes in N=Z nuclei is also the domain of charged-particle spectroscopy, and our experimental group at IReS Strasbourg has systematically studied a number of these nuclei with the charged-particle multidetector array ICARE at the VIVITRON Tandem facility. Recently, the search for α-decays in 24Mg has been undertaken in a range of excitation energies where nuclear molecular resonances were previously found in 12C+12C collisions. The breakup reaction 24Mg → 12C + 12C has been investigated at E(24Mg) = 130 MeV, an energy which corresponds to the excitation energy in 24Mg at which the 12C+12C resonance could be related to the breakup resonance. Very exclusive data were collected with the Binary Reaction Spectrometer in coincidence with EUROBALL IV installed at the VIVITRON.
Comment: 10 pages, 4 eps figures included. Invited Talk, 10th Nuclear Physics Workshop Marie and Pierre Curie, Kazimierz Dolny, Poland, Sep. 24-28, 2003; to be published in International Journal of Modern Physics.
