
    Aggregate Demand and Supply

    This paper is part of a broader project that provides a microfoundation to the General Theory of J.M. Keynes. I call this project 'old Keynesian economics' to distinguish it from new-Keynesian economics, a theory that is based on the idea that to make sense of Keynes we must assume that prices are sticky. I describe a multi-good model in which I interpret the definitions of aggregate demand and supply found in the General Theory through the lens of a search theory of the labor market. I argue that Keynes' aggregate supply curve can be interpreted as the aggregate of a set of first order conditions for the optimal choice of labor and, using this interpretation, I reintroduce a diagram that was central to the textbook teaching of Keynesian economics in the immediate post-war period.
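
    As a hedged illustration (our notation, not the paper's own model), the sense in which an aggregate supply curve can be built from first-order conditions for labor can be sketched for a competitive firm with technology $X = A N^{\alpha}$, money wage $W$, and output price $p$:

```latex
% A minimal sketch, assuming a competitive firm with technology X = A N^{\alpha},
% money wage W, and output price p (illustrative notation, not the paper's own).
\begin{align*}
  \max_{N}\; \{\, pAN^{\alpha} - WN \,\}
    \quad&\Longrightarrow\quad W = \alpha\, p A N^{\alpha-1}
      && \text{(first-order condition for labor)}\\
  Z(N) \equiv pX
    \quad&=\quad \frac{WN}{\alpha}
      && \text{(proceeds required to justify employment $N$)}
\end{align*}
```

    Summing such conditions over firms traces out an upward-sloping relationship between employment and the proceeds firms require, which is the sense in which an aggregate supply curve can be read as an aggregate of first-order conditions for the optimal choice of labor.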

    On the indeterminacy of new-Keynesian economics

    We study identification in a class of three-equation monetary models. We argue that these models are typically not identified. For any given exactly identified model, we provide an algorithm that generates a class of equivalent models that have the same reduced form. We use our algorithm to provide four examples of the consequences of lack of identification. In our first two examples we show that it is not possible to tell whether the policy rule or the Phillips curve is forward or backward looking. In example 3 we establish an equivalence between a class of models proposed by Benhabib and Farmer [1] and the standard new-Keynesian model. This result is disturbing since equilibria in the Benhabib-Farmer model are typically indeterminate for a class of policy rules that generate determinate outcomes in the new-Keynesian model. In example 4, we show that there is an equivalence between determinate and indeterminate models even if one knows the structural equations of the model. JEL Classification: C39, C62, D51, E52, E58. Keywords: identification, indeterminacy, new-Keynesian model, transparency
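
    To see the flavor of the identification problem in a hedged, stripped-down example (a single-equation analogue, not the paper's three-equation system or algorithm), consider a forward-looking model $x_t = a\,E_t x_{t+1} + u_t$ with AR(1) shocks $u_t = \rho u_{t-1} + \varepsilon_t$. Its unique bounded solution is $x_t = \rho x_{t-1} + \varepsilon_t/(1 - a\rho)$, which has the same reduced form as a purely backward-looking model with autoregressive coefficient $\rho$; the forward-looking parameter $a$ never appears in the reduced form. The sketch below simulates both models and fits the same AR(1) to each:

```python
# Hedged sketch: a single-equation analogue of the identification problem,
# not the paper's three-equation algorithm.  Parameter names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, rho, a, sigma = 50_000, 0.8, 0.5, 1.0

# Forward-looking model: x_t = a*E_t[x_{t+1}] + u_t, with u_t = rho*u_{t-1} + eps_t.
# Bounded RE solution: x_t = u_t / (1 - a*rho), i.e. x_t = rho*x_{t-1} + eps_t/(1 - a*rho).
eps = rng.normal(0.0, sigma, T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + eps[t]
x_forward = u / (1.0 - a * rho)

# Backward-looking model with the same innovation variance: x_t = rho*x_{t-1} + e_t.
e = rng.normal(0.0, sigma / (1.0 - a * rho), T)
x_backward = np.zeros(T)
for t in range(1, T):
    x_backward[t] = rho * x_backward[t - 1] + e[t]

def fit_ar1(x):
    """OLS slope of x_t on x_{t-1}: the reduced-form AR(1) coefficient."""
    y, z = x[1:], x[:-1]
    return float(z @ y / (z @ z))

print("reduced-form AR(1), forward-looking model :", round(fit_ar1(x_forward), 3))
print("reduced-form AR(1), backward-looking model:", round(fit_ar1(x_backward), 3))
# Both estimates are close to rho = 0.8; the data alone cannot reveal the value of 'a'.
```

    Both fitted coefficients are indistinguishable in population, which is the one-equation version of why forward- and backward-looking specifications can be observationally equivalent.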

    On Carbon Burning in Super Asymptotic Giant Branch Stars

    We explore the detailed and broad properties of carbon burning in Super Asymptotic Giant Branch (SAGB) stars with 2755 MESA stellar evolution models. The location of first carbon ignition, quenching location of the carbon burning flames and flashes, angular frequency of the carbon core, and carbon core mass are studied as a function of the ZAMS mass, initial rotation rate, and mixing parameters such as convective overshoot, semiconvection, thermohaline mixing, and angular momentum transport. In general terms, we find that these properties of carbon burning in SAGB models are not a strong function of the initial rotation profile, but are a sensitive function of the overshoot parameter. We quasi-analytically derive an approximate ignition density, $\rho_{\rm ign} \approx 2.1 \times 10^{6}$ g cm$^{-3}$, to predict the location of first carbon ignition in models that ignite carbon off-center. We also find that overshoot moves the ZAMS mass boundaries where off-center carbon ignition occurs at a nearly uniform rate of $\Delta M_{\rm ZAMS}/\Delta f_{\rm ov} \approx 1.6\,M_{\odot}$. For zero overshoot, $f_{\rm ov}=0.0$, our models in the ZAMS mass range $\approx$ 8.9 to 11 $M_{\odot}$ show off-center carbon ignition. For canonical amounts of overshooting, $f_{\rm ov}=0.016$, the off-center carbon ignition range shifts to $\approx$ 7.2 to 8.8 $M_{\odot}$. Only systems with $f_{\rm ov} \geq 0.01$ and ZAMS mass $\approx$ 7.2-8.0 $M_{\odot}$ show carbon burning that is quenched a significant distance from the center. These results suggest that claims of carbon burning quenching an appreciable distance from the center of the carbon core require a careful assessment of the overshoot modeling approximations. Comment: Accepted to ApJ; 23 pages, 21 figures, 5 tables
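
    As a rough, back-of-the-envelope illustration, one can interpolate the reported off-center ignition window between the two anchor points quoted in the abstract; the linear dependence on $f_{\rm ov}$ assumed here is our simplification, not a fit from the paper:

```python
# Hedged sketch: linearly interpolate the off-center carbon ignition window
# between the two anchor points quoted in the abstract
# (f_ov = 0.0 -> 8.9-11.0 Msun, f_ov = 0.016 -> 7.2-8.8 Msun).
# The linear-in-f_ov assumption is ours, purely for illustration.

ANCHORS = {0.0: (8.9, 11.0), 0.016: (7.2, 8.8)}  # f_ov -> (lower, upper) in Msun

def off_center_window(f_ov: float) -> tuple[float, float]:
    """Interpolated ZAMS mass window (Msun) for off-center carbon ignition."""
    (f0, (lo0, hi0)), (f1, (lo1, hi1)) = sorted(ANCHORS.items())
    w = (f_ov - f0) / (f1 - f0)
    return (lo0 + w * (lo1 - lo0), hi0 + w * (hi1 - hi0))

for f_ov in (0.0, 0.008, 0.016):
    lo, hi = off_center_window(f_ov)
    print(f"f_ov = {f_ov:5.3f}: off-center ignition for ~{lo:.1f}-{hi:.1f} Msun (interpolated)")
```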

    Not lost in translation: Protocols for interpreting trauma-focused CBT


    Identifying the monetary transmission mechanism using structural breaks

    We propose a method for estimating a subset of the parameters of a structural rational expectations model by exploiting changes in policy. We define a class of models, midway between a vector autoregression and a structural model, that we call the recoverable structure. As an application of our method we estimate the parameters of a model of the US monetary transmission mechanism. We estimate a vector autoregression and find that its parameters are unstable. However, using our proposed identification method we are able to attribute instability in the parameters of the VAR solely to changes in the parameters of the policy rule. We recover parameter estimates of the recoverable structure and we demonstrate that these parameters are invariant to changes in policy. Since the recoverable structure includes future expectations as explanatory variables, our parameter estimates are not subject to the Lucas [24] critique of econometric policy evaluation. JEL Classification: C51, E43, E52, E58. Keywords: Fed, identification, monetary transmission, recoverable structure, structural breaks
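
    A stylized two-equation illustration of the idea (our notation, not the paper's recoverable structure) pairs a policy-invariant private-sector equation with a policy rule whose coefficient switches at a known break date, so that any change in the reduced-form VAR can be attributed to the policy rule:

```latex
% Stylized illustration (our notation): a policy-invariant structural equation
% combined with a policy rule whose coefficient changes across regimes s = 1, 2.
\begin{align*}
  y_t &= \alpha\, E_t y_{t+1} + \beta\, y_{t-1} + \gamma\, r_t + u_t,
      && \text{(private-sector equation: $\alpha,\beta,\gamma$ policy-invariant)}\\
  r_t &= \phi^{(s)}\, y_t + v_t, \qquad s \in \{1,2\},
      && \text{(policy rule: $\phi^{(1)} \neq \phi^{(2)}$ across the break)}
\end{align*}
```

    The reduced-form coefficients differ across the two regimes, but if the only parameter that moved is $\phi^{(s)}$, the cross-regime restrictions help pin down $(\alpha, \beta, \gamma)$, which is the sense in which a policy break can identify the invariant part of the structure.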

    Shooting the Auctioneer

    Unemployment, real business cycles

    A sunspot-based theory of unconventional monetary policy

    This paper is about the effectiveness of qualitative easing, a form of unconventional monetary policy that changes the risk composition of the central bank balance sheet. We construct a general equilibrium model where agents have rational expectations, and there is a complete set of financial securities, but where some agents are unable to participate in financial markets. We show that a change in the risk composition of the central bank’s balance sheet affects equilibrium asset prices and economic activity. We prove that, in our model, a policy in which the central bank stabilizes non-fundamental fluctuations in the stock market is self-financing and leads to a Pareto-efficient outcome.

    Measurement of spray combustion processes

    A free jet configuration was chosen for measuring noncombusting spray fields and hydrocarbon-air spray flames in an effort to develop computational models of the dynamic interaction between droplets and the gas phase and to verify and refine numerical models of the entire spray combustion process. The development of a spray combustion facility is described, including techniques for laser measurements in spray combustion environments and methods for data acquisition, processing, display, and interpretation.

    Particle phase function measurements by a new Fiber Array Nephelometer: FAN 1

    A fiber array polar nephelometer of advanced design, the FAN I, is capable of in-situ phase function measurements of scattered light from man-made or natural atmospheric particles. The scattered light is measured at 100 different angles spanning 360 degrees, thus providing a potential measurement of the asymmetry of irregularly shaped particles. Phase functions can be measured at rates of 10 to 100 Hz, and the range of measurable single-particle sizes is from 5 μm to as large as 8 mm. For particles smaller than 5 μm the ensemble average can be measured. The FAN I is microprocessor controlled, and the data may be stored on floppy disk or printed out in tabular and/or graphical form. The optical head may be separated from the computer system for operation in field or adverse conditions. Examples of laboratory-measured scattering phase functions obtained with the FAN I for spherical particles are given to illustrate its measurement capabilities.
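
    As a hedged illustration of what such an angular scan makes possible (not the instrument's own processing software), the asymmetry parameter $g = \langle\cos\theta\rangle$ can be estimated numerically from a phase function sampled at a finite set of scattering angles over 0-180 degrees; the Henyey-Greenstein function below is used only as a synthetic test input:

```python
# Hedged sketch: estimate the asymmetry parameter g = <cos(theta)> from a phase
# function sampled at discrete scattering angles, as a polar nephelometer like
# the FAN I could provide.  The Henyey-Greenstein phase function serves here
# only as a synthetic test input; it is not data from the instrument.
import numpy as np

def henyey_greenstein(theta, g):
    """Unnormalized Henyey-Greenstein phase function at scattering angle theta."""
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * np.cos(theta)) ** 1.5

def integrate(y, x):
    """Trapezoidal integration, kept explicit to avoid version-specific numpy helpers."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def asymmetry_parameter(theta, p):
    """g = int(p cos(theta) sin(theta) dtheta) / int(p sin(theta) dtheta).

    Any constant normalization of the phase function cancels in the ratio.
    """
    num = integrate(p * np.cos(theta) * np.sin(theta), theta)
    den = integrate(p * np.sin(theta), theta)
    return num / den

# 100 sample angles over the (azimuthally averaged) scattering range 0..180 degrees.
theta = np.linspace(0.0, np.pi, 100)
p = henyey_greenstein(theta, g=0.7)

print(f"recovered asymmetry parameter: {asymmetry_parameter(theta, p):.3f}")  # ~0.7
```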