
    Buffet tests on 1/20 scale LCA model with leading edge slats at transonic speeds

    Buffet measurements have been made on the 1/20 scale LCA model (stage 6.45 V 35) with a full leading edge slat at transonic speeds in the 1.2 m tunnel. Unsteady signals from wing-root strain gauges have been measured, and the response at the first wing bending frequency has been used to determine the buffet characteristics. Mabey's technique has been employed to estimate buffeting coefficients at different Mach numbers. Significant reductions in the maximum buffet levels have been found in the presence of leading edge slats, confirming the results obtained from Calspan tests.

    Membrane reactor technology for ultrapure hydrogen production

    The suitability of polymer electrolyte membrane fuel cells (PEMFCs) for stationary and vehicular applications, owing to their low operating temperatures, compactness, higher power density, cleaner exhaust and higher efficiency compared to conventional internal combustion engines and gas turbines, adds to the already soaring demand for hydrogen production for refinery and petrochemical applications.

    On Semi-classical Degravitation and the Cosmological Constant Problems

    In this report, we discuss a candidate mechanism through which one might address the various cosmological constant problems. We first observe that the renormalization of gravitational couplings (induced by integrating out various matter fields) manifests non-local modifications to Einstein's equations as quantum corrected equations of motion. That is, at the loop level, matter sources curvature through a gravitational coupling that is a non-local function of the covariant d'Alembertian. If the functional form of the resulting Newton's `constant' is such that it annihilates very long wavelength sources, but reduces to 1/M^2_{pl} (M_{pl} being the 4d Planck mass) for all sources with cosmologically observable wavelengths, we would have a complementary realization of the degravitation paradigm -- a realization through which its non-linear completion and the corresponding modified Bianchi identities are readily understood. We proceed to consider various theories whose coupling to gravity may a priori induce non-trivial renormalizations of Newton's constant in the IR, and arrive at a class of non-local effective actions which yield a suitably degravitating filter function for Newton's constant upon subsequently being integrated out. We motivate this class of non-local theories through several considerations, discuss open issues, future directions, and the inevitable question of scheme dependence in semi-classical gravitational calculations, and comment on connections with other meditations in the literature on relaxing the cosmological constant semi-classically.
    Comment: 15 pages, 2 appendices. References added
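    As a schematic illustration (this is the generic filtered-equation form familiar from the degravitation literature, not necessarily the exact equations of this report), a degravitating Newton's `constant' acts as a high-pass filter in the covariant d'Alembertian:

```latex
% Schematic degravitated field equations with a filtered Newton's constant.
% L is the filter scale; sources with wavelengths >> L are "annihilated".
G_{\mu\nu} = 8\pi\, G_N(\Box)\, T_{\mu\nu},
\qquad
G_N(\Box) \to \frac{1}{M_{pl}^{2}} \quad \text{for } |\Box| \gg L^{-2},
\qquad
G_N(\Box) \to 0 \quad \text{as } \Box \to 0 .
```

    Since the left-hand side obeys the Bianchi identity while the filtered right-hand side does not automatically, such an equation requires the non-linear completion and modified Bianchi identities mentioned in the abstract.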

    Maximizing resource utilization by slicing of superscalar architecture

    Superscalar architectural techniques increase instruction throughput from one instruction per cycle to more than one instruction per cycle. Modern processors make use of several processing resources to achieve this kind of throughput. Control units perform various functions to minimize stalls and to ensure a continuous feed of instructions to the execution units. It is vital to ensure that instructions ready for execution do not encounter a bottleneck in the execution stage. This thesis proposes a dynamic scheme to increase the efficiency of the execution stage through a methodology called block slicing. Implementing this concept in a wide, superscalar pipelined architecture introduces minimal additional hardware and delay in the pipeline. The hardware required for the implementation of the proposed scheme is designed and assessed in terms of cost and delay. Performance measures of speed-up, throughput and efficiency have been evaluated for the resulting pipeline and analyzed.
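    The performance measures named above (speed-up, throughput, efficiency) are the standard pipeline metrics. As a minimal sketch using the textbook formulas for an ideal k-stage pipeline (illustrative only, not the thesis's specific measurements):

```python
def pipeline_metrics(n_instructions, k_stages, cycle_time):
    """Standard metrics for an ideal k-stage pipeline executing n instructions."""
    # First result emerges after k cycles; thereafter one completes per cycle.
    cycles = k_stages + n_instructions - 1
    # Speed-up relative to a non-pipelined unit needing n*k cycles.
    speedup = (n_instructions * k_stages) / cycles
    # Instructions completed per unit time.
    throughput = n_instructions / (cycles * cycle_time)
    # Fraction of the ideal k-fold speed-up actually achieved.
    efficiency = speedup / k_stages
    return speedup, throughput, efficiency

s, t, e = pipeline_metrics(n_instructions=1000, k_stages=5, cycle_time=1e-9)
```

    For long instruction streams the speed-up approaches k and the efficiency approaches 1, which is why execution-stage bottlenecks (the problem block slicing targets) are so costly.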

    The First Detection of Gravitational Waves

    This article deals with the first detection of gravitational waves by the advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors on 14 September 2015, where the signal was generated by two stellar mass black holes with masses of 36 M_{\odot} and 29 M_{\odot} that merged to form a 62 M_{\odot} black hole, releasing 3 M_{\odot} of energy in gravitational waves, almost 1.3 billion years ago. We begin by providing a brief overview of gravitational waves, their sources and the gravitational wave detectors. We then describe in detail the first detection of gravitational waves from a binary black hole merger. We then comment on the electromagnetic follow-up of the detection event with various telescopes. Finally, we conclude with a discussion on the tests of gravity and fundamental physics with the first gravitational wave detection event.
    Comment: 20 pages, 9 figures. Published in a special issue of Universe, "Varying Constants and Fundamental Cosmology"
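    The quoted energy release can be checked with a back-of-the-envelope rest-mass-energy calculation (the constants and helper below are illustrative, not taken from the article):

```python
M_SUN = 1.989e30   # kg, solar mass
C = 2.998e8        # m/s, speed of light in vacuum

def mass_energy(n_solar_masses):
    """Rest-mass energy E = m c^2 for a mass given in solar masses."""
    return n_solar_masses * M_SUN * C**2

# Energy radiated by the merger: 3 solar masses converted to
# gravitational waves, roughly 5.4e47 joules.
e_gw = mass_energy(3.0)
```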

    Broadbasing and Deepening the Bond Market in India

    At the time of its independence in 1947, India had only the traditional commercial banks, all with private sector ownership. Like typical commercial banks in other parts of the world, the banks in India were not keen to provide medium and long-term finance to industry and other sectors for their fixed asset formation. The banks were willing to fund basically the working capital requirements of credit-worthy borrowers on the security of tangible assets. Since the government was keen to stimulate the setting up of a wide range of new industrial units, as well as the expansion/diversification of existing units, it decided to encourage the setting up of financial intermediaries that provided term finance to projects in industry. Thus emerged a well-knit structure of national and state level development financial institutions (DFIs) for meeting the medium and long-term finance requirements of the whole range of industrial units, from the smallest to the very large ones. The Reserve Bank of India (the central banking institution of the country) and the Government of India nurtured the DFIs through various types of financial incentives and other supportive measures. The main objective of all these measures was to provide much needed long-term finance to industry, which the then existing commercial banks were not keen to provide because of the fear of asset-liability mismatch. Since deposits with the banks were mainly short/medium term, extending term loans was considered by the banks to be relatively risky. The five-year development plans envisaged rapid growth of domestic industry, even in the private sector, to support the import substitution growth model adopted by the national planners. To encourage investment in industry, a conscious policy decision was taken that the DFIs should provide term finance mainly to the private sector at interest rates that were lower than those applicable to working capital or other short-term loans.
In the early years of the post-Independence period, shortages of various commodities tended to make trading in commodities a more profitable proposition than investment in industry, which carried higher risk. Partly to correct this imbalance, the conscious policy design was to increase the attractiveness of long-term investment in industry and infrastructure through relatively lower interest rates. To enable term-lending institutions to finance industry at concessional rates, the Government and RBI gave them access to low cost funds. They were allowed to issue bonds with government guarantee, were given funds through the budget, and RBI allocated a sizeable part of its National Industrial Credit (Long Term Operations) funds to the Industrial Development Bank of India, the largest DFI of the country. Through an appropriate RBI fiat, the turf of the DFIs was also protected, until recently, by keeping commercial banks away from extending large sized term loans to industrial units. Banks were expected to provide small term loans to small-scale industrial units on a priority basis.

    Reliability models applicable to space telescope solar array assembly system

    A complex system may consist of a number of subsystems with several components in series, in parallel, or in a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPAs). The reliabilities of the SPAs are determined by the reliabilities of solar cell strings, interconnects, and diodes. Estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
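    The k-out-of-n idea above can be sketched in one common formulation ("at least k of n components working"), assuming independent, identical components with a constant failure rate; the study's models generalize this to time-varying failure rates and to the specific STSA configuration:

```python
from math import comb, exp

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n independent, identical
    components (each with reliability p) are working."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def component_reliability(t, failure_rate):
    """Constant-failure-rate (no aging) component reliability R(t) = exp(-lambda*t)."""
    return exp(-failure_rate * t)

# k = 1 reduces to the parallel model, k = n to the series model:
p = component_reliability(t=1.0, failure_rate=0.1)   # single-component reliability
parallel = k_out_of_n_reliability(1, 4, p)           # equals 1 - (1-p)**4
series   = k_out_of_n_reliability(4, 4, p)           # equals p**4
```

    As expected, the series configuration is less reliable than a single component and the parallel configuration is more reliable, which is why redundancy across the 20 identical SPAs matters.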