17,405 research outputs found
Valuing the small: counting the benefits
This paper contains part of a report commissioned by a consortium of organisations concerned with the successful development of sustainable transport strategies, and drafted by Professor Phil Goodwin of UCL. It follows a report, Less Traffic Where People Live, which, using case studies of experience here and elsewhere in Europe, has demonstrated that small-scale, or 'soft', factors can be effective in tackling transport problems, especially when used in combination. Such examples include bus priority schemes, measures for improving walking and cycling, traffic calming, car clubs, school and workplace travel plans, and the use of personalised advice and information to assist people in reducing the congestion and pollution they cause. The DfT's report Smarter Choices: Changing the Way we Travel has also highlighted the significant potential which exists to reduce traffic and congestion, providing soft factors are accompanied by supporting measures to manage demand. The DfT has established a unit dedicated to developing experience on soft factors, including on appraisal. Coupled with the recent report, there is gathering momentum behind expanding the role of soft factors in transport policy. These are all initiatives which are supported by national and local government, and on which the sponsoring organisations have in recent years become active advisers as well as campaigners.

Taken together, such relatively cheap and potentially popular initiatives are not only powerful contributions to the Government's transport strategy: they are also the leading examples of initiatives which can produce improvements swiftly, an important consideration both for political reasons and in order to produce the momentum and consensus for longer-term initiatives.

This attractive combination of relative cheapness, environmental advantage, demonstrated successes in good practice, and speed of delivery would, one might think, lead to such policies being very high profile indeed. However, this is not always the case.
The problem this report addresses is reflected in recurrent concerns that the merits of such initiatives are overshadowed by the bigger, longer-term, much more ambitious, and often much more controversial, 'big' policies: especially massive rail or road infrastructure projects. In some ways it is natural that the 'big' initiatives should receive more attention than the 'small', especially in view of a long period of inadequate or distorted investment. But taken too far, this can be counter-productive. The question this report addresses is whether there is some systematic reason, deep in the appraisal and forecasting methods, which prevents perfectly good initiatives receiving the attention and funding they deserve. The suggestion is that there are indeed some important biases of this kind, and that sorting them out will have very helpful effects in avoiding wasted opportunities and accelerating delivery.

This report addresses the following questions and is intended to be a helpful contribution to this area of work:

> what are the barriers that prevent the small, good value-for-money schemes being taken up with greater enthusiasm than the big, poor value-for-money projects?
> are there ways of restoring a balanced implementation process?

It is obvious that such barriers will include political and ideological considerations, and the role of vested interests, but they are not the focus of this report. Rather, the concern is that there may be weaknesses in the process of appraisal and assessment, preceding any implementation, which produce a bias against the small schemes. This process is intended to resolve practical questions of design, economic questions of value for money, planning questions of consistency, and the relationship between short- and long-term objectives: it depends on a set of formal procedures and practices (surveys, models, forecasts, appraisal frameworks) built up over many years, and originating in the economic cost-benefit analyses whose principles and basic features were established in the 1960s and 1970s.

The suggestion is made that there are some in-built biases in current appraisal techniques, developed, as they were, in a different time and for a different agenda, which discriminate against some of the best measures and in favour of some of the least effective.
The economic costs of road traffic congestion
The main cause of road traffic congestion is that the volume of traffic is too close to the maximum capacity of a road or network. Congestion in the UK is worse than in many, perhaps most, other European countries. More important, it is getting worse, year by year. Current official forecasts imply that congestion will be substantially worse by the end of this decade, even on the very favourable assumption that all current Government projects and policies are implemented in full, successfully, and to time. This is because road traffic is growing faster than road capacity. This is not a temporary problem: it will continue to be the case, in the absence of measures to reduce traffic, because it is infeasible to match a road programme to unrestricted trends in traffic growth. The effect, using the current Government method of measuring congestion, and a long-established method of valuing it, would be that the widely quoted figure of an annual cost of £20 billion would increase to £30 billion by 2010.

Under current social and economic frameworks, there are no feasible policies that could reduce congestion to zero in practice, or that would be worthwhile doing in theory. But savings worth £4b-£6b a year could in principle be made by congestion charging alone, over the whole network, of which (very approximately) half might be reflected in the prices of goods, and half in savings in individuals' own time spent travelling. A good proportion of this could alternatively be secured by an appropriate package of alternative measures: priority lanes and signalling; switching to other modes, including freight to rail and passenger movements to public transport, walking and cycling; 'soft' policies to encourage reduced travel by car; land-use patterns which reduce unnecessary travel; and associated measures to prevent benefits from being eroded by induced travel.
The combined effects of road charging and a supportive set of complementary measures represent the best that could reasonably be achieved in the short to medium run. This could reduce congestion costs (as distinct from slowing down their increase) by 40%-50%.

These broad-brush figures, though based on long-established methods, must be treated with great caution. The 'cost of congestion', as used for these calculations, is based on relationships which in reality are not exact, stable or even meaningful. The wrong indicator has been used, comparing average real speeds with average ideal speeds. But in the real world, speeds are different every day, and so is the level of congestion. For just-in-time operation, and for much personal and business travel, variability and reliability are much more important. The really costly effect of congestion is not the slightly increased average time, but the greater-than-average effect in particular locations and markets, and the greatly increased unreliability.

In the near future, until road pricing is implemented, increases in road congestion can lead to some shift in the balance of attractiveness of rail freight, sufficient for a proportion of the freight market to transfer from road. This would in turn make a small but significant contribution to reducing congestion, especially in some specific important corridors. Even though rail freight is usually a small proportion of all freight, the annual economic saving in congestion cost, to road users generally, from transferring a five-times-a-week, 200-mile round trip, mostly on congested motorways, from road to rail would be in the order of £40,000 to £80,000, to which should be added the commercial cost savings made by the freight operator who chooses to do so.
It should be emphasised that sustaining this would require measures to prevent induced car traffic filling up the relieved road space.

An example of the impact of factoring in unreliability is given by approximate calculations made for journeys such as Glasgow to Newcastle, Cardiff to Dover, or London to Manchester. In free-flow theory these could be 3-hour journeys, but moderate congestion requires adding an hour to the average time and another hour's safety margin to ensure that a tight delivery slot is not missed too often. In congestion so severe as to double the average time, the extra safety margin for unreliability could be as much as 4 hours, which is simply not feasible in many cases.

The 'total cost of congestion' is a large number, but it is practically meaningless and, by 'devaluing the currency', it distracts attention from more important, achievable objectives. It would be better not to use it as a target for policy. The two key things to do are:

· Strategic action to reduce traffic volume to a level where conditions do not vary too much from day to day. In some circumstances this will slightly increase average speed, though not always: in some road conditions a reduction of average speed can greatly improve the smoothness of traffic flow. But in both cases, it will greatly increase reliability, this being more important than the change in average speed;

· Practical measures to provide good alternatives for freight and passenger movements which reduce the intensity of use of scarce road space in congested conditions. Even where this only applies to a minority of movements, significant effects are possible.

The Government plans to 're-launch' the Ten Year Plan for Transport this Summer or Autumn.
It is not reasonable to expect that the re-launch will include congestion charging for cars within the decade, so it will need to plan for it as soon as possible after, and for a short-term coping strategy of priority measures to protect the most important classes of movement (both passenger and freight) from congestion in the period before charging is implemented.
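The safety-margin arithmetic in the unreliability example above can be sketched as a small calculation. This is a minimal illustration, not the report's own method: the function name and structure are assumptions, while the hour figures are the report's round numbers.

```python
# Illustrative sketch of the safety-margin arithmetic in the abstract above.
# The function name is hypothetical; the hour figures are the report's round numbers.

def scheduled_journey_hours(free_flow, avg_delay, safety_margin):
    """Hours a carrier must allow to avoid missing a tight delivery slot."""
    return free_flow + avg_delay + safety_margin

# Moderate congestion on a nominal 3-hour journey: one hour of average
# delay plus a one-hour safety margin against day-to-day variability.
moderate = scheduled_journey_hours(3, 1, 1)   # 5 hours

# Congestion severe enough to double the average time (3 extra hours)
# can require as much as a 4-hour margin for unreliability.
severe = scheduled_journey_hours(3, 3, 4)     # 10 hours
```

The point of the sketch is that the unreliability margin, not the average delay, dominates the scheduled time as congestion worsens.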
Structuring the decision process: an evaluation of methods
This chapter examines the effectiveness of methods that are designed to provide structure and support to decision making. Those that are primarily aimed at individual decision makers are examined first, and then attention is turned to groups. In each case, weaknesses of unaided decision making are identified, and the likely success of formal methods in mitigating these weaknesses is assessed.
The ellipticities of Galactic and LMC globular clusters
The globular clusters of the LMC are found to be significantly more
elliptical than Galactic globular clusters, but very similar in virtually all
other respects. The ellipticity of the LMC globular clusters is shown not to be
correlated with the age or mass of those clusters. It is proposed that the
ellipticity differences are caused by the different strengths of the tidal
fields in the LMC and the Galaxy. The strong Galactic tidal field erases
initial velocity anisotropies and removes angular momentum from globular
clusters making them more spherical. The tidal field of the LMC is not strong
enough to perform these tasks and its globular clusters remain close to their
initial states.
Comment: 3 pages, LaTeX file with 3 figures incorporated; accepted for
publication in MNRAS. Also available by e-mailing spg, or by ftp from
ftp://star-www.maps.susx.ac.uk/pub/papers/spg/ellip.ps.
Where's the Working Class?
From the Communist Manifesto onwards, the self-emancipation of the working class was central to Marx’s thought. And so it was for subsequent generations of Marxists, including the later Engels, the pre-WW1 Kautsky, Lenin, Luxemburg, Trotsky and Gramsci. But in much contemporary Marxist theory the active role of the working class seems at the least marginal and at the most completely written off. This article traces the perceived role of the working class in Marxist theory, from Marx and Engels, through the Second and Third Internationals, Stalinism and Maoism, through to the present day. It situates this in political developments and changes in the nature of the working class over the last 200 years. It concludes by suggesting a number of questions about Marxism and the contemporary working class that anyone claiming to be a Marxist today needs to answer.
Evidence for the Strong Effect of Gas Removal on the Internal Dynamics of Young Stellar Clusters
We present detailed luminosity profiles of the young massive clusters M82-F,
NGC 1569-A, and NGC 1705-1 which show significant departures from equilibrium
(King and EFF) profiles. We compare these profiles with those from N-body
simulations of clusters which have undergone the rapid removal of a significant
fraction of their mass due to gas expulsion. We show that the observations and
simulations agree very well with each other suggesting that these young
clusters are undergoing violent relaxation and are also losing a significant
fraction of their stellar mass. That these clusters are not in equilibrium can
explain the discrepant mass-to-light ratios observed in many young clusters
with respect to simple stellar population models without resorting to
non-standard initial stellar mass functions as claimed for M82-F and NGC
1705-1. We also discuss the effect of rapid gas removal on the complete
disruption of a large fraction of young massive clusters (``infant
mortality''). Finally we note that even bound clusters may lose >50% of their
initial stellar mass due to rapid gas loss (``infant weight-loss'').
Comment: 6 pages, 3 figures, MNRAS letters, accepted
Star Cluster Survival in Star Cluster Complexes under Extreme Residual Gas Expulsion
After the stars of a new, embedded star cluster have formed they blow the
remaining gas out of the cluster. Especially winds of massive stars and
definitely the on-set of the first supernovae can remove the residual gas from
a cluster. This leads to a very violent mass-loss and leaves the cluster out of
dynamical equilibrium. Standard models predict that within the cluster volume
the star formation efficiency (SFE) has to be about 33 per cent for sudden
(within one crossing-time of the cluster) gas expulsion to retain some of the
stars in a bound cluster. If the efficiency is lower, most of the stars of
the cluster disperse. Recent observations reveal that in strong starbursts star
clusters do not form in isolation but in complexes containing dozens and up to
several hundred star clusters, i.e. in super-clusters. By carrying out
numerical experiments for such objects placed at distances >= 10 kpc from the
centre of the galaxy we demonstrate that under these conditions (i.e. the
deeper potential of the star cluster complex and the merging process of the
star clusters within these super-clusters) the SFEs can be as low as 20 per
cent and still leave a gravitationally bound stellar population. Such an object
resembles the outer Milky Way globular clusters and the faint fuzzy star
clusters recently discovered in NGC 1023.
Comment: 21 pages, 8 figures, accepted by Ap
Simulating star formation in molecular cloud cores IV. The role of turbulence and thermodynamics
We perform SPH simulations of the collapse and fragmentation of low-mass
cores having different initial levels of turbulence
(alpha_turb=0.05,0.10,0.25). We use a new treatment of the energy equation
which captures the transport of cooling radiation against opacity due to both
dust and gas (including the effects of dust sublimation, molecules, and H^-
ions). We also perform comparison simulations using a standard barotropic
equation of state. We find that -- when compared with the barotropic equation
of state -- our more realistic treatment of the energy equation results in more
protostellar objects being formed, and a higher proportion of brown dwarfs; the
multiplicity frequency is essentially unchanged, but the multiple systems tend
to have shorter periods (by a factor ~3), higher eccentricities, and higher
mass ratios. The reason for this is that small fragments are able to cool more
effectively with the new treatment, as compared with the barotropic equation of
state. We find that the process of fragmentation is often bimodal. The first
protostar to form is usually, at the end, the most massive, i.e. the primary.
However, frequently a disc-like structure subsequently forms round this
primary, and then, once it has accumulated sufficient mass, quickly fragments
to produce several secondaries. We believe that this delayed fragmentation of a
disc-like structure is likely to be an important source of very low-mass
hydrogen-burning stars and brown dwarfs.
Comment: 14 pages, 8 figures. Accepted for publication by A&
Restrictiveness and guidance in support systems
Restrictiveness and guidance have been proposed as methods for improving the performance of users of support systems. In many companies computerized support systems are used in demand forecasting, enabling interventions based on management judgment to be applied to statistical forecasts. However, the resulting forecasts are often ‘sub-optimal’ because many judgmental adjustments are made when they are not required. An experiment was used to investigate whether restrictiveness or guidance in a support system leads to more effective use of judgment. Users received statistical forecasts of the demand for products that were subject to promotions. In the restrictiveness mode, small judgmental adjustments to these forecasts were prohibited (research indicates that these waste effort and may damage accuracy). In the guidance mode, users were advised to make adjustments in promotion periods, but not to adjust in non-promotion periods. A control group of users were not subject to restrictions and received no guidance. The results showed that neither restrictiveness nor guidance led to improvements in accuracy. While restrictiveness reduced unnecessary adjustments, it deterred desirable adjustments and also encouraged over-large adjustments, so that accuracy was damaged. Guidance encouraged more desirable system use, but was often ignored. Surprisingly, users indicated it was less acceptable than restrictiveness.
