Denitrification and inference of nitrogen sources in the karstic Floridan Aquifer
Aquifer denitrification is among the most poorly constrained fluxes in global and regional nitrogen budgets. The few direct measurements of denitrification in groundwaters provide limited information about its spatial and temporal variability, particularly at the scale of whole aquifers. Uncertainty in estimates of denitrification may also lead to underestimates of its effect on isotopic signatures of inorganic N, and thereby confound the inference of N source from these data. In this study, our objectives are to quantify the magnitude and variability of denitrification in the Upper Floridan Aquifer (UFA) and evaluate its effect on N isotopic signatures at the regional scale. Using dual noble gas tracers (Ne, Ar) to generate physical predictions of N2 gas concentrations for 112 observations from 61 UFA springs, we show that excess (i.e. denitrification-derived) N2 is highly variable in space and inversely correlated with dissolved oxygen (O2). Negative relationships between O2 and δ15N NO3 across a larger dataset of 113 springs, well-constrained isotopic fractionation coefficients, and strong 15N:18O covariation further support inferences of denitrification in this uniquely organic-matter-poor system. Despite relatively low average rates, denitrification accounted for 32 % of estimated aquifer N inputs across all sampled UFA springs. Back-calculations of source δ15N NO3 based on denitrification progression suggest that isotopically-enriched nitrate (NO3-) in many springs of the UFA reflects groundwater denitrification rather than urban- or animal-derived inputs. © Author(s) 2012
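The excess-N2 approach described in this abstract amounts to a mass balance: measured N2 minus the physically expected N2 (air-equilibrated water at the recharge temperature inferred from Ne and Ar, plus N2 carried by dissolved "excess air"). The sketch below is a minimal illustration of that balance, not the authors' calibration; the unfractionated excess-air model and all example numbers are simplifying assumptions.

```python
# Minimal sketch of the excess-N2 mass balance used to infer
# denitrification: excess N2 = measured N2 - equilibrium N2
#                              - N2 carried by "excess air".
# The unfractionated excess-air model and the example concentrations
# are illustrative assumptions, not the study's calibration.

# Atmospheric mole fractions (dry air): N2 ~ 0.7808, Ne ~ 18.18 ppm.
N2_OVER_NE_AIR = 0.7808 / 1.818e-5

def excess_n2(n2_measured, n2_equilibrium, ne_measured, ne_equilibrium):
    """All concentrations in the same units (e.g. mol/kg).

    The equilibrium values would come from solubility functions
    evaluated at the recharge temperature inferred from Ne and Ar.
    """
    # Ne above solubility equilibrium is attributed to trapped air,
    # since Ne has no biological sources or sinks.
    excess_air_ne = max(ne_measured - ne_equilibrium, 0.0)
    # Scale that air parcel to its N2 content.
    excess_air_n2 = excess_air_ne * N2_OVER_NE_AIR
    # Whatever N2 remains unexplained is attributed to denitrification.
    return n2_measured - n2_equilibrium - excess_air_n2

# Example with made-up concentrations (mol/kg):
print(excess_n2(7.0e-4, 5.0e-4, 1.2e-8, 1.0e-8))
```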
Peer influence in network markets: a theoretical and empirical analysis
Network externalities spur the growth of networks and the adoption of network goods in two ways. First, they make it more attractive to join a network the larger its installed base. Second, they create incentives for network members to actively recruit new members. Despite indications that the latter "peer effect" can be more important for network growth than the installed-base effect, it has so far been largely ignored in the literature. We address this gap using game-theoretical models. When all early adopters can band together to exert peer influence (an assumption that fits, e.g., the case of firms supporting a technical standard), we find that the peer effect induces additional growth of the network by a multiplicative factor. When, in contrast, individuals exert peer influence in small groups of size n, the increase in network size is by an additive constant, which, for small networks, can amount to a large relative increase. The difference between small, local, personal networks and large, global, anonymous networks arises endogenously from our analysis. Fundamentally, the first type of network is "tie-reinforcing," the other "tie-creating." We use survey data from users of the Internet services Skype and eBay to illustrate the main logic of our theoretical results. As predicted by the model, we find that the peer effect matters strongly for the network of Skype users, which effectively consists of numerous small sub-networks, but not for that of eBay users. Since many network goods give rise to small, local networks
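The contrast the abstract draws, multiplicative growth from global peer influence versus an additive constant from small local groups, can be illustrated with trivial arithmetic; the numbers below are hypothetical and are not taken from the paper's model.

```python
# Toy arithmetic, not the paper's game-theoretic model: a fixed additive
# gain is negligible for a large network but substantial for a small one.

def relative_gain(base_size, added_members):
    """Fraction by which an additive gain grows a network of base_size."""
    return added_members / base_size

# Hypothetical peer effect recruiting 5 extra members:
small = relative_gain(20, 5)       # 25% growth for a 20-member network
large = relative_gain(100_000, 5)  # 0.005% growth for a large network
print(small, large)
```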
Evidence for the classical integrability of the complete AdS(4) x CP(3) superstring
We construct a zero-curvature Lax connection in a sub-sector of the superstring theory on AdS(4) x CP(3) that is not described by the OSp(6|4)/U(3) x SO(1,3) supercoset sigma-model. In this sub-sector, worldsheet fermions associated to the eight broken supersymmetries of the type IIA background are physical fields. As such, the prescription for constructing the Lax connection from the Z_4-automorphism of the isometry superalgebra OSp(6|4) does not do the job. To construct the Lax connection we have therefore used an alternative method, which nevertheless relies on the isometry of the target superspace and the kappa-symmetry of the Green-Schwarz superstring. Comment: 1+26 pages; v2: minor typos corrected, acknowledgements added
Evaluation of the current knowledge limitations in breast cancer research: a gap analysis
BACKGROUND
A gap analysis was conducted to determine which areas of breast cancer research, if targeted by researchers and funding bodies, could produce the greatest impact on patients.
METHODS
Fifty-six Breast Cancer Campaign grant holders and prominent UK breast cancer researchers participated in a gap analysis of current breast cancer research. Before, during and following the meeting, groups in seven key research areas participated in cycles of presentation, literature review and discussion. Summary papers were prepared by each group and collated into this position paper highlighting the research gaps, with recommendations for action.
RESULTS
Gaps were identified in all seven themes. General barriers to progress were lack of financial and practical resources, and poor collaboration between disciplines. Critical gaps in each theme included: (1) genetics (knowledge of genetic changes, their effects and interactions); (2) initiation of breast cancer (how developmental signalling pathways cause ductal elongation and branching at the cellular level and influence stem cell dynamics, and how their disruption initiates tumour formation); (3) progression of breast cancer (deciphering the intracellular and extracellular regulators of early progression, tumour growth, angiogenesis and metastasis); (4) therapies and targets (understanding who develops advanced disease); (5) disease markers (incorporating intelligent trial design into all studies to ensure new treatments are tested in patient groups stratified using biomarkers); (6) prevention (strategies to prevent oestrogen-receptor negative tumours and the long-term effects of chemoprevention for oestrogen-receptor positive tumours); (7) psychosocial aspects of cancer (the use of appropriate psychosocial interventions, and the personal impact of all stages of the disease among patients from a range of ethnic and demographic backgrounds).
CONCLUSION
If future research acts on the recommendations to address these gaps, the long-term benefits to patients will include: better estimation of risk in families with breast cancer and strategies to reduce risk; better prediction of drug response and patient prognosis; improved tailoring of treatments to patient subgroups and development of new therapeutic approaches; earlier initiation of treatment; more effective use of resources for screening populations; and an enhanced experience for people with or at risk of breast cancer and their families. The challenge to funding bodies and researchers in all disciplines is to focus on these gaps and to drive advances in knowledge into improvements in patient care.
Gluino Decay as a Probe of High Scale Supersymmetry Breaking
A supersymmetric standard model with heavier scalar supersymmetric particles has many attractive features. If the scalar mass scale is O(10 - 10^4) TeV, a standard-model-like Higgs boson with mass around 125 GeV, which is strongly favored by the LHC experiment, can be realized. However, in this scenario the scalar particles are too heavy to be produced at the LHC. In addition, if the scalar mass is much less than O(10^4) TeV, the lifetime of the gluino is too short to be measured. It is therefore hard to probe the scalar particles at a collider. However, a detailed study of gluino decay reveals that the two-body decay of the gluino carries important information about the scalar scale. In this paper, we propose a test of this scenario by measuring the decay pattern of the gluino at the LHC. Comment: 29 pages, 9 figures; version published in JHEP
Mass-Matching in Higgsless
Modern extra-dimensional Higgsless scenarios rely on a mass-matching between fermionic and bosonic KK resonances to evade constraints from precision electroweak measurements. After analyzing all of the Tevatron and LEP bounds on these so-called Cured Higgsless scenarios, we study their LHC signatures and explore how to identify the mass-matching mechanism, the key to their viability. We find that singly and pair-produced fermionic resonances show up as clean signals with 2 or 4 leptons and 2 hard jets, while neutral and charged bosonic resonances are visible in the dilepton and leptonic WZ channels, respectively. A measurement of the resonance masses from these channels shows the matching necessary to achieve . Moreover, a large single-production rate of KK-fermion resonances is a clear indication of compositeness of the SM quarks. The discovery reach is below 10 fb^-1 of luminosity for resonances in the 700 GeV range. Comment: 28 pages, 18 figures
On the relationship between cooling flows and bubbles
A common feature of the X-ray bubbles observed in Chandra images of some cooling flow clusters is that they appear to be surrounded by bright, cool shells. Temperature maps of a few nearby luminous clusters reveal that the shells consist of the coolest gas in the clusters, much cooler than the surrounding medium. Using simple models, we study the effects of this cool emission on the inferred cooling flow properties of clusters. We find that the introduction of bubbles into model clusters that do not have cooling flows results in temperature and surface brightness profiles that resemble those seen in nearby cooling flow clusters. They also approximately reproduce the recent XMM-Newton and Chandra observations of a high minimum temperature of ~1-3 keV. Hence, bubbles, if present, must be taken into account when inferring the physical properties of the intracluster medium. In the case of some clusters, bubbles may account entirely for these observed features, calling into question their designation as clusters with cooling flows. However, since not all nearby cooling flow clusters show bubble-like features, we suggest that there may be a diverse range of physical phenomena that give rise to the same observed features.
Flavor in Minimal Conformal Technicolor
We construct a complete, realistic, and natural UV completion of minimal conformal technicolor that explains the origin of quark and lepton masses and mixing angles. As in "bosonic technicolor", we embed conformal technicolor in a supersymmetric theory, with supersymmetry broken at a high scale. The exchange of heavy scalar doublets generates higher-dimension interactions between technifermions and quarks and leptons that give rise to quark and lepton masses at the TeV scale. Obtaining a sufficiently large top quark mass requires strong dynamics at the supersymmetry breaking scale in both the top and technicolor sectors. This is natural if the theory above the supersymmetry breaking scale also has strong conformal dynamics. We present two models in which the strong top dynamics is realized in different ways. In both models, constraints from flavor-changing effects can be easily satisfied. The effective theory below the supersymmetry breaking scale is minimal conformal technicolor with an additional light technicolor gaugino. We argue that this light gaugino is a general consequence of conformal technicolor embedded in a supersymmetric theory. If the gaugino has mass below the TeV scale, it will give rise to an additional pseudo Nambu-Goldstone boson that is observable at the LHC. Comment: 37 pages; references added
Thinking about Later Life: Insights from the Capability Approach
A major criticism of mainstream gerontological frameworks is their inability to appreciate and incorporate issues of diversity and difference when engaging with experiences of aging. Given the prevailing socially structured nature of inequalities, such differences matter greatly in shaping experiences, as well as social constructions, of aging. I argue that Amartya Sen's capability approach (2009) potentially offers gerontological scholars a broad conceptual framework that places at its core the consideration of human beings (and their values) and the centrality of human diversity. As well as identifying these key features of the capability approach, I discuss and demonstrate their relevance to thinking about old age and aging. I maintain that in the context of complex and emerging identities in later life that shape and are shaped by shifting people-place and people-people relationships, Sen's capability approach offers significant possibilities for gerontological research.
Are there Social Spillovers in Consumers’ Security Assessments of Payment Instruments?
Even though security of payments has long been identified as an important aspect of the consumer payment experience, recent literature fails to appropriately assess the extent of social spillovers among payment users. We test for the existence and importance of such spillovers by analyzing whether social influence affects consumers' perceptions of the security of payment instruments. Based on a 2008-2014 annual panel data survey of consumers, we find strong evidence of social spillovers in payment markets: others' perceptions of security of payment instruments exert a positive influence on one's own payment security perceptions. The significant and robust results imply that a consumer's assessments of security converge to his peers' average assessment: a 10 percent change in the divergence between one's own security rating and peers' average rating will result in a 7 percent change in one's own rating in the next period. The results are robust to many specifications and do not change when we control for actual fraud or crime data. Our results indicate that spillovers rather than reflection appear to be the cause, although separating the two causes is very difficult (Manski 1993). In particular, the spillovers are stronger for people who experience an exogenous shock to security perception, people who have more social interactions, and younger consumers, who are more likely to be influenced by social media. We also examine the effects of social spillovers on payment behavior (that is, on decisions regarding payment adoption and use). Our results indicate that social spillovers have a rather limited impact on payment behavior, as others' perceptions seem to affect one's own payment behavior mainly indirectly through the effect on one's own perceptions.
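One hedged way to read the convergence result reported in this abstract is as a partial-adjustment rule in which a consumer closes roughly 70 percent of the gap to the peer average each period. The linear updating rule below is our illustrative simplification, not the paper's estimated specification; only the 0.7 coefficient echoes the reported relationship.

```python
# Illustrative partial-adjustment sketch of security-perception
# convergence. The 0.7 coefficient echoes the abstract's reported
# 10%-divergence -> 7%-own-rating relationship; the linear updating
# rule itself is an assumption made for illustration.

def next_rating(own, peer_avg, adjustment=0.7):
    """One period of convergence toward the peers' average rating."""
    return own + adjustment * (peer_avg - own)

# A rating of 2 among peers averaging 4 converges quickly:
rating, peers = 2.0, 4.0
for _ in range(5):
    rating = next_rating(rating, peers)
print(round(rating, 4))  # the remaining gap shrinks by 0.3 each period
```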
