Influence of central venous pressure upon sinus node responses to arterial baroreflex stimulation in man
Measurements were made of sinus node responses to arterial baroreceptor stimulation with phenylephrine injection or neck suction, before and during changes of central venous pressure provoked by lower body negative pressure or leg and lower trunk elevation. Variations of central venous pressure between 1.1 and 9.0 mm Hg did not influence arterial baroreflex mediated bradycardia. Baroreflex sinus node responses were augmented by intravenous propranolol, but the level of responses after propranolol was comparable during the control state, lower body negative pressure, and leg and trunk elevation. Sinus node responses to very brief baroreceptor stimuli applied during the transitions of central venous pressure were also comparable in the three states. The authors conclude that physiological variations of central venous pressure do not influence sinus node responses to arterial baroreceptor stimulation in man
Influence of low and high pressure baroreceptors on plasma renin activity in humans
The effects of low and high pressure baroreceptors on plasma renin activity (immunoassay) were evaluated using graded lower body suction (LBS) in six healthy men. LBS at -10 and -20 mmHg for 10 min decreased central venous pressure without changing arterial pressure and thereby presumably reduced low but not high pressure baroreceptor inhibition of renin release. LBS at these levels produced forearm vasoconstriction, but did not increase renin. LBS at -40 mmHg decreased central venous and arterial pulse pressure and thus reduced both low and high pressure baroreceptor inhibition. LBS at this level produced forearm vasoconstriction and tachycardia and increased renin. In summary, reduction in low pressure baroreceptor inhibition in humans did not increase renin in the presence of physiological tonic inhibition from high pressure baroreceptors. Increases in renin did not occur until there was a combined reduction of both high and low pressure baroreceptor inhibition of renin release
Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve
Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size n of data that originally has size N, and we want to solve a problem with time complexity T(·). The naive strategy of "decompress-and-solve" gives time T(N), whereas "the gold standard" is time T(n): to analyze the compression as efficiently as if the original data was small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (the Lempel-Ziv family, dictionary methods, and others) can be unified under the elegant notion of Grammar Compression. A vast literature, across many disciplines, has established this as an influential notion for algorithm design. We introduce a framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are:
- The O(nN sqrt(log(N/n))) bound for LCS and the O(min{N log N, nM}) bound for Pattern Matching with Wildcards are optimal up to N^o(1) factors, under the Strong Exponential Time Hypothesis. (Here, M denotes the uncompressed length of the compressed pattern.)
- Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the k-Clique conjecture.
- We give an algorithm showing that decompress-and-solve is not optimal for Disjointness
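To make the decompress-and-solve baseline concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of a grammar compression represented as a straight-line program: each rule either emits a literal character or concatenates two earlier rules. A query such as the uncompressed length can be answered directly on the n rules, whereas decompress-and-solve first expands the grammar back to the length-N string. All names and the rule encoding are assumptions made for illustration.

```python
# Minimal sketch of a grammar compression (straight-line program).
# Hypothetical illustration; the rule encoding is an assumption, not
# the paper's notation. Each rule is ("char", c) or ("concat", i, j),
# referring to two earlier rules, and rules are topologically ordered.

def decompress(rules, root):
    """Decompress-and-solve baseline: expand the grammar to the full string."""
    memo = {}

    def expand(i):
        if i in memo:
            return memo[i]
        if rules[i][0] == "char":
            s = rules[i][1]
        else:
            _, left, right = rules[i]
            s = expand(left) + expand(right)
        memo[i] = s
        return s

    return expand(root)

def compressed_length(rules, root):
    """'Gold standard' style query: the uncompressed length, computed from
    the n rules alone, without materializing the length-N string."""
    length = [0] * len(rules)
    for i, rule in enumerate(rules):
        if rule[0] == "char":
            length[i] = 1
        else:
            _, left, right = rule
            length[i] = length[left] + length[right]
    return length[root]

# Example: a 4-rule grammar generating the string "abab".
rules = [("char", "a"), ("char", "b"), ("concat", 0, 1), ("concat", 2, 2)]
assert decompress(rules, 3) == "abab"
assert compressed_length(rules, 3) == 4
```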
Antibiotic Spacers in Shoulder Arthroplasty: Comparison of Stemmed and Stemless Implants.
Background: Antibiotic spacers in shoulder periprosthetic joint infection deliver antibiotics locally and provide temporary stability. The purpose of this study was to evaluate differences between stemmed and stemless spacers.
Methods: All spacers placed from 2011 to 2013 were identified. Stemless spacers were made by creating a spherical ball of cement placed in the joint space. Stemmed spacers had some portion in the humeral canal. Operative time, complications, reimplantation, reinfection, and range of motion were analyzed.
Results: There were 37 spacers placed: 22 were stemless and 15 were stemmed. The stemless spacer population was older (70.9 ± 7.8 years vs. 62.8 ± 8.4 years, p = 0.006). The groups had a similar percentage of each gender (stemless group, 45% male vs. stemmed group, 40% male; p = 0.742), body mass index (stemless group, 29.1 ± 6.4 kg/m2 vs. stemmed group, 31.5 ± 8.3 kg/m2; p = 0.354) and Charlson Comorbidity Index (stemless group, 4.2 ± 1.2 vs. stemmed group, 4.2 ± 1.7; p = 0.958). Operative time was similar (stemless group, 127.5 ± 37.1 minutes vs. stemmed group, 130.5 ± 39.4 minutes). Two stemless group patients had self-resolving radial nerve palsies. Within the stemless group, 15 of 22 (68.2%) underwent reimplantation with 14 of 15 having forward elevation of 109° ± 23°. Within the stemmed group, 12 of 15 (80.0%, p = 0.427) underwent reimplantation with 8 of 12 having forward elevation of 94° ± 43° (range, 30° to 150°; p = 0.300). Two stemmed group patients had axillary nerve palsies, one of which self-resolved but the other did not. One patient sustained dislocation of reverse shoulder arthroplasty after reimplantation. One stemless group patient required an open reduction and glenosphere exchange of dislocated reverse shoulder arthroplasty at 6 weeks after reimplantation.
Conclusions: Stemmed and stemless spacers had similar clinical outcomes. Across all antibiotic spacers, over 70% were converted to revision arthroplasties. The results of this study do not suggest superiority of either stemmed or stemless antibiotic spacers
Conditional Lower Bounds for Space/Time Tradeoffs
In recent years much effort has been concentrated towards achieving polynomial time lower bounds on algorithms for solving various well-known problems. A useful technique for showing such lower bounds is to prove them conditionally based on well-studied hardness assumptions such as 3SUM, APSP, SETH, etc. This line of research helps to obtain a better understanding of the complexity inside P.

A related question asks to prove conditional space lower bounds on data structures that are constructed to solve certain algorithmic tasks after an initial preprocessing stage. This question has received little attention in previous research even though it has potentially strong impact.

In this paper we address this question and show that, surprisingly, many of the well-studied hard problems that are known to have conditional polynomial time lower bounds are also hard with respect to space. This hardness is shown as a tradeoff between the space consumed by the data structure and the time needed to answer queries. The tradeoff may be either smooth or admit one or more singularity points.

We reveal interesting connections between different space hardness conjectures and present matching upper bounds. We also apply these hardness conjectures to both static and dynamic problems and prove their conditional space hardness.

We believe that this novel framework of polynomial space conjectures can play an important role in expressing polynomial space lower bounds for many important algorithmic problems. Moreover, it seems that it can also help in achieving a better understanding of the hardness of the corresponding problems in terms of time
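As a purely illustrative example of the kind of space/time tradeoff discussed above, consider answering set-disjointness queries over a fixed family of sets: one extreme precomputes every pairwise answer (quadratic space, constant query time), the other stores only the sets themselves (linear space, query time proportional to the set sizes). The sketch below is a toy under these assumptions, not the paper's construction.

```python
from itertools import combinations

# Toy illustration of two extremes of a space/time tradeoff for
# set-disjointness queries over a fixed family of sets.
# Illustrative only; class and method names are assumptions.

class PrecomputedDisjointness:
    """Heavy preprocessing: space for every pair, O(1) query time."""
    def __init__(self, sets):
        self.table = {}
        for i, j in combinations(range(len(sets)), 2):
            self.table[(i, j)] = sets[i].isdisjoint(sets[j])

    def disjoint(self, i, j):
        if i == j:
            return False  # a non-empty set always intersects itself
        return self.table[(min(i, j), max(i, j))]

class OnTheFlyDisjointness:
    """No extra space beyond the input; query time grows with the set sizes."""
    def __init__(self, sets):
        self.sets = sets

    def disjoint(self, i, j):
        return self.sets[i].isdisjoint(self.sets[j])

sets = [{1, 2, 3}, {3, 4}, {5, 6}]
fast = PrecomputedDisjointness(sets)
lean = OnTheFlyDisjointness(sets)
assert fast.disjoint(0, 2) and lean.disjoint(0, 2)
assert not fast.disjoint(0, 1) and not lean.disjoint(0, 1)
```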
A high-wavenumber boundary-element method for an acoustic scattering problem
In this paper we show stability and convergence for a novel Galerkin boundary element method approach to the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data. This problem models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve a good approximation with a relatively low number of degrees of freedom we employ a graded mesh with smaller elements adjacent to discontinuities in impedance, and a special set of basis functions for the Galerkin method so that, on each element, the approximation space consists of polynomials of fixed degree multiplied by traces of plane waves on the boundary. In the case where the impedance is constant outside a finite interval, which only requires the discretization of that interval, we establish, theoretically and experimentally, a bound on the error in computing the acoustic field on the interval in terms of the number of degrees of freedom and the wavenumber. This indicates that the proposed method is especially commendable for large intervals or a high wavenumber. In a final section we sketch how the same methodology extends to more general scattering problems
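The graded mesh mentioned above concentrates small elements near each impedance discontinuity. A minimal sketch of one common way to achieve this, geometric grading toward a discontinuity, is given below; the grading ratio, element counts, and function name are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def geometric_graded_mesh(a, b, x_disc, n_per_side=8, ratio=0.5):
    """Mesh of [a, b] whose element sizes shrink geometrically toward x_disc.

    Illustrative only: the grading ratio and element counts are assumptions.
    Returns the sorted mesh nodes, including a, x_disc, and b.
    """
    # Node distances from the discontinuity, as fractions of each side:
    # 1, r, r^2, ..., r^(n-1), then 0 at the discontinuity itself.
    fractions = np.concatenate([ratio ** np.arange(n_per_side), [0.0]])
    left = x_disc - (x_disc - a) * fractions   # nodes in [a, x_disc]
    right = x_disc + (b - x_disc) * fractions  # nodes in [x_disc, b]
    return np.unique(np.concatenate([left, right]))

# Example: impedance jump at x = 0 inside the interval [-1, 2].
nodes = geometric_graded_mesh(-1.0, 2.0, 0.0)
print(np.diff(nodes))  # element widths: smallest next to the discontinuity
```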
A Survey of Expert Opinion Regarding Rotator Cuff Repair.
Many patients with rotator cuff tears have questions for their surgeons regarding the surgical procedure, perioperative management, restrictions, therapy, and ability to work after a rotator cuff repair. The purpose of our study was to determine common clinical practices among experts regarding rotator cuff repair and to assist surgeons in counseling patients. We surveyed 372 members of the American Shoulder and Elbow Surgeons (ASES) and the Association of Clinical Elbow and Shoulder Surgeons (ACESS); 111 members (29.8%) completed all or part of the survey, and 92.8% of the respondents answered every question. A consensus response (>50% agreement) was achieved on 49% (24 of 49) of the questions. Variability in responses likely reflects the fact that clinical practices have evolved over time based on clinical experience
