Age Differences in the Desirability of Narcissism
Young adult narcissism has been the focus of much discussion in the personality literature and popular press. Yet no previous studies have addressed whether there are age differences in the relative desirability of narcissistic and non-narcissistic self-descriptions, such as those presented as answer choices on the Narcissistic Personality Inventory (NPI; Raskin & Hall, 1979). In Study 1, younger age was associated with less negative evaluations of narcissistic (vs. non-narcissistic) statements in general, and more positive evaluations of narcissistic statements conveying leadership/authority. In Study 2, age was unrelated to perceiving a fictional target person as narcissistic, but younger age was associated with more positive connotations for targets described with narcissistic statements and less positive connotations for targets described with non-narcissistic statements, in terms of the inferences made about the target’s altruism, conscientiousness, social status, and self-esteem. In both studies, age differences in the relative desirability of narcissism remained statistically significant when adjusting for participants’ own narcissism, and the NPI showed measurement invariance across age. Despite perceiving narcissism similarly, adults of different ages view the desirability of NPI answer choices differently. These results are important when interpreting cross-generational differences in NPI scores, and can potentially facilitate cross-generational understanding.
Sampling-based Algorithms for Optimal Motion Planning
During the last decade, sampling-based path planning algorithms, such as
Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have
been shown to work well in practice and possess theoretical guarantees such as
probabilistic completeness. However, little effort has been devoted to the
formal analysis of the quality of the solution returned by such algorithms,
e.g., as a function of the number of samples. The purpose of this paper is to
fill this gap, by rigorously analyzing the asymptotic behavior of the cost of
the solution returned by stochastic sampling-based algorithms as the number of
samples increases. A number of negative results are provided, characterizing
existing algorithms, e.g., showing that, under mild technical conditions, the
cost of the solution returned by broadly used sampling-based algorithms
converges almost surely to a non-optimal value. The main contribution of the
paper is the introduction of new algorithms, namely, PRM* and RRT*, which are
provably asymptotically optimal, i.e., such that the cost of the returned
solution converges almost surely to the optimum. Moreover, it is shown that the
computational complexity of the new algorithms is within a constant factor of
that of their probabilistically complete (but not asymptotically optimal)
counterparts. The analysis in this paper hinges on novel connections between
stochastic sampling-based path planning algorithms and the theory of random
geometric graphs.
Comment: 76 pages, 26 figures, to appear in the International Journal of Robotics Research.
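The RRT* refinement described above — choosing the lowest-cost parent among nearby nodes and rewiring neighbors through the new node — can be sketched as follows. The 2D obstacle-free workspace, fixed step size, and fixed connection radius are illustrative assumptions (the paper's connection radius shrinks with the number of samples), and descendant costs are not propagated after rewiring, a further simplification.

```python
import math
import random

def rrt_star(start, goal, n_samples=300, step=0.5, radius=1.0, seed=0):
    """Minimal RRT* sketch on an obstacle-free 10x10 plane (illustrative only)."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}   # tree structure: child index -> parent index
    cost = {0: 0.0}      # cost-to-come along tree edges

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for _ in range(n_samples):
        q = (rng.uniform(0, 10), rng.uniform(0, 10))
        # nearest existing node, then steer at most one step toward the sample
        i_near = min(range(len(nodes)), key=lambda i: dist(nodes[i], q))
        d = dist(nodes[i_near], q)
        if d > step:
            t = step / d
            q = (nodes[i_near][0] + t * (q[0] - nodes[i_near][0]),
                 nodes[i_near][1] + t * (q[1] - nodes[i_near][1]))
        # RRT* refinement 1: pick the parent minimising cost-to-come
        near = [i for i in range(len(nodes)) if dist(nodes[i], q) <= radius]
        best = min(near, key=lambda i: cost[i] + dist(nodes[i], q))
        j = len(nodes)
        nodes.append(q)
        parent[j] = best
        cost[j] = cost[best] + dist(nodes[best], q)
        # RRT* refinement 2: rewire near nodes through the new node if cheaper
        for i in near:
            c_new = cost[j] + dist(nodes[j], nodes[i])
            if c_new < cost[i]:
                parent[i] = j
                cost[i] = c_new
    # cheapest path cost into a ball around the goal, if any node reached it
    goal_nodes = [i for i in range(len(nodes)) if dist(nodes[i], goal) < 0.5]
    return min((cost[i] + dist(nodes[i], goal) for i in goal_nodes), default=None)

c = rrt_star((0.0, 0.0), (9.0, 9.0))
```

Without the two refinement steps the loop degenerates to plain RRT, whose returned cost (per the paper's negative result) converges almost surely to a suboptimal value.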
Immunomagnetic T-lymphocyte depletion (ITLD) of rat bone marrow using OX-19 monoclonal antibody
Graft versus host disease (GVHD) may be abrogated and host survival prolonged by in vitro depletion of T lymphocytes from bone marrow (BM) prior to allotransplantation. Using a mouse anti-rat pan-T-lymphocyte monoclonal antibody (OX-19) bound to monosized, magnetic, polymer beads, T lymphocytes were removed in vitro from normal bone marrow. The removal of the T lymphocytes was confirmed by flow cytometry. Injection of the T-lymphocyte-depleted bone marrow into fully allogeneic rats prevents the induction of GVHD and prolongs host survival. A highly efficient technique of T-lymphocyte depletion from rat bone marrow is described. It involves the binding of OX-19, a monoclonal antibody directed against all rat thymocytes and mature peripheral T lymphocytes, to monosized, magnetic polymer spheres. Magnetic separation of T lymphocytes after mixing the allogeneic bone marrow with the bead/OX-19 complex provides a simple, rapid depletion of T lymphocytes from the bone marrow. In vitro studies using flow cytometry and the prevention of GVHD in a fully allogeneic rat bone marrow model demonstrate the effectiveness of the depletion procedure.
Three-year tracking of fatty acid composition of plasma phospholipids in healthy children
Objectives: The fatty acid composition of plasma phospholipids reflects dietary fatty acid intake as well as endogenous turnover. We aimed to investigate the potential tracking of plasma phospholipid fatty acid composition in children who participated in a prospective cohort study. Methods: 26 healthy children participated in a longitudinal study on health risks and had been enrolled after birth. All children were born at term with birth weights appropriate for gestational age. Follow-up took place at ages 24, 36 and 60 months. At each time point a 24-hour dietary recall was obtained, anthropometric parameters were measured and a blood sample for phospholipid fatty acid analysis was taken. Results: Dietary intakes of saturated (SFA), monounsaturated (MUFA) and polyunsaturated (PUFA) fatty acids at the three time points were not correlated. We found lower values for plasma MUFA and the MUFA/SFA ratio at 60 months compared to 24 months. In contrast, total PUFA, total n-6 and n-6 long-chain polyunsaturated fatty acids (LC-PUFA) were higher at 60 months. Significant averaged correlation coefficients (average of Pearson's R for 24 versus 36 months and 36 versus 60 months) were found for n-6 LC-PUFA (r = 0.67), n-6/n-3 LC-PUFA ratio (r = 0.59) and arachidonic acid/linoleic acid ratio (r = 0.64). Partial tracking was found for the docosahexaenoic acid/alpha-linolenic acid ratio (r = 0.33). Body mass index and sum of skinfolds Z-scores were similar in the three evaluations. Conclusions: A significant tracking of n-6 LC-PUFA, n-6 LC-PUFA/n-3 LC-PUFA ratio, arachidonic acid/linoleic acid ratio and docosahexaenoic acid/alpha-linolenic acid ratio may reflect an influence of individual endogenous fatty acid metabolism on plasma concentrations of some, but not all, fatty acids.
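The averaged tracking coefficient used here (mean of Pearson's r for 24-vs-36 and 36-vs-60 months) can be computed as in this sketch; the five children's n-6 LC-PUFA values are hypothetical, made up purely to exercise the calculation.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def averaged_tracking(v24, v36, v60):
    """Average of Pearson's r across the two consecutive follow-up pairs,
    the tracking measure described in the abstract."""
    return (pearson(v24, v36) + pearson(v36, v60)) / 2

# hypothetical n-6 LC-PUFA values (% of total fatty acids) for five children
r = averaged_tracking([2.1, 2.8, 3.0, 2.4, 3.3],
                      [2.2, 2.9, 3.2, 2.5, 3.1],
                      [2.4, 3.0, 3.1, 2.6, 3.4])
```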
Motion Planning as Online Learning: A Multi-Armed Bandit Approach to Kinodynamic Sampling-Based Planning
Kinodynamic motion planners allow robots to perform complex manipulation tasks under dynamics constraints or with black-box models. However, they struggle to find high-quality solutions, especially when a steering function is unavailable. This letter presents a novel approach that adaptively biases the sampling distribution to improve the planner's performance. The key contribution is to formulate the sampling bias problem as a non-stationary multi-armed bandit problem, where the arms of the bandit correspond to sets of possible transitions. High-reward regions are identified by clustering transitions from sequential runs of kinodynamic RRT and a bandit algorithm decides what region to sample at each timestep. The letter demonstrates the approach on several simulated examples as well as a 7-degree-of-freedom manipulation task with dynamics uncertainty, suggesting that the approach finds better solutions faster and leads to a higher success rate in execution
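The bandit loop above can be sketched as follows: each arm stands for a cluster of transitions identified from earlier kinodynamic RRT runs, and a sliding-window UCB rule decides which region to sample at each timestep. The specific rule, window size, and reward model are assumptions for illustration; sliding-window UCB is one standard choice for non-stationary bandits, not necessarily the algorithm used in the letter.

```python
import math
import random

def sliding_window_ucb(rewards_by_arm, n_steps=200, window=50, c=1.0, seed=0):
    """Choose a transition region (arm) at each planner timestep.

    Only the most recent `window` pulls count, which lets estimates track a
    non-stationary reward signal. rewards_by_arm maps arm name -> callable
    returning a stochastic reward for sampling that region.
    """
    rng = random.Random(seed)
    history = []  # (arm, reward), most recent last
    arms = list(rewards_by_arm)
    pulls = []
    for _ in range(n_steps):
        recent = history[-window:]
        counts = {a: sum(1 for arm, _ in recent if arm == a) for a in arms}
        untried = [a for a in arms if counts[a] == 0]
        if untried:
            a = untried[0]  # play any arm with no pull inside the window
        else:
            def ucb(a):
                mean = sum(r for arm, r in recent if arm == a) / counts[a]
                return mean + c * math.sqrt(math.log(len(recent)) / counts[a])
            a = max(arms, key=ucb)
        history.append((a, rewards_by_arm[a](rng)))
        pulls.append(a)
    return pulls

# two hypothetical regions: sampling "A" improves the plan more often than "B"
pulls = sliding_window_ucb({
    "A": lambda rng: 1.0 if rng.random() < 0.8 else 0.0,
    "B": lambda rng: 1.0 if rng.random() < 0.2 else 0.0,
})
```

Over time the rule concentrates pulls on the high-reward region while keeping enough exploration to notice when reward distributions drift.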
SentiBench - a benchmark comparison of state-of-the-practice sentiment analysis methods
In the last few years thousands of scientific papers have investigated
sentiment analysis, several startups that measure opinions on real data have
emerged and a number of innovative products related to this theme have been
developed. There are multiple methods for measuring sentiment, including
lexicon-based and supervised machine learning methods. Despite the vast
interest in the topic and the wide popularity of some methods, it is unclear
which one is better at identifying the polarity (i.e., positive or negative)
of a message. Accordingly, there is a strong need for a thorough
apples-to-apples comparison of sentiment analysis methods, as they are
used in practice, across multiple datasets originating from different data
sources. Such a comparison is key to understanding the potential limitations,
advantages, and disadvantages of popular methods. This article aims at filling
this gap by presenting a benchmark comparison of twenty-four popular sentiment
analysis methods (which we call the state-of-the-practice methods). Our
evaluation is based on a benchmark of eighteen labeled datasets, covering
messages posted on social networks, movie and product reviews, as well as
opinions and comments in news articles. Our results highlight the extent to
which the prediction performance of these methods varies considerably across
datasets. Aiming at boosting the development of this research area, we open the
methods' codes and datasets used in this article, deploying them in a benchmark
system, which provides an open API for accessing and comparing sentence-level
sentiment analysis methods.
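As a minimal illustration of the lexical family of methods compared in this benchmark, the sketch below assigns sentence-level polarity from hand-made word lists; the lexicon, the one-word negation handling, and the messages are illustrative assumptions, not any of the twenty-four benchmarked methods.

```python
# Toy lexicon-based polarity classifier (illustrative word lists only).
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}
NEGATIONS = {"not", "never", "no"}

def polarity(message):
    """Return +1 (positive), -1 (negative), or 0 (neutral).

    A negation word flips the polarity of the immediately following word,
    a common heuristic in simple lexical methods.
    """
    score, negate = 0, False
    for raw in message.lower().split():
        word = raw.strip(".,!?")
        if word in NEGATIONS:
            negate = True
            continue
        delta = (word in POSITIVE) - (word in NEGATIVE)
        score += -delta if negate else delta
        negate = False
    return (score > 0) - (score < 0)

labels = [polarity(m) for m in
          ["I love this movie, it is great!",
           "Not good at all.",
           "The meeting is at noon."]]
```

Even this toy example hints at why prediction performance varies so much across datasets: coverage of the lexicon and handling of negation dominate the outcome.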
On Correctness of Data Structures under Reads-Write Concurrency
We study the correctness of shared data structures under reads-write concurrency. A popular approach to ensuring correctness of read-only operations in the presence of concurrent updates is read-set validation, which checks that all read variables have not changed since they were first read. In practice, this approach is often too conservative, which adversely affects performance. In this paper, we introduce a new framework for reasoning about correctness of data structures under reads-write concurrency, which replaces validation of the entire read-set with more general criteria. Namely, instead of verifying that all read variables remain unchanged, we verify that the values read satisfy conditions over the shared variables, which we call base conditions. We show that reading values that satisfy some base condition at every point in time implies correctness of read-only operations executing in parallel with updates. Somewhat surprisingly, the resulting correctness guarantee is not equivalent to linearizability, and is instead captured through two new conditions: validity and regularity. Roughly speaking, the former requires that a read-only operation never reaches a state unreachable in a sequential execution; the latter generalizes Lamport’s notion of regularity to arbitrary data structures, and is weaker than linearizability. We further extend our framework to also capture linearizability. We illustrate how our framework can be applied to reasoning about correctness of a variety of data structure implementations, such as linked lists.
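The contrast between full read-set validation and a base-condition criterion can be illustrated on a toy object: a range with the invariant lo <= hi. The class, the simulated mixed snapshot, and the single-threaded setting are all assumptions for the sketch; the paper's framework is about genuinely concurrent executions.

```python
# Toy illustration: read-set validation vs. a base condition, on a "range"
# object whose invariant (base condition) is lo <= hi.

class Range:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi
        self.version = 0  # bumped on every update

    def update(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi
        self.version += 1

def read_with_validation(r):
    """Classic read-set validation: retry if any read variable changed."""
    while True:
        v = r.version
        lo, hi = r.lo, r.hi
        if r.version == v:  # nothing changed since the first read
            return lo, hi

def base_condition_holds(lo, hi):
    """Accept any snapshot satisfying the invariant, even a 'mixed' one."""
    return lo <= hi

r = Range(1, 5)
snap = read_with_validation(r)

# A reader that saw lo=1 before update(2, 9) and hi=9 after it holds a
# mixed snapshot. Full validation would reject it (the version changed),
# yet it still satisfies the base condition, so a base-condition reader
# may accept it without retrying -- the source of the performance gain.
mixed = (1, 9)
```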
Associations between cardiorespiratory fitness, physical activity and clustered cardiometabolic risk in children and adolescents: the HAPPY study
Clustering of cardiometabolic risk factors can occur during childhood and predisposes individuals to cardiometabolic disease. This study calculated clustered cardiometabolic risk in 100 children and adolescents aged 10-14 years (59 girls) and explored differences according to cardiorespiratory fitness (CRF) levels and time spent at different physical activity (PA) intensities. CRF was determined using a maximal cycle ergometer test, and PA was assessed using accelerometry. A cardiometabolic risk score was computed as the sum of the standardised scores for waist circumference, blood pressure, total cholesterol/high-density lipoprotein ratio, triglycerides and glucose. Differences in clustered cardiometabolic risk between fit and unfit participants, according to previously proposed health-related threshold values, and between tertiles for PA subcomponents were assessed using ANCOVA. Clustered risk was significantly lower (p < 0.001) in the fit group (mean -1.21 ± 3.42) compared to the unfit group (mean 0.74 ± 2.22), while no differences existed between tertiles for any subcomponent of PA. Conclusion: These findings suggest that CRF may have an important cardioprotective role in children and adolescents and highlight the importance of promoting CRF in youth.
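The clustered risk score described above (the sum of standardised scores across the risk factors) can be computed as in this sketch; the sample values and the use of only two of the five factors are illustrative assumptions.

```python
import math

def z_scores(values):
    """Standardise a variable across the sample: (x - mean) / sd."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return [(v - mean) / sd for v in values]

def clustered_risk(*risk_factors):
    """Per-participant sum of standardised scores across risk factors
    (waist circumference, blood pressure, TC/HDL ratio, triglycerides and
    glucose in the study; two hypothetical factors here)."""
    per_factor = [z_scores(f) for f in risk_factors]
    return [sum(z[i] for z in per_factor) for i in range(len(risk_factors[0]))]

# hypothetical values for four participants
waist = [60.0, 72.0, 65.0, 80.0]         # cm
systolic = [104.0, 118.0, 110.0, 126.0]  # mmHg
scores = clustered_risk(waist, systolic)
```

Because each factor is standardised within the sample, the scores sum to zero across participants by construction, and a higher score means more clustered risk relative to the group.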
Multi-robot grasp planning for sequential assembly operations
This paper addresses the problem of finding robot configurations to grasp assembly parts during a sequence of collaborative assembly operations. We formulate the search for such configurations as a constraint satisfaction problem (CSP). Collision constraints in an operation and transfer constraints between operations determine the sets of feasible robot configurations. We show that solving the connected constraint graph with off-the-shelf CSP algorithms can quickly become infeasible even for a few sequential assembly operations. We present an algorithm which, through the assumption of feasible regrasps, divides the CSP into independent smaller problems that can be solved exponentially faster. The algorithm then uses local search techniques to improve this solution by removing a gradually increasing number of regrasps from the plan. The algorithm enables the user to stop the planner anytime and use the current best plan if the cost of removing regrasps from the plan exceeds the cost of executing those regrasps. We present simulation experiments to compare our algorithm’s performance to a naive algorithm which directly solves the connected constraint graph. We also present a physical robot system which uses the output of our planner to grasp and bring parts together in assembly configurations.
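The decomposition idea can be sketched as follows: with a regrasp allowed between every pair of consecutive operations, the transfer constraints disappear and each operation's grasp configuration can be chosen independently, at the price of executing those regrasps. The tiny domains and constraint predicates below are illustrative assumptions, not the paper's actual problem encoding.

```python
from itertools import product

def solve_joint(domains, collision_ok, transfer_ok):
    """Naive CSP over all operations at once: enumeration over the product
    of domains, exponential in the number of operations."""
    for assign in product(*domains):
        if all(collision_ok(i, c) for i, c in enumerate(assign)) and \
           all(transfer_ok(assign[i], assign[i + 1])
               for i in range(len(assign) - 1)):
            return list(assign)
    return None

def solve_with_regrasps(domains, collision_ok):
    """Decomposed problem: each operation is solved independently, assuming
    a feasible regrasp wherever consecutive configurations differ."""
    plan = []
    for i, dom in enumerate(domains):
        feasible = [c for c in dom if collision_ok(i, c)]
        if not feasible:
            return None
        plan.append(feasible[0])
    return plan

domains = [[0, 1, 2]] * 3              # three operations, three grasp configs
collision_ok = lambda i, c: c != i     # config i collides during operation i
transfer_ok = lambda a, b: a == b      # no regrasp: keep the same grasp

joint = solve_joint(domains, collision_ok, transfer_ok)
decomposed = solve_with_regrasps(domains, collision_ok)
regrasps = sum(1 for a, b in zip(decomposed, decomposed[1:]) if a != b)
```

In this toy instance the joint, regrasp-free CSP has no solution at all, while the decomposed version finds a plan with a single regrasp; the paper's local search then tries to remove such regrasps one by one.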
Scalable transactions in the cloud: partitioning revisited
Lecture Notes in Computer Science, 6427. Cloud computing is becoming one of the most used paradigms to deploy highly available and scalable systems. These systems usually demand the management of huge amounts of data, which cannot be handled by traditional or replicated database systems as we know them. Recent solutions store data in special key-value structures, in an approach that commonly lacks the consistency provided by transactional guarantees, as it is traded for high scalability and availability. In order to ensure consistent access to the information, the use of transactions is required. However, it is well known that traditional replication protocols do not scale well in a cloud environment. Here we examine current proposals for deploying transactional systems in the cloud and propose a new system aiming to be a step forward in achieving this goal. We then focus on data partitioning and describe the key role it plays in achieving high scalability. This work has been partially supported by the Spanish Government under grant TIN2009-14460-C03-02, by the Spanish MEC under grant BES-2007-17362, and by project ReD Resilient Database Clusters (PDTC/EIA-EIA/109044/2008).
