How hard is it to cross the room? -- Training (Recurrent) Neural Networks to steer a UAV
This work explores the feasibility of steering a drone with a (recurrent)
neural network, based on input from a forward-looking camera, in the context of
a high-level navigation task. We set up a generic framework for training a
network to perform navigation tasks based on imitation learning. It can be
applied to both aerial and land vehicles. As a proof of concept we apply it to
a UAV (Unmanned Aerial Vehicle) in a simulated environment, learning to cross a
room containing a number of obstacles. So far only feedforward neural networks
(FNNs) have been used to train UAV control. To cope with more complex tasks, we
propose the use of recurrent neural networks (RNNs) instead and successfully
train an LSTM (Long Short-Term Memory) network for controlling UAVs.
Vision-based control is a sequential prediction problem, known for its highly
correlated input data. The correlation makes training a network hard,
especially an RNN. To overcome this issue, we investigate an alternative
sampling method during training, namely window-wise truncated backpropagation
through time (WW-TBPTT). Further, end-to-end training requires a lot of data,
which is often not available. Therefore, we compare the performance of
retraining only the Fully Connected (FC) and LSTM control layers with networks
which are trained end-to-end. Performing the relatively simple task of crossing
a room already reveals important guidelines and good practices for training
neural control networks. Different visualizations help to explain the learned
behavior.
Comment: 12 pages, 30 figures
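The window-wise sampling idea behind WW-TBPTT can be sketched in a few lines: instead of sliding over a demonstration sequence in order, fixed-length windows are drawn at random offsets, and backpropagation through time is truncated at window boundaries. The sketch below is a minimal NumPy illustration of that sampling step only; window length, window count and variable names are my assumptions, not values from the paper.

```python
import numpy as np

def sample_tbptt_windows(sequence, window_len=20, n_windows=4, rng=None):
    """Sample fixed-length windows from one demonstration sequence.

    Each window is treated as an independent training sample, so
    gradients are truncated at window boundaries.  Drawing windows at
    random offsets decorrelates consecutive batches compared to
    iterating over the highly correlated sequence in order.
    """
    rng = np.random.default_rng(rng)
    max_start = len(sequence) - window_len
    starts = rng.integers(0, max_start + 1, size=n_windows)
    return [sequence[s:s + window_len] for s in starts]

# Toy demonstration: a "sequence" of 100 timesteps.
demo = np.arange(100)
windows = sample_tbptt_windows(demo, window_len=20, n_windows=4, rng=0)
assert all(len(w) == 20 for w in windows)
```

In a real training loop each window would be unrolled through the RNN and the hidden state reset (or detached) between windows.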
Reducing product diversity in higher education.
Public systems of higher education have recently attempted to cut costs by providing financial incentives to institutions that reduce the diversity of their programs. We study the profit and welfare effects of reducing product diversity in higher education, against the background of a funding system reform in Flanders (Belgium). We find that dropping duplicated programs at individual institutions tends to be socially undesirable, due to the limited fixed cost and variable cost savings and the students’ low willingness to travel to other institutions. Furthermore, we find that the financial incentives offered to drop programs may be very ineffective, leading to both undesirable reform and undesirable status quo. These findings emphasize the complexities in regulating product diversity in higher education, and serve as a word of caution towards the various decentralized financial incentive schemes that have recently been introduced.
Keywords: product diversity; higher education
Participation and schooling in a public system of higher education.
We analyze the determinants of participation (whether to study) and schooling (where and what to study) in a public system of higher education, based on a unique dataset of all eligible high school pupils in an essentially closed region (Flanders). We find that pupils perceive the available institutions and programs as close substitutes, implying an ambiguous role for travel costs: they hardly affect the participation decisions, but have a strong impact on the schooling decisions. In addition, high school background plays an important role in both the participation and schooling decisions. To illustrate how our empirical results can inform the debate on reforming public systems, we assess the effects of tuition fee increases. Uniform cost-based tuition fee increases achieve most of the welfare gains; the additional gains from fee differentiation are relatively unimportant. These welfare gains are quite large if one makes conservative assumptions on the social cost of public funds, and there is a substantial redistribution from students to outsiders.
The great divide in scientific productivity. Why the average scientist does not exist.
We use a quantile regression approach to estimate the effects of age, gender, research funding, teaching load and other observed characteristics of academic researchers on the full distribution of research performance, both in its quantity (publications) and quality (citations) dimension. Exploiting the panel nature of our dataset, we estimate a correlated random-effects quantile regression model, accounting for unobserved heterogeneity of researchers. We employ recent advances in quantile regression that allow its application to count data. Estimation of the model for a panel of biomedical and exact scientists at the KU Leuven in the period 1992-2001 shows strong support for our quantile regression approach, revealing the differential impact of almost all regressors along the distribution. We also find that variables like funding, teaching load and cohort have a different impact on research quantity than on research quality.
Keywords: economics of science; research productivity; quantile regression; count data; random effects
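The quantile regression approach above rests on the pinball (check) loss: the τ-th quantile is the constant prediction that minimizes an asymmetric absolute loss. A minimal NumPy illustration on synthetic count data (the Poisson sample and all names below are mine, not the paper's data or model):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Mean check loss of predicting the constant q for observations y.

    Residuals above q are weighted by tau, residuals below by (1 - tau),
    so minimizing over q recovers the tau-th quantile of y.
    """
    r = y - q
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(1)
y = rng.poisson(5, size=10_000)  # synthetic counts, e.g. publication tallies
tau = 0.9
candidates = np.arange(0, 15, 0.5)
best = candidates[np.argmin([pinball_loss(y, q, tau) for q in candidates])]
# `best` should lie close to the empirical 90th percentile of y
```

The paper's count-data extension additionally smooths the discrete outcome (jittering) and adds covariates and random effects; this sketch only shows the loss that all of that builds on.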
DoShiCo Challenge: Domain Shift in Control Prediction
Training deep neural network policies end-to-end for real-world applications
so far requires big demonstration datasets in the real world or big sets
consisting of a large variety of realistic and closely related 3D CAD models.
These real or virtual data should, moreover, have very similar characteristics
to the conditions expected at test time. These stringent requirements, and the
time-consuming data collection processes they entail, are currently the most
important impediment that keeps deep reinforcement learning from being
deployed in real-world applications. Therefore, in this work we advocate an
alternative approach, where instead of avoiding any domain shift by carefully
selecting the training data, the goal is to learn a policy that can cope with
it. To this end, we propose the DoShiCo challenge: to train a model in very
basic synthetic environments, far from realistic, in a way that it can be
applied in more realistic environments as well as take the control decisions on
real-world data. In particular, we focus on the task of collision avoidance for
drones. We created a set of simulated environments that can be used as
benchmark and implemented a baseline method, exploiting depth prediction as an
auxiliary task to help overcome the domain shift. Even though the policy is
trained in very basic environments, it can learn to fly without collisions in a
very different realistic simulated environment. Of course several benchmarks
for reinforcement learning already exist - but they never include a large
domain shift. On the other hand, several benchmarks in computer vision focus on
the domain shift, but they take the form of static datasets instead of
simulated environments. In this work we claim that it is crucial to take the
two challenges together in one benchmark.
Comment: Published at SIMPAR 2018. Please visit the paper webpage for more information, a movie and code for reproducing results: https://kkelchte.github.io/doshic
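The auxiliary-task idea described above can be sketched as a weighted multi-task loss: a shared encoder feeds both a control head and a depth head, and their losses are summed. The sketch below is a minimal NumPy illustration; the function name, the use of MSE for both terms, and the weight `lam` are my assumptions, not the paper's exact formulation.

```python
import numpy as np

def multitask_loss(pred_control, true_control, pred_depth, true_depth, lam=0.5):
    """Combined loss for control prediction with depth as auxiliary task.

    The control head is scored with mean squared error on the steering
    commands; the depth head adds an auxiliary MSE term weighted by lam.
    The auxiliary term pushes the shared features to encode scene
    geometry, which is one way to help the policy cope with the domain
    shift from basic synthetic environments to realistic ones.
    """
    control_loss = np.mean((pred_control - true_control) ** 2)
    depth_loss = np.mean((pred_depth - true_depth) ** 2)
    return control_loss + lam * depth_loss

# Toy shapes: 4 steering commands, 4 coarse 8-pixel depth maps.
c = np.zeros(4)
d = np.zeros((4, 8))
loss = multitask_loss(c, c, d, d)  # perfect predictions -> loss 0.0
```

At test time only the control head is used; the depth head exists purely to shape the shared representation during training.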
Internal basic research, external basic research and the technological performance of pharmaceutical firms.
We evaluate the impact of basic research on pharmaceutical firms’ technological performance, distinguishing between internal basic research and the exploitation of external basic research findings. We find that firms increase their performance by engaging more in internal basic research, in particular if basic research is conducted in collaboration with university scientists. The exploitation of external basic research also improves performance, and the magnitude of this effect increases with firms’ involvement in internal basic research. Hence, internal basic research and the exploitation of external basic research are complements, suggesting that internal basic research provides firms with the skills to exploit external basic research more effectively.
Keywords: basic research; industrial innovation; pharmaceutical industry
Top research productivity and its persistence. A survival time analysis for a panel of Belgian scientists.
The paper contributes to the debate on cumulative advantage effects in academic research by examining top performance in research and its persistence over time, using a panel dataset comprising the publications of biomedical and exact scientists at the KU Leuven in the period 1992-2001. We study the selection of researchers into productivity categories and analyze how they switch between these categories over time. About 25% achieve top performance at least once, while 5% are persistently top. Analyzing the hazard of first and subsequent top performance shows strong support for an accumulative process. Rank, gender, hierarchical position and past performance are highly significant explanatory factors.
Keywords: economics of science; hazard models; performance; persistence; research productivity; selection
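The hazard analysis mentioned above can be illustrated with a discrete-time sketch: the hazard at career year t is the fraction of researchers who first reach the top category in year t among those still at risk. The toy cohort and the simplifying assumption that censoring occurs only at the end of the panel are mine; the paper's actual survival models are richer.

```python
import numpy as np

def discrete_hazard(first_top_year, horizon):
    """Empirical discrete-time hazard of first top performance.

    first_top_year[i] is the year researcher i first becomes a top
    performer, or -1 if never observed at the top.  Researchers coded -1
    stay in the risk set for every year, which is correct when censoring
    happens only at the end of the observation window.
    Returns hazard[t] = P(first top in year t | not top before t).
    """
    first_top_year = np.asarray(first_top_year)
    hazards = []
    at_risk = len(first_top_year)
    for t in range(horizon):
        events = int(np.sum(first_top_year == t))
        hazards.append(events / at_risk if at_risk > 0 else 0.0)
        at_risk -= events  # those who reached the top leave the risk set
    return np.array(hazards)

# Toy cohort of 10 researchers followed for 5 years.
years = [0, 1, 1, 2, -1, -1, 3, -1, 2, -1]
h = discrete_hazard(years, horizon=5)
```

A rising hazard with elapsed career time (or with past performance as a covariate) would be the kind of pattern consistent with an accumulative process.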
Influence of anticardiolipin and anti-β2 glycoprotein I antibody cutoff values on antiphospholipid syndrome classification
Background: Anticardiolipin (aCL) and anti-beta 2 glycoprotein I (a beta 2GPI) immunoglobulin (Ig) G/IgM antibodies are 2 of the 3 laboratory criteria for classification of antiphospholipid syndrome (APS). The threshold for clinically relevant levels of antiphospholipid antibodies (aPL) for the diagnosis of APS remains a matter of debate. The aim of this study was to evaluate the variation in cutoffs as determined in different clinical laboratories based on the results of a questionnaire, as well as to determine the optimal method for cutoff establishment based on a clinical approach.
Methods: The study included samples from 114 patients with thrombotic APS, 138 patients with non-APS thrombosis, 138 patients with autoimmune disease, and 183 healthy controls. aCL and a beta 2GPI IgG/IgM antibodies were measured at 1 laboratory using 4 commercial assays. Assay-specific cutoff values for aPL were obtained by determining 95th and 99th percentiles of 120 compared to 200 normal controls by different statistical methods.
Results: Normal reference value data showed a nonparametric distribution. Higher cutoff values were found when calculated as 99th rather than 95th percentiles. These values also showed a stronger association with thrombosis. The use of 99th percentile cutoffs reduced the chance of false positivity but at the same time reduced sensitivity. The decrease in sensitivity was higher than the gain in specificity when 99th percentiles were calculated by methods wherein no outliers were eliminated.
Conclusions: We present cutoff values for aPL determined by different statistical methods. The 99th percentile cutoff value seemed more specific.
However, our findings indicate the need for standardized statistical criteria to calculate 99th percentile cutoff reference values.
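The cutoff-setting procedure described above amounts to taking the 95th or 99th percentile of a healthy reference sample and accepting the resulting sensitivity/specificity trade-off. The NumPy sketch below illustrates this with entirely synthetic antibody levels (lognormal draws; the distributions, seed and sample sizes are illustrative only, not study data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic antibody levels: a healthy reference group and a patient
# group with higher average levels (illustrative distributions only).
reference = rng.lognormal(mean=1.0, sigma=0.5, size=200)
patients = rng.lognormal(mean=1.8, sigma=0.5, size=114)

for pct in (95, 99):
    cutoff = np.percentile(reference, pct)
    sensitivity = np.mean(patients > cutoff)    # true-positive rate
    specificity = np.mean(reference <= cutoff)  # true-negative rate
    print(f"{pct}th percentile: cutoff={cutoff:.2f} "
          f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

Moving from the 95th to the 99th percentile raises the cutoff, so specificity can only improve while sensitivity can only fall, which is exactly the trade-off the abstract quantifies on real assay data.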
RIO Country Report 2015: Belgium
The 2015 series of RIO Country Reports analyse and assess the policy and the national research and innovation system developments in relation to national policy priorities and the EU policy agenda, with a special focus on the ERA and the Innovation Union. The executive summaries of these reports put forward the main challenges of the research and innovation systems.
JRC.J.6 - Innovation Systems Analysis
