Multimedia, report
The Multimedia project focused on producing a number of videos and an interactive programme. Management asked for a multimedia project plan pursuing the following goals:
* Develop policy and provide facilities for the development, storage and management of multimedia learning materials.
* Use multimedia in considerably more courses than is currently the case.
* Intensify the role of the Open Universiteit in the area of virtual (multimedia) practicals for complex skills.
The project plan was approved, and four work packages were elaborated to achieve these goals: work package 1, video production; work package 2, video management; work package 3, video reuse; and work package 4, video information.
Robust GPU-based Virtual Reality Simulation of Radio Frequency Ablations for Various Needle Geometries and Locations
Purpose: Radio-frequency ablations play an important role in the therapy of
malignant liver lesions. The navigation of a needle to the lesion poses a
challenge for both trainees and intervening physicians. Methods: This
publication presents a new GPU-based, accurate method for the simulation of
radio-frequency ablations for lesions at the needle tip in general and for an
existing visuo-haptic 4D VR simulator. The method is implemented with Nvidia
CUDA and is real-time capable. Results: It performs better than a method from the literature
concerning the theoretical characteristic of monotonic convergence of the
bioheat PDE and an in vitro gold standard, with significant improvements (p <
0.05) in terms of Pearson correlations. It shows no failure modes or
theoretically inconsistent individual simulation results after the initial
phase of 10 seconds. On the Nvidia 1080 Ti GPU it achieves a very high frame
rendering performance of >480 Hz. Conclusion: Our method provides a more robust
and safer real-time ablation planning and intraoperative guidance technique,
especially avoiding the over-estimation of the ablated tissue death zone, which
is risky for the patient in terms of tumor recurrence. Future in vitro
measurements and optimization will further improve the conservative estimate.
Comment: 18 pages, 14 figures, 1 table, 2 algorithms, 2 movies
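For context (not stated in the abstract itself), the bioheat PDE referred to above is, in most radio-frequency ablation simulators, the Pennes bioheat equation; the authors' exact formulation, source term and discretization are not given here, so the standard form is shown only as background:

\rho c \,\frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + \rho_b c_b \omega_b (T_b - T) + Q_{rf}

where T is the tissue temperature, \rho, c and k are the tissue density, specific heat and thermal conductivity, \rho_b, c_b, \omega_b and T_b are the blood density, specific heat, perfusion rate and temperature, and Q_{rf} is the radio-frequency heat source term.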
Optimizing the use of expert panel reference diagnoses in diagnostic studies of multidimensional syndromes
Background: In the absence of a gold standard, a panel of experts can be invited to assign a reference diagnosis
for use in research. Available literature offers limited guidance on assembling and working with an expert panel
for this purpose. We aimed to develop a protocol for an expert panel consensus diagnosis and evaluated its
applicability in a pilot project.
Methods: An adjusted Delphi method was used, which started with the assessment of clinical vignettes by 3
experts individually, followed by a consensus discussion meeting to solve diagnostic discrepancies. A panel
facilitator ensured that all experts were able to express their views, and encouraged the use of argumentation to
arrive at a specific diagnosis, until consensus was reached by all experts. Eleven vignettes of patients suspected of
having a primary neurodegenerative disease were presented to the experts. Clinical information was provided
stepwise and included medical history, neurological, physical and cognitive function, brain MRI scan, and follow-up
assessments over 2 years. After the consensus discussion meeting, the procedure was evaluated by the experts.
Results: The average degree of consensus for the reference diagnosis increased from 52% after individual
assessment of the vignettes to 94% after the consensus discussion meeting. Average confidence in the diagnosis
after individual assessment was 85%. This did not increase after the consensus discussion meeting. The process
evaluation led to several recommendations for improvement of the protocol.
Conclusion: A protocol for attaining a reference diagnosis based on expert panel consensus was shown to be feasible in
research practice.
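The abstract reports a "degree of consensus" per assessment round but does not define how it is computed. The short Python sketch below illustrates one plausible operationalization, assumed here for illustration only: the share of experts whose diagnosis matches the modal (majority) diagnosis, averaged over vignettes.

```python
# Illustrative only: the paper does not give the consensus formula.
# Assumed definition: fraction of experts agreeing with the modal diagnosis,
# averaged over all vignettes.
from collections import Counter

def degree_of_consensus(diagnoses_per_vignette):
    """diagnoses_per_vignette: one inner list of expert diagnoses per vignette."""
    scores = []
    for diagnoses in diagnoses_per_vignette:
        modal_count = Counter(diagnoses).most_common(1)[0][1]
        scores.append(modal_count / len(diagnoses))
    return sum(scores) / len(scores)

# Example: 3 experts, 2 vignettes -> (2/3 + 3/3) / 2 ≈ 0.83
print(degree_of_consensus([["AD", "AD", "FTD"], ["DLB", "DLB", "DLB"]]))
```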
Systematic literature review of methodologies and data sources of existing economic models across the full spectrum of Alzheimer’s disease and dementia from apparently healthy through disease progression to end of life care: a systematic review protocol
Introduction: Dementia is one of the greatest health challenges the world will face in the coming decades, as it is one of the principal causes of disability and dependency among older people. Economic modelling is used widely across many health conditions to inform decisions on health and social care policy and practice. The aim of this literature review is to systematically identify, review and critically evaluate existing health economics models in dementia. We included the full spectrum of dementia, including Alzheimer’s disease (AD), from preclinical stages through to severe dementia and end of life. This review forms part of the Real world Outcomes across the Alzheimer’s Disease spectrum for better care: multimodal data Access Platform (ROADMAP) project.
Methods and analysis: Electronic searches were conducted in Medical Literature Analysis and Retrieval System Online, Excerpta Medica dataBASE, Economic Literature Database, NHS Economic Evaluation Database, Cochrane Central Register of Controlled Trials, Cost-Effectiveness Analysis Registry, Research Papers in Economics, Database of Abstracts of Reviews of Effectiveness, Science Citation Index, Turning Research Into Practice and Open Grey for studies published between January 2000 and the end of June 2017. Two reviewers will independently assess each study against predefined eligibility criteria. A third reviewer will resolve any disagreement. Data will be extracted using a predefined data extraction form following best practice. Study quality will be assessed using the Phillips checklist for decision analytic modelling. A narrative synthesis will be used.
Ethics and dissemination: The results will be made available in a scientific peer-reviewed journal paper, will be presented at relevant conferences and will also be made available through the ROADMAP project.
Phosphate and plaster bonded syntactic foams for collapsible cores in light metal casting
In the casting industry, so-called “lost cores” are used to fabricate cavities in cast components. Those cores are typically made from inorganic materials like sand and have to satisfy a variety of requirements – some of which are contradictory. Among others, they have to be stable against the thermal and mechanical loads of the casting process; they must not be infiltrated by the metal melt or induce chemical reactions detrimental to the performance of the casting. Last but not least, demolding has to be easy, i.e. the core has to be destroyed and removed from the casting very easily and without any residues. Especially for small and delicate cores these requirements are very difficult to meet.
TSynD: Targeted Synthetic Data Generation for Enhanced Medical Image Classification
The usage of medical image data for the training of large-scale machine
learning approaches is particularly challenging due to its scarce availability
and the costly generation of data annotations, typically requiring the
engagement of medical professionals. The rapid development of generative models
makes it possible to tackle this problem by leveraging large amounts of realistic,
synthetically generated data for the training process. However, randomly
choosing synthetic samples might not be an optimal strategy.
In this work, we investigate the targeted generation of synthetic training
data, in order to improve the accuracy and robustness of image classification.
Therefore, our approach aims to guide the generative model to synthesize data
with high epistemic uncertainty, since large measures of epistemic uncertainty
indicate underrepresented data points in the training set. During the image
generation, we feed images reconstructed by an autoencoder into the classifier
and compute the mutual information over the class-probability distribution as a
measure of uncertainty. We alter the feature space of the autoencoder through
an optimization process with the objective of maximizing the classifier
uncertainty on the decoded image. By training on such data we improve the
performance and robustness against test time data augmentations and adversarial
attacks on several classification tasks.
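The targeted-generation loop described above (encode, decode, score uncertainty, update the latent code) can be sketched concretely. The PyTorch-style sketch below is an illustration under assumptions rather than the authors' implementation: the autoencoder is assumed to expose encode/decode methods, MC dropout is assumed as the source of the stochastic class-probability samples over which the mutual information is computed, and all names, step counts and learning rates are illustrative.

```python
# Minimal sketch of uncertainty-targeted synthetic data generation (assumptions noted above).
import torch
import torch.nn.functional as F

def mc_dropout_probs(classifier, x, n_samples=16):
    """Draw class-probability samples with dropout kept active (MC dropout, assumed here)."""
    classifier.train()  # keep dropout layers stochastic
    probs = torch.stack([F.softmax(classifier(x), dim=-1) for _ in range(n_samples)])
    return probs  # shape: (n_samples, batch, n_classes)

def mutual_information(probs, eps=1e-8):
    """Epistemic uncertainty as mutual information: H[E[p]] - E[H[p]]."""
    mean_p = probs.mean(dim=0)
    entropy_of_mean = -(mean_p * (mean_p + eps).log()).sum(dim=-1)
    mean_entropy = -(probs * (probs + eps).log()).sum(dim=-1).mean(dim=0)
    return entropy_of_mean - mean_entropy  # shape: (batch,)

def generate_uncertain_samples(autoencoder, classifier, x_seed, steps=50, lr=0.05):
    """Perturb the latent codes of seed images so that the decoded images
    maximize the classifier's epistemic uncertainty."""
    z = autoencoder.encode(x_seed).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_hat = autoencoder.decode(z)
        mi = mutual_information(mc_dropout_probs(classifier, x_hat))
        loss = -mi.mean()  # maximize uncertainty
        opt.zero_grad()
        loss.backward()
        opt.step()
    return autoencoder.decode(z).detach()  # candidate synthetic training images
```

The generated images would then be added to the training set of the classifier; in this reading, the optimization in latent space steers the generator toward underrepresented regions rather than sampling it at random.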
