Economical discharge can save a grower a lot of money
Discharging drain water burdens the environment and costs money: you throw away minerals and water. Cecilia Stanghellini of Wageningen UR Glastuinbouw has developed a calculation method that determines when discharging is necessary. Additional economic calculations show that keeping the water as clean as possible always pays off.
The biological clock governs many processes in the plant: plants respond differently to cultivation measures over the course of the day
It is becoming increasingly clear that the plant's biological clock plays a role in a very large number of processes. Examples are seed germination, elongation growth, photosynthesis, production of plant hormones, enzyme activity, opening and closing of stomata, and the opening of flowers. This also means that plants respond differently to cultivation measures at different times of day. It is a point to take into account with prolonged supplemental lighting, CO2 dosing and the application of other light colours. Much insight still needs to be gained on this point.
A high proportion of red light gives more compact plants: plant length can be steered by combining SON-T with red and blue LEDs
Light colours influence the length and branching of pot and bedding plants, but steering with them is still difficult. Wageningen University has carried out research to gain more insight into the possibilities, using petunia and pot chrysanthemum as model plants.
Overview of VideoCLEF 2009: New perspectives on speech-based multimedia content enrichment
VideoCLEF 2009 offered three tasks related to enriching video content for improved multimedia access in a multilingual environment. For each task, video data (Dutch-language television, predominantly documentaries) accompanied by speech recognition transcripts were provided.
The Subject Classification Task involved automatic tagging of videos with subject theme labels. The best performance was achieved by approaching subject tagging as an information retrieval task and using both speech recognition transcripts and archival metadata. Alternatively, classifiers were trained using either the training data provided or data collected from Wikipedia or via general Web search. The Affect Task involved detecting narrative peaks, defined as points where viewers perceive heightened dramatic tension. The task was carried out on the “Beeldenstorm” collection, containing 45 short-form documentaries on the visual arts. The best runs exploited affective vocabulary and audience-directed speech. Other approaches included using topic changes, elevated speaking pitch, increased speaking intensity and radical visual changes. The Linking Task, also called “Finding Related Resources Across Languages,” involved linking video to material on the same subject in a different language.
Participants were provided with a list of multimedia anchors (short video segments) in the Dutch-language “Beeldenstorm” collection and were expected to return target pages drawn from English-language Wikipedia. The best-performing methods used the transcript of the speech spoken during the multimedia anchor to build a query to search an index of the Dutch-language Wikipedia. The Dutch Wikipedia pages returned were used to identify related English pages. Participants also experimented with pseudo-relevance feedback, query translation and methods that targeted proper names.
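The transcript-to-query retrieval pipeline described above can be sketched as follows. This is a minimal illustration, not the participants' systems: the toy index, page titles and stop-word list are all invented, and the real task ranked pages from a full Dutch Wikipedia index rather than three hand-made documents.

```python
from collections import Counter
import math

# Invented toy "Wikipedia index": page title -> token list.
INDEX = {
    "Rembrandt": "rembrandt painter dutch art painting portrait".split(),
    "Windmill": "windmill wind mill dutch landscape energy".split(),
    "Van Gogh": "van gogh painter art painting sunflowers".split(),
}

def build_query(transcript, top_k=3):
    """Take the most frequent content words of the anchor transcript as query terms."""
    stop = {"the", "a", "of", "and", "in", "is", "this"}  # toy stop list
    words = [w.lower().strip(".,;") for w in transcript.split()]
    words = [w for w in words if w.isalpha() and w not in stop]
    return [w for w, _ in Counter(words).most_common(top_k)]

def rank_pages(query, index):
    """Rank pages by a simple TF-IDF overlap with the query terms."""
    n = len(index)
    df = Counter(t for tokens in index.values() for t in set(tokens))
    def score(tokens):
        tf = Counter(tokens)
        return sum(tf[t] * math.log((n + 1) / (1 + df[t])) for t in query)
    return sorted(index, key=lambda page: score(index[page]), reverse=True)

query = build_query("The painter Rembrandt made this painting, a portrait by Rembrandt.")
best_page = rank_pages(query, INDEX)[0]
```

In the actual task the top-ranked Dutch pages would then be mapped to their English counterparts, e.g. via cross-language links.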
Multimedia, report
The Multimedia project was aimed at producing a number of videos and an interactive programme. Management asked for a project plan for multimedia pursuing the following goals:
* Develop policy and realise facilities for the development, storage and management of multimedia learning materials.
* Use multimedia in considerably more courses than is currently the case.
* Intensify the role of the Open Universiteit in the field of virtual (multimedia) practicals for complex skills.
The project plan was approved and four work packages were drawn up to achieve these goals: work package 1, video productions; work package 2, video management; work package 3, video re-use; and work package 4, video publicity.
Reproducibility of the lung anatomy under Active Breathing Coordinator control: Dosimetric consequences for scanned proton treatments.
Purpose/Objective The treatment of moving targets with scanning proton beams is challenging. By controlling lung volumes, Active Breathing Control (ABC) assists breath-holding for motion mitigation. The delivery of proton treatment fractions often exceeds feasible breath-hold durations, requiring high breath-hold reproducibility. Therefore, we investigated the dosimetric consequences of anatomical reproducibility uncertainties in the lung under ABC, evaluating the robustness of scanned proton treatments during breath-hold. Material/Methods T1-weighted MRIs of five volunteers were acquired during ABC, simulating image acquisition during four subsequent breath-holds within one treatment fraction. Deformation vector fields obtained from these MRIs were used to deform 95% inspiration-phase CTs of three randomly selected non-small-cell lung cancer patients (Figure 1). Per patient, an intensity-modulated proton plan was recalculated on the three deformed CTs to assess the dosimetric influence of anatomical breath-hold inconsistencies. Results Dosimetric consequences were negligible for patients 1 and 2 (Figure 1). For patient 3, one deformed CT showed a decreased volume (95.2%) receiving 95% of the prescribed dose, while the volume receiving 105% of the prescribed dose increased from 0.0% to 9.9%. Furthermore, the heart volume receiving 5 Gy varied by 2.3%. Figure 2 shows dose-volume histograms for all relevant structures in patient 3. Conclusion Based on the studied patients, our findings suggest that variations in breath-hold have a limited effect on the dose distribution for most lung patients. However, for one patient, a significant decrease in target coverage was found for one of the deformed CTs. Therefore, further investigation of dosimetric consequences of intra-fractional breath-hold uncertainties in the lung under ABC is needed.
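The coverage figures quoted in the abstract (volume receiving 95% or 105% of the prescribed dose) are standard dose-volume metrics. A minimal sketch, with invented voxel doses, a hypothetical 60 Gy prescription, and the simplifying assumption that all voxels have equal volume:

```python
def coverage(dose_voxels, prescribed, fraction):
    """Percentage of a structure's volume receiving at least `fraction`
    of the prescribed dose (all voxels assumed to have equal volume)."""
    threshold = fraction * prescribed
    hit = sum(1 for d in dose_voxels if d >= threshold)
    return 100.0 * hit / len(dose_voxels)

# Invented target doses in Gy for six voxels, 60 Gy prescribed.
target = [58.0, 60.1, 61.5, 57.2, 63.5, 59.8]
v95 = coverage(target, prescribed=60.0, fraction=0.95)   # volume with >= ~57 Gy
v105 = coverage(target, prescribed=60.0, fraction=1.05)  # volume with >= ~63 Gy
```

In the study, a drop in the V95-style coverage or a rise in the V105-style hot volume on a deformed CT signals that breath-hold variability has degraded the plan.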
Vitality at the grassroots: a case study of communication, cooperation and decision-making within the Vereniging COS Nederland
Validating and improving the correction of ocular artifacts in electro-encephalography
For modern applications of electro-encephalography, including brain-computer interfaces and single-trial Event-Related Potential detection, it is becoming increasingly important that artifacts are accurately removed from a recorded electro-encephalogram (EEG) without affecting the part of the EEG that reflects cerebral activity. Ocular artifacts are caused by movement of the eyes and the eyelids. They occur frequently in the raw EEG and are often the most prominent artifacts in EEG recordings. Their accurate removal is therefore an important procedure in nearly all electro-encephalographic research. As a result, a considerable number of ocular artifact correction methods have been introduced over the past decades. A selection of these methods, containing some of the most frequently used correction methods, is given in Section 1.5. When two different correction methods are applied to the same raw EEG, this usually results in two different corrected EEGs. A measure for the accuracy of correction should indicate how well each of these corrected EEGs recovers the part of the raw EEG that truly reflects cerebral activity. The fact that this accuracy cannot be determined directly from a raw EEG is intrinsic to the need for artifact removal: if it were possible to derive from a raw EEG an exact reference for what the corrected EEG should be, there would be no need for artifact correction methods at all. Estimating the accuracy of correction methods is mostly done either by using models to simulate EEGs and artifacts, or by manipulating the experimental data in such a way that the effects of artifacts on the raw EEG can be isolated. In this thesis, modeling of EEG and artifact is used to validate correction methods based on simulated data. A new correction method is introduced which, unlike all existing methods, uses a camera to monitor eye(lid) movements as a basis for ocular artifact correction.
The simulated data are used to estimate the accuracy of this new correction method and to compare it against the estimated accuracy of existing correction methods. The results of this comparison suggest that the new method significantly increases correction accuracy compared to the other methods. Next, an experiment is performed based on which the accuracy of correction can be estimated on raw EEGs. The results on this experimental data comply very well with the results on the simulated data. It is therefore concluded that using a camera during EEG recordings provides valuable extra information that can be used in the process of ocular artifact correction. In Chapter 2, a model is introduced that assists in estimating the accuracy of eye-movement artifact correction for simulated EEG recordings. This model simulates EEG and eye movement artifacts simultaneously. For this, the model uses a realistic representation of the head, multiple dipoles to model cerebral and ocular electrical activity, and the boundary element method to calculate changes in electrical potential at different positions on the scalp. With the model, it is possible to simulate different data sets as if they were recorded using different electrode configurations. Signal-to-noise ratios, determined before and after applying six different correction methods, are used to assess the accuracy of these methods for various electrode configurations. The results show that out of the six methods, second-order blind identification (SOBI) and multiple linear regression (MLR) correct most accurately overall, as they achieve the highest rise in signal-to-noise ratio. The occurrence of ocular artifacts is linked to changes in eyeball orientation. In Chapter 2, an eye tracker is used to record pupil position, which is closely linked to eyeball orientation. The pupil position information is used in the model to simulate eye movements.
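The "rise in signal-to-noise ratio" used to compare correction methods can be illustrated with a small sketch. The signals below are invented toy samples, not data from the thesis; everything in the observed signal that deviates from the simulated clean signal counts as noise, and the rise is simply the dB difference before and after correction.

```python
import math

def snr_db(clean, observed):
    """SNR in dB of `observed` relative to the artifact-free `clean` signal."""
    signal_power = sum(c * c for c in clean)
    noise_power = sum((o - c) ** 2 for o, c in zip(observed, clean))
    return 10.0 * math.log10(signal_power / noise_power)

clean = [1.0, -1.0, 1.0, -1.0]         # invented "true" cerebral samples
raw = [c + 0.5 for c in clean]         # constant ocular offset as the artifact
corrected = [c + 0.1 for c in clean]   # imperfect correction leaves a residue
snr_rise = snr_db(clean, corrected) - snr_db(clean, raw)
```

This per-data-set comparison only works on simulated data, where the clean signal is known; that is exactly why the thesis also needs the experimental manipulation of Chapter 5.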
Recognizing the potential benefit of using an eye tracker not only for simulations but also for correction, Chapter 3 introduces an eye-movement artifact correction method that exploits the pupil position information provided by an eye tracker. Other correction methods use the electrooculogram (EOG) and/or the EEG to estimate ocular artifacts. Because both the EEG and the EOG recordings are susceptible to cerebral as well as ocular activity, these other methods risk overcorrecting the raw EEG. Pupil position provides a reference that is linked to the ocular artifact in the EEG but cannot be affected by cerebral activity; as a result, the new correction method avoids traditionally problematic issues such as forward/backward propagation and evaluating the accuracy of component extraction. Using both simulated and experimental data, it is determined how pupil position influences the raw EEG, and this relation is found to be linear or quadratic. A Kalman filter is used to tune the parameters that specify the relation. On simulated data, the new method performs very well, resulting in an SNR after correction of over 10 dB for various patterns of eye movements. When compared to the three methods that performed best in the evaluation of Chapter 2, only the SOBI method, which performed best in that evaluation, shows similar results for some of the eye movement patterns. However, a serious limitation of the new correction method is its inability to correct blink artifacts. In order to increase the variety of applications for which the new method can be used, it should be improved in a way that enables it to correct the raw EEG for blink artifacts as well. Chapter 4 deals with implementing such improvements, based on the idea that a more advanced eye tracker should be able to detect both the pupil position and the eyelid position.
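The regression idea behind using pupil position as an artifact reference can be sketched minimally. This is not the thesis' method: only the linear term is fitted (the thesis also allows a quadratic term), a one-shot least-squares fit stands in for the Kalman-filter parameter tuning, and the signals are invented.

```python
def correct_with_pupil(raw_eeg, pupil):
    """Fit the raw EEG on pupil position by least squares (linear term only)
    and subtract the fitted artifact estimate. Because pupil position carries
    no cerebral activity, the subtracted component cannot systematically
    remove brain signal, which is the point of the approach."""
    n = len(pupil)
    mean_p = sum(pupil) / n
    mean_e = sum(raw_eeg) / n
    cov = sum((p - mean_p) * (e - mean_e) for p, e in zip(pupil, raw_eeg))
    var = sum((p - mean_p) ** 2 for p in pupil)
    gain = cov / var                     # artifact amplitude per unit pupil shift
    offset = mean_e - gain * mean_p
    return [e - (gain * p + offset) for e, p in zip(raw_eeg, pupil)]

# Invented data: a purely pupil-driven artifact should be removed almost exactly.
pupil = [-1.0, -0.5, 0.0, 0.5, 1.0]
artifact = [2.0 * p + 0.3 for p in pupil]
residual = correct_with_pupil(artifact, pupil)
```

Note that this sketch inherits the blink limitation discussed above: eyelid artifacts are not a function of pupil position, so a separate eyelid reference is needed, which is what Chapter 4 adds.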
The improved eye tracker-based ocular artifact correction method is named EYE. Driven by some practical limitations of the eye tracking device currently available to us, an alternative way to estimate eyelid position is suggested, based on an EOG recorded above one eye. The EYE method can be used with either the eye tracker information or this EOG substitute. On simulated data, the accuracy of the EYE method is estimated using the EOG-based eyelid reference and again compared against the six other correction methods. Two different SNR-based measures of accuracy are proposed: one quantifies the correction of the entire simulated data set, and the other focuses on the segments containing simulated blink artifacts. After applying EYE, an average SNR of at least 9 dB is achieved for both measures, implying that the power of the corrected signal is at least eight times the power of the remaining noise. The simulated data sets contain a wide range of eye movements and blink frequencies. For almost all of these data sets, 16 out of 20, the correction results for EYE are better than for any of the other evaluated correction methods. On experimental data, the EYE method appears to correct adequately for ocular artifacts as well. As the detection of eyelid position from the EOG is in principle inferior to detection with an eye tracker, these results should also be considered an indicator of the even higher accuracies that could be obtained with a more advanced eye tracker. Considering the simplicity of the MLR method, that method also performs remarkably well, which may explain why EOG-based regression is still often used for correction. In Chapter 5, the simulation model of Chapter 2 is put aside and, alternatively, experimentally recorded data are manipulated in a way that highlights correction inaccuracies.
Correction accuracies of eight correction methods, including EYE, are estimated based on data recorded during stop-signal tasks. In the analysis of these tasks it is essential that ocular artifacts are adequately removed, because the task-related ERPs are located mostly at frontal electrode positions and have low amplitudes. These data are corrected and subsequently evaluated. For the eight methods, the overall ranking of estimated accuracy in Figure 5.3 corresponds very well with the correction accuracy of these methods on simulated data as found in Chapter 4. In a single-trial correction comparison, the results suggest that the EYE-corrected EEG is not susceptible to overcorrection, whereas the other corrected EEGs are.
