Bioinformatics and Brain Imaging
This chapter reviews some exciting new techniques for analyzing brain imaging data. We describe computer algorithms that can discover patterns of brain structure and function associated with Alzheimer's disease, schizophrenia, and normal and abnormal brain development, based on imaging data collected in large human populations. Extraordinary patterns can be discovered with these techniques: dynamic brain maps reveal how the brain grows in childhood, how it changes in disease, and how it responds to medication. Genetic brain maps reveal which aspects of brain structure are inherited, shedding light on the nature/nurture debate, and identify deficit patterns in those at genetic risk for disease. Probabilistic brain atlases now store thousands of these brain maps, models, and images, collected with an array of imaging devices (MRI/fMRI, PET, 3D cryosection imaging, histology). These atlases capture how the brain varies with age, gender, and demographics, and how it changes in disease, and they relate these variations to cognitive, therapeutic, and genetic parameters. With the appropriate computational tools, these atlases can be stratified to create average maps of brain structure in different diseases, revealing unsuspected features. We describe the tools to interact with these atlases and review some of the technical and conceptual challenges in comparing brain data across large populations, highlighting some key neuroscience applications.
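The stratified-averaging step mentioned above can be illustrated with a minimal sketch. Assuming the scans have already been spatially registered to a common template (registration itself is a substantial pipeline, not shown), a disease-stratified average map reduces to a voxelwise mean per group; the function name, array layout, and group labels below are hypothetical.

```python
import numpy as np

def stratified_average_maps(volumes, labels):
    """Voxelwise mean brain map per group label (illustrative sketch).

    volumes : (n_subjects, x, y, z) array of co-registered scans
    labels  : length-n_subjects sequence of group labels (e.g. diagnoses)
    Returns a dict mapping each label to its group-average volume.
    """
    volumes = np.asarray(volumes, dtype=float)
    labels = np.asarray(labels)
    return {
        group: volumes[labels == group].mean(axis=0)
        for group in np.unique(labels)
    }

# Toy usage: six tiny 4x4x4 "scans" in two hypothetical groups.
rng = np.random.default_rng(0)
vols = rng.normal(size=(6, 4, 4, 4))
groups = ["AD", "AD", "AD", "control", "control", "control"]
maps = stratified_average_maps(vols, groups)
print(maps["AD"].shape)  # (4, 4, 4): average map for the "AD" group
```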
An Overview of Modelling and Simulation Activities for an All-Electric Nose Wheel Steering System
Prehabilitation in cancer patients undergoing major abdominal surgery: what is the current evidence?
Nonparametric testing for a monotone hazard function via normalized spacings
We study the problem of testing whether a hazard function is monotone. The proposed test statistics, a global test and four localized tests, are all based on normalized spacings. The global test is in fact the statistic of [Proschan, F. and Pyke, R. (1967). Tests for monotone failure rate. Fifth Berkeley Symposium, 3, 293-313], introduced for testing a constant hazard function against a nondecreasing, nonconstant hazard function. The global test is powerful for detecting global departures from the null hypothesis, but lacks power against local departures. By localizing the global test, we obtain tests that address this drawback. We also show how the testing procedures can be used with Type II censored data. We evaluate the performance of the test statistics via simulation studies and illustrate them on several data sets.
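As a minimal sketch of the global test only (not the localized variants), the Proschan-Pyke statistic counts pairwise inversions among the normalized spacings. The function below assumes complete, uncensored, nonnegative lifetimes and uses the standard normal approximation to the null inversion-count distribution; all names are illustrative.

```python
import numpy as np
from math import erf, sqrt

def proschan_pyke(x):
    """Global test of constant vs. nondecreasing hazard via
    normalized spacings (sketch of the Proschan-Pyke statistic).

    x : complete, nonnegative lifetimes.
    Returns (V, z, one-sided p-value).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Normalized spacings D_i = (n - i + 1)(X_(i) - X_(i-1)), X_(0) = 0;
    # under a constant hazard these are iid exponential.
    gaps = np.diff(np.concatenate(([0.0], x)))
    d = (n - np.arange(n)) * gaps
    # V counts pairs i < j with D_i > D_j: an increasing hazard makes
    # later spacings shrink, which inflates the inversion count V.
    V = int(sum(np.sum(d[i] > d[i + 1:]) for i in range(n - 1)))
    # Under H0, V follows the inversion-count (Kendall-type) law:
    mean = n * (n - 1) / 4
    var = n * (n - 1) * (2 * n + 5) / 72
    z = (V - mean) / sqrt(var)
    p = 0.5 * (1 - erf(z / sqrt(2)))  # upper-tail normal approximation
    return V, z, p

# Toy usage: Weibull(shape=2) has an increasing hazard, so the test
# should reject; exponential data (constant hazard) should not.
rng = np.random.default_rng(1)
print(proschan_pyke(rng.weibull(2.0, size=100)))
print(proschan_pyke(rng.exponential(size=100)))
```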
Oral Administration of Antitoxin A Chicken Egg-Yolk IgY in Clostridium difficile Colitis
Variability@ER’11 - Workshop on Software Variability
As software requirements constantly increase in size and complexity, the need for methods, formalisms, techniques, tools, and languages for managing and evolving software artifacts becomes crucial. One way to manage variability in a rapidly growing variety of software products is to develop and maintain families of software products rather than individual products. Variability management is concerned with controlling the versions and the possible variants of software systems. It has gained special interest in various software-related areas and in different phases of the software development lifecycle, including conceptual modeling, product line engineering, feature analysis, software reuse, configuration management, generative programming, and programming language design. In the context of conceptual modeling, the terminology of variability management has been investigated, yielding ontologies, modeling languages, and classification frameworks. In software product line engineering and feature analysis, methods for developing core assets and efficiently using them in particular contexts have been introduced. In the software reuse and configuration management fields, different mechanisms for reusing software artifacts and managing software versions have been proposed, including adoption, specialization, controlled extension, parameterization, configuration, generation, template instantiation, analogy construction, assembly, and so on. Finally, generative programming deals with developing programs that synthesize or generate other programs, while programming language design provides techniques for expressing and exploiting the commonality of source code artifacts as well as for specifying their allowed or potential variability, whether static or dynamic. The purpose of this workshop is to promote the theme of variability management from all or part of these different perspectives, identifying possible points of synergy, common problems and solutions, and visions for the future of the area.
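Several of the variability mechanisms listed above (notably parameterization and configuration against a feature model) can be hinted at in a few lines of code. The sketch below is purely illustrative and not drawn from any workshop paper: a toy feature model checks a product configuration before the corresponding variant would be assembled; all feature names and constraint kinds are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureModel:
    """Toy feature model: mandatory/optional features plus simple
    cross-tree constraints (requires/excludes). Illustrative only."""
    mandatory: set = field(default_factory=set)
    optional: set = field(default_factory=set)
    requires: dict = field(default_factory=dict)  # feature -> required feature
    excludes: set = field(default_factory=set)    # frozensets of feature pairs

    def validate(self, config: set) -> list:
        """Return all constraint violations for a product configuration."""
        errors = [f"missing mandatory feature: {f}"
                  for f in sorted(self.mandatory - config)]
        errors += [f"unknown feature: {f}"
                   for f in sorted(config - self.mandatory - self.optional)]
        errors += [f"{a} requires {b}"
                   for a, b in self.requires.items()
                   if a in config and b not in config]
        errors += [f"{sorted(pair)} are mutually exclusive"
                   for pair in self.excludes if pair <= config]
        return errors

# Toy product line: an editor with optional spell-checking variants.
fm = FeatureModel(
    mandatory={"core"},
    optional={"spellcheck", "dict_en", "dict_de", "cloud_sync"},
    requires={"spellcheck": "dict_en"},
    excludes={frozenset({"dict_de", "cloud_sync"})},
)
print(fm.validate({"core", "spellcheck"}))  # ['spellcheck requires dict_en']
```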
