99 research outputs found

    Lichens from the Vadstena Monastery churchyard – the burial place of Eric Acharius

    A list of 120 taxa observed at the Vadstena Monastery churchyard includes some rare species and a few lichenicolous fungi. Lecanora semipallida is reported from the province of Östergötland [Ostrogothia] for the first time.

    Automating model building in ligand-based predictive drug discovery using the Spark framework

    No full text
    Automation of model building enables new predictive models to be generated in a faster, easier and more straightforward way once new data to predict on becomes available. Automation can also reduce the tedious bookkeeping generally needed in manual workflows (e.g. intermediate files that must be passed between steps). The applicability of the Spark framework to building pipelines for predictive drug discovery was evaluated here and resulted in the implementation of two pipelines that serve as a proof of concept. Spark is considered to provide good means of creating pipelines for pharmaceutical purposes, and its high-level approach to distributed computing reduces the effort required of the developer compared to a regular HPC implementation.
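
    As a rough illustration of the kind of pipeline described above, the following sketch assembles a minimal Spark ML pipeline in PySpark that can simply be re-run whenever new assay data arrives. The input path, descriptor column names and the choice of logistic regression are hypothetical placeholders, not the pipelines implemented in the thesis.

    # Minimal sketch of an automatically re-trainable ligand-based model in PySpark.
    # The input path and column names ("descriptor_*", "active") are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("ligand-model-building").getOrCreate()

    # New data can simply be dropped into this location; rerunning the job rebuilds the model.
    data = spark.read.parquet("hdfs:///data/assay_results.parquet")

    assembler = VectorAssembler(
        inputCols=["descriptor_1", "descriptor_2", "descriptor_3"],  # molecular descriptors
        outputCol="features",
    )
    classifier = LogisticRegression(featuresCol="features", labelCol="active")

    # The Pipeline object captures every step, so no intermediate files
    # need to be passed around manually between stages.
    pipeline = Pipeline(stages=[assembler, classifier])
    model = pipeline.fit(data)

    # Persist the fitted model so downstream prediction jobs can load it.
    model.write().overwrite().save("hdfs:///models/ligand_activity_model")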

    Search Engine Optimization and internet marketing

    No full text
    Search engine optimization (SEO) has become a very important part of how companies and businesses build their online profile. SEO refers to the changes made to a website in order to improve its position in search engine results and thereby gain greater exposure. These changes can be divided into "on-site" and "off-site" work. On-site changes mean modifying the HTML code to strategically place keywords that are important for the business, and making the pages as user-friendly as possible. Off-site strategies consist largely of ways to acquire relevant inbound links to the page. This is one of the most important signals search engines use to judge whether content is important: a site with many relevant inbound links is considered more trustworthy and rises in the rankings. Both on-site and off-site optimization improve a web page's chance of ranking higher in search results. This thesis covers the basic functioning of search engines and what can be done on web pages to improve their ranking. It also covers the concept of "keywords": what they mean to search engines and how to find them, use them and place them on a web page.
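
    To make the on-site part concrete, the sketch below (Python, using only the standard library's html.parser) checks whether a chosen keyword appears in the places mentioned above: the title, the meta description and a heading. The sample page and the keyword are made-up examples, not material from the thesis.

    # Sketch: report where a target keyword appears on a page (title, meta description, h1).
    # The sample HTML and the keyword are hypothetical.
    from html.parser import HTMLParser

    class KeywordAudit(HTMLParser):
        def __init__(self, keyword):
            super().__init__()
            self.keyword = keyword.lower()
            self.current_tag = None
            self.hits = {"title": False, "meta description": False, "h1": False}

        def handle_starttag(self, tag, attrs):
            self.current_tag = tag
            if tag == "meta":
                attrs = dict(attrs)
                if attrs.get("name") == "description" and self.keyword in (attrs.get("content") or "").lower():
                    self.hits["meta description"] = True

        def handle_endtag(self, tag):
            self.current_tag = None

        def handle_data(self, data):
            if self.current_tag == "title" and self.keyword in data.lower():
                self.hits["title"] = True
            if self.current_tag == "h1" and self.keyword in data.lower():
                self.hits["h1"] = True

    page = """<html><head><title>Search Engine Optimization basics</title>
    <meta name="description" content="An intro to search engine optimization."></head>
    <body><h1>Why search engine optimization matters</h1></body></html>"""

    audit = KeywordAudit("search engine optimization")
    audit.feed(page)
    print(audit.hits)  # {'title': True, 'meta description': True, 'h1': True}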

    Actors and higher order functions: A Comparative Study of Parallel Programming Language Support for Bioinformatics

    No full text
    Parallel programming can sometimes be a tedious task when dealing with problems like race conditions and synchronization. Functional programming can greatly reduce the complexity of parallelization by removing side effects and mutable variables, eliminating the need for locks and synchronization. This thesis assesses the applicability of functional programming and the actor model using the field of bioinformatics as a case study, focusing on genome assembly. Functional programming is found to provide parallelization at a high abstraction level in some cases, but in most of the program there is no way to parallelize without adding synchronization and non-pure functional code. The actor model facilitates parallelization of a greater part of the program but increases program complexity due to communication and synchronization between actors. Neither approach gave efficient speedup, because the implemented algorithm proved to be memory bound. Shared-memory parallelization thus turned out to be inefficient, and distributed implementations are needed to achieve speedup for genome assemblers.
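
    To illustrate what parallelism expressed through a higher-order function looks like, the sketch below counts k-mers (the basic unit most genome assemblers work with) across reads using Python's multiprocessing.Pool.map. The toy reads and k value are hypothetical, and, as the abstract notes, a memory-bound step like the final merge limits the speedup that shared-memory parallelism alone can give.

    # Sketch: parallelism expressed through a higher-order function (map) rather than
    # explicit locks. The reads and K are hypothetical toy data.
    from collections import Counter
    from functools import reduce
    from multiprocessing import Pool

    K = 4  # k-mer length

    def count_kmers(read):
        """Pure function: one read in, one Counter of its k-mers out (no shared state)."""
        return Counter(read[i:i + K] for i in range(len(read) - K + 1))

    if __name__ == "__main__":
        reads = ["ACGTACGTGACC", "TTGACCACGTAC", "ACGTGACCTTGA"]

        # The parallelism lives entirely in the higher-order call: map over reads.
        with Pool() as pool:
            partial_counts = pool.map(count_kmers, reads)

        # Merging the per-read counts is the sequential, memory-bound part.
        total = reduce(lambda a, b: a + b, partial_counts, Counter())
        print(total.most_common(3))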

    Confidence Predictions in Pharmaceutical Sciences

    No full text
    The main focus of this thesis has been on Quantitative Structure-Activity Relationship (QSAR) modeling using methods that produce valid measures of uncertainty. The goal of QSAR is to prospectively predict the outcome of assays, such as ADMET (Absorption, Distribution, Metabolism, Excretion), toxicity and on- and off-target interactions, for novel compounds. QSAR modeling offers an appealing alternative to laboratory work, which is both costly and time-consuming, and can be applied earlier in the development process since candidate drugs can be tested in silico without having to synthesize them first. A common theme across the presented papers is the application of conformal and probabilistic prediction models, which associate predictions with a measure of their reliability, a desirable property that is essential at the stage of decision making. In Paper I we studied approaches to utilizing biological assay data from legacy systems in order to improve predictive models. This is otherwise problematic, since mixing data from separate systems causes issues for most machine learning algorithms. We demonstrated that old data could be used to augment the proper training set of a conformal predictor to yield more efficient predictions while preserving model calibration. In Paper II we studied a new approach to predicting metabolic transformations of small molecules based on transformations encoded in SMIRKS format. In this work we used the probabilistic Cross-Venn-ABERS predictor, which overall worked well but had difficulty modeling the minority class of imbalanced datasets. In Paper III we studied metabolomics data from patients diagnosed with Multiple Sclerosis and found a set of 15 discriminatory metabolites that could be used to classify patients from a validation cohort into one of two subtypes of the disease with high accuracy. We further demonstrated that conformal prediction could be useful for tracking the progression of the disease in individual patients, which we exemplified using data from a clinical trial. In Paper IV we introduced CPSign, a software package for cheminformatics modeling using conformal and probabilistic methods. CPSign was compared against other regularly used methods for this task using 32 benchmark datasets, demonstrating that it achieves predictive accuracy on par with the best-performing methods.
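
    For readers unfamiliar with the method, the sketch below shows the basic inductive (split) conformal classifier that this kind of modeling builds on: a held-out calibration set turns the underlying model's scores into p-values, and the prediction set at significance level epsilon contains every label whose p-value exceeds epsilon. It uses scikit-learn and synthetic data purely for illustration and is not CPSign's implementation.

    # Sketch of an inductive (split) conformal classifier, not CPSign itself.
    # Synthetic data and a random forest are used purely for illustration.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
    X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

    # 1. Fit an underlying model on the proper training set.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # 2. Nonconformity on the calibration set: 1 - predicted probability of the true class.
    cal_prob = model.predict_proba(X_cal)
    cal_scores = 1.0 - cal_prob[np.arange(len(y_cal)), y_cal]

    def prediction_set(x, epsilon=0.2):
        """Return every label whose conformal p-value exceeds the significance level."""
        prob = model.predict_proba(x.reshape(1, -1))[0]
        labels = []
        for label, p_label in enumerate(prob):
            score = 1.0 - p_label
            p_value = (np.sum(cal_scores >= score) + 1) / (len(cal_scores) + 1)
            if p_value > epsilon:
                labels.append(label)
        return labels

    print(prediction_set(X_test[0]))  # e.g. a single-label set for a confident prediction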