
    Dissection of a novel molecular determinant mediating Golgi to trans-Golgi network transition

    Get PDF
    Abstract: Two major functions of the Golgi apparatus (GA) are the formation of complex glycans and the sorting of proteins destined for various subcellular compartments or secretion. To fulfill these tasks, proper localization of the accessory proteins within the different sub-compartments of the GA is crucial. Here we investigate structural determinants mediating transition of the two glycosyltransferases β-1,4-galactosyltransferase 1 (gal-T1) and α-1,3-fucosyltransferase 6 (fuc-T6) from the trans-Golgi cisterna to the trans-Golgi network (TGN). Upon treatment with the ionophore monensin, both glycosyltransferases are found in TGN-derived swollen vesicles, as determined by confocal fluorescence microscopy and density gradient fractionation. Both enzymes carry a signal, consisting of the amino acids E5P6 in gal-T1 and D2P3 in fuc-T6, necessary for the transition of these glycosyltransferases from the trans-Golgi cisterna to the TGN, but not for their steady-state localization in the trans-Golgi cisterna.

    Novel statistical approaches for non-normal censored immunological data: analysis of cytokine and gene expression data

    Get PDF
    Background: For several immune-mediated diseases, immunological analysis will become more complex in the future, with datasets in which cytokine and gene expression data play a major role. These data have certain characteristics that require sophisticated statistical analysis, such as strategies for non-normal distributions and censoring. Additionally, complex and multiple immunological relationships need to be adjusted for potential confounding and interaction effects. Objective: We aimed to introduce and apply different methods for the statistical analysis of non-normal censored cytokine and gene expression data. Furthermore, we assessed the performance and accuracy of a novel regression approach that allows adjusting for covariates and potential confounding. Methods: For non-normally distributed censored data, traditional methods such as the Kaplan-Meier method or the generalized Wilcoxon test are described. In order to adjust for covariates, the novel approach named Tobit regression on ranks was introduced. Its performance and accuracy for the analysis of non-normal censored cytokine/gene expression data were evaluated by a simulation study and a statistical experiment applying permutation and bootstrapping. Results: If adjustment for covariates is not necessary, traditional statistical methods are adequate for non-normal censored data. Tobit regression on ranks is a valid method, comparable with these traditional approaches and appropriate when additional adjustment is required; its power, type-I error rate and accuracy were comparable to those of the classical Tobit regression. Conclusion: Non-normally distributed censored immunological data require appropriate statistical methods. Tobit regression on ranks meets these requirements and can be used to adjust for covariates and potential confounding in large and complex immunological datasets.
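    To make the regression approach concrete, here is a minimal Python sketch of a left-censored Gaussian Tobit likelihood fitted to rank-transformed data. It is an illustration under our own assumptions (a hypothetical design matrix X and detection-limit censoring flags), not the authors' implementation:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm, rankdata

    def tobit_negloglik(params, y, X, censored, c):
        # Gaussian Tobit: y = max(latent, c); `censored` flags values at the limit c
        beta, log_sigma = params[:-1], params[-1]
        sigma = np.exp(log_sigma)
        mu = X @ beta
        ll = np.where(censored,
                      norm.logcdf((c - mu) / sigma),              # P(latent <= c)
                      norm.logpdf((y - mu) / sigma) - log_sigma)  # density of observed y
        return -ll.sum()

    def tobit_on_ranks(y, X, censored):
        # rank-transform the outcome; censored values share the lowest (tied) rank
        r = rankdata(y)
        c = r[censored].max()
        x0 = np.zeros(X.shape[1] + 1)  # coefficients plus log(sigma)
        return minimize(tobit_negloglik, x0, args=(r, X, censored, c), method="BFGS").x
    ```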

    Taking reasonable pluralism seriously: an internal critique of political liberalism

    Get PDF
    The later Rawls attempts to offer a non-comprehensive, but nonetheless moral, justification in political philosophy. Many critics of political liberalism doubt that this is successful, but Rawlsians often complain that such criticisms rely on the unwarranted assumption that one cannot offer a moral justification other than by taking a philosophically comprehensive route. In this article, I internally criticize the justification strategy employed by the later Rawls. I show that he cannot offer us good grounds for the rational hope that citizens will assign political values priority over non-political values in cases of conflict about political matters. I also suggest an alternative approach to justification in political philosophy (that is, a weak realist, Williams-inspired account) that better respects the later Rawls’s concern with non-comprehensiveness and pluralism than either his own view or more comprehensive approaches. Thus, if we take reasonable pluralism seriously, then we should adopt what Shklar aptly called ‘liberalism of fear’.

    Monitoring and predictive modelling of estuarine benthic macrofauna and their relevance to resource management problems

    Get PDF
    Practical considerations in estuarine management, as well as prediction of the consequences of global change for coastal protection, urgently require a better understanding and better modeling of estuarine ecosystems as influenced by ecological, physical, chemical and morphological processes. Recent Dutch examples of such questions are the impact of enhanced dredging in the Schelde estuary, the impact of sea level rise on the Wadden Sea and Delta area, and concerns about the loss of salt marsh habitats. Benthic communities are good indicators of biotic integrity and reflect the present state of the estuarine ecosystem, and the analysis of benthic infauna is a key element of many marine and estuarine monitoring programs. In the Dutch Delta area (SW Netherlands) there is a relatively long tradition of estuarine macrozoobenthos monitoring, implemented for example in the BIOMON program. This program was designed to detect long-term trends in the average density, biomass and species composition of large parts of different systems (e.g. the Schelde estuary, Oosterschelde, Grevelingen), in order to gain insight into the natural development of estuarine and coastal areas and the anthropogenic influences on these systems. Running now for over a decade, these programs, together with other field campaigns, provide a unique data set on benthic macrofauna (for the Schelde estuary alone, over 5000 samples are currently available). Until recently these data were hardly processed or used for further analysis, yet such data sets offer the opportunity to analyze and predict patterns in the occurrence of benthic macrofauna in a much more profound way. Recently, in a cooperation between decision makers (Rijkswaterstaat, Directie Zeeland), advisers (RIKZ) and scientists (NIOO-CEMO), the possibilities and limitations of using these data sets for predicting benthic macrofauna at scales relevant to resource management problems have been evaluated. In our approach we use different statistical methodologies to quantify, model and predict patterns at different spatial and temporal scales, from patterns on a single tidal flat to inter-estuary comparisons and from monthly patterns to decadal trends.
    Several examples illustrate the use of these data, ranging from simple classification techniques to more sophisticated predictive modeling. Changes and shifts in benthos communities are shown for a land reclamation area of Rotterdam harbour in the Haringvliet delta using classification techniques. Ordination analysis of the saline lake Grevelingen, a former estuary, showed long-term changes in macrobenthic community structure as a consequence of changes in salinity, light penetration and other factors; this case study is dealt with in more detail in a separate contribution. In the Schelde estuary, a detailed study was performed to unravel the use of environmental data in predicting benthic macrofaunal species distributions at different spatial scales, from a single tidal flat to the whole estuary. Statistical techniques such as geostatistics, hierarchical analysis and logistic regression were applied. At these different scales a distinct relation was observed between the environment (e.g. salinity, sediment characteristics) on the one hand and macrofaunal species distributions on the other. As a consequence, macrofaunal distributions within the Schelde estuary can be predicted quite successfully from environmental data.
    An inter-estuary comparison between the Schelde estuary and the Oosterschelde revealed that predictive models should also incorporate system-wide properties of estuarine systems, such as primary production and suspended matter concentrations, in order to perform in a more generic way. The results clearly demonstrate the value of these data for making more sensible long-term decisions about matters having direct environmental effects. They also provide information on how the design of monitoring programs could be improved or optimized, depending on the questions asked. As such, a more synergistic and flexible approach is urgently needed, in which decision makers, advisers and scientists communicate more efficiently.
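    As a small illustration of the logistic-regression step mentioned above, the sketch below fits presence/absence of a species to two hypothetical environmental predictors (salinity and median grain size) on synthetic data; it is a toy example, not one of the Schelde models:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 500
    salinity = rng.uniform(0.0, 30.0, n)      # PSU, hypothetical range
    grain_size = rng.uniform(50.0, 300.0, n)  # median grain size (um), hypothetical

    # synthetic presence/absence, driven mainly by salinity for illustration
    logit = 0.3 * (salinity - 15.0) - 0.01 * (grain_size - 150.0)
    presence = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([salinity, grain_size])
    model = LogisticRegression().fit(X, presence)
    print(model.predict_proba([[20.0, 120.0]]))  # [P(absent), P(present)] at a new site
    ```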

    Genomics in cardiac metabolism

    Get PDF
    Cell biology is in transition from reductionism to a more integrated science. Large-scale analyses of genome structure, gene expression and metabolites are new technologies available for studying cardiac metabolism in diseases known to modify cardiac function. These technologies have several limitations, and this review aims to assess and take a critical look at some important results obtained in genomics, restricted here to molecular genetics, transcriptomics and metabolomics of cardiac metabolism, in pathophysiological processes known to alter myocardial function. Our goal was therefore to delineate new signalling pathways and new areas of research from the vast amount of data already published on genomics as applied to cardiac metabolism in conditions such as coronary heart disease, heart failure and ischaemic reperfusion.

    Swarm Keeping Strategies for Spacecraft under J_2 and Atmospheric Drag Perturbations

    Get PDF
    This paper presents several new open-loop guidance methods for spacecraft swarms composed of hundreds to thousands of agents, each spacecraft having modest capabilities. These methods have three main goals: preventing relative drift of the swarm, preventing collisions within the swarm, and minimizing the propellant used throughout the mission. The development of these methods progresses by eliminating drift using the Hill-Clohessy-Wiltshire equations, removing drift due to nonlinearity, and minimizing the J_2 drift. In order to verify these guidance methods, a new dynamic model for the relative motion of spacecraft is developed. These dynamics include the two main disturbances for spacecraft in Low Earth Orbit (LEO): J_2 and atmospheric drag. Using this dynamic model, numerical simulations are provided at each step to show the effectiveness of each method and to identify where improvements can be made. The main result is a set of initial conditions for each spacecraft in the swarm which provides trajectories for hundreds of collision-free orbits in the presence of J_2. Finally, a multi-burn strategy is developed in order to provide hundreds of collision-free orbits under the influence of atmospheric drag. This last method works by enforcing the initial conditions multiple times throughout the mission, thereby providing collision-free trajectories for the duration of the mission.
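    For reference, the Hill-Clohessy-Wiltshire equations that the drift-elimination step starts from take the standard form below, written for a circular reference orbit with mean motion n (x radial, y along-track, z cross-track):

    ```latex
    \ddot{x} - 2n\dot{y} - 3n^{2}x = 0, \qquad
    \ddot{y} + 2n\dot{x} = 0, \qquad
    \ddot{z} + n^{2}z = 0
    ```

    In this linearized model the secular along-track drift vanishes when the initial conditions satisfy \dot{y}(0) = -2n\,x(0); constraints of this kind are what the paper then refines to account for nonlinearity and J_2.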

    Disagreeable Privacy Policies: Mismatches between Meaning and Users’ Understanding

    Get PDF
    Privacy policies are verbose, difficult to understand, take too long to read, and may be the least-read items on most websites, even as users express growing concerns about information collection practices. For all their faults, though, privacy policies remain the single most important source of information for users attempting to learn how companies collect, use, and share data. Likewise, these policies form the basis for the self-regulatory notice and choice framework that is designed and promoted as a replacement for regulation. The underlying value and legitimacy of notice and choice depend, however, on the ability of users to understand privacy policies. This paper investigates the differences in interpretation among expert, knowledgeable, and typical users and explores whether those groups can understand the practices described in privacy policies at a level sufficient to support rational decision-making. The paper seeks to fill an important gap in the understanding of privacy policies through primary research on user interpretation and to inform the development of technologies combining natural language processing, machine learning, and crowdsourcing for policy interpretation and summarization. For this research, we recruited a group of law and public policy graduate students at Fordham University, Carnegie Mellon University, and the University of Pittsburgh (“knowledgeable users”) and presented these law and policy researchers with a set of privacy policies from companies in the e-commerce and news & entertainment industries. We asked them nine basic questions about the policies’ statements regarding data collection, data use, and retention. We then presented the same set of policies to a group of privacy experts and to a group of non-expert users. The findings show areas of common understanding across all groups for certain data collection and deletion practices, but also demonstrate very important discrepancies in the interpretation of privacy policy language, particularly with respect to data sharing. The discordant interpretations arose both within groups and between the experts and the two other groups. The presence of these significant discrepancies has critical implications. First, the common understanding of some attributes of described data practices means that semi-automated extraction of meaning from website privacy policies may be able to assist typical users and improve the effectiveness of notice by conveying the true meaning to users. However, the disagreements among experts and between experts and the other groups show that ambiguous wording in typical privacy policies undermines the ability of those policies to effectively convey notice of data practices to the general public. The results of this research will, consequently, have significant policy implications for the construction of the notice and choice framework and for the US reliance on this approach. The gap in interpretation indicates that privacy policies may be misleading the general public and that those policies could be considered legally unfair and deceptive. And where websites do not effectively convey privacy policies to consumers in a way that a “reasonable person” could, in fact, understand, “notice and choice” fails as a framework. Such a failure has broad international implications, since websites extend their reach beyond the United States.

    Search for Kosterlitz-Thouless transition in a triangular Ising antiferromagnet with further-neighbour ferromagnetic interactions

    Full text link
    We investigate an antiferromagnetic triangular Ising model with anisotropic ferromagnetic interactions between next-nearest neighbours, originally proposed by Kitatani and Oguchi (J. Phys. Soc. Japan 57, 1344 (1988)). The phase diagram as a function of temperature and of the ratio between first- and second-neighbour interaction strengths is thoroughly examined. We search for a Kosterlitz-Thouless transition to a state with algebraic decay of correlations, calculating the correlation lengths on strips of width up to 15 sites by transfer-matrix methods. Phenomenological renormalization, conformal invariance arguments, the Roomany-Wyld approximation and a direct analysis of the scaled mass gaps are used. Our results provide limited evidence that a Kosterlitz-Thouless phase is present. Alternative scenarios are discussed. Comment: 10 pages, RevTeX 3; 11 Postscript figures (uuencoded); to appear in Phys. Rev. E (1995).
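    To illustrate the transfer-matrix step, here is a minimal Python sketch that extracts a correlation length from the two leading transfer-matrix eigenvalues. It is written for a plain square-lattice ferromagnetic Ising strip with periodic transverse boundaries; the paper's triangular antiferromagnet with next-nearest-neighbour couplings requires a more elaborate matrix:

    ```python
    import numpy as np

    def transfer_matrix(W, J=1.0, T=2.5):
        """Column-to-column transfer matrix of an Ising strip of width W
        (square lattice, ferromagnetic J > 0, periodic transverse boundaries)."""
        n = 2 ** W
        # all 2^W spin configurations of one column, encoded as +/-1 vectors
        spins = np.array([[1 if (s >> i) & 1 else -1 for i in range(W)]
                          for s in range(n)])
        beta = 1.0 / T
        Tm = np.empty((n, n))
        for a in range(n):
            for b in range(n):
                e_intra = -J * np.dot(spins[b], np.roll(spins[b], 1))  # bonds within column b
                e_inter = -J * np.dot(spins[a], spins[b])              # bonds between columns
                Tm[a, b] = np.exp(-beta * (e_intra + e_inter))
        return Tm

    def correlation_length(W, J=1.0, T=2.5):
        # xi = 1 / ln(lambda_1 / lambda_2), from the two leading eigenvalues
        ev = np.sort(np.abs(np.linalg.eigvals(transfer_matrix(W, J, T))))[::-1]
        return 1.0 / np.log(ev[0] / ev[1])

    print(correlation_length(W=4))  # grows as T approaches the bulk critical point
    ```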

    Antiferromagnetic interlayer exchange coupling across an amorphous metallic spacer layer

    Full text link
    By means of the magneto-optical Kerr effect, we observe for the first time antiferromagnetic coupling between ferromagnetic layers across an amorphous metallic spacer layer. Biquadratic coupling occurs at the transition from a ferromagnetically to an antiferromagnetically coupled region. Scanning tunneling microscopy images of all involved layers are used to extract thickness fluctuations and to verify the amorphous state of the spacer. The observed antiferromagnetic coupling behavior is explained by RKKY interaction taking into account the amorphous structure of the spacer material. Comment: Typeset using RevTeX, 4 pages with 4 figures (.eps).
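    For context, the standard asymptotic form of oscillatory RKKY-type interlayer coupling across a crystalline metallic spacer of thickness D is sketched below (k_F is the spacer Fermi wavevector; the amplitude J_0 and phase \phi are material-dependent). This is textbook background, not the authors' formula; their analysis adapts the RKKY picture to the amorphous structure of the spacer:

    ```latex
    J(D) \approx J_{0}\,\frac{\sin(2 k_{F} D + \phi)}{D^{2}}
    ```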

    A transboundary transport episode of nitrogen dioxide as observed from GOME and its impact in the Alpine region

    Get PDF
    High tropospheric NO₂ amounts are occasionally detected by space-borne spectrometers above cloudy scenes. For monitoring of near-ground air pollution such data are not directly applicable, because clouds shield the highly polluted planetary boundary layer (PBL). We present a trajectory-based method which implicitly estimates the additional sub-cloud NO₂ distribution in order to model concentrations at ground stations. The method is applied to a transboundary pollution transport episode which led to high NO₂ vertical tropospheric column densities (VTCs) over middle Europe, observed by the Global Ozone Monitoring Experiment (GOME) instrument above clouds on 17 February 2001. The case study shows that pollution originally residing near the ground in central Germany, the Ruhr area and adjacent parts of the Netherlands and Belgium was advected to higher tropospheric levels by a passing weather front. Combining the above-cloud NO₂ VTCs with trajectory information covering the GOME columns, including their sub-cloud part, yields an estimate of the total NO₂ distribution within the tropospheric columns. The highly polluted air masses are then traced by forward trajectories starting from the GOME columns as they move further to the Alpine region, and their impact there is assessed. Considering ground-based in-situ measurements in the Alpine region, we conclude that for this episode at least 50% of the NO₂ concentration recorded at the sites can be attributed to transboundary transport during the frontal passage. This study demonstrates the potential of using NO₂ VTCs detected by GOME above clouds when combined with transport modelling.
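    Schematically, the column reconstruction described above amounts to the following decomposition (our notation, not the authors'):

    ```latex
    N_{\mathrm{trop}} \approx N^{\mathrm{GOME}}_{\mathrm{above\ cloud}} + N^{\mathrm{trajectory}}_{\mathrm{sub\ cloud}}
    ```

    where the first term is the NO₂ column observed above the cloud deck and the second is the sub-cloud contribution inferred from the trajectory analysis.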