
    The Randomized Competitive Ratio of Weighted k-Server Is at Least Exponential

    The weighted k-server problem is a natural generalization of the k-server problem in which the cost incurred in moving a server is the distance traveled times the weight of the server. Even after almost three decades since the seminal work of Fiat and Ricklin (1994), the competitive ratio of this problem remains poorly understood, even on the simplest class of metric spaces - the uniform metric spaces. In particular, in the case of randomized algorithms against the oblivious adversary, neither an upper bound better than the doubly exponential deterministic upper bound, nor a lower bound better than the logarithmic lower bound of unweighted k-server, is known. In this paper, we make significant progress towards understanding the randomized competitive ratio of weighted k-server on uniform metrics. We cut down the triply exponential gap between the upper and lower bounds to a singly exponential gap by proving that the competitive ratio is at least exponential in k, substantially improving on the previously known lower bound of about ln k.

    Calcium Enhances Plasmid Gene Transfection Efficiency in Jurkat Cells


    Change and Aging: Senescence as an adaptation

    Understanding why we age is a long-lived open problem in evolutionary biology. Aging is prejudicial to the individual, and evolutionary forces should prevent it, yet many species show signs of senescence as individuals age. Here, I propose a model for aging based on assumptions that are compatible with evolutionary theory: i) competition is between individuals; ii) there is some degree of locality, so competition will often be between parents and their progeny; iii) optimal conditions are not stationary, and mutation helps each species remain competitive. When conditions change, a senescent species can drive immortal competitors to extinction. This counter-intuitive result arises from the pruning caused by the death of elder individuals. When there is change and mutation, each generation is slightly better adapted to the new conditions, but some older individuals survive by random chance. Senescence can eliminate those from the genetic pool. Even though individual selection forces always win over group selection ones, it is not exactly the individual that is selected, but its lineage. While senescence damages the individual and has an evolutionary cost, it has a benefit of its own: it allows each lineage to adapt faster to changing conditions. We age because the world changes.
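
The pruning argument above can be cartooned in a few lines of Python. This is a deterministic toy of our own devising, not the paper's actual model: the environmental optimum drifts each generation, newborns track it via mutation, and senescence removes individuals that linger with outdated traits.

```python
def mean_lag(generations, max_age=None):
    """Average distance of a population from a drifting optimum.

    Each generation the optimum rises by 1; every parent produces one
    offspring whose trait matches the new optimum, while survivors keep
    their birth trait.  `max_age` imposes senescence (death after that
    many generations); None means immortal.  Without senescence the
    population doubles each generation, so keep runs short.
    """
    pop = [(0, 0)]  # (age, trait) pairs; optimum starts at 0
    lags = []
    for g in range(1, generations + 1):
        optimum = g                                   # the world changes
        offspring = [(0, optimum) for _ in pop]       # newborns adapt
        pop = offspring + [(a + 1, t) for a, t in pop]
        if max_age is not None:                       # senescence prunes
            pop = [(a, t) for a, t in pop if a <= max_age]
        lags.append(sum(optimum - t for _, t in pop) / len(pop))
    return sum(lags) / len(lags)
```

In this cartoon the senescent population stays closer to the moving optimum than the immortal one, because outdated survivors are culled rather than accumulating.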

    Engineering and functional modification strategies for T lymphocytes using therapeutic macromolecules, polymers, micro and nanoparticles

    T cells are crucial components of the adaptive immune system and are known for secreting various cytokines, which induce the proliferation of many immune cells, including T cells themselves. A subset of T cells, the CD8+ T cells, are known for their ability to kill infected cells and cancer cells. Naturally, the potential in using T cells for immunotherapy of cancer is tremendous. For this purpose, however, it is often required to engineer T cells specifically to treat a particular type of cancer. For example, the FDA-approved immunotherapies Kymriah™ and Yescarta™, which fall under adoptive T cell therapy, require genetic engineering and in vitro expansion of T cells. Currently, viral vectors are being used to introduce the transgenes encoding the chimeric antigen receptor (CAR) to patient-derived T cells in vitro. Due to the safety concerns and high manufacturing costs associated with viral transduction, non-viral vector-mediated T cell-specific transfections are being developed. Moreover, the patient's natural antigen-presenting cells (APCs), such as the dendritic cells, may not offer consistent activation profiles. Thus, the function of CAR-T cells must be modulated with artificial antigen-specific APCs, whose manufacture and maintenance are easier and less expensive, to achieve more specific and controlled activation profiles. Therefore, it is imperative that we understand T cell biology and improve the engineering and functional modulation strategies to advance the field of T cell therapy.
Here, we suggest how to improve the non-viral vector-mediated transfection efficiency in Jurkat cells (a human leukemia T cell line) by using calcium ions; how to express and purify N-terminal cysteine-containing mouse interleukin-2 (IL-2) at the laboratory scale to enable site-specific bioconjugation of IL-2 with foreign materials; how to associate IL-2-functionalized nanocargoes with, and intracellularly deliver them (up to a certain size limit) to, murine primary T lymphocytes via IL-2 receptor (IL-2R)-mediated endocytosis; and finally, how to optimize the manufacture of human HER2- and PD-L1-coated iron-oxide microparticles to functionally modulate and test the efficacies of the HER2 CAR-T cells.

    A Decomposition Approach to the Weighted k-Server Problem

    A natural variant of the classical online k-server problem is the weighted k-server problem, where the cost of moving a server is its weight times the distance through which it moves. Despite its apparent simplicity, the weighted k-server problem is extremely poorly understood. Specifically, even on uniform metric spaces, finding the optimum competitive ratio of randomized algorithms remains an open problem - the best upper bound known is 2^{2^{k+O(1)}} due to a deterministic algorithm (Bansal et al., 2018), and the best lower bound known is Ω(2^k) (Ayyadevara and Chiplunkar, 2021). With the aim of closing this exponential gap between the upper and lower bounds, we propose a decomposition approach for designing a randomized algorithm for weighted k-server on uniform metrics. Our first contribution includes two relaxed versions of the problem and a technique to obtain an algorithm for weighted k-server from algorithms for the two relaxed versions. Specifically, we prove that if there exists an α₁-competitive algorithm for one version (which we call Weighted k-Server - Service Pattern Construction) and an α₂-competitive algorithm for the other version (which we call Weighted k-Server - Revealed Service Pattern), then there exists an (α₁α₂)-competitive algorithm for weighted k-server on uniform metric spaces. Our second contribution is a 2^O(k²)-competitive randomized algorithm for Weighted k-Server - Revealed Service Pattern. As a consequence, the task of designing a 2^poly(k)-competitive randomized algorithm for weighted k-server on uniform metrics reduces to designing a 2^poly(k)-competitive randomized algorithm for Weighted k-Server - Service Pattern Construction. Finally, we also prove that the Ω(2^k) lower bound for weighted k-server, in fact, holds for Weighted k-Server - Revealed Service Pattern.
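
To make the cost model concrete, here is a minimal Python sketch of weighted k-server on a uniform metric with a deliberately naive greedy policy. The policy and all names are our own illustration, not the paper's algorithm.

```python
def serve_requests(weights, requests):
    """Total movement cost of a naive greedy policy on a uniform metric.

    weights  : weight w_i of each of the k servers
    requests : sequence of requested points (integers)
    On a uniform metric every move has distance 1, so moving server i
    costs exactly w_i.  The greedy rule: if no server covers the
    request, move the lightest server there.
    """
    positions = list(range(len(weights)))  # arbitrary distinct start points
    cost = 0
    for r in requests:
        if r in positions:
            continue  # some server already covers the request
        i = min(range(len(weights)), key=lambda j: weights[j])
        positions[i] = r
        cost += weights[i]
    return cost
```

The greedy rule's weakness shows on long alternating request sequences: the lightest server shuttles back and forth and pays its weight on every request, while an optimal schedule parks a heavier server once - which is exactly why smarter (and, in the paper's setting, randomized) strategies are needed.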

    On Minimizing Generalized Makespan on Unrelated Machines

    We consider the Generalized Makespan Problem (GMP) on unrelated machines, where we are given n jobs and m machines and each job j has arbitrary processing time p_{ij} on machine i. Additionally, there is a general symmetric monotone norm ψ_i for each machine i that determines the load on machine i as a function of the sizes of jobs assigned to it. The goal is to assign the jobs to minimize the maximum machine load. Recently, Deng, Li, and Rabani [Deng et al., 2023] gave a 3-approximation for GMP when the ψ_i are top-k norms, and they asked whether an O(1) approximation exists for general norms ψ_i. We answer this negatively and show that, under natural complexity assumptions, there is some fixed constant ε > 0 such that GMP is Ω(log^ε n)-hard to approximate. We also give an Ω(log^{1/2} n) integrality gap for the natural configuration LP.
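
For intuition, a brute-force reference solver for tiny GMP instances with top-k norms can be written directly from the definition. Names and structure are our own; this is exponential-time and for illustration only.

```python
from itertools import product

def topk_norm(loads, k):
    """Top-k norm: sum of the k largest entries (0 for an empty list)."""
    return sum(sorted(loads, reverse=True)[:k])

def min_generalized_makespan(p, ks):
    """Exact brute force for tiny GMP instances.

    p  : p[i][j] = processing time of job j on machine i
    ks : ks[i]   = parameter of the top-k norm used by machine i
    Returns the minimum, over all job-to-machine assignments, of the
    maximum machine load.
    """
    m, n = len(p), len(p[0])
    best = float("inf")
    for assign in product(range(m), repeat=n):  # a machine for each job
        load = max(
            topk_norm([p[i][j] for j in range(n) if assign[j] == i], ks[i])
            for i in range(m)
        )
        best = min(best, load)
    return best
```

With top-1 norms this is the classical makespan-on-unrelated-machines objective restricted to each machine's largest job; larger k interpolates toward total machine load.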

    A Simple and Interpretable Predictive Model for Healthcare

    Deep Learning based models are currently dominating most state-of-the-art solutions for disease prediction. Existing works employ RNNs along with multiple levels of attention mechanisms to provide interpretability. These deep learning models, with trainable parameters running into millions, require huge amounts of compute and data to train and deploy. These requirements are sometimes so large that they render such models infeasible to use. We address these challenges by developing a simpler yet interpretable non-deep-learning based model for application to EHR data. We model and showcase our work's results on the task of predicting first occurrence of a diagnosis, often overlooked in existing works. We push the capabilities of a tree based model and come up with a strong baseline for more sophisticated models. Its performance shows an improvement over deep learning based solutions (both with and without the first-occurrence constraint), all the while maintaining interpretability.
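
As a flavor of how far a simple interpretable model can go, here is a pure-Python decision stump - the one-split special case of the tree models the abstract refers to. This is a toy of our own, not the paper's model.

```python
def fit_stump(X, y):
    """Fit a one-split decision stump, the simplest interpretable model.

    X : list of numeric feature vectors; y : binary labels (0/1).
    Exhaustively searches every feature f and threshold t and returns
    (feature_index, threshold, accuracy) of the best single rule
    "predict 1 iff X[f] > t".  The fitted rule can be read out loud,
    which is the whole point of interpretability.
    """
    n, d = len(X), len(X[0])
    best = (0, 0.0, 0.0)
    for f in range(d):
        for t in sorted({row[f] for row in X}):
            acc = sum((row[f] > t) == bool(lbl)
                      for row, lbl in zip(X, y)) / n
            best = max(best, (f, t, acc), key=lambda s: s[2])
    return best
```

A gradient-boosted ensemble of such stumps already makes a respectable tabular baseline, while each individual split remains human-readable.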

    Development of an automated robotic deburring workcell

    This thesis deals with the development of an automated robotic deburring workcell for refurbished components with intricate geometry. Deburring a refurbished workpiece involves establishing the location of the workpiece, establishing its geometry using surface digitizing and mathematical splining algorithms, extrapolating the reconstructed surface to get the edge profile, and determining and executing the tool path. The robot chosen for the workcell is the YAMAHA Zeta-1, designed specifically for deburring. A displacement sensor with an accuracy of 0.004 mm is interfaced to the workcell. The limited computational ability of the robot controller and its closed architecture have necessitated the development of an external controller to coordinate the activities of the workcell. The robot controller neither provides for communication with external computing systems, nor does it allow interfacing of external sensors.
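
The surface-extrapolation step can be illustrated with a deliberately simplified sketch (linear rather than spline-based, with hypothetical names): given sorted probe readings from a displacement sensor, extend the last fitted segment out to the part edge.

```python
def extrapolate_edge(samples, x_edge):
    """Extrapolate a digitized surface profile to the part edge.

    samples : list of (x, z) probe readings sorted by x, as a
              displacement sensor sweep might return them
    x_edge  : x-coordinate of the edge, beyond the sampled range
    A stand-in for the splining-and-extrapolation step described in
    the thesis; a real workcell would fit splines, not a line.
    """
    (x0, z0), (x1, z1) = samples[-2], samples[-1]  # last two readings
    slope = (z1 - z0) / (x1 - x0)
    return z1 + slope * (x_edge - x1)
```

The extrapolated height at the edge, together with the reconstructed surface, fixes the burr location that the tool path must traverse.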

    A Narrow Quantitative Trait Locus in C. elegans Coordinately Affects Longevity, Thermotolerance, and Resistance to Paraquat

    By linkage mapping of quantitative trait loci, we previously identified at least 11 natural genetic variants that significantly modulate Caenorhabditis elegans life-span (LS), many of which would have eluded discovery by knock-down or mutation screens. A region on chromosome IV between markers stP13 and stP35 had striking effects on longevity in three inter-strain crosses (each P < 10⁻⁹). In order to define the limits of that interval, we have now constructed two independent lines by marker-based selection during 20 backcross generations, isolating the stP13–stP35 interval from strain Bergerac-BO in a CL2a background. These congenic lines differed significantly from CL2a in LS, assayed in two environments (each P < 0.001). We then screened for exchange of flanking markers to isolate recombinants that partition this region, because fine-mapping the boundaries of overlapping heteroallelic spans can greatly narrow the implicated interval. Recombinants carrying the CL2a allele at stP35 were consistently long-lived compared to those retaining the Bergerac-BO allele (P < 0.001), and more resistant to temperature elevation and paraquat (each ∼1.7-fold, P < 0.0001), but gained little protection from ultraviolet or peroxide stresses. Two rounds of recombinant screening, followed by fine-mapping of break-points and survival testing, narrowed the interval to 0.18 Mb (13.35–13.53 Mb) containing 26 putative genes and six small nuclear RNAs - a manageable number of targets for functional assessment.