Dynamic Resource Allocation Algorithms for Cognitive Radio Systems
Cognitive Radio (CR) is a novel concept for improving the utilization of the radio spectrum, promising efficient use of scarce radio resources. Orthogonal Frequency Division Multiplexing (OFDM) is a reliable transmission scheme for cognitive radio systems that provides flexibility in allocating radio resources in a dynamic environment and avoids mutual interference between adjacent CR channels. Dynamic allocation of radio resources is a major challenge in cognitive radio systems. In this project, various algorithms for resource allocation in OFDM-based CR systems have been studied. The algorithms attempt to maximize the total throughput of the CR system (secondary users) subject to the total power constraint of the CR system and the interference tolerable from and to the licensed band (primary users). We have implemented two algorithms, Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA), and compared their results.
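The abstract does not include the implementations themselves; the following is a minimal Python sketch of how a PSO of the kind studied here could search for a per-subcarrier power allocation that maximizes the secondary users' sum rate under the total-power and primary-user interference constraints. The channel gains, interference factors, thresholds, and PSO hyperparameters are all illustrative assumptions, not values from the project.

```python
import numpy as np

# Hypothetical parameters for illustration (not from the project)
N = 16                 # OFDM subcarriers available to the CR (secondary) user
P_total = 1.0          # total transmit power budget (W)
I_th = 0.05            # interference threshold tolerated by the primary user
rng = np.random.default_rng(0)
gain = rng.uniform(0.2, 1.0, N)      # channel gains to the secondary receiver
leak = rng.uniform(0.01, 0.1, N)     # interference factors toward the primary band
noise = 1e-2

def feasible(p):
    """Project a candidate power vector onto the power and interference constraints."""
    p = np.clip(p, 0.0, None)
    if p.sum() > P_total:
        p *= P_total / p.sum()
    i = (p * leak).sum()
    if i > I_th:
        p *= I_th / i
    return p

def throughput(p):
    """Sum rate of the secondary user over all subcarriers (bit/s/Hz)."""
    return np.sum(np.log2(1.0 + p * gain / noise))

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = np.array([feasible(rng.uniform(0, P_total / N, N)) for _ in range(n_particles)])
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([throughput(p) for p in pbest])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.array([feasible(p) for p in pos + vel])
        vals = np.array([throughput(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, throughput(gbest)

if __name__ == "__main__":
    p_opt, rate = pso()
    print(f"allocated power: {p_opt.sum():.3f} W, sum rate: {rate:.2f} bit/s/Hz")
```

A GA variant would reuse the same fitness function and constraint projection, differing only in how candidate allocations are evolved (selection, crossover, mutation) rather than in how they are evaluated.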
Building Distributed Data Models in a Performance-Optimized, Goal-Oriented Optimization Framework for Cyber-Physical Systems
Cyber-physical systems (CPS) are large, distributed embedded systems that integrate sensing, processing, networking, and actuation. Developing CPS applications is currently challenging due to the sheer complexity of the related functionality as well as the broad set of constraints and unknowns that must be tackled during operation. Building accurate data representations that model the behavior of the physical environment, by establishing important data correlations and capturing physical laws of the monitored entities, is critical for dependable decision making under performance and resource constraints. The goal of this thesis is to produce reliable data models starting from raw sensor data under the tight resource constraints of the execution platform, while satisfying the timing constraints of the application. This objective was achieved through adaptation policy designs that optimally compute the utilization rates of the available network resources to satisfy the performance requirements of the application while tracking physical entities that can be quasi-static or dynamic in nature. The performance requirements are specified using a declarative, high-level specification notation that corresponds to the timing, precision, and resource constraints of the application. Data model parameters are generated by solving differential equations using data sampled over time; modeling errors occur due to missed data correlations and distributed lumping of the model parameters.
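The abstract describes generating data model parameters by fitting differential equations to data sampled over time. As a hedged illustration, and not the thesis's actual models or code, the sketch below estimates the rate constant of a simple first-order cooling model from synthetic, noisy temperature samples using finite differences and least squares.

```python
import numpy as np

# Illustrative only: fit the rate constant k of a first-order model
# dT/dt = -k * (T - T_env) from noisy temperature samples, mimicking the
# kind of physics-based data model built from raw sensor streams.
rng = np.random.default_rng(1)
T_env, k_true = 20.0, 0.05          # assumed ambient temperature and true rate
t = np.arange(0.0, 200.0, 5.0)      # sampling instants (s)
T = T_env + 60.0 * np.exp(-k_true * t) + rng.normal(0.0, 0.3, t.size)  # noisy samples

# Finite-difference estimate of dT/dt, then least-squares fit of k
dT_dt = np.gradient(T, t)
x = -(T - T_env)                    # model: dT/dt = k * x
k_hat = float(np.dot(x, dT_dt) / np.dot(x, x))

print(f"estimated k = {k_hat:.4f} (true value {k_true})")
```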
Generative AI in Action: From Core Algorithms to Industry Use Cases
Generative Artificial Intelligence (AI) refers to algorithms capable of creating new content such as images, text, music, and more by learning patterns and structures from existing data. With the rapid advancements in machine learning techniques, generative AI has gained significant attention across various industries. The most well-known generative models include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and autoregressive models such as GPT-3. These algorithms have revolutionized fields such as entertainment, healthcare, marketing, and design by generating realistic, high-quality content. This paper explores the evolution of generative AI, from early algorithmic models to their current applications. It examines the core technologies behind generative models, their applications across diverse sectors, and the challenges faced in their development and deployment. Furthermore, it outlines the ethical considerations, potential risks, and regulatory frameworks necessary for responsible usage. As generative AI continues to mature, it promises to unlock new possibilities for innovation and creativity but also necessitates careful consideration of its broader societal impact.
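As a concrete companion to the model families named above, here is a minimal, self-contained sketch of the adversarial training loop behind GANs on a toy one-dimensional distribution. The architecture, data, and hyperparameters are illustrative assumptions and are not drawn from the paper.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch on a toy 1-D Gaussian, illustrating the adversarial
# training loop: the discriminator learns to tell real from generated
# samples, and the generator learns to fool it.
torch.manual_seed(0)
latent_dim, batch = 8, 128

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_samples(n):
    return 3.0 + 0.5 * torch.randn(n, 1)   # "data" distribution: N(3, 0.5^2)

for step in range(2000):
    # --- discriminator update: distinguish real from generated samples ---
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()
    real = real_samples(batch)
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator update: fool the discriminator ---
    z = torch.randn(batch, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    sample = G(torch.randn(1000, latent_dim))
print(f"generated mean {sample.mean():.2f}, std {sample.std():.2f} (target 3.00, 0.50)")
```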
Behavior Optimized Image Generation
The last few years have witnessed great success in image generation, which has crossed the acceptance thresholds of aesthetics, making it directly applicable to personal and commercial applications. However, images, especially in marketing and advertising applications, are often created as a means to an end rather than for aesthetic concerns alone. The goal can be increasing sales, or getting more clicks, likes, or image sales (in the case of stock businesses). Therefore, the generated images need to perform well on these key performance indicators (KPIs), in addition to being aesthetically good. In this paper, we make the first endeavor to answer the question of "How can one infuse the knowledge of the end goal within the image generation process itself to create not just better-looking images but also 'better-performing' images?". We propose BoigLLM, an LLM that understands both image content and user behavior. BoigLLM knows how an image should look to achieve a certain required KPI. We show that BoigLLM outperforms 13x larger models such as GPT-3.5 and GPT-4 on this task, demonstrating that while these state-of-the-art models can understand images, they lack information on how these images perform in the real world. To generate the actual pixels of behavior-conditioned images, we train a diffusion-based model (BoigSD) to align with a proposed BoigLLM-defined reward. We show the performance of the overall pipeline on two datasets covering two different behaviors: a stock dataset with the number of forward actions as the KPI, and a dataset of tweets with total likes as the KPI, denoted BoigBench. To advance research in the direction of utility-driven image generation and understanding, we release BoigBench, a benchmark dataset containing 168 million enterprise tweets with their media, brand account names, time of post, and total likes.
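The BoigSD training procedure itself is not reproduced in the abstract. The sketch below illustrates one generic way a denoising diffusion objective can be weighted by an external reward signal, in the spirit of aligning generation with a KPI-style reward; the toy one-dimensional data, the stand-in reward function, and the simple reward-weighting scheme are all assumptions for illustration, not the authors' pipeline.

```python
import torch
import torch.nn as nn

# Hedged sketch of reward-guided fine-tuning: a toy denoiser is trained with a
# per-sample weight derived from a stand-in reward model (e.g., a predictor of
# expected likes/clicks). Data, reward, and weighting are illustrative only.
torch.manual_seed(0)
T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def reward(x0):
    """Stand-in for a learned KPI predictor; higher near x0 = 1.5."""
    return torch.sigmoid(2.0 - (x0 - 1.5).abs()).squeeze(-1)

for step in range(1000):
    x0 = torch.randn(256, 1) * 0.8 + 1.0          # toy "image" data (1-D)
    t = torch.randint(0, T, (256,))
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(-1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward diffusion

    inp = torch.cat([x_t, t.float().unsqueeze(-1) / T], dim=-1)
    pred_noise = denoiser(inp)

    w = reward(x0)                                 # up-weight high-reward samples
    loss = (w * ((pred_noise - noise) ** 2).squeeze(-1)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final weighted denoising loss: {loss.item():.4f}")
```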
Number of prescription medications and overall survival in metastatic castrate-resistant prostate cancer
Lobe-specific lymph node sampling is associated with lower risk of cancer recurrence
OBJECTIVE: Adequate intraoperative lymph node (LN) assessment is a critical component of early-stage non-small cell lung cancer (NSCLC) resection. The National Comprehensive Cancer Network and the American College of Surgeons Commission on Cancer (CoC) recommend station-based sampling minimums agnostic to tumor location. Other institutions advocate for lobe-specific LN sampling strategies that consider the anatomic likelihood of LN metastases. We examined the relationship between lobe-specific LN assessment and long-term outcomes using a robust, highly curated cohort of stage I NSCLC patients.
METHODS: We performed a cohort study using a uniquely compiled dataset from the Veterans Health Administration and manually abstracted data from operative and pathology reports for patients with clinical stage I NSCLC (2006-2016). For simplicity in comparison, we included patients who had right upper lobe (RUL) or left upper lobe (LUL) tumors. Based on modified European Society of Thoracic Surgeons guidelines, lobe-specific sampling was defined for RUL tumors (stations 2, 4, 7, and 10 or 11) and LUL tumors (stations 5 or 6, 7, and 10 or 11). Our primary outcome was the risk of cancer recurrence, as assessed by Fine and Gray competing risks modeling. Secondary outcomes included overall survival (OS) and pathologic upstaging. Analyses were adjusted for relevant patient, disease, and treatment variables.
RESULTS: Our study included 3534 patients with RUL tumors and 2667 patients with LUL tumors. Of these, 277 patients (7.8%) with RUL tumors and 621 patients (23.2%) with LUL tumors met lobe-specific assessment criteria. Comparatively, 34.7% of patients met the criteria for count-based assessment, and 25.8% met the criteria for station-based sampling (ie, any 3 N2 stations and 1 N1 station). Adherence to lobe-specific assessment was associated with lower cumulative incidence of recurrence (adjusted hazard ratio [aHR], 0.83; 95% confidence interval [CI], 0.70-0.98) and a higher likelihood of pathologic upstaging (aHR, 1.49; 95% CI, 1.20-1.86). Lobe-specific assessment was not associated with OS.
CONCLUSIONS: Adherence to intraoperative LN sampling guidelines is low. Lobe-specific assessment is associated with superior outcomes in early-stage NSCLC. Quality metrics that assess adherence to intraoperative LN sampling, such as the CoC Operative Standards manual, should also consider lobe-specific criteria.
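For readers unfamiliar with the competing-risks quantity reported above (cumulative incidence of recurrence with death as a competing event), the sketch below computes a simple nonparametric cumulative incidence estimate on synthetic data. It is an unadjusted, Aalen-Johansen-style illustration only and does not reproduce the covariate-adjusted Fine and Gray models used in the study; all data are simulated.

```python
import numpy as np

# Illustrative only: nonparametric cumulative incidence of recurrence with
# death as a competing risk, estimated on synthetic data.
rng = np.random.default_rng(0)
n = 500
t_recur = rng.exponential(8.0, n)     # hypothetical time to recurrence (years)
t_death = rng.exponential(10.0, n)    # hypothetical time to death (competing event)
t_cens = rng.uniform(0.0, 12.0, n)    # administrative censoring

time = np.minimum.reduce([t_recur, t_death, t_cens])
event = np.where(t_recur <= np.minimum(t_death, t_cens), 1,       # 1 = recurrence
                 np.where(t_death <= t_cens, 2, 0))               # 2 = death, 0 = censored

def cumulative_incidence(time, event, cause=1):
    """Cumulative incidence function for `cause` in the presence of competing risks."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n_at_risk = len(time)
    surv, cif = 1.0, 0.0
    times, cifs = [], []
    for t in np.unique(time):
        at_t = time == t
        d_all = np.count_nonzero(at_t & (event > 0))
        d_cause = np.count_nonzero(at_t & (event == cause))
        cif += surv * d_cause / n_at_risk      # add cause-specific increment
        surv *= 1.0 - d_all / n_at_risk        # update overall survival
        n_at_risk -= np.count_nonzero(at_t)    # remove events and censored subjects
        times.append(t)
        cifs.append(cif)
    return np.array(times), np.array(cifs)

t_grid, cif = cumulative_incidence(time, event, cause=1)
print(f"estimated 5-year cumulative incidence of recurrence: {cif[t_grid <= 5.0][-1]:.3f}")
```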
