Ask Your Distribution Shift if Pre-Training is Right for You
Pre-training is a widely used approach to develop models that are robust to
distribution shifts. However, in practice, its effectiveness varies:
fine-tuning a pre-trained model improves robustness significantly in some cases
but not at all in others (compared to training from scratch). In this work, we
seek to characterize the failure modes that pre-training can and cannot
address. In particular, we focus on two possible failure modes of models under
distribution shift: poor extrapolation (e.g., they cannot generalize to a
different domain) and biases in the training data (e.g., they rely on spurious
features). Our study suggests that, as a rule of thumb, pre-training can help
mitigate poor extrapolation but not dataset biases. After providing theoretical
motivation and empirical evidence for this finding, we explore two of its
implications for developing robust models: (1) pre-training and interventions
designed to prevent exploiting biases have complementary robustness benefits,
and (2) fine-tuning on a (very) small, non-diverse but de-biased dataset can
result in significantly more robust models than fine-tuning on a large and
diverse but biased dataset. Code is available at
https://github.com/MadryLab/pretraining-distribution-shift-robustness
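As a rough illustration of the comparison described above (a sketch, not the released code at the repository linked here), the snippet below builds one fine-tuned pre-trained model and one identical architecture trained from scratch so that both can be evaluated on shifted data; the datasets, loaders, and hyperparameters are placeholders.

```python
# Minimal sketch: fine-tuning a pre-trained model vs. training from scratch,
# then comparing both on data under a distribution shift. Not the authors' code.
import torch
import torch.nn as nn
from torchvision import models

def make_model(pretrained: bool, num_classes: int) -> nn.Module:
    weights = models.ResNet50_Weights.DEFAULT if pretrained else None
    model = models.resnet50(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def train(model, loader, epochs=10, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# train_loader: in-distribution training data; shift_loader: data under the shift.
# finetuned = train(make_model(pretrained=True, num_classes=10), train_loader)
# scratch   = train(make_model(pretrained=False, num_classes=10), train_loader)
# Comparing both models' accuracy on shift_loader indicates whether pre-training
# helped for this particular shift (e.g., extrapolation vs. dataset bias).
```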
Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation
Distribution shifts are a major source of failure of deployed machine
learning models. However, evaluating a model's reliability under distribution
shifts can be challenging, especially since it may be difficult to acquire
counterfactual examples that exhibit a specified shift. In this work, we
introduce dataset interfaces: a framework which allows users to scalably
synthesize such counterfactual examples from a given dataset. Specifically, we
represent each class from the input dataset as a custom token within the text
space of a text-to-image diffusion model. By incorporating these tokens into
natural language prompts, we can then generate instantiations of objects in
that dataset under desired distribution shifts. We demonstrate how applying our
framework to the ImageNet dataset enables us to study model behavior across a
diverse array of shifts, including variations in background, lighting, and
attributes of the objects themselves. Code available at
https://github.com/MadryLab/dataset-interfaces
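As a hedged illustration of this workflow (a sketch, not the dataset-interfaces implementation linked above), one could load a per-class token learned via textual inversion into a diffusion pipeline and vary the surrounding prompt to realize different shifts; the embedding file and token name below are hypothetical.

```python
# Illustrative sketch: generating counterfactual examples by inserting a learned
# per-class token into text-to-image prompts. Embedding file and token are made up.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a GPU is available

# Load a textual-inversion embedding that represents one class as a custom token
# (here a hypothetical "<golden-retriever>" embedding trained beforehand).
pipe.load_textual_inversion("golden_retriever_embedding.bin",
                            token="<golden-retriever>")

# Describe the desired distribution shift in natural language around the token.
shifts = ["in the snow", "at night", "on a beach", "in a studio photo"]
for shift in shifts:
    image = pipe(f"a photo of a <golden-retriever> {shift}").images[0]
    image.save(f"golden_retriever_{shift.replace(' ', '_')}.png")
```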
Learning low-rank latent mesoscale structures in networks
It is common to use networks to encode the architecture of interactions
between entities in complex systems in the physical, biological, social, and
information sciences. Moreover, to study the large-scale behavior of complex
systems, it is important to study mesoscale structures in networks as building
blocks that influence such behavior. In this paper, we present a new approach
for describing low-rank mesoscale structure in networks, and we illustrate our
approach using several synthetic network models and empirical friendship,
collaboration, and protein-protein interaction (PPI) networks. We find that
these networks possess a relatively small number of "latent motifs" that
together can successfully approximate most subnetworks at a fixed mesoscale. We
use an algorithm that we call "network dictionary learning" (NDL), which
combines a network sampling method and nonnegative matrix factorization, to
learn the latent motifs of a given network. The ability to encode a network
using a set of latent motifs has a wide range of applications to
network-analysis tasks, such as comparison, denoising, and edge inference.
Additionally, using our new network denoising and reconstruction (NDR)
algorithm, we demonstrate how to denoise a corrupted network by using only the
latent motifs that one learns directly from the corrupted network.
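The following is a minimal sketch of the NDL idea only, not the authors' algorithm: it samples small mesoscale patches of a network with simple random walks and factors them with off-the-shelf nonnegative matrix factorization, whereas the paper uses a dedicated motif-sampling scheme and an online NMF variant.

```python
# Sketch of network dictionary learning: sample k-node mesoscale patches,
# flatten their adjacency matrices, and learn nonnegative "latent motifs" via NMF.
import numpy as np
import networkx as nx
from sklearn.decomposition import NMF

def sample_patch(G, k, rng):
    """Return the k x k adjacency matrix of a k-node random-walk patch."""
    nodes = [rng.choice(list(G.nodes))]
    while len(nodes) < k:
        nodes.append(rng.choice(list(G.neighbors(nodes[-1]))))
    nodes = list(dict.fromkeys(nodes))      # deduplicate, keep order
    while len(nodes) < k:                   # pad with extra random nodes if needed
        v = rng.choice(list(G.nodes))
        if v not in nodes:
            nodes.append(v)
    return nx.to_numpy_array(G.subgraph(nodes))

G = nx.karate_club_graph()                  # small example network
rng = np.random.default_rng(0)
k, n_samples, r = 6, 500, 9                 # patch size, sample count, motif count
X = np.stack([sample_patch(G, k, rng).ravel() for _ in range(n_samples)])

# Rows of components_, reshaped to k x k, play the role of latent motifs.
model = NMF(n_components=r, init="nndsvda", max_iter=500)
W = model.fit_transform(X)
latent_motifs = model.components_.reshape(r, k, k)
```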
Humanity's Last Exam
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. HLE consists of 3,000 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable, but cannot be quickly answered via internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a significant gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking based on a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai
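As a hedged sketch of how accuracy and calibration could be computed for automatically graded answers (not the official HLE evaluation harness), assuming a simple record format with a prediction, a reference answer, and a self-reported confidence:

```python
# Sketch: exact-match accuracy plus a simple binned expected calibration error.
import numpy as np

def evaluate(records, n_bins=10):
    """records: list of dicts with 'prediction', 'answer', 'confidence' in [0, 1]."""
    correct = np.array([r["prediction"].strip().lower() == r["answer"].strip().lower()
                        for r in records], dtype=float)
    conf = np.array([r["confidence"] for r in records], dtype=float)
    accuracy = correct.mean()

    # Expected calibration error: |mean confidence - accuracy|, averaged over bins.
    bins = np.clip((conf * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return accuracy, ece

# Toy usage with made-up records:
records = [
    {"prediction": "Riemann", "answer": "riemann", "confidence": 0.9},
    {"prediction": "12",      "answer": "15",      "confidence": 0.8},
]
print(evaluate(records))
```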
On the Relation of Gene Essentiality to Intron Structure: A Computational and Deep Learning Approach
Identification and study of human-essential genes has become of practical importance with the realization that disruption or loss of nearby essential genes can introduce latent vulnerabilities to cancer cells. Essential genes have been studied through copy-number variants and deletion events, which are associated with introns. The premise of our work is that introns of essential genes have characteristic properties that are distinct from the introns of nonessential genes. We provide support for the existence of such characteristic properties by training a deep learning model on introns of essential and nonessential genes and demonstrating that introns alone can be used to classify essential and nonessential genes with high accuracy (AUC of 0.846). We further demonstrated that the same deep-learning model, limited to first introns, performs at an even higher level, underscoring the critical importance of introns, and particularly first introns, in gene essentiality. Using a computational approach, we identified several novel properties of introns of essential genes, finding that their structure protects against deletion and intron-loss events, and that these traits are especially centered on the first intron. We showed that GC density is increased in the first introns of essential genes, allowing for increased enhancer activity, protection against deletions, and improved splice-site recognition. Furthermore, we found that first introns of essential genes are of remarkably smaller size than their nonessential counterparts, and that, to protect against common 3' end deletion events, essential genes carry an increased number of (smaller) introns. To demonstrate the importance of the seven features we identified, we trained a feature-based model using only information from these features and achieved high accuracy (AUC of 0.787).
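For illustration only (not the paper's models), a feature-based classifier over a few intron-derived features, such as first-intron GC density, first-intron length, and intron count, can be scored by AUC in the same way; the feature set and data below are placeholders and do not reproduce the authors' seven features.

```python
# Sketch: a feature-based essentiality classifier from intron-derived features,
# evaluated by AUC. Data loading and feature choices are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def gc_density(seq: str) -> float:
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def gene_features(introns: list[str]) -> list[float]:
    first = introns[0]
    return [gc_density(first), len(first), len(introns),
            float(np.mean([len(i) for i in introns]))]

# genes: list of (list_of_intron_sequences, is_essential) pairs -- toy placeholder data
genes = [(["GCGCGCATTA", "ATGCGC"], 1), (["ATATATATATATATAT"], 0)] * 50
X = np.array([gene_features(introns) for introns, _ in genes])
y = np.array([label for _, label in genes])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```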
