Yifan Wang
Cascade reward sampling for efficient decoding-time alignment
Aligning large language models (LLMs) with human preferences is critical for their deployment. Recently, decoding-time alignment has emerged as an effective plug-and-play technique that requires no fine-tuning of model parameters. However, generating text that achieves both high reward and high likelihood remains a significant challenge. Existing methods often either fail to generate high-reward text or incur substantial computational cost. In this paper, we propose Cascade Reward Sampling (CARDS) to address both issues, guaranteeing the generation of high-reward and high-likelihood text at significantly lower cost.
2024-08-02
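The CARDS abstract above states the goal but not the mechanics. As a rough, hypothetical illustration of reward-guided decoding in general (not the paper's exact algorithm), the sketch below performs segment-level rejection sampling against a reward model; `sample_segment` and `reward` are toy stand-ins for a real LLM and reward model, and the threshold-based acceptance rule is an assumption for illustration.

```python
import random

# Toy stand-ins: in practice these would wrap an LLM and a reward model.
# Both functions are hypothetical and only illustrate the interface.
def sample_segment(prefix: str) -> str:
    """Sample a short continuation of `prefix` (placeholder)."""
    return prefix + random.choice([" good", " bad", " fine"])

def reward(text: str) -> float:
    """Score the full text with a reward model (placeholder)."""
    return float(text.count("good") - text.count("bad"))

def reward_guided_decode(prompt: str, n_segments: int = 5,
                         threshold: float = 0.0, max_tries: int = 8) -> str:
    """Segment-level rejection sampling: extend the text one segment at a
    time, accepting a candidate only if the reward of the extended text
    clears the threshold; otherwise fall back to the best rejected one."""
    text = prompt
    for _ in range(n_segments):
        best, best_r = None, float("-inf")
        for _ in range(max_tries):
            candidate = sample_segment(text)
            r = reward(candidate)
            if r >= threshold:
                best = candidate
                break
            if r > best_r:
                best, best_r = candidate, r
        text = best
    return text

print(reward_guided_decode("The movie was"))
```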
A Theory of Fault-Tolerant Learning
Developing machine learning models that account for potential faults encountered in real-world environments presents a fundamental challenge for mission-critical applications. In this paper, we introduce a novel theoretical framework, grounded in learning theory, for dealing with faults. In particular, we propose a framework called fault-tolerant PAC learning, aimed at identifying the most fault-tolerant models from a given hypothesis class (such as neural networks). We show that if faults occur randomly, fault-tolerant learning is equivalent to regular PAC learning.
2024-05-25
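The framework above is stated at the level of learning theory; as a purely illustrative sketch (not the paper's formal definitions), the code below estimates a toy model's risk under randomly occurring faults via Monte Carlo fault injection. Modeling a fault as a weight being independently zeroed out is an assumption made here only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(weights: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Toy linear classifier: sign of X @ weights."""
    return np.sign(X @ weights)

def faulty_risk(weights, X, y, fault_prob=0.1, n_trials=200):
    """Monte Carlo estimate of the risk when each weight independently
    'fails' (is zeroed out) with probability `fault_prob`.
    This fault model is an illustrative assumption, not the paper's."""
    errs = []
    for _ in range(n_trials):
        surviving = rng.random(weights.shape) >= fault_prob
        y_hat = predict(weights * surviving, X)
        errs.append(np.mean(y_hat != y))
    return float(np.mean(errs))

# Tiny synthetic check: compare clean risk with fault-injected risk.
X = rng.normal(size=(500, 20))
w = rng.normal(size=20)
y = np.sign(X @ w)
print("clean risk :", float(np.mean(predict(w, X) != y)))
print("faulty risk:", faulty_risk(w, X, y))
```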
Deconvolving Complex Neuronal Networks into Interpretable Task-Specific Connectomes
Task-specific functional MRI (fMRI) images provide excellent modalities for studying the neuronal basis of cognitive processes. We use fMRI data to formulate and solve the problem of deconvolving task-specific aggregate neuronal networks into a set of basic building blocks called canonical networks, to use these networks for functional characterization, and to characterize the physiological basis of these responses by mapping them to regions of the brain. Our results show excellent task-specificity of canonical networks (i.e., the expression of a small number of canonical networks can be used to accurately predict tasks) as well as generalizability across cohorts.
2024-01-25
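As a hedged illustration of the decomposition described above, the sketch below factors a stack of synthetic task-specific connectomes into a small number of shared components using non-negative matrix factorization. The use of scikit-learn's NMF, the data shapes, and the variable names are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical data: one aggregate connectome (n_regions x n_regions) per task,
# flattened into the rows of a tasks-by-edges matrix.
n_tasks, n_regions, n_canonical = 7, 50, 4
connectomes = np.abs(rng.normal(size=(n_tasks, n_regions * n_regions)))

# Factor each task connectome as a non-negative mixture of canonical networks.
model = NMF(n_components=n_canonical, init="nndsvda", max_iter=500, random_state=0)
expressions = model.fit_transform(connectomes)   # (n_tasks, n_canonical) loadings
canonical_networks = model.components_           # (n_canonical, n_edges) building blocks

# The per-task expression vector can then serve as a compact feature for
# predicting which task produced a given connectome.
print(expressions.shape, canonical_networks.shape)
```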
Learning Functional Distributions with Private Labels
We study the problem of learning functional distributions in the presence of noise. The functional is a map from features to distributions over a set of labels and is assumed to belong to a known class of hypotheses. Features are generated by a general random process, and labels are sampled independently from the feature-dependent distributions and then passed through a noisy kernel. We consider an online learning setting in which, at each time step, a predictor attempts to predict the actual label distribution given only the features revealed so far and the noisy labels from prior steps.
2023-05-23
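As an illustrative sketch of the online setting described above (not the paper's predictor), the code below simulates a binary feature-dependent label distribution, passes each label through an assumed symmetric noise kernel with flip probability q, and at each step predicts the clean label distribution by inverting the noise from the empirical frequency of past noisy labels. The fixed feature and the simple frequency-based predictor are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 0.2  # flip probability of the (assumed) binary symmetric noise kernel

def true_label_dist(x: float) -> float:
    """P(Y = 1 | x) for a toy feature-dependent label distribution."""
    return float(1.0 / (1.0 + np.exp(-x)))

x = 0.5          # keep the feature fixed so empirical frequencies are informative
n_steps = 500
noisy_ones = 0
for t in range(1, n_steps + 1):
    # Predict before seeing this step's label: under symmetric noise,
    # P_noisy(1) = (1 - q) * p + q * (1 - p), so p = (P_noisy(1) - q) / (1 - 2q).
    p_noisy_hat = noisy_ones / (t - 1) if t > 1 else 0.5
    p_hat = min(max((p_noisy_hat - q) / (1 - 2 * q), 0.0), 1.0)

    # Environment draws a clean label, then passes it through the noise kernel.
    y = rng.random() < true_label_dist(x)
    y_noisy = y != (rng.random() < q)   # XOR: flip with probability q
    noisy_ones += int(y_noisy)

print("estimated P(Y=1|x):", round(p_hat, 3), " true:", round(true_label_dist(x), 3))
```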