2025

Learning Tractable Distributions of Language Model Continuations

Gwen Yidou-Weng, Ian Li, Anji Liu, Oliver Broadrick, Guy Van den Broeck, Benjie Wang

ArXiv 2025

We propose Learning to Look Ahead (LTLA), a hybrid approach that pairs the base language model itself, used for rich prefix encoding, with a fixed tractable surrogate model that computes exact continuation probabilities.

Steering LLMs’ Reasoning With Activation State Machines

Ian Li, Philip Chen, Max Huang, Andrew Park, Loris D'Antoni, Rose Yu

FoRLM @ NeurIPS 2025 · ArXiv 2025

We introduce the Activation State Machine (ASM), a lightweight dynamic steering mechanism that learns the latent dynamics of ideal reasoning trajectories and applies context-aware interventions at inference time.
