Latent Reasoning #
Latent reasoning is the idea that a language model can carry part of its reasoning process in continuous hidden-state vectors rather than only in explicit text. In an LLM, the prompt is projected into high-dimensional representations, transformed through the model's layers, and finally decoded into tokens.
Research on latent reasoning treats a model's final hidden state as a reusable representation of an intermediate thought, so that reasoning can continue directly in latent space instead of being verbalized at every step.
Example: Instead of forcing every reasoning step into written text, an experiment can feed a hidden-state vector back into the model as the next input, letting the following step unfold entirely in latent space and decoding text only when an answer is needed.
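The feedback loop above can be sketched with a deliberately tiny toy model. This is not a real LLM: the linear "layer stack," the vocabulary matrix, and the function names are all invented for illustration, and a real experiment would instead pass the last hidden state back through a transformer's input-embedding slot.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a transformer's layer stack: a fixed linear map plus a
# nonlinearity. In a real LLM this would be the full forward pass.
W = rng.standard_normal((8, 8)) / np.sqrt(8)

def forward(hidden: np.ndarray) -> np.ndarray:
    """One 'reasoning step' applied directly to a hidden-state vector."""
    return np.tanh(W @ hidden)

def decode(hidden: np.ndarray, vocab: np.ndarray) -> int:
    """Project the final hidden state onto a toy vocabulary (best-matching row)."""
    return int(np.argmax(vocab @ hidden))

vocab = rng.standard_normal((16, 8))   # 16 toy "tokens"
h = rng.standard_normal(8)             # initial prompt representation

# Explicit chain-of-thought would decode text after every step; latent
# reasoning instead feeds the hidden state straight back in for several
# steps and decodes only once at the end.
for _ in range(4):                     # four latent reasoning steps
    h = forward(h)                     # no text is ever produced here

token_id = decode(h, vocab)
```

The key design point is that the loop body never touches the vocabulary: intermediate "thoughts" stay as 8-dimensional vectors, and `decode` runs exactly once.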
Dictionary: https://dictionary.platphormnews.com/en/define/latent-reasoning
Related Documentation

- Latent-Space Fine-Tuning: Definition of LoRA, adapters, AWS SageMaker, and Bedrock as latent-space adaptation workflows.
- Latent Space Surgery: Definition of targeted model editing through concept directions in latent space.
- Latent Operations: Definition of vector shifts, interpolation, slicing, masking, and sampling in latent space.
- Embedding Space: Definition of embedding space as a vector representation for semantic similarity and retrieval.
- Latent Space: Definition of latent space in machine learning, LLMs, and embeddings.