Reflected Intelligence

Blog

  • Jun 7, 2025

    DynamicRAG: The AI That Knows When to Stop Reading

    A new retrieval system learns to ask 'how much is enough?' and dramatically outperforms the competition by deciding for itself how many documents to consider.

    Continue reading »
  • May 30, 2025

    Absolute Zero: Teaching AI to Reason Without Human Data

    Continue reading »
  • May 22, 2025

    Constitutional AI: Harmlessness from AI Feedback

    This post is the fourth in a series exploring reflection techniques in AI systems. For the complete series, see our posts on Reflexion, Self-Refine, and Generative Agents.

    Continue reading »
  • May 21, 2025

    Generative Agents: AI Characters Simulating Human Behavior

    This post is the third in a series exploring reflection techniques in AI systems. For the complete series, see our posts on Reflexion and Self-Refine. The rest of the series will be published shortly.

    Continue reading »
  • May 20, 2025

    Comparing Self-Refine and Reflexion: Two Paths to AI Self-Improvement

    Self-Refine and Reflexion both help AI improve itself—but they take different paths. Here's how these two approaches compare, in plain English.

    Continue reading »
  • May 19, 2025

    Reflexion: Verbal Self-Feedback for AI Agents

    This post is the first in a series exploring reflection techniques in AI systems.

    Continue reading »
  • May 17, 2025

    The Faithfulness Gap: When AI Reasoning Models Hide Their True Thoughts

    Continue reading »
  • May 15, 2025

    Hallucination Remediation: How Reflection Techniques Reduce AI Confabulation

    Large Language Models (LLMs) have demonstrated remarkable capabilities across numerous domains, but their tendency to “hallucinate” — generating content that appears plausible but is factually incorrect or entirely fabricated — remains one of their most persistent limitations. Recent advances in reflection techniques have emerged as promising approaches for addressing this problem, enabling models to critique their own outputs and improve factual reliability. Building on our previous discussions of reflected intelligence and reflective intelligence in LLMs, this article examines how specific reflection techniques directly combat hallucinations.

    Continue reading »
  • May 12, 2025

    N-CRITICS: How Large Language Models Can Learn From Their Mistakes

    How multiple AI models working together can create a self-correction system that mimics human learning and reduces errors in language model outputs.

    Continue reading »
  • May 11, 2025

    Breaking the Context Barrier: Human-Like Episodic Memory for AI

Modern AI models have an impressive knack for generating text, but they share a vexing limitation – a short memory. No matter how intelligent a large language model (LLM) is, it can only consider a fixed amount of text at once (its context window). Anything beyond that vanishes from its working memory. If you chat with an AI for long enough, it will eventually forget earlier details of the conversation. Even as context windows have expanded from about 2,000 tokens in GPT-3 to 100,000 tokens in models like Anthropic’s Claude (roughly 75,000 words), and even up to millions of tokens in cutting-edge systems¹, the fundamental problem remains: once the window is full, something has to drop out. In other words, today’s LLMs exist almost entirely in the “now,” with no built-in long-term memory beyond what fits in that window.

    1. McKinsey. (2024). What is a context window for Large Language Models? McKinsey Explainers. ↩

    Continue reading »
  • May 10, 2025

    Optimizing Memory and Reflection: Practical Implementations for AI Agents

    This article builds on our previous exploration of memory and reflection in AI agents, diving deeper into practical implementations and recent advancements.

    Continue reading »
  • May 7, 2025

    You Are the Context You Keep: The Memory Revolution in AI

    Modern AI models like Claude and GPT-4 face a curious paradox - they’re incredibly clever, yet surprisingly forgetful. As we explored in our article on reflected intelligence, these systems mirror our intelligence back at us, but unlike humans who build layered memories over a lifetime, today’s AI systems have no true long-term memory of past interactions. They exist entirely in the “now,” only able to “remember” what fits inside their short-term working memory—their context window.

    Continue reading »
  • May 5, 2025

    Reflections Distorted: When AI Becomes a Sycophant

    When we talk about reflection in AI, we usually mean the model’s ability to examine its own reasoning, learn from its mistakes, or serve as a mirror for human thought. But what happens when that mirror becomes warped—when it starts flattering us instead of challenging us?

    Continue reading »
  • May 3, 2025

    Reflective Intelligence in Large Language Models

    Large Language Models (LLMs) possess an impressive ability to reflect vast amounts of human knowledge – effectively serving as mirrors of “reflected intelligence.” However, truly reflective intelligence in LLMs goes a step further: it implies the model can think about its own thinking, analyze its answers, learn from feedback, and iteratively improve its reasoning. This article examines what reflective intelligence means for LLMs, how it differs from mere reflected knowledge, and evaluates several frameworks and techniques designed to imbue LLMs with this introspective capability. We will verify key claims about these methods, discuss their benefits and trade-offs, and highlight the recent research (2023–2024) expanding on these ideas.

    Continue reading »
  • Apr 29, 2025

    Memory and Reflection: Foundations for Autonomous AI Agents

    Continue reading »
  • Apr 26, 2025

    How Self-Reflective AI Is Transforming Industries

Can an AI think about its own thinking? This once philosophical question is becoming a practical engineering goal. Reflective intelligence — the ability for AI systems to self-reflect on their decisions and adapt accordingly — is emerging as the next frontier in artificial intelligence. Unlike traditional AI that executes tasks without examining its reasoning, a self-reflective AI can monitor its own performance, recognize errors or uncertainties, and improve itself in real-time. Researchers posit that even rudimentary forms of machine self-awareness can significantly enhance an AI system’s adaptability, robustness, and efficiency¹.

    1. Johnson, B. (2022). Metacognition for artificial intelligence system safety: An approach to safe and desired behavior. Safety Science, 151, 105743. ↩

    Continue reading »
  • Apr 25, 2025

    Reflective Intelligence: When AI Learns from Itself

Ever caught yourself mid-sentence thinking “wait, that doesn’t sound right”? That’s reflection—and lately a lot of progress has been made enabling AI to do the same thing. In just one year, self-reflective AI systems have transformed from academic curiosities into powerful tools reshaping industries. Instead of bulldozing ahead with potentially wrong answers, these systems take a moment to examine their own thinking, show their work, and fix mistakes before serving up solutions. While our previous article on reflected intelligence explored how AI mirrors human intelligence, this piece examines how AI can actively reflect on its own outputs.

    Continue reading »
  • Apr 23, 2025

    Reflected Intelligence: When AI Holds Up the Mirror

    In behavioral psychology, the mirror test is designed to discover an animal’s capacity for self-awareness. The essence is always the same: does the animal recognize itself in the mirror or think it’s another being altogether? Now humanity faces its own mirror test thanks to the expanding capabilities of AI, and many otherwise intelligent people are failing it.

    Continue reading »

  • Evan Volgas