Building Language Models that Learn, Remember, and Reason like Experts

Speaker: Niket Tandon, Principal Research Scientist at Microsoft Research, Bengaluru
Date: 07 October 2025

As language models become deeply integrated into our daily workflows, our expectations are shifting. We no longer seek systems that merely predict the next word; we want collaborators that can learn from us, remember what matters, and reason with the rigour of a domain expert. In this talk, Niket Tandon argued for a new class of language models, exploring two converging paths toward that future: systems that learn from the world and systems that learn about the world.

First, he examined how models can learn from the world, alongside us, by reflecting on past mistakes and adapting to human feedback. Drawing inspiration from the psychological theory of recursive reminding, he presented a memory architecture that enables models to avoid repeating errors and improve through interaction. This is a step toward making language models not just responsive, but reflective and even self-reflective.
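As a purely illustrative sketch, not the architecture presented in the talk, the snippet below shows one way a mistake memory in the spirit of recursive reminding could work: past errors and the corrective feedback they drew are stored, and the most similar ones are surfaced as reminders when a related task reappears. The `MistakeMemory` class, the lexical similarity heuristic, and the prompt format are all assumptions made for this example.

```python
# Hypothetical sketch of a "mistake memory": store (task, mistake, feedback)
# triples and surface the most relevant ones as reminders for a new task.
# Not the architecture from the talk; the similarity measure and prompt
# format are illustrative placeholders.

from dataclasses import dataclass, field
from difflib import SequenceMatcher


@dataclass
class MistakeMemory:
    """Remembers past mistakes and the feedback that corrected them."""
    entries: list[tuple[str, str, str]] = field(default_factory=list)

    def record(self, task: str, mistake: str, feedback: str) -> None:
        # Remember what went wrong and what the human said about it.
        self.entries.append((task, mistake, feedback))

    def reminders(self, task: str, top_k: int = 2, threshold: float = 0.4) -> list[str]:
        # Rank stored mistakes by crude lexical similarity to the new task.
        scored = [
            (SequenceMatcher(None, task, past_task).ratio(), mistake, feedback)
            for past_task, mistake, feedback in self.entries
        ]
        scored.sort(reverse=True)
        return [
            f"Previously you made this mistake: {mistake}. Feedback: {feedback}"
            for score, mistake, feedback in scored[:top_k]
            if score >= threshold
        ]


def build_prompt(memory: MistakeMemory, task: str) -> str:
    # Prepend retrieved reminders so the model can avoid repeating old errors.
    reminders = memory.reminders(task)
    preamble = "\n".join(reminders)
    return f"{preamble}\n\nTask: {task}" if reminders else f"Task: {task}"


if __name__ == "__main__":
    memory = MistakeMemory()
    memory.record(
        task="Summarise the contract's termination clause",
        mistake="omitted the 30-day notice requirement",
        feedback="always include notice periods in legal summaries",
    )
    print(build_prompt(memory, "Summarise the termination clause of this lease"))
```

In a real system the lexical matcher would presumably be replaced by learned representations, and the reminders would be integrated into the model's memory or context rather than simply prepended to a prompt; the sketch only conveys the store-retrieve-remind loop.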

Second, he explored how models can learn about the world. Despite their scale, today’s models often fail in high-stakes domains such as law, medicine, and finance because their knowledge is static and incomplete. Niket Tandon argued that this can be mitigated by injecting additional knowledge into the models’ memory without compromising trust. He discussed emerging memory-based strategies for injecting curated knowledge directly into models, moving beyond retrieval-augmented generation (RAG) to build systems that can robustly and efficiently integrate domain expertise while respecting privacy.
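To picture what a memory-based alternative to prompt-level retrieval could look like, here is a toy NumPy sketch of a key-value knowledge memory: curated facts are encoded as vectors, and the model reads from the memory with an attention-style lookup, producing a vector intended to be folded into the model's computation rather than text pasted into a prompt. The `KnowledgeMemory` class, the hashed bag-of-words encoder, and the example facts are placeholder assumptions, not the strategies presented in the talk.

```python
# Hypothetical sketch of a key-value knowledge memory read with attention.
# The encoder and facts are placeholders; a real system would use the
# language model's own representations and fold the read vector into its
# hidden states.

import numpy as np


def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()


class KnowledgeMemory:
    """External memory of curated facts, read via dot-product attention."""

    def __init__(self, dim: int):
        self.dim = dim
        self.keys: list[np.ndarray] = []
        self.values: list[np.ndarray] = []
        self.facts: list[str] = []

    def encode(self, text: str) -> np.ndarray:
        # Placeholder encoder: normalised hashed bag-of-words.
        vec = np.zeros(self.dim)
        for token in text.lower().split():
            vec[hash(token) % self.dim] += 1.0
        return vec / (np.linalg.norm(vec) + 1e-8)

    def write(self, fact: str) -> None:
        # Store a curated fact as a (key, value) pair.
        vec = self.encode(fact)
        self.keys.append(vec)
        self.values.append(vec)
        self.facts.append(fact)

    def read(self, query: str, temperature: float = 1.0) -> tuple[np.ndarray, str]:
        # Attend over stored keys; return the blended value vector and the
        # single best-matching fact for inspection.
        q = self.encode(query)
        scores = np.array([k @ q for k in self.keys]) / temperature
        weights = softmax(scores)
        read_vector = np.sum([w * v for w, v in zip(weights, self.values)], axis=0)
        return read_vector, self.facts[int(np.argmax(weights))]


if __name__ == "__main__":
    memory = KnowledgeMemory(dim=64)
    memory.write("Statute X requires written notice within 30 days.")
    memory.write("Regulation Y caps late-payment interest at 8 percent.")
    _, best = memory.read("What notice period does Statute X require?")
    print("Most relevant curated fact:", best)
```

The design choice the sketch hints at is that the memory lives outside the model's weights but inside its computation, so curated or private knowledge can be added, audited, or removed without retraining, whereas RAG operates only at the level of retrieved text.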

Together, these approaches point to a new generation of language models: systems that learn from the world and about the world. Through case studies and early results, the speaker explored how fusing memory and knowledge can create agents that reason better, fail less often, and ultimately serve us more effectively.