In this episode of Shared Everything, reasoning models take center stage. No longer just text predictors, they loop, branch, and pull in outside data, straining context windows and GPU limits. Alon Horev, CTO of VAST Data, unpacks how this shift stresses infrastructure, while Kevin Deierling, SVP of Networking at NVIDIA, explains how NVIDIA Dynamo moves KV caches and workloads across GPUs, networks, and storage to keep agentic workflows flowing. Data platforms become an extension of memory, enabling longer chains of thought, real-time agents, and secure, observable data paths. The result is a vivid picture of the AI datacenter as the nervous system for reasoning at scale.