Azure Storage 2026: Built for Agentic Scale and Cloud-Native Apps

2025 was a pivotal year for Azure Storage, and we’re heading into 2026 with a clear focus on helping customers turn AI into real impact. As outlined in last December’s Azure Storage News: Unlocking the Future of Data, Azure Storage is evolving into a single, intelligent platform that supports the entire enterprise-scale AI lifecycle with the performance that modern workloads demand.

Looking ahead to 2026, our investments span the full breadth of this lifecycle as AI becomes fundamental to every industry. We are improving storage performance for frontier-model training, delivering purpose-built solutions for large-scale AI inference and emerging agentic applications, and enabling cloud-native applications to operate at agentic scale. In parallel, we are simplifying adoption for mission-critical workloads, reducing TCO, and deepening partnerships to co-design AI-optimized solutions with our customers.

We are grateful to our customers and partners for their trust and collaboration, and we are excited to shape the next chapter of Azure Storage together in the coming year.

Extending from training to inference

AI workloads range from large, centralized model training to large-scale inference, where models are continuously applied across products, workflows, and real-world decision making. LLM training continues to run on Azure, and we’re investing to stay on the cutting edge by expanding scale, improving throughput, and optimizing how model files, checkpoints, and training datasets move through storage.

The innovations that helped OpenAI operate at unprecedented scale are now available to all businesses. Scaled accounts for Blob Storage let a single account span hundreds of scale units within a region and process the millions of objects needed to turn enterprise data into training and tuning datasets for applied AI. Our partnership with NVIDIA on DGX Cloud shows how that scale translates into real results. DGX Cloud was designed to run on Azure, pairing accelerated computing with high-performance Azure Managed Lustre (AMLFS) to support LLM research and automotive and robotics applications. AMLFS delivers leading price-performance for keeping GPU fleets fed around the clock. We recently released preview support for 25 PiB namespaces and up to 512 GB/s throughput, making AMLFS a best-in-class managed Lustre offering in the cloud.
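
As an illustration of the dataset pattern, the sketch below streams training shards out of a blob container with the azure-storage-blob Python SDK. The account URL, container name, and prefix are placeholders for this example, not part of the scaled-accounts feature itself.

```python
# A minimal sketch: enumerate and download dataset shards from Blob Storage.
# Assumes Microsoft Entra ID auth via DefaultAzureCredential; names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("training-data")

# Stream each shard under a run prefix straight to local disk.
for blob in container.list_blobs(name_starts_with="datasets/run-042/"):
    with open(blob.name.rsplit("/", 1)[-1], "wb") as f:
        container.download_blob(blob.name).readinto(f)
```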

Looking ahead, we’re deepening integration with popular first- and third-party AI frameworks such as Microsoft Foundry, Ray, Anyscale, and LangChain, enabling seamless connections to Azure Storage out of the box. Our native Azure Blob Storage integration within Foundry consolidates enterprise data into Foundry IQ, making Blob Storage the foundational layer for grounding enterprise knowledge, fine-tuning models, and serving low-latency context for inference, all within the tenant’s security and management controls.
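
To make the LangChain connection concrete, here is a minimal sketch using the community loader for Blob Storage; the connection string and container name are placeholders, and the Foundry IQ grounding itself is configured on the Foundry side rather than in this code.

```python
# A hedged sketch of pulling grounding documents for a RAG pipeline from
# Blob Storage via LangChain's community loader. Values are placeholders.
from langchain_community.document_loaders import AzureBlobStorageContainerLoader

loader = AzureBlobStorageContainerLoader(
    conn_str="<storage-connection-string>",
    container="enterprise-knowledge",
)
docs = loader.load()  # one Document per blob, ready for chunking and embedding
print(f"Loaded {len(docs)} documents for indexing")
```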

From model training to full-scale inference, Azure Storage supports the entire agent lifecycle: efficiently distributing large model files, storing and retrieving long-term context, and serving data from RAG vector stores. By optimizing for each of these end-to-end patterns, Azure Storage offers an efficient solution for every stage of AI inference.
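
As one hedged illustration of the long-term context piece, the sketch below persists an agent’s conversation state as JSON blobs keyed by session ID; the container name and schema are assumptions made for the example, not a shipped Azure API.

```python
# An illustrative pattern (not a shipped API): blob-backed agent memory,
# one JSON object per session. Account and container names are placeholders.
import json

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    "https://<account>.blob.core.windows.net", credential=DefaultAzureCredential()
)
memory = service.get_container_client("agent-memory")

def save_context(session_id: str, turns: list[dict]) -> None:
    # overwrite=True keeps only the latest context for the session
    memory.upload_blob(f"{session_id}.json", json.dumps(turns), overwrite=True)

def load_context(session_id: str) -> list[dict]:
    return json.loads(memory.download_blob(f"{session_id}.json").readall())
```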

Evolving cloud-native applications for agentic scale

As inference becomes the dominant AI workload, autonomous agents are reshaping the way cloud-native applications interact with data. Unlike human-driven systems with predictable query patterns, agents work around the clock and issue orders of magnitude more queries than traditional users. This surge in concurrency puts pressure on the database and storage tiers, forcing enterprises to rethink how they design new cloud-native applications.
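
To picture that concurrency shift, the hedged sketch below fans out hundreds of simultaneous reads with the async blob client, the kind of fan-out a fleet of agents generates where a human user would issue a single query; all names are placeholders.

```python
# A minimal sketch of agent-scale fan-out: hundreds of concurrent blob reads
# via the async SDK. Account, container, and object names are placeholders.
import asyncio

from azure.identity.aio import DefaultAzureCredential
from azure.storage.blob.aio import BlobServiceClient

async def fetch(container, name: str) -> bytes:
    downloader = await container.download_blob(name)
    return await downloader.readall()

async def main() -> None:
    async with DefaultAzureCredential() as cred:
        async with BlobServiceClient(
            "https://<account>.blob.core.windows.net", credential=cred
        ) as service:
            container = service.get_container_client("agent-workspace")
            names = [f"task-{i}.json" for i in range(500)]
            # One gather call stands in for hundreds of agents querying at once.
            results = await asyncio.gather(*(fetch(container, n) for n in names))
            print(f"fetched {len(results)} objects concurrently")

asyncio.run(main())
```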

Azure Storage is partnering with SaaS leaders like ServiceNow, Databricks, and Elastic to optimize our block storage portfolio for agentic scale. Moving forward, Elastic SAN will become the foundational building block for these cloud-native workloads, starting with the transformation of Microsoft’s own database offerings. It provides fully managed block storage pools that let different workloads share provisioned resources, with guardrails for multi-tenant data hosting. We’re pushing scale-unit limits to enable denser packing, and building capabilities to manage agentic traffic patterns for SaaS providers.

As cloud-native workloads adopt Kubernetes for rapid scaling, we’re simplifying the development of stateful applications through our Kubernetes-native storage orchestrator, Azure Container Storage (ACStor), along with our CSI drivers. Our recent ACStor release signals two directional shifts that will guide upcoming investments: adopting the Kubernetes operator model for more complex orchestration, and open-sourcing the codebase to collaborate and innovate with the broader Kubernetes community.
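
For a sense of the Kubernetes-native experience from the application side, the sketch below requests an ACStor-backed volume by creating a PersistentVolumeClaim with the official Kubernetes Python client; the storage class name is a placeholder that depends on how ACStor is configured in your cluster.

```python
# A hedged sketch: a stateful app claims ACStor-managed storage through a
# standard PVC. The storage class name is a placeholder, not a fixed ACStor name.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="agent-state"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="acstor-azuredisk",  # placeholder class name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```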

Together, these investments create a strong foundation for the next generation of cloud-native applications, where storage must scale seamlessly and operate efficiently to serve as the data platform for agentic-scale systems.

Breaking price-performance barriers for mission-critical workloads

In addition to evolving AI workloads, enterprises continue to expand their mission-critical workloads in Azure.

SAP and Microsoft are working together to extend the power of SAP Core while introducing AI-driven agents like Joule that enrich Microsoft 365 Copilot with enterprise context. The latest Azure M-series enhancements add substantial scalability headroom for SAP HANA, pushing disk storage performance to ~780,000 IOPS and 16 GB/s throughput. For shared storage, Azure NetApp Files (ANF) and Azure Premium Files provide the high-performance NFS/SMB foundations that SAP environments rely on, while ANF’s Flexible service level and Azure Files Provisioned v2 optimize total cost of ownership. We will soon introduce the Elastic ZRS service level in ANF, delivering zone-redundant high availability and consistent performance through synchronous replication across Availability Zones on Azure’s ZRS architecture, without additional operational complexity.

Similarly, Ultra Disks have become the foundation of platforms such as BlackRock’s Aladdin, which must respond to market changes immediately and sustain performance under heavy load. With average latency well below 500 microseconds and support for 400K IOPS and 10 GB/s throughput, Ultra Disks enable faster risk calculations, more agile portfolio management, and resilient performance on BlackRock’s highest-volume trading days. Paired with Ebsv6 VMs, Ultra Disks can reach 800K IOPS and 14 GB/s for the most demanding mission-critical workloads. And with flexible provisioning, customers can tune performance to exactly what they need while optimizing TCO.
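
Flexible provisioning is scriptable. The hedged sketch below raises an Ultra Disk’s IOPS and throughput targets independently of its size with the azure-mgmt-compute SDK, for example ahead of a peak trading day; the resource names and performance targets are placeholders.

```python
# A hedged sketch of Ultra Disk flexible provisioning: dial performance up or
# down without resizing. Subscription, group, disk, and targets are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DiskUpdate

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Raise IOPS and MB/s ahead of a peak, then dial back the same way afterward.
poller = compute.disks.begin_update(
    "trading-rg",
    "aladdin-data-disk",
    DiskUpdate(disk_iops_read_write=160_000, disk_m_bps_read_write=4_000),
)
print(poller.result().disk_iops_read_write)
```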

These combined investments provide enterprises with a more resilient, scalable and cost-effective platform for their most critical workloads.

Designing for new power and hardware realities

The global rise of artificial intelligence is straining energy grids and hardware supply chains. Rising energy costs, tight data center budgets and industry-wide HDD/SSD shortages mean organizations cannot scale infrastructure by simply adding more hardware. Storage must become more efficient and intelligent by design.

We streamline the entire stack to extract maximum performance from the hardware with minimal overhead. Combined with intelligent load balancing and cost-effective tiering, this uniquely positions us to help customers scale storage sustainably, even as power and hardware availability become strategic constraints. With continued innovation in Azure Boost Data Processing Units (DPUs), we expect step-function gains in storage performance and scale while using even less power per unit.
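
Tiering is one lever customers already control directly. The sketch below moves cold dataset blobs from the Hot to the Cool tier with azure-storage-blob; the account, container, and prefix are placeholders.

```python
# A minimal sketch of cost-aware tiering: demote blobs under an archive prefix
# from Hot to Cool. Account, container, and prefix are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    "https://<account>.blob.core.windows.net", credential=DefaultAzureCredential()
)
container = service.get_container_client("training-data")

for blob in container.list_blobs(name_starts_with="datasets/archived/"):
    # "Cool" lowers at-rest cost for data that is read infrequently.
    container.get_blob_client(blob.name).set_standard_blob_tier("Cool")
```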

AI pipelines can span on-premises environments, neocloud GPU clusters, and the cloud, but many of these environments are limited by power capacity or storage supply. When those limits become a bottleneck, we make it easier to move workloads to Azure. We are investing in integrations that make external datasets first-class citizens in Azure, enabling seamless access to training, fine-tuning, and derived data wherever it lives. And as cloud storage evolves toward AI-ready datasets, Azure Storage is introducing pipeline-optimized experiences that simplify how customers feed data into downstream AI services.

Accelerating innovation through an ecosystem of storage partners

We can’t do it alone. Azure Storage works closely with strategic partners to take inference performance to the next level. Beyond the self-service publishing available through the Azure Marketplace, we go a step further by dedicating expert engineering resources to build highly optimized, deeply integrated services with partners.

In 2026, you’ll see more co-engineered solutions like Commvault Cloud for Azure, Dell PowerScale, Azure Native Qumulo, Pure Storage Cloud, Rubrik Cloud Vault, and Veeam Data Cloud. We will also focus on hybrid solutions with partners such as VAST Data and Komprise, enabling data movement that unlocks the power of Azure AI services and infrastructure and powers compelling agentic and customer-facing AI initiatives.

Here’s to an exciting new year with Azure Storage

As we move into 2026, our vision remains simple: to help every customer get more value from their data with storage that’s faster, smarter and built for the future. Whether it’s enabling AI, scaling cloud-native applications, or powering mission-critical workloads, Azure Storage is here to help you innovate with confidence in the coming year.
