Solving the AI Bottleneck: Storage Architectures that Keep GPUs Fed


This event qualifies for 1 CPE


In this technical session, we'll explore how VDURA's V5000 and VPOD architectures address the unique performance, scalability, and compliance challenges of modern AI workloads. From metadata-intensive language model training to multi-model defense applications, we'll break down how unified namespaces, dynamic data acceleration, and parallel I/O paths eliminate traditional constraints on AI pipelines.


Attendees will gain insight into:

  • How to overcome metadata bottlenecks in large-scale training.
  • Strategies for sustaining GPU saturation with 1 TB/s throughput per rack.
  • Balancing training, inference and preprocessing workloads in a single infrastructure.
  • Architecting for edge-to-core AI deployments in federal environments.
  • Meeting governance and compliance requirements while scaling AI models.

Whether you're designing next-generation AI pipelines, optimizing multi-node training, or tackling AI challenges specific to federal environments, this session will provide a blueprint for building storage architectures that deliver both performance and resilience at scale.

Speaker and Presenter Information

David White, Federal Account Executive, VDURA


Craig Flaskerud, Storage Architect and Product Manager, VDURA

Relevant Government Agencies

Other Federal Agencies, Federal Government, State & Local Government




Event Type
Webcast


This event has no exhibitor/sponsor opportunities


When
Tue, Nov 4, 2025, 11:00am - 12:00pm ET


Cost
Complimentary: $0.00




Event Sponsors

VDURA


Organizer
VDURA Government Team at Carahsoft




