
Handling inter-DC/Edge AI-related network traffic: Problem statement
draft-aft-ai-traffic-01

Document Type Expired Internet-Draft (individual)
Expired & archived
Authors Antoine Fressancourt, Luigi Iannone, David Lou, Dirk Trossen
Last updated 2025-10-13 (Latest revision 2025-04-11)
Replaces draft-ai-traffic
RFC stream (None)
Intended RFC status (None)
Stream Stream state (No stream defined)
Consensus boilerplate Unknown
RFC Editor Note (None)
IESG IESG state Expired
Telechat date (None)
Responsible AD (None)
Send notices to (None)

This Internet-Draft is no longer active. A copy of the expired Internet-Draft remains available from the IETF Datatracker.

Abstract

The growth in the number of parameters of large language models (LLMs), together with the need to use or train those models with private or protected data, will require service providers operating LLM-based services to cooperate to train, specialize, or serve those services across datacenters. Given their structure, the number of parameters they incorporate, and the collective communication libraries they are built with, LLM training and inference (or serving) network traffic has specific characteristics. Understanding the specificities of AI-related workloads is therefore critical to determine how to operate AI-based services in a federated setting across datacenters.
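As an illustration of why collective communication gives this traffic its distinctive shape (this sketch is not part of the draft, and all concrete numbers in it are hypothetical), the per-node volume of a textbook ring all-reduce, the pattern commonly used to synchronize gradients during distributed training, can be estimated directly from the parameter count:

```python
def ring_allreduce_bytes_per_node(param_count: int, bytes_per_param: int, num_nodes: int) -> int:
    """Bytes each node sends during one ring all-reduce of the full gradient.

    The reduce-scatter and all-gather phases each move (p - 1)/p of the
    payload, so each node sends 2 * (p - 1)/p * S bytes per step.
    """
    payload = param_count * bytes_per_param
    return 2 * (num_nodes - 1) * payload // num_nodes

# Hypothetical example: a 70B-parameter model, fp16 gradients, 8 nodes.
traffic = ring_allreduce_bytes_per_node(70_000_000_000, 2, 8)
print(f"{traffic / 1e9:.0f} GB per node per training step")  # → 245 GB
```

Because this volume recurs at every training step, inter-datacenter links carrying such traffic see large, highly synchronized, periodic bursts rather than the statistically multiplexed flows typical of other services.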
