<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.24 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-li-fantel-pdp-frr-00" category="std" consensus="true" submissionType="IETF" xml:lang="en" version="3">
  <!-- xml2rfc v2v3 conversion 3.28.0 -->
  <front>
    <title abbrev="FRR based on Programmable Data Plane">Fast Reroute based on Programmable Data Plane (PDP-FRR)</title>
    <seriesInfo name="Internet-Draft" value="draft-li-fantel-pdp-frr-00"/>
    <author initials="D." surname="Li" fullname="Dan Li">
      <organization>Tsinghua University</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>tolidan@tsinghua.edu.cn</email>
      </address>
    </author>
    <author initials="K." surname="Gao" fullname="Kaihui Gao">
      <organization>Zhongguancun Laboratory</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>gaokh@zgclab.edu.cn</email>
      </address>
    </author>
    <author initials="S." surname="Wang" fullname="Shuai Wang">
      <organization>Zhongguancun Laboratory</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>wangshuai@zgclab.edu.cn</email>
      </address>
    </author>
    <author initials="L." surname="Chen" fullname="Li Chen">
      <organization>Zhongguancun Laboratory</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>lichen@zgclab.edu.cn</email>
      </address>
    </author>
    <author initials="X." surname="Geng" fullname="Xuesong Geng">
      <organization>Huawei</organization>
      <address>
        <postal>
          <city>Beijing</city>
          <country>China</country>
        </postal>
        <email>gengxuesong@huawei.com</email>
      </address>
    </author>
    <date year="2025" month="March" day="03"/>
    <area>General</area>
    <workgroup>Internet Engineering Task Force</workgroup>
    <abstract>

<t>This document introduces a fast reroute architecture within the programmable data plane (PDP-FRR) that enhances network resilience through rapid failure detection and swift path migration, leveraging in-band network telemetry and source routing. Unlike traditional methods that rely on the control plane and incur significant rerouting delays, the proposed architecture uses a white-box model of the data plane to accurately distinguish and analyze packet losses, enabling immediate identification of link failures (including black-hole and gray failures). By combining in-band network telemetry with source routing, the proposed solution reduces reroute times to a few milliseconds, a substantial improvement over existing practices.</t>
    </abstract>
  </front>
  <middle>

<section anchor="introduction">
      <name>Introduction</name>
      <t>In the rapidly evolving landscape of network technologies, ensuring the resilience and reliability of data transmission has become paramount. Traditional approaches to network failure detection and rerouting, heavily reliant on the control plane, often suffer from significant delays due to the inherent latency of failure notification, route learning, and routing table updates. These delays can severely impact the performance of time-sensitive applications, making it crucial to explore more efficient methods for failure tolerance. The Fast Reroute based on Programmable Data Plane (PDP-FRR) architecture leverages the capabilities of the programmable data plane to significantly reduce the time required to detect link failures and reroute traffic, thereby enhancing the overall robustness of datacenter networks.</t>
      <t>The PDP-FRR architecture integrates in-band network telemetry (INT <xref target="RFC9232"/>) with source routing (SR <xref target="RFC8402"/>) to facilitate rapid path migration directly within the data plane. Unlike traditional schemes that treat the data plane as a "black box" and struggle to distinguish between different types of packet losses, PDP-FRR adopts a "white box" model of the data plane's packet processing logic. This allows for a precise analysis of packet loss types and the implementation of targeted statistical methods for failure detection. By deploying packet counters at both ends of a link and comparing them periodically, PDP-FRR can identify failure-induced packet losses with high speed and accuracy.</t>
      <t>Furthermore, by maintaining a path information table in advance and utilizing SR (e.g., SRv6 <xref target="RFC8986"/> and SR-MPLS <xref target="RFC8660"/>), the PDP-FRR architecture enables the sender to quickly switch traffic to alternative paths without control plane intervention. This not only circumvents the delays associated with traditional control plane reroute but also overcomes the limitations of data plane reroute schemes that cannot pre-prepare for all failure scenarios. The integration of INT allows for real-time failure notification, making it possible to control traffic recovery times within a few milliseconds, significantly faster than conventional methods. This document details the principles, architecture, and operational mechanisms of PDP-FRR, aiming to contribute to the development of more resilient and efficient datacenter networks.</t>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>

</section>
    </section>
    <section anchor="terminology">
      <name>Terminology</name>
      <dl>
        <dt>Packet Counters:</dt>
        <dd>
          <t>A counter or data structure used to count the number of packets passing a given point during a measurement interval.</t>
        </dd>
        <dt>Path Information Table:</dt>
        <dd>
          <t>The table maintained by the sender that contains information about the available paths and their associated metrics.</t>
        </dd>
        <dt>Upstream Meter (UM):</dt>
        <dd>
          <t>The meter used to measure the number of packets passing through the upstream egress port of a link.</t>
        </dd>
        <dt>Downstream Meter (DM):</dt>
        <dd>
          <t>The meter used to measure the number of packets passing through the downstream ingress port of a link.</t>
        </dd>
        <dt>FDM-U:</dt>
        <dd>
          <t>The Failure Detection Mechanism (FDM) agent deployed on the upstream switch; it generates request packets to collect the UM and DM values.</t>
        </dd>
        <dt>FDM-D:</dt>
        <dd>
          <t>The FDM agent deployed on the downstream switch; it generates response packets that feed the UM and DM values back to FDM-U.</t>
        </dd>
      </dl>
    </section>
    <section anchor="pdp-frr-architecture-overview">
      <name>PDP-FRR Architecture Overview</name>
      <figure anchor="fig-pdpfrr-architecture">
        <name>PDP-FRR Architecture.</name>
        <artwork><![CDATA[
              4.SR-based Reroute     Switch#3                  
                       +----------> +------------+               
                       |     +------|            |---------+     
    Endhost#1          |     |      +------------+         |     
+---------------+      |     |                        Switch#4   
|               |------+     |                      +-----------+
| +-----------+ |        Switch#1                   |+--------- |
| |  3. Path  | |      +----------+                 ||  Packet ||
| | Migration | +------|          |                 || Counters||
| | Mechanism | |      +----------+                 ||in Inport||
| +-----------+ |<------+    |                      |+---------+|
|               |       |    |         Switch#2     +-----------+
+---------------+       |    |      +------------+         ^ |   
                        |    |      |+----------+|         | |   
                        |    |      ||   Packet ||         | |   
                        |    +------||  Counters||---------+ |   
                        +-----------||in Outport|| <---------+   
 2.Failure Notification Mechanism   |+----------+|      1.FDM    
                                    +------------+               
]]></artwork>
      </figure>
      <t>Traditional network failure detection methods generate probe packets through the control plane (such as BFD <xref target="RFC5880"/>), treating the network data plane as a "black box". If there is no response to a probe, a link failure is assumed, with no ability to distinguish fault-induced packet loss from non-fault packet loss (such as congestion loss or policy loss). PDP-FRR instead models the packet processing logic in the data plane as a white box, analyzing all types of packet loss and designing corresponding statistical methods. As shown in <xref target="fig-pdpfrr-architecture"/>, PDP-FRR deploys packet counters at both ends of a link, which tally the total number of packets passing through as well as the number of non-fault packet losses, periodically comparing the two sets of counters to precisely measure fault-induced packet loss. This method operates entirely in the data plane, with probe packets generated directly by programmable network chips (e.g., P4-programmable ASICs), allowing a higher probe frequency and the ability to detect link failures within a millisecond.</t>
      <t>After detecting a link failure, PDP-FRR enables fast path migration in the data plane by combining INT with source routing. As shown in <xref target="fig-pdpfrr-architecture"/>, after a switch detects a link failure, it promptly notifies the sender of the failure using INT; the sender then quickly reroutes the traffic to another available path using source routing, based on a path information table maintained in advance. All steps of this method are completed in the data plane, allowing traffic recovery time to be controlled within a few RTTs (on the order of milliseconds).</t>
      <t>In summary, the PDP-FRR architecture involves accurately detecting link failures within the network, distinguishing between packet losses caused by failures and normal packet losses, and then having switches convey failure information back to the end hosts via INT <xref target="RFC9232"/>. The end hosts, in turn, utilize SR (e.g., SRv6 <xref target="RFC8986"/> and SR-MPLS <xref target="RFC8660"/>) to change the paths used by the traffic. The PDP-FRR architecture therefore comprises three processes: failure detection, failure notification, and path migration.</t>
    </section>
    <section anchor="failure-detection-mechanism">
      <name>Failure Detection Mechanism</name>
      <figure anchor="fig-detection">
        <name>Failure Detection Mechanism: counter deployment locations and request packet generation.</name>
        <artwork><![CDATA[
         Upstream Switch                   Downstream Switch        
+--------------------------------+  +------------------------------+
|+--------------+  +------------+|  |+-----------------+ +--------+|
||         +---+|  |+--+        ||  ||        +--++---+| |        ||
|| Ingress |FDM||->||UM| Egress ||  || Ingress|DM||FDM|+>| Egress ||
||Pipeline | -U||  ||  |Pipeline||  ||Pipeline|  || -D|| |Pipeline||
||         +---+|  |+--+        ||  ||        +--++---+| |        ||
|+--------------+  +------------++->|+-----------------+ +--------+|
|          +---+    +---+--+     |  |  +---+--+--+                 |
|          |Req|->  |Req|UM|->   |  |  |Req|UM|DM|--->             |
|          +---+    +---+--+     |  |  +---+--+--+                 |
|                                |  |      +----+--+--+            |
|                                |  | <----|Resp|UM|DM|            |
|                                |  |      +----+--+--+            |
+--------------------------------+  +------------------------------+
]]></artwork>
      </figure>
      <t>This document designs a failure detection mechanism (FDM) based on packet counters, leveraging the programmable data plane. As shown in <xref target="fig-detection"/>, this mechanism employs counters at both ends of a link to tally packet losses, so adjacent switches can collaborate to detect failures of any type (including gray failures). The mechanism is also capable of accurately distinguishing non-failure packet losses, thus avoiding false positives.</t>
      <section anchor="counter-deployment">
        <name>Counter Deployment</name>
        <t>FDM places a pair of counter arrays on two directly connected programmable switches to achieve rapid and accurate failure detection. <xref target="fig-detection"/> illustrates the deployment locations of these counters, which include two types of meter arrays: (1) the Upstream Meter (UM) is positioned at the beginning of the egress pipeline of the upstream switch; (2) the Downstream Meter (DM) is located at the end of the ingress pipeline of the downstream switch. 
Each meter records the number of packets passing through. With this arrangement, the difference between UM and DM represents the number of packets lost on the link. It is important to note that packets dropped due to congestion in the switch buffers are not counted, as the counters do not cover the buffer areas.</t>
        <t>Furthermore, to exclude packet losses caused by non-failure reasons, each meter array includes some counters to tally the number of non-failure packet losses (e.g., TTL expiry). Therefore, FDM is capable of accurately measuring the total number of failure-induced packet losses occurring between UM and DM, including losses due to physical device failures (e.g., cable dust or link jitter) and control plane oscillations (e.g., route lookup misses).</t>
        <figure anchor="fig-full-deployment">
          <name>FDM (UM and DM) deployment on all network links.</name>
          <artwork><![CDATA[
                                               +----------+
                                               | switch#3 |
                                               +-----+    |
         +----------+    +---------------+  +->|DM#2 |    |
         |          |    |         +-----+  |  +-----+    |
         |    +-----+    +-----+   |UM#2 |--+  +----------+
         |    |UM#1 |--->|DM#1 |   +-----+                 
         |    +-----+    +-----+   +-----+                 
         |          |    |         |UM#3 |--+  +----------+       
         | switch#1 |    |switch#2 +-----+  |  +-----+    |
         +----------+    +---------------+  +->|DM#3 |    |
                                               +-----+    |
                                               | switch#4 |
                                               +----------+
]]></artwork>
        </figure>
        <t><xref target="fig-full-deployment"/> illustrates the deployment method of FDM across the entire datacenter network. Similar to the BFD mechanism, FDM needs to cover every link in the network. Therefore, each link in the network requires the deployment of a pair of UM and DM. It is important to note that although only the unidirectional deployment from Switch#1 to Switch#2 is depicted in <xref target="fig-full-deployment"/>, Switch#2 also sends traffic to Switch#1. To monitor the link from Switch#2 to Switch#1, FDM deploys a UM on the egress port of Switch#2 and a DM on the ingress port of Switch#1. Consequently, FDM utilizes two pairs of UM and DM to monitor a bidirectional link.</t>
      </section>
      <section anchor="counter-comparison">
        <name>Counter Comparison</name>
        <t>As shown in <xref target="fig-detection"/>, the FDM agent in the upstream switch (FDM-U) periodically sends request packets to the opposite end of the link. These request packets record the values of UM and DM along the path through the INT mechanism. Upon detecting a request packet, the FDM agent in the downstream switch (FDM-D) immediately turns it into a response packet and bounces it back, so that the packet carrying the UM and DM values returns to FDM-U. FDM-U then processes the response packets and calculates the packet loss rate of the link over the past interval. If FDM-U repeatedly fails to receive a response packet, indicating that either the response or the request packets are lost, FDM-U treats the packet loss rate of that link as 100%; this detects black-hole failures on the link. Otherwise, if the packet loss rate exceeds a threshold (e.g., 5%) for an extended period, FDM-U marks that outgoing link as failed.</t>
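        <t>The FDM-U decision logic described above can be sketched as follows. This is a non-normative illustration: the threshold value, the window length, and the class and method names are assumptions made for the example, not part of the mechanism itself.</t>
        <sourcecode type="python"><![CDATA[
LOSS_THRESHOLD = 0.05  # example threshold from the text (5%)
WINDOW = 3             # consecutive bad intervals before declaring failure

class FdmUpstream:
    """Tracks one outgoing link based on response packets from FDM-D."""

    def __init__(self):
        self.bad_intervals = 0
        self.failed = False

    def on_response(self, um_count, dm_count, dm_non_fault):
        # A response arrived: compute the failure-induced loss rate
        # from the UM and DM values it carries.
        rate = (um_count - dm_count - dm_non_fault) / um_count
        self._update(rate)

    def on_response_timeout(self):
        # No response at all: either the request or the response was
        # lost, so the link is treated as 100% lossy (black-hole case).
        self._update(1.0)

    def _update(self, rate):
        self.bad_intervals = self.bad_intervals + 1 if rate > LOSS_THRESHOLD else 0
        if self.bad_intervals >= WINDOW:
            self.failed = True
]]></sourcecode>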
        <figure anchor="fig-batch-sync">
          <name>An example for illustrating the batch synchronization provided by request packets.</name>
          <artwork><![CDATA[
             Upstream Switch           Downstream Switch   
         +----------------------+    +--------------------+
         |    +---+      +---+  |    |    +---+    +---+  |
         | 000|Req|000000|Req|00+--->|0000|Req|0000|Req|0 |
         |    +---+      +---+  |    |    +---+    +---+  |
         +----------------------+    +--------------------+
         Req: INT request packet
         0: data packet
]]></artwork>
        </figure>
        <t>To ensure the correctness of packet loss rate statistics, FDM must ensure that the packets recorded by UM and DM belong to the same batch. Request packets in fact provide native batch synchronization: FDM only needs to reset the counters upon receiving a request packet and then start counting the new batch. Specifically, since packets between two directly connected ports do not get reordered, the sequence of packets passing through UM and DM is consistent. As shown in <xref target="fig-batch-sync"/>, the request packets isolate the different intervals, so that each counter records the number of packets in the correct interval. When such a request packet reaches the downstream switch, the DM records the number of packets for the same interval; thus, UM and DM count the same batch of packets. However, the loss of a request packet would disrupt FDM's batch synchronization. To avoid this, FDM configures active queue management to prevent request packets from being dropped during buffer congestion. If a request packet is still lost, the loss must be due to a failure.</t>
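        <t>Because packets between two directly connected ports are delivered in order, reading and resetting a meter on each request packet suffices to keep UM and DM aligned on the same batch. A minimal sketch (the meter interface below is an illustrative assumption):</t>
        <sourcecode type="python"><![CDATA[
class Meter:
    """A meter whose value is read and reset by each request packet."""

    def __init__(self):
        self.count = 0

    def on_data_packet(self):
        self.count += 1

    def on_request(self):
        # The request packet closes the current batch: its counter
        # value is recorded into the request, and a new batch starts.
        batch, self.count = self.count, 0
        return batch

# Replaying the same in-order packet sequence through UM and DM
# yields identical per-batch counts when no packet is lost.
sequence = ["data", "data", "data", "req", "data", "data", "req"]
um_batches, dm_batches = [], []
for meter, batches in ((Meter(), um_batches), (Meter(), dm_batches)):
    for pkt in sequence:
        if pkt == "req":
            batches.append(meter.on_request())
        else:
            meter.on_data_packet()
# um_batches == dm_batches == [3, 2]
]]></sourcecode>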
      </section>
      <section anchor="failure-recovery-detection">
        <name>Failure Recovery Detection</name>
        <t>To ensure stable network operation after failure recovery, FDM also periodically monitors the recovery status of links. This requires FDM-U to send a batch of test packets, triggering UM and DM to count, and then to send request packets to collect the UM and DM values. If the link's packet loss rate remains below the threshold for an extended period, FDM-U marks the link as healthy. To reduce the bandwidth overhead of FDM, and considering that detecting failure recovery is less urgent than detecting failures, FDM can use a lower recovery detection frequency, such as once per second.</t>
      </section>
      <section anchor="an-example">
        <name>An Example</name>
        <t>This section presents an example of how FDM calculates the packet loss rate of a link. Assume that 100 packets pass through the upstream switch UM, which records [100,0], with 0 representing no non-fault-related packet loss. Suppose 8 packets are dropped on the physical link and 2 packets are dropped at the ingress pipeline of the downstream switch due to ACL rules. Then, the DM records [90,2], where 90 represents the number of packets that passed through DM, and 2 represents the number of packets dropped due to non-fault reasons. Finally, by comparing the UM with DM, FDM calculates the packet loss rate of the link as 8% ((100-90-2)/100), rather than 10%.</t>
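        <t>The loss-rate computation in this example can be expressed compactly as follows; the two-element array layout [packets passed, non-fault drops] mirrors the meter records above:</t>
        <sourcecode type="python"><![CDATA[
def link_loss_rate(um, dm):
    """Failure-induced loss rate of a link from its UM and DM records.

    um: [packets counted by UM, non-fault drops recorded at UM]
    dm: [packets counted by DM, non-fault drops (e.g., ACL) recorded at DM]
    """
    um_total, _ = um
    dm_total, dm_non_fault = dm
    # Packets missing between UM and DM, minus losses with known
    # non-fault causes, are attributed to failures.
    return (um_total - dm_total - dm_non_fault) / um_total

# The example from the text: UM records [100, 0], DM records [90, 2].
rate = link_loss_rate([100, 0], [90, 2])  # 0.08, i.e., 8% rather than 10%
]]></sourcecode>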
      </section>
    </section>
    <section anchor="failure-notification-mechanism">
      <name>Failure Notification Mechanism</name>
      <t>Traditional control plane reroute schemes require several steps after detecting a failure, including failure notification, route learning, and routing table updates, which can take several seconds to modify traffic paths. Data plane reroute schemes, on the other hand, cannot prepare alternative routes for all possible failure scenarios in advance. To achieve fast reroute in the data plane, PDP-FRR combines INT with source routing to quickly reroute traffic.</t>
      <t>Assume that the sender periodically sends INT probe packets along the path of the traffic to collect fine-grained network information, such as port rates and queue lengths. After a switch detects a link failure, it promptly notifies the sender of the failure within the INT probe.
Specifically, when a probe emitted by an end host is about to be forwarded to an egress link that has failed, PDP-FRR immediately bounces the probe back within the data plane and marks the failure status in the probe. The probe carrying the failure status then returns to the sender.</t>
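      <t>The bounce behavior can be sketched as a simple forwarding decision; the probe field names below are illustrative assumptions rather than a wire format:</t>
      <sourcecode type="python"><![CDATA[
def forward_probe(probe, egress_link_failed):
    """Forwarding decision for an INT probe at a switch."""
    if egress_link_failed:
        # Bounce the probe back toward the sender and mark the
        # failure status inside it, entirely in the data plane.
        probe["failure"] = True
        probe["direction"] = "to_sender"
    # Otherwise the probe continues along its source-routed path.
    return probe

probe = forward_probe({"failure": False, "direction": "to_receiver"}, True)
# probe now carries the failure status back toward the sender
]]></sourcecode>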
    </section>
    <section anchor="path-migration-mechanism">
      <name>Path Migration Mechanism</name>
      <t>To enable sender-driven fast reroute within the data plane, the sender needs to maintain a path information table in advance so that it can quickly switch to another available path upon detecting a network failure. Specifically, within the transport layer protocol stack of the sender, this document designs a Path Migration Mechanism (PMM), which periodically probes all available paths to other destinations. This information can also be obtained through other means, such as from an SDN controller. For a new flow, the sender selects an available path from the path information table and uses source routing (e.g., SRv6 <xref target="RFC8986"/> and SR-MPLS <xref target="RFC8660"/>) to control the path of this flow. Similarly, the sender also controls the path of the INT probes using source routing, so that they probe the path taken by the traffic flow. The fine-grained network information brought back by these probes can be used for congestion control, such as HPCC <xref target="hpcc"/>.</t>
      <t>When the FDM mechanism above takes effect and the returned INT information makes the sender aware of a failure on the path, the sender immediately marks this path as faulty in the path information table and chooses another available path, modifying the source routing headers of both the data packets and the INT probes accordingly. To stay aware of the availability of other paths, PMM periodically probes them and updates the path information table, including paths entering and recovering from the failed state.</t>
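      <t>The path information table and its failure handling can be sketched as follows; the entry layout and method names are illustrative assumptions, not a specified interface:</t>
      <sourcecode type="python"><![CDATA[
class PathTable:
    """Path information table maintained by the sender (PMM sketch)."""

    def __init__(self):
        # dst -> {path_id: {"segments": [...], "healthy": bool}}
        self.paths = {}

    def add_path(self, dst, path_id, segments):
        self.paths.setdefault(dst, {})[path_id] = {
            "segments": segments, "healthy": True}

    def mark_failed(self, dst, path_id):
        # Called when an INT probe returns with the failure flag set.
        self.paths[dst][path_id]["healthy"] = False

    def mark_recovered(self, dst, path_id):
        # Called when periodic probing sees the path healthy again.
        self.paths[dst][path_id]["healthy"] = True

    def select(self, dst):
        # Pick a healthy path; the source-routing header (e.g., an SRv6
        # segment list) for data packets and INT probes is built from it.
        for entry in self.paths.get(dst, {}).values():
            if entry["healthy"]:
                return entry["segments"]
        return None
]]></sourcecode>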
    </section>
    <section anchor="Security">
      <name>Security Considerations</name>
      <t>TBD.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document makes no request of IANA.</t>
      <t>Note to RFC Editor: this section may be removed on publication as an RFC.</t>
    </section>
    <section numbered="false" anchor="Acknowledgements">
      <name>Acknowledgements</name>
      <t>TBD.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC9232">
          <front>
            <title>Network Telemetry Framework</title>
            <author fullname="H. Song" initials="H." surname="Song"/>
            <author fullname="F. Qin" initials="F." surname="Qin"/>
            <author fullname="P. Martinez-Julia" initials="P." surname="Martinez-Julia"/>
            <author fullname="L. Ciavaglia" initials="L." surname="Ciavaglia"/>
            <author fullname="A. Wang" initials="A." surname="Wang"/>
            <date month="May" year="2022"/>
            <abstract>
              <t>Network telemetry is a technology for gaining network insight and facilitating efficient and automated network management. It encompasses various techniques for remote data generation, collection, correlation, and consumption. This document describes an architectural framework for network telemetry, motivated by challenges that are encountered as part of the operation of networks and by the requirements that ensue. This document clarifies the terminology and classifies the modules and components of a network telemetry system from different perspectives. The framework and taxonomy help to set a common ground for the collection of related work and provide guidance for related technique and standard developments.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9232"/>
          <seriesInfo name="DOI" value="10.17487/RFC9232"/>
        </reference>
        <reference anchor="RFC8986">
          <front>
            <title>Segment Routing over IPv6 (SRv6) Network Programming</title>
            <author fullname="C. Filsfils" initials="C." role="editor" surname="Filsfils"/>
            <author fullname="P. Camarillo" initials="P." role="editor" surname="Camarillo"/>
            <author fullname="J. Leddy" initials="J." surname="Leddy"/>
            <author fullname="D. Voyer" initials="D." surname="Voyer"/>
            <author fullname="S. Matsushima" initials="S." surname="Matsushima"/>
            <author fullname="Z. Li" initials="Z." surname="Li"/>
            <date month="February" year="2021"/>
            <abstract>
              <t>The Segment Routing over IPv6 (SRv6) Network Programming framework enables a network operator or an application to specify a packet processing program by encoding a sequence of instructions in the IPv6 packet header.</t>
              <t>Each instruction is implemented on one or several nodes in the network and identified by an SRv6 Segment Identifier in the packet.</t>
              <t>This document defines the SRv6 Network Programming concept and specifies the base set of SRv6 behaviors that enables the creation of interoperable overlays with underlay optimization.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8986"/>
          <seriesInfo name="DOI" value="10.17487/RFC8986"/>
        </reference>
        <reference anchor="RFC8660">
          <front>
            <title>Segment Routing with the MPLS Data Plane</title>
            <author fullname="A. Bashandy" initials="A." role="editor" surname="Bashandy"/>
            <author fullname="C. Filsfils" initials="C." role="editor" surname="Filsfils"/>
            <author fullname="S. Previdi" initials="S." surname="Previdi"/>
            <author fullname="B. Decraene" initials="B." surname="Decraene"/>
            <author fullname="S. Litkowski" initials="S." surname="Litkowski"/>
            <author fullname="R. Shakir" initials="R." surname="Shakir"/>
            <date month="December" year="2019"/>
            <abstract>
              <t>Segment Routing (SR) leverages the source-routing paradigm. A node steers a packet through a controlled set of instructions, called segments, by prepending the packet with an SR header. In the MPLS data plane, the SR header is instantiated through a label stack. This document specifies the forwarding behavior to allow instantiating SR over the MPLS data plane (SR-MPLS).</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8660"/>
          <seriesInfo name="DOI" value="10.17487/RFC8660"/>
        </reference>
        <reference anchor="RFC8402">
          <front>
            <title>Segment Routing Architecture</title>
            <author fullname="C. Filsfils" initials="C." role="editor" surname="Filsfils"/>
            <author fullname="S. Previdi" initials="S." role="editor" surname="Previdi"/>
            <author fullname="L. Ginsberg" initials="L." surname="Ginsberg"/>
            <author fullname="B. Decraene" initials="B." surname="Decraene"/>
            <author fullname="S. Litkowski" initials="S." surname="Litkowski"/>
            <author fullname="R. Shakir" initials="R." surname="Shakir"/>
            <date month="July" year="2018"/>
            <abstract>
              <t>Segment Routing (SR) leverages the source routing paradigm. A node steers a packet through an ordered list of instructions, called "segments". A segment can represent any instruction, topological or service based. A segment can have a semantic local to an SR node or global within an SR domain. SR provides a mechanism that allows a flow to be restricted to a specific topological path, while maintaining per-flow state only at the ingress node(s) to the SR domain.</t>
              <t>SR can be directly applied to the MPLS architecture with no change to the forwarding plane. A segment is encoded as an MPLS label. An ordered list of segments is encoded as a stack of labels. The segment to process is on the top of the stack. Upon completion of a segment, the related label is popped from the stack.</t>
              <t>SR can be applied to the IPv6 architecture, with a new type of routing header. A segment is encoded as an IPv6 address. An ordered list of segments is encoded as an ordered list of IPv6 addresses in the routing header. The active segment is indicated by the Destination Address (DA) of the packet. The next active segment is indicated by a pointer in the new routing header.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8402"/>
          <seriesInfo name="DOI" value="10.17487/RFC8402"/>
        </reference>
        <reference anchor="RFC5880">
          <front>
            <title>Bidirectional Forwarding Detection (BFD)</title>
            <author fullname="D. Katz" initials="D." surname="Katz"/>
            <author fullname="D. Ward" initials="D." surname="Ward"/>
            <date month="June" year="2010"/>
            <abstract>
              <t>This document describes a protocol intended to detect faults in the bidirectional path between two forwarding engines, including interfaces, data link(s), and to the extent possible the forwarding engines themselves, with potentially very low latency. It operates independently of media, data protocols, and routing protocols. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5880"/>
          <seriesInfo name="DOI" value="10.17487/RFC5880"/>
        </reference>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="hpcc" target="https://datatracker.ietf.org/doc/draft-miao-ccwg-hpcc-info/">
          <front>
            <title>Inband Telemetry for HPCC++</title>
            <author>
              <organization/>
            </author>
            <date year="2024"/>
          </front>
        </reference>
      </references>
    </references>
  </back>
</rfc>
