<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.4.4) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-ietf-ippm-qoo-03" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.28.1 -->
  <front>
    <title abbrev="QoO">Quality of Outcome</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-ippm-qoo-03"/>
    <author fullname="Bjørn Ivar Teigen Monclair">
      <organization>CUJO AI</organization>
      <address>
        <postal>
          <street>Gaustadalléen 21</street>
          <code>0349</code>
          <country>NORWAY</country>
        </postal>
        <email>bjorn.monclair@cujo.com</email>
      </address>
    </author>
    <author fullname="Magnus Olden">
      <organization>CUJO AI</organization>
      <address>
        <postal>
          <street>Gaustadalléen 21</street>
          <code>0349</code>
          <country>NORWAY</country>
        </postal>
        <email>magnus.olden@cujo.com</email>
      </address>
    </author>
    <date year="2025" month="June" day="02"/>
    <area>Transport</area>
    <workgroup>IP Performance Measurement</workgroup>
    <keyword>Quality Attenuation</keyword>
    <keyword>Application Outcomes</keyword>
    <keyword>Quality of Outcome</keyword>
    <keyword>Performance monitoring</keyword>
    <keyword>Network quality</keyword>
    <abstract>
      <?line 117?>
<t>This document introduces the Quality of Outcome (QoO) framework, a novel
approach to network quality assessment designed to align with the needs of
application developers, users, and operators.</t>
      <t>By leveraging the Quality Attenuation metric, QoO provides a unified method for defining and evaluating
application-specific network requirements while ensuring actionable insights for
network optimization and simple quality scores for end-users.</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        The latest revision of this draft can be found at <eref target="https://github.com/getCUJO/QoOID"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-ietf-ippm-qoo/"/>.
      </t>
      <t>
        Discussion of this document takes place on the
        IP Performance Measurement Working Group mailing list (<eref target="mailto:ippm@ietf.org"/>),
        which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/ippm/"/>.
        Subscribe at <eref target="https://www.ietf.org/mailman/listinfo/ippm/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/getCUJO/QoOID"/>.</t>
    </note>
  </front>
  <middle>
    <?line 126?>

<section anchor="introduction">
      <name>Introduction</name>
<t>This document introduces the Quality of Outcome (QoO) network score, a network
quality score designed to be easy to understand while remaining objective,
adaptable to different network quality needs, and amenable to advanced analysis
that can identify the root cause of network problems. This
document defines a network requirement framework that allows application
developers to specify their network requirements, along with a way to create a
simple user-facing metric based on comparing application requirements to
measurements of network performance. The framework builds on Quality Attenuation
<xref target="TR-452.1"/>, enabling network operators to achieve fault isolation and
effective network planning through composability.</t>
<t>Quality attenuation is a network quality metric that meets most of the criteria
set out in the requirements: it can capture the probability of a network
satisfying application requirements, it is composable, and it can be compared to
a variety of application requirements. What is still missing is a way to
present quality attenuation results to end-users and application developers in
an understandable form. A per-application, per-application-type, or per-SLA
approach is most appropriate here. The challenge lies in simplifying enough
without losing too much precision and accuracy.</t>
<t>Taking a probabilistic approach is key because the network stack and an
application's network quality adaptation can be highly complex. Applications and
the underlying networking protocols make separate optimizations based on their
perceived network quality over time, so saying anything about an outcome with
absolute certainty is practically impossible. It is possible, however, to make
educated guesses about the probability of outcomes.</t>
<t>This document proposes representing network quality as a minimum required
throughput and a set of latency and loss percentiles. Application developers,
regulatory bodies, and other interested parties can describe network
requirements in the same manner. This document also defines a distance measure
between perfect and unusable network quality. With some assumptions, this
distance measure can be reduced to statements such as “A video conference has a
93% chance of being lag-free on this network”, while still making it possible to
use the framework both for end-to-end testing and for analysis from within the
network.</t>
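<t>As a purely illustrative sketch (the scoring formula, percentile names, and
threshold values below are assumptions made for this example, not the
definition used by this framework), such a statement can be derived by
interpolating each measured latency percentile between a "perfect" and an
"unusable" threshold and letting the worst percentile dominate:</t>

```python
# Illustrative only: a toy score that compares measured latency percentiles
# against two hypothetical requirement points, "perfect" and "unusable".
# The percentile names and all threshold values are made up for this sketch.

def qoo_sketch(measured_ms, perfect_ms, unusable_ms):
    """Return a 0-100 score; 100 means every percentile meets the perfect
    threshold, 0 means at least one percentile is at or past unusable."""
    score = 100.0
    for p in measured_ms:
        m, lo, hi = measured_ms[p], perfect_ms[p], unusable_ms[p]
        # Linear interpolation between the two thresholds, clamped to [0, 100].
        s = max(0.0, min(100.0, 100.0 * (hi - m) / (hi - lo)))
        score = min(score, s)  # the worst percentile dominates the outcome
    return score

# Hypothetical video-conference requirements at two percentiles (milliseconds).
measured = {"p90": 60.0, "p99": 120.0}
perfect  = {"p90": 50.0, "p99": 100.0}
unusable = {"p90": 150.0, "p99": 300.0}
print(qoo_sketch(measured, perfect, unusable))  # 90.0
```

<t>A complete score along these lines would also have to account for the minimum
required throughput and the maximum tolerable packet loss rate.</t>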
<t>This work proposes a minimum viable framework and often trades precision for
simplicity. The justification is to ensure adoption and usability in many
different contexts, such as active testing from applications and monitoring
from network equipment. To counter the loss of precision, it is necessary to
combine measurement results with a description of the measurement approach that
allows for analysis of the precision.</t>
    </section>
    <section anchor="motivation">
      <name>Motivation</name>
      <t>This section describes the features and attributes a network quality framework
must have to be useful for different stakeholders: application developers,
end-users, and network operators/vendors. At a high level, end-users need an
understandable network metric. Application developers require a network metric
that allows them to evaluate how well their application is likely to perform
given the measured network performance. Network operators and vendors need a
metric that facilitates troubleshooting and optimization of their networks.
Existing network quality metrics and frameworks typically address the needs of
one or two of these stakeholders, but the authors have yet to find one that
bridges the needs of all three.</t>
      <t>A key motivation for the Quality of Outcome (QoO) framework is to bridge the gap
between the technical aspects of network performance and the practical needs of
those who depend on it. While solutions exist for many of the problems causing
high and unstable latency in the Internet, the incentives to deploy them have
remained relatively weak. A unifying framework for assessing network quality
can serve to strengthen these incentives significantly.</t>
      <t>Bandwidth alone is necessary but not sufficient for high-quality modern network
experiences. Idle latency, working latency, jitter, and unmitigated packet loss
are major causes of poor application outcomes. The impact of latency is widely
recognized in network engineering circles <xref target="BITAG"/>, but benchmarking the
quality of network transport remains complex. Most end-users are unable to
relate to metrics other than Mbps, which they have long been conditioned to
think of as the only dimension of network quality.</t>
      <t>Real-Time Response Under Load tests <xref target="RRUL"/> and Responsiveness <xref target="RPM"/> make
significant strides toward creating a network quality metric that is far closer
to application outcomes than bandwidth alone. The latter, in particular, is
successful at being relatively relatable and understandable to end-users.
However, as noted in <xref target="RPM"/>, “Our networks remain unresponsive, not from a lack
of technical solutions, but rather a lack of awareness of the problem.” This
lack of awareness means operators have little incentive to improve network
quality beyond increasing throughput. Despite the availability of open-source
solutions, vendors rarely implement them. The root cause lies in the absence of
a universally accepted network quality framework that captures how well
applications are likely to perform.</t>
      <t>A recent IAB workshop on measuring internet quality for end-users identified a
key insight: users care primarily about application performance rather than
network performance. Among the conclusions was the statement, "A really
meaningful metric for users is whether their application will work properly or
fail because of a lack of a network with sufficient characteristics"
<xref target="RFC9318"/>. Therefore, one critical requirement of a meaningful framework is
its ability to answer the question, "Will an application work properly?"</t>
      <t>Answering this question requires several considerations. First, the Internet is
inherently stochastic from the perspective of any given client, so certainty is
unattainable. Second, different applications have varying needs and adapt
differently to network conditions. A framework aiming to answer this question
must accommodate diverse application requirements. Third, end-users have
individual tolerances for degradation in network conditions and the resulting
effects on application experience. These variations must be factored into the
framework's design.</t>
      <section anchor="design-goal">
        <name>Design Goal</name>
        <t>The overall goal is to describe the requirements for an objective network
quality framework and metric that is useful for end-users, application
developers, and network operators/vendors alike.</t>
      </section>
      <section anchor="requirements">
        <name>Requirements</name>
        <t>This section outlines the three main requirements and their motivation.</t>
        <t>In general, all stakeholders ultimately care about the success of applications
running over the network. Application success depends not just on bandwidth but
also on the delay of network links and computational steps involved in making
the application function. These delays depend on how the application places load
on the network, how the network is affected by environmental conditions, and the
behavior of other users sharing the network resources.</t>
        <t>Different applications have different needs from the network, and they place
different patterns of load on it. To determine whether applications will work
well or fail, a network quality framework must compare measurements of network
performance to a wide variety of application requirements. Flexibility in
describing application requirements and the ability to capture the delay
characteristics of the network in sufficient detail are necessary to compute
application success with satisfactory accuracy and precision.</t>
        <t>How can operators take action when measurements show that applications fail too
often? The framework must support spatial composition <xref target="RFC6049"/>, <xref target="RFC6390"/>
to answer this question. Spatial composition allows results to be divided into
sub-results, each measuring the performance of a required sub-milestone that
must be reached in time for the application to succeed.</t>
        <t>To summarize, the framework and "meaningful metric" should have the following
properties:</t>
        <ol spacing="normal" type="1"><li>
            <t><strong>Capture the information necessary to compute the probability that
  applications will work well.</strong> (Useful for end-users and application
  developers.)</t>
          </li>
          <li>
            <t><strong>Compare meaningfully to different application requirements.</strong></t>
          </li>
          <li>
            <t><strong>Compose.</strong> Allow operators to isolate and quantify the contributions of
different sub-outcomes and sub-paths of the network. (Useful for operators
and vendors.)</t>
          </li>
        </ol>
        <section anchor="requirements-for-end-users">
          <name>Requirements for end-users</name>
<t>The quality framework should facilitate a metric that is objective, relatable,
and relatively understandable for an end-user. It should strike a middle ground
between objective QoS metrics (throughput, packet loss, jitter, average latency)
and subjective but understandable QoE metrics (MOS, 5-star ratings). The ideal
framework should be objective, like QoS metrics, and understandable, like QoE
metrics.</t>
          <t>If these requirements are met, the end-user can understand if a network can
reliably deliver what they care about: the outcomes of applications. Examples
are how quickly a web page loads, the smoothness of a video conference, or
whether or not a video game has any lag.</t>
          <t>Each end user will have an individual tolerance of session quality, below which
their quality of experience becomes personally unacceptable. However, it may not
be feasible to capture and represent these tolerances <em>per user</em> as the user
group scales. A compromise is for the quality of experience framework to place
the responsibility for sourcing and representing end-user requirements onto the
application developer. Application developers should perform user-acceptance
testing (UAT) of their application across a range of users, terminals and
network conditions to determine the terminal and network requirements that will
meet the end-user quality threshold for an acceptable subset of their end users.
Some real world examples where 'acceptable levels' have been derived by
application developers include (note: developers of similar applications may
have arrived at different figures):</t>
          <ul spacing="normal">
            <li>
              <t>Remote music collaboration: 28ms latency note-to-ear for direct monitoring,
&lt;2ms jitter</t>
            </li>
            <li>
              <t>Online gaming: 6Mb/s downlink throughput and 30ms RTT to join a multiplayer
game</t>
            </li>
            <li>
              <t>Virtual reality: &lt;20ms RTT from head motion to rendered update in VR</t>
            </li>
          </ul>
<t>Performing this UAT helps the developer understand the likelihood that a new
end-user will have an acceptable Quality of Experience, based on the application's
existing requirements towards the network. These requirements can evolve and
improve based on feedback from end users, and in turn better inform the
application's requirements towards the network.</t>
        </section>
        <section anchor="requirements-from-application-and-platform-developers">
          <name>Requirements from Application and Platform Developers</name>
          <t>The framework needs to give developers the ability to describe the network
requirements of their applications. The format for specifying network
requirements must include all relevant dimensions of network quality so that
different applications which are sensitive to different network quality
dimensions can all evaluate the network accurately. Not all developers have
network expertise, so to make it easy for developers to use the framework,
developers must be able to specify network requirements approximately.
Therefore, it must be possible to describe both simple and complex network
requirements. The framework also needs to be flexible so that it can be used
with different kinds of traffic, and so that extreme network requirements which
far exceed the needs of today's applications can also be articulated.</t>
          <t>If these requirements are met, developers of applications or platforms can state
or test their network requirements and evaluate if the network is sufficient for
a great application outcome. Both the application developers with networking
expertise and those without can use the framework.</t>
        </section>
        <section anchor="requirements-for-network-operators-and-network-solution-vendors">
          <name>Requirements for Network Operators and Network Solution Vendors</name>
          <t>From an operator perspective, the key is to have a framework that lets operators
find the network quality bottlenecks and objectively compare different networks
and technologies. The framework must support mathematically sound
compositionality ('addition' and 'subtraction') to achieve this. Why? Network
operators rarely manage network traffic end-to-end. If a test is purely
end-to-end, the ability to find bottlenecks may be gone. If, however,
measurements can be taken both end-to-end (e.g., a-b-c-d-e) and not-end-to-end
(e.g., a-b-c), the results can be subtracted to isolate the areas outside the
influence of the network operator. In other words, the network quality of a-b-c
and d-e can be separated. Compositionality is essential for fault detection and
accountability.</t>
          <t>By having mathematically correct composition, a network operator can measure two
segments separately, perhaps even with different approaches, and add them
together to understand the end-to-end network quality.</t>
          <t>For another example where composition is useful, look at a typical web page load
sequence. If web page load times are too slow, DNS resolution time, TCP
round-trip time, and the time it takes to establish TLS connections can be
measured separately to get a better idea of where the problem is. A network
quality framework should support this kind of analysis to be maximally useful
for operators. The quality framework must be applicable in both lab testing and
monitoring of production networks. It must be useful on different time scales,
and it must not depend on any particular network technology or OSI layer.</t>
          <t>If these requirements are met, a network operator can monitor and test their
network and understand where the true bottlenecks are, regardless of network
technology.</t>
        </section>
      </section>
    </section>
    <section anchor="background">
      <name>Background</name>
      <t>The foundation of the framework is Quality Attenuation <xref target="TR-452.1"/>. This work
will not go into detail about how to measure Quality Attenuation, but some
relevant techniques are:</t>
      <ul spacing="normal">
        <li>
          <t>Active probing with TWAMP Light <xref target="RFC5357"/> / STAMP <xref target="RFC8762"/> / IRTT
<xref target="IRTT"/></t>
        </li>
        <li>
          <t>Latency Under Load Tests</t>
        </li>
        <li>
          <t>Speed Tests with latency measures</t>
        </li>
        <li>
          <t>Simulating real traffic</t>
        </li>
        <li>
          <t>End-to-end measurements of real traffic</t>
        </li>
        <li>
          <t>TCP SYN ACK / DNS Lookup RTT Capture</t>
        </li>
        <li>
          <t>Estimation</t>
        </li>
      </ul>
      <t>Quality Attenuation represents quality measurements as distributions. Using
Latency distributions to measure network quality is nothing new and has been
proposed by various researchers/practitioners <xref target="Kelly"/><xref target="RFC8239"/><xref target="RFC6049"/>.
The novelty of the Quality Attenuation metric is to treat packet loss as infinite
(or too late to be of use, e.g., &gt; 3 seconds) latency <xref target="TR-452.1"/>.</t>
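<t>A minimal sketch of this view (the 3-second usefulness cutoff and the sample
values are illustrative assumptions): lost packets are recorded as infinitely
late, so loss surfaces naturally in the tail of the latency distribution:</t>

```python
import math

# Sketch: fold packet loss into a latency sample set by treating lost packets,
# and packets arriving after a usefulness cutoff, as infinitely late.
# The 3 s cutoff and the sample values below are illustrative assumptions.

def latency_samples(results, cutoff_s=3.0):
    """Map per-packet results (one-way delay in seconds, or None for a packet
    that never arrived) to delays, with losses recorded as +inf."""
    return [math.inf if d is None or d > cutoff_s else d for d in results]

results = [0.020, 0.035, None, 0.050, 4.2]  # None = packet never arrived
samples = latency_samples(results)
loss_rate = sum(math.isinf(s) for s in samples) / len(samples)
print(loss_rate)  # 0.4: two of five samples count as lost or too late
```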
      <t>Latency Distributions can be gathered via both passive monitoring and active
testing. The active testing can use any type of traffic, such as TCP, UDP, or
QUIC. It is OSI Layer and network technology independent, meaning it can be
gathered in an end-user application, within some network equipment, or anywhere
in between. Passive methods rely on observing and time-stamping packets
traversing the network. Examples of this include TCP SYN and SYN/ACK packets and
the QUIC spin bit.</t>
      <t>A key assumption behind the choice of latency distribution is that different
applications and application categories fail at different points of the latency
distribution. Some applications, such as downloads, have lenient latency
requirements when compared to real-time applications. Video conferences are
typically sensitive to high 90th percentile latency and to the difference
between the 90th and the 99th percentile. Online gaming typically has a low
tolerance for high 99th percentile latency. All applications require a minimum
level of throughput and a maximum packet loss rate. A network quality metric
that aims to generalize network quality must take the latency distribution,
throughput, and packet loss into consideration.</t>
      <t>Two distributions can be composed using convolution <xref target="TR-452.1"/>.</t>
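<t>As a sketch with made-up histograms: if the delays of segments a-b and b-c
are independent, the end-to-end delay distribution for a-b-c is the discrete
convolution of the two per-segment distributions:</t>

```python
# Sketch: compose two per-segment delay distributions by discrete convolution.
# Bins are 10 ms wide and the probability masses are made-up example values.

def convolve(p, q):
    """Distribution of the sum of two independent binned delays."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

ab = [0.7, 0.2, 0.1]  # segment a-b: P(0-10 ms), P(10-20 ms), P(20-30 ms)
bc = [0.5, 0.5]       # segment b-c: P(0-10 ms), P(10-20 ms)

abc = convolve(ab, bc)  # end-to-end a-b-c delay distribution
print([round(x, 2) for x in abc])  # [0.35, 0.45, 0.15, 0.05]
```

<t>The same operation, applied to quality attenuation distributions, is what
allows separately measured segments to be added together into an end-to-end
result.</t>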
      <section anchor="discussion-of-other-performance-metrics">
        <name>Discussion of other performance metrics</name>
        <t>Numerous network performance metrics and associated frameworks have been
proposed, adopted, and, at times, misapplied over the years. The following is a
brief overview of several key network quality metrics.</t>
        <t>Each metric is evaluated against the three criteria established in the
requirements section. The first metric considered is throughput, which relates
to user-observable application outcomes in that there must be <em>enough</em>
bandwidth available. Adding extra bandwidth above a certain threshold will, at
best, yield diminishing returns (and any returns are often due to reduced
latency). It is not possible to
compute the probability of application success or failure based on throughput
alone for most applications. Throughput can be compared to a variety of
application requirements, but since there is no direct correlation between
throughput and application performance, it is not possible to conclude that an
application will work well even if it is known that enough throughput is
available.</t>
        <t>Throughput cannot be composed.</t>
        <section anchor="average-latency">
          <name>Average Latency</name>
          <t>Average latency relates to user-observable application outcomes in the sense
that the average latency must be low enough to support a good experience.
However, it is not possible to conclude that a general application will work
well based on the fact that the average latency is good enough <xref target="BITAG"/>.</t>
          <t>Average latency can be composed. If the average latency of links a-b and b-c is
known, then the average latency of the composition a-b-c is the sum of a-b and
b-c.</t>
        </section>
        <section anchor="th-percentile-of-latency">
          <name>99th Percentile of Latency</name>
          <t>The 99th percentile of latency relates to user-observable application outcomes
because it captures some information about how bad the tail latency is. If an
application can handle 1% of packets being too late, for instance by maintaining
a playback buffer, then the 99th percentile can be a good metric for measuring
application performance. It does not work as well for applications that are very
sensitive to overly delayed packets because the 99th percentile disregards all
information about the delays of the worst 1% of packets.</t>
          <t>It is not possible to compose 99th-percentile values.</t>
        </section>
        <section anchor="variance-of-latency">
          <name>Variance of latency</name>
          <t>The variance of latency can be calculated from any collection of samples, but
network latency is not necessarily normally distributed, and so it can be
difficult to extrapolate from a measure of the variance of latency to how well
specific applications will work.</t>
          <t>The variance of latency can be composed. If the variance of links a-b and b-c is
known, then the variance of the composition a-b-c is the sum of the variances
a-b and b-c.</t>
        </section>
        <section anchor="inter-packet-delay-variation-ipdv">
          <name>Inter-Packet Delay Variation (IPDV)</name>
<t>The most common definition of IPDV <xref target="RFC5481"/> measures the difference in
one-way delay between subsequent packets. Some applications are very sensitive to
this because of time-outs that cause later-than-usual packets to be discarded.
For some applications, IPDV can be useful in assessing application performance,
especially when it is combined with other latency metrics. IPDV does not contain
enough information to compute the probability that a wide range of applications
will work well.</t>
          <t>IPDV cannot be composed.</t>
        </section>
        <section anchor="packet-delay-variation-pdv">
          <name>Packet Delay Variation (PDV)</name>
<t>The most common definition of PDV <xref target="RFC5481"/> measures the difference in
one-way delay between the smallest recorded latency and each value in a sample.</t>
          <t>PDV cannot be composed.</t>
        </section>
        <section anchor="trimmed-mean-of-latency">
          <name>Trimmed Mean of Latency</name>
          <t>The trimmed mean of latency is the average computed after the worst x percent of
samples have been removed. Trimmed means are typically used in cases where there
is a known rate of measurement errors that should be filtered out before
computing results.</t>
          <t>In the case where the trimmed mean simply removes measurement errors, the result
can be composed in the same way as the average latency. In cases where the
trimmed mean removes real measurements, the trimming operation introduces errors
that may compound when composed.</t>
        </section>
        <section anchor="round-trips-per-minute">
          <name>Round-trips Per Minute</name>
          <t>Round-trips per minute <xref target="RPM"/> is a metric and test procedure specifically
designed to measure delays as experienced by application-layer protocol
procedures such as HTTP GET, establishing a TLS connection, and DNS lookups. It,
therefore, measures something very close to the user-perceived application
performance of HTTP-based applications. RPM loads the network before conducting
latency measurements and is, therefore, a measure of loaded latency (also known
as working latency) well-suited to detecting bufferbloat <xref target="Bufferbloat"/>.</t>
          <t>RPM is not composable.</t>
        </section>
        <section anchor="quality-attenuation">
          <name>Quality Attenuation</name>
          <t>Quality Attenuation is a network performance metric that combines latency and
packet loss into a single variable <xref target="TR-452.1"/>.</t>
          <t>Quality Attenuation relates to user-observable outcomes in the sense that
user-observable outcomes can be measured using the Quality Attenuation metric
directly, or the quality attenuation value describing the time-to-completion of
a user-observable outcome can be computed if the quality attenuation of each
sub-goal required to reach the desired outcome is known <xref target="Haeri22"/>.</t>
          <t>Quality Attenuation is composable because the convolution of quality attenuation
values allows us to compute the time it takes to reach specific outcomes given
the quality attenuation of each sub-goal and the causal dependency conditions
between them <xref target="Haeri22"/>.</t>
        </section>
        <section anchor="summary-of-performance-metrics">
          <name>Summary of performance metrics</name>
          <t>This table summarizes the properties of each of the metrics surveyed.</t>
          <t>The column "Capture probability of general applications working well" records
whether each metric can, in principle, capture the information necessary to
compute the probability that a general application will work well, assuming
measurements capture the properties of the end-to-end network path that the
application is using.</t>
          <table>
            <thead>
              <tr>
                <th align="left">Metric</th>
                <th align="left">Capture probability of general applications working well</th>
                <th align="left">Easy to articulate Application requirements</th>
                <th align="left">Composable</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td align="left">Average latency</td>
                <td align="left">Yes for some applications</td>
                <td align="left">Yes</td>
                <td align="left">Yes</td>
              </tr>
              <tr>
                <td align="left">Variance of latency</td>
                <td align="left">No</td>
                <td align="left">No</td>
                <td align="left">Yes</td>
              </tr>
              <tr>
                <td align="left">IPDV</td>
                <td align="left">Yes for some applications</td>
                <td align="left">No</td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">PDV</td>
                <td align="left">Yes for some applications</td>
                <td align="left">No</td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">Average Peak Throughput</td>
                <td align="left">Yes for some applications</td>
                <td align="left">Yes</td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">99th Percentile of Latency</td>
                <td align="left">No</td>
                <td align="left">No</td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">Trimmed mean of latency</td>
                <td align="left">Yes for some applications</td>
                <td align="left">Yes</td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">Round Trips Per Minute</td>
                <td align="left">Yes for some applications</td>
                <td align="left">Yes</td>
                <td align="left">No</td>
              </tr>
              <tr>
                <td align="left">Quality Attenuation</td>
                <td align="left">Yes</td>
                <td align="left">No</td>
                <td align="left">Yes</td>
              </tr>
            </tbody>
          </table>
          <t>Explanations:</t>
          <ul spacing="normal">
            <li>
              <t>"Captures probability" refers to whether the metric captures enough
information to compute the likelihood of an application succeeding.</t>
            </li>
            <li>
              <t>"Articulate requirements" refers to the ease with which application-specific
requirements can be expressed using the metric.</t>
            </li>
            <li>
              <t>"Composable" means whether the metric supports mathematical composition to
allow for detailed network analysis.</t>
            </li>
          </ul>
        </section>
      </section>
    </section>
    <section anchor="sampling-requirements">
      <name>Sampling requirements</name>
      <t>To ensure broad applicability across diverse use cases, this framework
deliberately avoids prescribing specific conditions for sampling such as fixed
time intervals or defined network load levels. This flexibility enables
deployment in both controlled and real-world environments.</t>
      <t>At its core, the framework requires only a latency distribution. When
measurements are taken during periods of network load, the result naturally
includes latency under load. In scenarios such as passive monitoring of
production traffic, capturing artificially loaded conditions may not always be
feasible, whereas passively observing the actual network load may be possible.</t>
      <t>Modeling the full latency distribution may be too complex to allow for easy
adoption of the framework, and reporting latency at selected percentiles offers
a practical compromise between accuracy and deployment considerations. A
commonly accepted set of percentiles spanning from the 0th to the 100th in a
logarithmic-like progression has been suggested by others <xref target="BITAG"/> and is
recommended here: [0th, 10th, 25th, 50th, 75th, 90th, 95th, 99th, 99.9th,
100th].</t>
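The recommended percentile summary can be derived from raw latency samples with
a short sketch. The linear interpolation between closest ranks shown here is
one common quantile method and an assumption of this sketch; this document does
not mandate a particular method.

```python
# Summarize latency samples into the recommended percentile set.
RECOMMENDED_PERCENTILES = [0, 10, 25, 50, 75, 90, 95, 99, 99.9, 100]

def percentile(sorted_samples, p):
    """Latency at percentile p (0-100) via linear interpolation."""
    if not sorted_samples:
        raise ValueError("no samples")
    rank = (p / 100) * (len(sorted_samples) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(sorted_samples) - 1)
    frac = rank - lo
    return sorted_samples[lo] * (1 - frac) + sorted_samples[hi] * frac

def summarize(latencies_ms):
    """Map each recommended percentile to its latency value."""
    s = sorted(latencies_ms)
    return {p: percentile(s, p) for p in RECOMMENDED_PERCENTILES}
```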
      <t>The framework is agnostic to traffic direction but mandates that measurements
specify whether latency is one-way or round-trip.</t>
      <t>Importantly, the framework does not enforce a minimum sample count. This means
that even a small number of samples (e.g., 10) could technically constitute a
distribution—though such cases are clearly insufficient for statistical
confidence. The intent is to balance rigor with practicality, recognizing that
constraints vary across devices, applications, and deployment environments.</t>
      <t>To support reproducibility and enable confidence analysis, each measurement must
be accompanied by the following metadata:</t>
      <ul spacing="normal">
        <li>
          <t>Description of the measurement path</t>
        </li>
        <li>
          <t>Timestamp of first sample</t>
        </li>
        <li>
          <t>Total duration of the sampling period</t>
        </li>
        <li>
          <t>Number of samples collected</t>
        </li>
        <li>
          <t>Sampling method, including:
          </t>
          <ul spacing="normal">
            <li>
              <t>Cyclic: One sample every N milliseconds (specify N)</t>
            </li>
            <li>
              <t>Burst: X samples every N milliseconds (specify X and N)</t>
            </li>
            <li>
              <t>Passive: Opportunistic sampling of live traffic (non-uniform intervals)</t>
            </li>
          </ul>
        </li>
      </ul>
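A measurement report carrying the metadata listed above might be represented as
in the following sketch. The field names and types are illustrative assumptions;
the framework prescribes only which information must be present.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SamplingMethod:
    kind: str                          # "cyclic", "burst", or "passive"
    interval_ms: Optional[int] = None  # N for cyclic/burst sampling
    burst_size: Optional[int] = None   # X for burst sampling

@dataclass
class MeasurementMetadata:
    path_description: str        # description of the measurement path
    first_sample_timestamp: str  # timestamp of first sample (ISO 8601)
    duration_s: float            # total duration of the sampling period
    sample_count: int            # number of samples collected
    method: SamplingMethod
    direction: str               # "one-way" (with direction) or "round-trip"
```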
      <t>These metadata elements are essential for interpreting the precision and
reliability of the measurements. As demonstrated in <xref target="QoOSimStudy"/>, low sampling
frequencies and short measurement durations can lead to misleadingly optimistic
or imprecise Quality of Outcome (QoO) scores.</t>
    </section>
    <section anchor="describing-network-requirements">
      <name>Describing Network Requirements</name>
      <t>This work builds upon the Broadband Forum standard Quality of
Experience Delivered (QED/TR-452) <xref target="TR-452.1"/>, which defines the
Quality Attenuation metric. In essence, QoO describes network requirements as a
list of (percentile, latency requirement) tuples. In other words, the network
requirement for an app quality level/app/app category/SLA may be expressed as:
“at 4 Mbps, 90% of packets need to arrive within 100 ms, and 100% of packets
need to arrive within 200 ms”. This list can be as simple as “100% of packets
need to arrive within 200 ms” or as long as desired. For the sake of
simplicity, the requirement percentiles must match one or more of the
percentiles defined in the measurements, i.e., one can set requirements at the
[0th, 10th, 25th, 50th, 75th, 90th, 95th, 99th, 99.9th, 100th] percentiles.
Packet loss must be reported as a separate value.</t>
      <t>Applications do of course have throughput requirements, and thus a complete
framework for application-level network quality must also take capacity into
account. Insufficient bandwidth may give poor application outcomes without
necessarily inducing a lot of latency. Therefore, the network requirements
should include a minimum throughput requirement. A fully specified requirement
can be thought of as specifying the latency and loss requirements to be met
while the end-to-end network path is loaded in a way that is at least as
demanding of the network as the application itself. This may be achieved by
running the actual application and measuring delay and loss alongside it, or by
generating artificial traffic to a level at least equivalent to the application
traffic load.</t>
      <t>Whether the requirements are one-way or two-way must be specified. Where the
requirement is one-way, the direction (uplink or downlink) must be specified. If
two-way, a decomposition into uplink and downlink measurements may be specified.</t>
      <t>The network requirements and measurements described so far are already
standardized in the BBF TR-452 (aka QED) framework <xref target="TR-452.1"/>.
The novel part of this work is what comes next: a method for going from network
requirements and network measurements to probabilities of good application
outcomes.</t>
      <t>To do that, articulating the network requirements must become slightly
more elaborate. A key design goal was to have a distance measure between
perfect and unusable, and thereby a way of quantifying what is ‘better’.</t>
      <t>The requirements specification is extended to include the quality required for
perfection and a quality threshold beyond which the application is considered
unusable.</t>
      <t>The requirement for perfection is named the Network Requirements for
Perfection (NRP). As an example: at 4 Mbps, 99% of packets need to arrive
within 100 ms and 99.9% within 200 ms (implying that 0.1% packet loss is
acceptable) for the outcome to be perfect. The unusability threshold is named
the Network Requirements Point of Unusableness (NRPoU): if 99% of the packets
have not arrived within 200 ms, or 99.9% within 300 ms, the outcome will be
unusable.</t>
      <t>The NRP and NRPoU form a required pair and must be specified over the
same set of percentiles: neither may define a percentile not included in the
other. For example, if the 99.9th percentile is part of the NRPoU, then the
NRP must also include the 99.9th percentile.</t>
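The NRP/NRPoU pair and the percentile-consistency rule above can be sketched as
follows. This is an illustrative representation only; the field names and the
dict-based structure are assumptions, not part of the framework.

```python
# Illustrative representation of an NRP/NRPoU pair. The check enforces
# the rule that neither may define a percentile absent from the other.

def check_percentile_consistency(nrp_latency_ms, nrpou_latency_ms):
    """Both arguments map percentile -> latency requirement (ms)."""
    if set(nrp_latency_ms) != set(nrpou_latency_ms):
        raise ValueError("NRP and NRPoU must cover the same percentiles")

# The example from the text: perfect at or below the NRP values,
# unusable at or above the NRPoU values.
nrp = {
    "min_mbps": 4,
    "latency_ms": {99: 100, 99.9: 200},
    "loss_pct": 0.1,
}
nrpou = {
    "latency_ms": {99: 200, 99.9: 300},
    "loss_pct": 1.0,
}
check_percentile_consistency(nrp["latency_ms"], nrpou["latency_ms"])
```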
    </section>
    <section anchor="creating-network-requirement-specifications">
      <name>Creating network requirement specifications</name>
      <t>A detailed description of how to create a network requirement specification is
out of scope for this document, but this section will provide a rough outline
for how it can be achieved. Additional information about this topic can be found
in <xref target="QoOAppQualityReqs"/>.</t>
      <t>When searching for an appropriate network requirement description for
an application, the goal is to identify the points of perfection and
unusableness for the application. This can be thought of as a search process:
run the application across a network connection with adjustable quality, and
gradually adjust the network performance while observing the application-level
performance. The application performance can be observed manually by the person
performing the testing, or using automated methods such as recording video
stall duration from within a video player.</t>
      <t>Establish a baseline under excellent network conditions. Then gradually add
delay, packet loss or decrease network capacity until the application no longer
performs perfectly. Continue adding network quality attenuation until the
application fails completely. The corresponding network quality levels are the
points of perfection and unusability.</t>
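The search described above can be sketched as follows. Here `measure_quality`
is a hypothetical hook into the application under test (for example, 1 minus
the video stall fraction), and the sweep step size is an arbitrary choice; a
real procedure would also vary loss and capacity, not only added delay.

```python
# Sketch of the search for the points of perfection and unusableness
# along a single dimension (added one-way delay in ms).

def find_thresholds(measure_quality, step_ms=5, max_delay_ms=2000):
    """Return (perfection_ms, unusable_ms) delay thresholds.

    measure_quality(added_delay_ms) -> score in [0, 1], where 1 means
    the application performs perfectly and 0 means it fails completely.
    """
    perfection_ms = None
    for delay in range(0, max_delay_ms, step_ms):
        score = measure_quality(delay)
        if perfection_ms is None and score < 1.0:
            perfection_ms = delay        # first visible degradation
        if score <= 0.0:
            return perfection_ms, delay  # complete failure
    return perfection_ms, None           # never became unusable
```

For example, a toy application that is perfect up to 100 ms of added delay and
fails completely at 400 ms yields thresholds near those two points.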
    </section>
    <section anchor="calculating-quality-of-outcome-qoo">
      <name>Calculating Quality of Outcome (QoO)</name>
      <t>The QoO metric calculates the likelihood of application success based on network
performance, incorporating both latency and packet loss. There are three key
scenarios:</t>
      <ul spacing="normal">
        <li>
          <t>The network meets all the requirements for perfection. Probability of success:
100%.</t>
        </li>
        <li>
          <t>The network fails one or more criteria at the Point of Unusableness (NRPoU).
Probability of success: 0%.</t>
        </li>
        <li>
          <t>The network performance falls between perfection and unusable. In this case, a
QoO score is computed. The QoO score is calculated by taking the worst score
derived from latency and packet loss.</t>
        </li>
      </ul>
      <t><strong>Latency Component</strong></t>
      <t>The latency-based QoO score is computed as follows:</t>
      <t>QoO_latency = min_{i}(min(max((1 - ((ML_i - NRP_i) / (NRPoU_i - NRP_i))) * 100,
0), 100))</t>
      <t>Where:</t>
      <ul spacing="normal">
        <li>
          <t>ML_i is the Measured Latency at percentile i.</t>
        </li>
        <li>
          <t>NRP_i is the Network Requirement for Perfection at percentile i.</t>
        </li>
        <li>
          <t>NRPoU_i is the Network Requirement Point of Unusableness at percentile i.</t>
        </li>
      </ul>
      <t><strong>Packet Loss Component</strong></t>
      <t>Packet loss is considered as a separate, single measurement that
applies across the entire traffic sample, not at each percentile. The packet
loss score is calculated using a similar interpolation formula, but based on
the total measured packet loss (MLoss) and the packet loss thresholds defined
in the NRP and NRPoU:</t>
      <t>QoO_loss = min(max((1 - ((MLoss - NRP_Loss) / (NRPoU_Loss - NRP_Loss))) * 100,
0), 100)</t>
      <t>Where:</t>
      <ul spacing="normal">
        <li>
          <t>MLoss is the Measured Packet Loss.</t>
        </li>
        <li>
          <t>NRP_Loss is the acceptable packet loss for perfection.</t>
        </li>
        <li>
          <t>NRPoU_Loss is the packet loss threshold beyond which the application becomes
unusable.</t>
        </li>
      </ul>
      <t><strong>Final QoO Calculation</strong></t>
      <t>The overall QoO score is the minimum of the latency and packet loss
scores:</t>
      <t>QoO = min(QoO_latency, QoO_loss)</t>
      <t>Example Requirements and Measured Data:</t>
      <ul spacing="normal">
        <li>
          <t>NRP: 4 Mbps {99%, 250 ms, 0.1% loss}, {99.9%, 350 ms, 0.1% loss}</t>
        </li>
        <li>
          <t>NRPoU: {99%, 400 ms, 1% loss}, {99.9%, 401 ms, 1% loss}</t>
        </li>
        <li>
          <t>Measured Latency: 99% = 350 ms, 99.9% = 352 ms</t>
        </li>
        <li>
          <t>Measured Packet Loss: 0.5%</t>
        </li>
        <li>
          <t>Measured Minimum Bandwidth: 32 Mbps / 28 Mbps</t>
        </li>
      </ul>
      <t>Then the QoO is defined:</t>
      <t>QoO_latency = min( min(max((1 - (350 ms - 250 ms) / (400 ms - 250 ms))
* 100, 0), 100), min(max((1 - (352 ms - 350 ms) / (401 ms - 350 ms)) * 100, 0),
100) ) = min(33.33, 96.08) = 33.33</t>
      <t>QoO_loss = min(max((1 - (0.5% - 0.1%) / (1% - 0.1%)) * 100, 0), 100) = 55.56</t>
      <t>Finally, the overall QoO score is:</t>
      <t>QoO = min(33.33, 55.56) = 33.33</t>
      <t>In this example, the application has a 33% chance of meeting the quality
expectations on this network, considering both latency and packet loss.</t>
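The worked example above can be reproduced with a short script. This is a
minimal sketch of the formulas in this section; the dict-based input structure
and function names are illustrative, not prescribed.

```python
# Minimal QoO calculation following the formulas above.

def clamp_scaled(x):
    """Scale to percent and clamp into [0, 100]."""
    return min(max(x * 100, 0), 100)

def qoo_latency(measured, nrp, nrpou):
    """measured/nrp/nrpou map percentile -> latency (ms)."""
    return min(
        clamp_scaled(1 - (measured[p] - nrp[p]) / (nrpou[p] - nrp[p]))
        for p in nrpou
    )

def qoo_loss(mloss, nrp_loss, nrpou_loss):
    return clamp_scaled(1 - (mloss - nrp_loss) / (nrpou_loss - nrp_loss))

def qoo(measured, mloss, nrp, nrp_loss, nrpou, nrpou_loss):
    return min(qoo_latency(measured, nrp, nrpou),
               qoo_loss(mloss, nrp_loss, nrpou_loss))

# Values from the worked example:
score = qoo(
    measured={99: 350, 99.9: 352}, mloss=0.5,
    nrp={99: 250, 99.9: 350}, nrp_loss=0.1,
    nrpou={99: 400, 99.9: 401}, nrpou_loss=1.0,
)
print(round(score, 2))  # → 33.33
```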
      <t><strong>Implementation Note: Sensitivity to Sampling Accuracy</strong></t>
      <t>Based on the simulation results in <xref target="QoOSimStudy"/>, overly noisy or inaccurate
latency samples can artificially inflate worst-case percentiles, thereby driving
QoO scores lower than actual network conditions would warrant. Conversely,
coarse measurement intervals can miss short-lived spikes entirely, resulting in
an inflated QoO. Users of this framework should consider hardware/software
measurement jitter, clock offset, or other system-level noise sources when
collecting data, and configure sampling frequency and duration to suit their
specific needs.</t>
    </section>
    <section anchor="how-to-find-network-requirements">
      <name>How to find network requirements</name>
      <t>A key advantage of a measurement that spans the range from perfect to unusable,
rather than relying on binary (Good/Bad) or other low-resolution
(Terrible/Bad/OK/Great/Excellent) metrics, is the flexibility it provides. For
example, a chance of lag-free experience below 20% is intuitively undesirable,
while a chance above 90% is intuitively favorable—demonstrating that absolute
perfection is not required for the QoO metric to be meaningful.</t>
      <t>However, it remains necessary to define points representing unusableness and
perfection. There is no universally strict threshold at which network conditions
render an application unusable. For perfection, some applications may have clear
definitions, but for others, such as web browsing and gaming, lower latency is
always preferable. To assist in establishing requirements, it is recommended
that the Network Requirements for Perfection (NRP) be set at the point where
further reductions in latency do not result in a perceivable improvement in
end-user experience.</t>
      <t>Someone who wishes to define a network requirement for an application
in the simplest possible way should do something along these lines:</t>
      <ul spacing="normal">
        <li>
          <t>Simulate increasing levels of latency</t>
        </li>
        <li>
          <t>Observe the application and note the threshold where the application stops
working perfectly</t>
        </li>
        <li>
          <t>Observe the application and note the threshold where the application stops
being useful at all</t>
        </li>
      </ul>
      <t>Someone who wishes to find more sophisticated network requirements
might proceed in this way:</t>
      <ul spacing="normal">
        <li>
          <t>Set thresholds for acceptable fps, animation fluidity, i/o latency
(voice, video, actions), or other metrics capturing outcomes that directly
affect the user experience</t>
        </li>
        <li>
          <t>Create a tool for measuring these user-facing metrics</t>
        </li>
        <li>
          <t>Simulate varying latency distributions with increasing levels of
latency while measuring the user-facing metrics.</t>
        </li>
      </ul>
      <t>A QoO score of 94 can be communicated as "John's smartphone has a 94%
chance of lag-free Video Conferencing". However, this does not mean that at any
point in time there is a 6% chance of lag. It means there is a 6% chance of
experiencing lag during the entire session/time period, and the network
requirements should be adjusted accordingly.</t>
      <t>The reason for defining the QoO metric over a session is to make it
understandable for an end-user: an end-user should not have to reason about the
time period the metric covers.</t>
      <section anchor="an-example">
        <name>An example</name>
        <t>Example.com's video-conferencing service requirements can be translated into the
QoO Framework. For best performance for video meetings, they specify 4/4 Mbps,
100 ms latency, &lt;1% packet loss, and &lt;30 ms jitter. This can be translated to an
NRP:</t>
        <t>NRP example.com video conferencing service: At minimum 4/4 Mbps.
{0p=70ms,99p=100ms}</t>
        <t>For minimum requirements example.com does not specify anything, but
at 500 ms minimum latency or 1000 ms 99th-percentile latency, a video
conference is very unlikely to work in a remotely satisfactory way:</t>
        <t>NRPoU {0p=500ms,99p=1000ms}</t>
        <t>Of course, it is possible to specify network requirements for Example.com with
multiple NRP/NRPoU, for different quality levels, one/two way video, and so on.
Then one can calculate the QoO at each level.</t>
      </section>
    </section>
    <section anchor="simulation-insights">
      <name>Insights from Simulation Results</name>
      <t>While the QoO framework itself places no strict requirement on sampling patterns
or measurement technology, a recent simulation study <xref target="QoOSimStudy"/> examined
the metric’s real-world applicability under varying conditions of:</t>
      <ol spacing="normal" type="1"><li>
          <t><strong>Sampling Frequency</strong>: Slow sampling rates (e.g., &lt;1 Hz) risk missing rare,
short-lived latency spikes, resulting in overly optimistic QoO scores.</t>
        </li>
        <li>
          <t><strong>Measurement Noise</strong>: Measurement errors on the same scale as the thresholds
(NRP, NRPoU) can distort high-percentile latencies and cause artificially
lower QoO.</t>
        </li>
        <li>
          <t><strong>Requirement Specification</strong>: Slightly adjusting the latency thresholds or
target percentiles can cause significant changes in QoO, especially when the
measurement distribution is near a threshold.</t>
        </li>
        <li>
          <t><strong>Measurement Duration</strong>: Shorter tests with sparse sampling tend to
underestimate worst-case behavior for heavy-tailed latency distributions,
biasing QoO in a positive direction.</t>
        </li>
      </ol>
      <t>In practice, these findings mean:</t>
      <ul spacing="normal">
        <li>
          <t>Calibrate the combination of sampling rate and total measurement period to
capture fat-tailed distributions of latency with sufficient accuracy.</t>
        </li>
        <li>
          <t>Avoid significant measurement noise where possible (e.g., by calibrating time
sources, accounting for clock drift).</t>
        </li>
        <li>
          <t>Thoroughly test application requirement thresholds so that the
resulting QoO scores accurately reflect application performance.</t>
        </li>
      </ul>
      <t>These guidelines are <em>non-normative</em> but reflect empirical evidence on how QoO
performs.</t>
    </section>
    <section anchor="user-testing">
      <name>Insights from user testing</name>
      <t>A study involving 25 participants tested the Quality of Outcome (QoO)
framework in real-world settings <xref target="QoOUserStudy"/>. Participants
used specially equipped
routers in their homes for 10 days, providing both network performance data and
feedback through pre- and post-trial surveys.</t>
      <t>Participants found QoO metrics more intuitive and actionable than traditional
metrics (e.g., speed tests). QoO directly aligned with their self-reported
experiences, increasing trust and engagement.</t>
      <t>These results provide supporting evidence for QoO's value as a user-focused
tool, bridging technical metrics with real-world application performance to
enhance end-user satisfaction.</t>
    </section>
    <section anchor="known-weaknesses-and-open-questions">
      <name>Known Weaknesses and open questions</name>
      <t>A method has been described for simplifying the comparison between application
network requirements and quality attenuation measurements. This simplification
introduces several artifacts, the significance of which may vary depending on
context. The following section discusses some known limitations.</t>
      <t>Volatile networks - in particular, mobile cellular networks - pose a challenge
for network quality prediction, with the level of assurance of the prediction
likely to decrease as session duration increases. Historic network conditions
for a given cell may help indicate times of network load or reduced transmission
power, and their effect on throughput/latency/loss. However, as terminals are
mobile, the signal bandwidth available to a given terminal can change by an
order of magnitude within seconds due to physical radio factors. These include
whether the terminal is at the edge of cell, or undergoing cell handover, the
interference and fading from the local environment, and any switch between radio
bearers with differing signal bandwidth and transmission-time intervals (e.g. 4G
and 5G). This suggests a requirement for measuring quality attenuation to and
from an individual terminal, as that can account for the factors described
above. How that facility is provisioned onto individual terminals, and how
terminal-hosted applications can trigger a quality attenuation query, is an open
question.</t>
      <section anchor="missing-temporal-information-in-distributions">
        <name>Missing Temporal Information in Distributions</name>
        <t>These two latency series, 1,200,1,200,1,200,1,200,1,200 and
1,1,1,1,1,200,200,200,200,200, have identical distributions but may yield
different application performance. Ignoring this temporal information is a
tradeoff between simplicity and precision: capturing all the information
necessary to perfectly predict outcomes quickly leads to extreme overhead and
high computational complexity. An application's performance depends on how it
reacts to varying network performance, meaning nearly all distinct series of
latencies may have different application outcomes.</t>
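The point can be checked directly: the two series above are distinct as time
series, yet identical as distributions, so any purely distributional metric
(including QoO) treats them alike.

```python
# Two latency series (ms) with identical value distributions but
# different temporal patterns.
alternating = [1, 200] * 5         # 1, 200, 1, 200, ...
clustered = [1] * 5 + [200] * 5    # 1, 1, 1, 1, 1, 200, 200, ...

assert alternating != clustered                  # different time series
assert sorted(alternating) == sorted(clustered)  # identical distribution
```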
      </section>
      <section anchor="subsampling-the-real-distribution">
        <name>Subsampling the real distribution</name>
        <t>Additionally, it is not feasible to capture latency for every packet
transmitted. Probing and sampling can be performed, but some aspects will always
remain unknown. This introduces an element of probability. Absolute perfection
cannot be achieved; rather than disregarding this reality, it is more practical
to acknowledge it. Therefore, discussing the probability of outcomes provides a
more accurate and meaningful approach.</t>
      </section>
      <section anchor="assuming-linear-relationship-between-perfect-and-unusable-and-that-it-is-not-really-a-probability">
        <name>Assuming Linear Relationship between Perfect and Unusable (and that it is not really a probability)</name>
        <t>The QoO score interpolates linearly between the NRP and the NRPoU,
which is a simplification. One can conjure up scenarios where 50 ms latency is
actually worse than 51 ms latency, as developers may have chosen 50 ms as the
threshold for changing quality, and the threshold may be imperfect. Taking
these scenarios into account would add another order of magnitude of
complexity to determining network requirements and finding a distance measure
(between requirement and actual measured capability).</t>
      </section>
      <section anchor="binary-bandwidth-threshold">
        <name>Binary Bandwidth threshold</name>
        <t>Treating the bandwidth requirement as a binary threshold reduces
complexity, but it must be acknowledged that applications are not that simple.
Network requirements can be set up per quality level (resolution, fps, etc.)
for the application if necessary.</t>
      </section>
      <section anchor="arbitrary-selection-of-percentiles">
        <name>Arbitrary selection of percentiles</name>
        <t>A selection of percentiles is necessary for simplicity, because more complex
methods may slow adoption of the framework. The 0th (minimum) and 50th (median)
percentiles are commonly used for their inherent significance. According to
<xref target="BITAG"/>, the 90th, 98th, and 99th percentiles are particularly important for
certain applications. Generally, higher percentiles provide more insight for
interactive applications, but only up to a certain threshold—beyond which
applications may treat excessive delays as packet loss and adapt accordingly.
The choice between percentiles such as the 95th, 96th, 96.5th, or 97th is not
universally prescribed and may vary between application types. Therefore,
percentiles must be selected arbitrarily, based on the best available knowledge
and the intended use case.</t>
      </section>
    </section>
    <section anchor="implementation-status">
      <name>Implementation status</name>
      <t>Note to RFC Editor: This section <bcp14>MUST</bcp14> be removed before publication of the
document.</t>
      <t>This section records the status of known implementations of the protocol defined
by this specification at the time of posting of this Internet-Draft, and is
based on a proposal described in <xref target="RFC7942"/>. The description of implementations
in this section is intended to assist the IETF in its decision processes in
progressing drafts to RFCs. Please note that the listing of any individual
implementation here does not imply endorsement by the IETF. Furthermore, no
effort has been spent to verify the information presented here that was supplied
by IETF contributors. This is not intended as, and must not be construed to be,
a catalog of available implementations or their features. Readers are advised to
note that other implementations may exist.</t>
      <t>According to <xref target="RFC7942"/>, "this will allow reviewers and working groups to assign
due consideration to documents that have the benefit of running code, which may
serve as evidence of valuable experimentation and feedback that have made the
implemented protocols more mature. It is up to the individual working groups to
use this information as they see fit".</t>
      <section anchor="qoo-c">
        <name>qoo-c</name>
        <ul spacing="normal">
          <li>
            <t>Link to the open-source repository:  </t>
            <t>
https://github.com/getCUJO/qoo-c</t>
          </li>
          <li>
            <t>The organization responsible for the implementation:  </t>
            <t>
CUJO AI</t>
          </li>
          <li>
            <t>A brief general description:  </t>
            <t>
A C library for calculating Quality of Outcome</t>
          </li>
          <li>
            <t>The implementation's level of maturity:  </t>
            <t>
A complete implementation of the specification described in this document</t>
          </li>
          <li>
            <t>Coverage:  </t>
            <t>
The library is tested with unit tests</t>
          </li>
          <li>
            <t>Licensing:  </t>
            <t>
MIT</t>
          </li>
          <li>
            <t>Implementation experience:  </t>
            <t>
Tested by the author. Needs additional testing by third parties.</t>
          </li>
          <li>
            <t>Contact information:  </t>
            <t>
Bjørn Ivar Teigen Monclair: bjorn.monclair@cujo.com</t>
          </li>
          <li>
            <t>The date when information about this particular implementation was last
updated:  </t>
            <t>
27th of May 2025</t>
          </li>
        </ul>
      </section>
      <section anchor="goresponsiveness">
        <name>goresponsiveness</name>
        <ul spacing="normal">
          <li>
            <t>Link to the open-source repository:  </t>
            <t>
https://github.com/network-quality/goresponsiveness  </t>
            <t>
The specific pull-request:
https://github.com/network-quality/goresponsiveness/pull/56</t>
          </li>
          <li>
            <t>The organization responsible for the implementation:  </t>
            <t>
University of Cincinnati for goresponsiveness as a whole, Domos for the QoO
part.</t>
          </li>
          <li>
            <t>A brief general description:  </t>
            <t>
A network quality test written in Go. Capable of measuring RPM and QoO.</t>
          </li>
          <li>
            <t>The implementation's level of maturity:  </t>
            <t>
In active development</t>
          </li>
          <li>
            <t>Coverage:  </t>
            <t>
The QoO part is tested with unit tests</t>
          </li>
          <li>
            <t>Licensing:  </t>
            <t>
GPL 2.0</t>
          </li>
          <li>
            <t>Implementation experience:  </t>
            <t>
Needs testing by third parties</t>
          </li>
          <li>
            <t>Contact information:  </t>
            <t>
Bjørn Ivar Teigen Monclair: bjorn.monclair@cujo.com  </t>
            <t>
William Hawkins III: hawkinwh@ucmail.uc.edu</t>
          </li>
          <li>
            <t>The date when information about this particular implementation was last
updated:  </t>
            <t>
10th of January 2024</t>
          </li>
        </ul>
      </section>
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
      <?line -18?>

</section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>The Quality of Outcome (QoO) framework introduces a method for assessing network
quality based on probabilistic outcomes derived from latency, packet loss, and
throughput measurements. While the framework itself is primarily analytical and
does not define a new protocol, some security considerations arise from its
deployment and use.</t>
      <t><strong>Measurement Integrity and Authenticity</strong></t>
      <t>QoO relies on accurate and trustworthy measurements of network performance. If
an attacker can manipulate these measurements—either by injecting falsified data
or tampering with the measurement process—they could distort the resulting QoO
scores. This could mislead users, operators, or regulators into making incorrect
assessments of network quality.</t>
      <t>To mitigate this risk:</t>
      <ul spacing="normal">
        <li>
          <t>Measurement agents can authenticate with the systems collecting or analyzing
QoO data.</t>
        </li>
        <li>
          <t>Measurement data can be transmitted over secure channels (e.g., TLS) to ensure
confidentiality and integrity.</t>
        </li>
        <li>
          <t>Digital signatures may be used to verify the authenticity of measurement
reports.</t>
        </li>
      </ul>
      <t><strong>Risk of Misuse and Gaming</strong></t>
      <t>As QoO scores may influence regulatory decisions, service-level agreements
(SLAs), or user trust, there is a risk that network operators or application
developers might attempt to "game" the system. For example, they might optimize
performance only for known test conditions or falsify requirement thresholds to
inflate QoO scores.</t>
      <t>Mitigations include:</t>
      <ul spacing="normal">
        <li>
          <t>Independent verification of application requirements and measurement
methodologies.</t>
        </li>
        <li>
          <t>Use of randomized or blind testing procedures.</t>
        </li>
        <li>
          <t>Transparency in how QoO scores are derived and what assumptions are made.</t>
        </li>
      </ul>
      <t><strong>Privacy Considerations</strong></t>
      <t>QoO measurements may involve collecting detailed performance data from end-user
devices or applications. Depending on the deployment model, this could include
metadata such as IP addresses, timestamps, or application usage patterns.</t>
      <t>To protect user privacy:</t>
      <ul spacing="normal">
        <li>
          <t>Data collection should follow the principle of data minimization, only
collecting what is strictly necessary.</t>
        </li>
        <li>
          <t>Personally identifiable information (PII) should be anonymized or
pseudonymized where possible.</t>
        </li>
        <li>
          <t>Users should be informed about what data is collected and how it is used, in
accordance with applicable privacy regulations (e.g., GDPR).</t>
        </li>
      </ul>
      <t><strong>Denial of Service (DoS) Risks</strong></t>
      <t>Active measurement techniques used to gather QoO data (e.g., TWAMP, STAMP,
synthetic traffic generation) can place additional load on the network. If not
properly rate-limited, this could inadvertently degrade service or be exploited
by malicious actors to launch DoS attacks.</t>
      <t>Recommendations:</t>
      <ul spacing="normal">
        <li>
          <t>Implement rate limiting and access control for active measurement tools.</t>
        </li>
        <li>
          <t>Ensure that measurement traffic does not interfere with critical services.</t>
        </li>
        <li>
          <t>Monitor for abnormal measurement patterns that may indicate abuse.</t>
        </li>
      </ul>
      <t><strong>Trust in Application Requirements</strong></t>
      <t>QoO depends on application developers to define the Network Requirements for
Perfection (NRP) and the Network Requirements Point of Unusableness (NRPoU). If
these are defined inaccurately, whether unintentionally or maliciously, the
resulting QoO scores may be misleading.</t>
      <t>To address this:</t>
      <ul spacing="normal">
        <li>
          <t>Encourage peer review and publication of application requirement profiles.</t>
        </li>
        <li>
          <t>Where QoO is used for regulatory or SLA enforcement, require independent
validation of requirement definitions.</t>
        </li>
      </ul>
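      <t>A minimal sketch of such validation, assuming a requirement profile
maps metric names to thresholds where larger values mean worse network
conditions (the metric names and profile structure here are hypothetical, not
defined by this document):</t>
      <sourcecode type="python"><![CDATA[
def validate_profile(nrp: dict, nrpou: dict) -> list:
    """Check that an NRP/NRPoU pair is internally consistent (illustrative).

    Every metric's point of unusableness must be strictly worse (larger)
    than its requirement for perfection, and thresholds must be
    non-negative; otherwise a score for that metric is ill-defined.
    """
    errors = []
    for metric, perfect in nrp.items():
        if metric not in nrpou:
            errors.append(f"{metric}: missing from NRPoU")
            continue
        if perfect < 0:
            errors.append(f"{metric}: negative NRP threshold")
        if nrpou[metric] <= perfect:
            errors.append(f"{metric}: NRPoU ({nrpou[metric]}) must exceed "
                          f"NRP ({perfect})")
    return errors

ok = validate_profile({"latency_p99_ms": 100, "loss_pct": 0.1},
                      {"latency_p99_ms": 400, "loss_pct": 2.0})
bad = validate_profile({"latency_p99_ms": 100}, {"latency_p99_ms": 50})
]]></sourcecode>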
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="RFC7942">
          <front>
            <title>Improving Awareness of Running Code: The Implementation Status Section</title>
            <author fullname="Y. Sheffer" initials="Y." surname="Sheffer"/>
            <author fullname="A. Farrel" initials="A." surname="Farrel"/>
            <date month="July" year="2016"/>
            <abstract>
              <t>This document describes a simple process that allows authors of Internet-Drafts to record the status of known implementations by including an Implementation Status section. This will allow reviewers and working groups to assign due consideration to documents that have the benefit of running code, which may serve as evidence of valuable experimentation and feedback that have made the implemented protocols more mature.</t>
              <t>This process is not mandatory. Authors of Internet-Drafts are encouraged to consider using the process for their documents, and working groups are invited to think about applying the process to all of their protocol specifications. This document obsoletes RFC 6982, advancing it to a Best Current Practice.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="205"/>
          <seriesInfo name="RFC" value="7942"/>
          <seriesInfo name="DOI" value="10.17487/RFC7942"/>
        </reference>
        <reference anchor="TR-452.1" target="https://www.broadband-forum.org/download/TR-452.1.pdf">
          <front>
            <title>TR-452.1: Quality Attenuation Measurement Architecture and Requirements</title>
            <author>
              <organization>Broadband Forum</organization>
            </author>
            <date year="2020" month="September"/>
          </front>
        </reference>
        <reference anchor="BITAG" target="https://www.bitag.org/documents/BITAG_latency_explained.pdf">
          <front>
            <title>Latency Explained</title>
            <author>
              <organization>BITAG</organization>
            </author>
            <date year="2022" month="October"/>
          </front>
        </reference>
        <reference anchor="RFC5481">
          <front>
            <title>Packet Delay Variation Applicability Statement</title>
            <author fullname="A. Morton" initials="A." surname="Morton"/>
            <author fullname="B. Claise" initials="B." surname="Claise"/>
            <date month="March" year="2009"/>
            <abstract>
              <t>Packet delay variation metrics appear in many different standards documents. The metric definition in RFC 3393 has considerable flexibility, and it allows multiple formulations of delay variation through the specification of different packet selection functions.</t>
              <t>Although flexibility provides wide coverage and room for new ideas, it can make comparisons of independent implementations more difficult. Two different formulations of delay variation have come into wide use in the context of active measurements. This memo examines a range of circumstances for active measurements of delay variation and their uses, and recommends which of the two forms is best matched to particular conditions and tasks. This memo provides information for the Internet community.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5481"/>
          <seriesInfo name="DOI" value="10.17487/RFC5481"/>
        </reference>
        <reference anchor="RFC6049">
          <front>
            <title>Spatial Composition of Metrics</title>
            <author fullname="A. Morton" initials="A." surname="Morton"/>
            <author fullname="E. Stephan" initials="E." surname="Stephan"/>
            <date month="January" year="2011"/>
            <abstract>
              <t>This memo utilizes IP performance metrics that are applicable to both complete paths and sub-paths, and it defines relationships to compose a complete path metric from the sub-path metrics with some accuracy with regard to the actual metrics. This is called "spatial composition" in RFC 2330. The memo refers to the framework for metric composition, and provides background and motivation for combining metrics to derive others. The descriptions of several composed metrics and statistics follow. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6049"/>
          <seriesInfo name="DOI" value="10.17487/RFC6049"/>
        </reference>
        <reference anchor="RFC6390">
          <front>
            <title>Guidelines for Considering New Performance Metric Development</title>
            <author fullname="A. Clark" initials="A." surname="Clark"/>
            <author fullname="B. Claise" initials="B." surname="Claise"/>
            <date month="October" year="2011"/>
            <abstract>
              <t>This document describes a framework and a process for developing Performance Metrics of protocols and applications transported over IETF-specified protocols. These metrics can be used to characterize traffic on live networks and services. This memo documents an Internet Best Current Practice.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="170"/>
          <seriesInfo name="RFC" value="6390"/>
          <seriesInfo name="DOI" value="10.17487/RFC6390"/>
        </reference>
        <reference anchor="RFC9318">
          <front>
            <title>IAB Workshop Report: Measuring Network Quality for End-Users</title>
            <author fullname="W. Hardaker" initials="W." surname="Hardaker"/>
            <author fullname="O. Shapira" initials="O." surname="Shapira"/>
            <date month="October" year="2022"/>
            <abstract>
              <t>The Measuring Network Quality for End-Users workshop was held virtually by the Internet Architecture Board (IAB) on September 14-16, 2021. This report summarizes the workshop, the topics discussed, and some preliminary conclusions drawn at the end of the workshop.</t>
              <t>Note that this document is a report on the proceedings of the workshop. The views and positions documented in this report are those of the workshop participants and do not necessarily reflect IAB views and positions.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9318"/>
          <seriesInfo name="DOI" value="10.17487/RFC9318"/>
        </reference>
        <reference anchor="RPM" target="https://datatracker.ietf.org/doc/html/draft-ietf-ippm-responsiveness">
          <front>
            <title>Responsiveness under Working Conditions</title>
            <author>
              <organization/>
            </author>
            <date year="2022" month="July"/>
          </front>
        </reference>
        <reference anchor="RRUL" target="https://www.bufferbloat.net/projects/bloat/wiki/RRUL_Spec/">
          <front>
            <title>Real-time response under load test specification</title>
            <author>
              <organization/>
            </author>
            <date/>
          </front>
        </reference>
        <reference anchor="Bufferbloat" target="https://queue.acm.org/detail.cfm?id=2071893">
          <front>
            <title>Bufferbloat: Dark buffers in the Internet</title>
            <author>
              <organization/>
            </author>
            <date/>
          </front>
        </reference>
        <reference anchor="Haeri22" target="https://www.mdpi.com/2073-431X/11/3/45">
          <front>
            <title>Mind Your Outcomes: The ΔQSD Paradigm for Quality-Centric Systems Development and Its Application to a Blockchain Case Study</title>
            <author>
              <organization/>
            </author>
            <date/>
          </front>
        </reference>
        <reference anchor="RFC5357">
          <front>
            <title>A Two-Way Active Measurement Protocol (TWAMP)</title>
            <author fullname="K. Hedayat" initials="K." surname="Hedayat"/>
            <author fullname="R. Krzanowski" initials="R." surname="Krzanowski"/>
            <author fullname="A. Morton" initials="A." surname="Morton"/>
            <author fullname="K. Yum" initials="K." surname="Yum"/>
            <author fullname="J. Babiarz" initials="J." surname="Babiarz"/>
            <date month="October" year="2008"/>
            <abstract>
              <t>The One-way Active Measurement Protocol (OWAMP), specified in RFC 4656, provides a common protocol for measuring one-way metrics between network devices. OWAMP can be used bi-directionally to measure one-way metrics in both directions between two network elements. However, it does not accommodate round-trip or two-way measurements. This memo specifies a Two-Way Active Measurement Protocol (TWAMP), based on the OWAMP, that adds two-way or round-trip measurement capabilities. The TWAMP measurement architecture is usually comprised of two hosts with specific roles, and this allows for some protocol simplifications, making it an attractive alternative in some circumstances. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5357"/>
          <seriesInfo name="DOI" value="10.17487/RFC5357"/>
        </reference>
        <reference anchor="RFC8762">
          <front>
            <title>Simple Two-Way Active Measurement Protocol</title>
            <author fullname="G. Mirsky" initials="G." surname="Mirsky"/>
            <author fullname="G. Jun" initials="G." surname="Jun"/>
            <author fullname="H. Nydell" initials="H." surname="Nydell"/>
            <author fullname="R. Foote" initials="R." surname="Foote"/>
            <date month="March" year="2020"/>
            <abstract>
              <t>This document describes the Simple Two-way Active Measurement Protocol (STAMP), which enables the measurement of both one-way and round-trip performance metrics, like delay, delay variation, and packet loss.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8762"/>
          <seriesInfo name="DOI" value="10.17487/RFC8762"/>
        </reference>
        <reference anchor="IRTT" target="https://github.com/heistp/irtt">
          <front>
            <title>Isochronous Round-Trip Tester</title>
            <author>
              <organization/>
            </author>
            <date/>
          </front>
        </reference>
        <reference anchor="Kelly" target="https://www.cambridge.org/core/journals/advances-in-applied-probability/article/abs/networks-of-queues/38A1EA868A62B09C77A073BECA1A1B56">
          <front>
            <title>Networks of Queues</title>
            <author initials="F. P." surname="Kelly" fullname="Frank P. Kelly">
              <organization/>
            </author>
            <date/>
          </front>
        </reference>
        <reference anchor="RFC8239">
          <front>
            <title>Data Center Benchmarking Methodology</title>
            <author fullname="L. Avramov" initials="L." surname="Avramov"/>
            <author fullname="J. Rapp" initials="J." surname="Rapp"/>
            <date month="August" year="2017"/>
            <abstract>
              <t>The purpose of this informational document is to establish test and evaluation methodology and measurement techniques for physical network equipment in the data center. RFC 8238 is a prerequisite for this document, as it contains terminology that is considered normative. Many of these terms and methods may be applicable beyond the scope of this document as the technologies originally applied in the data center are deployed elsewhere.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8239"/>
          <seriesInfo name="DOI" value="10.17487/RFC8239"/>
        </reference>
        <reference anchor="QoOSimStudy" target="https://github.com/getCUJO/qoosim">
          <front>
            <title>Quality of Outcome Simulation Study</title>
            <author initials="B. I. T." surname="Monclair" fullname="Bjørn Ivar Teigen Monclair">
              <organization/>
            </author>
            <date/>
          </front>
        </reference>
        <reference anchor="QoOUserStudy" target="https://assets.ycodeapp.com/assets/app24919/Documents/LaiW4tJQ2kj4OOTiZbnf48MbS22rQHcZQmCriih9-published.pdf">
          <front>
            <title>Application Outcome Aware Root Cause Analysis</title>
            <author initials="B. I. T." surname="Monclair" fullname="Bjørn Ivar Teigen Monclair">
              <organization/>
            </author>
            <date/>
          </front>
        </reference>
        <reference anchor="QoOAppQualityReqs" target="https://assets.ycodeapp.com/assets/app24919/Documents/U6TlxIlbcl1dQfcNhnCleziJWF23P5w0xWzOARh8-published.pdf">
          <front>
            <title>Performance Measurement of Web Applications</title>
            <author initials="T." surname="Østensen" fullname="Torjus Østensen">
              <organization/>
            </author>
            <date/>
          </front>
        </reference>
      </references>
    </references>
    <?line 1129?>

<section numbered="false" anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>The authors would like to thank Will Hawkins, Stuart Cheshire, Jason Livingood, Olav Nedrelid, Greg Mirsky, Tommy Pauly, Marcus Ihlar, Tal Mizrahi, Ruediger Geib, Mehmet Şükrü Kuran and Neil Davies for their feedback and input to this document.</t>
    </section>
  </back>
  <!-- ##markdown-source:
Zi3+iASybaYwKSpuyDZrMvGCiwukuW3abGkDKxWit9rThzPRjE2ThludmOJA
S/CWUmHcR8RtPFlxPtZG48ILuS1q5yBvHONjS+JL8WZwoc/eSIkWYJqiFFyq
SaTJhsYBWJCIC0k1ZQ3TOjK5UKT6MU3QTI+rTXF8AtWeSnhgd5+TQbL/KJ3u
+SUiQhn6wo1m9zKra+CycNn+q9/uP4cnZv+ptSr3fL8IlWAhDC9vrVel4dii
cYc+DQ53kc6H3JY8am2AAP7h3U+5A3bZrvOgLUeT1zI9MfLds6Ryy8PNm2bp
dcW3/PTjnzxuwOJc0HkR881Cl6smlIUeWScgbE6YRpFsIxZpB+SKa9iGqZHH
W112antGzQzWkfYKgR/YXpdBLZWwNySQQ1xww2opaArAGsrm+TZSO76LjfUm
0bNINxr0AJoRDWGvKMORjM871sIvM0tHQTkuFAsd12Sl2AJnUrlqoDzJo7eM
QgpXjMKVIV1WyLDJuUp5nFIZR1ZzrRrkwGu+YspHO7iluGxrzVPeI4E0mtm6
5tPBRXtkKWg8DpZYKa0wXpMdNJqUKVU9pWy9Mk1fYT+sysI9F6pSWjvfoFxS
4wMkW4ELnZ3U+koCD2iCahMc/1JOSoP1KaUcLcRN3Iu05JCNq+6YhW1R1XkS
lKG4k7waa3/mjgTXCsOaAOfLJznXd+QAaasV9GabjOTcRH/rF0hpEy0qII3Y
ty07M+imWi0EWBf2Xo3L53P9TfZysoEkRRJotXkVs+BcapdDb5PMVowH0NKW
xDXX+ZRhFvl+5RNmr1FzD40o2OE30L5izV4g02zCnQfwhv2E28RmdWpnPGbS
MFNiCqTxPrY+9raqirg2i1IIp3OisZLg1zhJMKAW22izF63LHt0P0ZN4bGlk
cUsxHmf8Ti5U6NVOmuPDe0FK6XJd6p4R89n5qlqgyUOzJE1qtcBWi3758F6g
Xxongjpl/uitO77sto0jKGCUc1VEgLRcZWWljgiBr7eWZafJ55/G4k7K9WYq
xXuvclsjyddzixgP3AvaN2ifs28Ff+jLKffSq+/8JN5zrNFE3c7cOl1CmGmj
gYSl920Fco8p2XUtyj2jyrsdqkyna9YgKqmpDAlLKbCZKvFtw1ubOysTY6r1
pefouVKL78TFI62hOyISoA3nAzOcBBvJzeRRwrIvX4P7nBeKJdT2QZjyM1d4
n6XjmNlq6GykL8UZryaKqOG3rj/EvX0FihmBgPlW8b+Kg5yydb864otEj+3E
WPwQua2fgblu8F+7BJh6t39VMHGO2FoHgx3XyLy7u/r1FwhdPny4+jXHaN9L
/XB7abRe4avcSbCTpUPAckWUAToV9/E4Z9jQM+n5CPjSq/xCbPbcwhYzfnVd
+o7VN9rcMTU19/iB9pMGPRqJ7Y54Oao3CeZE77ZTkjm9sigtqy6E1Zg+2M8D
uxyQFzMzo9182CO2z68daCMhW8c0Djwwrm2fns6QAsvQpYgSPFHsg7DQN+f5
c4dPHYfyMLYmTqXLtjajufDW52u1PpN3n3ibdKhNubES3zjUE54cIPMZmGRb
rpaVVS+jNtRlAHrWfqXGCQu1Uly53QEHsrk2S2AeNzCGJQhqTeP375m04HUx
rcuB+unHf23CFJ04O0kiXFbsBIZsNbO9HJ3V/syabHfukGkfooO5qovD6//q
IPnyh72kzpsrtmnlgpqFcGTfOmud7dzYvLVWvocMe3lFJ44bOQawmuQlTFEM
LPxSC8tUQQEXLuBuEWVet8DQoMEOxP22xxQE6QuUM8q/huXFZNwWCS21DkIv
Ax4mmjnscukYGXrbL8JouywmyIqjh5ApXQReoAJVaHuVtGmNEv0h8EEoHiMB
foefLi3TybplTZuGgooqcYknUWRiGHenJHGJ7l6pH8PI3Ouu/RO13nkq2N9M
OsbYNrArdnw4YgGAR/LuhPwyKU8eOWVcV2GGDmTp9e1Q0Q19qlHDpDXORS9i
ByRbEJXW3nKoNykXpDkb4jhrMtZVIXdYn2A37mPiOuPa8g4pWuKSFiKq11rE
bVzSx4lczNJWMpilrZ1EXEE31OA6Pe9tNhS87CfICYy2N3yj+GJEf3dcWU/k
GAXdZEa8AaQU0LjUbzNIFPJq4RLi+ZnW+azdkyhhBTgHREgWFzyNEeaeSm2z
pHYRNFLHvuCt4k3zbaiQMVowfmxL9UKbFTEn/T6TtuEIzd5BDkUpgJHr7A7L
S/uobLnKa84wy641F6aS3tgYhI2n94gA1qpsGfV3n7C6rh/fQ2cWtiuNunHJ
4X0G/NBurVIG90l2WLv4QFKDT/lkKg14M5nOrAAJX4c/Txk7INbBW7jSlj/K
XDd9RVyftqmV9oDiQKMZLzWz+uBuMk1vabPFm+R8un3hX049gfvENYdTlDH8
CkNx/1Z0UomE0bucy5Q0HB0MxshAnUDnbSSC7XxKrpZ9JWlL7GAjBc2igYxr
yyo03HBPBWYreyPJkHBmWSG1qPjwyMQhhIcWgm68kdYMQguqrRmNxblT83Qu
0GlLbtYBbUFNmnrF9Y4tUWFlaShQlrmoD9tFYuNVE24QBkuQDmCdT+fC+zSD
za0KD3pTPm/AZ4iVZKXYNl79t8qbsLZPkt9yvZ5vsvQKLjAVUNUqK13fajhI
FcXqshptqom46CRhwePApYBx3vh6wxGSeSt+tw+JEqcgsWqur7OPC0qe2Yra
LFppkrbbrGOAYuaJrw5uNU6wkzI+4qVF+l3L0N3LKK/NgtamUobc1oSVakcF
6RsaE6E1/R3HhwtnDSKWhDI5FnJLBtmyGnN91qwo8EV4JVcnZecqXL1zsea6
oBo6VNNc/YWWhBNXMR7Vceqwgqa/3HjN3oGLgONXs9I51ZXi4UH+klUa9qhv
+DbFKOUiSjwZcVVmxYrb7k5YGubaLzjKP65qV4qbLSxW+gDlgg7k7Gk0SGX/
SVxPe19F375Ab9T9e8wqmu/5iiAFL7OngbToq0kuWHyZhOsAy8oRq0Jcjq40
KMnICYbLlIipBQzTNqTQPD8tMb5a3DZ8XMGYqkQMJUEINZmFcJqw8oB7ae6S
bbKpxCEmXO6o0haiAiDnhUYd4UrdI+h30cKBV2sKJxpxT6MMZpLPkGw+OVQ7
XpWkStM06DTYg8qjNmNS4VzrPLGt+BRsLGIZb+CwU0OAOXFy7zlj2+8/37Mn
WJKiGw+4dd5V74jq4wZsg5OU0UZ6YXNnXcSBaOqp4EZVT3GBBN0Nz78Mhy9G
EiTCXdrCnBObmY9jWhxuZPg43reeBu9TF8ICHSD0q+GiEi9Pt/sisW+adx2g
ysO5Eb+tbzmiIy0CS2M5sHhdztQwusyQV01DOA3Ar0SHUY+YkYokWL7OaIJE
a46Tg8EhWelb/svrezCw/4fvO/8f9NcWUCtIK9atNZlcwhb9/Uo71abnZaWe
No4iBRMDkUDIZ9Vs5hIIfIqc6BY2kZVjF669dhEDhKMqZM7j7S53Hlzbunye
tVpd0Lbi9P5TnD1uEsybT+qgljVLFZisxREYPnsSib/PmlhzYuHDFieK1Amx
0Pisgd2jbfkOOqWkp3NDVrfIss3eRKAPxoWQ+vciyB3hCnVjb3kttCl1uMHG
I7ARY/Y17fuanFvq4+oQ7F8S/5tRvtEy3u9c+2yxa8a+XH1wOnWUv7atu+iQ
r9ivzrQoISwjsT9ilSyVldUE2gHcoIX6U2ZhRRzaIg1GBpE446vqWuj53yVh
aNdVKnd0q92k7Yqw9uqy/Q23usTQCubueRvl8ali4VOzIxSmo00b2U1Sw4+3
RpHNf9KoqOttqA5brZeXvMjZOH+t7SCaRb5ynP88SMqxoL5k17Wj9buMaXIF
lmCQe+aVdaJV5XfYd+5Rb0ujiJF5/27QCpzTS1qBQMOMV33+/kHgu0ybsI+r
j4OiZWspj+v6ZcQeheSGD1/5bNDw0F2nOWbESGwKy6VzvMP34MYu9UVFkGje
LjpF2kaPXh/ghFR77MPu8lsSMiTjS10JPelSya4TyoGQVCNoHQAMDXDhug+y
4Y8Ea+DwV37e5vGiqhrPahvfFiUYvZy0oG9yQLnTpK9koljYIA9BS3A41KUF
9br/EfYlKkFdYQsvEh1212MhBojZJVk7GfkMpCj6OvNsXWm9HufEWriSelB+
P/B8wSjf8lOcLOdtG0nGtlVKfSZc9tbYlALQE9pwJlur34hNgfoxu+rlFxjo
ffmKdPS03DPhaFJ5j9SsYRNe1yAHhmmh/D6wbkYASmlGAwkRV2hGdGDN0X6A
/+LFnd4L8j5vpwD8ZEu4cHKcbc4TFzN+LlUuIQkgCKUvk3uktYXVlGfPCT+M
lUTtQheXNAHpyYxXoptvNAX66cc/hcBQswGXaLlbNNIwpBSTLzQdNevjpq8k
p+KAHCc1SOO1ANDu6wMpxIJXVPLdP5f/jvgTUtS+kBxmpLGFwBFbnEvLYDkL
tMdK5i58TSgiIspwabe2bFKqZJ9jHyIwMofPvLXjzrGxPJEL1kwZ7iyVxsTL
FcPzUPFm3ZiXlUQKXz97nDydorzVsar0ep7O3lxcSjI/13+3Ra1X67HXNvhc
GJsaZpMj7RO05qxYbfxW3CGWdh4NqvEGrhQHtxBSwwlCeTcZVI0rtlFw7itx
2ln8GzeKID49fFKnM7WQ8sa4xUy1FAhX/rX7CLAhLcYXD+8d/kEOeCeZrjNk
kytosvGYJ7cBrQPeYJynTy+f4fkoiDa1lVo0u4s99MZVnAJ2DoNudG9QYaGQ
1CFBbOjUUdNCpwzbz9tPJh4mV6HyoUYphi9dzEUKaQYWhjhKnglQZ8mKTFkZ
Mto5COJqYK00p50Ogk2BCxVzxWNp8SsZLTKJ4T9DezVsJy8Gl5GDEqoWtabU
Vq1fwlQNMj4grisCsGdrWeAxnaQU5UXSopKFcEdjg7osryW9lmsijkhzShFr
l2zRKVmH/FDjV1mUgu6TcNJJsDbc7zHg0J52BsmO4FhEoYUgqTN0isu0q7yF
6qCt7qqxpDIvDfwOUakzVj30cKktrBUywAxKOiKsANuCB5Nqmg28T8wI+gc1
+Z0jfMbOSl4jcYx6QmEVxjt97buWqe1LblcCeRB6TFU1XvKi2hZswuyFNJxV
vzFpI1XCO0aicGNIfIRm2h3RBL6vquEEoKAXKDCgD4dhPZQ4BhccacDDbo+N
SZJF266a4/39ed4u1mPEnPfJBHz85qtX++5RnEdQz0nP/sGBjldYfIYXqYIS
7z4/G09JTk65s3AirQBtdeiAX/ClJ8njhCMvqoFMPpgJZwcVv5OMTOcO5GWm
m/ThNn1P7rDbaCtrRfwy4nJROi/e+riSorv8XE6f0kHnLqjB7iMSgK144WUv
JoB4o/QW3XZ2eokvO7LGO9/l2a5+Hmt/a9K3amiWKOKT+lxgG4MR3l9PRZNR
dB2SH9EkLaAafvaj7/78H3WZnJIYpvfktCfJGZqxpTkJtvF3VV2OSPtCKKz+
h8n6uwpUYZd8yhFI7nvTn4bsVanO9jB3K9KGtIMVnjLlsRxCa6CNOCNmcXj3
8D4T8byyFHbNINW/jp7VChmqvr2/+XTZSgejXq2LYshR/KY9/sueuY9n7COV
4S8/Pm9EiVLSf4yC82XatrnW1YhfKDGVG1IUia89qZZVEwKJ6WnYl9HHHcWu
553jmTc1vBbsdHtejVD9XRotzALvJfpjpBLOGv3CM3pa2u7MavxuPXGIaXH1
gF9w5J6fv0gOR3d//tjJAdt2qv4Ghwqfg0OVJN+gbF66TL5Mb64A4j49PT0m
cYIPN4t/WE+WJKhH68mIrNX/X2cQ5a2wIV8h1bzmc3iP05GRllGqoYsuMh6E
LQA+pBBwgbRkByrwzkD+TV6+4r9fP/36zenrp0/w98WXJy9euD+MXnHx5as3
L574v/ydj1+dnT19+URupm+T6Cuzc3by+x1RenZenV+evnp58mJng2NLbjKD
510dQFaXTMTlHz0+/9//8+Be8u7dfyPN5PDg4OH79/rhwcEX9+gDFlvexjaa
fIT0hRWWcZIkOyQn6SpvxT/OGMgbUSpxGP4ZK/OH4+RX48nq4N7f6xeYcPSl
XbPoS16zzW82bpZF7Pmq5zVuNaPvOysdj/fk99Fnu+7Bl7/6DRcEGB48+M3f
G8O1oLMJH3IQU1CYVujn52P/kSszrCvkG7XZNHbLrZzp4jx1DIty3sS+bOnB
Bj4ybBYbh2M9pG0Dzsbhk3wppcy4qqgECvA8Z1e4WiplduO0Q02BaOxqxWV8
EwSWNfuGXhUWm9YCGpxcFoKNYNfNaxsrOCH9gaMW9AWyycBDUfqSC0vH/lQO
9tOk2kWnBVMQzozDGDPkWJFgwgrWkqJFsm7lQIVxNlfz049/0uIyYxhj32ka
1IyOjVRgA64COD+UR5UQnAv3RtghMQq5jm12q6VvLRZtE1KjkDgFuvLFWnST
kQgATXJDMW5zxqHaOWZQ1eoMVYQy10cAoMIIBW6sjS0gwhWplsQt57IMcJXn
zZWkGQfTINFm3YOp3SXm73bOkkrmiruyJVsLdaGArpYTwKKNOo9mgEqI6ZXA
gzTuZlLL2GtcZoXDj1y+uNjjrppcqh2wLK2Ti4qqlpxyS1x445N8Dp4ncVKp
oq9u5rUYiqEFnAaE2Olvx/XtuQw9E/Nr4CKhGOYNAwfptc85nQf0e9IEMEd+
HXL81my4uY27dQ4E5AgJMlkz8tJ5nak7evfixYmmOQiqCfQ/CNHyjNBkK8/u
saOUJK5taELHPTv9EOtcrtgPsDMnbrETbKkAvcPs1Fu9S9CcP2RxbzVIHrA+
bSedcVdGj0St9QjdboObkSFp8zpDjKg5EyLVXCMO1TOVnpa211MrWxj4srZ1
5e6WpOMsCzBtYHTZLhkiAZNNcUTzl1yVDnj3Itdeegz3df3uGFkH2kV3cA6h
OICaw8hxUz2tm8XuUWRJIPqz8k562OZMV+d0YTrpyiPLFDcq8gmGLQtPn6sg
tQEGYwZtYUdGi1d3aIRY0JMAe8P0EPDzJUrSa/7HJCxmaVy1Y+uLPT2HJciV
Z4H5sRWlhX1FeXcNMjwthFo4EwQPAl9M8ytZE971J8w0fBdfzZ0QVJA6HrX3
FXaRR8SefbVuBkyozDjcitkydgLyRlKyD18MEYNrJKxqC0ZJM7tQud09Pz3d
C7pepmVV3lrigXXTZOup+yqGdyrN1U1wvzwbBMNaMw+Qp5IHVbQtyEHDgGBn
wMWhEwa7s6T8ExeQUrR4kdm1tGyICVB56/Mn56/3mAifZCUwgbR+F5oqsvuk
IsYLpse0eCK20AbUPYdR6hjrXCKzlv87Hv7Nydn5ILm4xD+muS3pKi6Br1VC
bJXOqhQAN6PwQ6+CwJXKML2HWxvDtS9NxoBHJUYyZBRYNu0QbDolfoGC8NwH
G1WfMpcTw7ktMLuKCncabvWNYFO15uBoJV1RinRdEpnTqqhqAbp9bbMuZV2F
S1kDR3DGPCAbVE+l4pE24tDEuM2FraqC+cxTaU/S7QPg+wY417CFHsnuo9wQ
63k6R37YmTTCkJeOpbF1txI8n8fENSt1+LF0bFW6S4ZeEs8L+6OF+aWWcQWQ
ivDoBwLJ5wRvy1M1G3mqYXXODxU57FRU4mqorPgJb7ZlXTya2SuBZLZz7wBF
VnANJ0sPfF3LtVoDVS4U+0iLdqXThbMpT2SKZAp5WiIZh1lgxpm18DALhCYO
0GyDbBPJz7iCMz1Mqipq8Q4XnQx0DvqE6t3a40FgZ/owbLCVqMi0pGlO3cvj
MnvO0B6JGXV68vKkx4QKbV1EHspKrlRMDW4eDocJ/NR4yokLaIvu8+5Yuklk
01/vQHnIdt6LYSb+RlungRt9sOstLa/YYWG9FcRk2jV8MY9ptxc5QiFfcU7f
C64LUSFP8FWRXhMRTWFv0MfntFak1NXNFVldl3Scb5PzdI0A3llaT4gHnC4Y
LnpJx+Us/6FOF/kgeb3OpjlQZM+zfExXZguShsn/+R9//s+r+s//mfwW0E8l
1rwgEXadZ00QNna+elFeYdPxdILlG5n/BwurumEK1gAA

-->

</rfc>
