<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rfc [
<!ENTITY nbsp "&#160;">
<!ENTITY zwsp "&#8203;">
<!ENTITY nbhy "&#8209;">
<!ENTITY wj "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" category="info" docName="draft-condrey-rats-pop-00" ipr="trust200902" obsoletes="" updates="" submissionType="independent" xml:lang="en" version="3">

  <front>
    <title abbrev="Proof of Process">Proof of Process: An Evidence Framework for Digital Authorship Attestation</title>

    <seriesInfo name="Internet-Draft" value="draft-condrey-rats-pop-00"/>

    <author fullname="David Condrey" initials="D." surname="Condrey">
      <organization>Writerslogic Inc</organization>
      <address>
        <postal>
          <country>United States</country>
        </postal>
        <email>david@writerslogic.com</email>
        <uri>https://writerslogic.com</uri>
      </address>
    </author>

    <date year="2026" month="February" day="6"/>

    <area>Security</area>
    <workgroup>Remote ATtestation procedureS</workgroup>

    <keyword>attestation</keyword>
    <keyword>evidence</keyword>
    <keyword>authorship</keyword>
    <keyword>RATS</keyword>
    <keyword>behavioral</keyword>
    <keyword>VDF</keyword>
    <keyword>verifiable delay function</keyword>
    <keyword>process documentation</keyword>
    <keyword>digital provenance</keyword>

    <abstract>
      <t>
        This document specifies Proof of Process (PoP), an evidence framework
        for digital authorship attestation. The framework produces tamper-evident,
        independently verifiable process evidence describing how documents evolve
        over time, without capturing document content or keystroke data.
      </t>
      <t>
        Proof of Process implements a specialized profile of the Remote ATtestation
        procedureS (RATS) architecture, extending it with behavioral entropy
        bindings, Verifiable Delay Function (VDF) temporal proofs, and a taxonomy
        of absence claims with explicit trust requirements. The specification
        defines two complementary CBOR-encoded formats: Evidence Packets (.pop)
        produced by Attesters during document authorship, and Attestation Results
        (.war) produced by Verifiers after appraising Evidence.
      </t>
      <t>
        The framework is designed around four core principles: privacy by
        construction (no content storage), zero trust (locally generated,
        independently verifiable), evidence over inference (observable facts,
        not conclusions), and cost-asymmetric forgery (expensive to fake,
        not impossible). This specification does not address AI detection,
        stylometric analysis, or intent inference.
      </t>
    </abstract>

    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        Status of this Memo: This Internet-Draft is submitted in full conformance
        with the provisions of BCP 78 and BCP 79.
      </t>
      <t>
        Internet-Drafts are working documents of the Internet Engineering Task
        Force (IETF). Note that other groups may also distribute working documents
        as Internet-Drafts. The list of current Internet-Drafts is at
        <eref target="https://datatracker.ietf.org/drafts/current/"/>.
      </t>
      <t>
        Internet-Drafts are draft documents valid for a maximum of six months
        and may be updated, replaced, or obsoleted by other documents at any
        time. It is inappropriate to use Internet-Drafts as reference material
        or to cite them other than as "work in progress."
      </t>
    </note>

  </front>

  <middle>
    <!-- Section 1: Introduction -->
    <section anchor="introduction" xml:base="sections/introduction.xml">
  <name>Introduction</name>

  <t>
    This document specifies Proof of Process, an evidence framework for
    digital authorship attestation. The framework produces tamper-evident,
    independently verifiable process evidence describing how documents
    evolve over time.
  </t>

  <section anchor="problem-statement">
    <name>Problem Statement</name>

    <t>
      Digital documents lack provenance information about their creation
      process. While cryptographic signatures prove key possession and
      timestamps prove existence at a point in time, neither addresses
      a fundamental question: how did this document come to exist?
    </t>

    <t>
      Existing approaches to authorship verification fall into two
      categories, each with significant limitations:
    </t>

    <dl>
      <dt>Surveillance-Based Systems:</dt>
      <dd>
        <t>
          Screen recording, keystroke logging, and continuous monitoring
          capture detailed process information but fundamentally violate
          author privacy. Such systems require trust in the monitoring
          infrastructure and create risks of data exfiltration or misuse.
          The evidence they produce is not independently verifiable without
          access to surveillance archives controlled by third parties.
        </t>
      </dd>

      <dt>Content Analysis Systems:</dt>
      <dd>
        <t>
          Stylometric analysis, AI detection tools, and linguistic forensics
          attempt to infer process from product. These approaches make
          probabilistic claims about authorship based on content patterns
          but cannot provide verifiable evidence of the creation process.
          They are vulnerable to false positives, false negatives, and
          adversarial manipulation.
        </t>
      </dd>
    </dl>

    <t>
      Neither approach produces evidence that is simultaneously:
    </t>

    <ul>
      <li>Privacy-preserving (no content capture or transmission)</li>
      <li>Independently verifiable (no trust in monitoring infrastructure)</li>
      <li>Tamper-evident (modifications are detectable)</li>
      <li>Process-documenting (captures how, not just what)</li>
    </ul>

    <t>
      The need for such evidence is growing across multiple domains:
    </t>

    <ul>
      <li>
        Academic integrity: Institutions increasingly seek process
        evidence for submissions, particularly as AI-generated content
        becomes difficult to distinguish from human writing.
      </li>
      <li>
        Legal documentation: Courts and regulatory bodies may require
        provenance information for documents submitted as evidence.
      </li>
      <li>
        Creative works: Authors and creators seek to document their
        creative process for copyright, attribution, and authenticity
        purposes.
      </li>
      <li>
        Professional certification: Regulated professions may require
        evidence that work product was created through specified processes.
      </li>
    </ul>
  </section>

  <section anchor="scope">
    <name>Scope</name>

    <section anchor="what-this-specifies">
      <name>What This Specification Defines</name>

      <t>
        This specification defines:
      </t>

      <ul>
        <li>
          Evidence format: The structure of Evidence Packets (.pop files)
          containing checkpoint chains, behavioral entropy bindings, and
          cryptographic proofs.
        </li>
        <li>
          Attestation result format: The structure of Attestation Results
          (.war files) produced by Verifiers after appraising Evidence.
        </li>
        <li>
          Checkpoint structure: The cryptographic construction binding
          document states, timing proofs, and behavioral evidence.
        </li>
        <li>
          Verification procedures: Algorithms for independent verification
          of Evidence Packets without access to the original Attesting
          Environment.
        </li>
        <li>
          Claim taxonomy: Classification of claims into chain-verifiable
          and monitoring-dependent categories with explicit trust
          requirements.
        </li>
      </ul>
    </section>

    <section anchor="what-this-does-not-specify">
      <name>What This Specification Does NOT Define</name>

      <t>
        This specification explicitly excludes:
      </t>

      <ul>
        <li>
          Content analysis: No examination of document content for
          stylometric patterns, linguistic features, or semantic meaning.
        </li>
        <li>
          Authorship determination: No claims about the identity of the
          author beyond cryptographic key binding. The specification
          documents process, not persons.
        </li>
        <li>
          Intent inference: No claims about the cognitive state, creative
          intent, or mental processes of the author.
        </li>
        <li>
          AI detection: No classification of content as "human-written"
          or "AI-generated." The specification provides evidence about
          process; interpretation is left to Relying Parties.
        </li>
        <li>
          Surveillance mechanisms: No screen capture, keystroke content
          logging, or continuous monitoring. Behavioral signals are
          captured as timing entropy, not content.
        </li>
      </ul>
    </section>

    <section anchor="rats-relationship">
      <name>Relationship to RATS Architecture</name>

      <t>
        This specification implements a specialized profile of the Remote
        ATtestation procedureS (RATS) architecture <xref target="RFC9334"/>
        with domain-specific extensions for behavioral evidence and process
        documentation.
      </t>

      <t>
        The RATS role mappings are:
      </t>

      <dl>
        <dt>Attester:</dt>
        <dd>
          The witnessd-core library running on the author's device,
          producing Evidence Packets (.pop files).
        </dd>

        <dt>Verifier:</dt>
        <dd>
          Any implementation capable of parsing and appraising Evidence
          Packets, producing Attestation Results (.war files).
        </dd>

        <dt>Relying Party:</dt>
        <dd>
          The entity consuming Attestation Results for trust decisions
          (e.g., academic institutions, publishers, legal systems).
        </dd>
      </dl>

      <t>
        The Evidence produced by this specification extends standard RATS
        evidence with behavioral entropy bindings and Verifiable Delay
        Function proofs for temporal ordering.
      </t>
    </section>
  </section>

  <section anchor="design-goals">
    <name>Design Goals</name>

    <t>
      The Proof of Process framework is designed around four core principles:
    </t>

    <section anchor="privacy-by-construction">
      <name>Privacy by Construction</name>

      <t>
        The framework enforces privacy through structural constraints, not
        policy promises:
      </t>

      <ul>
        <li>
          No document content is stored in Evidence Packets. Content is
          represented only by cryptographic hashes.
        </li>
        <li>
          No typed characters are captured. Keystroke timing is recorded
          without association to specific characters.
        </li>
        <li>
          No screenshots or screen recordings are taken. Visual content
          is never captured by the Attesting Environment.
        </li>
        <li>
          Behavioral data is aggregated into statistical summaries before
          inclusion in Evidence, preventing reconstruction of input
          sequences.
        </li>
      </ul>
    </section>

    <section anchor="zero-trust">
      <name>Zero Trust: Locally Generated, Independently Verifiable</name>

      <t>
        Evidence generation and verification are fully decoupled:
      </t>

      <ul>
        <li>
          Evidence is generated entirely on the Attester device with no
          network dependency during creation.
        </li>
        <li>
          Verification requires only the Evidence Packet itself; no access
          is needed to the original device, the document content, or
          external services (except for optional external anchor validation).
        </li>
        <li>
          Multiple independent Verifiers can appraise the same Evidence,
          enabling adversarial review.
        </li>
        <li>
          No trusted third party is required for basic verification.
        </li>
      </ul>
    </section>

    <section anchor="evidence-over-inference">
      <name>Evidence Over Inference</name>

      <t>
        The framework provides observable evidence, not conclusions:
      </t>

      <ul>
        <li>
          Claims are classified as chain-verifiable (provable from
          Evidence alone) or monitoring-dependent (requiring Attesting
          Environment trust).
        </li>
        <li>
          Attestation Results document what was verified, not what should
          be believed.
        </li>
        <li>
          Confidence scores and caveats accompany all assessments,
          enabling informed Relying Party decisions.
        </li>
        <li>
          The specification makes no absolute claims about authorship,
          intent, or authenticity.
        </li>
      </ul>
    </section>

    <section anchor="cost-asymmetric-forgery">
      <name>Cost-Asymmetric Forgery</name>

      <t>
        The framework makes forgery expensive relative to genuine creation:
      </t>

      <ul>
        <li>
          Verifiable Delay Functions (VDFs) establish minimum elapsed time
          that cannot be circumvented through parallelization.
        </li>
        <li>
          Captured behavioral entropy commits to timing measurements that
          cannot be regenerated without access to the original input stream.
        </li>
        <li>
          Hash chain construction ensures any modification invalidates all
          subsequent evidence.
        </li>
        <li>
          An optional forgery cost analysis quantifies attack costs,
          enabling risk assessment by Relying Parties.
        </li>
      </ul>

      <t>
        IMPORTANT: This framework does not claim to make forgery impossible.
        It raises the cost of forgery relative to honest participation and
        provides quantified bounds on attack economics.
      </t>
    </section>
  </section>

  <section anchor="terminology">
    <name>Terminology</name>

    <t>
      The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
      "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
      "OPTIONAL" in this document are to be interpreted as described in
      BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and
      only when, they appear in all capitals, as shown here.
    </t>

    <t>
      This document uses the following terms:
    </t>

    <dl>
      <dt>Evidence Packet (.pop):</dt>
      <dd>
        <t>
          The primary evidence artifact produced by the Attester, containing
          checkpoint chains, behavioral entropy bindings, VDF proofs, and
          optional attestations. The file extension ".pop" derives from
          "Proof of Process." Evidence Packets are CBOR-encoded with
          semantic tag 1347440672 (0x50505020, ASCII "PPP ").
        </t>
      </dd>

      <dt>Attestation Result (.war):</dt>
      <dd>
        <t>
          The verification certificate produced by a Verifier after
          appraising an Evidence Packet. Contains the verification verdict,
          verified claims, confidence score, and Verifier signature. The
          file extension ".war" derives from "Writers Authenticity Report."
          Attestation Results are CBOR-encoded with semantic tag 1463898656
          (0x57415220, ASCII "WAR ").
        </t>
      </dd>

      <dt>Checkpoint:</dt>
      <dd>
        <t>
          A cryptographic snapshot of document state at a point in time,
          including content hash, timing proof, and behavioral entropy
          binding. Checkpoints form a hash chain where each checkpoint
          references its predecessor.
        </t>
      </dd>

      <dt>Jitter Seal:</dt>
      <dd>
        <t>
          Captured behavioral entropy from human input events, bound to
          the checkpoint chain. Unlike injected entropy (random delays
          added by software), captured entropy commits to actual measured
          timing that existed only at the moment of observation. See
          <xref target="jitter-seal"/> for detailed specification.
        </t>
      </dd>

      <dt>Verifiable Delay Function (VDF):</dt>
      <dd>
        <t>
          A cryptographic primitive that requires sequential computation
          and cannot be parallelized, used to establish minimum elapsed
          time between checkpoints. The VDF output for each checkpoint is
          entangled with the previous checkpoint and behavioral entropy,
          preventing precomputation. See <xref target="vdf-mechanisms"/>
          for detailed specification.
        </t>
      </dd>

      <dt>Chain-verifiable Claim:</dt>
      <dd>
        <t>
          A claim that can be verified from the Evidence Packet alone,
          without trusting the Attesting Environment beyond basic data
          integrity. Examples include checkpoint chain integrity, VDF
          proof validity, and minimum elapsed time bounds.
        </t>
      </dd>

      <dt>Monitoring-dependent Claim:</dt>
      <dd>
        <t>
          A claim that requires trust in the Attesting Environment's
          accurate reporting of monitored events. Examples include
          behavioral anomaly detection and input source attribution.
          Such claims explicitly document their trust requirements.
        </t>
      </dd>

      <dt>Attesting Environment (AE):</dt>
      <dd>
        <t>
          The software and hardware environment where Evidence is generated.
          Per the RATS architecture <xref target="RFC9334"/>, the AE
          produces claims about its own state and the processes it observes.
        </t>
      </dd>

      <dt>Evidence Tier:</dt>
      <dd>
        <t>
          A classification of Evidence Packets based on which optional
          sections are present. Tiers range from Basic (checkpoint chain
          only) to Maximum (hardware attestation, external anchors, forgery
          cost analysis). See <xref target="evidence-tiers"/> for tier
          definitions.
        </t>
      </dd>
    </dl>
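As an informal illustration of the Jitter Seal concept defined above (the normative construction appears in the Jitter Seal section; the function name, field names, and choice of aggregate statistics below are hypothetical), a Python sketch that commits to measured inter-event timing while retaining only aggregates:

```python
import hashlib
import statistics

def jitter_commitment(event_times_ns: list[int]) -> tuple[bytes, dict]:
    """Commit to captured timing entropy without retaining it.

    The raw inter-event deltas are hashed into a commitment and then
    discarded; only aggregate statistics would enter the Evidence,
    preventing reconstruction of the input sequence.
    """
    deltas = [b - a for a, b in zip(event_times_ns, event_times_ns[1:])]
    raw = b"".join(d.to_bytes(8, "big") for d in deltas)
    commitment = hashlib.sha256(raw).digest()
    summary = {
        "events": len(event_times_ns),
        "mean_delta_ns": int(statistics.mean(deltas)),
        "stdev_delta_ns": int(statistics.pstdev(deltas)),
    }
    return commitment, summary
```

Because the commitment is over timing that existed only at the moment of observation, regenerating it later requires access to the original input stream, which is the cost asymmetry the framework relies on.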
  </section>

  <section anchor="document-structure">
    <name>Document Structure</name>

    <t>
      This specification is organized as follows:
    </t>

    <ul>
      <li>
        <xref target="evidence-model"/>: Defines the overall architecture,
        RATS role mappings, and the two complementary formats (Evidence
        Packets and Attestation Results).
      </li>
      <li>
        <xref target="jitter-seal"/>: Specifies the Jitter Seal mechanism
        for capturing and binding behavioral entropy to the checkpoint
        chain.
      </li>
      <li>
        <xref target="vdf-mechanisms"/>: Specifies Verifiable Delay
        Function constructions for temporal ordering and minimum elapsed
        time proofs.
      </li>
      <li>
        <xref target="absence-proofs"/>: Defines the taxonomy of absence
        claims and the trust requirements for each claim type.
      </li>
      <li>
        <xref target="forgery-cost-bounds"/>: Specifies the methodology
        for quantifying forgery costs and attack economics.
      </li>
      <li>
        <xref target="security-considerations"/>: Analyzes security
        properties, threat models, and attack mitigations.
      </li>
      <li>
        <xref target="privacy-considerations"/>: Documents privacy
        properties and data handling requirements.
      </li>
      <li>
        <xref target="iana-considerations"/>: Requests IANA registrations
        for CBOR tags, EAT claims, and media types.
      </li>
    </ul>

    <t>
      Appendices provide CDDL schemas, test vectors, and implementation
      guidance.
    </t>
  </section>

  <section anchor="conventions">
    <name>Conventions and Definitions</name>

    <section anchor="cddl-notation">
      <name>CDDL Notation</name>

      <t>
        Data structures in this document are specified using the Concise
        Data Definition Language (CDDL) <xref target="RFC8610"/>. CDDL
        provides a notation for expressing CBOR and JSON data structures.
      </t>

      <t>
        The normative CDDL definitions appear inline in the relevant
        sections.
      </t>
    </section>

    <section anchor="intro-cbor-encoding">
      <name>CBOR Encoding</name>

      <t>
        Both Evidence Packets and Attestation Results use CBOR (Concise
        Binary Object Representation) encoding per <xref target="RFC8949"/>.
        CBOR provides efficient binary encoding with support for semantic
        tags and extensibility.
      </t>

      <t>
        This specification uses:
      </t>

      <ul>
        <li>
          Semantic tags for type identification (Evidence Packet: 1347440672,
          Attestation Result: 1463898656)
        </li>
        <li>
          Integer keys (1-99) for core protocol fields to minimize encoding
          size
        </li>
        <li>
          String keys for vendor extensions and application-specific fields
        </li>
        <li>
          Deterministic encoding (Section 4.2 of RFC 8949), RECOMMENDED
          for signature verification
        </li>
      </ul>
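As a concrete illustration of the tag encoding, this stdlib-only Python sketch shows how a 32-bit semantic tag appears as the leading bytes of a file, per the CBOR rule that major type 6 with a 4-byte argument uses initial byte 0xDA. The constants are the hexadecimal tag values quoted in this section (0x50505020 is decimal 1347440672; 0x57415220 is decimal 1463898656).

```python
import struct

# CBOR encodes a semantic tag (major type 6) with a 4-byte argument
# as the initial byte 0xDA followed by the tag in network byte order.
POP_TAG = 0x50505020  # ASCII "PPP " (decimal 1347440672)
WAR_TAG = 0x57415220  # ASCII "WAR " (decimal 1463898656)

def tag_header(tag: int) -> bytes:
    """Return the five CBOR head bytes for a 32-bit semantic tag."""
    return b"\xda" + struct.pack(">I", tag)

# The leading bytes of a .pop file spell the mnemonic in a hex dump:
# tag_header(POP_TAG) == b"\xdaPPP "
```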
    </section>

    <section anchor="cose-signatures">
      <name>COSE Signatures</name>

      <t>
        Cryptographic signatures use CBOR Object Signing and Encryption
        (COSE) <xref target="RFC9052"/>, with algorithm identifiers drawn
        from the COSE registry <xref target="IANA.cose"/>. The COSE_Sign1
        structure provides single-signer signatures suitable for Evidence
        and Attestation Result authentication.
      </t>

      <t>
        Recommended algorithms:
      </t>

      <ul>
        <li>EdDSA with Ed25519 (RECOMMENDED for new implementations)</li>
        <li>ECDSA with P-256 (for compatibility with existing PKI)</li>
      </ul>
    </section>

    <section anchor="eat-tokens">
      <name>EAT Tokens</name>

      <t>
        This specification defines an Entity Attestation Token (EAT)
        profile per <xref target="RFC9711"/>. EAT provides a framework
        for attestation claims with support for custom claim types.
      </t>

      <t>
        The EAT profile URI for Proof of Process evidence is:
      </t>

      <artwork><![CDATA[
https://example.com/rats/eat/profile/pop/1.0
]]></artwork>

      <t>
        Custom EAT claims proposed for IANA registration are defined in
        <xref target="iana-considerations"/>.
      </t>
    </section>

    <section anchor="hash-notation">
      <name>Hash Function Notation</name>

      <t>
        This document uses the following notation for cryptographic hash
        functions:
      </t>

      <ul>
        <li>H(x): SHA-256 hash of input x (default)</li>
        <li>H^n(x): n iterations of hash function H</li>
        <li>HMAC(k, m): HMAC-SHA256 with key k and message m</li>
      </ul>

      <t>
        SHA-256 is the RECOMMENDED hash algorithm. Implementations MAY
        support SHA3-256 for algorithm agility.
      </t>
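The notation maps directly onto standard library primitives; a minimal Python rendering (illustrative only):

```python
import hashlib
import hmac as hmac_mod

def H(x: bytes) -> bytes:
    """H(x): SHA-256 hash of input x."""
    return hashlib.sha256(x).digest()

def H_n(x: bytes, n: int) -> bytes:
    """H^n(x): n sequential applications of H."""
    for _ in range(n):
        x = H(x)
    return x

def HMAC(k: bytes, m: bytes) -> bytes:
    """HMAC(k, m): HMAC-SHA256 with key k over message m."""
    return hmac_mod.new(k, m, hashlib.sha256).digest()

# H^2(x) denotes composition, i.e. H applied to its own output:
assert H_n(b"abc", 2) == H(H(b"abc"))
```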
    </section>
  </section>

</section>

    <!-- Section 2: Evidence Model -->
    <section anchor="evidence-model" xml:base="sections/evidence-model.xml">
  <name>Evidence Model</name>

  <t>
    This section defines the top-level architecture of the witnessd
    Proof of Process evidence model. The design follows the RATS
    (Remote ATtestation procedureS) architecture <xref target="RFC9334"/>
    while introducing domain-specific extensions for behavioral
    evidence and process documentation.
  </t>

  <section anchor="rats-architecture-mapping">
    <name>RATS Architecture Mapping</name>

    <t>
      This specification implements a RATS <xref target="RFC9334"/>
      profile with these role mappings: the witnessd-core library
      acts as Attester, producing Evidence Packets (.pop files);
      verification implementations act as Verifiers, producing
      Attestation Results (.war files); and consuming entities
      (institutions, publishers) act as Relying Parties.
    </t>

    <t>
      Key properties: Evidence is generated locally on the Attester
      device without network dependency; verification requires only
      the Evidence packet; Evidence contains cryptographic hashes,
      not content; behavioral signals are aggregated into statistical
      summaries.
    </t>
  </section>

  <section anchor="two-formats">
    <name>Two Complementary Formats</name>

    <t>
      The witnessd protocol defines two file formats that serve
      distinct roles in the attestation workflow:
    </t>

    <section anchor="pop-format">
      <name>Evidence Packet (.pop)</name>

      <t>
        The .pop (Proof of Process) file is the primary Evidence
        artifact produced by the Attester. It contains all
        cryptographic proofs and behavioral evidence accumulated
        during document authorship.
      </t>

      <t>
        CBOR encoding with semantic tag:
      </t>

      <artwork><![CDATA[
Tag: 1347440672 (0x50505020, ASCII "PPP ")
Type: tagged-evidence-packet
]]></artwork>

      <t>
        The Evidence packet is the authoritative record of the
        authoring process. It may be:
      </t>

      <ul>
        <li>
          Submitted to a Verifier for appraisal
        </li>
        <li>
          Archived alongside the document for future verification
        </li>
        <li>
          Shared with Relying Parties who perform their own verification
        </li>
      </ul>

      <t>
        The .pop file is typically larger than the .war file because
        it contains complete checkpoint chains, VDF proofs, and
        behavioral evidence.
      </t>
    </section>

    <section anchor="war-format">
      <name>Attestation Result (.war)</name>

      <t>
        The .war (Writers Authenticity Report) file is the Attestation
        Result produced by a Verifier after appraising an Evidence
        packet. It serves as a portable verification certificate.
      </t>

      <t>
        CBOR encoding with semantic tag:
      </t>

      <artwork><![CDATA[
Tag: 1463898656 (0x57415220, ASCII "WAR ")
Type: tagged-attestation-result
]]></artwork>

      <t>
        The Attestation Result is designed for distribution alongside
        published documents. It provides:
      </t>

      <ul>
        <li>
          A signed verdict from a trusted Verifier
        </li>
        <li>
          Summary of verified claims without full evidence
        </li>
        <li>
          Confidence score for Relying Party decision-making
        </li>
        <li>
          Caveats documenting verification limitations
        </li>
      </ul>

      <t>
        Relying Parties may trust the .war file based on the
        Verifier's reputation, or they may request the original
        .pop file for independent verification.
      </t>
    </section>

    <section anchor="format-relationship">
      <name>Format Relationship</name>

      <t>
        The two formats are linked by the reference-packet-id field
        in the Attestation Result, which matches the packet-id of
        the appraised Evidence packet:
      </t>

      <artwork><![CDATA[
Evidence Packet (.pop)          Attestation Result (.war)
+-------------------+           +------------------------+
| packet-id: UUID   |<----------| reference-packet-id:   |
|                   |           |   UUID (same value)    |
+-------------------+           +------------------------+
]]></artwork>

      <t>
        This binding ensures that each Attestation Result is
        unambiguously tied to a specific Evidence packet.
      </t>
    </section>
  </section>

  <section anchor="evidence-packet-structure">
    <name>Evidence Packet Structure</name>

    <t>
      The evidence-packet structure contains the complete attestation
      evidence produced by the Attester. The normative CDDL definition
      is provided in the schema appendix; this section describes the
      semantic meaning of each component.
    </t>

    <artwork type="cddl"><![CDATA[
evidence-packet = {
    1 => uint,                      ; version
    2 => tstr,                      ; profile (EAT profile URI)
    3 => uuid,                      ; packet-id
    4 => pop-timestamp,             ; created
    5 => document-ref,              ; document
    6 => [+ checkpoint],            ; checkpoints

    ; Tiered optional sections
    ? 10 => presence-section,       ; presence
    ? 11 => forensics-section,      ; forensics
    ? 12 => keystroke-section,      ; keystroke
    ? 13 => hardware-section,       ; hardware
    ? 14 => external-section,       ; external
    ? 15 => absence-section,        ; absence
    ? 16 => forgery-cost-section,   ; forgery-cost
    ? 17 => declaration,            ; declaration

    * tstr => any,                  ; extensions
}
]]></artwork>

    <section anchor="required-fields">
      <name>Required Fields</name>

      <dl>
        <dt>version (key 1):</dt>
        <dd>
          Schema version number. Currently 1. Implementations MUST
          reject packets with unrecognized major versions.
        </dd>

        <dt>profile (key 2):</dt>
        <dd>
          <t>
            EAT profile URI identifying this specification:
          </t>
          <artwork><![CDATA[
https://example.com/rats/eat/profile/pop/1.0
]]></artwork>
          <t>
            IANA registration will be requested upon working group adoption.
          </t>
        </dd>

        <dt>packet-id (key 3):</dt>
        <dd>
          UUID (<xref target="RFC9562"/>) uniquely identifying this Evidence packet.
          Generated by the Attester at packet creation time.
        </dd>

        <dt>created (key 4):</dt>
        <dd>
          Timestamp when this packet was finalized. CBOR tag 1
          (epoch-based date/time).
        </dd>

        <dt>document (key 5):</dt>
        <dd>
          Reference to the documented artifact. See
          <xref target="document-binding"/>.
        </dd>

        <dt>checkpoints (key 6):</dt>
        <dd>
          Ordered array of checkpoint structures forming the
          evidence chain. See <xref target="checkpoint-chain"/>.
        </dd>
      </dl>
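A minimal sketch of assembling the required fields as an integer-keyed map, prior to CBOR encoding (Python; the constant names and example values are illustrative, not normative):

```python
import time
import uuid

# Integer keys 1-6 from the evidence-packet CDDL.
VERSION, PROFILE, PACKET_ID, CREATED, DOCUMENT, CHECKPOINTS = 1, 2, 3, 4, 5, 6

def minimal_packet(document_ref: dict, checkpoints: list) -> dict:
    """Build the required portion of an evidence-packet map."""
    if not checkpoints:
        # The CDDL requires a non-empty checkpoint array: [+ checkpoint]
        raise ValueError("at least one checkpoint is required")
    return {
        VERSION: 1,
        PROFILE: "https://example.com/rats/eat/profile/pop/1.0",
        PACKET_ID: uuid.uuid4().bytes,   # 16-byte UUID (RFC 9562)
        CREATED: int(time.time()),       # epoch seconds (CBOR tag 1)
        DOCUMENT: document_ref,
        CHECKPOINTS: checkpoints,
    }
```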
    </section>

    <section anchor="tiered-sections">
      <name>Tiered Optional Sections</name>

      <t>
        The optional sections (keys 10-17) correspond to evidence
        tiers. Their presence determines the evidence strength:
      </t>

      <table>
        <thead>
          <tr>
            <th>Key</th>
            <th>Section</th>
            <th>Tier Contribution</th>
            <th>Reference</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>10</td>
            <td>presence-section</td>
            <td>Standard (+)</td>
            <td>-</td>
          </tr>
          <tr>
            <td>11</td>
            <td>forensics-section</td>
            <td>Enhanced (+)</td>
            <td>-</td>
          </tr>
          <tr>
            <td>12</td>
            <td>keystroke-section</td>
            <td>Enhanced (+)</td>
            <td><xref target="jitter-seal"/></td>
          </tr>
          <tr>
            <td>13</td>
            <td>hardware-section</td>
            <td>Maximum (+)</td>
            <td>-</td>
          </tr>
          <tr>
            <td>14</td>
            <td>external-section</td>
            <td>Maximum (+)</td>
            <td>-</td>
          </tr>
          <tr>
            <td>15</td>
            <td>absence-section</td>
            <td>Maximum (+)</td>
            <td><xref target="absence-proofs"/></td>
          </tr>
          <tr>
            <td>16</td>
            <td>forgery-cost-section</td>
            <td>Maximum (+)</td>
            <td><xref target="forgery-cost-bounds"/></td>
          </tr>
          <tr>
            <td>17</td>
            <td>declaration</td>
            <td>All tiers</td>
            <td>-</td>
          </tr>
        </tbody>
      </table>
    </section>

    <section anchor="extensibility">
      <name>Extensibility</name>

      <t>
        The evidence-packet structure supports forward-compatible
        extensions through string-keyed fields:
      </t>

      <ul>
        <li>
          Integer keys 1-99 are reserved for this specification
        </li>
        <li>
          String keys MAY be used for vendor or application-specific
          extensions
        </li>
        <li>
          Verifiers MUST ignore unrecognized string-keyed fields
        </li>
        <li>
          Verifiers MUST reject packets containing unrecognized
          integer keys in the reserved range
        </li>
      </ul>
    </section>
  </section>

  <section anchor="checkpoint-chain">
    <name>Checkpoint Chain</name>

    <t>
      The checkpoint chain is the core evidentiary structure. Each
      checkpoint represents a witnessed document state, cryptographically
      linked to its predecessor.
    </t>

    <section anchor="checkpoint-structure">
      <name>Checkpoint Structure</name>

      <artwork type="cddl"><![CDATA[
checkpoint = {
    1 => uint,                      ; sequence
    2 => uuid,                      ; checkpoint-id
    3 => pop-timestamp,             ; timestamp
    4 => hash-value,                ; content-hash
    5 => uint,                      ; char-count
    6 => uint,                      ; word-count
    7 => edit-delta,                ; delta
    8 => hash-value,                ; prev-hash
    9 => hash-value,                ; checkpoint-hash
    10 => vdf-proof,                ; vdf-proof
    11 => jitter-binding,           ; jitter-binding
    12 => bstr .size 32,            ; chain-mac

    * tstr => any,                  ; extensions
}
]]></artwork>

      <dl>
        <dt>sequence (key 1):</dt>
        <dd>
          Zero-indexed ordinal position in the checkpoint chain.
          Sequence values MUST be strictly increasing across the chain.
        </dd>

        <dt>checkpoint-id (key 2):</dt>
        <dd>
          UUID uniquely identifying this checkpoint within the packet.
        </dd>

        <dt>timestamp (key 3):</dt>
        <dd>
          Local timestamp when the checkpoint was created. Note that
          local timestamps are untrusted; temporal ordering is
          established by VDF causality.
        </dd>

        <dt>content-hash (key 4):</dt>
        <dd>
          Cryptographic hash of the document content at this checkpoint.
          SHA-256 RECOMMENDED.
        </dd>

        <dt>char-count (key 5), word-count (key 6):</dt>
        <dd>
          Document statistics at this checkpoint. Informational only;
          not cryptographically bound.
        </dd>

        <dt>delta (key 7):</dt>
        <dd>
          Edit delta since previous checkpoint. Contains character
          counts for additions, deletions, and edit operations.
          No content is included.
        </dd>

        <dt>prev-hash (key 8):</dt>
        <dd>
          Hash of the previous checkpoint (checkpoint-hash{N-1}).
          For the genesis checkpoint (sequence = 0), this MUST be
          32 zero bytes.
        </dd>

        <dt>checkpoint-hash (key 9):</dt>
        <dd>
          Binding hash computed over all checkpoint fields, creating
          the hash chain.
        </dd>

        <dt>vdf-proof (key 10):</dt>
        <dd>
          Verifiable Delay Function proof establishing minimum elapsed
          time. See <xref target="vdf-mechanisms"/>.
        </dd>

        <dt>jitter-binding (key 11):</dt>
        <dd>
          Captured behavioral entropy binding. See
          <xref target="jitter-seal"/>.
        </dd>

        <dt>chain-mac (key 12):</dt>
        <dd>
          HMAC-SHA256 binding the checkpoint to the chain key,
          preventing transplantation of checkpoints between sessions.
        </dd>
      </dl>
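The chain-mac construction can be illustrated with a short non-normative sketch; the exact MAC input (here, the checkpoint-hash alone) and the chain-key provisioning are assumptions of this sketch:

```python
import hashlib
import hmac

def chain_mac(chain_key: bytes, checkpoint_hash: bytes) -> bytes:
    """HMAC-SHA256 keyed with the per-session chain key. A checkpoint
    transplanted into another session fails verification because the
    other session's chain key differs."""
    return hmac.new(chain_key, checkpoint_hash, hashlib.sha256).digest()
```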
    </section>

    <section anchor="hash-chain-construction">
      <name>Hash Chain Construction</name>

      <t>
        The checkpoint chain forms a cryptographic hash chain through
        the prev-hash linkage:
      </t>

      <artwork><![CDATA[
+---------------+     +---------------+     +---------------+
| Checkpoint 0  |     | Checkpoint 1  |     | Checkpoint 2  |
|---------------|     |---------------|     |---------------|
| prev-hash:    |<----| prev-hash:    |<----| prev-hash:    |
|   (32 zeros)  |  H  |   H(CP_0)     |  H  |   H(CP_1)     |
| checkpoint-   |---->| checkpoint-   |---->| checkpoint-   |
|   hash: H_0   |     |   hash: H_1   |     |   hash: H_2   |
+---------------+     +---------------+     +---------------+
]]></artwork>

      <t>
        The checkpoint-hash is computed as:
      </t>

      <artwork><![CDATA[
checkpoint-hash = H(
    "witnessd-checkpoint-v1" ||
    sequence ||
    checkpoint-id ||
    timestamp ||
    content-hash ||
    char-count ||
    word-count ||
    CBOR(delta) ||
    prev-hash ||
    CBOR(vdf-proof) ||
    CBOR(jitter-binding)
)
]]></artwork>

      <t>
        This construction ensures that any modification to any field
        in any checkpoint invalidates all subsequent checkpoint hashes,
        providing tamper-evidence for the entire chain.
      </t>
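The chain walk can be sketched in Python as a non-normative illustration; the integer serialization widths and the SHA-256 choice below are assumptions of the sketch, not requirements of this specification:

```python
import hashlib

DOMAIN = b"witnessd-checkpoint-v1"
GENESIS_PREV = bytes(32)  # 32 zero bytes for the genesis checkpoint

def checkpoint_hash(cp: dict, prev_hash: bytes) -> bytes:
    """Illustrative checkpoint-hash: domain tag, then fields in spec
    order. Integer widths here are assumptions, not normative."""
    h = hashlib.sha256()
    h.update(DOMAIN)
    h.update(cp["sequence"].to_bytes(8, "big"))
    h.update(cp["checkpoint_id"])          # 16-byte UUID
    h.update(cp["timestamp"].to_bytes(8, "big"))
    h.update(cp["content_hash"])
    h.update(cp["char_count"].to_bytes(8, "big"))
    h.update(cp["word_count"].to_bytes(8, "big"))
    h.update(cp["delta_cbor"])             # CBOR(delta), precomputed
    h.update(prev_hash)
    h.update(cp["vdf_cbor"])               # CBOR(vdf-proof)
    h.update(cp["jitter_cbor"])            # CBOR(jitter-binding)
    return h.digest()

def verify_chain(checkpoints: list) -> bool:
    """Walk the chain: each prev-hash must equal the predecessor's
    checkpoint-hash, and each stored hash must recompute correctly."""
    prev = GENESIS_PREV
    for cp in checkpoints:
        if cp["prev_hash"] != prev:
            return False
        if checkpoint_hash(cp, prev) != cp["checkpoint_hash"]:
            return False
        prev = cp["checkpoint_hash"]
    return True
```

Modifying any field of any checkpoint causes the recomputed hash to diverge from the stored one, so verification fails from that point onward.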
    </section>

    <section anchor="mmr-compatibility">
      <name>MMR Compatibility</name>

      <t>
        The checkpoint chain is designed for compatibility with
        Merkle Mountain Range (<xref target="MMR"/>) structures,
        enabling efficient inclusion proofs for subsets of checkpoints:
      </t>

      <ul>
        <li>
          Each checkpoint-hash serves as a leaf in the MMR
        </li>
        <li>
          Verifiers can request inclusion proofs for specific checkpoints
          without receiving the entire chain
        </li>
        <li>
          External anchors can commit to MMR roots, enabling efficient
          batch timestamping
        </li>
      </ul>

      <t>
        MMR proofs are optional and MAY be included in the
        external-section when third-party verification services
        provide them.
      </t>
    </section>
  </section>

  <section anchor="document-binding">
    <name>Document Binding</name>

    <t>
      The document-ref structure binds the Evidence packet to a
      specific document without including the document content.
    </t>

    <artwork type="cddl"><![CDATA[
document-ref = {
    1 => hash-value,                ; content-hash
    ? 2 => tstr,                    ; filename (optional)
    3 => uint,                      ; byte-length
    4 => uint,                      ; char-count
    ? 5 => hash-salt-mode,          ; salt mode
    ? 6 => bstr,                    ; salt-commitment
}
]]></artwork>

    <section anchor="content-hash-binding">
      <name>Content Hash Binding</name>

      <t>
        The content-hash (key 1) is the cryptographic hash of the
        final document state. This is the same value as the
        content-hash in the final checkpoint.
      </t>

      <t>
        To verify document binding:
      </t>

      <ol>
        <li>
          Compute H(document-content)
        </li>
        <li>
          Compare with document-ref.content-hash
        </li>
        <li>
          Compare with checkpoints{-1}.content-hash
        </li>
        <li>
          All three values MUST match
        </li>
      </ol>
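The steps above can be sketched as follows, assuming unsalted mode and SHA-256:

```python
import hashlib

def verify_document_binding(document: bytes,
                            ref_content_hash: bytes,
                            final_checkpoint_hash: bytes) -> bool:
    """Compute H(content) and require all three values to be
    identical: the computed hash, document-ref.content-hash, and
    the final checkpoint's content-hash."""
    computed = hashlib.sha256(document).digest()
    return computed == ref_content_hash == final_checkpoint_hash
```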
    </section>

    <section anchor="salt-modes">
      <name>Salt Modes for Privacy</name>

      <t>
        The hash-salt-mode field controls how the content hash is
        computed, enabling privacy-preserving verification scenarios:
      </t>

      <table>
        <thead>
          <tr>
            <th>Value</th>
            <th>Mode</th>
            <th>Hash Computation</th>
            <th>Verification</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>0</td>
            <td>unsalted</td>
            <td>H(content)</td>
            <td>Anyone with document can verify</td>
          </tr>
          <tr>
            <td>1</td>
            <td>author-salted</td>
            <td>H(salt || content)</td>
            <td>Author reveals salt to chosen verifiers</td>
          </tr>
          <tr>
            <td>2</td>
            <td>third-party-escrowed</td>
            <td>H(salt || content)</td>
            <td>Escrow releases salt under conditions</td>
          </tr>
        </tbody>
      </table>

      <t>
        When salted modes are used:
      </t>

      <ul>
        <li>
          The salt-commitment field contains H(salt), not the salt itself
        </li>
        <li>
          The author or escrow provides the salt out-of-band for verification
        </li>
        <li>
          Verifiers confirm H(provided-salt) matches salt-commitment
          before using it
        </li>
      </ul>
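For the author-salted mode, verification can be sketched as follows (SHA-256 is assumed here for both the salt commitment and the content hash):

```python
import hashlib

def verify_salted_binding(document: bytes, provided_salt: bytes,
                          salt_commitment: bytes,
                          content_hash: bytes) -> bool:
    """Confirm H(provided-salt) matches the salt-commitment, then
    recompute H(salt || content) and compare with the bound hash."""
    if hashlib.sha256(provided_salt).digest() != salt_commitment:
        return False  # wrong salt: reject before using it
    return hashlib.sha256(provided_salt + document).digest() == content_hash
```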

      <t>
        Salted modes enable scenarios where the document binding should
        not be globally verifiable (e.g., unpublished manuscripts,
        confidential documents).
      </t>
    </section>
  </section>

  <section anchor="evidence-tiers">
    <name>Evidence Tiers</name>

    <t>
      Evidence packets are classified into tiers based on which
      optional sections are present. Higher tiers provide stronger
      evidence at the cost of more extensive data collection and
      larger packet sizes.
    </t>

    <section anchor="tier-basic">
      <name>Basic Tier</name>

      <t>
        The Basic tier contains only the required checkpoint chain
        and document reference:
      </t>

      <ul>
        <li>Checkpoints with VDF proofs and jitter bindings</li>
        <li>Document reference with content hash</li>
        <li>Optional declaration (author attestation)</li>
      </ul>

      <t>
        Basic tier evidence proves:
      </t>

      <ul>
        <li>
          A sequence of document states was created over time
        </li>
        <li>
          Minimum elapsed time between states (via VDF)
        </li>
        <li>
          Behavioral entropy was captured at each checkpoint
        </li>
      </ul>

      <t>
        Basic tier is suitable for low-stakes documentation, personal
        records, or contexts where the author's attestation is generally
        trusted.
      </t>
    </section>

    <section anchor="tier-standard">
      <name>Standard Tier</name>

      <t>
        The Standard tier adds presence verification to the Basic tier:
      </t>

      <ul>
        <li>All Basic tier components</li>
        <li>presence-section: Human presence challenges and responses</li>
      </ul>

      <t>
        Standard tier evidence additionally proves:
      </t>

      <ul>
        <li>
          A human responded to interactive challenges during authorship
        </li>
        <li>
          Response timing is consistent with human reaction times
        </li>
      </ul>

      <t>
        Standard tier is suitable for academic submissions, professional
        reports, and contexts requiring reasonable assurance of human
        involvement.
      </t>
    </section>

    <section anchor="tier-enhanced">
      <name>Enhanced Tier</name>

      <t>
        The Enhanced tier adds forensic analysis to the Standard tier:
      </t>

      <ul>
        <li>All Standard tier components</li>
        <li>forensics-section: Edit topology, AI indicators, metrics</li>
        <li>keystroke-section: Detailed jitter samples</li>
      </ul>

      <t>
        Enhanced tier evidence additionally provides:
      </t>

      <ul>
        <li>
          Statistical analysis of editing patterns
        </li>
        <li>
          Edit topology showing where changes occurred (not what)
        </li>
        <li>
          AI indicator scores for forensic assessment
        </li>
        <li>
          Detailed keystroke timing for entropy verification
        </li>
      </ul>

      <t>
        Enhanced tier is suitable for legal documents, regulatory
        compliance, and high-value intellectual property.
      </t>
    </section>

    <section anchor="tier-maximum">
      <name>Maximum Tier</name>

      <t>
        The Maximum tier adds hardware binding, external anchors,
        absence proofs, and forgery cost analysis:
      </t>

      <ul>
        <li>All Enhanced tier components</li>
        <li>hardware-section: TPM/Secure Enclave attestation</li>
        <li>external-section: RFC 3161, blockchain timestamps</li>
        <li>absence-section: Negative evidence claims</li>
        <li>forgery-cost-section: Economic attack cost analysis</li>
      </ul>

      <t>
        Maximum tier evidence additionally provides:
      </t>

      <ul>
        <li>
          Hardware binding proving evidence was created on a specific device
        </li>
        <li>
          Third-party timestamps establishing absolute time
        </li>
        <li>
          Absence proofs for bounded claims (e.g., max paste size)
        </li>
        <li>
          Quantified forgery cost bounds for risk assessment
        </li>
      </ul>

      <t>
        Maximum tier is suitable for litigation support, forensic
        investigation, and contexts requiring the strongest available
        evidence.
      </t>
    </section>

    <section anchor="tier-selection">
      <name>Tier Selection Guidance</name>

      <t>
        The appropriate tier depends on the trust requirements and
        privacy constraints of the use case:
      </t>

      <table>
        <thead>
          <tr>
            <th>Tier</th>
            <th>Value</th>
            <th>Typical Use Cases</th>
            <th>Privacy Impact</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>Basic</td>
            <td>1</td>
            <td>Personal notes, internal docs</td>
            <td>Minimal</td>
          </tr>
          <tr>
            <td>Standard</td>
            <td>2</td>
            <td>Academic, professional</td>
            <td>Low</td>
          </tr>
          <tr>
            <td>Enhanced</td>
            <td>3</td>
            <td>Legal, regulatory</td>
            <td>Moderate</td>
          </tr>
          <tr>
            <td>Maximum</td>
            <td>4</td>
            <td>Litigation, forensic</td>
            <td>Higher</td>
          </tr>
        </tbody>
      </table>

      <t>
        Authors SHOULD select the minimum tier that meets their
        verification requirements. Higher tiers collect more behavioral
        data and create larger Evidence packets.
      </t>
    </section>
  </section>

  <section anchor="attestation-result-structure">
    <name>Attestation Result Structure</name>

    <t>
      The attestation-result structure contains the Verifier's
      assessment of an Evidence packet. It implements a witnessd-specific
      profile of the EAT Attestation Result (EAR) format defined in
      <xref target="I-D.ietf-rats-ear"/>.
    </t>

    <artwork type="cddl"><![CDATA[
attestation-result = {
    1 => uint,                      ; version
    2 => uuid,                      ; reference-packet-id
    3 => pop-timestamp,             ; verified-at
    4 => forensic-assessment,       ; verdict
    5 => float32,                   ; confidence-score
    6 => [+ result-claim],          ; verified-claims
    7 => cose-signature,            ; verifier-signature
    ? 8 => tstr,                    ; verifier-identity
    ? 9 => verifier-metadata,       ; additional info
    ? 10 => [+ tstr],               ; caveats
    * tstr => any,                  ; extensions
}
]]></artwork>

    <section anchor="verdict-field">
      <name>Verdict Field</name>

      <t>
        The verdict (key 4) is the Verifier's overall forensic
        assessment using the forensic-assessment enumeration:
      </t>

      <table>
        <thead>
          <tr>
            <th>Value</th>
            <th>Assessment</th>
            <th>Meaning</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>0</td>
            <td>not-assessed</td>
            <td>Verification incomplete or not attempted</td>
          </tr>
          <tr>
            <td>1</td>
            <td>strongly-human</td>
            <td>Evidence strongly indicates human authorship</td>
          </tr>
          <tr>
            <td>2</td>
            <td>likely-human</td>
            <td>Evidence consistent with human authorship</td>
          </tr>
          <tr>
            <td>3</td>
            <td>inconclusive</td>
            <td>Evidence neither confirms nor refutes claims</td>
          </tr>
          <tr>
            <td>4</td>
            <td>likely-ai-assisted</td>
            <td>Evidence suggests AI assistance</td>
          </tr>
          <tr>
            <td>5</td>
            <td>strongly-ai-generated</td>
            <td>Evidence strongly indicates AI generation</td>
          </tr>
        </tbody>
      </table>

      <t>
        IMPORTANT: The verdict describes the behavioral evidence, not
        a claim about the author's intent or the cognitive origin of
        ideas. It reflects observable patterns in the documented
        process.
      </t>
    </section>

    <section anchor="confidence-score">
      <name>Confidence Score</name>

      <t>
        The confidence-score (key 5) is a floating-point value in the
        range [0.0, 1.0] representing the Verifier's confidence in
        the verdict:
      </t>

      <ul>
        <li>[0.0, 0.3): Low confidence (limited evidence)</li>
        <li>[0.3, 0.7): Moderate confidence (typical evidence)</li>
        <li>[0.7, 1.0]: High confidence (strong evidence)</li>
      </ul>

      <t>
        The confidence score incorporates:
      </t>

      <ul>
        <li>Evidence tier (higher tiers increase confidence ceiling)</li>
        <li>Checkpoint chain completeness</li>
        <li>Entropy sufficiency in jitter bindings</li>
        <li>VDF calibration attestation presence</li>
        <li>External anchor confirmations</li>
      </ul>
    </section>

    <section anchor="verified-claims">
      <name>Verified Claims</name>

      <t>
        The verified-claims array (key 6) contains individual claim
        verification results:
      </t>

      <artwork type="cddl"><![CDATA[
result-claim = {
    1 => uint,                      ; claim-type
    2 => bool,                      ; verified
    ? 3 => tstr,                    ; detail
    ? 4 => confidence-level,        ; claim-confidence
}
]]></artwork>

      <t>
        The claim-type values correspond to the absence-claim-type
        enumeration, enabling direct mapping between Evidence claims
        and Attestation Result verification outcomes.
      </t>
    </section>

    <section anchor="verifier-signature">
      <name>Verifier Signature</name>

      <t>
        The verifier-signature (key 7) is a COSE_Sign1 signature
        over the Attestation Result payload (fields 1-6 plus any
        optional fields 8-10). This signature:
      </t>

      <ul>
        <li>
          Authenticates the Verifier identity
        </li>
        <li>
          Ensures integrity of the Attestation Result
        </li>
        <li>
          Enables Relying Parties to verify the result came from
          a trusted Verifier
        </li>
      </ul>
    </section>

    <section anchor="caveats">
      <name>Caveats</name>

      <t>
        The caveats array (key 10) documents limitations and warnings
        that Relying Parties should consider:
      </t>

      <ul>
        <li>
          "No hardware attestation available"
        </li>
        <li>
          "External anchors pending confirmation"
        </li>
        <li>
          "Jitter entropy below recommended threshold"
        </li>
        <li>
          "Author declares AI tool usage"
        </li>
      </ul>

      <t>
        Verifiers MUST include appropriate caveats when the Evidence
        has known limitations. Relying Parties SHOULD review caveats
        before making trust decisions.
      </t>
    </section>
  </section>

  <section anchor="cbor-encoding">
    <name>CBOR Encoding</name>

    <t>
      Both Evidence packets and Attestation Results use CBOR
      (Concise Binary Object Representation) encoding per RFC 8949.
    </t>

    <section anchor="cbor-tags">
      <name>Semantic Tags</name>

      <t>
        Top-level structures use semantic tags for type identification:
      </t>

      <table>
        <thead>
          <tr>
            <th>Tag</th>
            <th>Hex</th>
            <th>ASCII</th>
            <th>Structure</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>1347440672</td>
            <td>0x50505020</td>
            <td>"PPP "</td>
            <td>tagged-evidence-packet</td>
          </tr>
          <tr>
            <td>1463898656</td>
            <td>0x57415220</td>
            <td>"WAR "</td>
            <td>tagged-attestation-result</td>
          </tr>
        </tbody>
      </table>

      <t>
        These tags enable format detection without external metadata.
        Parsers can identify the packet type by examining the leading
        tag value.
      </t>
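For these tag values, minimal-length CBOR encoding carries the tag in a 4-byte argument (initial byte 0xDA), so format detection can peek at the first five bytes. A non-normative sketch:

```python
EVIDENCE_TAG = 0x50505020      # "PPP "
RESULT_TAG   = 0x57415220      # "WAR "

def detect_packet_type(data: bytes) -> str:
    """Identify the packet type from the leading CBOR tag (major
    type 6 with a 4-byte argument) without decoding the payload."""
    if len(data) < 5 or data[0] != 0xDA:
        return "unknown"
    tag = int.from_bytes(data[1:5], "big")
    if tag == EVIDENCE_TAG:
        return "evidence-packet"
    if tag == RESULT_TAG:
        return "attestation-result"
    return "unknown"
```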
    </section>

    <section anchor="key-encoding">
      <name>Key Encoding Strategy</name>

      <t>
        The schema uses a dual key encoding strategy for efficiency
        and extensibility:
      </t>

      <dl>
        <dt>Integer Keys (1-99):</dt>
        <dd>
          Reserved for core protocol fields defined in this specification.
          Provides compact encoding and enables efficient parsing.
        </dd>

        <dt>String Keys:</dt>
        <dd>
          Used for vendor extensions, application-specific fields, and
          future protocol extensions before standardization. Provides
          self-describing field names at the cost of encoding size.
        </dd>
      </dl>

      <t>
        Example size comparison for a field named "forensics":
      </t>

      <artwork><![CDATA[
Integer key (11):           1 byte  (0x0B)
String key ("forensics"):  10 bytes (0x69666F72656E73696373)
]]></artwork>

      <t>
        For a typical Evidence packet with dozens of fields, integer
        keys reduce packet size by 20-40%.
      </t>
    </section>

    <section anchor="deterministic-encoding">
      <name>Deterministic Encoding</name>

      <t>
        Evidence packets SHOULD use deterministic CBOR encoding
        (RFC 8949 Section 4.2) to enable:
      </t>

      <ul>
        <li>
          Byte-exact reproduction of packets for signature verification
        </li>
        <li>
          Consistent hashing for cache and deduplication purposes
        </li>
        <li>
          Simplified debugging and comparison
        </li>
      </ul>

      <t>
        Deterministic encoding requirements:
      </t>

      <ul>
        <li>
          Map keys sorted in bytewise lexicographic order
        </li>
        <li>
          Integers encoded in minimal representation
        </li>
        <li>
          Floating-point values canonicalized
        </li>
      </ul>
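To make the key-ordering and minimal-integer rules concrete, here is a toy encoder for maps with small unsigned-integer keys and values. This is a sketch only; a real implementation would use a CBOR library's canonical or deterministic mode:

```python
def enc_uint(n: int) -> bytes:
    """Minimal-length CBOR unsigned integer (major type 0)."""
    if n < 24:
        return bytes([n])
    for ai, size in ((24, 1), (25, 2), (26, 4), (27, 8)):
        if n < (1 << (8 * size)):
            return bytes([ai]) + n.to_bytes(size, "big")
    raise ValueError("uint too large")

def enc_map(pairs: dict) -> bytes:
    """Deterministic CBOR map: encode keys, sort them in bytewise
    lexicographic order, then emit the map header and entries.
    Assumes fewer than 24 entries and uint keys/values."""
    items = sorted((enc_uint(k), enc_uint(v)) for k, v in pairs.items())
    head = bytes([0xA0 + len(items)])
    return head + b"".join(k + v for k, v in items)
```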
    </section>
  </section>

  <section anchor="eat-profile">
    <name>EAT Profile</name>

    <t>
      This specification defines an EAT (Entity Attestation Token)
      profile for Proof of Process evidence. The profile URI is:
    </t>

    <artwork><![CDATA[
https://example.com/rats/eat/profile/pop/1.0
]]></artwork>

    <section anchor="eat-claims">
      <name>Custom EAT Claims</name>

      <t>
        The following custom claims are proposed for IANA registration
        upon working group adoption:
      </t>

      <table>
        <thead>
          <tr>
            <th>Claim Name</th>
            <th>Type</th>
            <th>Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>pop-forensic-assessment</td>
            <td>uint</td>
            <td>forensic-assessment enumeration value</td>
          </tr>
          <tr>
            <td>pop-presence-score</td>
            <td>float</td>
            <td>Presence challenge pass rate (0.0-1.0)</td>
          </tr>
          <tr>
            <td>pop-evidence-tier</td>
            <td>uint</td>
            <td>Evidence tier (1-4)</td>
          </tr>
          <tr>
            <td>pop-ai-composite-score</td>
            <td>float</td>
            <td>AI indicator composite score (0.0-1.0)</td>
          </tr>
        </tbody>
      </table>
    </section>

    <section anchor="ar4si-extension">
      <name>AR4SI Trustworthiness Extension</name>

      <t>
        The Attestation Result includes a proposed extension to the
        AR4SI (<xref target="I-D.ietf-rats-ar4si"/>) trustworthiness
        vector:
      </t>

      <artwork><![CDATA[
behavioral-consistency: -1..3
  -1 = no claim
   0 = behavioral evidence inconsistent with human authorship
   1 = behavioral evidence inconclusive
   2 = behavioral evidence consistent with human authorship
   3 = behavioral evidence strongly indicates human authorship
]]></artwork>

      <t>
        This extension enables integration of witnessd Attestation
        Results with broader trustworthiness assessment frameworks.
      </t>
    </section>
  </section>

  <section anchor="evidence-model-security">
    <name>Security Considerations</name>

    <section anchor="tamper-evidence">
      <name>Tamper-Evidence vs. Tamper-Proof</name>

      <t>
        The evidence model provides tamper-evidence, not tamper-proofing:
      </t>

      <ul>
        <li>
          <t>Tamper-evident:</t>
          <t>
            Modifications to Evidence packets are detectable through
            cryptographic verification. The hash chain,
            VDF entanglement,
            and HMAC bindings ensure that any alteration invalidates
            the Evidence.
          </t>
        </li>

        <li>
          <t>Not tamper-proof:</t>
          <t>
            An adversary with sufficient resources can fabricate
            Evidence by investing the computational time required
            by VDF proofs and generating plausible behavioral data.
            The forgery-cost-section quantifies this investment.
          </t>
        </li>
      </ul>

      <t>
        Relying Parties should understand this distinction when
        making trust decisions.
      </t>
    </section>

    <section anchor="verification-independence">
      <name>Independent Verification</name>

      <t>
        Evidence packets are designed for independent verification:
      </t>

      <ul>
        <li>
          All cryptographic proofs are included in the packet
        </li>
        <li>
          Verification requires no access to the original device
        </li>
        <li>
          Verification requires no network access (except for
          external anchor validation)
        </li>
        <li>
          Multiple independent Verifiers can appraise the same Evidence
        </li>
      </ul>

      <t>
        This property enables adversarial verification: a skeptical
        Relying Party can verify Evidence without trusting the
        Attester's infrastructure.
      </t>
    </section>

    <section anchor="privacy-construction">
      <name>Privacy by Construction</name>

      <t>
        The evidence model enforces privacy through structural
        constraints:
      </t>

      <ul>
        <li>
          <t>No content storage:</t>
          <t>
            Evidence contains hashes of document states, not content.
            The document itself is never included in Evidence packets.
          </t>
        </li>

        <li>
          <t>No keystroke capture:</t>
          <t>
            Individual characters typed are not recorded. Timing
            intervals are captured without association to specific
            characters.
          </t>
        </li>

        <li>
          <t>Aggregated behavioral data:</t>
          <t>
            Raw timing data is aggregated into histograms before
            inclusion in Evidence. Optional raw interval disclosure
            is user-controlled.
          </t>
        </li>

        <li>
          <t>No screenshots or screen recording:</t>
          <t>
            Visual content is never captured by the Attesting
            Environment.
          </t>
        </li>
      </ul>
    </section>

    <section anchor="attester-trust">
      <name>Attesting Environment Trust</name>

      <t>
        The evidence model assumes a minimally trusted Attesting
        Environment:
      </t>

      <ul>
        <li>
          <t>Chain-verifiable claims (absence-claim-types 1-15):</t>
          <t>
            Can be verified from Evidence alone without trusting
            the AE beyond basic data integrity.
          </t>
        </li>

        <li>
          <t>Monitoring-dependent claims (absence-claim-types 16-63):</t>
          <t>
            Require trust that the AE accurately reported monitored
            events. The ae-trust-basis field documents these assumptions.
          </t>
        </li>
      </ul>

      <t>
        Hardware attestation (hardware-section) increases AE trust
        by binding Evidence to verified hardware identities.
      </t>
    </section>
  </section>

</section>

    <!-- Section 3: Jitter Seal (Behavioral Entropy) -->
    <section anchor="jitter-seal" xml:base="sections/jitter-seal.xml">
  <name>Jitter Seal: Captured Behavioral Entropy</name>

  <t>
    This section defines the Jitter Seal mechanism, a novel contribution
    to behavioral evidence that binds captured timing entropy to the
    checkpoint chain. Unlike injected entropy (random delays added by
    software), captured entropy commits to actual measured timing from
    human input events.
  </t>

  <section anchor="jitter-design-principles">
    <name>Design Principles</name>

    <t>
      The Jitter Seal addresses a fundamental limitation in existing
      attestation frameworks: the inability to distinguish evidence
      generated during genuine human interaction from evidence
      reconstructed after the fact.
    </t>

    <dl>
      <dt>Captured vs. Injected Entropy:</dt>
      <dd>
        <t>
          Injected entropy (e.g., random delays inserted by software)
          can be regenerated if the random seed is known. Captured
          entropy commits to timing measurements that existed only at
          the moment of observation. An adversary cannot regenerate
          captured entropy without access to the original input stream.
        </t>
      </dd>

      <dt>Commitment Before Observation:</dt>
      <dd>
        <t>
          The entropy commitment is computed and bound to the checkpoint
          chain before the summarized statistics are finalized. This
          prevents an adversary from crafting statistics that match a
          predetermined commitment.
        </t>
      </dd>

      <dt>Privacy-Preserving Aggregation:</dt>
      <dd>
        <t>
          Raw timing intervals are aggregated into histogram buckets,
          preserving statistical properties while preventing
          reconstruction of the original keystroke sequence. The
          raw intervals MAY be disclosed for enhanced verification
          but are not required.
        </t>
      </dd>
    </dl>
  </section>

  <section anchor="jitter-binding-structure">
    <name>Jitter Binding Structure</name>

    <t>
      The jitter-binding structure appears in each checkpoint and
      contains five fields:
    </t>

    <artwork type="cddl"><![CDATA[
jitter-binding = {
    1 => hash-value,           ; entropy-commitment
    2 => [+ entropy-source],   ; sources
    3 => jitter-summary,       ; summary
    4 => bstr .size 32,        ; binding-mac
    ? 5 => [+ uint],           ; raw-intervals (optional)
}
]]></artwork>

    <section anchor="entropy-commitment">
      <name>Entropy Commitment (Key 1)</name>

      <t>
        The entropy-commitment is a cryptographic hash of the raw
        timing intervals concatenated in observation order:
      </t>

      <artwork><![CDATA[
entropy-commitment = H(interval{0} || interval{1} || ... || interval{n})
]]></artwork>

      <t>
        where H is the hash algorithm specified in the hash-value
        structure (SHA-256 RECOMMENDED), and each interval is encoded
        as a 32-bit unsigned integer representing milliseconds.
      </t>

      <t>
        This commitment is computed BEFORE the histogram summary,
        ensuring the raw data cannot be manipulated to match a
        desired statistical profile.
      </t>
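      <t>
        As a non-normative sketch, the commitment can be computed as
        follows. The big-endian encoding of each interval is an
        assumption of this example; the document fixes only the 32-bit
        width.
      </t>

      <sourcecode type="python"><![CDATA[
```python
import hashlib
import struct

def entropy_commitment(intervals_ms):
    """Hash raw timing intervals in observation order.

    Each interval is packed as a 32-bit unsigned integer
    (big-endian here; this draft fixes only the width).
    """
    data = b"".join(struct.pack(">I", iv) for iv in intervals_ms)
    return hashlib.sha256(data).digest()
```
]]></sourcecode>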
    </section>

    <section anchor="entropy-sources">
      <name>Entropy Sources (Key 2)</name>

      <t>
        The sources array identifies which input modalities contributed
        to the captured entropy:
      </t>

      <table>
        <thead>
          <tr>
            <th>Value</th>
            <th>Source</th>
            <th>Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>1</td>
            <td>keystroke-timing</td>
            <td>Inter-key intervals from keyboard input</td>
          </tr>
          <tr>
            <td>2</td>
            <td>pause-patterns</td>
            <td>Gaps between editing bursts (&gt;2 seconds)</td>
          </tr>
          <tr>
            <td>3</td>
            <td>edit-cadence</td>
            <td>Rhythm of insertions/deletions over time</td>
          </tr>
          <tr>
            <td>4</td>
            <td>cursor-movement</td>
            <td>Navigation timing within document</td>
          </tr>
          <tr>
            <td>5</td>
            <td>scroll-behavior</td>
            <td>Document scrolling patterns</td>
          </tr>
          <tr>
            <td>6</td>
            <td>focus-changes</td>
            <td>Application focus gain/loss events</td>
          </tr>
        </tbody>
      </table>

      <t>
        Implementations MUST include at least one source. The
        keystroke-timing source (1) provides the highest entropy
        density and SHOULD be included when keyboard input is
        available.
      </t>
    </section>

    <section anchor="jitter-summary">
      <name>Jitter Summary (Key 3)</name>

      <t>
        The summary provides verifiable statistics without exposing
        raw timing data:
      </t>

      <artwork type="cddl"><![CDATA[
jitter-summary = {
    1 => uint,                 ; sample-count
    2 => [+ histogram-bucket], ; timing-histogram
    3 => float32,              ; estimated-entropy-bits
    ? 4 => [+ anomaly-flag],   ; anomalies (if detected)
}

histogram-bucket = {
    1 => uint,                 ; lower-bound-ms
    2 => uint,                 ; upper-bound-ms
    3 => uint,                 ; count
}
]]></artwork>

      <t>
        The estimated-entropy-bits field is calculated using Shannon
        entropy over the histogram distribution:
      </t>

      <artwork><![CDATA[
H = -sum(p[i] * log2(p[i])) for all buckets where p[i] > 0
p[i] = count[i] / total_samples
]]></artwork>

      <t>
        RECOMMENDED bucket boundaries (in milliseconds):
        0, 50, 100, 200, 500, 1000, 2000, 5000, +infinity.
        These boundaries capture the typical range of human typing
        and pause behavior.
      </t>
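      <t>
        A non-normative sketch of the histogram and entropy
        computation, using the RECOMMENDED boundaries. Treating each
        bucket as lower-inclusive and upper-exclusive is an assumption
        of this example; the document does not pin down membership at
        the boundary values.
      </t>

      <sourcecode type="python"><![CDATA[
```python
import math

# RECOMMENDED bucket boundaries in milliseconds
BOUNDARIES = [0, 50, 100, 200, 500, 1000, 2000, 5000, float("inf")]

def bucketize(intervals_ms):
    """Count intervals into the RECOMMENDED histogram buckets
    (lower-inclusive, upper-exclusive in this sketch)."""
    counts = [0] * (len(BOUNDARIES) - 1)
    for iv in intervals_ms:
        for i in range(len(counts)):
            if BOUNDARIES[i] <= iv < BOUNDARIES[i + 1]:
                counts[i] += 1
                break
    return counts

def shannon_entropy_bits(counts):
    """H = -sum(p[i] * log2(p[i])) over non-empty buckets."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total)
                for c in counts if c > 0)
```
]]></sourcecode>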
    </section>

    <section anchor="binding-mac">
      <name>Binding MAC (Key 4)</name>

      <t>
        The binding-mac cryptographically binds the jitter data to
        the checkpoint chain:
      </t>

      <artwork><![CDATA[
binding-mac = HMAC-SHA256(
    key = checkpoint-chain-key,
    message = entropy-commitment ||
              CBOR(sources) ||
              CBOR(summary) ||
              prev-checkpoint-hash
)
]]></artwork>

      <t>
        This binding ensures that:
      </t>

      <ol>
        <li>
          The jitter data cannot be transplanted between checkpoints
        </li>
        <li>
          The jitter data cannot be modified without invalidating the
          checkpoint chain
        </li>
        <li>
          The temporal ordering of jitter observations is preserved
        </li>
      </ol>
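      <t>
        A non-normative sketch of the MAC computation; CBOR encoding
        of the sources array and jitter-summary is taken as given and
        is out of scope for this example.
      </t>

      <sourcecode type="python"><![CDATA[
```python
import hashlib
import hmac

def binding_mac(chain_key, entropy_commitment,
                sources_cbor, summary_cbor, prev_checkpoint_hash):
    """HMAC-SHA256 binding the jitter data to the checkpoint chain.

    sources_cbor and summary_cbor are the CBOR encodings of the
    sources array and jitter-summary map.
    """
    msg = (entropy_commitment + sources_cbor +
           summary_cbor + prev_checkpoint_hash)
    return hmac.new(chain_key, msg, hashlib.sha256).digest()
```
]]></sourcecode>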
    </section>

    <section anchor="raw-intervals">
      <name>Raw Intervals (Key 5, Optional)</name>

      <t>
        The raw-intervals array MAY be included for enhanced
        verification. When present, verifiers can:
      </t>

      <ul>
        <li>
          Recompute the entropy-commitment and verify it matches
        </li>
        <li>
          Recompute the histogram and verify consistency
        </li>
        <li>
          Perform statistical analysis beyond the histogram
        </li>
      </ul>

      <t>
        Privacy consideration: Raw intervals may constitute
        biometric-adjacent data. See <xref target="jitter-privacy"/>.
      </t>
    </section>
  </section>

  <section anchor="jitter-vdf-entanglement">
    <name>VDF Entanglement</name>

    <t>
      The Jitter Seal achieves temporal binding through entanglement
      with the VDF proof chain: the VDF input for checkpoint N
      incorporates the entropy commitment captured for that same
      checkpoint, per the construction in
      <xref target="vdf-entanglement"/>:
    </t>

    <artwork><![CDATA[
VDF_input{N} = H(
    VDF_output{N-1} ||
    content-hash{N} ||
    jitter-binding{N}.entropy-commitment ||
    sequence{N}
)

VDF_output{N} = VDF(VDF_input{N}, iterations{N})
]]></artwork>

    <t>
      This entanglement creates a causal dependency: the VDF output
      cannot be computed until the jitter entropy is captured and
      committed. Combined with the VDF's sequential computation
      requirement, this ensures that:
    </t>

    <ol>
      <li>
        The jitter data existed before the VDF computation began
      </li>
      <li>
        The checkpoint cannot be backdated without recomputing the
        entire VDF chain from that point forward
      </li>
      <li>
        The minimum time between checkpoints is bounded by VDF
        computation time plus jitter observation time
      </li>
    </ol>
  </section>

  <section anchor="jitter-verification">
    <name>Verification Procedure</name>

    <t>
      A Verifier appraises the Jitter Seal through the following
      procedure:
    </t>

    <ol>
      <li>
        <t>Structural Validation:</t>
        <t>
          Verify all required fields are present and correctly typed
          per the CDDL schema.
        </t>
      </li>

      <li>
        <t>Binding MAC Verification:</t>
        <t>
          Recompute the binding-mac using the checkpoint chain key
          and verify it matches the provided value.
        </t>
      </li>

      <li>
        <t>Entropy Commitment Verification (if raw-intervals present):</t>
        <t>
          Recompute H(intervals) and verify it matches
          entropy-commitment.
        </t>
      </li>

      <li>
        <t>Histogram Consistency (if raw-intervals present):</t>
        <t>
          Recompute histogram buckets from raw intervals and verify
          consistency with the provided summary.
        </t>
      </li>

      <li>
        <t>Entropy Threshold Check:</t>
        <t>
          Verify estimated-entropy-bits meets the minimum threshold
          for the claimed evidence tier. RECOMMENDED minimum: 32 bits
          for Standard tier, 64 bits for Enhanced tier.
        </t>
      </li>

      <li>
        <t>Sample Count Check:</t>
        <t>
          Verify sample-count is consistent with the document size
          and claimed editing duration. Anomalously low sample counts
          relative to content length indicate potential evidence gaps.
        </t>
      </li>

      <li>
        <t>Anomaly Assessment:</t>
        <t>
          If anomaly-flags are present, incorporate them into the
          overall forensic assessment. The presence of anomalies
          does not invalidate the evidence but affects confidence.
        </t>
      </li>

      <li>
        <t>VDF Entanglement Verification:</t>
        <t>
          Verify the entropy-commitment appears in the VDF input
          computation for this checkpoint.
        </t>
      </li>
    </ol>

    <t>
      The verification result contributes to the chain-verifiable
      claims defined in the absence-section:
    </t>

    <ul>
      <li>
        jitter-entropy-above-threshold (claim type 8): PROVEN if
        estimated-entropy-bits exceeds threshold
      </li>
      <li>
        jitter-samples-above-count (claim type 9): PROVEN if
        sample-count exceeds threshold
      </li>
    </ul>
  </section>

  <section anchor="jitter-anomalies">
    <name>Anomaly Detection</name>

    <t>
      The Attesting Environment MAY flag anomalies in the captured
      timing data:
    </t>

    <table>
      <thead>
        <tr>
          <th>Value</th>
          <th>Flag</th>
          <th>Indication</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>1</td>
          <td>unusually-regular</td>
          <td>
            Timing distribution has lower variance than typical
            human input (coefficient of variation &lt; 0.1)
          </td>
        </tr>
        <tr>
          <td>2</td>
          <td>burst-detected</td>
          <td>
            Sustained high-speed input exceeding 200 WPM for
            &gt;30 seconds
          </td>
        </tr>
        <tr>
          <td>3</td>
          <td>gap-detected</td>
          <td>
            Significant editing gap (&gt;5 minutes) within what
            appears to be a continuous session
          </td>
        </tr>
        <tr>
          <td>4</td>
          <td>paste-heavy</td>
          <td>
            &gt;50% of content added via paste operations in this
            checkpoint interval
          </td>
        </tr>
      </tbody>
    </table>

    <t>
      Anomaly flags are informational and do not constitute claims
      about authorship or intent. They provide context for Verifier
      appraisal and Relying Party decision-making.
    </t>
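    <t>
      As an illustrative, non-normative check for flag 1, the
      coefficient of variation can be computed as follows:
    </t>

    <sourcecode type="python"><![CDATA[
```python
import statistics

def unusually_regular(intervals_ms, cv_threshold=0.1):
    """Flag timing whose coefficient of variation falls below 0.1,
    i.e., lower variance than typical human input."""
    mean = statistics.mean(intervals_ms)
    if mean == 0:
        return True
    return statistics.pstdev(intervals_ms) / mean < cv_threshold
```
]]></sourcecode>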
  </section>

  <section anchor="jitter-rats-differentiation">
    <name>Relationship to RATS Evidence</name>

    <t>
      The Jitter Seal extends the RATS evidence model
      <xref target="RFC9334"/> in several ways:
    </t>

    <dl>
      <dt>Behavioral Evidence:</dt>
      <dd>
        <t>
          Traditional RATS evidence attests to system state (software
          versions, configuration, integrity). The Jitter Seal attests
          to behavioral characteristics of the input stream, capturing
          properties that emerge only during genuine interaction.
        </t>
      </dd>

      <dt>Continuous Attestation:</dt>
      <dd>
        <t>
          Unlike point-in-time attestation, Jitter Seals are
          accumulated throughout an authoring session. Each checkpoint
          adds to the behavioral evidence corpus, with earlier seals
          constraining what later seals can claim.
        </t>
      </dd>

      <dt>Non-Reproducible Evidence:</dt>
      <dd>
        <t>
          RATS evidence can typically be regenerated by returning to
          the same system state. Jitter Seal evidence cannot be
          regenerated because the timing entropy existed only at the
          moment of capture.
        </t>
      </dd>

      <dt>Epoch Marker Compatibility:</dt>
      <dd>
        <t>
          The VDF-entangled Jitter Seal can function as a local
          freshness mechanism compatible with the Epoch Markers
          framework <xref target="I-D.ietf-rats-epoch-markers"/>.
          The VDF output chain provides relative ordering; external
          anchors provide absolute time binding.
        </t>
      </dd>
    </dl>
  </section>

  <section anchor="jitter-privacy">
    <name>Privacy Considerations</name>

    <t>
      Keystroke timing data is biometric-adjacent: while not
      traditionally classified as biometric data, timing patterns
      can potentially identify individuals or reveal sensitive
      information about cognitive state or physical condition.
    </t>

    <section anchor="jitter-privacy-mitigations">
      <name>Mitigation Measures</name>

      <ul>
        <li>
          <t>Histogram Aggregation:</t>
          <t>
            By default, only aggregated histogram data is included in
            the evidence packet. Raw intervals are optional and SHOULD
            only be disclosed when enhanced verification is required.
          </t>
        </li>

        <li>
          <t>Bucket Granularity:</t>
          <t>
            The RECOMMENDED bucket boundaries (50ms minimum width)
            prevent reconstruction of exact keystroke sequences while
            preserving statistically significant patterns.
          </t>
        </li>

        <li>
          <t>No Character Mapping:</t>
          <t>
            Timing intervals are recorded without association to
            specific characters or words. The evidence captures
            rhythm without content.
          </t>
        </li>

        <li>
          <t>Session Isolation:</t>
          <t>
            Jitter data is bound to a specific evidence packet and
            checkpoint chain. Cross-session correlation requires
            access to multiple evidence packets.
          </t>
        </li>
      </ul>
    </section>

    <section anchor="jitter-privacy-disclosure">
      <name>Disclosure Recommendations</name>

      <t>
        Implementations SHOULD inform users that:
      </t>

      <ol>
        <li>
          Typing rhythm information is captured and included in
          evidence packets
        </li>
        <li>
          Evidence packets may be shared with Verifiers and
          potentially with Relying Parties
        </li>
        <li>
          Raw timing data (if disclosed) could theoretically be
          used for behavioral analysis
        </li>
      </ol>

      <t>
        Users SHOULD have the option to:
      </t>

      <ol>
        <li>
          Disable raw-intervals disclosure (histogram-only mode)
        </li>
        <li>
          Request deletion of evidence packets after verification
        </li>
        <li>
          Review captured entropy statistics before packet
          finalization
        </li>
      </ol>
    </section>
  </section>

  <section anchor="jitter-security">
    <name>Security Considerations</name>

    <section anchor="jitter-replay-attacks">
      <name>Replay Attacks</name>

      <t>
        An adversary might attempt to replay captured jitter data
        from a previous session. This attack is mitigated by:
      </t>

      <ol>
        <li>
          VDF entanglement: The jitter commitment is bound to the
          VDF chain, which includes the previous checkpoint hash
        </li>
        <li>
          Chain MAC: The binding-mac includes the previous
          checkpoint hash, preventing transplantation
        </li>
        <li>
          Content binding: The jitter data is associated with
          specific content hashes that change with each edit
        </li>
      </ol>
    </section>

    <section anchor="jitter-simulation-attacks">
      <name>Simulation Attacks</name>

      <t>
        An adversary might attempt to generate synthetic timing data
        that mimics human patterns. The cost of this attack is
        bounded by:
      </t>

      <ol>
        <li>
          <t>Entropy requirement:</t>
          <t>
            Meeting the entropy threshold requires sufficient
            variation in timing. Perfectly regular synthetic input
            will fail the entropy check.
          </t>
        </li>
        <li>
          <t>Real-time constraint:</t>
          <t>
            The VDF entanglement requires that jitter data be
            captured before VDF computation. Generating synthetic
            timing that passes statistical tests while maintaining
            real-time constraints is non-trivial.
          </t>
        </li>
        <li>
          <t>Statistical consistency:</t>
          <t>
            Synthetic timing must be consistent across all
            checkpoints. Anomaly detection may flag statistically
            improbable patterns.
          </t>
        </li>
      </ol>

      <t>
        The Jitter Seal does not claim to make simulation impossible,
        only to make it costly relative to genuine interaction.
        The forgery-cost-section provides quantified bounds on
        attack costs.
      </t>
    </section>

    <section anchor="jitter-ae-trust">
      <name>Attesting Environment Trust</name>

      <t>
        The Jitter Seal relies on the Attesting Environment to
        accurately capture and report timing data. A compromised
        AE could fabricate jitter data. This is addressed by:
      </t>

      <ol>
        <li>
          Hardware binding (hardware-section) for AE integrity
        </li>
        <li>
          Calibration attestation for VDF speed verification
        </li>
        <li>
          Clear documentation of AE trust assumptions in
          absence-claim structures (ae-trust-basis field)
        </li>
      </ol>

      <t>
        Chain-verifiable claims (1-15) do not depend on AE trust
        beyond basic data integrity. Monitoring-dependent claims
        (16-63) explicitly document their AE trust requirements.
      </t>
    </section>
  </section>

</section>

    <!-- Section 4: VDF Mechanisms -->
    <section anchor="vdf-mechanisms" xml:base="sections/vdf-mechanisms.xml">
  <name>Verifiable Delay Functions</name>

  <t>
    This section specifies the Verifiable Delay Function (VDF)
    mechanisms used to establish temporal ordering and minimum
    elapsed time between checkpoints. The design is algorithm-agile,
    supporting both iterated hash constructions and succinct VDF
    schemes.
  </t>

  <section anchor="vdf-construction">
    <name>VDF Construction</name>

    <t>
      A VDF proof appears in each checkpoint and contains the
      following fields:
    </t>

    <artwork type="cddl"><![CDATA[
vdf-proof = {
    1 => vdf-algorithm,            ; algorithm
    2 => vdf-params,               ; params
    3 => bstr,                     ; input
    4 => bstr,                     ; output
    5 => bstr,                     ; proof
    6 => duration,                 ; claimed-duration
    7 => uint,                     ; iterations
    ? 8 => calibration-attestation, ; calibration (RECOMMENDED)
}
]]></artwork>

    <section anchor="vdf-algorithms">
      <name>Algorithm Registry</name>

      <t>
        The following VDF algorithms are defined:
      </t>

      <table>
        <thead>
          <tr>
            <th>Value</th>
            <th>Algorithm</th>
            <th>Status</th>
            <th>Proof Size</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>1</td>
            <td>iterated-sha256</td>
            <td>MUST support</td>
            <td>0 (implicit)</td>
          </tr>
          <tr>
            <td>2</td>
            <td>iterated-sha3-256</td>
            <td>SHOULD support</td>
            <td>0 (implicit)</td>
          </tr>
          <tr>
            <td>16</td>
            <td>pietrzak-rsa2048</td>
            <td>MAY support</td>
            <td>~1 KB</td>
          </tr>
          <tr>
            <td>17</td>
            <td>wesolowski-rsa2048</td>
            <td>MAY support</td>
            <td>~256 bytes</td>
          </tr>
          <tr>
            <td>18</td>
            <td>pietrzak-class-group</td>
            <td>MAY support</td>
            <td>~2 KB</td>
          </tr>
          <tr>
            <td>19</td>
            <td>wesolowski-class-group</td>
            <td>MAY support</td>
            <td>~512 bytes</td>
          </tr>
        </tbody>
      </table>

      <t>
        Algorithm values 1-15 are reserved for iterated hash
        constructions. Values 16-31 are reserved for succinct VDF
        schemes. Values 32+ are available for future allocation.
      </t>
    </section>

    <section anchor="vdf-iterated-hash">
      <name>Iterated Hash Construction</name>

      <t>
        The iterated hash VDF computes:
      </t>

      <artwork><![CDATA[
output = H^n(input)

where H^n denotes n iterations of hash function H:
  H^0(x) = x
  H^n(x) = H(H^(n-1)(x))
]]></artwork>
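      <t>
        A non-normative sketch of this construction; because
        verification is by recomputation, the same function serves
        both Attester and Verifier:
      </t>

      <sourcecode type="python"><![CDATA[
```python
import hashlib

def iterated_sha256_vdf(seed: bytes, iterations: int) -> bytes:
    """Compute H^n(seed) for H = SHA-256.

    Inherently sequential: each iteration consumes the
    previous digest.
    """
    x = seed
    for _ in range(iterations):
        x = hashlib.sha256(x).digest()
    return x
```
]]></sourcecode>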

      <t>
        Parameters for iterated hash VDFs:
      </t>

      <artwork type="cddl"><![CDATA[
iterated-hash-params = {
    1 => hash-algorithm,    ; hash-function
    2 => uint,              ; iterations-per-second
}
]]></artwork>

      <t>
        The iterations-per-second field records the calibrated
        performance of the Attesting Environment, enabling Verifiers
        to assess whether the claimed-duration is plausible for the
        iteration count.
      </t>

      <t>
        Properties of iterated hash VDFs:
      </t>

      <dl>
        <dt>Verification Cost:</dt>
        <dd>
          O(n) -- Verifier must recompute all iterations.
          This is acceptable for the iteration counts typical in
          authoring scenarios (10^6 to 10^9 iterations).
        </dd>

        <dt>Parallelization Resistance:</dt>
        <dd>
          Inherently sequential. Each iteration depends on the
          previous output. No known parallelization attack.
        </dd>

        <dt>Hardware Acceleration:</dt>
        <dd>
          SHA-256 acceleration (e.g., Intel SHA Extensions, ARM
          Cryptography Extensions) provides ~3-5x speedup over
          software. This is accounted for in calibration.
        </dd>
      </dl>
    </section>

    <section anchor="vdf-succinct">
      <name>Succinct VDF Construction</name>

      <t>
        Succinct VDFs (<xref target="Pietrzak2019"/>,
        <xref target="Wesolowski2019"/>) provide O(log n) or O(1)
        verification time at the cost of larger proof size and more
        complex computation.
      </t>

      <artwork type="cddl"><![CDATA[
succinct-vdf-params = {
    10 => uint,             ; modulus-bits (e.g., 2048)
    ? 11 => uint,           ; security-parameter
}
]]></artwork>

      <t>
        Key set 10-19 disambiguates succinct params from iterated
        hash params (key set 1-9) without requiring a type tag.
      </t>

      <t>
        Succinct VDFs are OPTIONAL and intended for scenarios where:
      </t>

      <ul>
        <li>
          Verification must complete in bounded time regardless of
          delay duration
        </li>
        <li>
          Evidence packets may contain very long VDF chains
          (millions of checkpoints)
        </li>
        <li>
          Third-party Verifiers cannot afford O(n) recomputation
        </li>
      </ul>

      <t>
        When using succinct VDFs, the proof field contains the
        cryptographic proof of correct computation. For iterated
        hash VDFs, the proof field is empty (verification is by
        recomputation).
      </t>
    </section>
  </section>

  <section anchor="vdf-causality">
    <name>Causality Property</name>

    <t>
      The VDF chain establishes unforgeable temporal ordering through
      structural causality. This is a key novel contribution of the
      Proof of Process framework.
    </t>

    <section anchor="vdf-entanglement">
      <name>Checkpoint Entanglement</name>

      <t>
        The VDF input for checkpoint N is computed as:
      </t>

      <artwork><![CDATA[
VDF_input{N} = H(
    VDF_output{N-1} ||      ; Previous VDF output
    content-hash{N} ||      ; Current document state
    jitter-commitment{N} || ; Captured behavioral entropy
    sequence{N}             ; Checkpoint sequence number
)
]]></artwork>

      <t>
        For the genesis checkpoint (N = 0):
      </t>

      <artwork><![CDATA[
VDF_input{0} = H(
    session-entropy ||      ; Random 256-bit session seed
    content-hash{0} ||      ; Initial document state
    jitter-commitment{0} ||
    0x00000000              ; Sequence zero
)
]]></artwork>
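      <t>
        A non-normative sketch of the input derivation above. Taking
        H as SHA-256 and packing the sequence number as a 32-bit
        big-endian integer are assumptions of this example, chosen to
        match the 0x00000000 genesis encoding; for the genesis
        checkpoint, the session-entropy seed plays the role of the
        previous VDF output.
      </t>

      <sourcecode type="python"><![CDATA[
```python
import hashlib
import struct

def vdf_input(prev_vdf_output, content_hash, jitter_commitment, sequence):
    """Derive VDF_input{N} per the entanglement construction.

    For N = 0, pass the 256-bit session-entropy seed as
    prev_vdf_output and sequence = 0.
    """
    return hashlib.sha256(
        prev_vdf_output +
        content_hash +
        jitter_commitment +
        struct.pack(">I", sequence)   # sequence number, 4 bytes
    ).digest()
```
]]></sourcecode>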

      <t>
        This construction ensures that:
      </t>

      <ol>
        <li>
          <t>Sequential Dependency:</t>
          <t>
            VDF_output{N} cannot be computed without VDF_output{N-1}.
            The chain is inherently sequential.
          </t>
        </li>

        <li>
          <t>Content Binding:</t>
          <t>
            Each VDF output is bound to a specific document state.
            Changing the content invalidates all subsequent VDF proofs.
          </t>
        </li>

        <li>
          <t>Jitter Binding:</t>
          <t>
            The behavioral entropy commitment is entangled with the
            VDF, as detailed in <xref target="jitter-vdf-entanglement"/>.
          </t>
        </li>

        <li>
          <t>No Precomputation:</t>
          <t>
            The adversary cannot precompute VDF outputs because the
            input depends on runtime values (content, jitter) that
            are unknown until the checkpoint is created.
          </t>
        </li>
      </ol>
    </section>

    <section anchor="vdf-temporal-ordering">
      <name>Temporal Ordering Without Trusted Time</name>

      <t>
        The causality property provides relative temporal ordering
        without relying on trusted timestamps:
      </t>

      <dl>
        <dt>Relative Ordering:</dt>
        <dd>
          Checkpoint N necessarily occurred after checkpoint N-1,
          because VDF_input{N} requires VDF_output{N-1}.
        </dd>

        <dt>Minimum Elapsed Time:</dt>
        <dd>
          <t>
            The time between checkpoints N-1 and N is at least:
          </t>
          <artwork><![CDATA[
min_elapsed{N} = iterations{N} / calibration_rate
]]></artwork>
          <t>
            where calibration_rate is the attested iterations-per-second
            for the device.
          </t>
        </dd>

        <dt>Cumulative Time Bound:</dt>
        <dd>
          <t>
            The total minimum time to produce the evidence packet is:
          </t>
          <artwork><![CDATA[
min_total = sum(iterations[i] / calibration_rate) for i = 0..N
]]></artwork>
        </dd>

        <dt>Absolute Time Binding:</dt>
        <dd>
          External anchors (RFC 3161 timestamps, blockchain proofs)
          bind the checkpoint chain to absolute time. The VDF provides
          the ordering; anchors provide the epoch.
        </dd>
      </dl>
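      <t>
        The per-checkpoint and cumulative bounds above reduce to
        simple arithmetic; a non-normative sketch, where
        calibration_rate is the attested iterations-per-second:
      </t>

      <sourcecode type="python"><![CDATA[
```python
def min_elapsed_seconds(iterations, calibration_rate):
    """Lower bound on wall-clock time for one checkpoint's VDF."""
    return iterations / calibration_rate

def min_total_seconds(iterations_per_checkpoint, calibration_rate):
    """Cumulative lower bound across the whole checkpoint chain."""
    return sum(n / calibration_rate for n in iterations_per_checkpoint)
```
]]></sourcecode>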
    </section>

    <section anchor="vdf-backdating">
      <name>Backdating Resistance</name>

      <t>
        An adversary attempting to backdate evidence must:
      </t>

      <ol>
        <li>
          Generate content that produces the desired content-hash
        </li>
        <li>
          Generate jitter data that produces valid entropy-commitment
        </li>
        <li>
          Compute the VDF chain from the backdated checkpoint forward
        </li>
        <li>
          Complete all of the above before any external anchor
          confirms a later checkpoint
        </li>
      </ol>

      <t>
        The cost of step 3 grows linearly with the number of
        subsequent checkpoints and the iteration count per checkpoint.
        This cost is quantified in the forgery-cost-section.
      </t>

      <t>
        Crucially, the adversary cannot parallelize step 3: VDF
        computation is inherently sequential. Even with unlimited
        computational resources, the adversary must wait for each
        VDF to complete before starting the next.
      </t>
    </section>
  </section>

  <section anchor="vdf-calibration">
    <name>Calibration Attestation</name>

    <t>
      Calibration attestation addresses the verification problem:
      how does a Verifier know whether the claimed iterations could
      have been computed in the claimed duration on the Attester's
      hardware?
    </t>

    <section anchor="vdf-calibration-structure">
      <name>Attestation Structure</name>

      <artwork type="cddl"><![CDATA[
calibration-attestation = {
    1 => uint,              ; calibration-iterations
    2 => pop-timestamp,     ; calibration-time
    3 => cose-signature,    ; hw-signature
    4 => bstr,              ; device-nonce
    ? 5 => tstr,            ; device-model
}
]]></artwork>

      <dl>
        <dt>calibration-iterations (key 1):</dt>
        <dd>
          The number of VDF iterations completed in a 1-second
          calibration burst at session start.
        </dd>

        <dt>calibration-time (key 2):</dt>
        <dd>
          Timestamp when calibration was performed. SHOULD be
          within 24 hours of the first checkpoint.
        </dd>

        <dt>hw-signature (key 3):</dt>
        <dd>
          COSE_Sign1 signature over the calibration data, produced
          by hardware-bound keys (Secure Enclave, TPM, etc.).
        </dd>

        <dt>device-nonce (key 4):</dt>
        <dd>
          Random 256-bit value generated at calibration time.
          Prevents replay of calibration attestations across sessions.
        </dd>

        <dt>device-model (key 5, optional):</dt>
        <dd>
          Human-readable device identifier for reference purposes.
          Not used in verification.
        </dd>
      </dl>
    </section>

    <section anchor="vdf-calibration-procedure">
      <name>Calibration Procedure</name>

      <t>
        The Attesting Environment performs calibration as follows:
      </t>

      <ol>
        <li>
          <t>Generate Nonce:</t>
          <t>
            Generate a cryptographically random 256-bit device-nonce.
          </t>
        </li>

        <li>
          <t>Initialize Timer:</t>
          <t>
            Record high-resolution start time T_start.
          </t>
        </li>

        <li>
          <t>Execute Calibration Burst:</t>
          <t>
            Compute VDF iterations using the session's VDF algorithm,
            starting from H(device-nonce), until 1 second has elapsed
            since T_start.
          </t>
        </li>

        <li>
          <t>Record Result:</t>
          <t>
            calibration-iterations = number of iterations completed.
          </t>
        </li>

        <li>
          <t>Generate Attestation:</t>
          <t>
            Construct the attestation payload and sign with
            hardware-bound key.
          </t>
        </li>
      </ol>
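      <t>
        Steps 1 through 4 can be sketched as follows (non-normative;
        step 5, hardware signing, is omitted because it depends on
        platform key APIs, and iterated-sha256 is assumed as the
        session's VDF algorithm):
      </t>

      <sourcecode type="python"><![CDATA[
```python
import hashlib
import os
import time

def calibrate(burst_seconds=1.0):
    """Run a calibration burst with the iterated-sha256 VDF.

    Returns the device-nonce and the number of iterations
    completed within the burst window.
    """
    nonce = os.urandom(32)               # step 1: device-nonce
    x = hashlib.sha256(nonce).digest()   # burst starts from H(nonce)
    iterations = 0
    t_start = time.monotonic()           # step 2: high-resolution timer
    while time.monotonic() - t_start < burst_seconds:  # step 3
        x = hashlib.sha256(x).digest()
        iterations += 1
    return nonce, iterations             # step 4: record result
```
]]></sourcecode>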

      <t>
        The attestation payload for signing:
      </t>

      <artwork><![CDATA[
attestation-payload = CBOR({
    "alg": vdf-algorithm,
    "iter": calibration-iterations,
    "nonce": device-nonce,
    "time": calibration-time
})
]]></artwork>
    </section>

    <section anchor="vdf-calibration-verification">
      <name>Calibration Verification</name>

      <t>
        A Verifier validates calibration attestation as follows:
      </t>

      <ol>
        <li>
          <t>Signature Verification:</t>
          <t>
            Verify the COSE_Sign1 signature using the device's
            public key (from hardware-section or certificate chain).
          </t>
        </li>

        <li>
          <t>Nonce Uniqueness:</t>
          <t>
            Verify the device-nonce has not been seen in other
            sessions (optional, requires Verifier state).
          </t>
        </li>

        <li>
          <t>Plausibility Check:</t>
          <t>
            Verify calibration-iterations falls within expected
            range for the device class:
          </t>
          <ul>
            <li>Mobile devices: 10^5 - 10^7 iterations/second</li>
            <li>Desktop/laptop: 10^6 - 10^8 iterations/second</li>
            <li>Server-class: 10^7 - 10^9 iterations/second</li>
          </ul>
        </li>

        <li>
          <t>Consistency Check:</t>
          <t>
            For each checkpoint, verify:
          </t>
          <artwork><![CDATA[
claimed-duration >= iterations / (calibration-iterations * tolerance)
]]></artwork>
          <t>
            where tolerance accounts for measurement variance
            (RECOMMENDED: 1.1, i.e., 10% margin).
          </t>
        </li>
      </ol>
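
      <t>
        The appraisal steps above can be summarized in pseudocode.
        The function and field names are illustrative, not normative:
      </t>

      <artwork type="pseudocode"><![CDATA[
verify_calibration(attestation, device_class, checkpoints):
    # (1) Signature over the calibration payload
    if not cose_verify(attestation.signature, device_public_key):
        return INVALID("Signature failure")

    # (2) Optional nonce-uniqueness check (requires Verifier state)
    if seen_before(attestation.nonce):
        return INVALID("Nonce reuse")

    # (3) Plausibility: rate within the expected device-class range
    rate = attestation.iter / attestation.time
    if not (device_class.min_rate <= rate <= device_class.max_rate):
        return SUSPECT("Implausible calibration rate")

    # (4) Consistency: each checkpoint's claimed duration must be
    #     achievable at the calibrated rate (tolerance = 1.1)
    for cp in checkpoints:
        if cp.claimed_duration < cp.iterations / (rate * 1.1):
            return INVALID("Duration inconsistent with calibration")

    return VALID
]]></artwork>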
    </section>

    <section anchor="vdf-calibration-trust">
      <name>Trust Model</name>

      <t>
        Calibration attestation relies on hardware-bound key integrity:
      </t>

      <ul>
        <li>
          <t>With hardware attestation:</t>
          <t>
            The calibration rate is trustworthy to the extent that
            the hardware security module is trustworthy. An adversary
            cannot claim faster-than-actual calibration without
            compromising the HSM.
          </t>
        </li>

        <li>
          <t>Without hardware attestation:</t>
          <t>
            The calibration rate is self-reported by the Attesting
            Environment. The Verifier should apply conservative
            assumptions and may require external anchors for
            time verification.
          </t>
        </li>
      </ul>

      <t>
        The hardware-section documents whether hardware attestation
        is available and which platform is used.
      </t>
    </section>
  </section>

  <section anchor="vdf-verification">
    <name>Verification Procedure</name>

    <t>
      A Verifier appraises VDF proofs through the following procedure:
    </t>

    <section anchor="vdf-verification-iterated">
      <name>Iterated Hash Verification</name>

      <t>
        For iterated hash VDFs, verification requires recomputation:
      </t>

      <ol>
        <li>
          <t>Reconstruct Input:</t>
          <t>
            Compute VDF_input{N} from the checkpoint data using
            the entanglement formula in
            <xref target="vdf-entanglement"/>.
          </t>
        </li>

        <li>
          <t>Recompute VDF:</t>
          <t>
            Execute iterations{N} hash iterations starting from
            VDF_input{N}.
          </t>
        </li>

        <li>
          <t>Compare Output:</t>
          <t>
            Verify the computed output matches the claimed
            VDF_output{N}.
          </t>
        </li>

        <li>
          <t>Verify Duration (if calibration present):</t>
          <t>
            Apply the consistency check from
            <xref target="vdf-calibration-verification"/>.
          </t>
        </li>
      </ol>

      <t>
        For large evidence packets, Verifiers MAY use sampling
        strategies:
      </t>

      <ul>
        <li>
          Verify first and last checkpoints fully
        </li>
        <li>
          Randomly sample intermediate checkpoints
        </li>
        <li>
          Verify chain linkage (prev-hash) for all checkpoints
        </li>
      </ul>
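
      <t>
        A sampling-based appraisal combining the steps above can be
        sketched as follows (helper names are illustrative):
      </t>

      <artwork type="pseudocode"><![CDATA[
verify_iterated_vdf(evidence, sample_size):
    cps = evidence.checkpoints

    # Chain linkage is verified for every checkpoint
    for i in 1 .. len(cps) - 1:
        if cps[i].prev_hash != checkpoint_hash(cps[i-1]):
            return INVALID("Broken chain linkage")

    # Full recomputation for the first and last checkpoints,
    # plus a random sample of intermediates
    targets = {0, len(cps) - 1}
            + random_sample(1 .. len(cps) - 2, sample_size)
    for n in targets:
        input_n = vdf_input(cps[n])   # entanglement formula
        if iterate_hash(input_n, cps[n].iterations) != cps[n].vdf_output:
            return INVALID("VDF recomputation mismatch at " + n)

    return VALID
]]></artwork>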
    </section>

    <section anchor="vdf-verification-succinct">
      <name>Succinct VDF Verification</name>

      <t>
        For succinct VDFs, verification uses the cryptographic proof:
      </t>

      <ol>
        <li>
          <t>Reconstruct Input:</t>
          <t>
            Compute VDF_input{N} as above.
          </t>
        </li>

        <li>
          <t>Parse Proof:</t>
          <t>
            Decode the proof field according to the algorithm
            specification.
          </t>
        </li>

        <li>
          <t>Verify Proof:</t>
          <t>
            Execute the algorithm-specific verification procedure
            (<xref target="Pietrzak2019"/> or <xref target="Wesolowski2019"/>).
          </t>
        </li>

        <li>
          <t>Verify Duration:</t>
          <t>
            Apply calibration consistency check.
          </t>
        </li>
      </ol>
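
      <t>
        As an illustration, Wesolowski-style verification reduces to a
        single group equation. This sketch omits algorithm-specific
        encoding details; see <xref target="Wesolowski2019"/> for the
        full procedure:
      </t>

      <artwork type="pseudocode"><![CDATA[
verify_wesolowski(x, y, proof, T, group):
    # Fiat-Shamir challenge: a prime derived from input and output
    l = hash_to_prime(x, y)

    # Residue of the exponent 2^T modulo the challenge prime
    r = pow(2, T, l)

    # Accept iff proof^l * x^r == y in the group of unknown order
    return group.mul(group.pow(proof, l), group.pow(x, r)) == y
]]></artwork>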
    </section>
  </section>

  <section anchor="vdf-algorithm-agility">
    <name>Algorithm Agility</name>

    <section anchor="vdf-migration">
      <name>Migration Path</name>

      <t>
        Evidence packets MAY contain checkpoints using different
        VDF algorithms. This enables migration scenarios:
      </t>

      <ul>
        <li>
          Upgrading from iterated-sha256 to iterated-sha3-256
        </li>
        <li>
          Transitioning from iterated hash to succinct VDF
        </li>
        <li>
          Adopting post-quantum secure constructions
        </li>
      </ul>

      <t>
        Algorithm changes SHOULD occur at session boundaries.
        Within a session, using a single algorithm is RECOMMENDED
        for simplicity.
      </t>
    </section>

    <section anchor="vdf-post-quantum">
      <name>Post-Quantum Considerations</name>

      <t>
        Current VDF constructions have varying post-quantum security:
      </t>

      <dl>
        <dt>Iterated Hash (SHA-256, SHA3-256):</dt>
        <dd>
          Grover's algorithm provides a quadratic speedup for
          preimage search. This weakens preimage resistance
          but not the sequential computation property. The
          VDF remains secure with doubled iteration counts.
        </dd>

        <dt>RSA-based (<xref target="Pietrzak2019"/>,
        <xref target="Wesolowski2019"/>):</dt>
        <dd>
          Vulnerable to Shor's algorithm. Not recommended for
          long-term evidence that must remain verifiable in a
          post-quantum era.
        </dd>

        <dt>Class-group based:</dt>
        <dd>
          Based on class group computations in imaginary quadratic
          fields. Quantum security is less well understood but
          believed to be stronger than RSA.
        </dd>
      </dl>

      <t>
        For evidence intended to remain valid for decades,
        iterated hash VDFs are RECOMMENDED.
      </t>
    </section>
  </section>

  <section anchor="vdf-security">
    <name>Security Considerations</name>

    <section anchor="vdf-acceleration">
      <name>Hardware Acceleration Attacks</name>

      <t>
        An adversary with specialized hardware (ASICs, FPGAs) may
        compute VDF iterations faster than the calibrated rate.
        Mitigations:
      </t>

      <ul>
        <li>
          <t>Calibration Reflects Actual Hardware:</t>
          <t>
            Calibration is performed on the actual device, so the
            calibration rate already accounts for any acceleration
            available to the Attester.
          </t>
        </li>

        <li>
          <t>Asymmetric Advantage Limited:</t>
          <t>
            SHA-256 is widely optimized. The speedup from custom
            hardware over commodity CPUs with SHA extensions is
            typically less than 10x.
          </t>
        </li>

        <li>
          <t>Economic Analysis:</t>
          <t>
            The forgery-cost-section quantifies the cost of
            acceleration attacks in terms of hardware investment
            and time.
          </t>
        </li>
      </ul>
    </section>

    <section anchor="vdf-parallelization">
      <name>Parallelization Resistance</name>

      <t>
        VDFs are designed to resist parallelization:
      </t>

      <dl>
        <dt>Iterated Hash:</dt>
        <dd>
          Each iteration depends on the previous output. No
          parallelization is possible without breaking the hash
          function's preimage resistance.
        </dd>

        <dt>Succinct VDFs:</dt>
        <dd>
          Based on repeated squaring in groups with unknown order.
          Parallelization would require factoring the modulus
          (RSA-based) or solving the class group order problem
          (class-group based).
        </dd>
      </dl>

      <t>
        The key insight: an adversary with P processors cannot
        compute the VDF P times faster. The best known attacks
        provide negligible parallelization advantage.
      </t>
    </section>

    <section anchor="vdf-time-memory">
      <name>Time-Memory Tradeoffs</name>

      <t>
        For iterated hash VDFs, an adversary might attempt to
        precompute and store intermediate values:
      </t>

      <ul>
        <li>
          <t>Rainbow Tables:</t>
          <t>
            Precomputing H^n(x) for many x values. Mitigated by
            the unpredictable VDF input (includes content hash
            and jitter commitment).
          </t>
        </li>

        <li>
          <t>Checkpoint Tables:</t>
          <t>
            Storing every k-th intermediate value during legitimate
            computation. Enables faster recomputation from nearby
            checkpoints but does not help with backdating attacks
            (which require computing from a specific starting point).
          </t>
        </li>
      </ul>

      <t>
        No practical time-memory tradeoff significantly reduces
        the sequential computation requirement.
      </t>
    </section>

    <section anchor="vdf-calibration-attacks">
      <name>Calibration Attacks</name>

      <t>
        The calibration system is subject to the following attacks,
        each with a corresponding mitigation:
      </t>

      <dl>
        <dt>Throttled Calibration:</dt>
        <dd>
          <t>
            Adversary intentionally slows device during calibration
            to report lower iterations-per-second, then computes VDFs
            faster than claimed.
          </t>
          <t>
            Mitigation: Plausibility checks based on device class.
            Anomalously slow calibration for a known device model
            triggers Verifier skepticism.
          </t>
        </dd>

        <dt>Calibration Replay:</dt>
        <dd>
          <t>
            Adversary reuses calibration attestation from a slower
            device.
          </t>
          <t>
            Mitigation: Device-nonce binds calibration to session.
            Hardware signature binds to specific device key.
          </t>
        </dd>

        <dt>Device Key Compromise:</dt>
        <dd>
          <t>
            Adversary extracts hardware-bound signing key.
          </t>
          <t>
            Mitigation: Hardware security modules are designed to
            resist key extraction. This attack requires physical
            access and significant resources.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="vdf-timing-attacks">
      <name>Timing Side Channels</name>

      <t>
        VDF computation timing may leak information:
      </t>

      <ul>
        <li>
          <t>Iteration Count Inference:</t>
          <t>
            Network observers may infer iteration counts from
            checkpoint timing. This reveals only what is already
            public in the evidence packet.
          </t>
        </li>

        <li>
          <t>Content Inference:</t>
          <t>
            VDF computation time is independent of document content
            because each checkpoint uses a fixed iteration count, so
            no content is leaked through timing.
          </t>
        </li>
      </ul>

      <t>
        VDF implementations SHOULD use constant-time hash operations
        where available, though timing variations in VDF computation
        itself do not compromise security.
      </t>
    </section>
  </section>

</section>

    <!-- Section 5: Absence Proofs -->
    <section anchor="absence-proofs" xml:base="sections/absence-proofs.xml">
  <name>Absence Proofs: Negative Evidence</name>

  <t>
    This section defines the Absence Proofs mechanism, which enables
    bounded claims about what did NOT occur during document creation.
    Unlike positive evidence (proving something happened), absence
    proofs provide negative evidence (proving something did not happen,
    within defined bounds and trust assumptions).
  </t>

  <section anchor="absence-design-philosophy">
    <name>Design Philosophy</name>

    <t>
      Absence proofs address a fundamental question in process
      attestation: can we make meaningful claims about events that
      did not occur? The answer is nuanced and depends on carefully
      articulated trust boundaries.
    </t>

    <section anchor="absence-value-proposition">
      <name>The Value of Bounded Claims</name>

      <t>
        Traditional evidence systems focus on positive claims: "X
        happened at time T." Absence proofs extend this to negative
        claims: "X did not exceed threshold Y during interval (T1, T2)."
      </t>

      <t>
        The value of bounded claims lies in their falsifiability:
      </t>

      <dl>
        <dt>Positive Claim:</dt>
        <dd>
          "The author typed this document" -- difficult to verify,
          requires trust in the entire authoring environment.
        </dd>

        <dt>Bounded Negative Claim:</dt>
        <dd>
          "No single edit added more than 500 characters" -- verifiable
          directly from the checkpoint chain without additional trust
          assumptions.
        </dd>
      </dl>

      <t>
        Bounded claims shift the burden of proof: instead of claiming
        what DID happen (which requires comprehensive monitoring), we
        claim what did NOT happen (which can be bounded by observable
        evidence).
      </t>
    </section>

    <section anchor="absence-limits">
      <name>Inherent Limits of Negative Evidence</name>

      <t>
        Absence proofs have fundamental limitations that MUST be
        clearly communicated:
      </t>

      <ul>
        <li>
          <t>Monitoring Gaps:</t>
          <t>
            Absence claims are valid only during monitored intervals.
            Gaps in monitoring create gaps in absence guarantees.
          </t>
        </li>

        <li>
          <t>Trust Boundaries:</t>
          <t>
            Some absence claims require trust in the Attesting
            Environment (AE). This trust must be explicitly documented.
          </t>
        </li>

        <li>
          <t>Threshold Semantics:</t>
          <t>
            "No paste above 500 characters" does not imply "no paste."
            Claims are bounded, not absolute.
          </t>
        </li>

        <li>
          <t>Behavioral Consistency, Not Authorship:</t>
          <t>
            Absence claims describe observable behavioral patterns,
            NOT authorship, intent, or cognitive processes. They
            document consistency between declared process and
            observable evidence.
          </t>
        </li>
      </ul>
    </section>
  </section>

  <section anchor="absence-trust-boundary">
    <name>Trust Boundary: Chain-Verifiable vs. Monitoring-Dependent</name>

    <t>
      The critical architectural distinction in absence proofs is
      between claims verifiable from the Evidence alone (trustless)
      and claims that require trust in the Attesting Environment's
      monitoring capabilities.
    </t>

    <section anchor="absence-chain-verifiable-intro">
      <name>Chain-Verifiable Claims (1-15)</name>

      <t>
        Chain-verifiable claims can be verified by any party with
        access to the Evidence packet. No trust in the Attesting
        Environment is required beyond basic data integrity. These
        claims are derived purely from the checkpoint chain structure.
      </t>

      <t>
        A Verifier can independently confirm these claims by:
      </t>

      <ol>
        <li>
          Parsing the checkpoint chain
        </li>
        <li>
          Verifying chain integrity (hashes, MACs, VDF linkage)
        </li>
        <li>
          Computing the relevant metrics from checkpoint data
        </li>
        <li>
          Comparing against the claimed thresholds
        </li>
      </ol>

      <t>
        No interaction with the Attester or trust in its monitoring
        capabilities is needed.
      </t>
    </section>

    <section anchor="absence-monitoring-dependent-intro">
      <name>Monitoring-Dependent Claims (16-63)</name>

      <t>
        Monitoring-dependent claims require trust that the Attesting
        Environment correctly observed and reported specific events.
        These claims cannot be verified from the checkpoint chain
        alone because they depend on real-time monitoring of events
        external to the document state.
      </t>

      <t>
        For monitoring-dependent claims, the Verifier must assess:
      </t>

      <ol>
        <li>
          Whether the AE had the capability to observe the relevant
          events (clipboard access, process enumeration, etc.)
        </li>
        <li>
          Whether the AE was operating with integrity during the
          monitoring period
        </li>
        <li>
          Whether monitoring was continuous or had gaps
        </li>
        <li>
          What attestation (if any) supports the AE integrity claim
        </li>
      </ol>

      <t>
        The ae-trust-basis structure documents these trust assumptions
        explicitly, enabling informed Relying Party decisions.
      </t>
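
      <t>
        These assessment steps can be sketched in pseudocode. The
        confidence mapping shown is illustrative, not normative:
      </t>

      <artwork type="pseudocode"><![CDATA[
appraise_monitoring_claim(claim, coverage, hardware_section):
    # (1) Capability: was the required monitoring active at all?
    if not capability_present(claim.type, coverage):
        return REJECT("Required monitoring not active")

    # (2) Coverage: how large are the monitoring gaps?
    gaps = 1.0 - coverage.coverage_fraction

    # (3) AE integrity: what supports the trust basis?
    attested = claim.ae_trust_basis.verified and
               cross_check(claim.ae_trust_basis, hardware_section)

    # (4) Map evidence strength to a confidence level
    if attested and gaps < 0.05: return ACCEPT(HIGH)
    if gaps < 0.20:              return ACCEPT(MEDIUM)
    return ACCEPT(LOW, caveat = "monitoring gaps or unverified AE")
]]></artwork>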
    </section>

    <section anchor="absence-trust-comparison">
      <name>Trust Model Comparison</name>

      <table>
        <thead>
          <tr>
            <th>Aspect</th>
            <th>Chain-Verifiable</th>
            <th>Monitoring-Dependent</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>Verification</td>
            <td>Independent, trustless</td>
            <td>Requires AE trust</td>
          </tr>
          <tr>
            <td>Data Source</td>
            <td>Checkpoint chain only</td>
            <td>Real-time event monitoring</td>
          </tr>
          <tr>
            <td>Confidence Basis</td>
            <td>Cryptographic proof</td>
            <td>AE integrity attestation</td>
          </tr>
          <tr>
            <td>Forgery Resistance</td>
            <td>Requires VDF recomputation</td>
            <td>Requires AE compromise</td>
          </tr>
          <tr>
            <td>Claim Types</td>
            <td>1-15</td>
            <td>16-63</td>
          </tr>
        </tbody>
      </table>
    </section>
  </section>

  <section anchor="absence-chain-verifiable-claims">
    <name>Chain-Verifiable Claims (Types 1-15)</name>

    <t>
      The following claims can be verified directly from the Evidence
      packet without trusting the Attesting Environment's monitoring
      capabilities:
    </t>

    <table>
      <thead>
        <tr>
          <th>Type</th>
          <th>Claim</th>
          <th>Proves</th>
          <th>Verification Method</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>1</td>
          <td>max-single-delta-chars</td>
          <td>No single checkpoint added more than N characters</td>
          <td>max(delta.chars-added) across all checkpoints</td>
        </tr>
        <tr>
          <td>2</td>
          <td>max-single-delta-bytes</td>
          <td>No single checkpoint added more than N bytes</td>
          <td>Derived from char counts with encoding factor</td>
        </tr>
        <tr>
          <td>3</td>
          <td>max-net-delta-chars</td>
          <td>No single checkpoint had net change exceeding N chars</td>
          <td>max(|chars-added - chars-deleted|) per checkpoint</td>
        </tr>
        <tr>
          <td>4</td>
          <td>min-vdf-duration-seconds</td>
          <td>Total VDF time exceeds N seconds</td>
          <td>sum(claimed-duration) across checkpoints</td>
        </tr>
        <tr>
          <td>5</td>
          <td>min-vdf-duration-per-kchar</td>
          <td>At least N seconds of VDF time per 1000 characters</td>
          <td>total_vdf_seconds / (final_char_count / 1000)</td>
        </tr>
        <tr>
          <td>6</td>
          <td>checkpoint-chain-complete</td>
          <td>No gaps in checkpoint sequence</td>
          <td>Verify sequence numbers are consecutive</td>
        </tr>
        <tr>
          <td>7</td>
          <td>checkpoint-chain-consistent</td>
          <td>All prev-hash values match prior checkpoint-hash</td>
          <td>Verify hash chain linkage</td>
        </tr>
        <tr>
          <td>8</td>
          <td>jitter-entropy-above-threshold</td>
          <td>Captured entropy exceeds N bits</td>
          <td>sum(estimated-entropy-bits) from jitter-binding</td>
        </tr>
        <tr>
          <td>9</td>
          <td>jitter-samples-above-count</td>
          <td>Jitter sample count exceeds N</td>
          <td>sum(sample-count) from jitter-summary</td>
        </tr>
        <tr>
          <td>10</td>
          <td>revision-points-above-count</td>
          <td>Document had at least N revision points</td>
          <td>Count checkpoints where chars-deleted &gt; 0</td>
        </tr>
        <tr>
          <td>11</td>
          <td>session-count-above-threshold</td>
          <td>Evidence spans at least N sessions</td>
          <td>Count distinct session boundaries in chain</td>
        </tr>
      </tbody>
    </table>

    <section anchor="absence-chain-verification-detail">
      <name>Verification Details</name>

      <t>
        For each chain-verifiable claim, the Verifier performs:
      </t>

      <artwork type="pseudocode"><![CDATA[
verify_chain_claim(evidence, claim):
    # (1) Verify chain integrity first
    if not verify_chain_hashes(evidence.checkpoints):
        return INVALID("Chain integrity failure")
    if not verify_vdf_linkage(evidence.checkpoints):
        return INVALID("VDF linkage failure")

    # (2) Compute the metric from checkpoint data
    observed_value = compute_metric(evidence.checkpoints, claim.type)

    # (3) Compare against the claimed threshold
    match claim.type:
        case MAX_SINGLE_DELTA_CHARS:
            passes = (observed_value <= claim.threshold)
        case MIN_VDF_DURATION_SECONDS:
            passes = (observed_value >= claim.threshold)
        # ... analogous comparisons for the remaining claim types

    # (4) Return the verification result
    if passes:
        return PROVEN(observed_value, claim.threshold)
    else:
        return FAILED(observed_value, claim.threshold)
]]></artwork>

      <t>
        The key property: verification depends ONLY on cryptographically
        verifiable checkpoint data, not on any external monitoring
        claims by the AE.
      </t>
    </section>
  </section>

  <section anchor="absence-monitoring-claims">
    <name>Monitoring-Dependent Claims (Types 16-63)</name>

    <t>
      The following claims require trust in the Attesting Environment's
      monitoring capabilities. Each claim documents the specific AE
      capability required and the basis for trusting that capability.
    </t>

    <table>
      <thead>
        <tr>
          <th>Type</th>
          <th>Claim</th>
          <th>AE Capability Required</th>
          <th>Trust Basis</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>16</td>
          <td>max-paste-event-chars</td>
          <td>Clipboard monitoring</td>
          <td>OS-reported paste events</td>
        </tr>
        <tr>
          <td>17</td>
          <td>max-clipboard-access-chars</td>
          <td>Clipboard content access</td>
          <td>Application-level clipboard hooks</td>
        </tr>
        <tr>
          <td>18</td>
          <td>no-paste-from-ai-tool</td>
          <td>Clipboard source attribution</td>
          <td>OS process enumeration + clipboard</td>
        </tr>
        <tr>
          <td>20</td>
          <td>max-insertion-rate-wpm</td>
          <td>Real-time keystroke monitoring</td>
          <td>Input event stream timing</td>
        </tr>
        <tr>
          <td>21</td>
          <td>no-automated-input-pattern</td>
          <td>Input timing analysis</td>
          <td>Statistical pattern recognition</td>
        </tr>
        <tr>
          <td>22</td>
          <td>no-macro-replay-detected</td>
          <td>Input source verification</td>
          <td>OS input subsystem attestation</td>
        </tr>
        <tr>
          <td>32</td>
          <td>no-file-import-above-bytes</td>
          <td>File operation monitoring</td>
          <td>Application file access hooks</td>
        </tr>
        <tr>
          <td>33</td>
          <td>no-external-file-open</td>
          <td>File system monitoring</td>
          <td>OS file access events</td>
        </tr>
        <tr>
          <td>40</td>
          <td>no-concurrent-ai-tool</td>
          <td>Process enumeration</td>
          <td>OS process list attestation</td>
        </tr>
        <tr>
          <td>41</td>
          <td>no-llm-api-traffic</td>
          <td>Network traffic monitoring</td>
          <td>Network stack inspection</td>
        </tr>
        <tr>
          <td>48</td>
          <td>max-idle-gap-seconds</td>
          <td>Continuous activity monitoring</td>
          <td>Input event stream continuity</td>
        </tr>
        <tr>
          <td>49</td>
          <td>active-time-above-threshold</td>
          <td>Active time measurement</td>
          <td>Input event correlation</td>
        </tr>
      </tbody>
    </table>

    <section anchor="absence-ae-trust-documentation">
      <name>Trust Basis Documentation</name>

      <t>
        Each monitoring-dependent claim MUST include an ae-trust-basis
        structure that documents the trust assumptions:
      </t>

      <artwork type="cddl"><![CDATA[
ae-trust-basis = {
    1 => ae-trust-target,   ; trust-target
    2 => tstr,              ; justification
    3 => bool,              ; verified
}

ae-trust-target = &(
    witnessd-software-integrity: 1,
    os-reported-events: 2,
    application-reported-events: 3,
    tpm-attested-elsewhere: 16,
    se-attested-elsewhere: 17,
    unverified-assumption: 32,
)
]]></artwork>

      <dl>
        <dt>witnessd-software-integrity (1):</dt>
        <dd>
          Trust that the witnessd software itself is unmodified and
          correctly implements monitoring. Requires software attestation
          or code signing verification.
        </dd>

        <dt>os-reported-events (2):</dt>
        <dd>
          Trust that the operating system correctly reports events
          (clipboard, process list, file access). Requires OS integrity.
        </dd>

        <dt>application-reported-events (3):</dt>
        <dd>
          Trust that the authoring application correctly reports events.
          Weakest trust level; application may be compromised.
        </dd>

        <dt>tpm-attested-elsewhere (16):</dt>
        <dd>
          TPM attestation of the AE state exists in the
          hardware-section. Cross-reference for verification.
        </dd>

        <dt>se-attested-elsewhere (17):</dt>
        <dd>
          Secure Enclave attestation of the AE state exists in the
          hardware-section. Cross-reference for verification.
        </dd>

        <dt>unverified-assumption (32):</dt>
        <dd>
          The claim is based on assumptions that cannot be verified.
          Relying Party must decide whether to accept based on context.
        </dd>
      </dl>

      <t>
        The justification field provides human-readable explanation
        of why the trust basis is believed adequate. The verified
        field indicates whether the trust basis was cryptographically
        verified (true) or merely assumed (false).
      </t>
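
      <t>
        A hypothetical ae-trust-basis instance, shown in CBOR
        diagnostic notation (the justification text is illustrative):
      </t>

      <artwork type="cbor-diag"><![CDATA[
{
    1: 2,                / trust-target: os-reported-events    /
    2: "OS pasteboard events; OS integrity assumed",
    3: false             / verified: merely assumed, not proven /
}
]]></artwork>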
    </section>
  </section>

  <section anchor="absence-monitoring-coverage">
    <name>Monitoring Coverage</name>

    <t>
      Honest documentation of monitoring gaps is essential for
      meaningful absence claims. The monitoring-coverage structure
      captures the scope and limitations of AE monitoring.
    </t>

    <artwork type="cddl"><![CDATA[
monitoring-coverage = {
    1 => bool,                  ; keyboard-monitored
    2 => bool,                  ; clipboard-monitored
    3 => [+ time-interval],     ; monitoring-intervals
    4 => float32,               ; coverage-fraction
    ? 5 => hardware-attestation, ; monitoring-attestation
}

time-interval = {
    1 => pop-timestamp,         ; start
    2 => pop-timestamp,         ; end
}
]]></artwork>

    <section anchor="absence-coverage-fields">
      <name>Coverage Fields</name>

      <dl>
        <dt>keyboard-monitored (key 1):</dt>
        <dd>
          Boolean indicating whether keyboard input events were
          monitored during the session. If false, claims about
          typing patterns (20-22) cannot be made.
        </dd>

        <dt>clipboard-monitored (key 2):</dt>
        <dd>
          Boolean indicating whether clipboard operations were
          monitored. If false, claims about paste events (16-18)
          cannot be made.
        </dd>

        <dt>monitoring-intervals (key 3):</dt>
        <dd>
          Array of time intervals during which monitoring was active.
          Gaps between intervals represent periods where monitoring
          was suspended (application backgrounded, system sleep, etc.).
        </dd>

        <dt>coverage-fraction (key 4):</dt>
        <dd>
          Fraction of total session time covered by monitoring,
          calculated as sum(interval_duration) / total_session_duration.
          Values below 0.95 indicate significant monitoring gaps
          that may affect absence claim confidence.
        </dd>

        <dt>monitoring-attestation (key 5, optional):</dt>
        <dd>
          Hardware attestation that monitoring was active during
          the claimed intervals. Provides stronger assurance than
          self-reported coverage.
        </dd>
      </dl>
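
      <t>
        The coverage-fraction computation can be sketched as follows,
        assuming non-overlapping monitoring intervals:
      </t>

      <artwork type="pseudocode"><![CDATA[
coverage_fraction(intervals, session_start, session_end):
    covered = 0
    for iv in intervals:
        # Clamp each interval to the session bounds before summing
        start = max(iv.start, session_start)
        end   = min(iv.end, session_end)
        if end > start:
            covered = covered + (end - start)
    return covered / (session_end - session_start)
]]></artwork>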
    </section>

    <section anchor="absence-gap-semantics">
      <name>Gap Semantics</name>

      <t>
        Monitoring gaps have explicit semantic impact on absence claims:
      </t>

      <ul>
        <li>
          <t>Covered Intervals:</t>
          <t>
            Absence claims apply fully during covered intervals.
            "No paste above 500 chars during (T1, T2)" means the
            AE would have detected any such paste.
          </t>
        </li>

        <li>
          <t>Gap Intervals:</t>
          <t>
            During gaps, monitoring-dependent claims cannot be made.
            An event could have occurred unobserved.
          </t>
        </li>

        <li>
          <t>Gap-Aware Claims:</t>
          <t>
            If coverage-fraction is below 1.0, absence claims SHOULD
            include a caveat noting the monitoring gap percentage.
          </t>
        </li>
      </ul>

      <t>
        Chain-verifiable claims (1-15) are NOT affected by monitoring
        gaps because they are derived from the checkpoint chain, which
        has no gaps (checkpoint-chain-complete verifies this).
      </t>
    </section>
  </section>

  <section anchor="absence-structure">
    <name>Absence Section Structure</name>

    <t>
      The absence-section appears as an optional field (key 15) in
      the evidence-packet:
    </t>

    <artwork type="cddl"><![CDATA[
absence-section = {
    1 => monitoring-coverage,     ; monitoring-coverage
    2 => [+ absence-claim],       ; claims
    ? 3 => claim-summary,         ; claim-summary
}

claim-summary = {
    1 => uint,                    ; chain-verifiable-count
    2 => uint,                    ; monitoring-dependent-count
    3 => bool,                    ; all-claims-attested
}

absence-claim = {
    1 => absence-claim-type,      ; claim-type
    2 => absence-threshold,       ; threshold
    3 => absence-proof,           ; proof
    4 => absence-confidence,      ; confidence
    ? 5 => ae-trust-basis,        ; ae-trust-basis (monitoring)
}

absence-threshold = {
    1 => uint / float32 / null,   ; value
}

absence-proof = {
    1 => absence-proof-method,    ; proof-method
    2 => absence-evidence,        ; evidence
}

absence-proof-method = &(
    checkpoint-chain-analysis: 1,
    keystroke-analysis: 2,
    platform-attestation: 3,
    network-attestation: 4,
    statistical-inference: 5,
)

absence-evidence = {
    ? 1 => [uint, uint],          ; checkpoint-range
    ? 2 => uint,                  ; max-observed-value
    ? 3 => float32,               ; max-observed-rate
    ? 4 => tstr,                  ; statistical-test
    ? 5 => float32,               ; p-value
    ? 6 => bstr,                  ; attestation-ref
}

absence-confidence = {
    1 => confidence-level,        ; level
    2 => [* tstr],                ; caveats
}

confidence-level = &(
    proven: 1,
    high: 2,
    medium: 3,
    low: 4,
)
]]></artwork>

    <section anchor="absence-confidence-levels">
      <name>Confidence Levels</name>

      <dl>
        <dt>proven (1):</dt>
        <dd>
          The claim is cryptographically provable from the Evidence.
          Only chain-verifiable claims (1-15) can achieve this level.
        </dd>

        <dt>high (2):</dt>
        <dd>
          Strong evidence supports the claim. For monitoring-dependent
          claims, requires hardware attestation of AE integrity and
          high monitoring coverage (&gt;95%).
        </dd>

        <dt>medium (3):</dt>
        <dd>
          Reasonable evidence supports the claim. AE integrity is
          assumed but not hardware-attested. Monitoring coverage
          is acceptable (&gt;80%).
        </dd>

        <dt>low (4):</dt>
        <dd>
          Weak evidence supports the claim. Significant caveats apply.
          Monitoring gaps exist or AE trust basis is unverified.
        </dd>
      </dl>
    </section>
  </section>

  <section anchor="absence-verification-procedure">
    <name>Verification Procedure</name>

    <t>
      A Verifier appraises absence claims through a structured
      procedure that distinguishes chain-verifiable from
      monitoring-dependent claims:
    </t>

    <section anchor="absence-verify-chain">
      <name>Step 1: Verify Chain-Verifiable Claims</name>

      <t>
        For claims with types 1-15:
      </t>

      <ol>
        <li>
          <t>Verify Evidence Integrity:</t>
          <t>
            Verify checkpoint chain hashes, VDF linkage, and
            structural validity per the base protocol.
          </t>
        </li>

        <li>
          <t>Extract Metrics:</t>
          <t>
            Compute the relevant metric from checkpoint data
            (e.g., max delta chars, total VDF duration).
          </t>
        </li>

        <li>
          <t>Compare Threshold:</t>
          <t>
            Verify the computed metric satisfies the claimed threshold.
          </t>
        </li>

        <li>
          <t>Assign Confidence:</t>
          <t>
            Chain-verifiable claims that pass receive confidence
            level "proven" (1).
          </t>
        </li>
      </ol>
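      <t>
        As a minimal sketch, the four steps for claim type 1
        (max-single-delta-chars) might look like this; the
        checkpoint field name delta_chars is a hypothetical
        stand-in for the metric extracted from the chain:
      </t>

      <sourcecode type="python"><![CDATA[
PROVEN = 1  # confidence-level "proven"

def verify_max_single_delta(checkpoints: list, threshold: int) -> dict:
    """Recompute the metric from checkpoint data, compare it to
    the claimed threshold, and assign confidence (illustrative)."""
    observed = max(cp["delta_chars"] for cp in checkpoints)
    holds = observed <= threshold
    return {
        "claim_type": 1,  # max-single-delta-chars
        "verified": holds,
        "confidence": PROVEN if holds else None,
        "detail": f"max observed delta {observed}, threshold {threshold}",
    }
]]></sourcecode>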
    </section>

    <section anchor="absence-verify-monitoring">
      <name>Step 2: Appraise Monitoring-Dependent Claims</name>

      <t>
        For claims with types 16-63:
      </t>

      <ol>
        <li>
          <t>Assess AE Trust Basis:</t>
          <t>
            Examine the ae-trust-basis for each claim. Determine
            whether the trust target is appropriate for the claim
            type and whether it was verified.
          </t>
        </li>

        <li>
          <t>Evaluate Monitoring Coverage:</t>
          <t>
            Check monitoring-coverage to determine whether the
            relevant monitoring was active. Verify coverage-fraction
            is adequate for the confidence level claimed.
          </t>
        </li>

        <li>
          <t>Cross-Reference Hardware Attestation:</t>
          <t>
            If ae-trust-target is tpm-attested-elsewhere (16) or
            se-attested-elsewhere (17), verify the corresponding
            attestation exists in hardware-section.
          </t>
        </li>

        <li>
          <t>Evaluate Evidence:</t>
          <t>
            Examine the absence-evidence for supporting data.
            Statistical tests should have appropriate p-values;
            attestation references should be verifiable.
          </t>
        </li>

        <li>
          <t>Assign Confidence:</t>
          <t>
            Based on the above factors, assign a confidence level
            (2-4). Level 1 (proven) is NOT available for
            monitoring-dependent claims.
          </t>
        </li>

        <li>
          <t>Document Caveats:</t>
          <t>
            Record any limitations or assumptions in the caveats
            array of the verification result.
          </t>
        </li>
      </ol>
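      <t>
        A simplified confidence-assignment heuristic, following the
        level definitions in this document (hardware-attested AE
        plus high coverage for "high", assumed integrity plus
        acceptable coverage for "medium"), could be sketched as:
      </t>

      <sourcecode type="python"><![CDATA[
HIGH, MEDIUM, LOW = 2, 3, 4
HW_ATTESTED = {16, 17}  # tpm-attested-elsewhere, se-attested-elsewhere

def assign_confidence(trust_target: int, coverage: float) -> int:
    """Map AE trust basis and monitoring coverage to a confidence
    level for a monitoring-dependent claim (heuristic sketch;
    "proven" is never reachable here)."""
    if trust_target in HW_ATTESTED and coverage > 0.95:
        return HIGH
    if coverage > 0.80:
        return MEDIUM
    return LOW
]]></sourcecode>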
    </section>

    <section anchor="absence-verify-summary">
      <name>Step 3: Produce Verification Summary</name>

      <t>
        The Verifier produces a result-claim for each absence-claim
        examined:
      </t>

      <artwork type="cddl"><![CDATA[
result-claim = {
    1 => uint,                ; claim-type
    2 => bool,                ; verified (claim holds)
    ? 3 => tstr,              ; detail (reasoning)
    ? 4 => confidence-level,  ; claim-confidence
}
]]></artwork>

      <t>
        The aggregated results appear in the attestation-result
        (.war file) as the verified-claims array.
      </t>
    </section>
  </section>

  <section anchor="absence-rats-mapping">
    <name>RATS Architecture Mapping</name>

    <t>
      Absence proofs extend the RATS (Remote ATtestation procedureS)
      evidence model <xref target="RFC9334"/> in several ways:
    </t>

    <section anchor="absence-rats-roles">
      <name>Role Distribution</name>

      <dl>
        <dt>Attester Responsibility:</dt>
        <dd>
          <t>
            The Attester (witnessd AE) generates absence claims
            based on its monitoring observations. For chain-verifiable
            claims, the Attester merely assembles checkpoint data
            in a format that enables Verifier computation. For
            monitoring-dependent claims, the Attester makes assertions
            about events it observed (or did not observe).
          </t>
        </dd>

        <dt>Verifier Responsibility:</dt>
        <dd>
          <t>
            The Verifier independently verifies chain-verifiable
            claims by recomputing metrics from Evidence. For
            monitoring-dependent claims, the Verifier appraises
            the trust basis and determines whether to accept the
            Attester's monitoring assertions.
          </t>
        </dd>

        <dt>Relying Party Responsibility:</dt>
        <dd>
          <t>
            The Relying Party consumes the attestation-result
            (.war file) and decides whether the verified claims
            meet their requirements. Different use cases may
            require different confidence levels or claim types.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="absence-rats-evidence-extension">
      <name>Evidence Model Extension</name>

      <t>
        Standard RATS evidence attests to system state (software
        versions, configuration). Absence proofs add a new category:
      </t>

      <dl>
        <dt>State Evidence (traditional RATS):</dt>
        <dd>
          "The system was in configuration C at time T."
        </dd>

        <dt>Behavioral Consistency Evidence (absence proofs):</dt>
        <dd>
          "Observable behavior during interval (T1, T2) was consistent
          with constraint X."
        </dd>
      </dl>

      <t>
        This extension enables attestation about processes, not just
        states. The checkpoint chain provides the evidentiary basis
        for process claims that would otherwise require continuous
        trusted monitoring.
      </t>
    </section>

    <section anchor="absence-rats-appraisal-policy">
      <name>Appraisal Policy Integration</name>

      <t>
        Verifiers MAY define appraisal policies that specify:
      </t>

      <ul>
        <li>
          Which absence claim types are required for acceptance
        </li>
        <li>
          Minimum confidence levels for each claim type
        </li>
        <li>
          Required trust basis for monitoring-dependent claims
        </li>
        <li>
          Minimum monitoring coverage thresholds
        </li>
      </ul>

      <t>
        Example policy (informative):
      </t>

      <artwork><![CDATA[
policy:
  required_claims:
    - type: 1   # max-single-delta-chars
      threshold: 500
      min_confidence: proven
    - type: 4   # min-vdf-duration-seconds
      threshold: 3600
      min_confidence: proven
    - type: 16  # max-paste-event-chars
      threshold: 200
      min_confidence: high
      required_trust_basis: [1, 16, 17]  # SE or TPM attested
  min_monitoring_coverage: 0.95
]]></artwork>
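      <t>
        A Verifier-side appraisal of result-claims against such a
        policy might be sketched as below; the structures mirror
        the example policy and the result-claim map, with
        illustrative Python field names:
      </t>

      <sourcecode type="python"><![CDATA[
LEVEL = {"proven": 1, "high": 2, "medium": 3, "low": 4}

def policy_satisfied(policy: dict, results: list,
                     coverage: float) -> bool:
    """Check verified result-claims against an appraisal policy
    (illustrative; lower numeric level = stronger confidence)."""
    if coverage < policy.get("min_monitoring_coverage", 0.0):
        return False
    by_type = {r["claim_type"]: r for r in results}
    for req in policy["required_claims"]:
        r = by_type.get(req["type"])
        if r is None or not r["verified"]:
            return False
        if r["confidence"] > LEVEL[req["min_confidence"]]:
            return False
    return True
]]></sourcecode>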
    </section>
  </section>

  <section anchor="absence-security">
    <name>Security Considerations</name>

    <section anchor="absence-security-limits">
      <name>What Absence Claims Do NOT Prove</name>

      <t>
        Absence claims have explicit limits that MUST be understood
        by all parties:
      </t>

      <dl>
        <dt>Absence claims do NOT prove authorship:</dt>
        <dd>
          "No single edit added more than 500 characters" does not
          prove who performed the edits. It proves only that the
          observable edit pattern had this property.
        </dd>

        <dt>Absence claims do NOT prove intent:</dt>
        <dd>
          "No paste from AI tool detected" does not prove the author
          intended to write without AI assistance. The author may
          have used AI tools in ways that evade detection.
        </dd>

        <dt>Absence claims do NOT prove cognitive process:</dt>
        <dd>
          Behavioral patterns consistent with human typing do not
          prove human cognition produced the content. The claims
          describe observable behavior, not mental states.
        </dd>

        <dt>Absence claims do NOT prove completeness:</dt>
        <dd>
          Claims apply only to monitored intervals. Events during
          monitoring gaps are not covered by absence claims.
        </dd>
      </dl>

      <t>
        Framing claims as "behavioral consistency" rather than
        "human authorship" avoids overclaiming and maintains
        intellectual honesty about what the evidence actually shows.
      </t>
    </section>

    <section anchor="absence-security-ae-compromise">
      <name>Attesting Environment Compromise</name>

      <t>
        Monitoring-dependent claims are only as trustworthy as the
        Attesting Environment:
      </t>

      <ul>
        <li>
          <t>Software Compromise:</t>
          <t>
            Modified witnessd software could fabricate monitoring
            observations. Mitigated by code signing and software
            attestation.
          </t>
        </li>

        <li>
          <t>OS Compromise:</t>
          <t>
            Compromised OS could report false clipboard contents
            or process lists. Mitigated by hardware attestation
            of OS integrity.
          </t>
        </li>

        <li>
          <t>Hardware Compromise:</t>
          <t>
            Physical access to the device could enable hardware-level
            attacks. This is outside the threat model for most
            use cases.
          </t>
        </li>
      </ul>

      <t>
        The ae-trust-basis structure explicitly documents which
        trust assumptions apply, enabling Relying Parties to make
        informed decisions about acceptable risk.
      </t>
    </section>

    <section anchor="absence-security-evasion">
      <name>Monitoring Evasion</name>

      <t>
        Sophisticated adversaries may attempt to evade monitoring:
      </t>

      <dl>
        <dt>Timing-Based Evasion:</dt>
        <dd>
          Performing prohibited actions during monitoring gaps.
          Mitigated by high coverage requirements and gap
          documentation.
        </dd>

        <dt>Tool-Based Evasion:</dt>
        <dd>
          Using tools not in the detection list (e.g., novel
          AI tools). The claim "no-concurrent-ai-tool" refers
          to known tools; unknown tools may evade detection.
        </dd>

        <dt>Channel-Based Evasion:</dt>
        <dd>
          Using alternative input channels (screen readers,
          accessibility features) not monitored by the AE.
          Mitigated by comprehensive input monitoring.
        </dd>

        <dt>Simulation:</dt>
        <dd>
          Generating input patterns that mimic human behavior.
          The jitter-seal and VDF mechanisms make this costly
          but not impossible; see <xref target="forgery-cost-bounds"/>.
        </dd>
      </dl>

      <t>
        Absence proofs do not claim to make evasion impossible,
        only to make it costly and to document the monitoring
        coverage that was actually achieved.
      </t>
    </section>

    <section anchor="absence-security-statistical">
      <name>Statistical Claim Limitations</name>

      <t>
        Claims based on statistical inference (proof-method 5)
        have inherent uncertainty:
      </t>

      <ul>
        <li>
          p-values indicate probability, not certainty
        </li>
        <li>
          Multiple testing increases false positive risk
        </li>
        <li>
          Adversarial inputs may exploit statistical assumptions
        </li>
      </ul>

      <t>
        Statistical claims SHOULD be assigned confidence level
        "medium" (3) or "low" (4) unless supported by additional
        evidence.
      </t>
    </section>
  </section>

  <section anchor="absence-privacy">
    <name>Privacy Considerations</name>

    <t>
      Absence claims may reveal information about the authoring
      process:
    </t>

    <ul>
      <li>
        <t>Edit Pattern Disclosure:</t>
        <t>
          Chain-verifiable claims reveal aggregate statistics about
          edit sizes and frequencies. This is inherent in the
          checkpoint chain and cannot be hidden without removing
          the evidentiary basis for claims.
        </t>
      </li>

      <li>
        <t>Tool Usage Disclosure:</t>
        <t>
          Claims like "no-concurrent-ai-tool" implicitly reveal
          that the AE was monitoring for AI tool usage. Users
          should be informed of this monitoring.
        </t>
      </li>

      <li>
        <t>Behavioral Fingerprinting:</t>
        <t>
          Detailed jitter data and monitoring observations could
          theoretically enable behavioral fingerprinting. The
          histogram aggregation in jitter-binding mitigates this
          for timing data.
        </t>
      </li>
    </ul>

    <t>
      Users SHOULD be informed which absence claims will be
      generated and have the option to disable specific monitoring
      capabilities if privacy concerns outweigh the value of
      those claims.
    </t>
  </section>

</section>

    <!-- Section 6: Forgery Cost Bounds -->
    <section anchor="forgery-cost-bounds"
             xml:base="sections/forgery-cost-bounds.xml">
  <name>Forgery Cost Bounds (Quantified Security)</name>

  <t>
    This section defines the forgery cost bounds mechanism, which
    provides quantified security analysis for Proof of Process evidence.
    Rather than claiming evidence is "secure" or "insecure" in absolute
    terms, this framework expresses security as minimum resource costs
    that an adversary must expend to produce counterfeit evidence.
  </t>

  <section anchor="fcb-design-philosophy">
    <name>Design Philosophy</name>

    <t>
      Traditional security claims are often binary: a system is either
      "secure" or "broken." This framing poorly serves attestation
      scenarios where:
    </t>

    <ul>
      <li>
        Adversary capabilities vary across resource levels
      </li>
      <li>
        Evidence value degrades gracefully rather than failing
        completely
      </li>
      <li>
        Relying Parties have different risk tolerances
      </li>
      <li>
        Hardware costs and computational speeds change over time
      </li>
    </ul>

    <t>
      The Proof of Process framework adopts quantified security:
      expressing security guarantees in terms of measurable costs
      (time, entropy, economic resources) that bound adversary
      capabilities.
    </t>

    <section anchor="fcb-cost-asymmetry">
      <name>Cost-Asymmetric Forgery</name>

      <t>
        The design goal is cost asymmetry: producing genuine evidence
        should be inexpensive (a natural byproduct of authoring), while
        producing counterfeit evidence should require resources
        disproportionate to any benefit gained.
      </t>

      <t>
        Cost asymmetry is achieved through three mechanisms:
      </t>

      <dl>
        <dt>Time Asymmetry (VDF):</dt>
        <dd>
          Genuine evidence accumulates VDF proofs during natural
          authoring time. Forgery requires recomputing the VDF chain,
          which cannot be parallelized. The adversary must spend
          wall-clock time proportional to the claimed authoring
          duration.
        </dd>

        <dt>Entropy Asymmetry (Jitter Seal):</dt>
        <dd>
          Genuine evidence captures behavioral entropy that exists only
          at the moment of observation. Forgery requires either guessing
          the entropy commitment (computationally infeasible) or
          simulating human input patterns in real time (bounded by
          the same VDF constraints).
        </dd>

        <dt>Economic Asymmetry (Resource Cost):</dt>
        <dd>
          The combined time and entropy requirements translate to
          economic costs. Forging evidence for a 10-hour authoring
          session requires at minimum 10 hours of compute time plus
          specialized resources, making mass forgery economically
          impractical.
        </dd>
      </dl>
    </section>

    <section anchor="fcb-non-claims">
      <name>What Forgery Cost Bounds Do NOT Claim</name>

      <t>
        Forgery cost bounds explicitly avoid claims that evidence is:
      </t>

      <ul>
        <li>
          <strong>Unforgeable:</strong> Given sufficient resources, any
          evidence can be forged. The bounds quantify "sufficient."
        </li>
        <li>
          <strong>Guaranteed authentic:</strong> Bounds express minimum
          forgery costs, not maximum. Cheaper attacks may exist that
          have not been discovered.
        </li>
        <li>
          <strong>Irrefutable proof:</strong> Evidence supports claims
          with quantified confidence, not mathematical certainty.
        </li>
        <li>
          <strong>Permanent:</strong> Cost bounds depreciate as hardware
          improves. Evidence verified today may have different bounds
          when re-evaluated in the future.
        </li>
      </ul>
    </section>
  </section>

  <section anchor="fcb-structure">
    <name>Forgery Cost Section Structure</name>

    <t>
      The forgery-cost-section appears in each evidence packet and
      contains four required components:
    </t>

    <artwork type="cddl"><![CDATA[
forgery-cost-section = {
    1 => time-bound,           ; time-bound
    2 => entropy-bound,        ; entropy-bound
    3 => economic-bound,       ; economic-bound
    4 => security-statement,   ; security-statement
}
]]></artwork>

    <t>
      The time, entropy, and economic bounds represent orthogonal
      dimensions of forgery cost; the security-statement summarizes
      them. A complete security assessment considers all three
      dimensions.
    </t>
  </section>

  <section anchor="fcb-time-bound">
    <name>Time Bound</name>

    <t>
      The time-bound quantifies the minimum wall-clock time required to
      recompute the VDF chain, establishing a lower bound on forgery
      duration.
    </t>

    <artwork type="cddl"><![CDATA[
time-bound = {
    1 => uint,                 ; total-iterations
    2 => uint,                 ; calibration-rate
    3 => tstr,                 ; reference-hardware
    4 => float32,              ; min-recompute-seconds
    5 => bool,                 ; parallelizable
    ? 6 => uint,               ; max-parallelism
}
]]></artwork>

    <section anchor="fcb-time-fields">
      <name>Field Definitions</name>

      <dl>
        <dt>total-iterations (key 1):</dt>
        <dd>
          Sum of all VDF iterations across all checkpoints in the
          evidence packet, computed as
          sum(checkpoint[i].vdf-proof.iterations) for all i.
        </dd>

        <dt>calibration-rate (key 2):</dt>
        <dd>
          The attested iterations-per-second from the calibration
          attestation. This represents the maximum VDF computation
          speed on the Attesting Environment's hardware.
        </dd>

        <dt>reference-hardware (key 3):</dt>
        <dd>
          Human-readable description of the hardware used for
          calibration (e.g., "Apple M2 Pro", "Intel i9-13900K").
          Used for plausibility assessment, not verification.
        </dd>

        <dt>min-recompute-seconds (key 4):</dt>
        <dd>
          Minimum wall-clock seconds required to recompute the VDF
          chain on reference hardware, calculated as
          total-iterations / calibration-rate.
          This is a lower bound: actual recomputation on slower
          hardware takes longer.
        </dd>

        <dt>parallelizable (key 5):</dt>
        <dd>
          Boolean indicating whether the VDF algorithm permits
          parallelization. For iterated hash VDFs (algorithms 1-15),
          this is always false. For certain succinct VDF constructions,
          limited parallelization may be possible.
        </dd>

        <dt>max-parallelism (key 6, optional):</dt>
        <dd>
          If parallelizable is true, the maximum parallel speedup
          factor. For iterated hash VDFs, this field is absent.
        </dd>
      </dl>
    </section>

    <section anchor="fcb-time-verification">
      <name>Time Bound Verification</name>

      <t>
        A Verifier computes and validates the time bound as follows:
      </t>

      <ol>
        <li>
          <t>Sum Iterations:</t>
          <t>
            Traverse all checkpoints and sum the iterations field from
            each VDF proof.
          </t>
        </li>

        <li>
          <t>Verify Calibration:</t>
          <t>
            If calibration attestation is present, verify the hardware
            signature and check that calibration-rate matches the
            attested iterations-per-second.
          </t>
        </li>

        <li>
          <t>Compute Minimum Time:</t>
          <t>
            Divide total-iterations by calibration-rate. Verify the
            result matches min-recompute-seconds within floating-point
            tolerance.
          </t>
        </li>

        <li>
          <t>Plausibility Check:</t>
          <t>
            Verify min-recompute-seconds is consistent with the claimed
            authoring duration. Significant discrepancy (e.g., 10-hour
            claimed session with 1-minute VDF time) indicates either
            misconfiguration or potential manipulation.
          </t>
        </li>
      </ol>
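      <t>
        The verification steps above can be sketched as follows;
        the 0.5 plausibility ratio is an illustrative threshold,
        not a normative value:
      </t>

      <sourcecode type="python"><![CDATA[
import math

def check_time_bound(total_iterations: int, calibration_rate: int,
                     claimed_min_seconds: float,
                     claimed_session_seconds: float) -> list:
    """Recompute min-recompute-seconds and flag implausible
    values (sketch)."""
    issues = []
    expected = total_iterations / calibration_rate
    if not math.isclose(expected, claimed_min_seconds, rel_tol=1e-4):
        issues.append("min-recompute-seconds mismatch")
    # plausibility: VDF time should roughly track the session length
    if expected < 0.5 * claimed_session_seconds:
        issues.append("VDF time implausibly small for claimed duration")
    return issues
]]></sourcecode>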
    </section>

    <section anchor="fcb-time-parallelization">
      <name>Parallelization Resistance</name>

      <t>
        The security of time bounds depends on VDF parallelization
        resistance. For iterated hash VDFs:
      </t>

      <ul>
        <li>
          Each iteration depends on the previous output
        </li>
        <li>
          No known technique computes H^n(x) faster than n sequential
          hash operations
        </li>
        <li>
          An adversary with P processors cannot compute the chain P
          times faster
        </li>
      </ul>

      <t>
        This property ensures that time bounds reflect wall-clock time,
        not aggregate compute time. An adversary with a data center
        cannot forge 10 hours of evidence in 10 minutes by using 60x
        more processors.
      </t>
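      <t>
        The sequential structure is visible in a direct sketch of
        iterated-hash VDF evaluation (here using SHA-256): each
        step consumes the previous digest, so no step can begin
        before its predecessor finishes.
      </t>

      <sourcecode type="python"><![CDATA[
import hashlib

def iterated_hash_vdf(seed: bytes, iterations: int) -> bytes:
    """Evaluate H^n(seed); the data dependency between iterations
    is what defeats parallel speedup (illustrative)."""
    x = seed
    for _ in range(iterations):
        x = hashlib.sha256(x).digest()
    return x
]]></sourcecode>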

      <t>
        See <xref target="vdf-parallelization"/> for detailed analysis
        of parallelization resistance in each VDF algorithm.
      </t>
    </section>
  </section>

  <section anchor="fcb-entropy-bound">
    <name>Entropy Bound</name>

    <t>
      The entropy-bound quantifies the unpredictability in the evidence
      chain, establishing a lower bound on the probability of guessing
      or replaying entropy commitments.
    </t>

    <artwork type="cddl"><![CDATA[
entropy-bound = {
    1 => float32,              ; total-entropy-bits
    2 => uint,                 ; sample-count
    3 => float32,              ; entropy-per-sample
    4 => float32,              ; brute-force-probability
    5 => bool,                 ; replay-possible
    ? 6 => tstr,               ; replay-prevention
}
]]></artwork>

    <section anchor="fcb-entropy-fields">
      <name>Field Definitions</name>

      <dl>
        <dt>total-entropy-bits (key 1):</dt>
        <dd>
          Aggregate entropy across all Jitter Seals in the evidence
          packet, expressed in bits. Computed as
          sum(jitter-summary[i].estimated-entropy-bits) for all i.
        </dd>

        <dt>sample-count (key 2):</dt>
        <dd>
          Total number of timing samples captured across all Jitter
          Seals. Higher sample counts increase confidence in the
          entropy estimate.
        </dd>

        <dt>entropy-per-sample (key 3):</dt>
        <dd>
          Average entropy contribution per timing sample, calculated as
          total-entropy-bits / sample-count. Typical human typing
          contributes 2-4 bits per inter-key interval.
        </dd>

        <dt>brute-force-probability (key 4):</dt>
        <dd>
          Probability of successfully guessing the entropy commitment
          by brute force, calculated as 2^(-total-entropy-bits).
          For 64 bits of entropy, this is approximately 5.4 x 10^-20.
        </dd>

        <dt>replay-possible (key 5):</dt>
        <dd>
          Boolean indicating whether Jitter Seal replay is theoretically
          possible. This is false when VDF entanglement is properly
          configured (entropy commitment appears in VDF input).
        </dd>

        <dt>replay-prevention (key 6, optional):</dt>
        <dd>
          Human-readable description of replay prevention mechanisms.
          Typical value: "VDF entanglement with prev-checkpoint
          binding".
        </dd>
      </dl>
    </section>

    <section anchor="fcb-entropy-verification">
      <name>Entropy Bound Verification</name>

      <t>
        A Verifier computes and validates the entropy bound as follows:
      </t>

      <ol>
        <li>
          <t>Aggregate Entropy:</t>
          <t>
            Sum estimated-entropy-bits from each checkpoint's
            jitter-summary. Verify the total matches total-entropy-bits.
          </t>
        </li>

        <li>
          <t>Count Samples:</t>
          <t>
            Sum sample-count from each jitter-summary. Verify
            consistency with the claimed sample-count.
          </t>
        </li>

        <li>
          <t>Verify Entropy Estimates:</t>
          <t>
            If raw-intervals are disclosed, recompute the histogram
            and Shannon entropy. Verify consistency with the claimed
            entropy estimate.
          </t>
        </li>

        <li>
          <t>Check Replay Prevention:</t>
          <t>
            Verify each entropy-commitment appears in the corresponding
            VDF input. If VDF entanglement is absent, set
            replay-possible to true.
          </t>
        </li>

        <li>
          <t>Compute Brute-Force Probability:</t>
          <t>
            Calculate 2^(-total-entropy-bits) and verify it matches
            the claimed brute-force-probability within floating-point
            tolerance.
          </t>
        </li>
      </ol>
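      <t>
        Steps 1, 2, and 5 above reduce to simple recomputation; a
        sketch with illustrative (non-normative) field names
        follows:
      </t>

      <sourcecode type="python"><![CDATA[
import math

def check_entropy_bound(jitter_summaries: list, claimed: dict) -> bool:
    """Recompute aggregate entropy figures and compare them to the
    claimed entropy-bound (sketch; field names are not normative)."""
    total_bits = sum(j["entropy_bits"] for j in jitter_summaries)
    samples = sum(j["sample_count"] for j in jitter_summaries)
    p_brute = 2.0 ** (-total_bits)
    return (math.isclose(total_bits, claimed["total_entropy_bits"],
                         rel_tol=1e-4)
            and samples == claimed["sample_count"]
            and math.isclose(p_brute, claimed["brute_force_probability"],
                             rel_tol=1e-4))
]]></sourcecode>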
    </section>

    <section anchor="fcb-entropy-requirements">
      <name>Minimum Entropy Requirements</name>

      <t>
        RECOMMENDED minimum entropy thresholds by evidence tier:
      </t>

      <table>
        <thead>
          <tr>
            <th>Tier</th>
            <th>Min Total Entropy</th>
            <th>Brute-Force Probability</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>Basic</td>
            <td>32 bits</td>
            <td>&lt; 2.3 x 10^-10</td>
          </tr>
          <tr>
            <td>Standard</td>
            <td>64 bits</td>
            <td>&lt; 5.4 x 10^-20</td>
          </tr>
          <tr>
            <td>Enhanced</td>
            <td>128 bits</td>
            <td>&lt; 2.9 x 10^-39</td>
          </tr>
        </tbody>
      </table>

      <t>
        Evidence packets failing to meet minimum entropy thresholds
        SHOULD be flagged in the security-statement caveats.
      </t>
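      <t>
        Classifying a packet against these tier minimums is a
        simple threshold check (sketch; tier names follow the
        table above):
      </t>

      <sourcecode type="python"><![CDATA[
def entropy_tier(total_bits: float):
    """Return the highest tier whose minimum the packet meets, or
    None if it falls below every tier (illustrative)."""
    if total_bits >= 128:
        return "enhanced"
    if total_bits >= 64:
        return "standard"
    if total_bits >= 32:
        return "basic"
    return None  # flag in security-statement caveats
]]></sourcecode>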
    </section>
  </section>

  <section anchor="fcb-economic-bound">
    <name>Economic Bound</name>

    <t>
      The economic-bound translates time and entropy requirements into
      monetary costs, enabling Relying Parties to assess forgery
      feasibility in economic terms.
    </t>

    <artwork type="cddl"><![CDATA[
economic-bound = {
    1 => tstr,                 ; cost-model-version
    2 => pop-timestamp,        ; cost-model-date
    3 => cost-estimate,        ; compute-cost
    4 => cost-estimate,        ; time-cost
    5 => cost-estimate,        ; total-min-cost
    6 => cost-estimate,        ; cost-per-hour-claimed
}

cost-estimate = {
    1 => float32,              ; usd
    2 => float32,              ; usd-low
    3 => float32,              ; usd-high
    4 => tstr,                 ; basis
}
]]></artwork>

    <section anchor="fcb-economic-fields">
      <name>Field Definitions</name>

      <dl>
        <dt>cost-model-version (key 1):</dt>
        <dd>
          Identifier for the cost model used (e.g.,
          "witnessd-cost-2025Q1"). Cost models are versioned because
          hardware prices and computational costs change over time.
        </dd>

        <dt>cost-model-date (key 2):</dt>
        <dd>
          Timestamp when the cost model was established. Cost estimates
          should be re-evaluated if the model is more than 12 months
          old.
        </dd>

        <dt>compute-cost (key 3):</dt>
        <dd>
          <t>
            Cost of computational resources required to recompute the
            VDF chain. Includes:
          </t>
          <ul>
            <li>Cloud compute instance cost for
            min-recompute-seconds</li>
            <li>Electricity cost for sustained computation</li>
            <li>Amortized hardware cost if using dedicated
            equipment</li>
          </ul>
        </dd>

        <dt>time-cost (key 4):</dt>
        <dd>
          Opportunity cost of the wall-clock time required for forgery.
          An adversary attempting to forge 10-hour evidence cannot use
          that time for other purposes. This is modeled as the economic
          value of the adversary's time.
        </dd>

        <dt>total-min-cost (key 5):</dt>
        <dd>
          Minimum total cost to forge the evidence, combining compute
          and time costs. This is the primary metric for cost-benefit
          analysis.
        </dd>

        <dt>cost-per-hour-claimed (key 6):</dt>
        <dd>
          Forgery cost normalized by claimed authoring duration,
          calculated as total-min-cost / claimed-duration-hours.
          This metric enables comparison across evidence packets of
          different lengths.
        </dd>
      </dl>
    </section>

    <section anchor="fcb-cost-estimate">
      <name>Cost Estimate Structure</name>

      <t>
        Each cost-estimate includes a point estimate and confidence
        range:
      </t>

      <dl>
        <dt>usd (key 1):</dt>
        <dd>
          Point estimate in US dollars. This is the expected cost
          under typical assumptions.
        </dd>

        <dt>usd-low (key 2):</dt>
        <dd>
          Lower bound of the 90% confidence interval. Represents the cost
          assuming the adversary has access to discounted resources.
        </dd>

        <dt>usd-high (key 3):</dt>
        <dd>
          Upper bound of the 90% confidence interval. Represents the cost
          assuming the adversary must acquire resources at market rates.
        </dd>

        <dt>basis (key 4):</dt>
        <dd>
          Human-readable description of the cost calculation basis
          (e.g., "AWS c7i.large @ $0.085/hr + $0.10/kWh electricity").
        </dd>
      </dl>
    </section>

    <section anchor="fcb-economic-computation">
      <name>Cost Computation</name>

      <t>
        Reference cost computation for compute-cost:
      </t>

      <artwork><![CDATA[
Compute cost model:
  hourly_rate = cloud_rate + elec_rate * power
  compute_hours = min_recompute_seconds / 3600
  compute_cost_usd = hourly_rate * compute_hours

Confidence interval (assumes 50% rate variance):
  compute_cost_low = compute_cost_usd * 0.5
  compute_cost_high = compute_cost_usd * 1.5
]]></artwork>

      <t>
        Reference cost computation for time-cost:
      </t>

      <artwork><![CDATA[
Time cost model (opportunity cost, skilled labor rate):
  hourly_value = 50.0
  time_cost_usd = hourly_value * (min_recompute_seconds / 3600)

Confidence interval (labor rate variance):
  time_cost_low = time_cost_usd * 0.2
  time_cost_high = time_cost_usd * 4.0
]]></artwork>

      <t>
        These are reference calculations. Implementations MAY use
        different cost models appropriate to their deployment context.
      </t>
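      <t>
        The reference models above can be sketched in Python as follows.
        This sketch is non-normative; the function names and default rates
        (cloud instance, electricity, power draw, labor value) are
        illustrative assumptions, not specified values.
      </t>

      <sourcecode type="python"><![CDATA[
# Non-normative sketch of the reference cost models; default rates
# (cloud, electricity, power draw, labor value) are illustrative.

def compute_cost(min_recompute_seconds, cloud_rate=0.085,
                 elec_rate=0.10, power_kw=0.5):
    """Compute-cost estimate with the 50% rate-variance interval."""
    hourly_rate = cloud_rate + elec_rate * power_kw  # USD per hour
    compute_hours = min_recompute_seconds / 3600.0
    usd = hourly_rate * compute_hours
    return {"usd": usd, "usd-low": usd * 0.5, "usd-high": usd * 1.5}

def time_cost(min_recompute_seconds, hourly_value=50.0):
    """Opportunity cost at a skilled labor rate, with labor variance."""
    usd = hourly_value * (min_recompute_seconds / 3600.0)
    return {"usd": usd, "usd-low": usd * 0.2, "usd-high": usd * 4.0}
]]></sourcecode>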
    </section>

    <section anchor="fcb-cost-depreciation">
      <name>Cost Model Depreciation</name>

      <t>
        Hardware costs decrease and computational speeds increase over
        time. Cost estimates depreciate accordingly:
      </t>

      <ul>
        <li>
          <t>Moore's Law Effect:</t>
          <t>
            Compute cost per operation halves approximately every 2
            years. A cost model from 2023 overestimates 2025 forgery
            costs by roughly 2x.
          </t>
        </li>

        <li>
          <t>Hardware Acceleration:</t>
          <t>
            New hardware (GPUs, ASICs) may provide step-function
            improvements for specific algorithms. Cost models should
            be updated when significant new hardware becomes available.
          </t>
        </li>

        <li>
          <t>Cloud Pricing:</t>
          <t>
            Cloud compute costs generally decrease over time. Cost
            models should reference current pricing.
          </t>
        </li>
      </ul>

      <t>
        Verifiers SHOULD apply a depreciation adjustment when evaluating
        cost bounds whose cost-model-date is more than 12 months in the
        past:
      </t>

      <artwork><![CDATA[
years_elapsed = (current_date - cost_model_date) / 365
depreciation_factor = 0.7 ^ years_elapsed  # ~30% annual decrease
adjusted_cost = original_cost * depreciation_factor
]]></artwork>
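      <t>
        A non-normative Python sketch of this adjustment, computing
        years_elapsed from calendar dates:
      </t>

      <sourcecode type="python"><![CDATA[
from datetime import date

def depreciated_cost(original_cost_usd, cost_model_date, current_date):
    """Adjust a cost bound for ~30% annual hardware depreciation."""
    years_elapsed = (current_date - cost_model_date).days / 365.0
    return original_cost_usd * 0.7 ** years_elapsed
]]></sourcecode>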
    </section>
  </section>

  <section anchor="fcb-security-statement">
    <name>Security Statement</name>

    <t>
      The security-statement provides a formal claim about the evidence
      security, including explicit assumptions and caveats.
    </t>

    <artwork type="cddl"><![CDATA[
security-statement = {
    1 => tstr,                  ; claim
    2 => formal-security-bound, ; formal
    3 => [+ tstr],              ; assumptions
    4 => [* tstr],              ; caveats
}

formal-security-bound = {
    1 => float32,              ; min-seconds
    2 => float32,              ; min-entropy-bits
    3 => float32,              ; min-cost-usd
}
]]></artwork>

    <section anchor="fcb-statement-fields">
      <name>Field Definitions</name>

      <dl>
        <dt>claim (key 1):</dt>
        <dd>
          <t>
            Human-readable security claim. MUST be phrased as a
            minimum bound, not an absolute guarantee. Example:
          </t>
          <t>
            "Forging this evidence requires at minimum 8.3 hours of
            sequential computation, 67 bits of entropy prediction,
            and an estimated $42-$126 in resources."
          </t>
        </dd>

        <dt>formal (key 2):</dt>
        <dd>
          Machine-readable security bounds for automated policy
          evaluation.
        </dd>

        <dt>assumptions (key 3):</dt>
        <dd>
          <t>
            List of assumptions under which the security claim holds.
            MUST include at minimum:
          </t>
          <ul>
            <li>
              Cryptographic assumption (e.g., "SHA-256 preimage
              resistance")
            </li>
            <li>
              Hardware assumption (e.g., "Calibration attestation is
              accurate")
            </li>
            <li>
              Adversary model (e.g., "Adversary cannot parallelize
              VDF computation")
            </li>
          </ul>
        </dd>

        <dt>caveats (key 4):</dt>
        <dd>
          <t>
            List of limitations or warnings about the security claim.
            Examples:
          </t>
          <ul>
            <li>
              "Cost estimates based on 2024Q4 cloud pricing"
            </li>
            <li>
              "Entropy estimate assumes timing samples are independent"
            </li>
            <li>
              "Does not protect against Attesting Environment
              compromise"
            </li>
          </ul>
        </dd>
      </dl>
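      <t>
        The claim text can be generated mechanically from the formal
        bounds. The following non-normative Python sketch mirrors the
        example wording above; the function name and number formatting
        are illustrative choices.
      </t>

      <sourcecode type="python"><![CDATA[
def render_claim(min_seconds, min_entropy_bits, usd_low, usd_high):
    """Phrase the claim as a minimum bound, never an absolute guarantee."""
    hours = min_seconds / 3600.0
    return ("Forging this evidence requires at minimum "
            f"{hours:.1f} hours of sequential computation, "
            f"{min_entropy_bits:.0f} bits of entropy prediction, "
            f"and an estimated ${usd_low:.0f}-${usd_high:.0f} in resources.")
]]></sourcecode>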
    </section>

    <section anchor="fcb-formal-bounds">
      <name>Formal Security Bound</name>

      <t>
        The formal-security-bound provides three orthogonal minimum
        requirements for forgery:
      </t>

      <dl>
        <dt>min-seconds (key 1):</dt>
        <dd>
          Minimum wall-clock seconds to forge the evidence. Derived
          from time-bound.min-recompute-seconds.
        </dd>

        <dt>min-entropy-bits (key 2):</dt>
        <dd>
          Minimum entropy bits an adversary must predict or generate.
          Derived from entropy-bound.total-entropy-bits.
        </dd>

        <dt>min-cost-usd (key 3):</dt>
        <dd>
          Minimum cost in USD to forge the evidence. Derived from
          economic-bound.total-min-cost.usd-low (conservative estimate).
        </dd>
      </dl>

      <t>
        Relying Parties can evaluate these bounds against their risk
        tolerance. For example, a policy might require:
      </t>

      <artwork><![CDATA[
Example Relying Party policy:
  accept_evidence if:
      min-seconds >= 3600 AND        (at least 1 hour)
      min-entropy-bits >= 64 AND     (at least 64 bits)
      min-cost-usd >= 100            (at least $100)
]]></artwork>
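      <t>
        Such a check reduces to comparing the formal-security-bound keys
        against policy floors. The following non-normative Python sketch
        assumes the bound has been decoded into a map keyed by the
        integers defined above; the default floors are illustrative.
      </t>

      <sourcecode type="python"><![CDATA[
def accept_evidence(bound, min_seconds=3600.0,
                    min_entropy_bits=64.0, min_cost_usd=100.0):
    """Evaluate a decoded formal-security-bound (keys 1..3) against
    policy floors; all three requirements must hold."""
    return (bound[1] >= min_seconds and
            bound[2] >= min_entropy_bits and
            bound[3] >= min_cost_usd)
]]></sourcecode>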
    </section>
  </section>

  <section anchor="fcb-verification-procedure">
    <name>Verification Procedure</name>

    <t>
      A Verifier computes and validates forgery cost bounds through
      the following procedure:
    </t>

    <ol>
      <li>
        <t>Compute Time Bound:</t>
        <t>
          Sum VDF iterations across all checkpoints. Retrieve
          calibration-rate from calibration attestation. Compute
          min-recompute-seconds = total-iterations / calibration-rate.
        </t>
      </li>

      <li>
        <t>Compute Entropy Bound:</t>
        <t>
          Aggregate entropy estimates from all Jitter Seals. Verify
          VDF entanglement for each seal. Compute brute-force
          probability.
        </t>
      </li>

      <li>
        <t>Compute Economic Bound:</t>
        <t>
          Apply cost model to time bound. Compute confidence intervals.
          Normalize by claimed duration.
        </t>
      </li>

      <li>
        <t>Construct Security Statement:</t>
        <t>
          Generate human-readable claim. Populate formal bounds.
          List applicable assumptions. Add any relevant caveats.
        </t>
      </li>

      <li>
        <t>Validate Claimed Bounds:</t>
        <t>
          Compare computed bounds against those claimed in the
          evidence packet. Flag discrepancies exceeding tolerance.
        </t>
      </li>

      <li>
        <t>Apply Depreciation:</t>
        <t>
          If cost-model-date is stale, apply depreciation adjustment
          to economic bounds.
        </t>
      </li>
    </ol>

    <t>
      The Verifier MAY recompute bounds using its own cost model rather
      than accepting the Attester's claimed bounds. Independent
      recomputation is RECOMMENDED for high-stakes verification.
    </t>
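    <t>
      Steps 1 through 3 and step 5 reduce to simple arithmetic once the
      packet fields are decoded. The following non-normative Python
      sketch illustrates them; the hourly cost and the 5% discrepancy
      tolerance are illustrative values.
    </t>

    <sourcecode type="python"><![CDATA[
def compute_bounds(total_iterations, calibration_rate,
                   total_entropy_bits, hourly_cost_usd):
    """Steps 1-3: derive time, entropy, and economic bounds."""
    min_recompute_seconds = total_iterations / calibration_rate
    brute_force_probability = 2.0 ** -total_entropy_bits
    min_cost_usd = hourly_cost_usd * (min_recompute_seconds / 3600.0)
    return min_recompute_seconds, brute_force_probability, min_cost_usd

def within_tolerance(computed, claimed, tolerance=0.05):
    """Step 5: flag discrepancies between computed and claimed bounds."""
    return abs(computed - claimed) <= tolerance * abs(claimed)
]]></sourcecode>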
  </section>

  <section anchor="fcb-security">
    <name>Security Considerations</name>

    <section anchor="fcb-adversary-model">
      <name>Assumed Adversary Capabilities</name>

      <t>
        Forgery cost bounds assume an adversary with:
      </t>

      <ul>
        <li>
          Access to commodity hardware at market prices
        </li>
        <li>
          Ability to execute VDF algorithms correctly
        </li>
        <li>
          No ability to parallelize inherently sequential VDFs
        </li>
        <li>
          No ability to predict behavioral entropy in advance
        </li>
        <li>
          No compromise of the Attesting Environment during evidence
          generation
        </li>
      </ul>

      <t>
        Bounds may not hold against adversaries who:
      </t>

      <ul>
        <li>
          Have access to specialized hardware (ASICs) at below-market
          cost
        </li>
        <li>
          Can compromise the Attesting Environment
        </li>
        <li>
          Discover novel attacks on VDF or hash function constructions
        </li>
        <li>
          Have access to quantum computers capable of breaking
          cryptographic assumptions
        </li>
      </ul>
    </section>

    <section anchor="fcb-bound-limitations">
      <name>Limitations of Cost Bounds</name>

      <t>
        Forgery cost bounds provide lower bounds, not guarantees:
      </t>

      <dl>
        <dt>Unknown Attacks:</dt>
        <dd>
          The bounds assume current best-known attacks. Future
          cryptanalytic advances may reduce actual forgery costs.
        </dd>

        <dt>Cost Model Accuracy:</dt>
        <dd>
          Economic estimates depend on cost model assumptions. Actual
          adversary costs may differ based on resource access.
        </dd>

        <dt>Entropy Estimation:</dt>
        <dd>
          Shannon entropy estimates assume independent samples.
          Correlations in timing data may reduce effective entropy.
        </dd>

        <dt>Calibration Trust:</dt>
        <dd>
          Time bounds depend on calibration accuracy. Without hardware
          attestation, calibration is self-reported and may be
          manipulated.
        </dd>
      </dl>
    </section>

    <section anchor="fcb-not-guaranteed">
      <name>What Bounds Do NOT Guarantee</name>

      <t>
        Forgery cost bounds explicitly do NOT provide:
      </t>

      <ul>
        <li>
          <t>Authenticity Proof:</t>
          <t>
            Evidence meeting cost thresholds is not proven authentic.
            It is proven expensive to forge. These are distinct claims.
          </t>
        </li>

        <li>
          <t>Content Verification:</t>
          <t>
            Bounds say nothing about document content, quality, or
            accuracy. Only the process evidence is bounded.
          </t>
        </li>

        <li>
          <t>Intent Attribution:</t>
          <t>
            Bounds do not prove who created the evidence or why.
            Identity and intent are outside the scope of cost analysis.
          </t>
        </li>

        <li>
          <t>Long-Term Security:</t>
          <t>
            Bounds depreciate over time. Evidence considered secure
            today may have insufficient bounds in 10 years.
          </t>
        </li>
      </ul>
    </section>

    <section anchor="fcb-policy-guidance">
      <name>Policy Guidance for Relying Parties</name>

      <t>
        Relying Parties should establish policies based on:
      </t>

      <ol>
        <li>
          <t>Risk Assessment:</t>
          <t>
            What is the cost of accepting forged evidence? High-stakes
            decisions require higher cost thresholds.
          </t>
        </li>

        <li>
          <t>Adversary Economics:</t>
          <t>
            Would forgery be economically rational? If forgery costs
            exceed potential gain, rational adversaries will not
            attempt it.
          </t>
        </li>

        <li>
          <t>Time Sensitivity:</t>
          <t>
            How quickly must evidence be verified? Long verification
            delays may reduce the utility of cost bounds.
          </t>
        </li>

        <li>
          <t>Corroborating Evidence:</t>
          <t>
            Cost bounds are one factor among many. External anchors,
            hardware attestation, and contextual information all
            contribute to overall confidence.
          </t>
        </li>
      </ol>
    </section>
  </section>

</section>

    <!-- Section 7: Cross-Document Provenance Links -->
    <section anchor="provenance-links"
             xml:base="sections/provenance-links.xml">
  <name>Cross-Document Provenance Links</name>

  <t>
    This section defines a mechanism for establishing cryptographic
    relationships between Evidence packets. Provenance links enable
    authors to prove that one document evolved from, merged with, or
    was derived from other documented works.
  </t>

  <section anchor="provenance-motivation">
    <name>Motivation</name>

    <t>
      Real-world authorship rarely occurs in isolation. Documents evolve
      through multiple stages:
    </t>

    <ul>
      <li>
        Research notes become draft papers become published articles
      </li>
      <li>
        Multiple contributors merge their sections into a
        collaborative work
      </li>
      <li>
        A thesis chapter is extracted and expanded into a standalone
        paper
      </li>
      <li>
        A codebase is forked, modified, and the changes documented
      </li>
    </ul>

    <t>
      Without provenance links, each Evidence packet is
      cryptographically isolated. An author cannot prove that their
      final manuscript evolved from the lab notes they documented six
      months earlier. Provenance
      links provide this capability while maintaining the privacy and
      security properties of witnessd Evidence.
    </t>
  </section>

  <section anchor="provenance-section-structure">
    <name>Provenance Section Structure</name>

    <t>
      The provenance section is an optional component of the Evidence
      packet, identified by integer key 20. When present, it documents
      the relationship between the current Evidence packet and one or
      more parent packets.
    </t>

    <sourcecode type="cddl"><![CDATA[
; Provenance section for cross-document linking
; Key 20 in evidence-packet
provenance-section = {
    ? 1 => [+ provenance-link],     ; parent-links
    ? 2 => [+ derivation-claim],    ; derivation-claims
    ? 3 => provenance-metadata,     ; metadata
}

; Link to a parent Evidence packet
provenance-link = {
    1 => uuid,                       ; parent-packet-id
    2 => hash-value,                 ; parent-chain-hash
    3 => derivation-type,            ; how this document relates
    4 => pop-timestamp,              ; when derivation occurred
    ? 5 => tstr,                     ; relationship-description
    ? 6 => [+ uint],                 ; inherited-checkpoints
    ? 7 => cose-signature,           ; cross-packet-attestation
}

; Type of derivation relationship
derivation-type = &(
    continuation: 1,                 ; Same work, new packet
    merge: 2,                        ; Merged from multiple sources
    split: 3,                        ; Extracted from larger work
    rewrite: 4,                      ; Substantial revision
    translation: 5,                  ; Language translation
    fork: 6,                         ; Independent branch
    citation-only: 7,                ; References only
)

; Claims about what was derived and how
derivation-claim = {
    1 => derivation-aspect,          ; what-derived
    2 => derivation-extent,          ; extent
    ? 3 => tstr,                     ; description
    ? 4 => float32,                  ; estimated-percentage
}

derivation-aspect = &(
    structure: 1,                    ; Document organization
    content: 2,                      ; Textual content
    ideas: 3,                        ; Conceptual elements
    data: 4,                         ; Data or results
    methodology: 5,                  ; Methods or approach
    code: 6,                         ; Source code
)

derivation-extent = &(
    none: 0,                         ; Not derived
    minimal: 1,                      ; Less than 10%
    partial: 2,                      ; 10-50%
    substantial: 3,                  ; 50-90%
    complete: 4,                     ; More than 90%
)

; Optional metadata about provenance
provenance-metadata = {
    ? 1 => tstr,                     ; provenance-statement
    ? 2 => bool,                     ; all-parents-available
    ? 3 => [+ tstr],                 ; missing-parent-reasons
}
]]></sourcecode>
  </section>

  <section anchor="provenance-verification">
    <name>Verification of Provenance Links</name>

    <t>
      Verifiers MUST perform the following checks when provenance links
      are present:
    </t>

    <section anchor="provenance-chain-hash-verification">
      <name>Parent Chain Hash Verification</name>

      <t>
        For each provenance-link, if the parent Evidence packet is
        available:
      </t>

      <ol>
        <li>
          Verify that parent-packet-id matches the packet-id field of
          the parent Evidence packet.
        </li>
        <li>
          Verify that parent-chain-hash matches the checkpoint-hash of
          the final checkpoint in the parent Evidence packet.
        </li>
        <li>
          Verify that the derivation timestamp is not earlier than the
          created timestamp of the parent packet.
        </li>
      </ol>

      <t>
        If the parent Evidence packet is not available, the Verifier
        SHOULD note this limitation in the Attestation Result caveats.
        The provenance link remains valid but unverified.
      </t>
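      <t>
        A non-normative Python sketch of checks 1 through 3 against an
        available parent packet; the dictionary keys are illustrative
        decodings of the fields defined above.
      </t>

      <sourcecode type="python"><![CDATA[
def verify_provenance_link(link, parent):
    """Run checks 1-3 for one provenance-link; keys are illustrative."""
    if link["parent-packet-id"] != parent["packet-id"]:
        return "parent-packet-id mismatch"
    if link["parent-chain-hash"] != parent["final-checkpoint-hash"]:
        return "parent-chain-hash mismatch"
    if link["derivation-timestamp"] < parent["created"]:
        return "derivation predates parent packet"
    return "ok"
]]></sourcecode>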
    </section>

    <section anchor="provenance-cross-attestation">
      <name>Cross-Packet Attestation</name>

      <t>
        When cross-packet-attestation is present, it provides
        cryptographic proof that the author of the current packet had
        access to the parent packet at the time of derivation:
      </t>

      <artwork><![CDATA[
cross-packet-attestation = COSE_Sign1(
    payload = CBOR_encode({
        1: current-packet-id,
        2: parent-packet-id,
        3: parent-chain-hash,
        4: derivation-timestamp,
    }),
    key = author-signing-key
)
]]></artwork>

      <t>
        This attestation prevents retroactive provenance claims where
        an author discovers an existing Evidence packet and falsely
        claims derivation after the fact.
      </t>
    </section>
  </section>

  <section anchor="provenance-privacy">
    <name>Privacy Considerations for Provenance</name>

    <t>
      Provenance links may reveal information about the author's
      creative process and document history. Authors SHOULD consider:
    </t>

    <ul>
      <li>
        Parent packet IDs are disclosed to anyone with access to the
        child packet.
      </li>
      <li>
        If parent packets use the author-salted hash mode, the salt
        MUST be shared for full verification.
      </li>
      <li>
        Derivation claims may reveal collaboration patterns or
        research relationships.
      </li>
    </ul>

    <t>
      Authors MAY choose to omit provenance links for privacy while
      still maintaining independent Evidence for each document.
    </t>
  </section>

  <section anchor="provenance-examples">
    <name>Provenance Link Examples</name>

    <section anchor="provenance-example-continuation">
      <name>Continuation Example</name>

      <t>
        A dissertation written over 18 months with monthly Evidence
        exports:
      </t>

      <artwork type="cbor-diag"><![CDATA[
provenance-section = {
  1: [  / parent-links /
    {
      1: h'550e8400e29b41d4a716446655440000',  / parent-packet-id /
      2: {1: 1, 2: h'abcd1234...'},            / parent-chain-hash /
      3: 1,                                    / type: continuation /
      4: 1(1706745600),                        / Feb 2024 /
      5: "Continued from January 2024 export"
    }
  ],
  3: {  / metadata /
    1: "This is month 2 of an ongoing dissertation project",
    2: true  / all-parents-available /
  }
}
]]></artwork>
    </section>

    <section anchor="provenance-example-merge">
      <name>Merge Example</name>

      <t>
        A collaborative paper merging contributions from three authors:
      </t>

      <artwork type="cbor-diag"><![CDATA[
provenance-section = {
  1: [  / parent-links /
    {
      1: h'author1-packet-uuid...',
      2: {1: 1, 2: h'hash1...'},
      3: 2,  / merge /
      4: 1(1709337600),
      5: "Alice's methodology section"
    },
    {
      1: h'author2-packet-uuid...',
      2: {1: 1, 2: h'hash2...'},
      3: 2,  / merge /
      4: 1(1709337600),
      5: "Bob's results section"
    },
    {
      1: h'author3-packet-uuid...',
      2: {1: 1, 2: h'hash3...'},
      3: 2,  / merge /
      4: 1(1709337600),
      5: "Carol's introduction and discussion"
    }
  ],
  2: [  / derivation-claims /
    {1: 1, 2: 3, 3: "Structure from Alice's draft"},
    {1: 2, 2: 2, 3: "Content merged from all three"},
    {1: 4, 2: 4, 3: "Data primarily from Bob"}
  ]
}
]]></artwork>
    </section>
  </section>

</section>

    <!-- Section 8: Incremental Evidence with Continuation Tokens -->
    <section anchor="continuation-tokens"
             xml:base="sections/continuation-tokens.xml">
  <name>Incremental Evidence with Continuation Tokens</name>

  <t>
    This section defines a mechanism for producing Evidence packets
    incrementally over extended authoring periods. Continuation tokens
    allow a single logical authorship effort to be documented across
    multiple Evidence packets without losing cryptographic continuity.
  </t>

  <section anchor="continuation-motivation">
    <name>Motivation</name>

    <t>
      Long-form works such as novels, dissertations, or technical books
      may span months or years of active authorship. Capturing all
      Evidence in a single packet presents practical challenges:
    </t>

    <ul>
      <li>
        Unbounded checkpoint chains consume storage and increase
        verification time.
      </li>
      <li>
        Authors may need to share partial Evidence before work
        completion (e.g., chapter submissions, progress reports).
      </li>
      <li>
        System failures or device changes could result in loss of
        accumulated Evidence.
      </li>
      <li>
        Privacy requirements may dictate periodic Evidence export
        and local data deletion.
      </li>
    </ul>

    <t>
      Continuation tokens address these challenges by enabling
      cryptographically linked Evidence packet chains while preserving
      independent verifiability of each packet.
    </t>
  </section>

  <section anchor="continuation-structure">
    <name>Continuation Token Structure</name>

    <t>
      The continuation token is an optional component of the Evidence
      packet, identified by integer key 21. It establishes the packet's
      position within a multi-packet Evidence series.
    </t>

    <sourcecode type="cddl"><![CDATA[
; Continuation token for multi-packet Evidence series
; Key 21 in evidence-packet
continuation-section = {
    1 => uuid,                       ; series-id
    2 => uint,                       ; packet-sequence
    ? 3 => hash-value,               ; prev-packet-chain-hash
    ? 4 => uuid,                     ; prev-packet-id
    5 => continuation-summary,       ; cumulative-summary
    ? 6 => cose-signature,           ; series-binding-signature
}

; Cumulative statistics across the series
continuation-summary = {
    1 => uint,                       ; total-checkpoints-so-far
    2 => uint,                       ; total-chars-so-far
    3 => duration,                   ; total-vdf-time-so-far
    4 => float32,                    ; total-entropy-bits-so-far
    5 => uint,                       ; packets-in-series
    ? 6 => pop-timestamp,            ; series-started-at
    ? 7 => duration,                 ; total-elapsed-time
}
]]></sourcecode>

    <t>
      Key semantics:
    </t>

    <dl>
      <dt>series-id:</dt>
      <dd>
        A UUID that remains constant across all packets in the series.
        Generated when the first packet in the series is created.
      </dd>

      <dt>packet-sequence:</dt>
      <dd>
        Zero-indexed sequence number. The first packet in a series has
        packet-sequence = 0.
      </dd>

      <dt>prev-packet-chain-hash:</dt>
      <dd>
        The checkpoint-hash of the final checkpoint in the previous
        packet. MUST be present for packet-sequence &gt; 0. MUST NOT be
        present for packet-sequence = 0.
      </dd>

      <dt>prev-packet-id:</dt>
      <dd>
        The packet-id of the previous packet in the series. SHOULD be
        present for packet-sequence &gt; 0 to enable packet retrieval.
      </dd>

      <dt>cumulative-summary:</dt>
      <dd>
        Running totals across all packets in the series, enabling
        Verifiers to assess the full authorship effort without
        accessing all prior packets.
      </dd>
    </dl>
  </section>

  <section anchor="continuation-chain-integrity">
    <name>Chain Integrity Across Packets</name>

    <t>
      When a new packet continues from a previous packet, the VDF
      chain MUST maintain cryptographic continuity:
    </t>

    <artwork><![CDATA[
Packet N (final checkpoint):
  checkpoint-hash[last] = H(checkpoint-data)
  VDF_output[last] = computed VDF result

Packet N+1 (first checkpoint):
  prev-packet-chain-hash = checkpoint-hash[last] from Packet N
  VDF_input[0] = H(
      VDF_output[last] from Packet N ||
      content-hash[0] ||
      jitter-commitment[0] ||
      series-id ||
      packet-sequence
  )
]]></artwork>

    <t>
      This construction ensures:
    </t>

    <ol>
      <li>
        The new packet cannot be created without knowledge of the
        previous packet's final VDF output.
      </li>
      <li>
        Backdating the new packet requires recomputing all VDF proofs
        in both the current and all subsequent packets.
      </li>
      <li>
        The series-id and packet-sequence are bound into the VDF chain,
        preventing packets from being reordered or reassigned to
        different series.
      </li>
    </ol>
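    <t>
      A minimal non-normative sketch of the first-checkpoint VDF input,
      assuming SHA-256 for H. The concatenation order follows the
      construction above; the 8-byte big-endian encoding of
      packet-sequence is an illustrative choice, not a specified one.
    </t>

    <sourcecode type="python"><![CDATA[
import hashlib

def first_vdf_input(prev_vdf_output, content_hash, jitter_commitment,
                    series_id, packet_sequence):
    """Bind a continuation packet's first VDF input to its predecessor.

    Inputs are byte strings except packet_sequence, encoded here as
    8 bytes big-endian (an illustrative serialization).
    """
    h = hashlib.sha256()
    for part in (prev_vdf_output, content_hash, jitter_commitment,
                 series_id):
        h.update(part)
    h.update(packet_sequence.to_bytes(8, "big"))
    return h.digest()
]]></sourcecode>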
  </section>

  <section anchor="continuation-verification">
    <name>Verification of Continuation Chains</name>

    <section anchor="continuation-single-packet">
      <name>Single Packet Verification</name>

      <t>
        Each packet in a continuation series MUST be independently
        verifiable. A Verifier with access only to packet N can:
      </t>

      <ul>
        <li>
          Verify all checkpoint chain integrity within the packet.
        </li>
        <li>
          Verify all VDF proofs within the packet.
        </li>
        <li>
          Verify jitter bindings within the packet.
        </li>
        <li>
          Report the cumulative-summary as claimed (not proven without
          prior packets).
        </li>
      </ul>

      <t>
        The Attestation Result SHOULD note that the packet is part of
        a series and whether prior packets were verified.
      </t>
    </section>

    <section anchor="continuation-full-series">
      <name>Full Series Verification</name>

      <t>
        When all packets in a series are available, a Verifier MUST:
      </t>

      <ol>
        <li>
          Verify each packet independently.
        </li>
        <li>
          Verify that series-id is consistent across all packets.
        </li>
        <li>
          Verify that packet-sequence values are consecutive starting
          from 0.
        </li>
        <li>
          For each packet N &gt; 0, verify that prev-packet-chain-hash
          matches the final checkpoint-hash of packet N-1.
        </li>
        <li>
          For each packet N &gt; 0, verify that the first checkpoint's
          VDF_input incorporates the previous packet's final VDF_output.
        </li>
        <li>
          Verify that cumulative-summary values are consistent with
          the sum of individual packet statistics.
        </li>
      </ol>
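      <t>
        Checks 2 through 4 can be sketched as follows. This is
        non-normative: the dictionary keys are illustrative decodings,
        and checks 1, 5, and 6 are assumed to be performed separately.
      </t>

      <sourcecode type="python"><![CDATA[
def verify_series_links(packets):
    """Cross-packet checks: consistent series-id, consecutive
    packet-sequence values from 0, and chain-hash linkage."""
    series_id = packets[0]["series-id"]
    for n, pkt in enumerate(packets):
        if pkt["series-id"] != series_id:
            return False
        if pkt["packet-sequence"] != n:
            return False
        if n > 0 and (pkt["prev-packet-chain-hash"]
                      != packets[n - 1]["final-checkpoint-hash"]):
            return False
    return True
]]></sourcecode>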
    </section>
  </section>

  <section anchor="continuation-series-binding">
    <name>Series Binding Signature</name>

    <t>
      The optional series-binding-signature provides cryptographic
      proof that all packets in a series were produced by the same
      author:
    </t>

    <artwork><![CDATA[
series-binding-signature = COSE_Sign1(
    payload = CBOR_encode({
        1: series-id,
        2: packet-sequence,
        3: packet-id,
        4: prev-packet-chain-hash,  / if present /
        5: cumulative-summary,
    }),
    key = author-signing-key
)
]]></artwork>

    <t>
      When present, Verifiers can confirm that the signing key is
      consistent across all packets in the series, providing additional
      assurance of authorship continuity.
    </t>
  </section>

  <section anchor="continuation-practical">
    <name>Practical Considerations</name>

    <section anchor="continuation-export-triggers">
      <name>When to Export a Continuation Packet</name>

      <t>
        Implementations SHOULD support configurable triggers for
        continuation packet export:
      </t>

      <ul>
        <li>
          <strong>Checkpoint count threshold:</strong> Export after N
          checkpoints (e.g., 1000).
        </li>
        <li>
          <strong>Time interval:</strong> Export weekly or monthly.
        </li>
        <li>
          <strong>Document size threshold:</strong> Export when document
          exceeds N characters.
        </li>
        <li>
          <strong>Manual trigger:</strong> User-initiated export.
        </li>
        <li>
          <strong>Milestone events:</strong> Export at chapter
          completion or version milestones.
        </li>
      </ul>
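      <t>
        A trigger check might be sketched as follows; this is
        non-normative, and the threshold defaults are illustrative
        rather than recommended values.
      </t>

      <sourcecode type="python"><![CDATA[
def should_export(checkpoints, chars, days_since_export,
                  max_checkpoints=1000, max_chars=100_000, max_days=30):
    """Fire when any configured export trigger is reached."""
    return (checkpoints >= max_checkpoints
            or chars >= max_chars
            or days_since_export >= max_days)
]]></sourcecode>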
    </section>

    <section anchor="continuation-gap-handling">
      <name>Handling Gaps in Series</name>

      <t>
        If a packet in a series is lost or unavailable:
      </t>

      <ul>
        <li>
          Subsequent packets remain independently verifiable.
        </li>
        <li>
          The cumulative-summary provides claimed totals but cannot
          be proven without all packets.
        </li>
        <li>
          Verifiers MUST note the gap in Attestation Results.
        </li>
        <li>
          Chain continuity verification fails at the gap but resumes
          for subsequent contiguous packets.
        </li>
      </ul>
    </section>
  </section>

  <section anchor="continuation-example">
    <name>Continuation Token Example</name>

    <t>
      Third monthly export of a dissertation in progress:
    </t>

    <artwork type="cbor-diag"><![CDATA[
continuation-section = {
  1: h'dissertation-series-uuid...',   / series-id /
  2: 2,                                / packet-sequence (3rd) /
  3: {                                 / prev-packet-chain-hash /
    1: 1,
    2: h'feb-packet-final-hash...'
  },
  4: h'feb-packet-uuid...',            / prev-packet-id /
  5: {                                 / cumulative-summary /
    1: 847,                            / total-checkpoints-so-far /
    2: 45230,                          / total-chars-so-far /
    3: 12600.0,                        / total-vdf-time: ~3.5 hours /
    4: 156.7,                          / total-entropy-bits /
    5: 3,                              / packets-in-series /
    6: 1(1704067200),                  / series-started-at /
    7: 7776000.0                       / total-elapsed: 90 days /
  },
  6: h'D28441A0...'                    / series-binding-signature /
}
]]></artwork>
  </section>

</section>



    <!-- Section 11: Quantified Trust Policies -->
    <section anchor="trust-policies"
             xml:base="sections/trust-policies.xml">
  <name>Quantified Trust Policies</name>

  <t>
    This section defines a framework for expressing and computing
    trust scores in Attestation Results. Trust policies enable
    Relying Parties to customize how Evidence is evaluated and
    to understand the basis for confidence scores.
  </t>

  <section anchor="trust-motivation">
    <name>Motivation</name>

    <t>
      The base attestation-result structure provides a confidence-score
      (0.0-1.0) and a verdict enumeration, but does not explain how
      these values were computed. Different Relying Parties have
      different trust requirements:
    </t>

    <ul>
      <li>
        An academic journal may weight presence challenges heavily.
      </li>
      <li>
        A legal proceeding may require hardware attestation.
      </li>
      <li>
        A publishing platform may prioritize VDF duration.
      </li>
      <li>
        An enterprise may have compliance-specific criteria.
      </li>
    </ul>

    <t>
      Without explicit trust policies, Relying Parties cannot:
    </t>

    <ul>
      <li>
        Understand why a particular confidence score was assigned.
      </li>
      <li>
        Compare scores from different Verifiers.
      </li>
      <li>
        Customize evaluation criteria for their domain.
      </li>
      <li>
        Audit the verification process.
      </li>
    </ul>

    <t>
      The trust policy framework addresses these limitations by
      making confidence computation transparent and configurable.
    </t>
  </section>

  <section anchor="trust-structure">
    <name>Trust Policy Structure</name>

    <t>
      The appraisal-policy extension is added to verifier-metadata,
      identified by integer key 5.
    </t>

    <sourcecode type="cddl"><![CDATA[
; Extended verifier-metadata with trust policy
verifier-metadata = {
    ? 1 => tstr,                     ; verifier-version
    ? 2 => tstr,                     ; verifier-uri
    ? 3 => [+ bstr],                 ; verifier-cert-chain
    ? 4 => tstr,                     ; policy-id
    ? 5 => appraisal-policy,         ; policy details
}

; Complete appraisal policy specification
appraisal-policy = {
    1 => tstr,                       ; policy-uri
    2 => tstr,                       ; policy-version
    3 => trust-computation,          ; computation-model
    4 => [+ trust-factor],           ; factors
    ? 5 => [+ trust-threshold],      ; thresholds
    ? 6 => policy-metadata,          ; metadata
}

; How the final score is computed
trust-computation = &(
    weighted-average: 1,             ; Sum of (factor * weight)
    minimum-of-factors: 2,           ; Min across all factors
    geometric-mean: 3,               ; Nth root of product
    custom-formula: 4,               ; Described in policy-uri
)

; Individual factor in trust computation
trust-factor = {
    1 => tstr,                       ; factor-name
    2 => factor-type,                ; type
    3 => float32,                    ; weight (0.0-1.0)
    4 => float32,                    ; observed-value
    5 => float32,                    ; normalized-score (0.0-1.0)
    6 => float32,                    ; contribution
    ? 7 => factor-evidence,          ; supporting-evidence
}

factor-type = &(
    ; Chain-verifiable factors
    vdf-duration: 1,
    checkpoint-count: 2,
    jitter-entropy: 3,
    chain-integrity: 4,
    revision-depth: 5,

    ; Presence factors
    presence-rate: 10,
    presence-response-time: 11,

    ; Hardware factors
    hardware-attestation: 20,
    calibration-attestation: 21,

    ; Behavioral factors
    edit-entropy: 30,
    monotonic-ratio: 31,
    typing-rate-consistency: 32,

    ; External factors
    anchor-confirmation: 40,
    anchor-count: 41,

    ; Collaboration factors
    collaborator-attestations: 50,
    contribution-consistency: 51,
)

; Evidence supporting a factor score
factor-evidence = {
    ? 1 => float32,                  ; raw-value
    ? 2 => float32,                  ; threshold-value
    ? 3 => tstr,                     ; computation-notes
    ? 4 => [uint, uint],             ; checkpoint-range
}

; Threshold requirements for pass/fail determination
trust-threshold = {
    1 => tstr,                       ; threshold-name
    2 => threshold-type,             ; type
    3 => float32,                    ; required-value
    4 => bool,                       ; met
    ? 5 => tstr,                     ; failure-reason
}

threshold-type = &(
    minimum-score: 1,                ; Score must be >= value
    minimum-factor: 2,               ; factor >= value
    required-factor: 3,              ; factor present
    maximum-caveats: 4,              ; caveats <= value
)

policy-metadata = {
    ? 1 => tstr,                     ; policy-name
    ? 2 => tstr,                     ; policy-description
    ? 3 => tstr,                     ; policy-authority
    ? 4 => pop-timestamp,            ; policy-effective-date
    ? 5 => [+ tstr],                 ; applicable-domains
}
]]></sourcecode>
  </section>

  <section anchor="trust-computation-models">
    <name>Trust Computation Models</name>

    <section anchor="trust-weighted-average">
      <name>Weighted Average Model</name>

      <t>
        The most common computation model, where each factor contributes
        proportionally to its weight:
      </t>

      <artwork><![CDATA[
confidence-score = sum(factor[i].weight * factor[i].normalized-score)
                   / sum(factor[i].weight)

Constraints:
  - sum(weights) SHOULD equal 1.0 for clarity
  - All normalized-scores are in [0.0, 1.0]
  - Resulting confidence-score is in [0.0, 1.0]

Example:
  vdf-duration:      weight=0.30, score=0.95, contribution=0.285
  jitter-entropy:    weight=0.25, score=0.80, contribution=0.200
  presence-rate:     weight=0.20, score=1.00, contribution=0.200
  chain-integrity:   weight=0.15, score=1.00, contribution=0.150
  hardware-attest:   weight=0.10, score=0.00, contribution=0.000

  confidence-score = 0.285 + 0.200 + 0.200 + 0.150 + 0.000 = 0.835
]]></artwork>
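      <t>
        The computation above can be expressed directly. The following
        Python sketch is illustrative only and reuses the example's
        factor weights and normalized scores:
      </t>

      <sourcecode type="python"><![CDATA[
def weighted_average(factors):
    """factors: (weight, normalized_score) pairs; scores in [0.0, 1.0]."""
    total_weight = sum(w for w, _ in factors)
    return sum(w * s for w, s in factors) / total_weight

factors = [
    (0.30, 0.95),   # vdf-duration
    (0.25, 0.80),   # jitter-entropy
    (0.20, 1.00),   # presence-rate
    (0.15, 1.00),   # chain-integrity
    (0.10, 0.00),   # hardware-attest
]
# 0.285 + 0.200 + 0.200 + 0.150 + 0.000 = 0.835
assert abs(weighted_average(factors) - 0.835) < 1e-9
]]></sourcecode>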
    </section>

    <section anchor="trust-minimum-model">
      <name>Minimum-of-Factors Model</name>

      <t>
        A conservative model where the overall score is limited by
        the weakest factor:
      </t>

      <artwork><![CDATA[
confidence-score = min(factor[i].normalized-score for all i)

Use case: High-security contexts where all factors must be strong.

Example:
  vdf-duration:      score=0.95
  jitter-entropy:    score=0.80
  presence-rate:     score=1.00
  chain-integrity:   score=1.00
  hardware-attest:   score=0.00  <-- limiting factor

  confidence-score = 0.00
]]></artwork>

      <t>
        This model is appropriate when any weakness should disqualify
        the Evidence, such as forensic or legal contexts.
      </t>
    </section>

    <section anchor="trust-geometric-mean">
      <name>Geometric Mean Model</name>

      <t>
        A balanced model that penalizes outliers more than weighted
        average but less than minimum:
      </t>

      <artwork><![CDATA[
confidence-score = (product(factor[i].normalized-score))^(1/n)

Example with 5 factors:
  scores = [0.95, 0.80, 1.00, 1.00, 0.60]
  product = 0.95 * 0.80 * 1.00 * 1.00 * 0.60 = 0.456
  confidence-score = 0.456^(1/5) = 0.855
]]></artwork>
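      <t>
        Both of the stricter models can be checked with a short Python
        sketch (illustrative only; scores are taken from the example
        above):
      </t>

      <sourcecode type="python"><![CDATA[
import math

def minimum_of_factors(scores):
    """Conservative model: overall score is the weakest factor."""
    return min(scores)

def geometric_mean(scores):
    """Balanced model: nth root of the product of the scores."""
    return math.prod(scores) ** (1.0 / len(scores))

scores = [0.95, 0.80, 1.00, 1.00, 0.60]
assert abs(math.prod(scores) - 0.456) < 1e-9
assert round(geometric_mean(scores), 3) == 0.855
# The geometric mean sits between the minimum (0.60) and the
# arithmetic mean (0.87), penalizing the 0.60 outlier moderately.
assert minimum_of_factors(scores) == 0.60
assert round(sum(scores) / len(scores), 3) == 0.870
]]></sourcecode>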
    </section>
  </section>

  <section anchor="trust-normalization">
    <name>Factor Normalization</name>

    <t>
      Raw factor values must be normalized to the [0.0, 1.0] range
      for consistent computation. Normalization functions depend on
      the factor type:
    </t>

    <section anchor="trust-normalize-threshold">
      <name>Threshold Normalization</name>

      <artwork><![CDATA[
For factors with a minimum threshold:
  if raw_value >= threshold:
      normalized = 1.0
  else:
      normalized = raw_value / threshold

Example: vdf-duration with 3600s threshold
  raw_value = 2700s
  normalized = 2700 / 3600 = 0.75
]]></artwork>
    </section>

    <section anchor="trust-normalize-range">
      <name>Range Normalization</name>

      <artwork><![CDATA[
For factors with min/max range:
  normalized = (raw_value - min) / (max - min)
  normalized = clamp(normalized, 0.0, 1.0)

Example: typing-rate with acceptable range 20-200 WPM
  raw_value = 75 WPM
  normalized = (75 - 20) / (200 - 20) = 0.306
]]></artwork>
    </section>

    <section anchor="trust-normalize-binary">
      <name>Binary Normalization</name>

      <artwork><![CDATA[
For pass/fail factors:
  normalized = 1.0 if present/valid else 0.0

Example: hardware-attestation
  TPM attestation present and valid: normalized = 1.0
  No hardware attestation: normalized = 0.0
]]></artwork>
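      <t>
        The three normalization functions can be sketched together in
        Python (illustrative only; input values are taken from the
        examples above):
      </t>

      <sourcecode type="python"><![CDATA[
def normalize_threshold(raw, threshold):
    """Score 1.0 at or above the threshold, linear below it."""
    return 1.0 if raw >= threshold else raw / threshold

def normalize_range(raw, lo, hi):
    """Linear mapping of [lo, hi] onto [0.0, 1.0], clamped."""
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))

def normalize_binary(present):
    """Pass/fail factors: 1.0 if present/valid, else 0.0."""
    return 1.0 if present else 0.0

assert normalize_threshold(2700.0, 3600.0) == 0.75            # vdf-duration
assert round(normalize_range(75.0, 20.0, 200.0), 3) == 0.306  # typing rate
assert normalize_binary(True) == 1.0
assert normalize_binary(False) == 0.0
]]></sourcecode>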
    </section>
  </section>

  <section anchor="trust-predefined-policies">
    <name>Predefined Policy Profiles</name>

    <t>
      This specification defines several policy profiles for common
      use cases. Implementations MAY support these profiles by URI:
    </t>

    <table anchor="tbl-policy-profiles">
      <name>Predefined Policy Profiles</name>
      <thead>
        <tr>
          <th>Profile URI</th>
          <th>Description</th>
          <th>Key Characteristics</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>urn:ietf:params:pop:policy:basic</td>
          <td>Basic verification</td>
          <td>Chain integrity only</td>
        </tr>
        <tr>
          <td>urn:ietf:params:pop:policy:academic</td>
          <td>Academic submission</td>
          <td>Weighted average, presence required</td>
        </tr>
        <tr>
          <td>urn:ietf:params:pop:policy:legal</td>
          <td>Legal proceedings</td>
          <td>Minimum model, hardware required</td>
        </tr>
        <tr>
          <td>urn:ietf:params:pop:policy:publishing</td>
          <td>Publishing workflow</td>
          <td>Weighted average, VDF emphasized</td>
        </tr>
      </tbody>
    </table>
  </section>

  <section anchor="trust-example">
    <name>Trust Policy Example</name>

    <t>
      Academic policy applied to a Standard tier Evidence packet:
    </t>

    <artwork type="cbor-diag"><![CDATA[
verifier-metadata = {
  1: "witnessd-verifier-2.0",
  2: "https://verify.example.com",
  4: "academic-v1",
  5: {  / appraisal-policy /
    1: "urn:ietf:params:pop:policy:academic",
    2: "1.0.0",
    3: 1,  / computation: weighted-average /
    4: [   / factors /
      {
        1: "vdf-duration",
        2: 1,
        3: 0.25,                     / weight /
        4: 5400.0,                   / observed: 90 minutes /
        5: 1.0,                      / normalized (threshold: 3600) /
        6: 0.25,                     / contribution /
        7: {1: 5400.0, 2: 3600.0}
      },
      {
        1: "jitter-entropy",
        2: 3,
        3: 0.20,
        4: 45.7,                     / observed: 45.7 bits /
        5: 1.0,                      / normalized (threshold: 32) /
        6: 0.20
      },
      {
        1: "presence-rate",
        2: 10,
        3: 0.25,
        4: 0.917,                    / observed: 11/12 challenges /
        5: 0.917,                    / direct ratio /
        6: 0.229
      },
      {
        1: "chain-integrity",
        2: 4,
        3: 0.20,
        4: 1.0,                      / binary: valid /
        5: 1.0,
        6: 0.20
      },
      {
        1: "edit-entropy",
        2: 30,
        3: 0.10,
        4: 3.45,                     / observed /
        5: 0.863,                    / normalized (range 0-4) /
        6: 0.086
      }
    ],
    5: [   / thresholds /
      {
        1: "minimum-overall",
        2: 1,
        3: 0.70,
        4: true
      },
      {
        1: "presence-required",
        2: 3,
        3: 0.0,
        4: true
      }
    ],
    6: {   / metadata /
      1: "Academic Submission Policy",
      3: "WritersLogic Academic Integrity",
      5: ["academic", "education", "research"]
    }
  }
}

/ confidence: 0.25 + 0.20 + 0.229 + 0.20 + 0.086 = 0.965 /
]]></artwork>
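    <t>
      The contributions in this example can be checked mechanically.
      The following sketch is illustrative; the factor values are
      copied from the structure above:
    </t>

    <sourcecode type="python"><![CDATA[
# (name, weight, normalized-score, stated contribution) from the example
factors = [
    ("vdf-duration",    0.25, 1.000, 0.250),
    ("jitter-entropy",  0.20, 1.000, 0.200),
    ("presence-rate",   0.25, 0.917, 0.229),
    ("chain-integrity", 0.20, 1.000, 0.200),
    ("edit-entropy",    0.10, 0.863, 0.086),
]
for name, weight, score, contrib in factors:
    # stated contributions are rounded to three decimal places
    assert abs(weight * score - contrib) < 0.0005, name

confidence = sum(c for _, _, _, c in factors)
assert abs(confidence - 0.965) < 1e-9   # meets minimum-overall (0.70)
]]></sourcecode>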
  </section>

</section>

    <!-- Section 12: Compact Evidence References -->
    <section anchor="compact-evidence"
             xml:base="sections/compact-evidence.xml">
  <name>Compact Evidence References</name>

  <t>
    This section defines a compact representation of Evidence that
    can be embedded in metadata fields, QR codes, or other
    space-constrained contexts. Compact Evidence References provide
    a cryptographic link to full Evidence packets without requiring
    the full packet to be transmitted.
  </t>

  <section anchor="compact-motivation">
    <name>Motivation</name>

    <t>
      Full Evidence packets can be large (kilobytes to megabytes),
      making them unsuitable for embedding in:
    </t>

    <ul>
      <li>
        Document metadata (PDF XMP, EXIF, Office custom properties)
      </li>
      <li>
        Version control commit messages
      </li>
      <li>
        QR codes or NFC tags
      </li>
      <li>
        Social media posts or profile fields
      </li>
      <li>
        DNS TXT records or other protocol headers
      </li>
    </ul>

    <t>
      A Compact Evidence Reference provides "proof at a glance" that
      links to the full Evidence packet for complete verification.
      The reference is cryptographically bound to the Evidence,
      preventing tampering without detection.
    </t>
  </section>

  <section anchor="compact-structure">
    <name>Compact Reference Structure</name>

    <t>
      The Compact Evidence Reference uses a dedicated CBOR tag to
      distinguish it from full Evidence packets.
    </t>

    <sourcecode type="cddl"><![CDATA[
; Compact Evidence Reference
; Tag 1347440673 = 0x50505021 = "PPP!"
tagged-compact-ref = #6.1347440673(compact-evidence-ref)

compact-evidence-ref = {
    1 => uuid,                       ; packet-id
    2 => hash-value,                 ; chain-hash
    3 => hash-value,                 ; document-hash
    4 => compact-summary,            ; summary
    5 => tstr,                       ; evidence-uri
    6 => cose-signature,             ; compact-signature
    ? 7 => compact-metadata,         ; metadata
}

compact-summary = {
    1 => uint,                       ; checkpoint-count
    2 => uint,                       ; total-chars
    3 => duration,                   ; total-vdf-time
    4 => uint,                       ; evidence-tier (1-4)
    ? 5 => forensic-assessment,      ; verdict (if available)
    ? 6 => float32,                  ; confidence-score
}

compact-metadata = {
    ? 1 => tstr,                     ; author-name
    ? 2 => pop-timestamp,            ; created
    ? 3 => tstr,                     ; verifier-name
    ? 4 => pop-timestamp,            ; verified-at
}
]]></sourcecode>
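    <t>
      The tag value is simply the ASCII string "PPP!" read as a
      big-endian 32-bit unsigned integer, which can be checked
      directly:
    </t>

    <sourcecode type="python"><![CDATA[
# Derive the Compact Evidence Reference tag from its ASCII mnemonic.
POP_COMPACT_REF_TAG = int.from_bytes(b"PPP!", "big")

assert POP_COMPACT_REF_TAG == 0x50505021
assert POP_COMPACT_REF_TAG == 1347440673
assert POP_COMPACT_REF_TAG.to_bytes(4, "big") == b"PPP!"
]]></sourcecode>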
  </section>

  <section anchor="compact-signature">
    <name>Compact Reference Signature</name>

    <t>
      The compact-signature binds all reference fields to prevent
      tampering:
    </t>

    <artwork><![CDATA[
compact-signature = COSE_Sign1(
    payload = CBOR_encode({
        1: packet-id,
        2: chain-hash,
        3: document-hash,
        4: compact-summary,
        5: evidence-uri,
    }),
    key = signing-key
)

Signing key may be:
  - Author's signing key (self-attestation)
  - Verifier's signing key (third-party attestation)
  - Evidence service's key (hosting attestation)
]]></artwork>

    <t>
      The signature type SHOULD be indicated by the key identifier
      or by the evidence-uri domain.
    </t>
  </section>

  <section anchor="compact-verification">
    <name>Verification of Compact References</name>

    <section anchor="compact-verify-reference">
      <name>Reference-Only Verification</name>

      <t>
        Without fetching the full Evidence packet, a verifier can:
      </t>

      <ol>
        <li>
          Verify the compact-signature is valid.
        </li>
        <li>
          Identify the signer (author, verifier, or service).
        </li>
        <li>
          Check that evidence-uri is from a trusted source.
        </li>
        <li>
          Display the compact-summary to the user.
        </li>
      </ol>

      <t>
        This provides basic assurance that Evidence exists and was
        attested by a known party, without full verification.
      </t>
    </section>

    <section anchor="compact-verify-full">
      <name>Full Verification via URI</name>

      <t>
        For complete verification:
      </t>

      <ol>
        <li>
          Fetch the Evidence packet from evidence-uri.
        </li>
        <li>
          Verify that packet-id matches.
        </li>
        <li>
          Verify that chain-hash matches the final checkpoint-hash.
        </li>
        <li>
          Verify that document-hash matches the document-ref
          content-hash.
        </li>
        <li>
          Perform full Evidence verification per this specification.
        </li>
        <li>
          Verify that compact-summary values match the actual Evidence.
        </li>
      </ol>

      <t>
        Discrepancies between the compact reference and the fetched
        Evidence MUST cause verification to fail.
      </t>
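      <t>
        The comparison steps can be sketched as follows. This is
        illustrative only: string keys stand in for the integer CBOR
        keys, and fetch_packet is a hypothetical helper not defined by
        this specification.
      </t>

      <sourcecode type="python"><![CDATA[
def verify_compact_ref(ref, fetch_packet):
    """Steps 1-4 and 6 of full verification; step 5 (full Evidence
    verification) is delegated to the implementation."""
    packet = fetch_packet(ref["evidence-uri"])                # step 1
    if packet["packet-id"] != ref["packet-id"]:               # step 2
        return False
    if packet["final-checkpoint-hash"] != ref["chain-hash"]:  # step 3
        return False
    if packet["document-hash"] != ref["document-hash"]:       # step 4
        return False
    summary = ref["summary"]                                  # step 6
    if packet["checkpoint-count"] != summary["checkpoint-count"]:
        return False
    return True

packet = {
    "packet-id": "abc", "final-checkpoint-hash": "h1",
    "document-hash": "h2", "checkpoint-count": 47,
}
ref = {
    "evidence-uri": "https://evidence.example.com/p/abc.pop",
    "packet-id": "abc", "chain-hash": "h1", "document-hash": "h2",
    "summary": {"checkpoint-count": 47},
}
assert verify_compact_ref(ref, lambda uri: packet)
# Any discrepancy MUST cause verification to fail:
assert not verify_compact_ref({**ref, "chain-hash": "h9"}, lambda uri: packet)
]]></sourcecode>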
    </section>
  </section>

  <section anchor="compact-encoding">
    <name>Encoding Formats</name>

    <t>
      Compact Evidence References may be encoded in several formats
      depending on the embedding context:
    </t>

    <section anchor="compact-encoding-cbor">
      <name>CBOR Encoding</name>

      <t>
        The native format is CBOR with the 0x50505021 tag. This is
        the most compact binary representation, suitable for:
      </t>

      <ul>
        <li>Binary metadata fields</li>
        <li>Protocol messages</li>
        <li>Database storage</li>
      </ul>

      <t>
        Typical size: 150-250 bytes.
      </t>
    </section>

    <section anchor="compact-encoding-base64">
      <name>Base64 Encoding</name>

      <t>
        For text-only contexts, the CBOR bytes are base64url-encoded:
      </t>

      <artwork><![CDATA[
pop-ref:2nQAAZD1UPAgowGQA...base64url...
]]></artwork>

      <t>
        The "pop-ref:" prefix enables detection and parsing.
        Typical size: 200-350 characters.
      </t>
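      <t>
        A minimal encode/decode round trip (illustrative only; the
        helper names are not defined by this specification, and padding
        characters are stripped for URL safety):
      </t>

      <sourcecode type="python"><![CDATA[
import base64

def encode_pop_ref(cbor_bytes):
    """base64url without padding, prefixed for detection."""
    b64 = base64.urlsafe_b64encode(cbor_bytes).rstrip(b"=")
    return "pop-ref:" + b64.decode("ascii")

def decode_pop_ref(text):
    assert text.startswith("pop-ref:")
    b64 = text[len("pop-ref:"):]
    b64 += "=" * (-len(b64) % 4)   # restore stripped padding
    return base64.urlsafe_b64decode(b64)

# Stand-in payload: the CBOR head for the compact-reference tag
# (0xda + "PPP!") followed by an empty map; real references are larger.
blob = b"\xdaPPP!\xa0"
assert decode_pop_ref(encode_pop_ref(blob)) == blob
]]></sourcecode>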
    </section>

    <section anchor="compact-encoding-uri">
      <name>URI Encoding</name>

      <t>
        A URI scheme for direct linking:
      </t>

      <artwork><![CDATA[
pop://verify.example.com/ref/2nQAAZD1UPAgowGQA...

Scheme: pop
Host: verification service
Path: /ref/{base64url-encoded-compact-ref}
]]></artwork>

      <t>
        Clicking/scanning the URI opens the verification service
        with the compact reference pre-loaded.
      </t>
    </section>

    <section anchor="compact-encoding-qr">
      <name>QR Code Encoding</name>

      <t>
        For physical media, the URI or base64 encoding can be
        represented as a QR code:
      </t>

      <ul>
        <li>
          Version 6 QR (41x41): approximately 106 bytes of binary data
          at error correction level M
        </li>
        <li>
          Version 10 QR (57x57): approximately 213 bytes at level M
        </li>
        <li>
          Error correction level M recommended for print durability;
          references exceeding these capacities require a higher QR
          version or a lower error correction level
        </li>
      </ul>
    </section>
  </section>

  <section anchor="compact-embedding">
    <name>Embedding Guidelines</name>

    <section anchor="compact-embed-pdf">
      <name>PDF Documents</name>

      <artwork><![CDATA[
XMP Metadata location:
  /x:xmpmeta/rdf:RDF/rdf:Description[@xmlns:pop]

Custom namespace:
  xmlns:pop="http://example.com/ns/pop/1.0/"

Properties:
  pop:evidenceRef     = base64url-encoded compact reference
  pop:evidenceURI     = full Evidence packet URI
  pop:verificationURI = verification service URI
]]></artwork>
    </section>

    <section anchor="compact-embed-git">
      <name>Git Commits</name>

      <artwork><![CDATA[
Commit message footer:

  Pop-Evidence-Ref: pop-ref:2nQAAZD1UPAgowGQA...
  Pop-Evidence-URI: https://evidence.example.com/packets/abc123.pop

Git notes (alternative):
  git notes --ref=pop-evidence add -m "pop-ref:2nQAAZD1..."
]]></artwork>
    </section>

    <section anchor="compact-embed-image">
      <name>Image Files</name>

      <artwork><![CDATA[
EXIF UserComment tag (0x9286):
  pop-ref:2nQAAZD1UPAgowGQA...

XMP (for formats supporting it):
  Same structure as PDF XMP
]]></artwork>
    </section>
  </section>

  <section anchor="compact-example">
    <name>Compact Reference Example</name>

    <artwork type="cbor-diag"><![CDATA[
/ Tagged Compact Evidence Reference (0x50505021 = "PPP!") /
1347440673({
  1: h'550e8400e29b41d4a716446655440000',  / packet-id /
  2: {                                      / chain-hash /
    1: 1,
    2: h'a7ffc6f8bf1ed76651c14756a061d662
          f580ff4de43b49fa82d80a4b80f8434a'
  },
  3: {                                      / document-hash /
    1: 1,
    2: h'e3b0c44298fc1c149afbf4c8996fb924
          27ae41e4649b934ca495991b7852b855'
  },
  4: {                                      / compact-summary /
    1: 47,                                  / checkpoints /
    2: 12500,                               / chars /
    3: 5400.0,                              / VDF time: 90 min /
    4: 2,                                   / tier: Standard /
    5: 2,                                   / verdict: likely-human /
    6: 0.87                                 / confidence /
  },
  5: "https://evidence.example.com/p/"
     "550e8400e29b41d4a716446655440000.pop",
  6: h'D28441A0A201260442...',              / compact-signature /
  7: {                                      / metadata /
    1: "Jane Author",
    2: 1(1706745600),                       / created /
    3: "WritersLogic Verification Service",
    4: 1(1706832000)                        / verified /
  }
})
]]></artwork>

    <t>
      Encoded size: approximately 220 bytes (CBOR), 295 characters
      (base64url).
    </t>
  </section>

</section>

    <!-- Section 13: Security Considerations -->
    <section anchor="security-considerations"
         xml:base="sections/security-considerations.xml">
  <name>Security Considerations</name>

  <t>
    This section consolidates security analysis for
    the witnessd Proof of
    Process specification. It references and extends the per-section
    security considerations defined in <xref target="jitter-security"/>,
    <xref target="vdf-security"/>, <xref target="absence-security"/>,
    <xref target="fcb-security"/>,
    and <xref target="evidence-model-security"/>.
  </t>

  <t>
    The specification adopts a quantified security approach: rather than
    claiming evidence is "secure" or "insecure" in
    absolute terms, security
    is expressed as cost asymmetries and
    tamper-evidence properties. This
    framing reflects the fundamental reality that sufficiently resourced
    adversaries can eventually forge any evidence; the goal is to make
    forgery economically irrational for most scenarios.
  </t>

  <section anchor="threat-model">
    <name>Threat Model</name>

    <t>
      The witnessd threat model defines three categories:
      adversary goals,
      assumed adversary capabilities, and explicitly
      out-of-scope adversaries.
    </t>

    <section anchor="adversary-goals">
      <name>Adversary Goals</name>

      <t>
        The specification defends against adversaries
        pursuing the following
        objectives:
      </t>

      <dl>
        <dt>Backdating Evidence:</dt>
        <dd>
          <t>
            Creating evidence that claims to document a
            process occurring
            earlier than it actually did. This attack is relevant when
            priority or timeline claims matter (e.g.,
            intellectual property
            disputes, academic submissions with deadlines).
          </t>
        </dd>

        <dt>Fabricating Process:</dt>
        <dd>
          <t>
            Creating evidence for a document that was not
            actually authored
            through the claimed process. This includes
            generating evidence
            for documents created entirely by automated
            means, or evidence
            claiming gradual authorship for content that was produced in
            a single operation (paste, import, generation).
          </t>
        </dd>

        <dt>Transplanting Evidence:</dt>
        <dd>
          <t>
            Taking legitimate evidence from one authoring session and
            associating it with a different document. This
            attack attempts
            to transfer the credibility of genuine evidence to unrelated
            content.
          </t>
        </dd>

        <dt>Selective Disclosure:</dt>
        <dd>
          <t>
            Omitting checkpoints or evidence sections that would reveal
            unfavorable information (e.g., large paste operations, gaps
            in activity). This attack attempts to present a misleadingly
            favorable subset of the actual process.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="adversary-capabilities">
      <name>Assumed Adversary Capabilities</name>

      <t>
        The specification assumes adversaries have the
        following capabilities:
      </t>

      <dl>
        <dt>Software Control:</dt>
        <dd>
          <t>
            The adversary has full control over software
            running on their
            device, including the ability to modify or
            replace the Attesting
            Environment. They can intercept, modify, or fabricate any
            software-generated data.
          </t>
        </dd>

        <dt>Commodity Hardware Access:</dt>
        <dd>
          <t>
            The adversary can acquire commodity computing
            hardware at market
            prices. They may have access to cloud computing
            resources and
            can rent substantial computational capacity.
          </t>
        </dd>

        <dt>Bounded Compute Resources:</dt>
        <dd>
          <t>
            The adversary's computational resources are
            bounded by economic
            constraints. They cannot instantaneously compute arbitrarily
            large numbers of VDF iterations. The time required
            for sequential
            computation cannot be circumvented with
            additional resources.
          </t>
        </dd>

        <dt>Algorithm Knowledge:</dt>
        <dd>
          <t>
            The adversary has complete knowledge of all algorithms and
            protocols used by the specification. Security
            does not depend
            on obscurity; the specification is public.
          </t>
        </dd>

        <dt>Statistical Sophistication:</dt>
        <dd>
          <t>
            The adversary can perform statistical analysis
            and may attempt
            to generate synthetic behavioral data that
            passes statistical
            tests.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="out-of-scope-adversaries">
      <name>Out-of-Scope Adversaries</name>

      <t>
        The specification explicitly does NOT defend against:
      </t>

      <dl>
        <dt>Nation-State Adversaries with HSM Compromise:</dt>
        <dd>
          <t>
            Adversaries capable of extracting keys from
            hardware security
            modules (TPM, Secure Enclave) through sophisticated physical
            attacks, side-channel analysis, or manufacturer compromise.
            Hardware attestation assumes HSM integrity.
          </t>
        </dd>

        <dt>Cryptographic Breakthrough:</dt>
        <dd>
          <t>
            Adversaries with access to novel cryptanalytic
            techniques that
            break SHA-256 collision resistance, ECDSA
            signature security,
            or other standard cryptographic primitives.
            The specification
            relies on established cryptographic assumptions.
          </t>
        </dd>

        <dt>Quantum Adversaries:</dt>
        <dd>
          <t>
            Adversaries with access to fault-tolerant quantum computers
            capable of executing Shor's algorithm (breaking RSA/ECDSA)
            or providing significant Grover speedups. Post-quantum
            considerations are noted in
            <xref target="vdf-post-quantum"/>
            but full quantum resistance is not claimed.
          </t>
        </dd>

        <dt>Time Travel:</dt>
        <dd>
          <t>
            Adversaries capable of creating evidence at one
            point in time
            and presenting it as if created earlier, where
            external anchors
            are not available or have been compromised.
            External timestamp
            authorities are trusted for absolute time claims.
          </t>
        </dd>

        <dt>Coerced Authors:</dt>
        <dd>
          <t>
            Adversaries who coerce legitimate authors into
            producing evidence
            under duress. The specification documents
            process, not intent
            or consent.
          </t>
        </dd>
      </dl>

      <t>
        The exclusion of these adversaries is not a
        weakness but a recognition
        of practical threat modeling. Evidence systems appropriate for
        defending against nation-state actors would impose costs and
        constraints unsuitable for general authoring scenarios.
      </t>
    </section>
  </section>

  <section anchor="cryptographic-security">
    <name>Cryptographic Security</name>

    <t>
      The specification relies on established
      cryptographic primitives with
      well-understood security properties. This section documents the
      security assumptions and requirements for each
      cryptographic component.
    </t>

    <section anchor="hash-function-security">
      <name>Hash Function Security</name>

      <t>
        Hash functions are used throughout the specification for content
        binding, chain construction, entropy commitment,
        and VDF computation.
      </t>

      <dl>
        <dt>Required Properties:</dt>
        <dd>
          <ul>
            <li>
              <t>Collision Resistance:</t>
              <t>
                It must be computationally infeasible to find
                two distinct
                inputs that produce the same hash output. This property
                ensures that different document states produce different
                content-hash values.
              </t>
            </li>

            <li>
              <t>Preimage Resistance:</t>
              <t>
                Given a hash output, it must be computationally
                infeasible
                to find any input that produces that output.
                This property
                prevents adversaries from constructing documents
                that match
                a predetermined hash.
              </t>
            </li>

            <li>
              <t>Second Preimage Resistance:</t>
              <t>
                Given an input and its hash, it must be computationally
                infeasible to find a different input with the same hash.
                This property prevents document substitution attacks.
              </t>
            </li>
          </ul>
        </dd>

        <dt>Algorithm Requirements:</dt>
        <dd>
          <t>
            SHA-256 is RECOMMENDED and MUST be supported
            by all implementations.
            SHA-3-256 SHOULD be supported for algorithm
            agility. Hash functions
            with known weaknesses (MD5, SHA-1) MUST NOT be used.
          </t>
        </dd>

        <dt>Security Margin:</dt>
        <dd>
          <t>
            SHA-256 provides 128-bit security against collision attacks
            and 256-bit security against preimage attacks under
            classical assumptions. Under quantum assumptions, Grover's
            algorithm reduces preimage security to 128 bits, and quantum
            collision search (Brassard-Høyer-Tapp) lowers collision
            security to roughly 85 bits.
            This margin is
            considered adequate for the specification's threat model.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="signature-security">
      <name>Signature Security</name>

      <t>
        Digital signatures are used for checkpoint chain authentication,
        hardware attestation, calibration binding, and
        Attestation Result
        integrity.
      </t>

      <dl>
        <dt>COSE Algorithm Requirements:</dt>
        <dd>
          <t>
            Implementations MUST support COSE algorithm identifiers:
          </t>
          <ul>
            <li>ES256 (ECDSA with P-256 and SHA-256): MUST support</li>
            <li>
            ES384 (ECDSA with P-384 and SHA-384):
            SHOULD support
            </li>
            <li>EdDSA (Ed25519): SHOULD support</li>
          </ul>
          <t>
            RSA-based algorithms (PS256, RS256) MAY be supported for
            compatibility with legacy systems but are not
            recommended for
            new implementations due to larger signature sizes
            and post-quantum
            vulnerability.
          </t>
        </dd>

        <dt>Key Size Requirements:</dt>
        <dd>
          <t>
            Minimum key sizes for 128-bit security:
          </t>
          <ul>
            <li>ECDSA: P-256 curve or larger</li>
            <li>EdDSA: Ed25519 or Ed448</li>
            <li>RSA: 3072 bits or larger</li>
          </ul>
        </dd>

        <dt>Signature Binding:</dt>
        <dd>
          <t>
            Signatures MUST bind to the complete payload being signed.
            Partial payload signatures (signing a subset of
            fields) create
            opportunities for field substitution attacks. The chain-mac
            field provides additional binding beyond the
            checkpoint signature.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="vdf-cryptographic-security">
      <name>VDF Security</name>

      <t>
        Verifiable Delay Functions provide the temporal
        security foundation
        of the specification. VDF security rests on the
        sequential computation
        requirement.
      </t>

      <dl>
        <dt>Sequential Computation:</dt>
        <dd>
          <t>
            The VDF output cannot be computed significantly
            faster than the
            specified number of sequential operations. For iterated hash
            VDFs, this reduces to the assumption that no
            algorithm computes
            H^n(x) faster than n sequential hash evaluations. No such
            algorithm is known for cryptographic hash functions.
          </t>
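          <t>
            The iterated hash construction and its
            verification-by-recomputation can be sketched in a few
            lines. This is an illustrative, non-normative rendering;
            the function names and use of SHA-256 here are choices for
            the sketch, not requirements of this section:
          </t>

```python
import hashlib

def vdf_compute(seed: bytes, iterations: int) -> bytes:
    # Each iteration hashes the previous output, so step i cannot
    # begin before step i-1 completes: the chain is inherently
    # sequential and gains nothing from parallel hardware.
    out = seed
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out

def vdf_verify(seed: bytes, iterations: int, claimed: bytes) -> bool:
    # Verification by recomputation: rerun the identical chain and
    # compare. Any claimed output that differs is always detected.
    return vdf_compute(seed, iterations) == claimed
```

          <t>
            Note that verification here costs the same n sequential
            hashes as computation; the succinct VDF constructions cited
            below trade this perfect soundness for fast verification.
          </t>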
        </dd>

        <dt>Parallelization Resistance:</dt>
        <dd>
          <t>
            Additional computational resources (more
            processors, GPUs, ASICs)
            cannot reduce the wall-clock time required for
            VDF computation.
            The iterated hash construction is inherently
            sequential: each
            iteration depends on the previous output.
          </t>
          <t>
            See <xref target="vdf-parallelization"/> for
            detailed analysis.
          </t>
        </dd>

        <dt>Verification Soundness:</dt>
        <dd>
          <t>
            For iterated hash VDFs, verification is by
            recomputation. The
            Verifier executes the same computation and compares results.
            This provides perfect soundness: a claimed
            output that differs
            from the actual computation will always be detected.
          </t>
          <t>
            For succinct VDFs (<xref target="Pietrzak2019"/>,
            <xref target="Wesolowski2019"/>), verification relies on the
            cryptographic hardness of the underlying problem
            (RSA group or
            class group). Soundness is computational rather
            than perfect.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="key-management-security">
      <name>Key Management</name>

      <t>
        Proper key management is essential for
        maintaining evidence integrity.
      </t>

      <dl>
        <dt>Hardware-Bound Keys:</dt>
        <dd>
          <t>
            When available, signing keys SHOULD be bound to
            hardware security
            modules (TPM, Secure Enclave). Hardware binding provides:
          </t>
          <ul>
            <li>
              Key non-exportability: Private keys cannot
              be extracted from
              the device
            </li>
            <li>
              Device binding: Evidence can be tied to a
              specific physical
              device
            </li>
            <li>
              Tamper resistance: Key compromise requires physical attack
            </li>
          </ul>
        </dd>

        <dt>Session Keys:</dt>
        <dd>
          <t>
            The checkpoint-chain-key used for chain-mac
            computation SHOULD
            be derived uniquely for each session. Key
            derivation SHOULD use
            HKDF (RFC 5869) with domain separation:
          </t>
          <artwork><![CDATA[
chain-key = HKDF-SHA256(
    salt = session-entropy,
    ikm = device-master-key,
    info = "witnessd-chain-v1" || session-id
)
]]></artwork>
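          <t>
            The derivation above can be rendered directly with the
            RFC 5869 extract-then-expand steps. This is a non-normative
            sketch using only standard-library primitives; the key and
            session values shown are placeholders:
          </t>

```python
import hmac
import hashlib

def hkdf_sha256(salt: bytes, ikm: bytes, info: bytes,
                length: int = 32) -> bytes:
    # RFC 5869: extract a pseudorandom key, then expand it with the
    # domain-separation string mixed into every output block.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()     # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                               # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder inputs mirroring the fields named in the artwork above.
session_entropy = b"\x01" * 32
device_master_key = b"\x02" * 32
session_id = b"session-0001"

chain_key = hkdf_sha256(salt=session_entropy,
                        ikm=device_master_key,
                        info=b"witnessd-chain-v1" + session_id)
```

          <t>
            Because session-entropy and session-id both enter the
            derivation, two sessions on the same device yield unrelated
            chain keys even though the device master key is constant.
          </t>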
        </dd>

        <dt>Key Rotation:</dt>
        <dd>
          <t>
            Device keys SHOULD be rotated periodically
            (RECOMMENDED: annually)
            or upon suspected compromise. Evidence packets created with
            revoked keys SHOULD be flagged during verification.
          </t>
        </dd>
      </dl>
    </section>
  </section>

  <section anchor="attesting-environment-trust">
    <name>Attesting Environment Trust</name>

    <t>
      The Attesting Environment (AE) is the
      witnessd-core software running
      on the author's device. Understanding what the AE is
      trusted for, and
      what it is NOT trusted for, is essential for
      correct interpretation
      of evidence.
    </t>

    <section anchor="ae-trust-scope">
      <name>What the AE Is Trusted For</name>

      <t>
        The AE is trusted to perform accurate observation
        and honest reporting
        of the specific data it captures:
      </t>

      <dl>
        <dt>Accurate Timing Measurement:</dt>
        <dd>
          <t>
            The AE is trusted to accurately measure
            inter-keystroke intervals
            and other timing data. This does not require
            trusting the content
            of keystrokes, only the timing between events.
          </t>
        </dd>

        <dt>Correct Hash Computation:</dt>
        <dd>
          <t>
            The AE is trusted to correctly compute
            cryptographic hashes of
            document content. Verification can detect
            incorrect hashes, but
            cannot detect if the AE computed a hash of different content
            than claimed.
          </t>
        </dd>

        <dt>VDF Execution:</dt>
        <dd>
          <t>
            The AE is trusted to actually execute VDF
            iterations rather than
            fabricating outputs. This trust is partially verifiable: VDF
            outputs can be recomputed, but the claimed timing cannot be
            independently verified without calibration attestation.
          </t>
        </dd>

        <dt>Monitoring Events (for monitoring-dependent claims):</dt>
        <dd>
          <t>
            For claims in the monitoring-dependent category
            (types 16-63),
            the AE is trusted to have actually observed and reported the
            events (or non-events) it claims. This trust is
            documented in
            the ae-trust-basis field.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="ae-trust-limitations">
      <name>What the AE Is NOT Trusted For</name>

      <t>
        The specification explicitly does NOT rely on AE trust for the
        following:
      </t>

      <dl>
        <dt>Content Judgment:</dt>
        <dd>
          <t>
            The AE makes no claims about document quality, originality,
            accuracy, or appropriateness. Evidence documents
            process, not
            content merit.
          </t>
        </dd>

        <dt>Intent Inference:</dt>
        <dd>
          <t>
            The AE makes no claims about why the author
            performed specific
            actions, what the author was thinking, or whether the author
            intended to deceive. Evidence documents observable behavior,
            not mental states.
          </t>
        </dd>

        <dt>Authorship Attribution:</dt>
        <dd>
          <t>
            The AE makes no claims about who was operating
            the device. The
            evidence shows that input events occurred on a
            device; it does
            not prove that a specific individual produced those events.
          </t>
        </dd>

        <dt>Cognitive Process:</dt>
        <dd>
          <t>
            Behavioral patterns consistent with human
            typing do not prove
            human cognition. An adversary could theoretically
            program input
            patterns that mimic human timing while the
            content originates
            elsewhere. The Jitter Seal makes this costly,
            not impossible.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="hardware-attestation-role">
      <name>Hardware Attestation Role</name>

      <t>
        Hardware attestation increases AE trust by binding evidence to
        verified hardware:
      </t>

      <dl>
        <dt><xref target="TPM2.0"/> (Linux, Windows):</dt>
        <dd>
          <t>
            Provides platform integrity measurement (PCRs),
            key sealing to
            platform state, and hardware-bound signing keys.
            TPM attestation
            proves that the AE was running on a specific
            device in a specific
            configuration.
          </t>
        </dd>

        <dt>Secure Enclave (macOS, iOS):</dt>
        <dd>
          <t>
            Provides hardware-bound key generation and
            signing operations.
            Keys generated in the Secure Enclave cannot be
            exported, binding
            signatures to the specific device.
          </t>
        </dd>

        <dt>Attestation Limitations:</dt>
        <dd>
          <t>
            Hardware attestation proves the signing key is
            hardware-bound;
            it does not prove the AE software is unmodified.
            Full AE integrity
            would require secure boot attestation and runtime integrity
            measurement, which are platform-specific and not universally
            available.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="compromised-ae-scenarios">
      <name>Compromised AE Scenarios</name>

      <t>
        Understanding the impact of AE compromise is essential for risk
        assessment:
      </t>

      <dl>
        <dt>Modified AE Software:</dt>
        <dd>
          <t>
            An adversary running modified AE software can fabricate any
            monitoring-dependent claims (types 16-63). Chain-verifiable
            claims (types 1-15) remain bound by VDF computational
            requirements even with modified software.
          </t>
        </dd>

        <dt>Fake Calibration:</dt>
        <dd>
          <t>
            Modified software could report artificially slow calibration
            rates, making subsequent VDF computations appear
            to take longer
            than they actually did. This attack is mitigated by:
          </t>
          <ul>
            <li>
              Hardware-signed calibration attestation (when available)
            </li>
            <li>
              Plausibility checks based on device class
            </li>
            <li>
              External anchor cross-validation
            </li>
          </ul>
        </dd>

        <dt>Fabricated Jitter Data:</dt>
        <dd>
          <t>
            Modified software could generate synthetic timing data that
            mimics human patterns. The cost of this attack
            is bounded by:
          </t>
          <ul>
            <li>
              Real-time generation requirement (VDF entanglement)
            </li>
            <li>
              Statistical consistency across checkpoints
            </li>
            <li>
              Entropy threshold requirements
            </li>
          </ul>
          <t>
            See <xref target="jitter-simulation-attacks"/>
            for quantified
            bounds on simulation attacks.
          </t>
        </dd>

        <dt>Mitigation Summary:</dt>
        <dd>
          <t>
            AE compromise cannot reduce the VDF
            computational requirement
            or bypass the sequential execution constraint.
            Compromise enables
            fabrication of monitoring data but does not
            eliminate the time
            cost of forgery. The forgery-cost-section
            quantifies the minimum
            resources required even with full software control.
          </t>
        </dd>
      </dl>
    </section>
  </section>

  <section anchor="verification-security">
    <name>Verification Security</name>

    <t>
      The verification process must be secure against
      both malicious Evidence
      and malicious Verifiers.
    </t>

    <section anchor="verifier-independence">
      <name>Verifier Independence</name>

      <t>
        Evidence verification is designed to be
        independent of the Attester:
      </t>

      <dl>
        <dt>No Shared State:</dt>
        <dd>
          <t>
            Verification requires no communication with or data from the
            Attester beyond the Evidence packet itself. A Verifier with
            only the .pop file can perform complete verification.
          </t>
        </dd>

        <dt>Adversarial Verification:</dt>
        <dd>
          <t>
            A skeptical Verifier can appraise Evidence
            without trusting any
            claims made by the Attester. All cryptographic proofs are
            included and can be recomputed independently.
          </t>
        </dd>

        <dt>Multiple Independent Verifiers:</dt>
        <dd>
          <t>
            Multiple Verifiers appraising the same Evidence should reach
            consistent results for chain-verifiable claims. Monitoring-
            dependent claims may receive different
            confidence assessments
            based on Verifier policies.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="sampling-strategies">
      <name>Sampling Strategies for Large Evidence Packets</name>

      <t>
        Evidence packets may contain thousands of checkpoints. Full
        verification of all VDF proofs may be impractical. Verifiers
        MAY use sampling strategies:
      </t>

      <dl>
        <dt>Boundary Verification:</dt>
        <dd>
          <t>
            Always verify the first and last checkpoints fully. This
            confirms the chain endpoints.
          </t>
        </dd>

        <dt>Random Sampling:</dt>
        <dd>
          <t>
            Randomly select checkpoints for full VDF verification. If
            any sampled checkpoint fails, reject the entire Evidence.
            With k checkpoints sampled independently (with replacement)
            from n, the probability of detecting a single invalid
            checkpoint is 1 - (1 - 1/n)^k; sampling k distinct
            checkpoints without replacement detects it with
            probability k/n.
          </t>
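          <t>
            Under the with-replacement model used by the formula above,
            the detection probability is simple to compute. The helper
            below is illustrative only, not part of the specification:
          </t>

```python
def detection_probability(n: int, k: int, invalid: int = 1) -> float:
    # Probability that at least one of `invalid` bad checkpoints is
    # hit when k checkpoints are sampled independently (with
    # replacement) from a chain of n. With invalid = 1 this reduces
    # to 1 - (1 - 1/n)**k.
    return 1.0 - (1.0 - invalid / n) ** k
```

          <t>
            A Verifier can invert this relation to choose k for a
            target confidence: detection probability grows with the
            sample count but with diminishing returns, so doubling k
            less than doubles the probability of catching a single bad
            checkpoint.
          </t>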
        </dd>

        <dt>Chain Linkage Verification:</dt>
        <dd>
          <t>
            Verify prev-hash linkage for ALL checkpoints
            (computationally
            cheap). This ensures no checkpoints were removed
            or reordered.
          </t>
        </dd>

        <dt>Anchor-Bounded Verification:</dt>
        <dd>
          <t>
            If external anchors are present, prioritize verification of
            checkpoints adjacent to anchors. External timestamps bound
            the timeline at anchor points.
          </t>
        </dd>

        <dt>Sampling Disclosure:</dt>
        <dd>
          <t>
            Attestation Results SHOULD disclose the sampling
            strategy used
            and the number of checkpoints fully verified.
            Relying Parties
            can assess whether the sampling provides adequate confidence
            for their use case.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="external-anchor-verification">
      <name>External Anchor Verification</name>

      <t>
        External anchors (RFC 3161 timestamps,
        blockchain proofs) provide
        absolute time binding but introduce additional
        trust requirements:
      </t>

      <dl>
        <dt>Timestamp Authority Trust:</dt>
        <dd>
          <t>
            Timestamps per <xref target="RFC3161"/> require trust in the Time
            Stamping Authority
            (TSA). Verifiers SHOULD use TSAs with published policies and
            audit records. Multiple TSAs MAY be used for redundancy.
          </t>
        </dd>

        <dt>Blockchain Anchor Verification:</dt>
        <dd>
          <t>
            Blockchain-based anchors require access to blockchain data
            (directly or via APIs). Verifiers SHOULD verify:
          </t>
          <ul>
            <li>
              The transaction containing the anchor is confirmed
            </li>
            <li>
              Sufficient confirmations for the security level required
            </li>
            <li>
              The anchor commitment matches the expected checkpoint data
            </li>
          </ul>
        </dd>

        <dt>Anchor Freshness:</dt>
        <dd>
          <t>
            Anchors prove that Evidence existed at the anchor time; they
            do not prove Evidence was created at that time. An adversary
            could create Evidence, wait, then obtain an anchor. This is
            mitigated by anchor coverage requirements (multiple anchors
            throughout the session).
          </t>
        </dd>
      </dl>
    </section>
  </section>

  <section anchor="protocol-security">
    <name>Protocol Security</name>

    <t>
      This section addresses protocol-level attacks
      and mitigations, drawing
      on the per-section security analyses.
    </t>

    <section anchor="replay-attack-prevention">
      <name>Replay Attack Prevention</name>

      <t>
        Replay attacks attempt to reuse valid evidence
        components in invalid
        contexts. Multiple mechanisms prevent replay:
      </t>

      <dl>
        <dt>Nonce Binding:</dt>
        <dd>
          <t>
            Session entropy (random 256-bit seed) is
            incorporated into the
            genesis checkpoint VDF input. This prevents
            precomputation of
            VDF outputs before a session begins.
          </t>
        </dd>

        <dt>Chain Binding:</dt>
        <dd>
          <t>
            Each checkpoint includes prev-hash, binding it
            to the specific
            chain history. Checkpoints cannot be transplanted
            between chains
            without invalidating the hash linkage.
          </t>
          <t>
            See <xref target="jitter-replay-attacks"/>
            for jitter-specific
            replay prevention.
          </t>
        </dd>

        <dt>Sequence Binding:</dt>
        <dd>
          <t>
            Checkpoint sequence numbers MUST be strictly
            monotonic. Duplicate
            or out-of-order sequence numbers indicate manipulation.
          </t>
        </dd>

        <dt>Content Binding:</dt>
        <dd>
          <t>
            VDF inputs incorporate content-hash, binding
            temporal proofs to
            specific document states. Evidence for one
            document cannot be
            transferred to another without VDF recomputation.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="transplant-attack-prevention">
      <name>Transplant Attack Prevention</name>

      <t>
        Transplant attacks attempt to associate legitimate evidence from
        one context with content from another context:
      </t>

      <dl>
        <dt>Content-VDF Binding:</dt>
        <dd>
          <t>
            The VDF input includes content-hash:
          </t>
          <artwork><![CDATA[
VDF_input{N} = H(
    VDF_output{N-1} ||
    content-hash{N} ||
    jitter-commitment{N} ||
    sequence{N}
)
]]></artwork>
          <t>
            Changing the document content requires
            recomputing all subsequent
            VDF proofs.
          </t>
        </dd>

        <dt>Jitter-VDF Binding:</dt>
        <dd>
          <t>
            The jitter-commitment is entangled with VDF
            input. Transplanting
            jitter data from another session is infeasible
            because it would
            require the original VDF output (which depends on different
            content) or recomputing the entire VDF chain with new jitter
            (which requires capturing new behavioral
            entropy in real time).
          </t>
        </dd>

        <dt>Chain MAC:</dt>
        <dd>
          <t>
            The chain-mac field HMAC-binds checkpoints to the session's
            chain-key:
          </t>
          <artwork><![CDATA[
chain-mac = HMAC-SHA256(
    key = chain-key,
    message = checkpoint-hash || sequence || session-id
)
]]></artwork>
          <t>
            Without the chain-key, an adversary cannot construct valid
            chain-mac values for transplanted checkpoints.
          </t>
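          <t>
            The chain-mac computation above can be sketched directly.
            This rendering is non-normative; in particular, the 8-byte
            big-endian encoding of the sequence number is an assumption
            made for the sketch, not a requirement stated here:
          </t>

```python
import hmac
import hashlib

def compute_chain_mac(chain_key: bytes, checkpoint_hash: bytes,
                      sequence: int, session_id: bytes) -> bytes:
    # HMAC-SHA256 over the concatenation shown in the artwork above.
    # The sequence number is serialized as 8 bytes big-endian (an
    # illustrative encoding choice).
    msg = checkpoint_hash + sequence.to_bytes(8, "big") + session_id
    return hmac.new(chain_key, msg, hashlib.sha256).digest()
```

          <t>
            Any change to the key, the checkpoint hash, the sequence
            number, or the session identifier yields an unrelated MAC,
            which is what defeats transplantation of checkpoints into a
            foreign session.
          </t>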
        </dd>
      </dl>
    </section>

    <section anchor="backdating-attack-costs">
      <name>Backdating Attack Costs</name>

      <t>
        Backdating creates evidence claiming a process occurred earlier
        than it actually did. The cost of backdating is quantified by
        the VDF recomputation requirement:
      </t>

      <dl>
        <dt>VDF Recomputation:</dt>
        <dd>
          <t>
            To backdate evidence by inserting or modifying
            checkpoints at
            position P, the adversary must recompute all VDF proofs from
            position P forward. This requires:
          </t>
          <artwork><![CDATA[
backdate_time >= sum(iterations[i] for i = P..N) / adversary_vdf_rate
]]></artwork>
          <t>
            where N is the final checkpoint. Backdating by a significant
            amount (hours or days) requires proportional
            wall-clock time.
          </t>
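          <t>
            The bound is simple arithmetic over the per-checkpoint
            iteration counts. A non-normative helper (the parameter
            names are illustrative):
          </t>

```python
def min_backdate_seconds(iterations_per_checkpoint: list[int],
                         p: int,
                         adversary_vdf_rate: float) -> float:
    # Lower bound on wall-clock time to recompute checkpoints P..N,
    # per the inequality above. The rate is in hash iterations per
    # second on the adversary's fastest sequential hardware.
    return sum(iterations_per_checkpoint[p:]) / adversary_vdf_rate
```

          <t>
            Because the sum runs from the modified position to the end
            of the chain, tampering near the genesis checkpoint costs
            nearly the full session's worth of sequential computation,
            while tampering near the end costs little; external anchors
            close that remaining gap.
          </t>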
        </dd>

        <dt>External Anchor Constraints:</dt>
        <dd>
          <t>
            If external anchors exist in the chain,
            backdating is constrained
            to the interval between anchors. An adversary
            cannot backdate
            before an anchor without also forging the
            external timestamp.
          </t>
        </dd>

        <dt>Cost Quantification:</dt>
        <dd>
          <t>
            The forgery-cost-section provides explicit cost bounds for
            backdating attacks, including compute costs, time costs, and
            economic estimates.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="omission-attack-prevention">
      <name>Omission Attack Prevention</name>

      <t>
        Omission attacks selectively remove checkpoints
        to hide unfavorable
        evidence:
      </t>

      <dl>
        <dt>Sequence Verification:</dt>
        <dd>
          <t>
            Checkpoint sequence numbers MUST be consecutive.
            Missing sequence
            numbers indicate omission. Verifiers MUST reject chains with
            non-consecutive sequences.
          </t>
        </dd>

        <dt>Hash Chain Integrity:</dt>
        <dd>
          <t>
            Removing a checkpoint breaks the hash chain (subsequent
            checkpoint's prev-hash will not match). Repairing the chain
            requires recomputing all subsequent checkpoint hashes and
            VDF proofs.
          </t>
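          <t>
            The two checks above (strictly consecutive sequence numbers
            and prev-hash linkage) can be sketched as a single pass
            over the chain. The checkpoint field names used here are
            illustrative, not the normative encoding:
          </t>

```python
import hashlib

def verify_chain_linkage(checkpoints: list[dict]) -> bool:
    # checkpoints: list of dicts with "sequence" (int), "prev_hash"
    # (bytes), and "payload" (bytes of the serialized checkpoint).
    for i, cp in enumerate(checkpoints):
        # Sequence numbers must be consecutive: a gap means a
        # checkpoint was omitted; a repeat or reversal means reorder.
        if cp["sequence"] != checkpoints[0]["sequence"] + i:
            return False
        # Each prev_hash must commit to the preceding checkpoint;
        # removing one breaks every subsequent link.
        if i > 0:
            expected = hashlib.sha256(
                checkpoints[i - 1]["payload"]).digest()
            if cp["prev_hash"] != expected:
                return False
    return True
```

          <t>
            This linkage pass is cheap enough to run over every
            checkpoint even when full VDF verification is sampled, which
            is why the sampling strategies above still mandate it for
            the whole chain.
          </t>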
        </dd>

        <dt>Completeness Claims:</dt>
        <dd>
          <t>
            The checkpoint-chain-complete absence claim
            (type 6) explicitly
            asserts that no checkpoints were omitted. This claim is
            chain-verifiable.
          </t>
        </dd>
      </dl>
    </section>
  </section>

  <section anchor="operational-security">
    <name>Operational Security</name>

    <t>
      Security of the overall system depends on proper
      operational practices
      beyond the protocol specification.
    </t>

    <section anchor="key-lifecycle">
      <name>Key Lifecycle Management</name>

      <dl>
        <dt>Key Generation:</dt>
        <dd>
          <t>
            Device keys SHOULD be generated within hardware
            security modules
            when available. Software-generated keys MUST use
            cryptographically
            secure random number generators.
          </t>
        </dd>

        <dt>Key Storage:</dt>
        <dd>
          <t>
            Private keys SHOULD be stored in platform-appropriate secure
            storage:
          </t>
          <ul>
            <li>macOS: Secure Enclave or Keychain</li>
            <li>Linux: TPM or system keyring</li>
            <li>Windows: TPM or DPAPI</li>
          </ul>
          <t>
            Keys MUST NOT be stored in plaintext in the filesystem.
          </t>
        </dd>

        <dt>Key Rotation:</dt>
        <dd>
          <t>
            Organizations SHOULD establish key rotation policies.
            RECOMMENDED rotation interval: annually or upon personnel
            changes. Evidence packets created with revoked keys SHOULD
            receive reduced confidence scores.
          </t>
        </dd>

        <dt>Key Revocation:</dt>
        <dd>
          <t>
            Mechanisms for key revocation are outside the scope of this
            specification but SHOULD be considered for deployment.
            Certificate revocation lists (CRLs) or OCSP may
            be appropriate
            for managed environments.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="evidence-storage-transmission">
      <name>Evidence Packet Storage and Transmission</name>

      <dl>
        <dt>Integrity Protection:</dt>
        <dd>
          <t>
            Evidence packets are self-protecting through cryptographic
            binding. Additional encryption is not required for integrity
            but MAY be applied for confidentiality.
          </t>
        </dd>

        <dt>Confidentiality Considerations:</dt>
        <dd>
          <t>
            Evidence packets contain document hashes and
            behavioral data.
            While content is not included, statistical information about
            the authoring process is present. Transmission
            over untrusted
            networks SHOULD use TLS 1.3 or equivalent.
          </t>
        </dd>

        <dt>Archival Storage:</dt>
        <dd>
          <t>
            Evidence packets intended for long-term storage SHOULD be:
          </t>
          <ul>
            <li>
              Stored with redundancy (multiple copies,
              geographic distribution)
            </li>
            <li>
              Protected against bit rot (checksums,
              error-correcting codes)
            </li>
            <li>
              Associated with necessary verification materials
              (public keys,
              anchor confirmations)
            </li>
          </ul>
        </dd>

        <dt>Retention Policies:</dt>
        <dd>
          <t>
            Organizations SHOULD establish retention policies balancing
            evidentiary value against privacy considerations. Jitter data
            has privacy implications; retention beyond the verification
            period may not be necessary or desirable.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="verifier-policy">
      <name>Verifier Policy Considerations</name>

      <dl>
        <dt>Minimum Requirements:</dt>
        <dd>
          <t>
            Verifiers SHOULD establish minimum requirements
            for acceptable
            Evidence:
          </t>
          <ul>
            <li>
              Minimum evidence tier (Basic, Standard,
              Enhanced, Maximum)
            </li>
            <li>
              Minimum VDF duration relative to
              claimed authoring time
            </li>
            <li>Minimum entropy threshold</li>
            <li>Required absence claims for specific use cases</li>
          </ul>
        </dd>

        <dt>Confidence Thresholds:</dt>
        <dd>
          <t>
            Verifiers SHOULD define confidence thresholds
            for acceptance:
          </t>
          <ul>
            <li>Low-stakes: confidence &gt;= 0.3 may be acceptable</li>
            <li>Standard: confidence &gt;= 0.5 typical requirement</li>
            <li>High-stakes: confidence &gt;= 0.7 recommended</li>
            <li>Litigation: confidence &gt;= 0.8 with Maximum tier</li>
          </ul>
        </dd>

        <dt>Caveat Handling:</dt>
        <dd>
          <t>
            Verifiers SHOULD define how caveats affect
            acceptance decisions.
            Some caveats may be disqualifying for specific
            use cases (e.g.,
            "no hardware attestation" may be unacceptable
            for high-stakes
            verification).
          </t>
        </dd>
      </dl>
    </section>
  </section>

  <section anchor="limitations-nongoals">
    <name>Limitations and Non-Goals</name>

    <t>
      This section explicitly documents what the specification does NOT
      protect against and what it does NOT claim to achieve.
    </t>

    <section anchor="unprotected-attacks">
      <name>Attacks Not Protected Against</name>

      <dl>
        <dt>Collusion:</dt>
        <dd>
          <t>
            If the author and a third party collude (e.g., the author
            provides their device credentials to another
            person who types
            while the author is credited), the Evidence will show a
            legitimate-looking process. The specification documents
            observable behavior, not identity.
          </t>
        </dd>

        <dt>Pre-Prepared Content:</dt>
        <dd>
          <t>
            An author could slowly type pre-prepared content, creating
            Evidence of a gradual process for content that
            already existed.
            The specification documents that typing occurred, not that
            thinking occurred during typing.
          </t>
        </dd>

        <dt>External Input Devices:</dt>
        <dd>
          <t>
            Input from devices not monitored by the AE (e.g., hardware
            keystroke injectors, remote desktop from
            unmonitored machines)
            may not be distinguishable from local input. Hardware-level
            input verification is outside scope.
          </t>
        </dd>

        <dt>Social Engineering:</dt>
        <dd>
          <t>
            Attacks that manipulate Relying Parties into accepting
            inappropriate Evidence (e.g., convincing a reviewer that
            weak Evidence is sufficient) are outside scope.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="honest-author-assumption">
      <name>The Honest Author Assumption</name>

      <t>
        The specification fundamentally documents PROCESS, not INTENT:
      </t>

      <dl>
        <dt>Evidence Shows What Happened:</dt>
        <dd>
          <t>
            Evidence shows that input events occurred with
            specific timing
            patterns, that VDF computation required certain time, that
            document states changed in sequence. Evidence does not show
            why any of this happened.
          </t>
        </dd>

        <dt>Process != Cognition:</dt>
        <dd>
          <t>
            Evidence that an author typed content gradually
            does not prove
            the author thought of that content. The author
            could have been
            transcribing, copying from memory, or following dictation.
          </t>
        </dd>

        <dt>Behavioral Consistency:</dt>
        <dd>
          <t>
            The correct interpretation of Evidence is
            "behavioral consistency":
            the observable process was consistent with the
            claimed process.
            This is weaker than "authorship proof" but is verifiable and
            falsifiable.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="content-agnostic">
      <name>Content-Agnostic By Design</name>

      <t>
        The specification is deliberately content-agnostic:
      </t>

      <dl>
        <dt>No Semantic Analysis:</dt>
        <dd>
          <t>
            Evidence contains document hashes, not content.
            The specification
            makes no claims about what was written, only how
            it was written.
          </t>
        </dd>

        <dt>No Quality Assessment:</dt>
        <dd>
          <t>
            Evidence does not indicate whether content is
            good, original,
            accurate, or valuable. Strong Evidence can
            accompany poor content;
            excellent content can have weak Evidence.
          </t>
        </dd>

        <dt>No AI Detection:</dt>
        <dd>
          <t>
            The specification explicitly does NOT claim to
            detect whether
            content was "written by AI" or "written by a human" in terms
            of content origin. It documents the observable
            INPUT process,
            which is distinct from content generation.
          </t>
        </dd>

        <dt>Privacy Benefit:</dt>
        <dd>
          <t>
            Content-agnosticism is a privacy feature. Evidence can be
            verified without accessing the document content, enabling
            verification of confidential documents.
          </t>
        </dd>
      </dl>
    </section>
  </section>

  <section anchor="comparison-related-work">
    <name>Comparison to Related Work</name>

    <t>
      This section compares the security model of
      witnessd Proof of Process
      to related attestation and timestamping systems.
    </t>

    <section anchor="comparison-timestamping">
      <name>Comparison to Traditional Timestamping</name>

      <t>
        Traditional timestamping (<xref target="RFC3161"/>) proves that
        a document existed
        at a point in time. Proof of Process provides
        additional properties:
      </t>

      <table>
        <thead>
          <tr>
            <th>Property</th>
            <th>RFC 3161</th>
            <th>Proof of Process</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>Existence proof</td>
            <td>Yes (point in time)</td>
            <td>Yes (continuous)</td>
          </tr>
          <tr>
            <td>Process documentation</td>
            <td>No</td>
            <td>Yes</td>
          </tr>
          <tr>
            <td>Behavioral evidence</td>
            <td>No</td>
            <td>Yes (jitter)</td>
          </tr>
          <tr>
            <td>Temporal ordering</td>
            <td>No (independent timestamps)</td>
            <td>Yes (VDF chain)</td>
          </tr>
          <tr>
            <td>Third-party trust</td>
            <td>Required (TSA)</td>
            <td>Optional (anchors)</td>
          </tr>
          <tr>
            <td>Local generation</td>
            <td>No (requires TSA interaction)</td>
            <td>Yes</td>
          </tr>
        </tbody>
      </table>

      <t>
        Proof of Process is complementary to timestamping.
        External anchors
        (including RFC 3161 timestamps) provide absolute
        time binding that
        strengthens VDF-based relative ordering.
      </t>
    </section>

    <section anchor="comparison-code-signing">
      <name>Comparison to Code Signing</name>

      <t>
        Code signing attests to the identity of the
        signer and integrity of
        the signed artifact. Proof of Process serves different goals:
      </t>

      <table>
        <thead>
          <tr>
            <th>Property</th>
            <th>Code Signing</th>
            <th>Proof of Process</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>Identity binding</td>
            <td>Strong (PKI)</td>
            <td>Weak (device-bound)</td>
          </tr>
          <tr>
            <td>Artifact integrity</td>
            <td>Yes</td>
            <td>Yes (hash binding)</td>
          </tr>
          <tr>
            <td>Creation process</td>
            <td>No</td>
            <td>Yes</td>
          </tr>
          <tr>
            <td>Temporal properties</td>
            <td>Timestamp only</td>
            <td>Duration, ordering</td>
          </tr>
          <tr>
            <td>Use case</td>
            <td>Software distribution</td>
            <td>Authoring documentation</td>
          </tr>
        </tbody>
      </table>

      <t>
        Code signing establishes "who signed this"; Proof of Process
        establishes "how this was created." The two could be combined
        for comprehensive provenance documentation.
      </t>
    </section>

    <section anchor="comparison-rats">
      <name>Relationship to RATS Security Model</name>

      <t>
        Proof of Process implements an application-specific
        profile of the
        RATS architecture <xref target="RFC9334"/>. Key security model
        alignments:
      </t>

      <dl>
        <dt>Evidence vs. Attestation Results:</dt>
        <dd>
          <t>
            The separation between .pop (Evidence) and .war (Attestation
            Result) files follows the RATS distinction. Evidence is
            produced by the Attester; Attestation Results
            by the Verifier.
          </t>
        </dd>

        <dt>Appraisal Policy:</dt>
        <dd>
          <t>
            RATS defines Appraisal Policy for Evidence as the Verifier's
            rules for evaluating Evidence. The absence-claim thresholds
            and confidence-level requirements serve this role in Proof
            of Process.
          </t>
        </dd>

        <dt>Background Check vs. Passport Model:</dt>
        <dd>
          <t>
            Proof of Process supports both RATS models.
            The "passport model"
            applies when the author obtains a .war file and
            presents it to
            Relying Parties. The "background check model"
            applies when the
            Relying Party verifies the .pop file directly or through a
            trusted Verifier.
          </t>
        </dd>

        <dt>Freshness:</dt>
        <dd>
          <t>
            RATS freshness mechanisms (nonces, timestamps) align with
            the session-entropy and external-anchor mechanisms in Proof
            of Process. VDF proofs provide an additional freshness
            dimension: evidence of elapsed time.
          </t>
        </dd>

        <dt>Endorsements and Reference Values:</dt>
        <dd>
          <t>
            Hardware attestation in the hardware-section corresponds to
            RATS Endorsements. Calibration data serves as
            Reference Values
            for VDF timing verification.
          </t>
        </dd>
      </dl>

      <t>
        For RATS-specific security guidance, implementers should also
        consult the Security Considerations of <xref target="RFC9334"/>.
      </t>
    </section>
  </section>

  <section anchor="security-summary">
    <name>Security Properties Summary</name>

    <t>
      This section summarizes the security properties provided by the
      specification:
    </t>

    <section anchor="properties-provided">
      <name>Properties Provided</name>

      <dl>
        <dt>Tamper-Evidence:</dt>
        <dd>
          <t>
            Modifications to Evidence packets are detectable through
            cryptographic verification. The hash chain,
            VDF entanglement,
            and MAC bindings ensure that alteration
            invalidates the Evidence.
          </t>
        </dd>

        <dt>Cost-Asymmetric Forgery:</dt>
        <dd>
          <t>
            Producing counterfeit Evidence requires resources
            (time, compute,
            entropy generation) disproportionate to legitimate Evidence
            creation. The forgery-cost-section quantifies
            these requirements.
          </t>
        </dd>

        <dt>Independent Verifiability:</dt>
        <dd>
          <t>
            Evidence can be verified by any party without access to the
            original device, without trust in the Attester's
            infrastructure,
            and without network connectivity (except for
            external anchors).
          </t>
        </dd>

        <dt>Privacy by Construction:</dt>
        <dd>
          <t>
            Document content is never stored in Evidence.
            Behavioral data
            is aggregated before inclusion. The specification enforces
            privacy through structural constraints, not policy.
          </t>
        </dd>

        <dt>Temporal Ordering:</dt>
        <dd>
          <t>
            VDF chain construction provides unforgeable
            relative ordering
            of checkpoints. External anchors provide
            absolute time binding.
          </t>
        </dd>

        <dt>Behavioral Binding:</dt>
        <dd>
          <t>
            Jitter Seal entanglement binds captured
            behavioral entropy to
            the checkpoint chain, making Evidence
            transplantation infeasible.
          </t>
        </dd>
      </dl>
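      <t>
        The chaining property behind temporal ordering can be
        illustrated non-normatively: each checkpoint's delay proof takes
        the previous proof's output as input, so proofs cannot be
        computed out of order. In the sketch below, iterated hashing
        stands in for the specification's VDF (unlike a real VDF, it is
        not efficiently verifiable); only the chaining structure is the
        point.
      </t>
      <sourcecode type="python"><![CDATA[
import hashlib

def delay(seed: bytes, iterations: int) -> bytes:
    """Sequential stand-in for a VDF: forces serial computation."""
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

def chain(checkpoint_hashes, iterations=1000):
    """Each link depends on the previous output, fixing relative order."""
    out, link = [], b"\x00" * 32
    for ch in checkpoint_hashes:
        link = delay(link + ch, iterations)
        out.append(link)
    return out
]]></sourcecode>
      <t>
        Reordering the checkpoints changes every subsequent link, which
        is why transplanting or resequencing checkpoints invalidates the
        chain.
      </t>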
    </section>

    <section anchor="properties-not-provided">
      <name>Properties NOT Provided</name>

      <dl>
        <dt>Tamper-Proof:</dt>
        <dd>
          <t>
            Evidence CAN be forged given sufficient resources. The
            specification makes forgery costly, not impossible.
          </t>
        </dd>

        <dt>Identity Proof:</dt>
        <dd>
          <t>
            Evidence does NOT prove who operated the device. It proves
            that input events occurred on a device, not that a specific
            person produced them.
          </t>
        </dd>

        <dt>Intent Proof:</dt>
        <dd>
          <t>
            Evidence does NOT prove why actions occurred. Observable
            behavior is documented; mental states are not.
          </t>
        </dd>

        <dt>Content Origin Proof:</dt>
        <dd>
          <t>
            Evidence does NOT prove where ideas came from. The input
            process is documented; the cognitive source is not.
          </t>
        </dd>

        <dt>Absolute Certainty:</dt>
        <dd>
          <t>
            All security properties are bounded by explicit assumptions.
            No claim is made to be absolute, irrefutable, or guaranteed.
          </t>
        </dd>
      </dl>
    </section>
  </section>

</section>

    <!-- Section 8: Privacy Considerations -->
    <section anchor="privacy-considerations"
         xml:base="sections/privacy-considerations.xml">
  <name>Privacy Considerations</name>

  <t>
    This section consolidates privacy analysis for the witnessd Proof of
    Process specification. It references and extends
    the per-section privacy
    considerations defined in <xref target="jitter-privacy"/>,
    <xref target="absence-privacy"/>, and
    <xref target="privacy-construction"/>.
  </t>

  <t>
    Privacy is a core design goal of this
    specification, not an afterthought.
    The protocol implements privacy-by-construction:
    structural constraints
    that make privacy violations architecturally impossible, rather than
    relying on policy or trust. This approach follows the guidance of
    <xref target="RFC6973"/>
    (Privacy Considerations for Internet Protocols).
  </t>

  <section anchor="privacy-design-principles">
    <name>Privacy by Construction</name>

    <t>
      The witnessd evidence model enforces privacy through architectural
      constraints that cannot be circumvented without fundamentally
      modifying the protocol.
    </t>

    <section anchor="no-content-storage">
      <name>No Document Content Storage</name>

      <t>
        Evidence packets contain cryptographic hashes of
        document states,
        never the document content itself. This is a
        structural invariant:
      </t>

      <ul>
        <li>
          <t>Content Hash Binding:</t>
          <t>
            The document-ref structure (CDDL key 5 in evidence-packet)
            contains only a hash-value of the final document
            content, the
            byte-length, and character count. The content
            itself is never
            included in the Evidence packet.
          </t>
        </li>

        <li>
          <t>Checkpoint Content Hashes:</t>
          <t>
            Each checkpoint (key 4: content-hash) contains a hash of the
            document state at that point. An adversary with the Evidence
            packet but not the document cannot recover
            content from these
            hashes.
          </t>
        </li>

        <li>
          <t>Edit Deltas Without Content:</t>
          <t>
            The edit-delta structure (key 7 in checkpoint) records
            chars-added, chars-deleted, insertions, deletions, and
            replacements as counts only. No information about what
            characters were added or deleted is included.
          </t>
        </li>
      </ul>

      <t>
        This design enables verification of process
        without revealing what
        was written, supporting confidential document
        workflows where the
        evidence must be verifiable but the content must remain private.
      </t>
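      <t>
        The counts-only character of the edit-delta structure can be
        sketched as follows. The helper names are illustrative, but the
        field names follow the structure described above; the edited
        text itself never enters the structure:
      </t>
      <sourcecode type="python"><![CDATA[
def new_delta():
    """Fresh edit-delta with all counts at zero."""
    return {"chars-added": 0, "chars-deleted": 0,
            "insertions": 0, "deletions": 0, "replacements": 0}

def record_edit(delta, added, deleted):
    """Fold one edit into the running counts; no characters are kept."""
    delta["chars-added"] += added
    delta["chars-deleted"] += deleted
    if added and deleted:
        delta["replacements"] += 1
    elif added:
        delta["insertions"] += 1
    elif deleted:
        delta["deletions"] += 1
    return delta
]]></sourcecode>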
    </section>

    <section anchor="no-keystroke-capture">
      <name>No Keystroke Capture</name>

      <t>
        The specification captures inter-event timing intervals without
        recording which keys were pressed:
      </t>

      <ul>
        <li>
          <t>Timing-Only Measurement:</t>
          <t>
            Jitter-binding captures millisecond intervals between input
            events. The interval "127ms" carries no information about
            whether the interval was between 'a' and 'b' or between 'x'
            and 'y'.
          </t>
        </li>

        <li>
          <t>No Character Mapping:</t>
          <t>
            Timing intervals are stored in observation order without
            any association to specific characters, words, or semantic
            content.
          </t>
        </li>

        <li>
          <t>No Keyboard Event Codes:</t>
          <t>
            Scan codes, virtual key codes, and other
            keyboard identifiers
            are not recorded. The specification treats all input events
            uniformly as timing sources.
          </t>
        </li>
      </ul>

      <t>
        This architecture ensures that even with complete access to an
        Evidence packet, no information about what was typed can be
        reconstructed.
      </t>
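      <t>
        A minimal sketch of this capture boundary: only event timestamps
        enter the computation, and only their spacing survives (the
        function name is illustrative):
      </t>
      <sourcecode type="python"><![CDATA[
def intervals_ms(event_times_ms):
    """Derive inter-event intervals from timestamps alone.

    The input carries no key identity; an interval of 127ms is
    indistinguishable across any pair of keys.
    """
    return [b - a for a, b in zip(event_times_ms, event_times_ms[1:])]
]]></sourcecode>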
    </section>

    <section anchor="no-screen-capture">
      <name>No Screenshots or Screen Recording</name>

      <t>
        The specification explicitly excludes visual capture mechanisms:
      </t>

      <ul>
        <li>
          No screenshot capture at checkpoints or any other time
        </li>
        <li>
          No screen recording or video capture
        </li>
        <li>
          No window title or application name logging
        </li>
        <li>
          No clipboard content capture (only timing of clipboard events
          for monitoring-dependent absence claims, and
          only event counts,
          not content)
        </li>
      </ul>

      <t>
        Visual content capture would fundamentally violate the
        content-agnostic design and is architecturally excluded.
      </t>
    </section>

    <section anchor="local-generation">
      <name>Local Evidence Generation</name>

      <t>
        Evidence is generated entirely on the Attester device with no
        network dependency:
      </t>

      <ul>
        <li>
          <t>No Telemetry:</t>
          <t>
            The Attesting Environment does not transmit telemetry,
            analytics, or any behavioral data to external services.
          </t>
        </li>

        <li>
          <t>No Cloud Processing:</t>
          <t>
            All cryptographic computations (hashing, VDF, signatures)
            occur locally. No document content or behavioral data is
            sent to cloud services for processing.
          </t>
        </li>

        <li>
          <t>Optional External Anchors:</t>
          <t>
            The only network communication is optional: external anchors
            (RFC 3161, OpenTimestamps, blockchain) transmit only
            cryptographic hashes, never document content or behavioral
            data.
          </t>
        </li>
      </ul>

      <t>
        Users can generate and verify Evidence in fully air-gapped
        environments. External anchors enhance evidence strength but
        are not required.
      </t>
    </section>
  </section>

  <section anchor="data-minimization">
    <name>Data Minimization</name>

    <t>
      Following <xref target="RFC6973"/> Section 6.1, the specification
      minimizes data collection to what is strictly
      necessary for evidence
      generation and verification.
    </t>

    <section anchor="data-collected">
      <name>Data Collected</name>

      <t>
        The following data IS collected and included in
        Evidence packets:
      </t>

      <dl>
        <dt>Timing Histograms:</dt>
        <dd>
          <t>
            Inter-event timing intervals aggregated into
            histogram buckets
            (jitter-summary, key 3 in jitter-binding). Bucket boundaries
            are coarse (RECOMMENDED: 0, 50, 100, 200, 500, 1000, 2000,
            5000ms) to prevent precise interval reconstruction.
          </t>
        </dd>

        <dt>Edit Statistics:</dt>
        <dd>
          <t>
            Character counts for additions, deletions, and
            edit operations
            (edit-delta structure). These are aggregate counts, not
            positional data.
          </t>
        </dd>

        <dt>Checkpoint Hashes:</dt>
        <dd>
          <t>
            Cryptographic hashes of document states at each checkpoint.
            One-way functions; content cannot be recovered.
          </t>
        </dd>

        <dt>VDF Proofs:</dt>
        <dd>
          <t>
            Verifiable Delay Function outputs proving minimum elapsed
            time. These are computational proofs, not behavioral data.
          </t>
        </dd>

        <dt>Optional: Raw Timing Intervals:</dt>
        <dd>
          <t>
            The raw-intervals field (key 5 in jitter-binding) MAY be
            included for enhanced verification. This is OPTIONAL and
            user-controlled. When omitted, only histogram aggregates
            are included.
          </t>
        </dd>
      </dl>
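      <t>
        Histogram aggregation with the RECOMMENDED bucket boundaries can
        be sketched as follows (non-normative; the final bucket collects
        intervals of 5000ms and above):
      </t>
      <sourcecode type="python"><![CDATA[
import bisect

BOUNDARIES_MS = [0, 50, 100, 200, 500, 1000, 2000, 5000]

def jitter_summary(intervals_ms):
    """Count intervals per bucket; precise values are discarded.

    Bucket i covers [BOUNDARIES_MS[i], BOUNDARIES_MS[i+1]); the last
    bucket is open-ended.
    """
    counts = [0] * len(BOUNDARIES_MS)
    for iv in intervals_ms:
        # bisect_right - 1 locates the bucket whose floor is <= iv.
        counts[bisect.bisect_right(BOUNDARIES_MS, iv) - 1] += 1
    return counts
]]></sourcecode>
      <t>
        Once aggregated, an interval of 127ms is recorded only as one
        more count in the 100-200ms bucket.
      </t>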
    </section>

    <section anchor="data-not-collected">
      <name>Data NOT Collected</name>

      <t>
        The following data is explicitly NOT collected:
      </t>

      <ul>
        <li>
          Document content (text, images, formatting)
        </li>
        <li>
          Individual characters or words typed
        </li>
        <li>
          Keyboard scan codes or key identifiers
        </li>
        <li>
          Screenshots or visual captures
        </li>
        <li>
          Screen recordings or video
        </li>
        <li>
          Clipboard content (only event timing)
        </li>
        <li>
          Window titles or application names
        </li>
        <li>
          User names, email addresses, or identifiers (optional: author
          declaration is user-controlled)
        </li>
        <li>
          IP addresses or network identifiers
        </li>
        <li>
          Location data
        </li>
      </ul>
    </section>

    <section anchor="disclosure-levels">
      <name>Disclosure Levels</name>

      <t>
        The specification supports tiered disclosure
        through optional fields:
      </t>

      <table>
        <thead>
          <tr>
            <th>Level</th>
            <th>Data Included</th>
            <th>Privacy Impact</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>Minimal</td>
            <td>Hashes, VDF proofs, histogram summaries only</td>
            <td>Lowest</td>
          </tr>
          <tr>
            <td>Standard</td>
            <td>+ Presence challenges, forensics section</td>
            <td>Low-Moderate</td>
          </tr>
          <tr>
            <td>Enhanced</td>
            <td>+ Raw timing intervals, keystroke section</td>
            <td>Moderate</td>
          </tr>
          <tr>
            <td>Maximum</td>
            <td>+ Hardware attestation, absence claims</td>
            <td>Higher</td>
          </tr>
        </tbody>
      </table>

      <t>
        Users SHOULD select the minimum disclosure level
        that meets their
        verification requirements. Higher tiers provide
        stronger evidence
        at the cost of revealing more behavioral data.
      </t>
    </section>
  </section>

  <section anchor="biometric-adjacent-data">
    <name>Biometric-Adjacent Data</name>

    <t>
      Keystroke timing data, while not traditionally
      classified as biometric,
      has biometric-adjacent properties that warrant
      special consideration.
      This section addresses regulatory considerations and mitigation
      measures.
    </t>

    <section anchor="keystroke-timing-risks">
      <name>Identification Risks</name>

      <t>
        Research has demonstrated that keystroke dynamics can serve as
        a behavioral biometric:
      </t>

      <ul>
        <li>
          <t>Individual Identification:</t>
          <t>
            Detailed timing patterns can theoretically distinguish
            individuals with high accuracy across sessions.
          </t>
        </li>

        <li>
          <t>State Detection:</t>
          <t>
            Timing variations may correlate with cognitive state,
            fatigue, stress, or physical condition.
          </t>
        </li>

        <li>
          <t>Re-identification Risk:</t>
          <t>
            If an adversary has access to multiple Evidence packets from
            the same author, timing patterns might enable linkage across
            sessions even without explicit identity.
          </t>
        </li>
      </ul>
    </section>

    <section anchor="biometric-mitigations">
      <name>Mitigation Measures</name>

      <t>
        The specification implements several measures to reduce
        biometric-adjacent risks:
      </t>

      <dl>
        <dt>Histogram Aggregation:</dt>
        <dd>
          <t>
            By default, only histogram-aggregated timing
            data is included
            in Evidence packets. The RECOMMENDED minimum bucket width
            of 50ms significantly reduces the precision available for
            behavioral fingerprinting.
          </t>
        </dd>

        <dt>Bucket Granularity:</dt>
        <dd>
          <t>
            The RECOMMENDED bucket boundaries (0, 50, 100, 200, 500,
            1000, 2000, 5000ms) capture statistically relevant patterns
            while preventing reconstruction of precise keystroke
            sequences. Implementations MAY use coarser buckets for
            enhanced privacy.
          </t>
        </dd>

        <dt>No Character Association:</dt>
        <dd>
          <t>
            Timing intervals have no mapping to specific characters.
            The pattern "fast-slow-fast" reveals rhythm without content.
          </t>
        </dd>

        <dt>Session Isolation:</dt>
        <dd>
          <t>
            Each Evidence packet is independent. Cross-session linkage
            requires access to multiple packets. The specification does
            not provide mechanisms for linking sessions.
          </t>
        </dd>

        <dt>Optional Raw Disclosure:</dt>
        <dd>
          <t>
            Raw timing intervals (key 5 in jitter-binding) are optional.
            Users concerned about biometric exposure can ensure this
            field is not populated.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="regulatory-considerations">
      <name>Regulatory Considerations</name>

      <t>
        Implementations and deployments should consider applicable
        privacy regulations:
      </t>

      <dl>
        <dt>GDPR (EU/EEA):</dt>
        <dd>
          <t>
            Keystroke dynamics may constitute "special categories of
            personal data" under Article 9 if used for identification
            purposes. Implementations should document whether timing
            data is used for identification (prohibited without explicit
            consent) or solely for process evidence (may fall under
            different legal basis).
          </t>
        </dd>

        <dt>CCPA (California):</dt>
        <dd>
          <t>
            Biometric information is covered under CCPA Section
            1798.140(b). Users have rights to know, delete, and opt-out.
            The local-only processing model simplifies compliance.
          </t>
        </dd>

        <dt>BIPA (Illinois):</dt>
        <dd>
          <t>
            The Illinois Biometric Information Privacy Act has strict
            requirements for biometric data collection, including
            written policies and consent. Deployments in Illinois
            should consult legal counsel.
          </t>
        </dd>
      </dl>

      <t>
        The specification's local-only processing model and user control
        over data disclosure support compliance, but
        legal interpretation
        varies by jurisdiction.
      </t>
    </section>

    <section anchor="user-disclosure-requirements">
      <name>User Disclosure Requirements</name>

      <t>
        Implementations MUST inform users about behavioral
        data collection:
      </t>

      <ol>
        <li>
          Clear notification that timing data is captured
          during authoring
        </li>
        <li>
          Explanation of what timing data reveals and does not reveal
        </li>
        <li>
          Disclosure of where Evidence packets may be transmitted
        </li>
        <li>
          User control over disclosure levels (histogram-only vs. raw)
        </li>
        <li>
          Instructions for disabling timing capture if desired
        </li>
        <li>
          Process for reviewing and deleting captured data
        </li>
      </ol>

      <t>
        These disclosures SHOULD be presented before Evidence generation
        begins, not buried in terms of service.
      </t>
    </section>
  </section>

  <section anchor="salt-modes-privacy">
    <name>Salt Modes for Content Privacy</name>

    <t>
      The hash-salt-mode field (CDDL lines 164-168)
      enables privacy-preserving
      verification scenarios where document binding
      should not be globally
      verifiable.
    </t>

    <section anchor="unsalted-mode">
      <name>Unsalted Mode (Value 0)</name>

      <artwork><![CDATA[
content-hash = H(document-content)
]]></artwork>

      <t>
        Properties:
      </t>

      <ul>
        <li>
          Anyone with the document can verify the binding
        </li>
        <li>
          No additional secret required for verification
        </li>
        <li>
          Document existence can be confirmed by any party with content
        </li>
      </ul>

      <t>
        Use cases:
      </t>

      <ul>
        <li>
          Public documents where verification should be open
        </li>
        <li>
          Academic submissions where verifiers have document access
        </li>
        <li>
          Published works where authorship claims should be checkable
        </li>
      </ul>

      <t>
        Privacy implications: Anyone who obtains both the document and
        the Evidence packet can confirm the binding. If document
        confidentiality matters, consider salted modes.
      </t>
    </section>

    <section anchor="author-salted-mode">
      <name>Author-Salted Mode (Value 1)</name>

      <artwork><![CDATA[
content-hash = H(salt || document-content)
salt-commitment = H(salt)
]]></artwork>

      <t>
        Properties:
      </t>

      <ul>
        <li>
          Author generates and retains the salt
        </li>
        <li>
          Evidence packet contains salt-commitment, not salt
        </li>
        <li>
          Author selectively reveals salt to chosen verifiers
        </li>
        <li>
          Without the salt, the document-hash binding cannot be verified
        </li>
      </ul>

      <t>
        Use cases:
      </t>

      <ul>
        <li>
          Confidential documents where author controls verification
        </li>
        <li>
          Selective disclosure to specific reviewers or institutions
        </li>
        <li>
          Manuscripts under review before publication
        </li>
      </ul>

      <t>
        Privacy implications: The author has exclusive control over
        who can verify the document binding. The salt should be stored
        securely; loss of salt means verification becomes impossible.
      </t>
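      <t>
        A non-normative sketch of author-salted binding, with SHA-256
        standing in for the specification's hash function H and the
        function names purely illustrative:
      </t>
      <sourcecode type="python"><![CDATA[
import hashlib
import secrets

def bind(document: bytes):
    """Author side: generate the salt and compute both bindings."""
    salt = secrets.token_bytes(32)  # 256-bit cryptographically random
    content_hash = hashlib.sha256(salt + document).digest()
    salt_commitment = hashlib.sha256(salt).digest()
    # content_hash and salt_commitment go into the Evidence packet;
    # the author retains the salt and reveals it selectively.
    return salt, content_hash, salt_commitment

def verify(document, salt, content_hash, salt_commitment):
    """A chosen verifier, given the salt, checks both bindings."""
    return (hashlib.sha256(salt).digest() == salt_commitment and
            hashlib.sha256(salt + document).digest() == content_hash)
]]></sourcecode>
      <t>
        A party holding the Evidence packet but not the salt can verify
        neither binding, which is the selective-disclosure property this
        mode provides.
      </t>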
    </section>

    <section anchor="escrow-mode">
      <name>Third-Party Escrowed Mode (Value 2)</name>

      <artwork><![CDATA[
content-hash = H(salt || document-content)
salt-commitment = H(salt)
; salt held by escrow service
]]></artwork>

      <t>
        Properties:
      </t>

      <ul>
        <li>
          Salt is held by a trusted escrow service
        </li>
        <li>
          Escrow releases salt under predefined conditions
        </li>
        <li>
          Author cannot unilaterally control verification
        </li>
        <li>
          Verification requires escrow cooperation
        </li>
      </ul>

      <t>
        Use cases:
      </t>

      <ul>
        <li>
          Legal submissions where court order triggers verification
        </li>
        <li>
          Dispute resolution with neutral third-party control
        </li>
        <li>
          Time-delayed disclosure (escrow releases at future date)
        </li>
        <li>
          Contractual conditions for verification access
        </li>
      </ul>

      <t>
        Privacy implications: Verification access is determined by
        escrow policy, not author discretion. Authors should understand
        escrow release conditions before selecting this mode.
      </t>
    </section>

    <section anchor="salt-security">
      <name>Salt Security Considerations</name>

      <t>
        For salted modes:
      </t>

      <ul>
        <li>
          Salts MUST be cryptographically random (minimum 256 bits)
        </li>
        <li>
          Salts MUST NOT be derived from predictable values
        </li>
        <li>
          The random salt prevents brute-force confirmation of guessed
          content for short or low-entropy documents
        </li>
        <li>
          Salt loss makes verification impossible; backup appropriately
        </li>
        <li>
          Salt transmission should use secure channels
        </li>
      </ul>
    </section>
  </section>

  <section anchor="identity-pseudonymity">
    <name>Identity and Pseudonymity</name>

    <t>
      The specification supports multiple identity postures, from fully
      anonymous to strongly identified, with user
      control over disclosure.
    </t>

    <section anchor="anonymous-evidence">
      <name>Anonymous Evidence Generation</name>

      <t>
        Evidence packets can be generated without any
        identity disclosure:
      </t>

      <ul>
        <li>
          The declaration field (key 17 in evidence-packet) is OPTIONAL
        </li>
        <li>
          Within declaration, author-name (key 3) and author-id (key 4)
          are both OPTIONAL
        </li>
        <li>
          Device keys can be ephemeral, not linked to identity
        </li>
        <li>
          Evidence proves process characteristics without revealing who
        </li>
      </ul>

      <t>
        Anonymous evidence is suitable for contexts where process
        documentation matters but author identity is irrelevant or
        should remain confidential.
      </t>
    </section>

    <section anchor="pseudonymous-evidence">
      <name>Pseudonymous Evidence</name>

      <t>
        Pseudonymous use links evidence to a consistent
        identifier without
        revealing real-world identity:
      </t>

      <ul>
        <li>
          author-id can be a pseudonymous identifier
        </li>
        <li>
          Device key provides cryptographic continuity without identity
        </li>
        <li>
          Multiple works can be linked to same pseudonym if desired
        </li>
        <li>
          Real identity can remain undisclosed
        </li>
      </ul>

      <t>
        Pseudonymous evidence enables reputation building without
        identity exposure.
      </t>
    </section>

    <section anchor="identified-evidence">
      <name>Identified Evidence</name>

      <t>
        For contexts requiring identity binding:
      </t>

      <ul>
        <li>
          author-name and author-id can be populated with real identity
        </li>
        <li>
          Declaration signature (key 6) binds identity claim to evidence
        </li>
        <li>
          Hardware attestation can strengthen device-to-person binding
        </li>
        <li>
          External identity verification is outside specification scope
        </li>
      </ul>

      <t>
        Identity strength depends on the verification context, not the
        specification. The specification provides the mechanism for
        identity claims; verification of those claims is a deployment
        concern.
      </t>
    </section>

    <section anchor="device-binding-identity">
      <name>Device Binding Without User Identification</name>

      <t>
        Hardware attestation (hardware-section) binds evidence to a
        specific device without necessarily identifying the user:
      </t>

      <ul>
        <li>
          Device keys are bound to hardware (TPM, Secure Enclave)
        </li>
        <li>
          Evidence proves generation on a specific device
        </li>
        <li>
          Device ownership is a separate question from
          evidence generation
        </li>
        <li>
          Multiple users of same device produce device-linked evidence
        </li>
      </ul>

      <t>
        Device binding strengthens evidence integrity without requiring
        user identification. It proves "this device" without proving
        "this person."
      </t>
    </section>
  </section>

  <section anchor="data-retention-deletion">
    <name>Data Retention and Deletion</name>

    <t>
      Following <xref target="RFC6973"/> Section 6.2,
      this section addresses
      data lifecycle considerations.
    </t>

    <section anchor="evidence-lifecycle">
      <name>Evidence Packet Lifecycle</name>

      <t>
        Evidence packets are designed as archival artifacts:
      </t>

      <dl>
        <dt>Creation:</dt>
        <dd>
          <t>
            Evidence accumulates during authoring session(s). Packet is
            finalized when authoring is complete.
          </t>
        </dd>

        <dt>Distribution:</dt>
        <dd>
          <t>
            Packet may be transmitted to Verifiers, stored alongside
            documents, or archived for future verification needs.
          </t>
        </dd>

        <dt>Retention:</dt>
        <dd>
          <t>
            Retention period depends on use case. Legal documents may
            require indefinite retention; other contexts may allow
            shorter periods.
          </t>
        </dd>

        <dt>Deletion:</dt>
        <dd>
          <t>
            Once distributed, deletion from all recipients may be
            impractical. Authors should consider disclosure scope
            before distribution.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="user-deletion-rights">
      <name>User Rights to Deletion</name>

      <t>
        Users have the following deletion capabilities:
      </t>

      <ul>
        <li>
          <t>Local Data:</t>
          <t>
            Evidence stored locally can be deleted at any time by the
            author. Implementations SHOULD provide clear deletion
            mechanisms.
          </t>
        </li>

        <li>
          <t>Distributed Evidence:</t>
          <t>
            Once Evidence is transmitted to Verifiers or
            Relying Parties,
            deletion depends on those parties' policies.
            The specification
            cannot enforce deletion of distributed data.
          </t>
        </li>

        <li>
          <t>Attestation Results:</t>
          <t>
            Attestation Results (.war files) remain under the control
            of the Verifiers that produce them.
            Authors may request deletion under applicable privacy laws.
          </t>
        </li>
      </ul>

      <t>
        Authors should understand that distributing
        Evidence creates copies
        outside their control. Privacy-sensitive authors should limit
        distribution scope.
      </t>
    </section>

    <section anchor="external-anchor-permanence">
      <name>External Anchor Permanence</name>

      <t>
        External anchors have special retention characteristics:
      </t>

      <dl>
        <dt>RFC 3161 Timestamps:</dt>
        <dd>
          <t>
            TSA records may be retained by the timestamp authority per
            their policies. Typically includes the hash committed, not
            any document or behavioral data.
          </t>
        </dd>

        <dt>Blockchain Anchors:</dt>
        <dd>
          <t>
            Blockchain records are permanent and immutable by design.
            The anchored hash cannot be deleted from the blockchain.
            This is a feature for evidence permanence but has privacy
            implications.
          </t>
        </dd>

        <dt>OpenTimestamps:</dt>
        <dd>
          <t>
            OTS proofs reference Bitcoin transactions, which are
            permanent. The proof structure can be deleted locally, but
            the Bitcoin transaction remains.
          </t>
        </dd>
      </dl>

      <t>
        Users concerned about data permanence should carefully consider
        whether to use blockchain-based external anchors. RFC 3161
        timestamps offer similar evidentiary value with
        more conventional
        retention policies.
      </t>

      <t>
        IMPORTANT: Only cryptographic hashes are
        anchored, never document
        content or behavioral data. The permanent record is a hash, not
        the underlying information.
      </t>
    </section>
  </section>

  <section anchor="third-party-disclosure">
    <name>Third-Party Disclosure</name>

    <t>
      This section addresses what information is disclosed to various
      parties in the verification workflow,
      following <xref target="RFC6973"/>
      Section 5.2 on disclosure.
    </t>

    <section anchor="verifier-disclosure">
      <name>Information Disclosed to Verifiers</name>

      <t>
        When an Evidence packet (.pop) is submitted for verification,
        the Verifier learns:
      </t>

      <ul>
        <li>
          Document hash (content-hash), not the content itself
        </li>
        <li>
          Document size (byte-length, char-count)
        </li>
        <li>
          Authoring timeline (checkpoint timestamps, VDF durations)
        </li>
        <li>
          Behavioral statistics (timing histograms, entropy estimates)
        </li>
        <li>
          Edit patterns (aggregate counts, not content)
        </li>
        <li>
          Optional: Raw timing intervals if disclosed
        </li>
        <li>
          Optional: Author identity if declared
        </li>
        <li>
          Optional: Device attestation if included
        </li>
      </ul>

      <t>
        Verifiers SHOULD NOT:
      </t>

      <ul>
        <li>
          Retain Evidence packets beyond verification needs
        </li>
        <li>
          Use behavioral data for purposes beyond verification
        </li>
        <li>
          Attempt to re-identify anonymous authors from
          behavioral patterns
        </li>
        <li>
          Share Evidence data with unauthorized parties
        </li>
      </ul>

      <t>
        Implementations MAY define Verifier privacy
        policies that authors
        can review before submitting Evidence.
      </t>
    </section>

    <section anchor="relying-party-disclosure">
      <name>Information Disclosed to Relying Parties</name>

      <t>
        Relying Parties consuming Attestation Results (.war) learn:
      </t>

      <ul>
        <li>
          Verification verdict (forensic-assessment)
        </li>
        <li>
          Confidence score
        </li>
        <li>
          Verified claims (specific thresholds met)
        </li>
        <li>
          Caveats and limitations
        </li>
        <li>
          Verifier identity
        </li>
        <li>
          Reference to the original Evidence packet (packet-id)
        </li>
      </ul>

      <t>
        The .war file is designed to provide necessary trust information
        without full Evidence disclosure. Relying Parties needing more
        detail can request the original .pop file.
      </t>
    </section>

    <section anchor="disclosure-minimization">
      <name>Minimizing Disclosure</name>

      <t>
        Authors concerned about disclosure can:
      </t>

      <ol>
        <li>
          Use minimal disclosure tier (histogram-only, no raw intervals)
        </li>
        <li>
          Omit optional sections (keystroke-section, absence-section)
        </li>
        <li>
          Use author-salted mode to control verification access
        </li>
        <li>
          Omit declaration or use pseudonymous identity
        </li>
        <li>
          Select Verifiers with strong privacy policies
        </li>
        <li>
          Limit distribution to necessary Relying Parties
        </li>
      </ol>
    </section>
  </section>

  <section anchor="cross-session-correlation">
    <name>Cross-Session Correlation</name>

    <t>
      This section addresses risks of behavioral fingerprinting across
      sessions and mitigation measures.
    </t>

    <section anchor="correlation-risks">
      <name>Correlation Risks</name>

      <t>
        Multiple Evidence packets from the same author
        may enable linkage:
      </t>

      <dl>
        <dt>Behavioral Fingerprinting:</dt>
        <dd>
          <t>
            Keystroke timing patterns exhibit individual characteristics
            that persist across sessions. An adversary with multiple
            Evidence packets could potentially link them to the same
            author even without explicit identity.
          </t>
        </dd>

        <dt>Device Fingerprinting:</dt>
        <dd>
          <t>
            If device keys are reused across sessions, Evidence packets
            are cryptographically linkable. Hardware attestation makes
            this linkage explicit.
          </t>
        </dd>

        <dt>Stylometric Correlation:</dt>
        <dd>
          <t>
            Edit pattern statistics (though not content) may correlate
            with writing style. Combined with timing data, this could
            strengthen cross-session linkage.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="device-key-rotation">
      <name>Device Key Rotation</name>

      <t>
        To limit cross-session correlation via device keys:
      </t>

      <ul>
        <li>
          <t>Session Keys:</t>
          <t>
            Use per-session derived keys rather than a
            single device key.
            HKDF with session-specific info prevents direct linkage.
          </t>
        </li>

        <li>
          <t>Periodic Rotation:</t>
          <t>
            Rotate device keys periodically (RECOMMENDED: annually).
            Evidence packets signed with different keys are not
            cryptographically linked.
          </t>
        </li>

        <li>
          <t>Context-Specific Keys:</t>
          <t>
            Use different keys for different contexts (e.g., work vs.
            personal) to prevent cross-context linkage.
          </t>
        </li>
      </ul>
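      <t>
        The per-session derivation can be sketched with a minimal HKDF
        (RFC 5869) over a long-term device secret; the labels and key
        sizes below are illustrative, not normative:
      </t>

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # Minimal HKDF extract-and-expand (RFC 5869), single output block
    # (supports lengths up to 32 bytes for SHA-256).
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()            # extract
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # expand, T(1)
    return okm[:length]

device_secret = b"\x11" * 32  # long-term device key material (illustrative)
key_a = hkdf_sha256(device_secret, b"pop-session", b"session:2026-02-06")
key_b = hkdf_sha256(device_secret, b"pop-session", b"session:2026-02-07")

# Distinct session-specific info yields unlinkable per-session keys.
assert key_a != key_b
```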
    </section>

    <section anchor="session-isolation">
      <name>Session Isolation Properties</name>

      <t>
        The specification provides inherent session isolation:
      </t>

      <ul>
        <li>
          Each Evidence packet has a unique packet-id (UUID)
        </li>
        <li>
          VDF chains are session-specific (session entropy in genesis)
        </li>
        <li>
          No protocol mechanism links sessions together
        </li>
        <li>
          Jitter data is bound to specific checkpoint chains
        </li>
      </ul>
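      <t>
        A sketch of per-session setup illustrates these properties
        (field names are assumed for illustration):
      </t>

```python
import hashlib
import secrets
import uuid

def new_session_context():
    packet_id = str(uuid.uuid4())              # unique per Evidence packet
    session_entropy = secrets.token_bytes(32)  # fresh entropy per session
    # Session-specific genesis for the VDF chain: no value is shared
    # across sessions, so chains are not linkable at the protocol level.
    vdf_genesis = hashlib.sha256(b"genesis" + session_entropy).digest()
    return packet_id, vdf_genesis

a = new_session_context()
b = new_session_context()
assert a[0] != b[0] and a[1] != b[1]  # no protocol-level linkage
```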

      <t>
        Cross-session linkage requires external analysis, not protocol
        features. The specification does not provide linkage mechanisms.
      </t>
    </section>

    <section anchor="correlation-mitigations">
      <name>Additional Mitigations</name>

      <t>
        Authors concerned about cross-session correlation can:
      </t>

      <ol>
        <li>
          Use coarser histogram buckets to reduce timing precision
        </li>
        <li>
          Omit raw-intervals field
        </li>
        <li>
          Vary devices for different document contexts
        </li>
        <li>
          Use different pseudonyms for different contexts
        </li>
        <li>
          Limit Evidence distribution to minimize adversary access to
          multiple packets
        </li>
      </ol>
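      <t>
        The first mitigation (coarser histogram buckets) amounts to a
        simple re-bucketing step; the 100 ms bucket width below is an
        illustrative choice, not a normative value:
      </t>

```python
def coarse_histogram(intervals_ms, bucket_width_ms=100):
    # Collapse raw inter-event intervals into coarse buckets, discarding
    # the precise timings that enable behavioral fingerprinting.
    hist = {}
    for interval in intervals_ms:
        bucket = (interval // bucket_width_ms) * bucket_width_ms
        hist[bucket] = hist.get(bucket, 0) + 1
    return hist

raw = [83, 91, 145, 152, 160, 305, 310]
print(coarse_histogram(raw))  # {0: 2, 100: 3, 300: 2}
```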
    </section>
  </section>

  <section anchor="privacy-threat-analysis">
    <name>Privacy Threat Analysis</name>

    <t>
      Following <xref target="RFC6973"/> Section 5,
      this section analyzes
      specific privacy threats.
    </t>

    <section anchor="threat-surveillance">
      <name>Surveillance</name>

      <t>
        The specification is designed to resist surveillance:
      </t>

      <ul>
        <li>
          No content transmission prevents content-based surveillance
        </li>
        <li>
          Local-only processing prevents network monitoring
        </li>
        <li>
          Optional external anchors transmit only hashes
        </li>
        <li>
          No telemetry or analytics collection
        </li>
      </ul>

      <t>
        The primary surveillance risk is through Evidence packet
        distribution. Authors control this distribution.
      </t>
    </section>

    <section anchor="threat-stored-data">
      <name>Stored Data Compromise</name>

      <t>
        If Evidence packets are compromised:
      </t>

      <ul>
        <li>
          Document content is NOT exposed (hash-only)
        </li>
        <li>
          Behavioral patterns may be exposed (timing data)
        </li>
        <li>
          Authoring timeline is exposed (timestamps)
        </li>
        <li>
          If identity declared, author identity is exposed
        </li>
      </ul>

      <t>
        Mitigation: Encrypt Evidence packets at rest.
        Use access controls
        for stored Evidence. Limit retention period where appropriate.
      </t>
    </section>

    <section anchor="threat-correlation">
      <name>Correlation</name>

      <t>
        Correlation threats are addressed in
        <xref target="cross-session-correlation"/>. Key
        mitigations include
        key rotation, histogram aggregation, and distribution limiting.
      </t>
    </section>

    <section anchor="threat-identification">
      <name>Identification</name>

      <t>
        Re-identification threats:
      </t>

      <ul>
        <li>
          Anonymous Evidence may be re-identifiable through behavioral
          patterns
        </li>
        <li>
          Histogram aggregation significantly reduces this risk
        </li>
        <li>
          Raw interval disclosure increases re-identification risk
        </li>
        <li>
          Device attestation explicitly identifies devices
        </li>
      </ul>

      <t>
        Authors requiring strong anonymity should use minimal disclosure
        tier without raw intervals and without device attestation.
      </t>
    </section>

    <section anchor="threat-secondary-use">
      <name>Secondary Use</name>

      <t>
        Evidence data could theoretically be used for purposes beyond
        verification:
      </t>

      <ul>
        <li>
          Behavioral analysis for profiling
        </li>
        <li>
          Productivity monitoring
        </li>
        <li>
          Training data for machine learning
        </li>
      </ul>

      <t>
        Mitigation: The specification does not prevent secondary use by
        data recipients. Authors should consider Verifier and Relying
        Party policies before disclosure. Implementations MAY include
        usage restrictions in Evidence packet metadata.
      </t>
    </section>

    <section anchor="threat-disclosure">
      <name>Disclosure</name>

      <t>
        Unauthorized disclosure of Evidence packets:
      </t>

      <ul>
        <li>
          Authors control initial distribution
        </li>
        <li>
          Recipients may redistribute Evidence; the specification
          cannot prevent this
        </li>
        <li>
          Salted modes limit utility of leaked Evidence
        </li>
        <li>
          Anonymous Evidence limits identity exposure on leak
        </li>
      </ul>

      <t>
        Authors should treat Evidence packets as potentially sensitive
        and limit distribution to trusted parties.
      </t>
    </section>

    <section anchor="threat-exclusion">
      <name>Exclusion</name>

      <t>
        The risk that authors cannot participate in systems if they
        decline Evidence generation:
      </t>

      <ul>
        <li>
          Evidence generation is voluntary
        </li>
        <li>
          Disclosure levels are user-controlled
        </li>
        <li>
          Relying Parties may require Evidence for certain contexts
        </li>
        <li>
          The specification does not mandate deployment contexts
        </li>
      </ul>

      <t>
        Deployments should consider whether Evidence requirements create
        exclusionary effects and provide alternatives where appropriate.
      </t>
    </section>
  </section>

  <section anchor="privacy-summary">
    <name>Privacy Properties Summary</name>

    <t>
      This section summarizes the privacy properties provided and not
      provided by the specification.
    </t>

    <section anchor="privacy-provided">
      <name>Privacy Properties Provided</name>

      <dl>
        <dt>Content Confidentiality:</dt>
        <dd>
          <t>
            Document content is never stored in Evidence. Verification
            can occur without content access (using salted modes).
          </t>
        </dd>

        <dt>Keystroke Privacy:</dt>
        <dd>
          <t>
            Individual keystrokes are never recorded. Only timing
            intervals between events are captured, without character
            association.
          </t>
        </dd>

        <dt>Local Control:</dt>
        <dd>
          <t>
            All data processing occurs locally. No external services
            required for Evidence generation.
          </t>
        </dd>

        <dt>Disclosure Control:</dt>
        <dd>
          <t>
            Authors control Evidence distribution, disclosure level,
            and identity exposure.
          </t>
        </dd>

        <dt>Pseudonymity Support:</dt>
        <dd>
          <t>
            Evidence can be generated and verified without real-world
            identity disclosure.
          </t>
        </dd>

        <dt>Selective Verification:</dt>
        <dd>
          <t>
            Salted modes enable author-controlled verification access.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="privacy-limitations">
      <name>Privacy Limitations</name>

      <dl>
        <dt>Behavioral Data Exposure:</dt>
        <dd>
          <t>
            Timing data reveals behavioral patterns. While aggregated,
            this data has biometric-adjacent properties.
          </t>
        </dd>

        <dt>Distribution Not Controlled:</dt>
        <dd>
          <t>
            Once Evidence is distributed, the specification cannot
            control further dissemination or use.
          </t>
        </dd>

        <dt>Cross-Session Linkage Risk:</dt>
        <dd>
          <t>
            Multiple Evidence packets may be linkable through behavioral
            analysis, even with different identities.
          </t>
        </dd>

        <dt>External Anchor Permanence:</dt>
        <dd>
          <t>
            Blockchain anchors create permanent records that cannot be
            deleted.
          </t>
        </dd>

        <dt>Metadata Disclosure:</dt>
        <dd>
          <t>
            Evidence packets reveal document size, authoring timeline,
            and edit statistics even without content.
          </t>
        </dd>
      </dl>
    </section>

    <section anchor="privacy-recommendations">
      <name>Recommendations for Privacy-Sensitive Deployments</name>

      <ol>
        <li>
          Use minimal disclosure tier (histogram-only, no raw intervals)
        </li>
        <li>
          Consider coarser histogram buckets for enhanced privacy
        </li>
        <li>
          Use author-salted mode for confidential documents
        </li>
        <li>
          Avoid blockchain anchors if deletion rights are important
        </li>
        <li>
          Rotate device keys periodically
        </li>
        <li>
          Limit Evidence distribution to necessary parties
        </li>
        <li>
          Review Verifier privacy policies before submission
        </li>
        <li>
          Consider pseudonymous identities where appropriate
        </li>
        <li>
          Provide clear user disclosures about data collection
        </li>
        <li>
          Implement data retention policies aligned with use case
        </li>
      </ol>
    </section>
  </section>

</section>

    <!-- Section 9: IANA Considerations -->
    <section anchor="iana-considerations"
         xml:base="sections/iana-considerations.xml">
  <name>IANA Considerations</name>

  <!-- IANA Ticket Reference Numbers (for tracking):
       PEN Assignment: PHCT-S8T-9ZI
       CBOR Tag 1347440672 (PPP): #1443423
       CBOR Tag 1463898656 (WAR): #1443444
       CBOR Tag 1347440673 (PPP!): #1443445
       Media Type vnd.example-pop+cbor: #1443426
       Media Type vnd.example-war+cbor: #1443427
  -->

  <t>
    This document requests several IANA registrations to support
    interoperable implementations of the witnessd Proof of Process
    specification. The author has submitted an application for a
    Private Enterprise Number (PEN) assignment under the WritersLogic
    organization to support vendor-specific registrations.
  </t>

  <section anchor="iana-cbor-tags">
    <name>CBOR Tags Registry</name>

    <t>
      This document requests the allocation of
      three dedicated 4-byte CBOR tags
      in the "CBOR Tags" registry
      <xref target="IANA.cbor-tags"/>. These tags
      form a coordinated suite of identifiers
      for the Proof of Process protocol.
    </t>

    <table anchor="tbl-cbor-tags-summary">
      <name>CBOR Tags Summary</name>
      <thead>
        <tr>
          <th>Tag</th>
          <th>Hex</th>
          <th>ASCII</th>
          <th>Data Item</th>
          <th>Semantics</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>1347440672</td>
          <td>0x50505020</td>
          <td>"PPP "</td>
          <td>map</td>
          <td>Proof of Process Evidence Packet</td>
        </tr>
        <tr>
          <td>1463898656</td>
          <td>0x57415220</td>
          <td>"WAR "</td>
          <td>map</td>
          <td>Writers Authenticity Report (Attestation Result)</td>
        </tr>
        <tr>
          <td>1347440673</td>
          <td>0x50505021</td>
          <td>"PPP!"</td>
          <td>map</td>
          <td>Compact Evidence Reference</td>
        </tr>
      </tbody>
    </table>
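    <t>
      The correspondence between the ASCII mnemonics and the numeric
      tag values can be checked mechanically; this sketch needs no
      CBOR library:
    </t>

```python
# Each 4-byte tag value is the big-endian integer reading of its
# ASCII mnemonic ('P' = 0x50, 'W' = 0x57, 'A' = 0x41, 'R' = 0x52,
# ' ' = 0x20, '!' = 0x21).
TAGS = {b"PPP ": 0x50505020, b"WAR ": 0x57415220, b"PPP!": 0x50505021}

for mnemonic, value in TAGS.items():
    assert int.from_bytes(mnemonic, "big") == value
    print(f"{mnemonic!r} = 0x{value:08X} = {value}")
```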

    <section anchor="iana-tag-evidence-packet">
      <name>Tag for Proof of Process Packet (0x50505020)</name>

      <t>
        The tag value 1347440672 (hexadecimal 0x50505020) corresponds to
        the ASCII encoding of "PPP " and serves as a self-describing
        "Proof of Process Packet" identifier. This tag encapsulates
        a cryptographically anchored data structure used for digital
        authorship attestation.
      </t>

      <t>
        Unlike identity-only signatures, the Proof of Process format
        captures the authorship process through entangled Verifiable
        Delay Functions (VDFs) and human behavioral biometrics. A
        dedicated tag is required to enable zero-configuration
        identification and interoperability between authorship
        verification tools, academic repositories, and literary
        publishing platforms, providing a verifiable "forgery cost"
        to distinguish human-authored work from synthetic content.
      </t>

      <t>
        The tagged data item is a CBOR map conforming to the
        evidence-packet structure defined in
        <xref target="evidence-packet-structure"/>.
      </t>
    </section>

    <section anchor="iana-tag-attestation-result">
      <name>Tag for Writers Authenticity Report (0x57415220)</name>

      <t>
        The tag value 1463898656 (hexadecimal 0x57415220) corresponds to
        the ASCII encoding of "WAR " and identifies Writers Authenticity
        Report structures. This tag encapsulates an Attestation Result
        produced by Verifiers after appraising Proof of Process Evidence
        Packets.
      </t>

      <t>
        The WAR format conveys verification verdicts, confidence scores,
        and forensic assessments following the IETF RATS
        (Remote ATtestation
        procedureS) architecture. A dedicated tag enables
        zero-configuration
        identification of attestation results, allowing
        Relying Parties to
        distinguish verification outcomes from raw evidence without
        content-type negotiation.
      </t>

      <t>
        The tagged data item is a CBOR map conforming to the
        attestation-result structure defined in
        <xref target="attestation-result-structure"/>.
      </t>
    </section>

    <section anchor="iana-tag-compact-ref">
      <name>Tag for Compact Evidence Reference (0x50505021)</name>

      <t>
        The tag value 1347440673 (hexadecimal 0x50505021) corresponds to
        the ASCII encoding of "PPP!" and identifies Compact Evidence
        Reference structures. This tag encapsulates a
        cryptographic pointer
        to a full Proof of Process Evidence Packet.
      </t>

      <t>
        Compact Evidence References are designed for embedding in
        space-constrained contexts such as document
        metadata (PDF XMP, EXIF),
        QR codes, NFC tags, git commit messages, and protocol headers.
        The compact reference contains the packet-id, chain-hash,
        document-hash, and a summary with a cryptographic signature
        binding all fields. A dedicated tag enables zero-configuration
        detection and verification of authorship claims without
        transmitting full evidence packets.
      </t>

      <t>
        The tagged data item is a CBOR map conforming to the
        compact-evidence-ref structure defined in
        <xref target="compact-evidence"/>.
      </t>
    </section>

    <section anchor="iana-tag-justification">
      <name>Justification for Dedicated Tags</name>

      <t>
        The four-byte tag values were chosen for the following reasons:
      </t>

      <ul>
        <li>
          <strong>Self-describing format:</strong> The ASCII-based
          mnemonics ("PPP ", "WAR ", "PPP!") enable immediate visual
          identification in hex dumps and debugging contexts.
        </li>
        <li>
          <strong>Zero-configuration detection:</strong> Applications
          can identify Proof of Process data without prior context
          or content-type negotiation.
        </li>
        <li>
          <strong>Interoperability:</strong> Standardized tags enable
          diverse implementations (academic systems,
          publishing platforms,
          verification services) to recognize and process data
          without coordination.
        </li>
        <li>
          <strong>Compact encoding:</strong> Although these are 4-byte
          tags, CBOR encodes them efficiently, so the overhead of the
          application-specific semantic markers is minimal.
        </li>
      </ul>
    </section>
  </section>

  <section anchor="iana-eat-profile">
    <name>Entity Attestation Token Profiles Registry</name>

    <t>
      This document requests registration of an EAT profile in the
      "Entity Attestation Token Profiles" registry established by
      <xref target="RFC9711"/>.
    </t>

    <table anchor="tbl-eat-profile">
      <name>EAT Profile Registration</name>
      <thead>
        <tr>
          <th>Profile URI</th>
          <th>Description</th>
          <th>Reference</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>https://example.com/rats/eat/profile/pop/1.0</td>
          <td>witnessd Proof of Process Evidence Profile</td>
          <td>[this document]</td>
        </tr>
      </tbody>
    </table>

    <t>
      Note: The URI https://example.com/rats/eat/profile/pop/1.0 is
      provisional during individual submission. Upon working group
      adoption, registration of a URN profile identifier in the IETF
      URN namespace (e.g., urn:ietf:params:rats:eat:profile:pop:1.0)
      will be requested.
    </t>

    <t>
      The profile defines the following characteristics:
    </t>

    <dl>
      <dt>Profile Version:</dt>
      <dd>1.0</dd>

      <dt>Applicable Claims:</dt>
      <dd>
        All standard EAT claims per <xref target="RFC9711"/>, plus the
        custom claims defined in <xref target="iana-cwt-claims"/>.
      </dd>

      <dt>Evidence Format:</dt>
      <dd>
        CBOR-encoded evidence-packet structure with semantic tag
        1347440672 ("PPP ").
      </dd>

      <dt>Attestation Result Format:</dt>
      <dd>
        CBOR-encoded attestation-result structure with semantic tag
        1463898656 ("WAR ").
      </dd>

      <dt>Domain:</dt>
      <dd>
        Document authorship process attestation, behavioral evidence
        for content provenance.
      </dd>
    </dl>
  </section>

  <section anchor="iana-cwt-claims">
    <name>CBOR Web Token Claims Registry</name>

    <t>
      This document requests registration of custom claims in the
      "CBOR Web Token (CWT) Claims" registry <xref target="IANA.cwt"/>.
      These claims are used within EAT Attestation Results to convey
      witnessd-specific assessment data.
    </t>

    <t>
      Initial registration is requested in the private-use range
      (claim keys -70000 to -70010) to enable early implementation.
      Upon standards-track advancement, permanent claim keys outside
      the private-use range will be requested.
    </t>

    <table anchor="tbl-cwt-claims">
      <name>Custom CWT Claims Registration</name>
      <thead>
        <tr>
          <th>Claim Name</th>
          <th>Claim Key</th>
          <th>Claim Value Type</th>
          <th>Claim Description</th>
          <th>Reference</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>pop-forensic-assessment</td>
          <td>-70000</td>
          <td>unsigned integer</td>
          <td>
            Forensic assessment enumeration value (0-5) indicating
            the Verifier's assessment of behavioral evidence consistency
            with human authorship patterns.
          </td>
          <td>[this document]</td>
        </tr>
        <tr>
          <td>pop-presence-score</td>
          <td>-70001</td>
          <td>float32</td>
          <td>
            Presence challenge response score in range [0.0, 1.0]
            representing the ratio of successfully completed human
            presence challenges.
          </td>
          <td>[this document]</td>
        </tr>
        <tr>
          <td>pop-evidence-tier</td>
          <td>-70002</td>
          <td>unsigned integer</td>
          <td>
            Evidence tier classification (1-4) indicating the
            comprehensiveness of evidence collected: 1=Basic,
            2=Standard, 3=Enhanced, 4=Maximum.
          </td>
          <td>[this document]</td>
        </tr>
        <tr>
          <td>pop-ai-composite-score</td>
          <td>-70003</td>
          <td>float32</td>
          <td>
            AI indicator composite score in range [0.0, 1.0] derived
            from behavioral forensic analysis. Lower values indicate
            patterns more consistent with human authorship.
          </td>
          <td>[this document]</td>
        </tr>
      </tbody>
    </table>

    <t>
      The forensic-assessment enumeration values for
      pop-forensic-assessment
      are defined as:
    </t>

    <table anchor="tbl-forensic-assessment-values">
      <name>Forensic Assessment Enumeration Values</name>
      <thead>
        <tr>
          <th>Value</th>
          <th>Name</th>
          <th>Description</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>0</td>
          <td>not-assessed</td>
          <td>Verification incomplete or not attempted</td>
        </tr>
        <tr>
          <td>1</td>
          <td>strongly-human</td>
          <td>Evidence strongly indicates human authorship patterns</td>
        </tr>
        <tr>
          <td>2</td>
          <td>likely-human</td>
          <td>Evidence consistent with human authorship patterns</td>
        </tr>
        <tr>
          <td>3</td>
          <td>inconclusive</td>
          <td>Evidence neither confirms nor refutes claims</td>
        </tr>
        <tr>
          <td>4</td>
          <td>likely-ai-assisted</td>
          <td>Evidence suggests AI assistance in authorship</td>
        </tr>
        <tr>
          <td>5</td>
          <td>strongly-ai-generated</td>
          <td>Evidence strongly indicates AI generation</td>
        </tr>
      </tbody>
    </table>
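
    <t>
      A non-normative sketch of how a Verifier implementation might
      assemble these claims follows. The constant and function names
      are illustrative; only the claim keys, value types, and value
      ranges come from the tables above.
    </t>

    <sourcecode type="python"><![CDATA[
from enum import IntEnum

class ForensicAssessment(IntEnum):
    """Values of the pop-forensic-assessment claim, per the table above."""
    NOT_ASSESSED = 0
    STRONGLY_HUMAN = 1
    LIKELY_HUMAN = 2
    INCONCLUSIVE = 3
    LIKELY_AI_ASSISTED = 4
    STRONGLY_AI_GENERATED = 5

# Private-use claim keys from the registration table above.
POP_FORENSIC_ASSESSMENT = -70000
POP_PRESENCE_SCORE = -70001
POP_EVIDENCE_TIER = -70002
POP_AI_COMPOSITE_SCORE = -70003

def build_pop_claims(assessment, presence_score, tier, ai_score):
    """Assemble the custom claims map, enforcing the documented ranges."""
    if not 0.0 <= presence_score <= 1.0:
        raise ValueError("presence score must lie in [0.0, 1.0]")
    if not 0.0 <= ai_score <= 1.0:
        raise ValueError("AI composite score must lie in [0.0, 1.0]")
    if tier not in (1, 2, 3, 4):
        raise ValueError("evidence tier must be 1 (Basic) to 4 (Maximum)")
    return {
        POP_FORENSIC_ASSESSMENT: int(ForensicAssessment(assessment)),
        POP_PRESENCE_SCORE: float(presence_score),
        POP_EVIDENCE_TIER: int(tier),
        POP_AI_COMPOSITE_SCORE: float(ai_score),
    }
]]></sourcecode>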
  </section>

  <section anchor="iana-new-registries">
    <name>New Registries</name>

    <t>
      This document requests IANA to create three new registries under
      a new "witnessd Proof of Process" registry group.
    </t>

    <section anchor="iana-claim-types-registry">
      <name>Proof of Process Claim Types Registry</name>

      <t>
        This document requests creation of the "Proof of Process Claim
        Types" registry. This registry contains the identifiers for
        absence claims that can be asserted and verified in Evidence
        packets.
      </t>

      <section anchor="iana-claim-types-procedures">
        <name>Registration Procedures</name>

        <t>
          The registration procedures for this registry depend on the
          claim type range:
        </t>

        <table anchor="tbl-claim-types-procedures">
          <name>Claim Types Registration Procedures</name>
          <thead>
            <tr>
              <th>Range</th>
              <th>Category</th>
              <th>Registration Procedure</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1-15</td>
              <td>Chain-verifiable claims</td>
              <td>Specification Required</td>
            </tr>
            <tr>
              <td>16-63</td>
              <td>Monitoring-dependent claims</td>
              <td>Specification Required</td>
            </tr>
            <tr>
              <td>64-127</td>
              <td>Environmental claims</td>
              <td>Expert Review</td>
            </tr>
            <tr>
              <td>128-255</td>
              <td>Private use</td>
              <td>Private Use</td>
            </tr>
          </tbody>
        </table>

        <t>
          Chain-verifiable claims (1-15) are claims that can be proven
          solely from the Evidence packet without trusting the Attesting
          Environment beyond data integrity. These claims require a
          published specification demonstrating verifiability.
        </t>

        <t>
          Monitoring-dependent claims (16-63) require trust in the
          Attesting Environment's accurate reporting of monitored events.
          Specifications MUST document the trust assumptions.
        </t>

        <t>
          Environmental claims (64-127) relate to the execution
          environment or external conditions. Expert review ensures
          claims are well-defined and implementable.
        </t>

        <t>
          Private use claims (128-255) are available for
          implementation-specific extensions without coordination.
        </t>
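
        <t>
          The range partitioning above can be expressed as a simple
          non-normative lookup (the function name is illustrative):
        </t>

        <sourcecode type="python"><![CDATA[
def claim_category(value):
    """Map a claim type value to its registry category, per the table above."""
    if 1 <= value <= 15:
        return "chain-verifiable"
    if 16 <= value <= 63:
        return "monitoring-dependent"
    if 64 <= value <= 127:
        return "environmental"
    if 128 <= value <= 255:
        return "private-use"
    raise ValueError("claim type value out of range: %d" % value)
]]></sourcecode>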
      </section>

      <section anchor="iana-claim-types-template">
        <name>Registration Template</name>

        <t>
          Registrations MUST include the following fields:
        </t>

        <dl>
          <dt>Claim Type Value:</dt>
          <dd>Integer identifier in the appropriate range</dd>

          <dt>Claim Name:</dt>
          <dd>Human-readable name (lowercase with hyphens)</dd>

          <dt>Category:</dt>
          <dd>
            One of: chain-verifiable, monitoring-dependent,
            environmental, or private-use
          </dd>

          <dt>Description:</dt>
          <dd>Brief description of what the claim asserts</dd>

          <dt>Verification Method:</dt>
          <dd>
            How the claim is verified (for non-private-use claims)
          </dd>

          <dt>Reference:</dt>
          <dd>Document defining the claim</dd>
        </dl>
      </section>

      <section anchor="iana-claim-types-initial">
        <name>Initial Registry Contents</name>

        <t>
          The initial contents of the "Proof of Process Claim Types"
          registry are as follows:
        </t>

        <section anchor="iana-claim-types-chain-verifiable">
          <name>Chain-Verifiable Claims (1-15)</name>

          <table anchor="tbl-chain-verifiable-claims">
            <name>Chain-Verifiable Claim Types</name>
            <thead>
              <tr>
                <th>Value</th>
                <th>Name</th>
                <th>Description</th>
                <th>Reference</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>1</td>
                <td>max-single-delta-chars</td>
                <td>
                  Maximum characters added in any single checkpoint delta
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>2</td>
                <td>max-single-delta-bytes</td>
                <td>
                  Maximum bytes added in any single checkpoint delta
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>3</td>
                <td>max-net-delta-chars</td>
                <td>
                  Maximum net character change across the entire chain
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>4</td>
                <td>min-vdf-duration-seconds</td>
                <td>
                  Minimum total VDF-proven elapsed time in seconds
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>5</td>
                <td>min-vdf-duration-per-kchar</td>
                <td>
                  Minimum VDF-proven time per thousand characters
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>6</td>
                <td>checkpoint-chain-complete</td>
                <td>
                  Checkpoint chain has no gaps (all sequence numbers present)
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>7</td>
                <td>checkpoint-chain-consistent</td>
                <td>
                  All checkpoint hashes and VDF linkages verify correctly
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>8</td>
                <td>jitter-entropy-above-threshold</td>
                <td>
                  Captured jitter entropy exceeds specified bits threshold
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>9</td>
                <td>jitter-samples-above-count</td>
                <td>
                  Number of jitter samples exceeds specified count
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>10</td>
                <td>revision-points-above-count</td>
                <td>
                  Number of revision points (non-monotonic edits) exceeds threshold
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>11</td>
                <td>session-count-above-threshold</td>
                <td>
                  Number of distinct editing sessions exceeds threshold
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>12-15</td>
                <td>Unassigned</td>
                <td>Available for future chain-verifiable claims</td>
                <td>[this document]</td>
              </tr>
            </tbody>
          </table>
        </section>

        <section anchor="iana-claim-types-monitoring-dependent">
          <name>Monitoring-Dependent Claims (16-63)</name>

          <table anchor="tbl-monitoring-dependent-claims">
            <name>Monitoring-Dependent Claim Types</name>
            <thead>
              <tr>
                <th>Value</th>
                <th>Name</th>
                <th>Description</th>
                <th>Reference</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>16</td>
                <td>max-paste-event-chars</td>
                <td>
                  Maximum characters in any single paste event
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>17</td>
                <td>max-clipboard-access-chars</td>
                <td>
                  Maximum total characters accessed from clipboard
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>18</td>
                <td>no-paste-from-ai-tool</td>
                <td>
                  No paste operations from known AI tool applications
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>19</td>
                <td>Unassigned</td>
                <td>Available for assignment</td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>20</td>
                <td>max-insertion-rate-wpm</td>
                <td>
                  Maximum sustained insertion rate in words per minute
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>21</td>
                <td>no-automated-input-pattern</td>
                <td>
                  No detected automated or scripted input patterns
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>22</td>
                <td>no-macro-replay-detected</td>
                <td>
                  No keyboard macro replay patterns detected
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>23-31</td>
                <td>Unassigned</td>
                <td>Available for future input-related claims</td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>32</td>
                <td>no-file-import-above-bytes</td>
                <td>
                  No file imports exceeding specified byte threshold
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>33</td>
                <td>no-external-file-open</td>
                <td>
                  No external files opened during editing session
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>34-39</td>
                <td>Unassigned</td>
                <td>Available for future file-related claims</td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>40</td>
                <td>no-concurrent-ai-tool</td>
                <td>
                  No known AI tool application running concurrently
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>41</td>
                <td>no-llm-api-traffic</td>
                <td>
                  No detected network traffic to known LLM API endpoints
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>42-47</td>
                <td>Unassigned</td>
                <td>Available for future AI-detection claims</td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>48</td>
                <td>max-idle-gap-seconds</td>
                <td>
                  Maximum idle gap within session does not exceed threshold
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>49</td>
                <td>active-time-above-threshold</td>
                <td>
                  Total active editing time exceeds specified threshold
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>50-63</td>
                <td>Unassigned</td>
                <td>Available for future timing-related claims</td>
                <td>[this document]</td>
              </tr>
            </tbody>
          </table>
        </section>
      </section>
    </section>

    <section anchor="iana-vdf-algorithms-registry">
      <name>Proof of Process VDF Algorithms Registry</name>

      <t>
        This document requests creation of the "Proof of Process VDF
        Algorithms" registry. This registry contains identifiers for
        Verifiable Delay Function algorithms used in Evidence packets.
      </t>

      <section anchor="iana-vdf-procedures">
        <name>Registration Procedures</name>

        <table anchor="tbl-vdf-procedures">
          <name>VDF Algorithms Registration Procedures</name>
          <thead>
            <tr>
              <th>Range</th>
              <th>Category</th>
              <th>Registration Procedure</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1-15</td>
              <td>Iterated hash VDFs</td>
              <td>Standards Action</td>
            </tr>
            <tr>
              <td>16-31</td>
              <td>Succinct VDFs</td>
              <td>Standards Action</td>
            </tr>
            <tr>
              <td>32-63</td>
              <td>Experimental</td>
              <td>Expert Review</td>
            </tr>
          </tbody>
        </table>

        <t>
          Iterated hash VDFs (1-15) are algorithms where verification
          requires recomputation. Standards Action ensures thorough
          security analysis.
        </t>
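
        <t>
          A minimal non-normative sketch of an iterated hash VDF in the
          style of iterated-sha256 (value 1) illustrates why
          verification is O(n); the function names are illustrative,
          and the normative parameter encoding is defined elsewhere in
          this document:
        </t>

        <sourcecode type="python"><![CDATA[
import hashlib

def vdf_compute(seed, iterations):
    """Iterate SHA-256 sequentially; each step depends on the prior output,
    so the computation cannot be parallelized."""
    state = seed
    for _ in range(iterations):
        state = hashlib.sha256(state).digest()
    return state

def vdf_verify(seed, iterations, claimed_output):
    """O(n) verification: the only way to check the claim is to recompute
    the entire chain."""
    return vdf_compute(seed, iterations) == claimed_output
]]></sourcecode>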

        <t>
          Succinct VDFs (16-31) are algorithms with efficient
          verification (e.g., <xref target="Pietrzak2019"/>,
          <xref target="Wesolowski2019"/>). Standards Action ensures
          cryptographic soundness.
        </t>

        <t>
          Experimental algorithms (32-63) may be registered with Expert
          Review for research and interoperability testing. Production
          use requires promotion to Standards Action ranges.
        </t>
      </section>

      <section anchor="iana-vdf-template">
        <name>Registration Template</name>

        <t>
          Registrations MUST include the following fields:
        </t>

        <dl>
          <dt>Algorithm Value:</dt>
          <dd>Integer identifier in the appropriate range</dd>

          <dt>Algorithm Name:</dt>
          <dd>Human-readable name</dd>

          <dt>Category:</dt>
          <dd>One of: iterated-hash, succinct, or experimental</dd>

          <dt>Parameters:</dt>
          <dd>Required CDDL structure for algorithm parameters</dd>

          <dt>Verification Complexity:</dt>
          <dd>Asymptotic verification complexity</dd>

          <dt>Security Assumptions:</dt>
          <dd>Cryptographic assumptions for security</dd>

          <dt>Reference:</dt>
          <dd>Document specifying the algorithm</dd>
        </dl>
      </section>

      <section anchor="iana-vdf-initial">
        <name>Initial Registry Contents</name>

        <table anchor="tbl-vdf-algorithms">
          <name>VDF Algorithms Initial Values</name>
          <thead>
            <tr>
              <th>Value</th>
              <th>Name</th>
              <th>Category</th>
              <th>Verification</th>
              <th>Reference</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1</td>
              <td>iterated-sha256</td>
              <td>iterated-hash</td>
              <td>O(n)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>2</td>
              <td>iterated-sha3-256</td>
              <td>iterated-hash</td>
              <td>O(n)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>3-15</td>
              <td>Unassigned</td>
              <td>iterated-hash</td>
              <td>-</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>16</td>
              <td>pietrzak-rsa2048</td>
              <td>succinct</td>
              <td>O(log n)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>17</td>
              <td>wesolowski-rsa2048</td>
              <td>succinct</td>
              <td>O(1)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>18</td>
              <td>pietrzak-class-group</td>
              <td>succinct</td>
              <td>O(log n)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>19</td>
              <td>wesolowski-class-group</td>
              <td>succinct</td>
              <td>O(1)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>20-31</td>
              <td>Unassigned</td>
              <td>succinct</td>
              <td>-</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>32-63</td>
              <td>Unassigned</td>
              <td>experimental</td>
              <td>-</td>
              <td>[this document]</td>
            </tr>
          </tbody>
        </table>

        <t>
          The iterated hash algorithms use the iterated-hash-params
          CDDL structure (keys 1-2). The succinct algorithms use the
          succinct-vdf-params CDDL structure (keys 10-11). See
          <xref target="vdf-mechanisms"/> for detailed specifications.
        </t>
      </section>
    </section>

    <section anchor="iana-entropy-sources-registry">
      <name>Proof of Process Entropy Sources Registry</name>

      <t>
        This document requests creation of the "Proof of Process Entropy
        Sources" registry. This registry contains identifiers for
        behavioral entropy sources used in Jitter Seal bindings.
      </t>

      <section anchor="iana-entropy-procedures">
        <name>Registration Procedures</name>

        <t>
          The registration procedure for this registry is Specification
          Required.
        </t>

        <t>
          Registrations MUST include a specification describing:
        </t>

        <ul>
          <li>
            The input modality or behavioral signal being captured
          </li>
          <li>
            The method for converting the signal to timing intervals
          </li>
          <li>
            Privacy implications of capturing this entropy source
          </li>
          <li>
            Expected entropy density (bits per sample) under typical conditions
          </li>
        </ul>
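
        <t>
          As a non-normative illustration of the last requirement, a
          registration might estimate entropy density by quantizing
          timing intervals and computing a plug-in Shannon estimate.
          The 10 ms bucket width is an arbitrary assumption here, and a
          real analysis would use a conservative min-entropy treatment
          rather than this estimate:
        </t>

        <sourcecode type="python"><![CDATA[
import math
from collections import Counter

def entropy_bits_per_sample(intervals_ms, bucket_ms=10):
    """Plug-in Shannon-entropy estimate over quantized timing intervals.

    Quantizes each interval into bucket_ms-wide buckets and computes
    -sum(p * log2(p)) over the empirical bucket distribution.
    """
    buckets = [int(t // bucket_ms) for t in intervals_ms]
    n = len(buckets)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(buckets).values())
]]></sourcecode>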
      </section>

      <section anchor="iana-entropy-template">
        <name>Registration Template</name>

        <t>
          Registrations MUST include the following fields:
        </t>

        <dl>
          <dt>Source Value:</dt>
          <dd>Integer identifier</dd>

          <dt>Source Name:</dt>
          <dd>Human-readable name (lowercase with hyphens)</dd>

          <dt>Description:</dt>
          <dd>Brief description of the entropy source</dd>

          <dt>Privacy Impact:</dt>
          <dd>One of: minimal, low, moderate, high</dd>

          <dt>Reference:</dt>
          <dd>Document specifying the entropy source</dd>
        </dl>
      </section>

      <section anchor="iana-entropy-initial">
        <name>Initial Registry Contents</name>

        <table anchor="tbl-entropy-sources">
          <name>Entropy Sources Initial Values</name>
          <thead>
            <tr>
              <th>Value</th>
              <th>Name</th>
              <th>Description</th>
              <th>Privacy Impact</th>
              <th>Reference</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1</td>
              <td>keystroke-timing</td>
              <td>Inter-key intervals from keyboard input</td>
              <td>moderate</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>2</td>
              <td>pause-patterns</td>
              <td>Gaps between editing bursts (&gt;2 seconds)</td>
              <td>low</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>3</td>
              <td>edit-cadence</td>
              <td>Rhythm of insertions/deletions over time</td>
              <td>low</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>4</td>
              <td>cursor-movement</td>
              <td>Navigation timing within document</td>
              <td>low</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>5</td>
              <td>scroll-behavior</td>
              <td>Document scrolling patterns</td>
              <td>minimal</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>6</td>
              <td>focus-changes</td>
              <td>Application focus gain/loss events</td>
              <td>low</td>
              <td>[this document]</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>
  </section>

  <section anchor="iana-media-types">
    <name>Media Types Registry</name>

    <t>
      This document requests registration of two media types in the
      "Media Types" registry <xref target="IANA.media-types"/>.
    </t>

    <section anchor="iana-media-type-pop">
      <name>application/vnd.example-pop+cbor Media Type</name>

      <dl>
        <dt>Type name:</dt>
        <dd>application</dd>

        <dt>Subtype name:</dt>
        <dd>vnd.example-pop+cbor</dd>

        <dt>Required parameters:</dt>
        <dd>N/A</dd>

        <dt>Optional parameters:</dt>
        <dd>N/A</dd>

        <dt>Encoding considerations:</dt>
        <dd>
          binary. As a CBOR format, it may contain NUL octets and
          non-line-oriented data.
        </dd>

        <dt>Security considerations:</dt>
        <dd>
          This media type contains cryptographically anchored evidence
          of an authorship process. It does not contain active or
          executable content. Integrity is ensured via an HMAC-SHA256
          checkpoint chain
          and Verifiable Delay Functions (VDFs). Privacy is maintained
          through author-controlled salting of content hashes as defined
          in <xref target="salt-modes"/>. Security considerations of
          CBOR <xref target="RFC8949"/> apply. See also
          <xref target="security-considerations"/> of this document.
        </dd>

        <dt>Interoperability considerations:</dt>
        <dd>
          While the +cbor suffix allows generic parsing, full semantic
          validation and behavioral forensic analysis require a
          witnessd-compatible processor as defined in this specification.
          The content is a CBOR-encoded evidence-packet structure with
          semantic tag 1347440672.
        </dd>

        <dt>Published specification:</dt>
        <dd>[this document]</dd>

        <dt>Applications that use this media type:</dt>
        <dd>
          Generation of digital authorship evidence by the witnessd
          suite and WritersLogic integrated editors; verification
          services, document provenance systems, and academic
          integrity platforms.
        </dd>

        <dt>Fragment identifier considerations:</dt>
        <dd>N/A</dd>

        <dt>Additional information:</dt>
        <dd>
          <dl spacing="compact">
            <dt>Deprecated alias names for this type:</dt>
            <dd>N/A</dd>

            <dt>Magic number(s):</dt>
            <dd>0xDA50505020 (CBOR tag encoding at offset 0)</dd>

            <dt>File extension(s):</dt>
            <dd>.pop</dd>

            <dt>Macintosh file type code(s):</dt>
            <dd>N/A</dd>
          </dl>
        </dd>

        <dt>Person and email address to contact for further information:</dt>
        <dd>David Condrey &lt;david@writerslogic.com&gt;</dd>

        <dt>Intended usage:</dt>
        <dd>COMMON</dd>

        <dt>Restrictions on usage:</dt>
        <dd>N/A</dd>

        <dt>Author:</dt>
        <dd>David Condrey</dd>

        <dt>Change controller:</dt>
        <dd>WritersLogic Inc.</dd>

        <dt>Provisional registration:</dt>
        <dd>No</dd>
      </dl>
    </section>

    <section anchor="iana-media-type-war">
      <name>application/vnd.example-war+cbor Media Type</name>

      <dl>
        <dt>Type name:</dt>
        <dd>application</dd>

        <dt>Subtype name:</dt>
        <dd>vnd.example-war+cbor</dd>

        <dt>Required parameters:</dt>
        <dd>N/A</dd>

        <dt>Optional parameters:</dt>
        <dd>N/A</dd>

        <dt>Encoding considerations:</dt>
        <dd>
          binary. As a CBOR-encoded format, it may contain NUL octets and
          non-line-oriented data.
        </dd>

        <dt>Security considerations:</dt>
        <dd>
          This media type conveys the final appraisal result (verdict) of
          an authorship attestation. (1) It does not contain active or
          executable content. (2) Integrity and authenticity are provided
          via a COSE signature <xref target="RFC9052"/> that MUST be
          verified against the Verifier's public key. (3) The information
          identifies a specific document by its content hash; privacy is
          managed through the hash-salting protocols defined in
          <xref target="salt-modes"/>. (4) The security considerations for
          CBOR <xref target="RFC8949"/> and COSE <xref target="RFC9052"/>
          apply. Users are cautioned not to rely on unsigned or unverified
          .war files for high-stakes authenticity claims. See also
          <xref target="security-considerations"/> of this document.
        </dd>

        <dt>Interoperability considerations:</dt>
        <dd>
          The +cbor suffix allows generic CBOR tools to identify the
          underlying encoding. This format is a specific profile of the
          RATS Attestation Result and references a Proof of Process
          (.pop) evidence packet by UUID as defined in this specification.
          The content is a CBOR-encoded attestation-result structure with
          semantic tag 1463898656.
        </dd>

        <dt>Published specification:</dt>
        <dd>[this document]</dd>

        <dt>Applications that use this media type:</dt>
        <dd>
          Verification and display of authorship scores by publishers,
          academic repositories, literary journals, and the WritersLogic
          verification suite.
        </dd>

        <dt>Fragment identifier considerations:</dt>
        <dd>N/A</dd>

        <dt>Additional information:</dt>
        <dd>
          <dl spacing="compact">
            <dt>Deprecated alias names for this type:</dt>
            <dd>N/A</dd>

            <dt>Magic number(s):</dt>
            <dd>0xDA 0x57 0x41 0x52 0x20 (CBOR tag encoding at offset 0:
            the head byte 0xDA introduces a tag with a four-byte
            argument, here the ASCII octets "WAR ")</dd>

            <dt>File extension(s):</dt>
            <dd>.war</dd>

            <dt>Macintosh file type code(s):</dt>
            <dd>N/A</dd>
          </dl>
        </dd>

        <dt>Person and email address to contact for further information:</dt>
        <dd>David Condrey &lt;david@writerslogic.com&gt;</dd>

        <dt>Intended usage:</dt>
        <dd>COMMON</dd>

        <dt>Restrictions on usage:</dt>
        <dd>N/A</dd>

        <dt>Author:</dt>
        <dd>David Condrey</dd>

        <dt>Change controller:</dt>
        <dd>WritersLogic Inc.</dd>

        <dt>Provisional registration:</dt>
        <dd>No</dd>
      </dl>
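The magic number can be checked without a full CBOR decode. The following sketch is illustrative and not part of the registration; it assumes the tag's four-byte argument is the ASCII octets "WAR " (0x57 0x41 0x52 0x20) and relies only on the CBOR rule that a tag with a four-byte argument is encoded with initial byte 0xDA (major type 6, additional information 26):

```python
import struct

# Illustrative sketch (not part of the registration): recognize a .war
# Attestation Result by its leading CBOR tag octets. Assumes the tag's
# four-byte argument is the ASCII octets "WAR ".
WAR_TAG_ARG = b"WAR "  # 0x57 0x41 0x52 0x20

def has_war_magic(data: bytes) -> bool:
    # A CBOR tag with a four-byte argument is encoded as the head byte
    # 0xDA (major type 6, additional information 26), followed by the
    # big-endian argument, followed by the tagged data item itself.
    return len(data) >= 5 and data[0] == 0xDA and data[1:5] == WAR_TAG_ARG

# The tag number a generic CBOR decoder would report for this argument:
(tag_number,) = struct.unpack(">I", WAR_TAG_ARG)

print(has_war_magic(b"\xdaWAR \xa3"))  # True: tag head, "WAR ", then a map
print(has_war_magic(b"\xa3"))          # False: no leading tag
```

A generic CBOR library can of course decode the tag directly; the value of the magic-number form is that file-type sniffers can match the first five octets without any decoder at all.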
    </section>
  </section>

  <section anchor="iana-expert-review">
    <name>Designated Expert Instructions</name>

    <t>
      The designated experts for the registries created by this document
      should apply the following criteria when evaluating registration
      requests:
    </t>

    <section anchor="iana-expert-claim-types">
      <name>Proof of Process Claim Types Registry</name>

      <t>
        For claim types registered under the Specification Required
        policy:
      </t>

      <ul>
        <li>
          The specification MUST clearly define what the claim asserts
        </li>
        <li>
          For chain-verifiable claims, the specification MUST demonstrate
          that the claim can be verified solely from the Evidence packet
        </li>
        <li>
          For monitoring-dependent claims, the specification MUST
          document the Attesting Environment trust assumptions
        </li>
        <li>
          The claim name SHOULD be descriptive and follow existing
          naming conventions
        </li>
      </ul>

      <t>
        For environmental claims registered under the Expert Review
        policy:
      </t>

      <ul>
        <li>
          The specification SHOULD describe implementation considerations
        </li>
        <li>
          The claim SHOULD NOT duplicate existing claims
        </li>
        <li>
          Privacy implications SHOULD be documented
        </li>
      </ul>
    </section>

    <section anchor="iana-expert-vdf">
      <name>Proof of Process VDF Algorithms Registry</name>

      <t>
        For experimental algorithms registered under the Expert Review
        policy:
      </t>

      <ul>
        <li>
          The algorithm MUST be documented with sufficient detail for
          independent implementation
        </li>
        <li>
          Security analysis SHOULD be provided, even if preliminary
        </li>
        <li>
          The algorithm SHOULD NOT be a minor variant of an existing
          registered algorithm
        </li>
        <li>
          Implementation availability is encouraged but not required
        </li>
      </ul>
    </section>

    <section anchor="iana-expert-entropy">
      <name>Proof of Process Entropy Sources Registry</name>

      <t>
        For entropy sources registered under the Specification Required
        policy:
      </t>

      <ul>
        <li>
          The specification MUST describe how timing intervals are
          derived from the entropy source
        </li>
        <li>
          Expected entropy density under typical conditions SHOULD be
          documented
        </li>
        <li>
          Privacy implications MUST be clearly stated
        </li>
        <li>
          The entropy source SHOULD provide meaningful behavioral signal
          that cannot be trivially simulated
        </li>
      </ul>
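      <t>
        To give intuition for the "entropy density" criterion above, the
        sketch below estimates the Shannon entropy, in bits per sample,
        of a quantized inter-event timing stream. It is illustrative
        only: the 25 ms bucket width, the sample series, and the helper
        name are hypothetical, and the normative entropy definitions of
        this document govern.
      </t>

```python
import math
from collections import Counter

# Illustrative, hypothetical helper (not defined by this specification):
# estimate Shannon entropy (bits/sample) of inter-event timing
# intervals after quantization into fixed-width buckets.
def entropy_bits_per_sample(intervals_ms, bucket_ms=25):
    buckets = [int(t // bucket_ms) for t in intervals_ms]
    n = len(buckets)
    return -sum(
        (c / n) * math.log2(c / n) for c in Counter(buckets).values()
    )

human_like = [180, 210, 95, 400, 160, 230, 120, 310, 175, 205]
replayed = [200] * 10  # constant intervals: trivially simulable

print(round(entropy_bits_per_sample(human_like), 2))  # ≈ 2.92
print(entropy_bits_per_sample(replayed) == 0.0)       # True
```

A reviewer can ask for this kind of figure under stated typical conditions: a source whose quantized intervals are near-constant carries almost no behavioral signal and is easy to replay.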
    </section>
  </section>

</section>

  </middle>

  <back>
    <!-- Normative References -->
    <references>
      <name>References</name>

      <references>
        <name>Normative References</name>

        <reference anchor="RFC2119" target="https://www.rfc-editor.org/info/rfc2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>

        <reference anchor="RFC8174" target="https://www.rfc-editor.org/info/rfc8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>

        <reference anchor="RFC8610" target="https://www.rfc-editor.org/info/rfc8610">
          <front>
            <title>Concise Data Definition Language (CDDL): A Notational Convention to Express Concise Binary Object Representation (CBOR) and JSON Data Structures</title>
            <author fullname="H. Birkholz" initials="H." surname="Birkholz"/>
            <author fullname="C. Vigano" initials="C." surname="Vigano"/>
            <author fullname="C. Bormann" initials="C." surname="Bormann"/>
            <date month="June" year="2019"/>
          </front>
          <seriesInfo name="RFC" value="8610"/>
          <seriesInfo name="DOI" value="10.17487/RFC8610"/>
        </reference>

        <reference anchor="RFC8949" target="https://www.rfc-editor.org/info/rfc8949">
          <front>
            <title>Concise Binary Object Representation (CBOR)</title>
            <author fullname="C. Bormann" initials="C." surname="Bormann"/>
            <author fullname="P. Hoffman" initials="P." surname="Hoffman"/>
            <date month="December" year="2020"/>
          </front>
          <seriesInfo name="STD" value="94"/>
          <seriesInfo name="RFC" value="8949"/>
          <seriesInfo name="DOI" value="10.17487/RFC8949"/>
        </reference>

        <reference anchor="RFC9052" target="https://www.rfc-editor.org/info/rfc9052">
          <front>
            <title>CBOR Object Signing and Encryption (COSE): Structures and Process</title>
            <author fullname="J. Schaad" initials="J." surname="Schaad"/>
            <date month="August" year="2022"/>
          </front>
          <seriesInfo name="STD" value="96"/>
          <seriesInfo name="RFC" value="9052"/>
          <seriesInfo name="DOI" value="10.17487/RFC9052"/>
        </reference>

        <reference anchor="RFC9334" target="https://www.rfc-editor.org/info/rfc9334">
          <front>
            <title>Remote ATtestation procedureS (RATS) Architecture</title>
            <author fullname="H. Birkholz" initials="H." surname="Birkholz"/>
            <author fullname="D. Thaler" initials="D." surname="Thaler"/>
            <author fullname="M. Richardson" initials="M." surname="Richardson"/>
            <author fullname="N. Smith" initials="N." surname="Smith"/>
            <author fullname="W. Pan" initials="W." surname="Pan"/>
            <date month="January" year="2024"/>
          </front>
          <seriesInfo name="RFC" value="9334"/>
          <seriesInfo name="DOI" value="10.17487/RFC9334"/>
        </reference>

        <reference anchor="RFC9711" target="https://www.rfc-editor.org/info/rfc9711">
          <front>
            <title>The Entity Attestation Token (EAT)</title>
            <author fullname="L. Lundblade" initials="L." surname="Lundblade"/>
            <author fullname="G. Mandyam" initials="G." surname="Mandyam"/>
            <author fullname="J. O'Donoghue" initials="J." surname="O'Donoghue"/>
            <author fullname="C. Wallace" initials="C." surname="Wallace"/>
            <date month="April" year="2025"/>
          </front>
          <seriesInfo name="RFC" value="9711"/>
          <seriesInfo name="DOI" value="10.17487/RFC9711"/>
        </reference>

        <reference anchor="IANA.cbor-tags" target="https://www.iana.org/assignments/cbor-tags">
          <front>
            <title>CBOR Tags</title>
            <author>
              <organization>IANA</organization>
            </author>
            <date/>
          </front>
        </reference>

        <reference anchor="IANA.media-types" target="https://www.iana.org/assignments/media-types">
          <front>
            <title>Media Types</title>
            <author>
              <organization>IANA</organization>
            </author>
            <date/>
          </front>
        </reference>

        <reference anchor="IANA.cose" target="https://www.iana.org/assignments/cose">
          <front>
            <title>CBOR Object Signing and Encryption (COSE)</title>
            <author>
              <organization>IANA</organization>
            </author>
            <date/>
          </front>
        </reference>

        <reference anchor="IANA.cwt" target="https://www.iana.org/assignments/cwt">
          <front>
            <title>CBOR Web Token (CWT) Claims</title>
            <author>
              <organization>IANA</organization>
            </author>
            <date/>
          </front>
        </reference>

      </references>

      <references>
        <name>Informative References</name>

        <reference anchor="RFC3161" target="https://www.rfc-editor.org/info/rfc3161">
          <front>
            <title>Internet X.509 Public Key Infrastructure Time-Stamp Protocol (TSP)</title>
            <author fullname="C. Adams" initials="C." surname="Adams"/>
            <author fullname="P. Cain" initials="P." surname="Cain"/>
            <author fullname="D. Pinkas" initials="D." surname="Pinkas"/>
            <author fullname="R. Zuccherato" initials="R." surname="Zuccherato"/>
            <date month="August" year="2001"/>
          </front>
          <seriesInfo name="RFC" value="3161"/>
          <seriesInfo name="DOI" value="10.17487/RFC3161"/>
        </reference>

        <reference anchor="RFC6973" target="https://www.rfc-editor.org/info/rfc6973">
          <front>
            <title>Privacy Considerations for Internet Protocols</title>
            <author fullname="A. Cooper" initials="A." surname="Cooper"/>
            <author fullname="H. Tschofenig" initials="H." surname="Tschofenig"/>
            <author fullname="B. Aboba" initials="B." surname="Aboba"/>
            <author fullname="J. Peterson" initials="J." surname="Peterson"/>
            <author fullname="J. Morris" initials="J." surname="Morris"/>
            <author fullname="M. Hansen" initials="M." surname="Hansen"/>
            <author fullname="R. Smith" initials="R." surname="Smith"/>
            <date month="July" year="2013"/>
          </front>
          <seriesInfo name="RFC" value="6973"/>
          <seriesInfo name="DOI" value="10.17487/RFC6973"/>
        </reference>

        <reference anchor="RFC9562" target="https://www.rfc-editor.org/info/rfc9562">
          <front>
            <title>Universally Unique IDentifiers (UUIDs)</title>
            <author fullname="K. Davis" initials="K." surname="Davis"/>
            <author fullname="B. Peabody" initials="B." surname="Peabody"/>
            <author fullname="P. Leach" initials="P." surname="Leach"/>
            <date month="May" year="2024"/>
          </front>
          <seriesInfo name="RFC" value="9562"/>
          <seriesInfo name="DOI" value="10.17487/RFC9562"/>
        </reference>

        <reference anchor="I-D.ietf-rats-ar4si" target="https://datatracker.ietf.org/doc/html/draft-ietf-rats-ar4si">
          <front>
            <title>Attestation Results for Secure Interactions</title>
            <author fullname="H. Birkholz" initials="H." surname="Birkholz"/>
            <author fullname="T. Fossati" initials="T." surname="Fossati"/>
            <author fullname="W. Pan" initials="W." surname="Pan"/>
            <author fullname="E. Voit" initials="E." surname="Voit"/>
            <date/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-rats-ar4si"/>
        </reference>

        <reference anchor="I-D.ietf-rats-epoch-markers" target="https://datatracker.ietf.org/doc/html/draft-ietf-rats-epoch-markers">
          <front>
            <title>RATS Epoch Markers</title>
            <author fullname="H. Birkholz" initials="H." surname="Birkholz"/>
            <author fullname="T. Fossati" initials="T." surname="Fossati"/>
            <author fullname="W. Pan" initials="W." surname="Pan"/>
            <author fullname="C. Bormann" initials="C." surname="Bormann"/>
            <date/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-rats-epoch-markers"/>
        </reference>

        <reference anchor="I-D.ietf-rats-ear" target="https://datatracker.ietf.org/doc/html/draft-ietf-rats-ear">
          <front>
            <title>EAT Attestation Results</title>
            <author fullname="T. Fossati" initials="T." surname="Fossati"/>
            <author fullname="S. Frost" initials="S." surname="Frost"/>
            <date/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-rats-ear"/>
        </reference>

        <reference anchor="Pietrzak2019" target="https://eprint.iacr.org/2018/627">
          <front>
            <title>Simple Verifiable Delay Functions</title>
            <author fullname="K. Pietrzak" initials="K." surname="Pietrzak"/>
            <date year="2019"/>
          </front>
          <seriesInfo name="ITCS" value="2019"/>
        </reference>

        <reference anchor="Wesolowski2019" target="https://eprint.iacr.org/2018/623">
          <front>
            <title>Efficient Verifiable Delay Functions</title>
            <author fullname="B. Wesolowski" initials="B." surname="Wesolowski"/>
            <date year="2019"/>
          </front>
          <seriesInfo name="EUROCRYPT" value="2019"/>
        </reference>

        <reference anchor="MMR" target="https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md">
          <front>
            <title>Merkle Mountain Ranges</title>
            <author fullname="P. Todd" initials="P." surname="Todd"/>
            <date year="2016"/>
          </front>
        </reference>

        <reference anchor="TPM2.0" target="https://trustedcomputinggroup.org/resource/tpm-library-specification/">
          <front>
            <title>TPM 2.0 Library Specification</title>
            <author>
              <organization>Trusted Computing Group</organization>
            </author>
            <date year="2019"/>
          </front>
        </reference>

      </references>

    </references>




    <!-- Acknowledgments -->
    <section anchor="acknowledgments" numbered="false">
      <name>Acknowledgments</name>
      <t>
        The author would like to thank the members of the IETF RATS
        working group for their foundational work on remote attestation
        architectures, which this specification builds upon.
      </t>
      <t>
        Special thanks to the reviewers and contributors who provided feedback
        on early drafts of this specification.
      </t>
      <!-- Placeholder for additional acknowledgments -->
    </section>

    <!-- Document History -->
    <section anchor="document-history" numbered="false" removeInRFC="true">
      <name>Document History</name>

      <section anchor="history-00" numbered="false">
        <name>draft-condrey-rats-pop-00</name>
        <t>
          Initial submission.
        </t>
        <ul>
          <li>Defined Evidence Packet (.pop) and Attestation Result (.war) formats</li>
          <li>Specified Jitter Seal mechanism for behavioral entropy capture</li>
          <li>Specified VDF mechanisms for temporal ordering proofs</li>
          <li>Defined absence proof taxonomy with trust requirements</li>
          <li>Established forgery cost bounds methodology</li>
          <li>Added cross-document provenance linking mechanism</li>
          <li>Added continuation tokens for multi-packet Evidence series</li>
          <li>Added quantified trust policies for customizable appraisal</li>
          <li>Added compact evidence references for metadata embedding</li>
          <li>Documented security and privacy considerations</li>
          <li>Requested IANA registrations for CBOR tags, media types, and EAT claims</li>
        </ul>
      </section>

    </section>

  </back>

</rfc>
