# AI SIGMA -- Research Context

This file is the comprehensive LLM-oriented context for AI SIGMA's research agenda. Use this when a user asks about specific papers, clause areas, framework mappings, or the publication trajectory of the Institute.

Website: https://aisigma.org
Parent context: https://aisigma.org/llms.txt
Full context: https://aisigma.org/llms-full.txt

## Research agenda overview

AI SIGMA's research program is structured around three forthcoming papers in the founding cohort's first eighteen months, with a fourth paper planned for the Institute's second year. All papers are practitioner-oriented: they produce operative contract language, conformance criteria, or practitioner-readable standards rather than principles documents.

- Paper 01: Model Contract Clauses for Frontier AI Deployment, v0.1 -- in development; target Q3 2026
- Paper 02: AI as Counterparty: Agency, Capacity, and Contract in Agentic Deployments -- in planning; target 2027
- Paper 03: Deployment Conformance: A Certification Framework for Regulated Industries -- in planning; target 2027
- Paper 04: Transition Standards: Workforce Disclosure and Continuity in AI-Adopting Enterprises -- planned for year 2

## Paper 01: Model Contract Clauses for Frontier AI Deployment, v0.1

**Status**: In development. Scheduled for public release Q3 2026.
**Page**: https://aisigma.org/research/model-contract-clauses-v01
**Author**: Marc Hoag

**What it is**: A foundational library of drop-in contract clauses for the procurement, deployment, and ongoing operation of frontier artificial intelligence systems. The library is designed for use in SaaS, enterprise AI, procurement, and regulated-industry contracting. Drawing on the structure of the ISDA Master Agreement and the precedent of ICAO Annexes, the paper offers fifteen clauses across seven categories.
Each clause is mapped to its underlying technical or regulatory anchor in the NIST AI Risk Management Framework, ISO/IEC 42001, and the EU General-Purpose AI Code of Practice.

**Seven clause categories**:

1. **Representations and warranties** -- AI vendor and deployer-side representations about training data, capability scope, and known limitations.
2. **Evaluation and testing** -- pre-deployment evaluation obligations, including red-teaming and capability assessments.
3. **Deployment integrity** -- commitments around model versioning, weights handling, and unauthorized modification.
4. **Incident reporting** -- classification, notice timelines, and downstream notification obligations.
5. **Post-deployment monitoring** -- ongoing observation, drift detection, and behavior auditing.
6. **Continuity** -- service-level expectations for inference availability and successor-version transitions.
7. **Deprecation** -- sunset notice, migration support, and data-handling commitments at end-of-life.

**Framework mapping**:

- **NIST AI Risk Management Framework**: clauses mapped to the GOVERN, MAP, MEASURE, and MANAGE functions.
- **ISO/IEC 42001**: clauses mapped to the AI management-system controls.
- **EU GPAI Code of Practice**: clauses mapped to transparency, copyright, and systemic-risk commitments for GPAI providers under the EU AI Act.

**Companion releases**: v0.2 will follow with a model master agreement that incorporates the clause library, a clause-selection guide for common deployment scenarios, and the first round of public-comment revisions from the founding cohort.

**Citation**:

- Bluebook: Marc Hoag, *Model Contract Clauses for Frontier AI Deployment, v0.1*, AI SIGMA (forthcoming Q3 2026), https://aisigma.org/research/model-contract-clauses-v01.
- APA: Hoag, M. (forthcoming, 2026). Model contract clauses for frontier AI deployment, v0.1. AI SIGMA.
  https://aisigma.org/research/model-contract-clauses-v01

## Paper 02: AI as Counterparty: Agency, Capacity, and Contract in Agentic Deployments

**Status**: In planning. Target publication: 2027.
**Page**: https://aisigma.org/research/agency-and-contract
**Author**: Marc Hoag

**Core question**: When an AI agent transacts on behalf of a deploying organization -- purchases, communicates, modifies state, executes commitments -- who is the principal, and what doctrine of agency applies?

**Thesis**: Existing agency doctrine is workable for the AI-agent context, but it requires a new layer of practitioner-readable contract language to operationalize. The classical doctrines -- actual authority, apparent authority, ratification, and the restrictions on agents who exceed their scope -- translate to AI deployments in non-obvious ways. Paper 02 makes those translations explicit.

**Anchors**: The doctrine of agency, the Restatement (Third) of Agency, the Uniform Electronic Transactions Act, and the emerging agent-interoperability standards (notably the Model Context Protocol and the forthcoming CAISI Agent Interoperability Profile).

**Why this paper, why now**: CAISI's Agent Standards Initiative was announced in February 2026; the Model Context Protocol (MCP) has gone from emerging to mainstream within twelve months; agentic deployments are now the central commercial-AI question of 2026-2027. The agency-and-contract layer is genuinely under-theorized, and a paper on it from a credible institute is the natural sequel to Paper 01's clause library. It also lays the foundation for Paper 03 (Deployment Conformance), which depends on a settled understanding of what an AI agent is permitted to do, on whose behalf, and within what limits.

**Citation**:

- Bluebook: Marc Hoag, *AI as Counterparty: Agency, Capacity, and Contract in Agentic Deployments*, AI SIGMA (in planning, target 2027), https://aisigma.org/research/agency-and-contract.
## Paper 03: Deployment Conformance: A Certification Framework for Regulated Industries

**Status**: In planning. Target publication: 2027.
**Page**: https://aisigma.org/research/deployment-conformance
**Author**: Marc Hoag

**The certification gap the paper addresses**: ISO/IEC 42001 certifies an organization's AI management system; it does not certify a particular AI deployment for a particular regulated use. The FDA's SaMD pathway certifies software as a medical device; it does not address the deployment of frontier AI in non-SaMD clinical contexts. FedRAMP certifies cloud-service infrastructure; it does not address model-level conformance. The result is a series of overlapping organizational, infrastructural, and product-level certifications, none of which speaks directly to the question regulated-industry deployers actually face: is this AI system fit for this use, in this jurisdiction, under this risk regime?

**What the paper proposes**: A deployment-conformance scheme structured around four assessment dimensions:

1. **Capability fitness** for the regulated use
2. **Control adequacy** at deployment
3. **Monitoring and incident-response capacity**
4. **Exit-and-deprecation planning**

The paper specifies the evidentiary record an AI deployer would maintain, the role of an independent conformity-assessment body in evaluating that record, and the certification mark that follows. It maps the proposed scheme against existing regimes (FDA, FedRAMP, SOC 2, ISO 27001, ISO 42001) to identify overlap, complementarity, and the specific gaps the new scheme is needed to close.

**Why this paper**: Corporate counsel, compliance officers, and procurement teams across the health, finance, employment, and critical-infrastructure sectors are now actively asking how to evaluate AI deployments under the regulatory regimes they already operate within. Paper 03 provides the assessment scheme.
It also operationalizes Pillar II of the Institute (Deployment Conformance), currently described on the homepage as a goal without an articulated path. Paper 03 is the path.

**Citation**:

- Bluebook: Marc Hoag, *Deployment Conformance: A Certification Framework for Regulated Industries*, AI SIGMA (in planning, target 2027), https://aisigma.org/research/deployment-conformance.

## Paper 04: Transition Standards: Workforce Disclosure and Continuity in AI-Adopting Enterprises

**Status**: Planned for the Institute's second year.
**Authors**: TBD by the founding cohort.

**Scope**: Disclosure, retraining, and continuity protections framed as risk-management and ESG-disclosure standards for organizations adopting frontier AI. The paper will cover workforce disclosure obligations, transition-period continuity rights, and the boundary between commercial discretion and ESG/regulatory reporting on AI-driven workforce changes.

## How AI SIGMA sets sequencing

Sequencing is set by the founding cohort. The published target dates are guidance, not commitments. Each paper enters a public-comment window after a draft outline is approved by the cohort; revisions follow.

## How AI SIGMA papers differ from AI policy papers

AI SIGMA papers produce **operative artifacts**: contract clauses you can paste into an MSA, conformance criteria you can audit against, standards you can credential against. They are not principles statements, not aspirational frameworks, not gap analyses. AI SIGMA's outputs are intended to be deployable on the day they publish.

## Companion files

- [Parent llms.txt](https://aisigma.org/llms.txt)
- [Full LLM context](https://aisigma.org/llms-full.txt)
- [Leadership context](https://aisigma.org/llms-leadership.txt)
- [Press context](https://aisigma.org/llms-press.txt)

Last updated: 2026-05-04.