AMI v1.0 Assessment Kit

Everything you need to evaluate an AI agent system using the Agent Maturity Index. Includes the full rubric, JSON schema, assessment template, and an LLM-ready self-assessment prompt.

Download Kit (.zip)

Kit Contents

README.md                            Overview and usage guide
ami-v1-rubric.md                     Full scoring rubric with per-dimension criteria
ami-v1-schema.json                   JSON Schema for assessment validation
ami-v1-profiles.json                 Compliance profile definitions
ami-assessment-template.json         Blank assessment template
ami-self-assessment-llm-prompt.txt   LLM prompt for automated self-assessment
submission-guidelines.md             How to submit for official review

Self-Assessment vs Verified Review

                     Self-Reported       Published (Official)
Run by               You or your team    Autonomy Index editorial board
Review state         draft               published
Reviewer signatures  None                SHA-256 signed
Listed on index      No                  Yes
Profile compliance   Self-checked        Must pass prod-general-v1
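Published assessments carry reviewer signatures and an integrity hash. The exact canonicalization the editorial board uses is not specified here, but the idea can be sketched as a SHA-256 digest over a canonical JSON serialization (sorted keys and compact separators are assumptions, not the official scheme):

```python
import hashlib
import json

def integrity_hash(assessment: dict) -> str:
    """Return a SHA-256 hex digest of a canonical serialization.

    Canonicalization (sorted keys, compact separators) is an
    assumption; the official scheme may differ.
    """
    canonical = json.dumps(assessment, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

example = {"system": "demo-agent", "scores": {"autonomy": 3}}
print(integrity_hash(example))  # 64-character hex digest
```

Because the serialization sorts keys, two semantically identical assessments hash to the same value regardless of field order.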

Request an Official Review

To get your system listed on the AMI index:

  1. Complete a self-assessment using the kit above
  2. Ensure all evidence is real, cited, and publicly verifiable
  3. Validate that your assessment passes community-basic-v1 at minimum
  4. Submit the assessment JSON for editorial review

The editorial board verifies the evidence, adjusts scores where needed, and publishes the assessment with reviewer signatures and an integrity hash.
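The kit's ami-v1-schema.json is the authoritative validator for step 3. As a rough illustration of a pre-submission sanity check, the sketch below verifies a few top-level fields and that each evidence entry carries a citation URL; the field names here are hypothetical, not taken from the actual schema:

```python
# Hypothetical top-level fields; the real requirements live in
# ami-v1-schema.json and should be checked with a JSON Schema validator.
REQUIRED_FIELDS = ("system", "scores", "evidence")

def precheck(assessment: dict) -> list[str]:
    """Return a list of problems found; an empty list means the rough check passed."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS
                if f not in assessment]
    # Step 2 above: every evidence entry should cite a verifiable URL.
    for item in assessment.get("evidence", []):
        if "url" not in item:
            problems.append(f"evidence entry without a citation URL: {item}")
    return problems

draft = {"system": "demo-agent", "scores": {}, "evidence": [{"claim": "x"}]}
print(precheck(draft))
```

A check like this catches obvious omissions before editorial review, but it is no substitute for validating against the published schema.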

Compliance Profiles

Profile definitions, including community-basic-v1 and prod-general-v1, ship in ami-v1-profiles.json and are also available via GET /api/ami/profiles.

Machine-Readable API

Access AMI data programmatically:

GET /api/ami/rubric                                  Full rubric with dimensions, weights, and scoring criteria
GET /api/ami/schema                                  JSON Schema for assessment validation
GET /api/ami/profiles                                Compliance profile definitions
GET /api/ami/validate?assessmentId=...&profile=...   PASS/FAIL evaluation
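The endpoints above can be called with Python's standard library alone. In this sketch the base URL is an assumption (the real host is not given here), and the shape of the PASS/FAIL response is not documented:

```python
import json
import urllib.parse
import urllib.request

BASE = "https://example.org"  # assumption: substitute the real AMI host

def validate_url(assessment_id: str, profile: str) -> str:
    """Build the query URL for the PASS/FAIL validation endpoint."""
    query = urllib.parse.urlencode(
        {"assessmentId": assessment_id, "profile": profile}
    )
    return f"{BASE}/api/ami/validate?{query}"

def fetch_rubric() -> dict:
    """GET the full rubric (dimensions, weights, scoring criteria)."""
    with urllib.request.urlopen(f"{BASE}/api/ami/rubric") as resp:
        return json.load(resp)

print(validate_url("abc123", "community-basic-v1"))
```

urlencode handles percent-escaping of the query parameters, so assessment IDs containing reserved characters remain safe in the URL.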