AI Involvement Disclosure Policy
Version 1.0 — February 2026
A companion policy to the Future’s Edge Ethical AI Policy
Purpose and philosophy
Future’s Edge is an AI-fluent organisation. We believe AI is a powerful tool for amplifying human capability, accelerating learning, and democratising access to expertise. We encourage members to use AI confidently and creatively — not despite our commitment to trust, but because of it.
This policy exists to make AI use transparent, not to limit it. Trust is built through honesty, not through avoiding tools that make us more capable.
The core principle: Use AI as much or as little as serves your work and your learning. Always disclose how you used it. The disclosure protects the integrity of the reputation system, respects the intelligence of your peers, and models the transparency we ask of everyone.
The five levels of AI involvement
We use a five-level scale to describe AI involvement in any work product. The levels are numbered 5 to 1 — where Level 5 is entirely human-authored and Level 1 is primarily AI-generated with human curation.
Why this numbering? Because human intellectual work — thinking, judging, directing, creating — is what we value most. The scale reflects that. A higher number means more human contribution. This is not a judgment against AI use — it is an honest reflection of where the cognitive work happened.
Level 5 — Human-authored
What it means:
You conceived, structured, researched, drafted, and refined this work entirely yourself. No generative AI tool was involved at any stage — including ideation, outlining, drafting, editing, translation, summarisation, or fact-checking.
You may have used:
Traditional research tools, dictionaries, spell-checkers and grammar tools that do not use generative AI (e.g. basic spell-check in a word processor).
You did not use:
Any generative AI tool — even for a single sentence, even as a sounding board.
When this level is required:
Skill-building learning tasks where the point is developing capability through the doing; reflective writing where the insight must be your own; any context where “human-authored” is explicitly specified.
Disclosure format:
AI Involvement: Level 5 — Human-authored
Level 4 — AI-verified
What it means:
You conceived, structured, researched, drafted, and refined this work yourself. After you finished, you used AI in a limited, non-generative capacity — for example, to check grammar and spelling, verify factual claims, or assess readability.
The key characteristic:
The AI did not generate content, suggest structural changes, or contribute ideas. It checked and confirmed what you had already created.
Examples:
- Running your completed essay through a grammar checker
- Using AI to verify that a statistic you cited is accurate
- Using an AI accessibility tool to check readability for plain language
When this level works well:
Quality assurance on work you have already completed; ensuring technical accuracy; making content more accessible without changing its substance.
Disclosure format:
AI Involvement: Level 4 — AI-verified
Description: AI used for [grammar checking / fact verification / readability review] after completion.
Level 3 — AI-assisted
What it means:
The core intellectual work — the ideas, argument, structure, and voice — is your own. You used AI as a tool during the process to enhance, develop, or refine your work. You could reproduce the substance of this work independently; the AI made it better, not possible.
The key characteristic:
The AI improved or extended your thinking — it did not originate it. Remove the AI and the core work still exists, albeit in rougher form.
Examples:
- Using AI to suggest alternative phrasings for something you had already written
- Asking AI to identify gaps in an argument you had already constructed
- Using AI to help structure notes you had already taken
- Using AI to translate work you wrote in your first language
- Using AI to generate follow-up questions you then answered yourself
When this level works well:
Refining and strengthening work you have already created; translating across languages while maintaining your voice; getting unstuck when you know what you want to say but need help articulating it.
Disclosure format:
AI Involvement: Level 3 — AI-assisted
Description: [One to two sentences describing how AI was used]
Example: "I developed the argument and structure independently. AI was used to refine phrasing and identify gaps in my reasoning."
Level 2 — AI-collaborated
What it means:
You and AI worked together to produce this. Both contributed meaningfully to the ideas, structure, and content. You directed, curated, challenged, and shaped the AI’s contributions; the AI generated substantial portions of the output. Your intellectual contribution is real and significant — but the final product could not exist in its current form without the AI.
The key characteristic:
This is genuine collaboration. It is not a lesser form of work — done well, it is often the highest-quality output an AI-fluent professional produces. Your skill lies in your direction, judgment, and ability to recognise and refine what is valuable.
Examples:
- Using an extended AI conversation to develop, challenge, and refine a policy position
- Prompting AI to draft sections, then substantially editing, restructuring, and challenging the output
- Using AI to synthesise research you curated, then critically reviewing and expanding the synthesis
- Iterative prompt-and-revise cycles where your judgment shapes every iteration
When this level works well:
Complex research and synthesis; policy development; designing frameworks; producing high-quality first drafts that you then refine; any context where speed and depth matter and you have the expertise to direct and evaluate effectively.
When to be cautious:
Learning contexts where the skill you are meant to develop is the thing the AI is doing for you. In those cases, Level 2 shortcuts your growth.
Disclosure format:
AI Involvement: Level 2 — AI-collaborated
Description: [Two to three sentences describing what you contributed and what AI contributed]
Session log: [Link to published log, if applicable]
AI score: [X% human / X% AI, if tool provides this]
Example: "I directed the structure, challenged assumptions, and refined all outputs. AI generated draft content based on my prompts and research direction. All final decisions and edits were mine."
Level 1 — AI-generated with human curation
What it means:
The AI produced the primary content based on a prompt or set of instructions you provided. Your contribution was in the quality of the brief, the curation of the output, and your judgment that it meets the required standard. You performed minimal post-generation editing.
The key characteristic:
Your value lies in your ability to brief effectively, evaluate critically, and decide what is fit for purpose. This is a legitimate level in many professional contexts — particularly for first drafts, templated content, or exploratory work where volume and speed matter more than voice.
Examples:
- Generating a first draft from a detailed prompt, with light editing before submission
- Using AI to produce a summary of a document you provided
- Creating a structured outline from a prompt, then submitting it with light review
- Generating multiple options and selecting one with minimal modification
When this level works well:
Producing content at scale; generating first drafts for refinement later; creating templated or standardised outputs; exploratory ideation where you need volume before quality.
When this level is not permitted:
Any learning context where the skill being assessed is the thing the AI produced. Submitting Level 1 work in a context that requires Level 3, 4, or 5 is a trust violation.
Disclosure format:
AI Involvement: Level 1 — AI-generated with human curation
Description: [One to two sentences describing the prompt or brief you provided]
Prompt summary: [Include the core prompt, if appropriate]
Session log: [Link to published log, if applicable]
AI score: [X% human / X% AI, if tool provides this]
Example: "I provided a detailed brief covering scope, audience, and key points. AI generated the draft. I reviewed for accuracy and relevance but made minimal edits."
The disclosure format
Every work product submitted at Future’s Edge includes a disclosure block. This is lightweight, standardised, and human-readable. It is not bureaucracy — it is trust infrastructure.
Standard disclosure template
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
AI INVOLVEMENT DISCLOSURE
Level: [1 / 2 / 3 / 4 / 5] — [Label]
Description: [One to three sentences describing how AI was used, or "No AI involvement" for Level 5]
Session log: [Link] / [Not published] / [Not applicable]
AI involvement score: [X% human / X% AI] / [Not available]
Submitted by: [Member name or pseudonym]
Date: [Date]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Where the disclosure appears:
- At the end of written work submissions
- In the metadata of files uploaded to the platform
- As a structured field in governance proposals
- As a required section in KnowledgeBank contributions
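The standard template above has a fixed, machine-checkable shape. As a rough sketch only (the platform's actual schema, field names, and rendering are not specified in this policy), a disclosure block could be generated from structured fields like this:

```python
# Illustrative sketch: field names and defaults are assumptions for this
# example, not the Future's Edge platform's actual schema.
LEVEL_LABELS = {
    5: "Human-authored",
    4: "AI-verified",
    3: "AI-assisted",
    2: "AI-collaborated",
    1: "AI-generated with human curation",
}

def render_disclosure(level, description, session_log="Not applicable",
                      score="Not available", submitted_by="", date=""):
    """Render a human-readable disclosure block from structured fields."""
    rule = "━" * 30
    return "\n".join([
        rule,
        "AI INVOLVEMENT DISCLOSURE",
        f"Level: {level} — {LEVEL_LABELS[level]}",
        f"Description: {description}",
        f"Session log: {session_log}",
        f"AI involvement score: {score}",
        f"Submitted by: {submitted_by}",
        f"Date: {date}",
        rule,
    ])
```

For example, `render_disclosure(5, "No AI involvement")` would produce a complete Level 5 block; the same structured fields could populate file metadata or a governance-proposal field without re-entry.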
Verification mechanisms
Section titled “Verification mechanisms”1. Published session and tool logs
Where your work was produced using an AI tool that generates a session log — including conversational AI tools, coding assistants, and writing tools — you are encouraged (and in some high-stakes contexts, required) to publish the log alongside your work.
What a published log demonstrates:
- The sequence of prompts and responses
- Your iterative direction and judgment
- The proportion of the final output that originated with the AI versus you
- That the disclosure level you claimed is honest
What it does not do:
- Expose private thoughts or unrelated conversations (you control what you share)
- Create a surveillance record of your broader AI use
- Penalise you for exploratory or iterative prompting — iteration is good practice
How to publish:
- Export the session log from your AI tool (most major tools support this)
- Upload to the Future’s Edge platform alongside your work product
- The log is attached to your submission and visible to reviewers — it is not published publicly by default unless you choose to share it
When publishing is required:
- Level 1 or 2 work submitted in governance contexts (proposals, policy amendments, charter revisions)
- Level 1 or 2 work submitted for reputation score contribution in high-stakes categories
- Any work where your disclosure level is disputed by a reviewer or member
When publishing is encouraged but optional:
- General knowledge contributions to the KnowledgeBank
- Project deliverables at Level 2 or 3
- Community discussion posts at Level 1 or 2
2. AI involvement scoring
Some AI tools can estimate the proportion of a final output that is human-generated versus AI-generated. This capability is growing rapidly. Where a tool supports this, include the score in your disclosure.
How to interpret AI involvement scores:
| Score | Likely level | What it means |
|---|---|---|
| 90–100% human | Level 5 or 4 | AI played no generative role, or only verification |
| 70–89% human | Level 4 or 3 | AI enhanced; you authored |
| 40–69% human | Level 3 or 2 | Genuine collaboration |
| 20–39% human | Level 2 or 1 | AI generated; you directed and curated |
| 0–19% human | Level 1 | Primarily AI-generated with light curation |
Important caveats:
- These tools are imperfect. Treat scores as indicative, not definitive
- A low human score in a Level 2 submission is not automatically problematic — what matters is the quality of your direction, curation, and judgment
- Do not attempt to manipulate these scores to misrepresent your level — that is a trust violation
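The interpretation table above reduces to a simple band lookup. This is a minimal sketch; the function name and tuple return are assumptions for illustration, and as the caveats note, the result is indicative rather than definitive:

```python
# Illustrative sketch of the score-interpretation table. Treat results as
# indicative only: the underlying scoring tools are imperfect.
def likely_levels(human_pct):
    """Map a '% human' score to the likely disclosure level(s) per the table."""
    if not 0 <= human_pct <= 100:
        raise ValueError("human_pct must be between 0 and 100")
    if human_pct >= 90:
        return (5, 4)  # AI played no generative role, or only verification
    if human_pct >= 70:
        return (4, 3)  # AI enhanced; you authored
    if human_pct >= 40:
        return (3, 2)  # genuine collaboration
    if human_pct >= 20:
        return (2, 1)  # AI generated; you directed and curated
    return (1,)        # primarily AI-generated with light curation
```

A score of 55% human, for instance, falls in the "genuine collaboration" band and is consistent with either a Level 3 or Level 2 disclosure; your description, not the score, determines which is honest.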
3. Peer review as a soft verification layer
Reviewers of submitted work are trained to recognise the characteristics of each level — not to penalise AI use, but to verify that the level you claimed matches the work.
If a reviewer believes your disclosure is materially inaccurate — for example, Level 2 work labelled as Level 4 — they can flag it for Ethics Circle review. This is not adversarial. It is a quality and trust mechanism that protects the integrity of the reputation system for everyone.
What reviewers look for:
- Consistency between the stated level and the nature of the work
- Whether your voice, judgment, and direction are visible in the output
- Whether the session log (if provided) supports the claimed level
- Whether the AI involvement score (if provided) aligns with the claimed level
What level is appropriate for which context?
Not every context permits every level. Some contexts require human-authored work because the learning happens through the doing. Other contexts welcome full AI collaboration because the outcome matters more than the process.
| Context | Minimum level | Maximum level | Notes |
|---|---|---|---|
| Skill-building learning tasks | Level 5 | Level 5 | The point is developing the skill through doing it yourself |
| Reflective learning submissions | Level 3 | Level 5 | Core reflection must be your own; AI can help you articulate it |
| Knowledge contributions (KnowledgeBank) | Level 1 | Level 5 | All levels permitted; disclosure required |
| Governance proposals | Level 3 | Level 5 | You must be able to defend the proposal in discussion |
| Policy development | Level 2 | Level 5 | Collaboration welcomed; you must own the logic |
| Project deliverables | Level 1 | Level 5 | Client or project context determines limits |
| Community discussion | Level 1 | Level 5 | Informal; disclosure encouraged but not enforced |
| Reputation score contributions | Level 2 | Level 5 | Level 1 does not earn individual skill reputation |
| Research and synthesis | Level 1 | Level 5 | All levels permitted; quality of curation matters |
Key principle: The more a context is designed to develop your capability, the higher the minimum level required. The more a context is about producing a quality outcome, the more flexibility you have.
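The context table above amounts to a permitted range of levels per context. A minimal sketch follows; the dictionary keys and function name are illustrative, not a platform API, and the reflective-learning range is taken as Levels 3 to 5 per that row's note:

```python
# Illustrative encoding of the context table: (minimum level, maximum level)
# permitted per context. Keys and the helper name are assumptions, not an API.
CONTEXT_LEVELS = {
    "skill-building learning tasks": (5, 5),
    "reflective learning submissions": (3, 5),
    "knowledge contributions (knowledgebank)": (1, 5),
    "governance proposals": (3, 5),
    "policy development": (2, 5),
    "project deliverables": (1, 5),
    "community discussion": (1, 5),
    "reputation score contributions": (2, 5),
    "research and synthesis": (1, 5),
}

def level_permitted(context, level):
    """Return True if a disclosure level falls within the context's range."""
    lo, hi = CONTEXT_LEVELS[context.lower()]
    return lo <= level <= hi
```

So a Level 2 governance proposal would be flagged (minimum is Level 3), while the same Level 2 work is fine as a project deliverable.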
Why we encourage AI use
Some organisations treat AI as a risk to be managed. Future’s Edge treats it as a capability to be developed.
We encourage AI use because:
- It democratises expertise. A young member in an emerging economy with access to AI can produce work that competes with well-resourced professionals. That is levelling up, and it is central to our mission.
- It accelerates learning. The feedback loop between trying, getting AI input, refining, and trying again is how AI-fluent professionals develop expertise faster than previous generations could.
- It frees cognitive energy for higher-order thinking. If AI can handle the first draft, the formatting, the fact-checking — you can spend your energy on the strategy, the creativity, the judgment. That is a better use of human intelligence.
- It is the reality of the professional world our members are entering. Teaching members to use AI transparently and effectively is teaching them to succeed in the world as it is, not as it was.
The only thing we do not tolerate is dishonesty about its use. Because that erodes trust — and trust is the foundation of everything we build.
When disclosure levels are disputed
If a reviewer, peer, or Ethics Circle member believes your disclosed level does not match the work, you will be notified and invited to provide additional context or evidence (such as a session log or AI involvement score).
The process:
- Notification — You receive a concern notice within five business days, stating what is disputed and why
- Response — You have 14 days to provide evidence supporting your disclosed level
- Review — The Ethics Circle reviews the evidence and makes a determination
- Outcome — One of four outcomes:
- Concern dismissed — Your disclosure was accurate; no action taken
- Clarification requested — You are asked to update your disclosure with more detail
- Level adjustment — Your disclosure is revised to reflect the actual level; no penalty if the error was inadvertent
- Trust violation — If the misrepresentation was deliberate, this is referred to the dispute resolution process
Important: An honest mistake is not a trust violation. If you genuinely believed your work was Level 3 and a reviewer determines it is Level 2 — updating the disclosure is sufficient. A trust violation occurs when the evidence shows you intentionally misrepresented the level to gain unfair advantage.
Reputation implications of each level
Your reputation at Future’s Edge is built through demonstrated capability. AI involvement affects how different types of work contribute to your reputation score — not as a penalty, but as an honest reflection of what the work demonstrates about your capability.
| Level | Reputation contribution | Why |
|---|---|---|
| Level 5 | Full contribution to skill-based reputation | Demonstrates your independent capability |
| Level 4 | Full contribution to skill-based reputation | Demonstrates your capability; AI verification does not diminish it |
| Level 3 | Full contribution to skill-based reputation | Demonstrates your capability; AI assistance enhanced it |
| Level 2 | Partial contribution to skill-based reputation; full contribution to collaboration reputation | Demonstrates your judgment, direction, and curation capability |
| Level 1 | Minimal contribution to skill-based reputation; contributes to project completion and output reputation | Demonstrates your briefing and evaluation capability |
Why Level 1 earns less individual skill reputation:
Because the cognitive work — the structuring, drafting, and reasoning — was primarily done by the AI. Your contribution was real and valuable (briefing and curation are skills), but it does not demonstrate the same depth of capability as Level 3, 4, or 5 work. This is honest, not punitive.
Why Level 2 still earns meaningful reputation:
Because directing, curating, and refining AI output to a high standard is a genuine, valuable, and increasingly important professional skill. The reputation you earn reflects that.
Teaching members to disclose well
Future’s Edge provides training on AI disclosure as part of the Foundation Program. Members learn:
- How to assess which level their work falls into
- How to write clear, honest disclosure descriptions
- How to export and publish session logs
- How to interpret AI involvement scores
- Why disclosure protects them and the community
The cultural norm we are building: Disclosing AI use transparently is a sign of professionalism and integrity — not a confession of inadequacy.
A note on this policy document
In the spirit of transparency: this AI Involvement Disclosure Policy was produced at Level 2 — AI-collaborated.
The human directed the structure, set the philosophy, determined the numbering reversal, challenged the AI’s outputs, and made all final decisions about what to include. The AI generated draft content under that direction. The session log exists and could be published alongside this policy.
This is what we mean by honest, transparent AI collaboration.
Version history
| Version | Date | Summary of changes |
|---|---|---|
| 1.0 | February 2026 | Initial ratification |
Next review: February 2027
Owned by: Future’s Edge community (DAO-governed)
Maintained by: Ethics Circle
This policy is published under Creative Commons Attribution 4.0 International (CC BY 4.0). You are free to adapt and redistribute it with attribution.