Community Impact Assessment Template
Purpose: To be completed before any new AI tool or practice is adopted at Future’s Edge. This is a working document — plain language is more important than polish.
Assessment date: _______________
Completed by: _______________
Tool/practice name: _______________
Status: ☐ Draft ☐ Under review ☐ Approved ☐ Rejected
Section 1: What is this AI system?
Plain-language description (explain what it does as if to someone who has never heard of it):
What problem does it solve?
Where will it be used? (which parts of the organisation, which processes, which member interactions)
Named tool owner (the human accountable for this tool’s outcomes):
Name: _______________
Role: _______________
Contact: _______________
Section 2: Who is affected?
Check all affected stakeholder groups:
- ☐ Youth members
- ☐ Under-served populations
- ☐ Emerging economy participants
- ☐ Small business owners or clients
- ☐ Partner organisations / NFPs
- ☐ Local communities
- ☐ Future’s Edge staff / contractors
- ☐ Other: _______________
Which group is most affected? (Who has the most at stake?)
Section 3: The five critical questions
Question 1: Who is the community here, and what do they need to trust?
For each affected group, what do they need to feel safe, heard, and in control?
Question 2: What could go wrong for the most vulnerable group?
Name the highest-consequence risk. Be specific. Who gets hurt, and how?
Question 3: Is this structurally trustworthy, or just compliant?
Test against each of the nine principles:
| Principle | Pass? | Notes |
|---|---|---|
| 1. Trust is structural | ☐ Yes ☐ No ☐ Unsure | |
| 2. Human agency | ☐ Yes ☐ No ☐ Unsure | |
| 3. Inclusion designed in | ☐ Yes ☐ No ☐ Unsure | |
| 4. Dignity and grace | ☐ Yes ☐ No ☐ Unsure | |
| 5. Economic fairness | ☐ Yes ☐ No ☐ Unsure | |
| 6. Privacy is respect | ☐ Yes ☐ No ☐ Unsure | |
| 7. Reversibility | ☐ Yes ☐ No ☐ Unsure | |
| 8. Open by default | ☐ Yes ☐ No ☐ Unsure | |
| 9. Proof of concept | ☐ Yes ☐ No ☐ Unsure | |
If any principle is marked “No” or “Unsure” — explain and propose mitigation:
Question 4: Who is accountable, and what is their oversight mechanism?
How will the named tool owner stay meaningfully in the loop?
- ☐ Manual review of every decision
- ☐ Sampling-based review (specify frequency: _______________)
- ☐ Automated alerts for threshold events (specify: _______________)
- ☐ Regular reporting and audit trail review
- ☐ Other: _______________
Question 5: Are we doing this for them, or for us?
What is the primary beneficiary of this AI system?
- ☐ Members / affected community (their capability, agency, or opportunity increases)
- ☐ Future’s Edge (operational efficiency, cost reduction, or commercial gain)
- ☐ Both — genuinely
If the answer is primarily “for us” — can you justify that honestly?
Section 4: Data and privacy
What personal or community data does this AI system use?
Is data collection minimised to only what is genuinely needed?
☐ Yes ☐ No ☐ Unsure
Have affected people consented in plain language?
☐ Yes ☐ No ☐ Not yet
Can people see what data is held about them?
☐ Yes ☐ No
Can people request deletion or correction?
☐ Yes ☐ No ☐ Partial (explain: _______________)
Section 5: Bias and inclusion
Has this tool been tested for bias?
☐ Yes ☐ No ☐ Not yet (planned date: _______________)
If tested — which dimensions of diversity were included in testing?
- ☐ Gender diversity
- ☐ Language diversity
- ☐ Geographic diversity (emerging economies represented)
- ☐ Cultural background diversity
- ☐ Socioeconomic diversity
Were any performance disparities found?
☐ No disparities detected
☐ Disparities detected (describe and propose remediation):
Have affected communities been involved in design or review?
☐ Yes — co-designed with community input
☐ Yes — reviewed by affected communities before deployment
☐ No — not yet (explain why and when this will happen):
Section 6: Explainability and challenge
Can affected people understand how this AI system makes decisions?
☐ Yes — plain-language explanation provided
☐ Partial — technical documentation exists but needs translation
☐ No — black box system
If “No” or “Partial” — how will you address this before deployment?
Can people challenge an AI-generated decision?
☐ Yes — clear pathway exists (describe: _______________)
☐ No — no challenge mechanism
If “No” — why not, and is this acceptable given the decision’s impact?
Section 7: Recommendation
Recommendation:
☐ Approve — ready for deployment
☐ Approve with conditions (specify: _______________)
☐ Defer — more work needed before approval
☐ Reject — violates principles and cannot be remediated
Signature (completing team): _______________
Date: _______________
Section 8: Ethics Circle review
Reviewed by: _______________
Review date: _______________
Ethics Circle decision:
☐ Approved
☐ Approved with conditions (specify: _______________)
☐ Returned for revision
☐ Rejected
Ethics Circle notes:
Signature (Ethics Circle representative): _______________
CIA Record ID: _______________ (to be issued by Ethics Circle upon approval)
Published to Use Case Register: ☐ Yes — Date: _______________