
Incentive design

Future’s Edge operates on a unique task-based system where members access work opportunities matched to their skills, experience, goals, interests, and trust scores. Every task is structured as a smart contract that clearly defines submission requirements, proof standards, and associated rewards.

This document outlines our approach to determining the value of tasks to the organization and designing incentive schemes that fairly reward members while maintaining economic sustainability.

Understanding this framework is essential for founding members who will help design, refine and govern these systems.

As a truly decentralized autonomous organization (DAO), Future’s Edge requires transparent, auditable systems for value creation and distribution. Smart contract-based task valuation ensures that:

  • Every member can see how tasks are valued and why
  • Incentive calculations follow consistent, publicly visible formulas
  • Governance decisions about formula adjustments are data-informed
  • The system remains fair as the organization scales globally

Rather than arbitrary pricing or purely market-based approaches, our framework considers the full spectrum of value that tasks create – financial, strategic, educational, social, and community-building. This ensures incentives align with our values of empowerment, learning, and impact while maintaining economic viability.

By establishing clear formulas before launch, we:

  • Minimize disputes about fair compensation
  • Reduce opportunities for system manipulation
  • Create predictable earning potential for members
  • Balance organizational sustainability with member rewards

The system allows different members to see different incentive offers based on their development goals, skill matches, and trust scores – creating pathways for both novices seeking learning and experienced members seeking income.

Before setting incentives for a task, we must first understand its value to the organization.

Task value represents the benefit Future’s Edge receives when the task is completed successfully – encompassing financial gains, strategic progress, risk mitigation, knowledge creation, and community strengthening.

The following factors contribute to determining a task’s organizational value. Not all factors apply to every task, and weights may vary based on organizational priorities established through governance.

Financial impact

| Factor | Description | Example application |
| --- | --- | --- |
| Direct revenue generation | Income earned from client projects or services | A development task for a paying client generates invoice value |
| Cost avoidance or reduction | Preventing future expenses or improving efficiency | Automating a manual process reduces ongoing operational costs |
| Budget compliance | Completing work within allocated resources | Finishing ahead of schedule preserves budget for other initiatives |

Strategic alignment

| Factor | Description | Example application |
| --- | --- | --- |
| Mission and values alignment | Advancing core purpose of youth empowerment and digital skills | A workshop teaching blockchain basics to underserved youth |
| Strategic pillar contribution | Supporting key organizational priorities | Building partnerships with universities advances education pillar |
| Innovation potential | Creating reusable methodologies or approaches | Developing a new DAO governance framework others can adopt |

Time factors

| Factor | Description | Example application |
| --- | --- | --- |
| Schedule criticality | Impact of delays on project or organizational timelines | Tasks on critical path toward client deliverable deadlines |
| Dependency multiplier | Number of other tasks or members blocked until completion | A design approval that unblocks five development tasks |
| Opportunity cost | Alternative value lost if task is delayed or incomplete | Missing a grant application deadline closes funding opportunity |

Quality and risk

| Factor | Description | Example application |
| --- | --- | --- |
| Quality standards and complexity | Required expertise and precision level | Client-facing deliverables requiring professional polish |
| Risk mitigation | Reducing potential threats to the organization | Implementing security audit recommendations |
| Failure impact | Consequences of poor execution | Low-quality work damages client relationships and reputation |

Knowledge and capability building

| Factor | Description | Example application |
| --- | --- | --- |
| Knowledge asset creation | Producing reusable intellectual property | Creating tutorial content for the KnowledgeBank |
| Organizational learning | Building collective capabilities | First implementation of new technology creates learning for all |
| Proof-of-concept value | Validating approaches for future application | Testing a new governance mechanism in low-stakes context |

Community and network effects

| Factor | Description | Example application |
| --- | --- | --- |
| Community growth | Attracting and retaining members | Creating compelling onboarding experiences |
| Network effects | Strengthening connections and collaboration | Organizing cross-field office collaboration sessions |
| Trust infrastructure | Enhancing governance and reputation systems | Improving the transparency of smart contract operations |

Impact and reputation

| Factor | Description | Example application |
| --- | --- | --- |
| Social impact value | Positive change created for target communities | Digital skills training for elders in underserved areas |
| Reputation enhancement | Elevating Future’s Edge visibility and credibility | Speaking at major conference or publishing research |
| Advocacy and awareness | Advancing causes aligned with organizational values | Creating content that promotes youth empowerment |

Governance and operations

| Factor | Description | Example application |
| --- | --- | --- |
| Governance strengthening | Improving decision-making processes | Designing better proposal templates for member voting |
| Operational efficiency | Streamlining workflows and reducing friction | Building automation for routine administrative tasks |
| Scalability contribution | Creating templates and repeatable processes | Developing mission creation templates for field offices |

While the final formula will be developed through founding member research and experimentation, a general approach includes:

  1. Identify applicable factors: Determine which value factors the task addresses
  2. Assign factor scores: Rate each applicable factor (e.g., 1-10 scale)
  3. Apply weights: Multiply factor scores by organizational priority weights
  4. Calculate base value: Sum weighted factor scores to produce task value score
  5. Validate intuitively: Check whether the value score feels right compared to similar tasks

Example simplified formula:

Task Value = (Financial Impact × 0.30) + (Strategic Alignment × 0.20) + (Knowledge Creation × 0.15) + (Community Building × 0.15) + (Time Criticality × 0.10) + (Risk Mitigation × 0.10)
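As an illustration only, the example formula above can be expressed directly in code. This is a minimal sketch assuming factor scores on a 1-10 scale and the illustrative weights shown; the actual factors, scales, and weights will be settled through founding-member research and governance.

```python
# Illustrative task-value calculation using the example weights above.
# Factor scores are assumed to be on a 1-10 scale; weights must sum to 1.0.

EXAMPLE_WEIGHTS = {
    "financial_impact": 0.30,
    "strategic_alignment": 0.20,
    "knowledge_creation": 0.15,
    "community_building": 0.15,
    "time_criticality": 0.10,
    "risk_mitigation": 0.10,
}

def task_value(factor_scores: dict[str, float], weights: dict[str, float] = EXAMPLE_WEIGHTS) -> float:
    """Sum the weighted factor scores; factors not scored are treated as 0."""
    return sum(weights[factor] * factor_scores.get(factor, 0.0) for factor in weights)

# Example: a client deliverable that is financially important and time-critical.
scores = {"financial_impact": 8, "strategic_alignment": 5, "time_criticality": 9, "risk_mitigation": 4}
print(task_value(scores))  # (8*0.30) + (5*0.20) + (9*0.10) + (4*0.10) = 4.7 on a 0-10 scale
```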

Different task categories emphasize different value factors:

  • Client project work: Higher weight on financial impact, schedule criticality, quality standards
  • Governance activities: Higher weight on trust infrastructure, governance strengthening, community participation
  • Self-development: Higher weight on knowledge creation, organizational learning, member portfolio building
  • Impact missions: Higher weight on social value, advocacy, alignment with values, community benefit
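To make this category emphasis concrete, one option is to keep a weight profile per task category on top of a default profile, as in the sketch below. The numbers are hypothetical placeholders, not agreed weights; only the idea of category-specific profiles comes from the list above.

```python
# Hypothetical per-category weight profiles (illustrative numbers only).
# Each profile must sum to 1.0 so value scores stay comparable across categories.

DEFAULT_WEIGHTS = {
    "financial_impact": 0.30, "strategic_alignment": 0.20, "knowledge_creation": 0.15,
    "community_building": 0.15, "time_criticality": 0.10, "risk_mitigation": 0.10,
}

CATEGORY_WEIGHTS = {
    # Client project work: heavier weight on financial impact and schedule criticality.
    "client_project": {"financial_impact": 0.40, "strategic_alignment": 0.15, "knowledge_creation": 0.10,
                       "community_building": 0.05, "time_criticality": 0.20, "risk_mitigation": 0.10},
    # Self-development: heavier weight on knowledge creation and learning.
    "self_development": {"financial_impact": 0.05, "strategic_alignment": 0.15, "knowledge_creation": 0.40,
                         "community_building": 0.20, "time_criticality": 0.05, "risk_mitigation": 0.15},
}

def weights_for(category: str) -> dict[str, float]:
    """Return the weight profile for a task category, falling back to the default."""
    profile = CATEGORY_WEIGHTS.get(category, DEFAULT_WEIGHTS)
    assert abs(sum(profile.values()) - 1.0) < 1e-9, "weights must sum to 1.0"
    return profile
```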

Once task value is established, we determine appropriate incentives to motivate completion while maintaining organizational sustainability.

Incentives should attract qualified members, encourage quality work, and reward contributions fairly across the organization’s multiple reward currencies.

Supply and demand dynamics

| Factor | Description | Impact on incentives |
| --- | --- | --- |
| Task availability vs. member capacity | Ratio of open tasks to available qualified members | When many tasks compete for few members, incentives increase |
| Scarcity of required skills | Rarity of expertise within the organization | Specialized skills command premium incentives |
| Competition for attention | Other attractive opportunities available to members | Peak activity periods may require higher incentives |
| Unpopular but necessary work | Tasks members consistently avoid | Escalating incentives until claimed |

Temporal urgency and deadlines

| Factor | Description | Impact on incentives |
| --- | --- | --- |
| Deadline proximity | Time remaining until due date | Surge pricing as deadlines approach |
| Time decay on value | Value reduction over time | Front-loaded incentives for time-sensitive tasks |
| Optimal timing windows | Ideal completion periods | Peak incentives during optimal windows |
| Opportunity cost of delay | Cascading impact on dependent tasks | Higher incentives to prevent bottlenecks |

Member-specific personalization

| Factor | Description | Impact on incentives |
| --- | --- | --- |
| Skill match and development goals | Alignment with member’s learning pathway | Lower cash, higher skill badges for developmental tasks |
| Trust score and rank thresholds | Member’s standing in the organization | Different reward currencies at different levels |
| Individual performance patterns | When members are most productive | AI-driven timing optimization for task offers |
| Historical preferences | Patterns in past task selections | Adjusted incentives to encourage portfolio diversity |

Reward structure timing

| Factor | Description | Impact on incentives |
| --- | --- | --- |
| Instant feedback mechanisms | Real-time micro-rewards for milestones | Stronger behavioral reinforcement |
| Variable ratio reinforcement | Unpredictable bonus rewards | Higher sustained engagement |
| Progress visibility | Real-time accumulation displays | Psychological momentum through goal gradient effect |
| Vesting schedules | Partial immediate, remainder upon quality review | Balancing motivation with quality assurance |

Collaborative vs. solo task considerations

| Factor | Description | Impact on incentives |
| --- | --- | --- |
| Team formation dynamics | Collaboration requirements | Bonuses for assembling high-performing diverse teams |
| Peer evaluation components | Team-distributed incentive pools | Encouraging genuine collaboration over free-riding |
| Network effects multipliers | Collective benefits from collaboration | Shared bonuses for strengthening field office cohesion |
| Mentorship incentives | Pairing experienced and novice members | Rewards for both learning outcomes and knowledge transfer |
Risk and complexity adjustments

| Factor | Description | Impact on incentives |
| --- | --- | --- |
| Experimentation premium | Uncertain outcomes for R&D tasks | Higher incentives compensate for failure risk |
| Complexity multipliers | Integration of multiple skills or technologies | Compounding incentives for sophisticated work |
| Reversibility costs | Expense of correcting errors | Lower upfront, significant quality bonuses |
| Learning curve compensation | First-time task types | Learning subsidies that decrease with organizational experience |

Strategic organizational priorities

| Factor | Description | Impact on incentives |
| --- | --- | --- |
| Strategic alignment bonuses | Advancing current strategic pillars | Premium incentives adjusted quarterly |
| Capacity building investments | Creating reusable assets | Long-tail incentives as others use the work |
| Underserved division support | Addressing resource imbalances | Temporary boosts to priority areas |
| Governance participation floors | Ensuring continuous organizational functioning | Minimum incentives for essential governance tasks |

Multiple reward currencies

| Factor | Description | Impact on incentives |
| --- | --- | --- |
| Trust score vs. tokens vs. badges | Different reward currencies | Governance tasks earn trust, creative tasks earn impact NFTs |
| Reputation decay counters | Maintaining scores during inactivity | Strategic value beyond immediate rewards |
| Access and privilege unlocks | Exclusive opportunities | Non-fungible rewards like advanced missions or mentorship |
| External credential value | Portable blockchain-verified credentials | Lower internal incentives acceptable for career-valuable work |

Unlike static pricing, many task incentives will adjust automatically based on real-world conditions:

Incentives increase as deadlines approach or tasks remain unclaimed:

Current Incentive = Base Incentive × (1 + Urgency Factor)

Where Urgency Factor = (Time Elapsed / Total Time Available) × Escalation Rate
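A minimal sketch of this escalation, assuming an illustrative escalation rate of 0.5 (in practice a governance-set parameter):

```python
# Urgency-based escalation of a base incentive, following the formula above.
# The escalation rate is an illustrative assumption, not a final parameter.

def urgency_factor(time_elapsed: float, total_time: float, escalation_rate: float = 0.5) -> float:
    """Fraction of the available window already used, scaled by the escalation rate."""
    return (time_elapsed / total_time) * escalation_rate

def current_incentive(base_incentive: float, time_elapsed: float, total_time: float) -> float:
    return base_incentive * (1 + urgency_factor(time_elapsed, total_time))

# A task worth 100 units, 80% of the way to its deadline and still unclaimed:
print(current_incentive(100, time_elapsed=8, total_time=10))  # 100 * (1 + 0.8 * 0.5) = 140.0
```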

Similar to surge pricing in gig economy platforms, incentives respond to supply-demand imbalances:

Current Incentive = Base Incentive × (Open Tasks / Available Qualified Members)
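A sketch of this surge multiplier, with an assumed cap added so that extreme scarcity cannot escalate incentives without limit:

```python
# Supply-demand surge multiplier, following the formula above. The cap is an
# added assumption (a simple circuit breaker), not part of the stated formula.

def surge_multiplier(open_tasks: int, qualified_members: int, cap: float = 3.0) -> float:
    """Ratio of open tasks to available qualified members, capped to avoid runaway pricing."""
    if qualified_members == 0:
        return cap  # nobody qualified is available: pay the maximum and flag for review
    return min(open_tasks / qualified_members, cap)

# 12 open design tasks and 4 available designers triples the base incentive of 100:
print(100 * surge_multiplier(open_tasks=12, qualified_members=4))  # 300.0
```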

Different members see customized incentives based on their profiles:

Member Incentive = Base Incentive × Skill Match Factor × Development Goal Alignment × Trust Tier Multiplier
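A minimal sketch of this personalization step, assuming each multiplier falls roughly in a 0.5-1.5 range; the actual ranges and the data feeding them are still to be designed:

```python
# Member-specific incentive, following the formula above. Multiplier ranges are
# illustrative assumptions; real values would come from profile and trust data.

def member_incentive(base: float, skill_match: float, goal_alignment: float, trust_tier: float) -> float:
    """Scale the base incentive by skill match, development-goal alignment, and trust tier."""
    return base * skill_match * goal_alignment * trust_tier

# A well-matched, high-trust member sees a higher offer than the base of 100:
print(member_incentive(100, skill_match=1.2, goal_alignment=1.1, trust_tier=1.25))  # 165.0
```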

A general framework for determining task incentives includes:

  1. Start with task value: Use the organizational value score as baseline
  2. Apply sustainability factor: Multiply by affordability ratio (available treasury / projected needs)
  3. Add dynamic adjustments: Incorporate temporal urgency, supply-demand, personalization factors
  4. Select reward currencies: Determine mix of cash, trust score, badges, reputation based on task type
  5. Set circuit breakers: Establish maximum limits to prevent unsustainable escalation
  6. Preview and validate: Check whether incentive package feels motivating and fair

Example simplified formula:

Base Incentive = Task Value × Treasury Sustainability Factor

Dynamic Incentive = Base Incentive × (1 + Urgency Factor) × Supply-Demand Factor × Member Personalization

Final Incentive Package = { Cash: Dynamic Incentive × 0.60, Trust Score: Dynamic Incentive × 0.25, Skill Badges: Dynamic Incentive × 0.15 }
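Putting the six steps together, a minimal end-to-end sketch might look like the following. The sustainability ratio, circuit-breaker cap, and 60/25/15 currency split reuse the illustrative numbers above and are placeholders, not final parameters.

```python
# End-to-end illustrative incentive calculation following the six steps above.
# All constants are placeholders for values that governance would set and adjust.

CIRCUIT_BREAKER_CAP = 2.5   # assumed maximum multiple of the base incentive
CURRENCY_SPLIT = {"cash": 0.60, "trust_score": 0.25, "skill_badges": 0.15}

def incentive_package(task_value: float,
                      available_treasury: float, projected_needs: float,
                      urgency_factor: float, supply_demand_factor: float,
                      member_personalization: float) -> dict[str, float]:
    # Steps 1-2: base incentive scaled by what the treasury can sustainably afford.
    sustainability = min(available_treasury / projected_needs, 1.0)
    base = task_value * sustainability

    # Step 3: dynamic adjustments for urgency, scarcity, and the individual member.
    dynamic = base * (1 + urgency_factor) * supply_demand_factor * member_personalization

    # Step 5: circuit breaker prevents unsustainable escalation.
    dynamic = min(dynamic, base * CIRCUIT_BREAKER_CAP)

    # Step 4: split the total across reward currencies.
    return {currency: round(dynamic * share, 2) for currency, share in CURRENCY_SPLIT.items()}

# A task valued at 100, healthy treasury, moderately urgent, scarce skills, well-matched member:
print(incentive_package(100, available_treasury=50_000, projected_needs=40_000,
                        urgency_factor=0.4, supply_demand_factor=1.5, member_personalization=1.1))
# -> {'cash': 138.6, 'trust_score': 57.75, 'skill_badges': 34.65}
```

In practice these parameters would live in governance-controlled configuration rather than code, so they can be tuned without redeploying contracts.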

Drawing on research about human motivation and decision-making:

  • Loss aversion framing: Some incentives structured as potential losses (deadline penalties) rather than pure gains
  • Social comparison effects: Anonymous leaderboards leverage competitive motivation without toxicity
  • Endowment effects: Allowing members to “claim” tasks with provisional rewards creates psychological ownership
  • Milestone chunking: Breaking large tasks into smaller milestones with intermediate rewards maintains motivation

Understanding potential problems helps us design more resilient systems and respond effectively when issues emerge.

Incentive inflation spirals: Dynamic pricing that escalates too aggressively could drain organizational treasury faster than revenue generation.

Mitigation: Circuit breaker maximum limits, treasury monitoring dashboards, governance alerts when escalation exceeds thresholds.

Revenue-cost mismatch: Client projects might not generate sufficient surplus to fund community development and governance tasks at sustainable levels.

Mitigation: Cross-subsidization models where higher-margin work funds lower-margin but strategically important work, transparent treasury reporting.

Market volatility: Cryptocurrency-based rewards could create wildly inconsistent real-world value for members.

Mitigation: Hybrid reward structures with stable fiat options, geographic purchasing power adjustments, member choice in reward currencies.

Geographic inequality: Members in lower-cost-of-living regions might systematically undercut others, or purchasing power adjustments might feel unfair.

Mitigation: Transparent methodology for geographic adjustments, community governance of adjustment factors, baseline quality standards.

Skill-based stratification: Advanced members might monopolize high-value tasks while novices struggle to access opportunities.

Mitigation: Reserved task pools for lower trust scores, mentorship requirements for advanced members, progressive skill-building pathways.

Time zone disadvantages: Members in certain time zones might face systematic disadvantages if urgent tasks emerge during their sleep hours.

Mitigation: Time zone diversity requirements for critical tasks, asynchronous work options, follow-the-sun task distribution.

Accessibility gaps: Members with disabilities, intermittent internet, or competing responsibilities might be structurally disadvantaged.

Mitigation: Alternative task formats, extended timeframes without penalty, accessibility accommodations, diverse task types.

Intrinsic motivation crowding: Over-reliance on extrinsic rewards might undermine members’ genuine passion for learning and impact.

Mitigation: Balance of reward types, celebration of intrinsic value, non-monetary recognition, purpose-driven framing.

Status anxiety and comparison: Visible leaderboards and incentive rankings could foster toxic competitiveness or imposter syndrome.

Mitigation: Anonymous aggregate displays, emphasis on personal growth trajectories, collaborative achievements celebrated equally.

Addiction dynamics: Variable ratio reward schedules that maximize engagement could create unhealthy patterns, especially for vulnerable youth.

Mitigation: Time investment limits, well-being check-ins, emphasis on sustainable participation, mental health resources.

Fairness perceptions: Even economically rational incentive adjustments might feel unfair to members, eroding system trust.

Mitigation: Transparent formula communication, opportunity for appeals, regular community feedback, governance involvement.

Smart contract vulnerabilities: Bugs in incentive calculation logic could be exploited for financial gain or cause unintended distributions.

Mitigation: Rigorous security audits, bug bounty programs, phased rollouts, emergency pause mechanisms, insurance reserves.

Formula complexity: Multi-factor dynamic pricing algorithms might produce unexpected edge-case behaviors or become computationally expensive.

Mitigation: Extensive simulation testing, simple formulas first with gradual sophistication, monitoring for anomalies.

Upgrade difficulties: Modifying smart contract logic after deployment requires complete system migrations.

Mitigation: Modular contract architecture, proxy patterns for upgradeability, governance-controlled parameters.

Trust erosion: Poorly functioning incentive systems could undermine belief in Future’s Edge’s mission, causing high-value members to leave.

Mitigation: Rapid response to problems, transparent communication about challenges, member involvement in solutions.

Communication barriers: Explaining complex incentive adjustments across languages and cultures could lead to misunderstandings.

Mitigation: Plain language documentation, visual explanations, multilingual support, cultural context consideration.

Resistance to change: Members who benefited from initial structures might oppose governance proposals to rebalance the system.

Mitigation: Clear rationale for changes, gradual transitions, grandfathering provisions, democratic decision-making.

Because of how Future’s Edge is structured, several commonly cited DAO challenges are already addressed:

  • No whale dominance: Each member has an equal vote regardless of trust score or token holdings
  • Sybil attack protection: Trust score threshold requirements for governance participation make operating multiple accounts prohibitively expensive in time and effort
  • Payment after proof: Members only receive rewards after verified completion and acceptance through appropriate review mechanisms

Methods to approach the design of these schemes

The founding team should employ a human-centered, iterative approach to developing these critical systems.

  • Member journey mapping: Interview potential members about what motivates them, what they find valuable, and what feels fair in reward systems
  • DAO case studies: Analyze existing platforms (Gitcoin, Coordinape, Dework) to understand what works and fails in practice
  • Empathy mapping: Create empathy maps for different member personas – novices seeking learning, experienced members seeking income, governance enthusiasts, impact-focused contributors
  • Behavioral economics research: Study literature on intrinsic vs. extrinsic motivation, fairness perception, and incentive design
  • Run small safe-to-fail experiments: Create 3-5 different valuation approaches and test them with small pilot groups
  • A/B testing framework: Design controlled experiments where different member cohorts experience different incentive structures, then measure engagement, quality, and satisfaction
  • Simulation modeling: Use agent-based modeling to simulate how different formulas might create emergent behaviors before deploying live (a toy example appears after this list)
  • Design thinking sprints: Run structured workshops using the double diamond approach – diverge to generate many possible valuation factors, then converge on the most critical ones
  • Strength-based team formation: Leverage each founding member’s unique strengths to assign roles in formula development
  • Values alignment sessions: Regularly check that proposed formulas align with Future’s Edge core values – ensuring formulas don’t accidentally incentivize behavior contrary to trust, openness, or collaboration
  • Cross-disciplinary collaboration: Intentionally bring together founding members from different backgrounds – economics, education, technology, social work – to spot blind spots
  • Youth-led validation: Present draft formulas to youth advisors (under 25) to get feedback on whether the system feels fair and motivating
  • Veil of ignorance approach: Ask founding members to design formulas as if they don’t know what role they’ll play – novice or expert, client-facing or governance-focused
  • Living documentation: Create formula documentation in the public KnowledgeBank from day one, allowing community feedback during development
  • Decision journal: Document not just what formulas you choose, but why – the assumptions, trade-offs, and concerns
  • Version control: Treat formulas like open-source code – track changes, explain updates, allow community to see evolution
  • Spreadsheet modeling first: Build Excel/Google Sheets models that can be easily tweaked based on feedback before coding smart contracts
  • Smart contract sandbox: Deploy test versions on blockchain testnets where founding members can interact without real financial stakes
  • Quality metrics dashboard: Create real-time dashboards showing how formulas perform against key success criteria
  • Founding cohort as guinea pigs: Have the founding team use the system first, experiencing it as both task creators and completers to identify pain points
  • Small-scale launch: Roll out to 30-50 early adopters before full launch, with explicit framing that this is experimental and will evolve
  • Structured feedback loops: Schedule weekly retrospectives during pilots where members share what felt fair, unfair, motivating, or frustrating
  • Two-week sprints: Adopt agile methodology where formulas can be adjusted every two weeks based on data and feedback
  • Metrics-driven adjustment: Define clear success metrics and adjust formulas when metrics indicate problems
  • Community governance from start: Even in early stages, allow members to vote on proposed formula adjustments
  • Post-task surveys: Brief 2-3 question surveys after every completion asking about fairness and suggested improvements
  • Monthly town halls: Regular community gatherings where members can raise concerns and propose improvements
  • Anomaly detection: Monitor for statistical outliers – tasks that surge abnormally, members gaming the system, formulas producing unexpected distributions
  • Emergency override process: Define who can make urgent formula adjustments if serious problems emerge, with transparency requirements
  • Constitutional amendment process: Establish how major formula changes get proposed, debated, and voted on by the full DAO
  • Academic partnerships: Consider collaborating with university researchers to study the incentive system for external validation and insights
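For the simulation-modeling method mentioned above, a toy agent-based harness can surface emergent behaviors, such as how quickly urgency escalation drains a treasury, before anything touches a live contract. Everything below is hypothetical: random reservation values stand in for member motivation, and all parameters are arbitrary.

```python
import random

# Toy agent-based simulation (all numbers hypothetical): members claim a task when
# the urgency-escalated offer exceeds their personal reservation value. Rerunning it
# with different escalation rates shows how fast the treasury is spent.

random.seed(42)  # reproducible toy run

ESCALATION_RATE = 0.5          # assumed escalation parameter
STARTING_TREASURY = 10_000.0   # assumed treasury in reward units
DAYS = 30

members = [{"reservation": random.uniform(40, 120)} for _ in range(20)]
tasks = [
    {"base": random.uniform(50, 150), "deadline": random.randint(3, DAYS), "claimed": False}
    for _ in range(60)
]

treasury = STARTING_TREASURY
for day in range(1, DAYS + 1):
    for task in tasks:
        if task["claimed"] or day > task["deadline"]:
            continue
        # Offer for this task today, escalated as its deadline approaches.
        offer = task["base"] * (1 + (day / task["deadline"]) * ESCALATION_RATE)
        willing = [m for m in members if m["reservation"] <= offer]
        if willing and treasury >= offer:
            task["claimed"] = True
            treasury -= offer

completed = sum(t["claimed"] for t in tasks)
print(f"completed {completed}/{len(tasks)} tasks; treasury remaining: {treasury:.0f}/{STARTING_TREASURY:.0f}")
```

Swapping in alternative formulas and comparing completion rates and treasury burn gives a cheap way to stress-test candidate designs before any smart contract work begins.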
Throughout this work, several guiding principles should anchor the design:

  1. Value before incentives: Always determine organizational value first, then design appropriate incentives – not the reverse
  2. Transparency builds trust: All formulas, weights, and adjustment mechanisms should be publicly visible and auditable
  3. Human-centered design: Systems should feel intuitive and fair to members, not just be mathematically optimal
  4. Iterate, don’t perfect: Launch with good formulas and improve continuously rather than delay seeking perfection
  5. Distributed sense-making: Empower members to interpret performance data themselves, not rely solely on founding team analysis

Task value determination combines:

  • Financial impact (revenue, costs, budget)
  • Strategic alignment (mission, innovation, pillars)
  • Time factors (urgency, dependencies, opportunity cost)
  • Quality and risk considerations
  • Knowledge and capability building
  • Community and network effects
  • Impact and reputation
  • Governance and operations

Incentive scheme design incorporates:

  • Supply and demand dynamics
  • Temporal urgency and deadlines
  • Member-specific personalization
  • Reward structure timing
  • Collaboration vs. solo considerations
  • Risk and complexity adjustments
  • Strategic organizational priorities
  • Multiple reward currencies

For founding members:

  • Commit to human-centered research and member involvement
  • Embrace experimentation and learning over perfection
  • Stay grounded in organizational values during technical design
  • Maintain transparency even when formulas are imperfect
  • Listen deeply to member experiences of fairness and motivation

For the system:

  • Balance economic sustainability with fair member compensation
  • Prevent gaming and exploitation through thoughtful design
  • Address equity concerns proactively (geographic, skill-based, accessibility)
  • Monitor behavioral and psychological impacts
  • Build in mechanisms for continuous improvement

For the organization:

  • Use formula development as community-building opportunity
  • Create trust through transparent, consistent processes
  • Align incentives with mission rather than just market rates
  • Celebrate both extrinsic and intrinsic motivation
  • Evolve systems through democratic governance

Task valuation and incentive design will evolve continuously as Future’s Edge grows. What remains constant is our commitment to:

  • Fairness: Every member deserves transparent, equitable treatment
  • Sustainability: Systems must support long-term organizational health
  • Empowerment: Incentives should enable member growth and impact
  • Learning: We approach this as an ongoing experiment, not a finished solution
  • Community: Members collectively govern and refine these systems

By understanding these frameworks, founding members can contribute meaningfully to designing systems that honor our values while enabling Future’s Edge to thrive as a pioneering DAO that genuinely empowers youth to shape their futures.