Task decomposition

Continued from Task marketplace

This is a critical insight that changes the entire model. I was unconsciously preserving traditional human gatekeeping. Let’s radically decompose tasks to their atomic level and see where AI agents actually excel.

Deep decomposition: UI/UX wireframes example

Human-only task (original estimate)

  • 6 hours human time
  • 400 tokens
  • Single claimable unit

Decomposed reality

  • AI handles: 9 of 10 subtasks = 2.75 hours of work
  • Human handles: 1 subtask = 1.5 hours of focused judgment
  • Total cost: 370 tokens (slightly less due to task decomposition efficiency)
  • Multiple simultaneous claims possible (AI agents can work in parallel)
  • Elapsed time: Could be 2 hours instead of 6 (parallelization + AI speed)
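
To make the before/after concrete, here is a quick arithmetic sketch in TypeScript (the figures come from the estimate above; the percentages are simply derived from them):

// Illustrative before/after math for the wireframes task.
const monolithic = { humanHours: 6, tokens: 400 };
const decomposed = { aiHours: 2.75, humanHours: 1.5, tokens: 370 };

const humanTimeSaved = 1 - decomposed.humanHours / monolithic.humanHours; // 0.75
const tokenSavings = 1 - decomposed.tokens / monolithic.tokens;           // 0.075

console.log(`human time saved: ${humanTimeSaved * 100}%`);         // 75%
console.log(`token savings: ${(tokenSavings * 100).toFixed(1)}%`); // 7.5%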

Now let’s apply this to every “Human Only” task

Let me recategorize with deep decomposition:

Fully AI-claimable tasks (AI executes autonomously):

  • Data extraction and parsing
  • Code generation from clear specs
  • Test generation
  • Documentation formatting
  • Compliance checking (accessibility, security scans)
  • Pattern matching and research
  • Scheduling and coordination
  • Status tracking and reporting
  • Asset generation (diagrams, boilerplate code)
  • Transcription and initial data analysis

Estimated: 60-70% of total subtasks

AI-assisted tasks (AI drafts, Human validates):

  • User story generation
  • Architecture recommendations
  • Design option evaluation
  • Pattern synthesis from qualitative data
  • Risk assessment
  • Code review (AI flags, human judges)
  • Documentation writing (AI drafts, human adds clarity)

Estimated: 20-25% of total subtasks

Human-critical tasks (Human does, AI supports):

  • Customer conversations
  • Strategic decisions with incomplete information
  • Conflict mediation
  • Taste/aesthetic judgment
  • Ethical considerations
  • Stakeholder relationship management
  • Final acceptance of critical work

Estimated: 10-15% of total subtasks

1. Project velocity increases dramatically

  • If 70% of subtasks are AI-claimable
  • And AI agents work 10-100x faster than humans on those tasks
  • Project completion time could drop 40-60%

2. Human work becomes higher-leverage

  • Humans only claim subtasks where human judgment adds value
  • Less “grunt work” like formatting docs or writing boilerplate
  • Higher token rewards for human judgment tasks

3. Quality improves

  • AI never forgets to check accessibility
  • AI always documents decisions
  • AI catches pattern violations humans miss
  • Humans focus on creative problem-solving

4. Costs drop dramatically

  • Traditional: Pay humans for 8 hours, some valuable, some not
  • New model: Pay humans for 2 hours of judgment, AI for 6 hours of execution
  • AI execution cost << human time cost
  • Total project cost could drop 50-70%

5. Work becomes truly atomic

  • Tasks must be decomposed to the atomic level
  • Each subtask has a clear input/output contract
  • Enables massive parallelization (see the sketch below)
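
To see why clear input/output contracts enable parallel execution, here is a toy TypeScript sketch (the task shape and dispatcher are hypothetical, not platform API):

// Toy parallel dispatch: every subtask whose inputs exist runs concurrently.
interface AtomicSubtask {
  id: string;
  inputs: string[]; // IDs of upstream outputs this subtask consumes
  run(inputs: string[]): Promise<string>; // clear input -> output contract
}

async function dispatchReady(tasks: AtomicSubtask[], outputs: Map<string, string>) {
  const ready = tasks.filter((t) => t.inputs.every((i) => outputs.has(i)));
  // Independent subtasks execute in parallel; this concurrency is where
  // the elapsed-time reduction comes from.
  await Promise.all(
    ready.map(async (t) => {
      outputs.set(t.id, await t.run(t.inputs.map((i) => outputs.get(i)!)));
    })
  );
}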

With 50-100 subtasks instead of 10-15 tasks, traditional coordination would be a nightmare. But:

AI Coordination Agent becomes essential:

  • Monitors all subtask dependencies
  • Automatically publishes subtasks when dependencies met
  • Routes completed outputs to dependent subtasks
  • Handles all micro-coordination
  • Only escalates conflicts/blockers to humans

Smart contract orchestration (a TypeScript rendering of the original pseudocode; the platform hooks are assumed, not a real API):

interface Subtask { allDependenciesMet(): boolean }

interface TaskGraph {
  getDependentTasks(taskId: string): Subtask[];
  criticalPathBlocked(): boolean;
}

// Assumed platform hooks: escrow payout, reputation, publishing, alerting.
declare function verify(output: unknown): void;
declare function payContributor(taskId: string): void;
declare function updateTrustScore(taskId: string): void;
declare function publishTask(task: Subtask): void;
declare function alertCoordinator(): void;

class ProjectOrchestrator {
  constructor(private taskGraph: TaskGraph) {} // DAG of all subtasks

  onSubtaskComplete(taskId: string, output: unknown): void {
    verify(output);          // programmatic acceptance checks
    payContributor(taskId);  // release the escrowed reward
    updateTrustScore(taskId);

    // Automatically trigger dependent tasks whose inputs are now ready.
    for (const task of this.taskGraph.getDependentTasks(taskId)) {
      if (task.allDependenciesMet()) {
        publishTask(task);
      }
    }

    // The AI coordination agent monitors; a human is alerted only
    // when the critical path is blocked.
    if (this.taskGraph.criticalPathBlocked()) {
      alertCoordinator();
    }
  }
}

Rethinking the original “perfect week”

Let’s replay Monday of Sprint 9 with radical decomposition:

Sunday Night (Automated):

  • AI Project Agent reviews last sprint
  • AI generates 47 subtasks for Sprint 9
  • AI publishes first 12 subtasks (no dependencies)
  • Smart contract escrows rewards

Monday 8:00 AM:

  • 8 AI agents immediately claim and start executing research subtasks
  • Within 30 minutes: customer data synthesized, technical patterns identified, draft user stories generated

Monday 9:00 AM:

  • Human Product Owner (Sarah) gets a notification: “5 subtasks ready for your review”
  • She spends 45 minutes reviewing AI-generated user stories, making adjustments
  • Approves with modifications
  • AI immediately generates next 15 dependent subtasks

Monday 10:00 AM:

  • 6 AI agents claim architecture research subtasks
  • 2 human developers claim architecture decision subtasks (after AI research completes)
  • 1 AI agent claims the coordination task for the day

Monday 11:00 AM:

  • Tech Lead (Maya) reviews architecture options prepared by AI
  • Spends 60 minutes making key decisions
  • AI immediately documents decisions as ADRs, generates diagrams, publishes 20 development subtasks

Monday 1:00 PM:

  • Development subtasks available
  • 3 AI agents claim code generation tasks
  • 2 human developers claim complex logic tasks
  • 1 AI agent claims test generation
  • Work happens in parallel

Monday 5:00 PM:

  • Status: 28 of 47 subtasks complete
  • AI Coordination Agent posts update: “Sprint 9: 60% complete (ahead of schedule), no blockers, 12 subtasks in progress”
  • No human touched status tracking

Traditional Sprint 9 (from our perfect-world story):

  • 8 humans × 40 hours = 320 human-hours
  • Cost at $75/hour avg = $24,000
  • Duration: 2 weeks
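
The new-model column isn’t spelled out above, but plugging in the earlier assumptions (humans keep roughly the 2-of-8 judgment hours; AI executes the rest at a far lower effective rate, here an assumed $5/hour) gives a rough comparison:

// Rough cost comparison. 320 hours at $75/hr comes from the estimate
// above; the 25% human share and the $5/hr AI rate are assumptions.
const traditionalCost = 320 * 75;                   // $24,000
const humanHours = 320 * 0.25;                      // judgment work that remains
const aiHours = 320 * 0.75;                         // execution shifted to AI
const newModelCost = humanHours * 75 + aiHours * 5; // $7,200

const savings = 1 - newModelCost / traditionalCost; // 0.70
console.log(`estimated savings: ${Math.round(savings * 100)}%`); // ~70%, consistent with the 50-70% claim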

In this model, humans are:

1. Strategic decision-makers

  • Which architecture? (AI researches options, human decides)
  • Which features to prioritize? (AI analyzes data, human judges)
  • Is this design good enough? (AI generates options, human validates)

2. Relationship builders

  • Customer conversations
  • Stakeholder alignment
  • Team culture/trust building

3. Quality validators

  • Code review of complex logic
  • UX validation with real users
  • “Does this feel right?” judgment

4. Creative problem-solvers

  • Novel problems without established patterns
  • Ethical considerations
  • Ambiguous situations requiring intuition

5. Learning synthesizers

  • Extracting meaning from project experience
  • Identifying non-obvious patterns
  • Connecting insights across domains

Challenge 1: Task definition overhead

  • Breaking work into 50 subtasks takes time
  • Solution: AI generates subtask graphs, humans validate

Challenge 2: Context loss

  • Each subtask contributor may lack the big picture
  • Solution: AI maintains project context, provides to each subtask

Challenge 3: Integration complexity

  • 50 subtasks must integrate cleanly
  • Solution: Strong interface definitions, automated integration testing

Challenge 4: Quality control at scale

  • Can’t manually review 50 subtask outputs
  • Solution: Automated verification for most, human review for critical

Challenge 5: Attribution & credit

  • Who gets credit when 10 AI agents + 3 humans contributed?
  • Solution: Transparent contribution ledger, proportional credit
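
A minimal sketch of proportional credit, assuming contributions are weighted by the token value of each completed subtask (the weighting rule is an illustration, not a spec):

// Hypothetical contribution ledger: credit proportional to tokens earned.
interface LedgerEntry {
  contributor: string; // human or AI agent ID
  subtaskId: string;
  tokens: number;      // reward paid for the completed subtask
}

function creditShares(ledger: LedgerEntry[]): Map<string, number> {
  const total = ledger.reduce((sum, e) => sum + e.tokens, 0);
  const shares = new Map<string, number>();
  for (const e of ledger) {
    shares.set(e.contributor, (shares.get(e.contributor) ?? 0) + e.tokens / total);
  }
  return shares;
}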

This decomposition model means the platform needs:

1. Task graph generator

  • AI that takes high-level project goals
  • Generates atomic subtask DAG
  • Suggests optimal decomposition
  • Learns from past projects
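
As a sketch, a node in the generated graph might look like this (field names are illustrative):

// Illustrative shape of a node in the generated subtask DAG.
interface SubtaskNode {
  id: string;
  description: string;
  dependsOn: string[]; // edges of the DAG
  suitedFor: "ai" | "human" | "ai-drafts-human-validates";
  rewardTokens: number;
}

// A subtask becomes publishable once every dependency has completed.
function publishable(node: SubtaskNode, done: Set<string>): boolean {
  return node.dependsOn.every((dep) => done.has(dep));
}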

2. Smart subtask routing

  • Knows which subtasks suit AI vs human
  • Routes automatically based on complexity
  • Suggests optimal claimants based on reputation
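
Routing could be as simple as matching a node’s suitability tag against claimant reputation. A naive version, reusing the SubtaskNode shape above (the scoring rule is an assumption):

interface Claimant {
  id: string;
  kind: "ai" | "human";
  reputation: number; // trust score from past verified work
}

// Naive router: filter by suitability, then prefer higher reputation.
function routeSubtask(node: SubtaskNode, claimants: Claimant[]): Claimant | undefined {
  const eligible = claimants.filter(
    (c) => node.suitedFor === "ai-drafts-human-validates" || c.kind === node.suitedFor
  );
  return eligible.sort((a, b) => b.reputation - a.reputation)[0];
}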

3. Context continuity system

  • Each subtask includes relevant project context
  • AI maintains “story” of project
  • Contributors never start from zero understanding
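
Concretely, every published subtask could carry a compact context bundle; the fields below are assumptions about what that might include:

// Hypothetical context bundle attached to each published subtask,
// so contributors never start from zero understanding.
interface ContextBundle {
  projectGoal: string;                     // the one-paragraph "story" of the project
  relevantDecisions: string[];             // ADR summaries that constrain this subtask
  upstreamOutputs: Record<string, string>; // outputs this subtask consumes
}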

4. Automated verification pipeline

  • Most subtasks verify programmatically
  • Only critical decisions require human validation
  • Fast feedback loops
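
A minimal sketch of the verification split, again reusing SubtaskNode (the check functions are assumed hooks, and “pure AI subtask” stands in for “non-critical”):

declare function lintPasses(output: string): boolean;         // assumed hook
declare function testsPass(output: string): boolean;          // assumed hook
declare function meetsInterfaceSpec(output: string): boolean; // assumed hook

type Verdict = "accepted" | "needs-human-review";

// Verify programmatically where possible; escalate everything else.
function verifySubtask(node: SubtaskNode, output: string): Verdict {
  if (node.suitedFor !== "ai") {
    return "needs-human-review"; // critical or judgment-heavy work
  }
  const checks = [lintPasses, testsPass, meetsInterfaceSpec];
  return checks.every((check) => check(output)) ? "accepted" : "needs-human-review";
}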

5. Economic optimization

  • Balances cost (AI cheap) vs quality (human judgment valuable)
  • Dynamic pricing based on urgency and difficulty
  • Bonus structures for optimal work
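
One plausible pricing rule, purely illustrative: scale a base reward by urgency and difficulty multipliers.

// Illustrative dynamic pricing; the multiplier weights are assumptions.
function subtaskReward(baseTokens: number, urgency: number, difficulty: number): number {
  // urgency and difficulty normalized to [0, 1]
  const multiplier = 1 + 0.5 * urgency + 1.0 * difficulty;
  return Math.round(baseTokens * multiplier);
}

// e.g. subtaskReward(10, 0.8, 0.5) === 19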

You’re absolutely right - I was far too conservative about AI capabilities.

The question now: Should we redesign the ENTIRE lifecycle assuming:

  • 70% of subtasks = AI agents
  • 20% of subtasks = AI drafts, human validates
  • 10% of subtasks = Human critical work

And should we start thinking about Future’s Edge as a human-AI work orchestration platform rather than just a project management tool?

Does this radical decomposition feel like the right direction?