Programs

Evidence-Based Health AI Training

8 weeks

Best for: Technical and clinical professionals entering evidence-based health AI — software engineers, data scientists, health IT professionals, physicians, nurse practitioners, PAs, public health professionals, consultants, and product managers

  • Built on a curriculum developed through the Stanford Medicine Health Futurist program and taught at Morehouse College
  • Learn to evaluate health AI claims against source literature, design deployment thresholds, and build monitoring plans
  • Cohorts forming now

EvidenceCycle Sprint

Decide

4 weeks

Best for: Teams facing one urgent decision that demands evidence discipline, fast

Outcomes

  • A decision-ready recommendation grounded in traceable evidence
  • Stakeholders aligned on risks, unknowns, and what would change the decision
  • A reusable EvidenceCycle workspace your team can keep using

Deliverables

  • EvidenceVault (curated and traceable)
  • EvidenceCards (focused set covering decision-critical claims/risks; recommended)
  • EvidenceAtlas (final) with embedded EvidenceCycle Brief(s)
  • One-page executive summary with risk/unknowns register and 'what would change our mind?'

Requirements

  • Named decision owner + 2-6 stakeholders
  • Access to relevant context (current workflows, constraints, vendor documentation)
  • 60-90 minute weekly working session + async feedback cycles

EvidenceCycle Implementation

Deploy Safely

8-12 weeks

Best for: Organizations preparing to deploy or scale AI where success depends on evaluation, monitoring, governance, and workflow fit—not just a decision memo

Outcomes

  • Decision readiness and deployment readiness
  • A practical evaluation + monitoring plan aligned to real workflows
  • EvidenceCycle OS embedded into your team's way of working (repeatable cadence)

Deliverables

  • EvidenceAtlas (final) with embedded EvidenceCycle Brief(s)
  • EvidenceBench (final) with embedded EvidenceCycle Brief(s)
  • Pre-deployment tests, metrics, thresholds, governance/escalation protocols
  • EvidenceCycle OS workspace (templates, roles, weekly cadence for your team)
  • Handoff pack for leadership and implementers

Requirements

  • Decision owner + implementation lead + security/compliance/workflow stakeholders
  • Ability to run lightweight pilots/tests (or define how you would)
  • 60-90 minute weekly working session + periodic leadership check-ins

EvidenceCycle OS (Ongoing)

Stay Current

Monthly or Quarterly

Best for: Keeping decisions defensible as evidence, vendors, and regulations change

Entry point: After completing an EvidenceCycle Sprint or EvidenceCycle Implementation engagement

Radar Lite

Monthly or Quarterly
  • Ingest new evidence and product changes
  • 'What changed?' deltas against your current position
  • Refresh Vault/Cards/Briefs + update open questions
  • Quarterly decision check-in

Embedded Support

Fractional Team
  • Ongoing EvidenceCycle operator support inside your workflow
  • Continuous updates to Atlas/Bench artifacts
  • Training + governance cadence support

Deliverables

  • Updated Vault/Cards and short delta brief ('what changed + what it means')
  • Updated monitoring plan signals/thresholds if relevant
  • Decision-ready update memos for leadership

Request a scoping call for a proposal

EvidenceCycleAI (Pro)

Coming Soon

Best for: Teams with enough evidence volume that manual upkeep becomes a bottleneck

  • Faster evidence ingestion and deduplication
  • Change detection + alerts ('new evidence that impacts our decision')
  • Auto-drafted deltas/briefs for human review
  • Evaluation harness scaffolds to accelerate Bench work

EvidenceCycleAI accelerates the operating system; it does not replace human judgment.

Contact us to join the waitlist

Optional Add-Ons

  • Guest expert session (clinical operations, safety, policy, evaluation methods)
  • Vendor comparison pack (structured EvidenceCards + procurement appendix)
  • Private academic-industry roundtable tied to your focus area