From Training Metrics to Mission Readiness: A Lesson I Learned at DIA

In the mid-2000s, I served as the division chief of the Defense Intelligence Agency's Global Learning Solutions Group. For two years, my team was responsible for supporting the integration of learning capabilities across all combatant command J2s (intelligence directorates), aligning them with DIA systems to improve efficiency and effectiveness across the Defense Intelligence Enterprise.

At the same time, the Under Secretary of Defense for Intelligence launched an initiative to collect training data across the enterprise. On paper, this made perfect sense. Leadership wanted visibility. Accountability. Proof of value. So we did what the system allowed us to do. We collected the data the training systems could produce:

  • Course starts
  • Course completions
  • Time spent in training
  • Seat hours for instructor-led courses

I was part of the “machine” that gathered and reported that data. We briefed it. We defended it. We used it to justify budgets and programs.

And yet, even then, something felt fundamentally incomplete.

What we could measure—and what we couldn’t

Between 2006 and 2008, there was nothing else to measure. Nothing to quantify the efficacy of training efforts.

Beyond end-of-course Level 1 “smile sheet” reaction surveys, training measurement stopped at activity. We could show volume, but not value.

From that data alone, we could not inform leadership about:

  • The level of learner engagement
  • How much of the course content learners actually absorbed
  • Whether learners could apply that knowledge on the job
  • Whether training improved mission performance
  • Whether individuals, teams or organizations were truly mission ready

Most critically, we could not calculate a meaningful return on training investment.

We were measuring training inputs—not readiness outcomes.

And while most people intuitively understood that more training did not automatically equal better mission execution, we had no credible way to prove—or disprove—that assumption.

The readiness “blind spot”

What we lacked wasn’t effort, technology or intent.

We lacked a way to measure knowledge as an operational capability.

We couldn’t answer basic readiness questions like:

  • Do our analysts actually understand the mission context behind their tasks?
  • Is critical knowledge distributed or trapped in a few experts?
  • Are teams aligned in what they believe is true, current and actionable?
  • How fragile is our readiness when conditions change?

Training data alone could not answer those questions. Until now, nothing could.

Enter the Mission Readiness Knowledge Index

The FedLearn Mission Readiness Knowledge Index (MRKI) exists because of that exact gap.

MRKI shifts the focus from training consumption to knowledge readiness. It asks a different question:

Does an organization actually know what it needs to know to execute the mission—right now?

Instead of counting courses and hours, MRKI measures:

  • Coverage of mission-critical knowledge across roles and teams
  • Depth and accuracy of understanding, not just exposure
  • Distribution and resilience of knowledge—reducing single points of failure
  • Readiness under change when assumptions are tested and conditions shift

This is the measurement capability we didn’t have at DIA—but desperately needed.
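
To make those four dimensions concrete, here is a deliberately simplified Python sketch of how a composite knowledge-readiness score could roll them up into a single number. Everything in it (the data model, the weights, the scoring logic) is invented for illustration; it is a toy under stated assumptions, not the actual MRKI methodology.

    # Purely illustrative toy model of a composite knowledge-readiness score.
    # Field names, weights and scoring logic are hypothetical and do NOT
    # reflect FedLearn's actual MRKI methodology.
    from dataclasses import dataclass

    @dataclass
    class TeamKnowledgeSnapshot:
        coverage: float      # share of mission-critical topics assessed, 0..1
        depth: float         # mean assessed accuracy of understanding, 0..1
        distribution: float  # how evenly expertise is spread across the team, 0..1
        resilience: float    # performance when assumptions/conditions change, 0..1

    def readiness_index(s: TeamKnowledgeSnapshot,
                        weights=(0.3, 0.3, 0.2, 0.2)) -> float:
        """Weighted roll-up of the four dimensions into a 0-100 score."""
        dims = (s.coverage, s.depth, s.distribution, s.resilience)
        return 100 * sum(w * d for w, d in zip(weights, dims))

    # A team with broad exposure but concentrated, fragile expertise
    team = TeamKnowledgeSnapshot(coverage=0.9, depth=0.7,
                                 distribution=0.4, resilience=0.5)
    print(f"Readiness index: {readiness_index(team):.1f} / 100")  # 66.0 / 100

Even a toy model like this makes the contrast visible: the team above looks excellent by activity metrics (high coverage) yet scores poorly on distribution and resilience, exactly the kind of gap that course-completion counts can never surface.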

Why MRKI changes the conversation

With MRKI data, leaders can finally:

  • See readiness gaps before missions fail
  • Prioritize learning based on operational risk—not convenience
  • Move beyond compliance metrics to confidence metrics
  • Link learning investments directly to mission outcomes

For the first time, defense and intelligence organizations can talk credibly about:

  • Knowledge readiness at the individual, team and enterprise level
  • The quality of understanding—not just its presence
  • ROI that reflects mission effectiveness—not training throughput

From reporting activity to enabling readiness

Looking back, the training data we collected wasn’t wrong—it was just incomplete.

It told us what happened. MRKI tells us whether we’re ready.

That distinction matters more today than ever as missions become faster, more complex and less forgiving of knowledge gaps.

If I had had access to MRKI during my tenure at DIA, our conversations with leadership would have been fundamentally different—not about how much training we delivered, but about how prepared the force truly was.

Mission readiness has always been a knowledge problem.

Now we can finally measure it.

Dr. J. Keith Dunbar
Founder and Chief Executive Officer
FedLearn