Three Myths Senior Leaders Believe About Modeling & Simulation (and What They Cost You)
by Bryan Cannady, Founder, Unordinary Group
If you’ve worn a star, run a portfolio, or owned a hard problem in the Pentagon, you’ve probably had some version of this thought:
“We have plenty of modeling. Why are we still guessing?”
You’re not wrong to wonder. The Department spends a lot of time and money on models—campaign tools, mission simulations, engagement benches, digital twins, MBSE environments. On paper, it’s an impressive portfolio.
In practice, senior leaders still end up in too many design reviews and posture discussions where the “analysis” is a single slide, a single vignette, or a single answer that somehow never survives contact with reality.
From where I sit—straddling operators, analysts, and technologists—the problem isn’t that leaders don’t value modeling. It’s that they’ve been quietly sold three myths about what modeling & simulation can do for them.
Those myths show up in different words in different rooms, but the impact is the same: slow learning, brittle decisions, and a lot of false confidence.
Let’s name them.
Myth 1: “Somewhere, there’s a model that can answer everything.”
This is the most tempting myth, because it sounds efficient.
You see it in questions like:
“What’s the model we’re using for this?”
“Can we get everyone on one environment?”
“If we invest in this new engine, will it cover space and air and maritime and cyber and…?”
Behind those questions is a quiet hope: if we can just pick the right flagship model, it will somehow unify the problem. One tool to answer the campaign, the mission, the engagement, the force design. One tool that everyone can agree on. One tool to brief.
The reality is harsher:
Engagement tools are built to get physics and behavior right in ugly detail.
Mission tools are built to see whether missions “close” when timing, geometry, and threats are allowed to matter.
Theater and campaign tools are built to explore futures—hundreds of them—not just a handful of exquisite runs.
MBSE and digital engineering environments are built to manage structure, not to replay a war.
Each of those is a different kind of work. Trying to cram them into one environment doesn’t make the problem simpler. It just hides the seams.
What this myth costs you:
Time – every new requirement gets bolted onto the “one tool,” and pretty soon no one can move without breaking something.
Transparency – the more a single model tries to do, the less anyone can explain why it gave the answer it did.
Adaptability – when one codebase becomes the center of gravity, your ability to pivot with the threat or the technology stack drops to zero.
What works better:
Think of your modeling portfolio the way you think about your joint force: specialized units, clear roles, disciplined integration.
You don’t ask an F-16 to be a tanker, AWACS, and ISR platform on the same sortie.
You shouldn’t ask a mission tool to be a theater campaign engine, or vice versa.
The right question isn’t “What is the model?”
It’s “What combination of tools do we need for this question—and how do we move information cleanly between them?”
That’s an orchestration problem, not a shopping problem.
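To make that concrete, here is a minimal sketch, in Python, of what moving information cleanly between layers can look like. Every name in it is a hypothetical placeholder for whatever engagement, mission, and campaign tools already sit at each level; none of it refers to a real product or API.

```python
# Hypothetical sketch: each function stands in for a different class of tool.
# The point is the explicit, inspectable hand-off between layers.
from dataclasses import dataclass


@dataclass
class EngagementSummary:
    """The distilled result a higher-level tool actually needs."""
    weapon: str
    target_class: str
    p_kill: float  # averaged over many detailed engagement runs


def run_engagement_bench(weapon: str, target_class: str) -> EngagementSummary:
    # Placeholder for the detailed physics/behavior tool.
    return EngagementSummary(weapon, target_class, p_kill=0.62)


def run_mission_model(summary: EngagementSummary, sorties: int) -> float:
    # Placeholder mission layer: how likely is the mission to "close"
    # given the distilled engagement result and a sortie count?
    return 1.0 - (1.0 - summary.p_kill) ** sorties


def run_campaign_model(mission_closure: float, missions_needed: int) -> float:
    # Placeholder theater layer: consumes the mission-level number
    # rather than re-deriving the physics underneath it.
    return mission_closure ** missions_needed  # toy aggregation only


summary = run_engagement_bench("weapon_x", "target_class_y")
closure = run_mission_model(summary, sorties=4)
print(f"Toy campaign proxy: {run_campaign_model(closure, missions_needed=5):.2f}")
```

The logic inside each placeholder is deliberately trivial. What matters is that each layer hands the next a small, named artifact it can inspect and question, rather than a monolith it has to trust.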
Myth 2: “High-fidelity is always better.”
If you’ve ever sat through a briefing where the engagement modelers walk through radar waveforms, seeker logic, or 6-DOF missile flyouts, you’ve felt the pull of this myth.
At some point, someone says (or at least thinks):
“Why would we use any approximation when we have all of this?”
And at the engagement level, they’re right. If you’re certifying a weapon, vetting a threat model, or validating a piece of hardware, you should care deeply about the details. That work is non-negotiable.
The problem shows up when that instinct gets carried up a couple of levels.
At theater or campaign scale, your questions change:
“How does this concept behave across hundreds of possible fights?”
“Where is this design fragile to changes in tactics, posture, or attrition?”
“How does this architecture hold up when the adversary isn’t playing the scenario we wrote down?”
Those questions don’t need a million exquisitely detailed runs.
They need fast, transparent, “approximately right” runs that let you explore the space of futures, not just stare at one outcome.
When the campaign team says, “we’re comfortable with approximation,” what they’re really saying is:
“I don’t trust any single model enough to bet the force design on it. I want to see patterns that survive bad assumptions.”
From the outside, it can sound like they’re dismissing high-fidelity work. They aren’t. They’re trying to use it differently.
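As an illustration only, here is a minimal Python sketch of what “fast, approximately right, many futures” can look like, and of the kind of pattern the campaign team is hunting for. The scenario knobs and the toy outcome formula are assumptions invented for this sketch, not anyone’s campaign model.

```python
# Hypothetical sketch: sweep a toy campaign outcome over many futures,
# then ask how often the concept holds up and which assumption it hinges on.
import random
from statistics import mean


def toy_campaign_outcome(attrition_rate: float, warning_days: float) -> bool:
    # Stand-in for one fast, transparent campaign run: True means the
    # concept held up in this future. The formula is illustrative only.
    score = 1.0 - 4.0 * attrition_rate + 0.05 * warning_days
    return score > 0.7


rng = random.Random(0)
futures = [
    (rng.uniform(0.02, 0.15),   # daily attrition assumption
     rng.uniform(0.0, 14.0))    # days of unambiguous warning
    for _ in range(500)
]
results = [toy_campaign_outcome(a, w) for a, w in futures]
print(f"Concept holds in {mean(results):.0%} of 500 futures")

# Where is it fragile? Compare the assumptions behind failures and successes.
fail_attrition = mean(a for (a, _), ok in zip(futures, results) if not ok)
win_attrition = mean(a for (a, _), ok in zip(futures, results) if ok)
print(f"Mean attrition in failures {fail_attrition:.3f} vs. successes {win_attrition:.3f}")
```

None of the specific numbers matters. What matters is that five hundred futures cost seconds, and the assumption the answer hangs on sits in plain view instead of being buried inside a single exquisite run.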
What this myth costs you:
Under-used high-fidelity tools – they become expensive science projects that never push their insight into decisions.
Overcomplicated theater analysis – you either don’t explore enough futures (because the detailed runs are too slow), or you quietly drop fidelity anyway.
False reassurance – “but the detailed model says…” becomes a conversation stopper, when it should be the beginning of a deeper look.
What works better:
Let high-fidelity tools do what they’re good at: stress specific pieces of the system, under specific conditions, and expose where approximations break.
Let theater and campaign tools do what they’re good at: explore broad spaces of futures, quickly, with enough transparency that you can see when the answer hangs on a fragile assumption.
The leadership move is not to pick a winner between “detail” and “scale.”
It’s to insist on a visible, explicit relationship between them.
If your high-fidelity bench can’t give you anything that your theater engine can actually use, you don’t have a fidelity problem—you have a workflow problem.
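One concrete way to build that relationship is to distill what the high-fidelity bench already knows into something the theater engine can consume directly, for example a small lookup table. The sketch below is a hypothetical Python illustration; the sample points, their meaning, and the interpolation choice are assumptions, not a prescription.

```python
# Hypothetical sketch: turn a batch of high-fidelity engagement runs into a
# small lookup table that a faster theater tool can interpolate against.
from bisect import bisect_left

# Imagine these came from the engagement bench: (launch range in km, p_kill),
# each already averaged over many detailed runs at that range.
HIGH_FIDELITY_POINTS = [(20.0, 0.85), (40.0, 0.70), (60.0, 0.45), (80.0, 0.20)]


def p_kill_from_bench(range_km: float) -> float:
    """Linear interpolation over the distilled high-fidelity points."""
    ranges = [r for r, _ in HIGH_FIDELITY_POINTS]
    if range_km <= ranges[0]:
        return HIGH_FIDELITY_POINTS[0][1]
    if range_km >= ranges[-1]:
        return HIGH_FIDELITY_POINTS[-1][1]
    i = bisect_left(ranges, range_km)
    (r0, p0), (r1, p1) = HIGH_FIDELITY_POINTS[i - 1], HIGH_FIDELITY_POINTS[i]
    return p0 + (p1 - p0) * (range_km - r0) / (r1 - r0)


# The theater engine can now call this cheap function thousands of times.
print(f"p_kill at 50 km: {p_kill_from_bench(50.0):.2f}")
```

The design choice worth noticing: the theater engine never touches the detailed physics. It touches a small artifact whose provenance traces back to the bench, which is exactly the visible, explicit relationship described above.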
Myth 3: “If I buy an architecture, I’ll get answers.”
This one is newer, but it’s rising fast.
It sounds like:
“We’re investing in a digital engineering environment; that will finally knit this together.”
“Once everyone is using the same architecture tool, we’ll be able to run the war in the model.”
“Can we just make the SysML model the source of truth and plug simulations into that?”
Architecture tools and MBSE environments are important. They give you:
a controlled way to define systems,
a way to manage interfaces and variants, and
traceability when a requirement changes.
But they are not a substitute for operational behavior.
You can’t diagram your way to an answer about:
How fast a sensing architecture really breaks under attack.
What happens when comms and PNT degrade at the same time.
Which cross-domain kill chains still function when some of your “assured” links are gone.
You still have to watch those things move in a dynamic environment.
What this myth costs you:
Disappointed expectations – leaders sign up for a digital thread and quietly expect a digital rehearsal space; they get a better drawing tool.
Over-centralization – everything gets pushed toward one “architecture of record,” and local experimentation dies.
Blind spots – the more beautiful the diagrams, the easier it is to forget that none of them has actually been executed as a dynamic fight.
What works better:
Treat architecture as the spine, not the brain.
Use MBSE and digital tools to define what systems are and how they’re supposed to connect.
Use modeling & simulation to discover how they actually behave when you put them under stress.
And most importantly, make sure architecting and modeling are in constant conversation—so that when one changes, the other notices.
The leadership question isn’t “Do we have a digital engineering environment?”
It’s “Can my architecture and my modeling environments disagree in ways I can see, debug, and learn from?”
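What might that look like in practice? Here is a minimal sketch, assuming a hypothetical export of declared links from the architecture environment and of observed link availability from a dynamic simulation run; neither format corresponds to a real tool.

```python
# Hypothetical sketch: compare what the architecture says should connect
# with what the simulation actually observed under stress, and surface
# the disagreements instead of letting either side win silently.

# Links the architecture model declares as assured (hypothetical export).
declared_links = {("sensor_a", "c2_node"), ("c2_node", "shooter_b"),
                  ("sensor_a", "shooter_b")}

# Fraction of the simulated fight each link was actually available
# (hypothetical output of a dynamic run with comms and PNT degraded).
observed_availability = {("sensor_a", "c2_node"): 0.95,
                         ("c2_node", "shooter_b"): 0.40,
                         ("sensor_a", "shooter_b"): 0.00}

ASSURED_THRESHOLD = 0.90  # assumption: what "assured" should mean in practice

for link in sorted(declared_links):
    availability = observed_availability.get(link, 0.0)
    if availability < ASSURED_THRESHOLD:
        print(f"DISAGREEMENT: {link} declared assured, "
              f"available {availability:.0%} of the simulated fight")
```

The value is not the toy comparison itself. It is that the disagreement now exists as an artifact both communities can see and argue about, instead of living quietly in two separate tools.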
How Unordinary Group fits into this
Unordinary Group sits in the uncomfortable middle of all three myths.
We’ve been in the rooms where:
campaign tools are overextended,
mission tools are under-leveraged,
engagement benches are walled off, and
architecture lives in one tool while ops lives in another.
We don’t start by selling you a new model.
We start with three simpler questions:
What decision are you trying to make, and in what time frame?
(That tells us the scale and the kind of uncertainty you care about.)
What tools and data are already in your ecosystem?
(You almost always have more than you realize; they’re just not talking to each other.)
Where is the friction between the communities that own those tools?
(Because the friction points are exactly where insight is currently dying.)
From there, our work usually looks like:
Standing up lightweight theater workflows that can actually run the number of futures your decisions deserve.
Wiring those workflows to domain tools (space, sensing, comms, PNT) so we’re not guessing at the pieces that really matter.
Helping your teams export and reuse what high-fidelity work already knows, instead of re-deriving it in a coarser model or ignoring it altogether.
Making the seams visible—where we’re approximating, where we’re not, and what that means for your confidence level.
We’re not trying to be your “one big model.”
We’re trying to help you stop believing you need one.
If you’re a senior leader who’s been burned by modeling before—either because it slowed you down, over-promised, or quietly contradicted itself—there’s probably nothing wrong with your instinct.
The problem is the myths you were handed.
The good news is that once you stop chasing the wrong dream, the right work becomes much clearer:
multiple tools,
honest seams,
faster learning,
better questions.
That’s the space Unordinary Group was built to operate in.