Validation Challenges in AI-Based Decision Support

April 18, 2024, 11:30 AM – 12:00 PM

Speaker: Doug Samuelson (The Dupuy Institute)


Artificial Intelligence (AI) offers promising uses in decision-making, as support both for suggesting courses of action and for evaluating likely outcomes. Especially with the recent increase in interest in conflicts that do not focus exclusively on direct military confrontation, computer-supported assistance in assessing “what might happen if” is becoming more and more valuable, if not essential. However, the use of such support involves some hazards well worth noting. Among these are: the AI’s extensive data requirements; the difficulty of assessing the credibility of the AI’s outputs; the difficulty, in some systems, of making minor adjustments and re-running the analysis (the very capability one would hope these systems would enhance); all too often, the opacity of the reasoning the AI employed; and some particular legal and operational difficulties of dealing with software providers. Most important, there is an intractable limitation: AI can neither infer context nor draw inferences about matters entirely outside the data it has had the opportunity to ingest. Moreover, AI cannot be validated without an observation-based data set, more reliable than the AI itself, against which to compare the AI’s results; yet even observation-based data sets entail inherent uncertainty. Relying heavily and uncritically on such analyses has a high probability of leading to disaster. We discuss these issues in the context of some actual professional wargaming experiences.