The Future of Strategic Decision-Making
by Roger Spitz

Having worked with countless decision-makers and interpreted the next-order impacts of our world’s rapidly accelerating rate of change, we believe humanity is at a crossroads. Evolutionary pressure prioritizes relevance, and that pressure may soon reach our strategic decision-making.
As a society, we must fundamentally adapt the education system (Spitz, 2020), prioritizing experimentation and discovery and instilling curiosity and comfort with uncertainty, starting in the playground and spreading all the way to our boardrooms. If we don’t improve our ability to evolve in a nonlinear world, we could find human decision-making sidelined by algorithms: blindsided by increasing complexity while machines gradually learn to move up the decision value chain.
AAA is often used to denote the ultimate achievement: those with finance backgrounds will recognize it as the highest level of creditworthiness, and in academics it is the top rank on alphabetical grading scales. The UNDP has used “Anticipatory, Adaptive and Agile” in the context of governance (Wiesen, 2020), as have esteemed colleagues in their recent article “Triple-A Governance: Anticipatory, Agile and Adaptive” (Ramos, Uusikyla, & Luong, 2020).
Stephen Hawking (2000) called the 21st century “the century of complexity.” Against that backdrop, we have for some time been using AAA to stand for “Anticipatory, Antifragile and Agile,” defining what humans should develop to improve their abilities as the world becomes more complex. This need to enhance our capabilities is all the more pressing as machines learn fast and take on increasingly higher-level human functions.
While the term anticipatory is intimately related to foresight, for our AAA taxonomy we borrow the definition of antifragile from Taleb (2012): “Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.” And we use “agility” in the context of the Cynefin framework (Snowden & Boone, 2007), looking at properties such as our ability to be curious, innovative and experimental, to know how to amplify or dampen our evolving behaviors depending on feedback, thus allowing instructive patterns to emerge, especially in complex adaptive systems.
Decision-Making: No Longer a Human Exclusive
Decision-making for key strategic topics (like investments, research and development (R&D), and mergers and acquisitions (M&A)) currently mandates human involvement, typically through Chief Executive Officers, leadership teams, boards, shareholders, and governments. Looking forward, the question is not how much machines will augment human decision-making but whether in time humans will remain involved in the process at all.
Through machine learning (ML) and natural language processing (NLP), the capabilities of artificial intelligence (AI) in strategic decision-making are improving rapidly, while human capacities in this area may not necessarily be progressing. It could even be the opposite: while machines are deemed by many to augment humans in a positive way, the Pew Research Center cautions that AI could reduce individuals’ cognitive, social and survival skills: “People’s deepening dependence on machine-driven networks will erode their abilities to think for themselves [and] take action independent of automated systems” (Anderson, Cohn, & Rainie, 2018).
There are many decision cycle models including the much-admired OODA loop[1] (Observe, Orient, Decide, Act).
At its core, we frame decision-making as a simple three-step process (sketched in code below):
Detect and collect intelligence.
Interpret the information.
Make and implement decisions.
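To make this framing concrete, here is a minimal Python sketch of the loop. Everything in it (the class, the signals, the scoring rule) is hypothetical, illustrating only the shape of the three steps, not any production system.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    rationale: str

class DecisionLoop:
    """Toy model of the three steps: detect, interpret, decide."""

    def detect(self) -> list[str]:
        # Step 1: detect and collect intelligence (signals are invented).
        return ["competitor launches video product", "SMS usage declining"]

    def interpret(self, signals: list[str]) -> dict[str, float]:
        # Step 2: interpret the information, e.g. score each signal's urgency.
        return {s: 0.9 if "declining" in s else 0.6 for s in signals}

    def decide(self, scored: dict[str, float]) -> Decision:
        # Step 3: make (and then implement) the decision.
        top = max(scored, key=scored.get)
        return Decision(action=f"respond to: {top}",
                        rationale=f"urgency score {scored[top]:.1f}")

loop = DecisionLoop()
print(loop.decide(loop.interpret(loop.detect())))
```

Note that a failure at step 1 (no signals collected) or step 2 (a bad scoring rule) dooms step 3 regardless of how well the final choice is executed, which is the point of the historical examples that follow.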
Every one of these steps is essential to a successful outcome, and history offers examples of failure at each. Poor intelligence (a failure at step 1) led to the Bay of Pigs invasion, while ineffective interpretation (a failure at step 2) contributed to Israel’s surprise at the 1973 October War.
Step 3 is sometimes harder to isolate. Making and implementing decisions also covers decisions not to set up a system for detecting and collecting intelligence in the first place, or to limit investment in the resources needed to interpret such information. One could argue that the lack of preparation behind the improvised governmental responses to COVID-19 was a failure at all three steps.
Corporate history is littered with leadership teams whose cognitive biases produced poor decisions that extrapolated the past into linear predictions. This often results from humans finding it difficult to process “exponential” trends (which initially do not seem to grow fast) and being oblivious to next-order implications.
Telecom operators had the option to innovate in over-the-top (OTT) technologies rather than relying on historic cash cows like text messaging and international calls. That wrong decision paved the way for new players like Skype, WeChat, and WhatsApp to lead with disruptive exponential technologies. In the same vein, Verizon acquired the video conferencing platform BlueJeans in April 2020 as a late defensive move prompted by the pandemic and the explosion of Zoom, rather than from having anticipated the strategic need for an enterprise-grade video conferencing platform for the future of work (remote), health (telemedicine), or education (online learning). The pandemic merely accelerated that need; a proper grasp of the first two decision-making steps should have led Verizon to make those strategic decisions years earlier, instead of playing catch-up with Zoom today. Similarly, Disney only woke up in 2017 when it acquired control of BAMTech for streaming technology, having let Netflix dominate the space for many precious years.
In 2011, Vincent Barabba wrote, “In essence we alerted the management team that change in the capturing of images through digital technologies was coming, and that they had a decade to prepare for it.” Despite on-target market assessments, Kodak did not make the correct strategic decisions.
Given the speed and scale of change, the question of “if and how” we are able to enhance our capabilities for decision-making is a legitimate one.
Machines are moving up the decision-making value chain

Today, humans primarily use AI for insights, but AI’s skills could surpass human abilities at every step in the process. AI is already improving at predictive analytics and steadily advancing along the analytics maturity curve toward prescriptive outcomes that recommend specific options.
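The gap between predictive and prescriptive analytics can be made concrete in a few lines. Below is a minimal, hypothetical Python sketch: a predictive layer forecasts next-period demand from past data, and a prescriptive layer converts the forecast into a recommended action by maximizing expected payoff. All numbers are invented for illustration.

```python
import numpy as np

# Predictive: fit a trend to past demand and forecast the next period.
history = np.array([100, 112, 125, 141, 158], dtype=float)  # illustrative data
t = np.arange(len(history))
slope, intercept = np.polyfit(t, history, 1)
forecast = slope * len(history) + intercept  # predicted next-period demand

# Prescriptive: choose the capacity that maximizes payoff under the forecast.
def payoff(capacity: float, demand: float, price=10.0, unit_cost=6.0) -> float:
    return price * min(capacity, demand) - unit_cost * capacity

options = [120, 140, 160, 180]
best = max(options, key=lambda c: payoff(c, forecast))
print(f"forecast demand: {forecast:.0f}; recommended capacity: {best}")
```

Prediction answers “what is likely to happen”; prescription answers “what should we do about it,” which is the step closer to strategic decision-making.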

This is in part fueled by exponential technologies as AI learns to move up the value chain:
Machines are archetypically used in optimization, automating processes and repetitive tasks.
We are finding them more present in augmentation roles as well, where they lend their greater processing powers to perceive and learn (such as in radiology).
AI is even tackling the formerly human-mandated domain of creativity: Google Arts & Culture recently partnered with the British choreographer Wayne McGregor to train an AI to choreograph dances (Leprince-Ringuet, 2018).
A significant advantage AI has over humans comes from stacked innovation platforms that scale rapidly: massive amounts of networked data yield ever deeper insights through signal detection, trend interpretation, and pattern recognition, at scale and on unstructured data. ML can thereby unearth non-intuitive information and connections, while NLP is effective at extracting structure from unstructured text.
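As a hedged illustration of this kind of pattern detection, the sketch below clusters a handful of toy documents using TF-IDF features, one simple step an NLP pipeline might use to surface emerging themes in unstructured text. The documents and cluster count are invented; real systems operate on vastly more data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [  # hypothetical unstructured signals
    "new coronavirus cases reported near Wuhan market",
    "unusual pneumonia cluster flagged by clinicians",
    "smartphone shipments rise in emerging markets",
    "handset sales accelerate across Southeast Asia",
]

# Vectorize the text, then group documents by theme.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for label, doc in zip(labels, docs):
    print(label, doc)  # documents sharing a theme land in the same cluster
```

At scale, the same mechanics surface weak signals and non-obvious connections long before a human analyst would spot them.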
AI’s current superiority at detection and collection, with scale aiding interpretation
AI already surpasses human ability in trend detection and in signal and pattern recognition for unstructured data at scale:
One company, BlueDot, used NLP and ML to detect the COVID-19 outbreak before the US Centers for Disease Control and Prevention did.
Another company, Social Standards, scrapes Instagram and Twitter to detect emerging local brands and competitors before they reach peak visibility.
The geospatial analytics company Orbital Insight mines satellite imagery to predict crop yields or the construction rates of Chinese buildings.
Algorithm-augmented predictive insights drive decision-making
A step further than analytics-driven decision support, AI accelerates “infinite” simulations, evaluations, and developments, reducing the cost of testing in major R&D efforts such as drug discovery (a toy sketch of the mechanism follows these examples):
Halicin was the first antibiotic discovered using AI. The model surfaced molecules that can treat even formerly untreatable bacterial strains.
The OCD medication DSP-1181 was the first drug molecule designed by AI to enter phase 1 clinical trials. Thanks to ML, researchers completed in one year a discovery process that would normally have taken several.
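A common mechanism behind such results is a learned surrogate model: rather than running a costly experiment on every candidate, a model trained on a small measured subset scores the entire library, and only the top-ranked candidates go to the lab. The Python sketch below is a toy version with synthetic “molecules” as random feature vectors; it is emphatically not the actual Halicin or DSP-1181 pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic candidate library: 10,000 "molecules" as 32-dim feature vectors.
library = rng.normal(size=(10_000, 32))
true_activity = library[:, 0] - 0.5 * library[:, 1]  # hidden ground truth

# Expensive assay: suppose we can only afford to measure 200 candidates.
measured = rng.choice(len(library), size=200, replace=False)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(library[measured], true_activity[measured])

# Cheap in-silico screen: score everything, send only the top 10 to the lab.
scores = model.predict(library)
shortlist = np.argsort(scores)[-10:]
print("shortlisted candidate indices:", shortlist)
```

The economics are the point: 200 expensive measurements buy a ranking over 10,000 candidates, which is how AI compresses years of trial-and-error into months.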
In the future, will AI perform autonomous, prescriptive strategic decision-making?
AI is currently tasked with decision assistance, not autonomous strategic decision-making. Why? The situation is beyond “complicated”.

Decision-making in complex situations is a strength of neither humans nor AI. In Dave Snowden’s Cynefin framework (Snowden & Boone, 2007), the complex domain involves unknown unknowns, where there are no right answers and cause and effect can only be established retrospectively. So if there is solace to be found in humanity’s poor performance here, it is that machines currently do no better: AI’s comfort zone is the complicated domain, where there is a range of right answers, the unknowns are known, and causality can be analyzed, terrain that plays well to data.
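One way to see why the complex domain resists machines: a model trained on one regime degrades when the regime shifts, which is what unknown unknowns do. The toy Python sketch below trains a classifier on data from a stable regime and evaluates it after the underlying rules have quietly moved; the setup is invented purely to illustrate the point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def regime(mean: float, n: int = 1000):
    # The decision boundary itself moves with the regime.
    X = rng.normal(loc=mean, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * mean).astype(int)
    return X, y

X_train, y_train = regime(mean=0.0)  # "complicated": stable, analyzable rules
X_shift, y_shift = regime(mean=3.0)  # "complex": the rules quietly changed

clf = LogisticRegression().fit(X_train, y_train)
print("in-regime accuracy:  ", clf.score(X_train, y_train))
print("post-shift accuracy: ", clf.score(X_shift, y_shift))
```

In-regime accuracy is high; post-shift accuracy collapses toward chance. Data-driven methods excel where causality is stable and analyzable (the complicated domain) and struggle where it is not.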
Most applications of predictive interpretation involve a joint effort (augmentation) between humans and AI. But as AI improves exponentially, the human role may shrink in a number of areas over time.
In analyzing the trend of machine involvement, one thing is clear: AI is playing a greater role in every step of the decision process. It is starting to take over areas that we previously thought were too important to entrust to machines or required too much human judgment:
In 2017, software from J.P. Morgan completed 360,000 hours of legal due diligence work in seconds.
A mere two years later, in late 2019, Seal Software (acquired in early 2020 by DocuSign) demonstrated software that helps automate the creative side of legal work, suggesting negotiation points and even preparing the negotiations themselves.
EQT Ventures’ proprietary ML platform Motherbrain has driven more than $100 million in portfolio company investments by monitoring over 10 million companies, its algorithms drawing on data from dozens of structured and unstructured sources to identify patterns.
A German startup called intuitive.ai delivers AI solutions to foster informed strategic management decisions, while UK-based startup 9Q.ai is developing “Complex AI” to optimize multi-objective strategic decision-making in real-time including for the management consultancy sector.
As we are seeing with the current crisis, the extent of international failures in preparation (such as completely ignoring warnings from the US’s own intelligence services, Bill Gates, or the World Economic Forum) is just the tip of the iceberg: our responses have also lacked the required problem-solving frameworks. So currently, neither humans nor AI perform well in complex systems. And few leaders embrace the experimental model, which requires curiosity, creativity, and diverse perspectives to allow unpredictable but instructive patterns to emerge.
Will we rise to the challenge of accelerating, disruptive, and unpredictable complex times? Because AI will certainly keep learning—even beyond complicated—as algorithms will no longer rely on only a range of right answers:
Matthew Cobb (2020) provides a detailed examination of whether our brain is a computer, spanning the views of Gary Marcus (“Computers are, in a nutshell, systematic architectures that take inputs, encode and manipulate information, and transform their inputs into outputs. Brains are, so far as we can tell, exactly that.”) and those neuroscientists who consider that, even if this were true, “reverse engineering” the brain may not be a given.
AI is developing fast at handling complexity, with progress in key areas such as artificial neural networks (broadly inspired by the biological neural networks that constitute brains, and effective at pattern recognition). Russell, in his seminal books on AI, acknowledges the views of a number of philosophers who believe AI will never succeed, while expanding on how intelligent agents reason logically with knowledge, including how they make decisions in uncertain environments, and on the importance of artificial neural networks in generating the knowledge such agents need to make decisions (Russell & Norvig, 2020).
There are of course limitations to what AI can do today, partly due to the data itself, and even more so in complex systems (“Data means more information, but also means more false information” (Taleb, 2013)). In The Black Swan, Taleb (2007) warns against misuses of big data, including the “rearview mirror” (mistaking confirmation for causality), a form of poor reasoning in which the narrative built around the data produces a history that looks clearer than empirical reality. He also flags “silent evidence”: one cannot rely on experiential observations alone to reach valid conclusions, because missing data, spurious correlations, and previously unobserved events can have a tremendous impact.
Earlier this year, Ragnar Fjelland (2020) wrote “Why general artificial intelligence will not be realized,” and while he acknowledges major milestones in AI research (including DeepMind’s AlphaGo in deep reinforcement learning), his view is that such systems lack flexibility and find it difficult to adapt to changes in their environment. Like Taleb, he focuses on correlation versus causality, and on AI’s lack of understanding, a major limitation today.
As AI continues to develop, machines could become increasingly legitimate in autonomously making the strategic decisions where today humans have the edge. If humans fail to become sufficiently AAA, rapidly learning machines could surpass our abilities. They do not have to reach general artificial intelligence or become exceptional at handling complex systems; they just have to be better than us.