In 1913, Henry Ford popularized the concept of the moving production line, dividing the assembly of automobiles into discrete steps, with workers performing specific tasks. By revolutionizing the manufacturing process, Ford transformed the automotive industry and made automobiles available to the masses. This systematic approach has informed technological innovation ever since. It is identifiable in the frameworks that shape emerging technologies today and constitutes a largely irreversible change in how such systems will be constructed in the future. What remains underappreciated, however, is the extent to which this approach, by its highly structured nature, represents a security threat.
Modern systems and processes associated with technological innovation represent a novel and disruptive domain or battlespace relevant to both military and civilian organizations. Actors can maneuver within those systems to create specific cognitive, informational, and technological effects. Moreover, as they have become a typical feature of a nation's innovation landscape, similar ecologies are being developed by our enemies. And therein lies not only friction but opportunity. The systematized processes associated with technological innovation are ripe for manipulation through cognitive-style warfare and could be more effective in achieving asymmetric, and potentially strategic, advantage than the technologies themselves.
The growth of these systems has been widespread throughout the modern era, generating a new kind of sociotechnical reality. For example, the development of advanced transport and communications networks necessitated data management systems, metrics, and regulatory frameworks, especially as these technologies became accessible to the masses. We have become so reliant on digital technologies, and the systems and infrastructures that make them possible, that one scholar has argued that humans have become “hybrids of machine and organism” and increasingly fixated on the idea that technological advancement is the key to progress.
This technological dependence has restructured systems, organizations, and human cognition, and—somewhat counter to its purpose—has regimented the space for human creativity. Innovation hubs and accelerators, now pervasive in the modern landscape, were intended to foster new ideas, support startups, and connect entrepreneurs with resources and expertise. But they now often act in pursuit of narrow strategic priorities. What was once treated as an organic craft is now classified, operationalized, and systematized by specific means and for explicit ends. The implications of this highly structured approach are significant yet poorly understood.
A structured technological innovation system allows for targeted resourcing and increased efficiency. But it also creates exposure through openness and dependence. This can create weak points, vulnerabilities, and brittleness, all of which can be attacked or exploited. Processes are mediated by technical systems (models, platforms, pipelines, evaluation metrics), shaping which problems become visible and which paths appear most viable. The systems become legible in the sense that their constituent parts can increasingly be “read” and understood. Moreover, the propensity for organizations to detail their innovation processes in written strategic plans, describing their modes and methods of execution, makes them readable not just figuratively but also literally.
These are serious exposures. Something that is steerable can become influenceable, and this could take the form of hostile manipulation. The reality is, our technological innovation systems are legible to friend and foe alike. An unexamined consequence of creating highly structured systems is the inability to control who is reading them and acting upon them. An actor could intervene at the level of tools, workflows, training data, incentives, or evaluation criteria with downstream effects on creative output, strategic direction, and institutional behavior. There are ample entry points for technical interventions, influence operations, and calculated deceptions, deployed such that they manipulate a rival into self-defeating positions. In effect, we could be innovating our way into a trap.
The central contention of this article is not one of doom and gloom, however. Our enemies, too, have implemented systematized innovation ecologies that could be manipulated to create distortions in perceptions and calculus. During the Cold War, the U.S. Office of Net Assessment used knowledge of Soviet organizational behavior to gain competitive advantage. A similarly rigorous understanding of modern technological innovation ecologies could provide prospects for gaining the upper hand, as it were. But such a course requires acuity and the ability to situate oneself outside of the systems in question. Deploying cognitive-style warfare as a means of weaponizing innovation systems involves a return to a more organic form of innovation. We can choose to become the attacker, rather than the attacked.
A recent NATO report defined cognitive warfare as the “manipulation of the enemy's cognition,” involving “the use of all knowledge, strategies, and available tools to impact human behavior … with the end goal of manipulating and altering decision-making.” Under this definition, the systems associated with technological innovation offer ripe pickings for cognitive-style warfare. Now that humans have fashioned this highly vulnerable domain, defined by the ever-deepening and increasingly structured union of humans and machines, we can no longer ignore the opportunities and threats we have built into it. And to be clear, if we decide we do not have the appetite for such a battle, our enemies certainly will.