If you haven’t yet read Samuel Arbesman’s Overcomplicated: Technology at the Limits of Comprehension (Random House, 2016), I hope you will soon.
This improbably short book does three things magnificently well. First, it explains the befuddling fact that we now live with technological systems so complex that even the engineers who built them do not necessarily understand their behavior. Second, it offers one of the most lucid explanations of complex systems that you are likely to find. Third, it charts a pathway to understanding and diagnosing failures within those systems. That pathway, it turns out, begins with the simple activity of observing the system to gain insight into how it behaves. Arbesman’s focus is on technology, but his guidance is applicable to our human systems—government bureaucracies, financial institutions, cities and towns, companies and organizations—which of course are suffused with technology as well.
Most of us who aren’t engineers don’t really understand what is happening inside the technological tools we rely on: our computers, phones, and increasingly the software that helps us do everything from booking train tickets and paying taxes to buying groceries and turning on the heat in our houses. I’m not sure whether this is comforting or alarming, but the people who built these systems no longer fully understand them either. They too are puzzled by ‘bugs’ and by larger-scale failures.
Computing is the guilty party: although humans have for centuries created increasingly complicated technologies, the advance of computing, and the millions of lines of code that may describe a single program, have created technology that “has eclipsed our ability to comprehend it,” as Arbesman explains. To take one example: in the “flash crash” of May 6, 2010, U.S. stock indexes experienced unprecedented turbulence, suffering one of their steepest drops ever and then recovering, all in the space of about half an hour. Automated ‘algorithmic’ trades moving through the market at near-instantaneous speed produced ripple effects at different levels of the trading system, including in investor behavior, in unpredictable ways.
Computing as Complexity
The market behaved this way because the financial system is “complex,” a term of art derived from complexity science. To understand complexity, picture this scene of buoys floating near a dock, as Arbesman suggests:
Imagine water buoys, tied together, floating in the water. As a boat goes by, its wake generates small waves that begin moving one buoy and then another. But each buoy does not act alone. Its own motion, since it is connected by rope to other buoys of different weights and sizes, causes changes in the undulations of the others. These movements can even cause unexpected feedback so that one buoy’s motion eventually comes back, indirectly, to itself. The boat’s simple wake has generated a large cascade of activity across this complex network of buoys. And if the boat had sped by in just a slightly different way—at another speed or angle—the motions of the buoys might have been entirely different.
By comparison, if the buoys were thrown onto the dock in a lifeless pile, they might form an arrangement of tangled ropes and individual buoys that would be difficult to describe or replicate. That arrangement could be called “complicated.” But without the dynamic interconnectedness that mobilizes them in the water, the buoys are not complex.
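Arbesman’s buoys can be sketched in code. The following toy model is my own illustration, not from the book: it approximates the buoys as masses linked by rope “springs” and damped by water drag, and shows how a single kick from a boat’s wake cascades through the entire network.

```python
# Toy model of Arbesman's buoy network (an illustration, not from the book):
# buoys as masses coupled to their neighbours by rope "springs",
# with water drag damping their motion.
def simulate(n=8, steps=4000, dt=0.01, kick=1.0):
    x = [0.0] * n          # displacement of each buoy
    v = [0.0] * n          # velocity of each buoy
    v[0] = kick            # the boat's wake strikes only the first buoy
    peak = [0.0] * n       # largest excursion each buoy ever makes
    for _ in range(steps):
        a = []
        for i in range(n):
            left = x[i - 1] - x[i] if i > 0 else 0.0
            right = x[i + 1] - x[i] if i < n - 1 else 0.0
            # rope tension couples each buoy to its neighbours;
            # the -0.1*v term is drag from the water
            a.append(2.0 * (left + right) - 0.1 * v[i])
        for i in range(n):
            v[i] += a[i] * dt
            x[i] += v[i] * dt
            peak[i] = max(peak[i], abs(x[i]))
    return peak

peaks = simulate()
# The kick to buoy 0 cascades down the chain (and reflects back from the
# far end): every buoy ends up moving appreciably.
print(all(p > 0.01 for p in peaks))
```

Run twice with a slightly different `kick`, the motions differ throughout the chain, echoing Arbesman’s point that a boat passing “in just a slightly different way” would have produced entirely different undulations.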
The behavior of complex systems is difficult to predict, and so are their failures, for several reasons:
- There is rarely a single cause of failure, but rather multiple contributing factors
- Interconnections in the system mean that small changes ripple through it, producing new effects
- Interconnectedness also means that the system produces feedback, which has independent effects on the system itself
Rather, like a living thing, complexity is characterized, in Arbesman’s wonderful language, by “a riot of motion and interaction, enormously sophisticated, with small changes cascading throughout the organism’s body, generating a whole host of behaviors.”
Unpredictability Compounds Complexity in Computer and Human Systems
What is it about complex systems that makes their behavior so difficult to project? Arbesman offers several explanations. In much of his imagery, humans are trying to peer from the outside into systems grounded in computer code, whether a personal laptop, a nuclear power plant, or the financial markets.
But it is easy to see that human social systems function in many of the same ways. In reality, of course, we humans are instrumental to how technology functions in our world. Technologies, after all, are embedded in our world and in our social systems. As we digitalize and automate an increasing number of our social domains, complex technologies and complex social systems are not simply like each other; they are converging. Arbesman’s reasoning, therefore, has significant explanatory power for those thinking about the challenges that face contemporary organizations as a whole, rather than simply their IT departments.
First, complex systems exist on multiple levels, whose interactions can produce unanticipated phenomena, a quality called “emergence” in complexity science. The internet has multiple layers, from the fiber-optic cables snaking across oceans, to the physical components of a single computer, to its software. As these systems grow larger and more complicated, the different levels can interact in unanticipated ways. Computer scientist Melanie Mitchell has described emergence by way of army ants. By itself, each ant is “nearly blind and minimally intelligent,” yet in the large groups of half a million or more that form in the Brazilian rain forest, they begin to move as a single body with collective intelligence, capable of destroying or carrying away everything edible in their path (from her book Complexity: A Guided Tour).
Second, the process of accretion makes systems difficult to understand as they get bigger, more complicated, and older. Any engineered system, by its nature, is a simplified model of reality. As new conditions and exceptions to the original rules arise, small fixes are made to solve those problems, but they also interact with the rules generated by the old system. This is true in software development, but it is also visible in the formal and informal systems of social organization, such as gender ordering. The current legislative struggle to manage how transgender people will use public bathrooms is the social equivalent of a kluge: “a cobbled together, inelegant and sometimes needlessly complicated solution” grafted onto a system that never imagined this particular case.
Third, higher levels of interaction between cobbled-together systems, greater interoperability, and more interdependence amplify both the number and the intensity of the interactions that can take place within the system, leading to even more potentially unpredictable outcomes.
Messy Diversity versus Elegant Simplifications
Since humans are not cognitively capable of grasping any large complex technological system in its entirety, we have to learn to approach our technology-saturated systems in new and, Arbesman suggests, somewhat humble ways. We may have to abandon the Enlightenment’s ambitious claim that we can ultimately know everything, and approach knowing in another way.
Above all, Arbesman suggests that we embrace what he calls ‘biological thinking.’ Living systems—the human body, for instance—exemplify complexity, which makes their behavior difficult to understand or predict as well. No one fully knows, for example, why some people get cancer, why cancers can behave in different ways in different bodies, or what mix of external and internal drivers leads to a cancer. In order to gain insight into these phenomena, biological thinkers have developed specialized approaches. Arbesman reasons that these approaches will be useful for technological complexity as well. He contrasts biological thinking with physics thinking, which seeks to generate elegant, unifying explanations to describe systems.
In contrast, biologists are more accepting of diversity and specificity, more sensitive to history, and more attuned to the fact that all systems carry legacies of evolution and adaptation that have shaped their function over time. Biological thinkers gain insight into the interrelationships between different sub-systems by seeking to learn from mutations and accidents, rather than by producing abstract models that expel, rather than appreciate, details, exceptions, and the oddly inexplicable.
Diagnosing Complex Social Systems
Arbesman ends his book just shy of saying what it is actually like to exhibit biological thinking in practice. But if he did, he might say it is like telling a story. There is virtue, as he points out, in developing models of complex domains, like that of a city, to understand how different elements interact and flow at a high level of abstraction. But as he also points out, you might not know from that model why certain patterns or system behaviors are occurring.
For those insights, you must apply biological thinking: go into the street and ask the people who are in the system to tell the story of what they see, what they are doing, where they are going and why. You will get irreducible stories of great specificity, of why people are buying flowers, running to the drugstore, hailing a cab, stamping their foot and proclaiming the end is near, or leaning against a building laughing into their mobile phones. You may also find emergent behavior, phenomena that appear only when different systems interact. A rush hour traffic jam emerges because a temporal system, the 9-5 workday, interacts with a spatial system, the organization of cities into residential and business areas.
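This kind of emergence can be reproduced in miniature. In the well-known Nagel-Schreckenberg traffic model (my illustration, not an example from the book), identical drivers on a ring road each follow four simple rules, yet at sufficient density “phantom” traffic jams appear with no single cause:

```python
import random

# Nagel-Schreckenberg cellular automaton (an illustration, not from the book):
# identical drivers follow four simple local rules, yet dense traffic
# spontaneously produces jams that no individual driver caused.
def nasch(road_len=100, n_cars=35, vmax=5, p_dally=0.3, steps=200, seed=1):
    rng = random.Random(seed)
    pos = sorted(rng.sample(range(road_len), n_cars))  # cars on a ring road
    vel = [0] * n_cars
    stopped_seen = 0                      # how often a car sat at a standstill
    for _ in range(steps):
        for i in range(n_cars):
            gap = (pos[(i + 1) % n_cars] - pos[i] - 1) % road_len
            v = min(vel[i] + 1, vmax)     # rule 1: accelerate toward the limit
            v = min(v, gap)               # rule 2: brake to avoid the car ahead
            if v > 0 and rng.random() < p_dally:
                v -= 1                    # rule 3: random human hesitation
            vel[i] = v
        pos = [(pos[i] + vel[i]) % road_len for i in range(n_cars)]  # rule 4: move
        stopped_seen += sum(1 for v in vel if v == 0)
    return stopped_seen

# Jams (fully stopped cars) emerge from the interaction of the rules alone.
print(nasch() > 0)
```

A single car on the same road with no hesitation never stops: remove the density and the interactions, and the jam disappears, just as the rush-hour jam in the text disappears without the interaction of the workday and the city’s layout.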
If we want to better understand, and improve, institutions that are producing unexpected and inexplicable behaviors, we can start by acting like field biologists: “collect, record and catalog” what is happening, with a focus on “details and diversity.”
For a purely technological system, the story unfolds in a record of actions that occurred at the level of code, and of any effects that rippled out into the physical world (we threw our laptop across the room in frustration; a nuclear reactor was triggered to shut down).
In a human system, the story unfolds when you ask people to tell you their story, in their words: what they think is happening, why, what they think should be done, how the situation looks from where they are. And people, unlike code, may also have ideas about how to solve or alleviate problems generated from complexity.
This may seem on its face to be terribly laborious, inefficient, and expensive in terms of time and money. But consider the profound expense and inefficiency of building systems based on models that do not accurately reflect, or respect, how particular our human reality is. Individuals do not behave in linear ways, and although certain patterns may appear in the aggregate (we can call them culture, in some contexts), we are still not entirely predictable. Arbesman recommends that the way to learn the potential behavior of a nonlinear, complex system is to “inject the unexpected” into a situation and see what happens. Using the insights gained from collecting stories to introduce the unexpected into the system is a route to learning and to building better systems.