The creation of complexity from simplicity – Chris Madden
The term complexity was popularized in the mid-20th century by Warren Weaver. Weaver identified three broad categories of systems: simple, complicated, and complex.
Simple systems are made of only a few interacting inert objects, like billiard balls or satellites. Complicated systems, like automobiles or airplanes, may have thousands of parts or more. However, once designed and built, even complicated instruments perform in very predictable, mechanical ways. Bicycles, automobiles, and even robotic systems turn out to be everyday items whose degree of complicatedness we take for granted.
Early scientists – Galileo, Descartes, and Newton – developed a set of analytic methods to reduce these simple systems to basic laws. Analysis was understood literally: cutting apart all objects and all claims to truth into their root causes and assumptions in order to reassemble them into complete explanatory systems. These methods were so effective that by the early 1800s Laplace could assert:
“Given for one instant an intelligence which could comprehend all forces by which nature is animated and the respective situation of the beings which compose it – an intelligence sufficiently vast to submit these data to analysis – it would embrace in the same formula the movements of the greatest bodies and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes.”
In other words, there are no accidents; everything that is going to happen is determined by what has already happened, and everything that has already happened can be determined from current conditions.
Early in the 1900s, however, belief in analysis and the ability to predict the future and determine the past from present conditions began to be questioned. French mathematician Henri Poincaré explained that:
“…it may happen that small differences in the initial conditions produce very great ones in the final phenomenon. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible.”
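Poincaré's point can be illustrated numerically. The sketch below uses the logistic map, a standard chaos-theory example (not from this essay): two starting values that differ by one part in a million agree at first, then diverge until prediction fails.

```python
# Hedged sketch: sensitive dependence on initial conditions,
# illustrated with the logistic map x -> r*x*(1 - x) at r = 4.0.
# The map and parameter are textbook chaos examples, chosen for
# illustration; they are not taken from the essay itself.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # differs by one part in a million

# Early on the two trajectories are nearly identical; after a few dozen
# steps the tiny initial error has been amplified to order one.
print(abs(a[5] - b[5]))    # still tiny
print(max(abs(a[i] - b[i]) for i in range(30, 51)))  # large: prediction has failed
```

The error roughly doubles each step, so "a small error in the former" overwhelms the computation long before step fifty.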
Complex* systems have features that are also complicated and may act in patterned ways, but their interactions are constantly changing. An air traffic control system is complex because its functioning depends on many variables that keep varying, such as weather, aircraft downtime, and peak loading.
With complicated systems, one can usually predict outcomes by knowing the design (or by having a detailed engineering manual at hand). In contrast, complex systems may produce highly divergent outcomes depending on the interplay of the elements in the system.
It is impossible to predict with sufficient accuracy how a complex system will respond. One is limited to establishing the conditions characteristic of a complex system that lead to emergence,** then working creatively at leverage points to change the configuration of the system until the desired ends are produced.
*For a system to be classed as complex, it must manifest several necessary qualities:
1. Emergence (self-organization)
Emergence (e.g. ants, birds flocking, human social groups) is a primary quality of a learning system: collectives develop capacities that can exceed the possibilities of the same group of agents working independently, and people who need not have much in common, much less be oriented by a common goal, can join in a collective group that seems to develop a clear purpose.
**The conditions for emergence:
- Internal diversity—a source of possible responses to emergent circumstances. One cannot specify in advance what sorts of variation will be necessary for appropriately intelligent action.
- Internal redundancy—the complement to diversity; enables the habituated, moment-to-moment interactivity of the agents that constitute a system.
- Neighbor interaction—the neighbors that must interact with one another are ideas, hunches, queries, and other manners of representation.
- Distributed control—one must relinquish any desire to control the structure and outcomes of the collective; one must give up control if complexity is going to happen.
- Randomness—the structures that define complex social systems maintain a delicate balance between sufficient coherence to orient agents’ actions and sufficient randomness to allow for flexible and varied response.
2. Bottom up
Emergence is an example of “bottom up” organization; it does not require a “leader,” per se. Emergence is a paradox: a manifestation of a collective intelligence, but intelligent group action is dependent on the independent actions of diverse individuals. (“Intelligence” is the quality of exploring a range of possible actions and selecting ones that are well suited to the immediate situation; a repertoire of possibilities, and a means to discern the relative effectiveness of each possibility, not unlike creativity.)
- Non-polarized groups can consistently make better decisions and come up with better answers than most of their members and…often the group outperforms the best member.
- You do not need a consensus in order…to tap into the wisdom of a crowd, and the search for consensus encourages tepid, lowest-common-denominator solutions which offend no one rather than exciting everyone.
- The rigidly hierarchical, multilayered corporation…discourages the free flow of information.
- Decisions about local problems should be made, as much as possible, by people close to the problem…People with local knowledge are often best positioned to come up with a workable and efficient solution.
- The evidence in favor of decentralization is overwhelming…The more responsibility people have with their own environments, the more engaged they will be.
- Individual irrationality can add up to collective rationality.
- Paradoxically, the best way for a group to be smart is for each person to act as independently as possible.
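The claim that individual irrationality can add up to collective rationality can be sketched with a toy simulation. The setup below (a true value of 100 and Gaussian guessing noise) is my own illustrative assumption, not from the essay: many noisy but independent guesses, simply averaged, land closer to the truth than almost any individual guess.

```python
# Hedged sketch of "wisdom of the crowd" by averaging: the true value
# and the noise model are illustrative assumptions, not from the text.
import random

random.seed(1)  # fixed seed so the run is reproducible
true_value = 100.0

# 1000 "irrational" individuals: each guess is noisy but unbiased.
guesses = [random.gauss(true_value, 25.0) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - true_value)

# How many individuals beat the crowd's estimate on their own?
better_than_crowd = sum(abs(g - true_value) < crowd_error for g in guesses)

print(crowd_error)                       # small: independent errors cancel
print(better_than_crowd / len(guesses))  # small fraction beats the crowd
```

The cancellation depends on independence: if the guessers influence one another, their errors correlate and the averaging advantage collapses, which is the essay's argument against consensus-seeking and rigid hierarchy.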
3. Scale-free networks
A so-called scale-free (decentralized) network, which consists of nodes linking into grander nodes, usually on several levels of organization, is more robust than a centralized network because if a node were to fail, it is unlikely that the whole system will collapse. A decentralized network will, however, decay into a centralized network under stress. For example, when time is a scarce commodity, the most common organizational strategy is a central network with a leader or teacher as the hub and employees or students at the ends of the spokes. This works against the “intelligence” of the organization by preventing agents from pursuing their own self-interest and obsessions, preventing diversity of experience.
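The robustness claim can be made concrete with two toy topologies of my own choosing (a hub-and-spoke star versus a ring of sub-hubs; neither comes from the essay): removing the hub shatters the star, while the decentralized network loses only one cluster's spokes.

```python
# Hedged sketch of centralized vs. decentralized robustness.
# Both toy networks below are illustrative assumptions.
from collections import deque

def largest_component(nodes, edges, removed):
    """Size of the largest connected component once `removed` fails."""
    adj = {n: set() for n in nodes if n != removed}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    best, seen = 0, set()
    for start in adj:          # breadth-first search from each unseen node
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            node = queue.popleft()
            size += 1
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        best = max(best, size)
    return best

# Centralized: one hub joined to eight spokes.
star_nodes = ["hub"] + [f"s{i}" for i in range(8)]
star_edges = [("hub", s) for s in star_nodes[1:]]

# Decentralized: three sub-hubs in a ring, each with two spokes.
multi_nodes = ["a", "b", "c"] + [h + str(i) for h in "abc" for i in range(2)]
multi_edges = [("a", "b"), ("b", "c"), ("c", "a")]
multi_edges += [(h, h + str(i)) for h in "abc" for i in range(2)]

print(largest_component(star_nodes, star_edges, "hub"))  # 1: every spoke is stranded
print(largest_component(multi_nodes, multi_edges, "a"))  # 6: the rest stays connected
```

Losing the star's hub leaves only isolated spokes, while losing one sub-hub of the decentralized network strands just its own two spokes.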
4. Nested organization
An immediate implication of a decentralized architecture is that distinct levels of organization can emerge.
5. Ambiguously bounded, but organizationally closed systems
- Complex systems are “open”; that is, they are constantly exchanging matter and/or information with their contexts. In a situation where a collective is working on a project, it is rarely a simple matter to discern who has contributed what, especially if the final product is at all sophisticated.
- Complex systems usually arise from and are part of other complex systems, even while being coherent and discernible unities. Where does an agent stop and a collective begin? The question is sometimes easily answered. After all, the distinction between an ant and an anthill seems relatively straightforward. However, if one considers more complex systems, for example an individual's personality, the situation becomes much more difficult.
- Distinguishable but intimately intertwined networks can and do exist in the same “spaces.” Consider the relationship between one’s neural system and one’s system of understandings, both of which can be understood in terms of decentralized networks, but neither of which can be collapsed into the other.
6. Structure-determined behavior
Structure-determined behavior is one of the key characteristics used to distinguish a complex unity from a complicated (mechanical) system. The manner in which a complicated system will respond to a perturbation is generally easy to figure out, simply because its responses are determined by the perturbation. For example, if a block of wood is nudged, its response will be quite different than if you nudge a dog. The dog's response will not be determined by you, but by the dog. What is more, not even experience with nudging will provide an adequate knowledge of what will happen if it is repeated—for two reasons. First, a complex system learns; that is, it is constantly altering its own structure in response to emergent experiences. Second, systems that are virtually identical will respond differently to the same perturbation. Hence one cannot generalize the results from one system to another…it problematizes the contemporary desire for “best practices” in education—a notion that what works well in one context should work well in most contexts. That only makes sense when talking about mechanical systems.
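A toy model can show what "the response is determined by the dog, not by you" means computationally. The habituating agent below is my own illustrative construction, not from the essay: the same nudge yields a different response each time because the system rewrites its own internal structure.

```python
# Hedged toy of structure-determined behavior: identical perturbations,
# different responses, because the agent's state changes with experience.
class HabituatingAgent:
    """An agent whose internal structure changes with every perturbation."""

    def __init__(self):
        self.sensitivity = 1.0

    def nudge(self, force):
        response = force * self.sensitivity
        self.sensitivity *= 0.5  # learning: the structure itself changes
        return response

dog = HabituatingAgent()
responses = [dog.nudge(1.0) for _ in range(3)]
print(responses)  # [1.0, 0.5, 0.25]: identical input, shrinking response
```

A block of wood has no such state: its response is a fixed function of the nudge, which is exactly the complicated/complex distinction the paragraph draws.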
7. Far-from-equilibrium
Complex systems do not operate in balance; indeed, a stable equilibrium implies death for a complex system.
8. Short-range relationships
Most of the information is exchanged among close neighbors, meaning that the system's coherence depends mostly on agents' immediate interdependencies, not on centralized control or top-down administration. A “win-win logic” applies: an agent's situation will likely improve if the situations of his/her/its nearest neighbors improve. A “we” is usually better than an “I” for all involved.