Uncertainty and Resilience


In my work as a futurist, focusing on the intersection of environment, technology and culture, the concept of resilience has come to play a fundamental role. We face a present and a future of extraordinary change, and whether that change manifests as threat or opportunity depends on our capacity to adapt and remake ourselves and our civilization — that is, depends upon our resilience. It’s no surprise, then, that Brian Walker’s essay, “Resilience Thinking,” articulates a set of principles that resonate deeply for me.

When I first read Walker’s piece, I was struck by how closely his list of characteristics of resilience parallels the set that I’ve been using in my own work. Some of the language varies, of course — I tend to talk about “transparency” where Walker talks about “feedback,” for example — but the underlying principles align strongly. He includes a feature that I’ve left out (and will consider adding): ecological services, easily lost in a too-brittle environment. I can see how this concept could be applied broadly, articulating the ways in which one element of a resilient system can serve to strengthen and reinforce the viability of other elements of the system.

At the same time, in my work I include a couple of features that Walker doesn’t touch on in his essay. That’s not to say that they are alien to the resilience concept, but (at least in my articulation) they emerge from the worlds of design and strategy more so than the world of ecology. There are undoubtedly parallels to these concepts in other writings on resilience, but I’d like to take a moment to explore how these versions, emphasizing intentionality and planning, might fit in with Walker’s larger argument.

The first is default to least harm. This is a concept from the interaction design world, reflecting a desire to make sure that when a system fails, the default state is as harmless as possible. The narrow goal here is that a system failure shouldn’t make an already-bad situation significantly worse. The air brakes on large trucks might be considered a classic example of this principle: air pressure holds the brakes off, so if the system fails and the air leaks out, the brakes slowly re-engage, bringing the truck to a stop.
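To make the pattern concrete in software terms, here is a minimal sketch of my own (in Python, not something drawn from Walker’s essay); the state names and the pressure threshold are invented for the illustration:

```python
SAFE_STATE = "brakes_engaged"   # the least-harm default (hypothetical name)

def actuator_state(pressure_reading):
    """Return the actuator state, defaulting to least harm.

    The brakes are held off only while we have a valid, sufficient
    pressure reading; any failure (a missing reading, a garbage value)
    falls through to the safe default rather than the last commanded state.
    """
    try:
        if pressure_reading is not None and float(pressure_reading) >= 60.0:
            return "brakes_released"
    except (TypeError, ValueError):
        pass   # a garbage reading is treated the same as no reading
    return SAFE_STATE

print(actuator_state(95.0))    # brakes_released
print(actuator_state(None))    # brakes_engaged
print(actuator_state("n/a"))   # brakes_engaged
```

The design choice is that the safe state is what the system falls into when nothing actively holds it elsewhere, so a silent failure cannot leave the brakes off.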

More broadly, this principle forces us to recognize that no system is immune to failure, and that we are far better served by considering the implications of failure in our plans and strategies. As an element of resilience, this is at once common sense and easily forgotten. We know that we should be ready for disaster, but we hate to think about it. Defaulting to least harm can mean actions as simple as making backups, building fire-breaks, funding safety nets, and so forth; such actions may seem boring, a waste of resources, or even a distraction from core goals, but they are nonetheless key elements of a resilient world.

This concept applies to more than how we build or undertake simple projects. Implicit in the notion of defaulting to least harm is the need to avoid cascade failures, where the collapse of one system overloads and brings down other systems in turn. Avoiding them requires thinking about the resilience not just of individual components, but of connected systems. One example of how this manifests is in the avoidance of monocultures. Monocultures — tree farms or single-operating-system computer networks, for example — can be highly efficient, allowing for consistent and easy management, but they offer a prime example of over-optimization undermining resilience. Under attack — whether by disease or computer virus — monocultures are terribly brittle; because every member shares the same weakness, whatever brings down one can bring down all. Polycultures, mixing different “species” together, can be more complex to manage, but they are far better able to withstand attack.
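A toy simulation makes the contrast plain (again my own sketch, with entirely made-up “species” names and numbers, not a model from the resilience literature):

```python
import random

def surviving_fraction(population, exploited_weakness):
    """Fraction of a system left standing after an attack on one shared weakness."""
    survivors = [member for member in population if member != exploited_weakness]
    return len(survivors) / len(population)

random.seed(1)
monoculture = ["type_a"] * 1000                     # one "species" everywhere
polyculture = [random.choice(["type_a", "type_b", "type_c", "type_d"])
               for _ in range(1000)]                # a rough mix of four

attacked = "type_a"   # the disease or virus exploits a single vulnerability
print(f"monoculture survivors: {surviving_fraction(monoculture, attacked):.0%}")   # 0%
print(f"polyculture survivors: {surviving_fraction(polyculture, attacked):.0%}")   # about 75%
```

The monoculture is cheaper to manage right up until the one weakness it has is found; the mixed population gives up some efficiency in exchange for never losing everything at once.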

The second feature of resilience that Walker might want to consider adding is foresight, the capacity to think through possible future consequences of present actions, and to identify early indicators of changing conditions. The concept of resilience implicitly acknowledges a dynamic environment, and the need to be able to adapt to changing conditions. The speed, form and impact of such changes are inconsistent and unpredictable — but unpredictable is not the same as unforeseeable. It’s possible to use our limited knowledge of what lies ahead to look for present-day choices and actions that serve to improve our resilience, not degrade it.

This means looking at plausible futures, not simply one “official future.” If the future is unpredictable, we’re much better off looking at a range of possible outcomes than at a single best guess. A forecast doesn’t need to be exactly right to be useful; in fact, a mix of divergent, plausible futures (sometimes referred to as scenarios) can offer insights into the strengths and weaknesses of a given system or strategy. With foresight tools such as scenarios (as well as similar processes, such as mapping and gaming), we can test how well our present environment and plans would respond to complex changes; if we see that a particular aspect of our present systems or strategies tends to weaken or fail under certain (unpredictable, but reasonable) conditions, we know that we will likely need to strengthen or change that potential point of failure.
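In software terms, the practice resembles a test suite run against several divergent futures rather than a single prediction. The sketch below is purely illustrative; the scenarios, numbers, and scoring rule are all invented:

```python
# Hypothetical scenarios: each is a bundle of conditions, not a prediction.
scenarios = {
    "slow_decline":  {"demand": 0.7, "supply_shock": False},
    "rapid_growth":  {"demand": 1.6, "supply_shock": False},
    "supply_crisis": {"demand": 1.0, "supply_shock": True},
}

def plan_holds(capacity, reserve, scenario):
    """Crude test of whether a plan meets demand under one scenario (illustrative numbers)."""
    effective = capacity * (0.5 if scenario["supply_shock"] else 1.0) + reserve
    return effective >= scenario["demand"]

plan = {"capacity": 1.0, "reserve": 0.1}   # sized for the single "official future"
for name, scenario in scenarios.items():
    verdict = "holds" if plan_holds(plan["capacity"], plan["reserve"], scenario) else "weak point"
    print(f"{name:13s} -> {verdict}")
```

The value is not in the verdicts themselves but in seeing which conditions expose a weak point before reality does.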

It’s like a wind tunnel, in a way. We can test a design against a variety of conditions (all similar to what one might find in reality), in order to make sure that there are no hidden flaws. It’s not foolproof by any means, but even if a design that passes a wind tunnel test can’t be guaranteed to work, a design that fails such a test almost certainly won’t. Or to adopt an analogy of a different sort, such a practice can be seen as an immune system, where a taste of a possible future allows us to develop antibodies against the less-desirable outcomes.

Both of these aspects of resilience that I’ve come to identify in my work come down to ways of dealing with uncertainty. Systems and strategies of optimization can work only when conditions are certain — and conditions rarely remain certain for long. Resilience, conversely, is an especially viable strategy for dealing with uncertainty, as it does not presume stasis.

Yet it’s something of a paradox: the most resilient systems are those that recognize that they may not be sufficient against every possible outcome. Defaulting to least harm offers a way for a resilience strategy to handle unexpected failure gracefully. Foresight, in turn, offers a way for a resilience strategy to anticipate changes before they occur. This is not defeatism. The potential for failure lies within every action, every system we design — but it’s the very process of preparing for the chance of failure that gives us the greatest hope for long-term success.
