System Failures and Interventions


“You know, I can’t imagine what it’s going to be like for you. But I can tell you this: It won’t fail because of what I do.” Mattingly realized the reason Apollo worked at all was because thousands of people had said to themselves, “It won’t fail because of me.” (from A Man on the Moon: The Voyages of the Apollo Astronauts, by Andrew Chaikin)

In the last newsletter, I wrote a bit about the work the engineering team I'm on at my day job does within a hospital context: developing simulated tissue, organs, limbs, and bodies to better train surgeons, nurses, and other healthcare professionals. Since then, unsurprisingly, the team has shifted to 100% Covid-19 mitigation efforts, ranging from facilities-adjacent projects to PPE and other devices; the day job is now also a nights-and-weekends job.

Medical device development, engineering, and manufacturing work is notorious for being loaded with paperwork: FMEA (failure mode and effects analysis), traceability, quality management systems, and ISO standard after ISO standard. That cumbersome set of tasks became best practice for good reason: the stakes in the medical world are life and death, and the interactions of systems within healthcare and patient treatments are as varied and complex as anything under the sun.

Even in times of crisis, for every prototype, there are far more plans, reports, FMEA documents, and detailed instructions for use.

Or at least there ought to be. That many of the eager DIY medical and PPE device projects emerging in the last month or so lack those elements is a big concern, and a topic I plan to cover in greater depth whenever life drops back down to more typical levels of chaos and challenge.

As systems scale, their failures can too: a ripple at a local level is a tsunami at a national one. Tsunami-scale failures warrant changes in tactics, but wholesale jettisoning of protective processes and careful planning is a sure recipe for further pain. There are lessons in the history of space programs, where the stakes are life and death, but the course of action cannot be fully simulated in advance.

From a review of the Chaikin book I quoted at the start of this intro:

With regard to Apollo 13 in particular, Chaikin shares the ingenuity that enabled the astronauts to return alive after an oxygen tank exploded on the ship. Forced to shelter in the lunar module, the crew faced a life-threatening buildup of carbon dioxide. They solved this problem by constructing an air purifier out of cardboard from a flight plan book, two lithium hydroxide canisters, a couple of plastic bags, and some tape.

Chaikin shows that this solution involved more than on-the-spot ingenuity. Solving this and other problems the astronauts encountered was made possible by a great deal of earlier thought given to “what-if” scenarios. As Ken Mattingly, one of the astronauts at mission control during Apollo 13, observed, “Nearly every solution the teams were coming up with had already been thought of, and sometimes even tested, on previous missions.”

In times of crisis, action is urgent, but care must persist.

We must find ways to move fast without breaking things.


Design:


Building Things: 

Mapping Markets:   

  • A thought-provoking argument for a government-to-government market (G2G): "There's a kind of dogma that says that states are inherently less efficient than businesses, and there's something to the idea that states aren't often suited to competing head-to-head with the private sector—but there's not much evidence that it's the source of capital that affects efficiency. Rather—businesses are more often provided the incentives and competition that allow them to become near-perfect players in their spaces. By replicating a system of competitive dynamics within the state sector, it should be possible to approximate the dynamics that lead to efficiency in the private sector, without reproducing some of the more pernicious principal-agent problems that contracting entails."

Social Beings:  

More sometime soon. In the meantime, get in touch: andrew@cleardesignlab.co
