In Parts One and Two of my series on NASDAQ’s handling of the Facebook IPO, I recapped the coding errors that led to the IPO’s delay, the technical details of NASDAQ’s approach, and its consequences. In this article, I examine the human factors intrinsic to decision making that make it hard to stop an evolving event.

In Part Two of the series, I described the steps NASDAQ took to salvage the Facebook IPO after a coding error led to an unexpected race condition and the indefinite delay of the IPO. NASDAQ’s fix, which it implemented without testing, caused a cascade of additional, unexpected failures. Such an outcome is not uncommon when attempting to fix a problem on the fly: once a problem has occurred in a complex system, improvised fixes are likely to be fraught and to lead to further problems. In the case of the Facebook IPO, NASDAQ’s improvised fix led to unanticipated risks and regulatory violations.
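
To make the failure mode concrete, here is a minimal, hypothetical sketch of how a cross calculation that validates itself against a constantly changing order book can loop indefinitely. Everything here (the names, the data structures, even the use of Python) is an illustrative assumption on my part, not NASDAQ’s actual code or architecture:

```python
import random
import threading
import time

# Hypothetical sketch: a cross calculation takes a snapshot of the order
# book, computes, then re-checks the book. If orders churn faster than the
# calculation runs, the validation check never passes and the cross loops.

order_book = {order_id: 100 for order_id in range(500)}  # order_id -> shares
book_lock = threading.Lock()
stop = threading.Event()

def churn_orders():
    """Simulate traders continuously cancelling and entering orders pre-open."""
    next_id = 10_000
    while not stop.is_set():
        with book_lock:
            if order_book:
                order_book.pop(random.choice(list(order_book)))
            order_book[next_id] = 100
        next_id += 1
        time.sleep(0.0001)  # new book activity every ~0.1 ms

def calculate_cross(max_attempts=100):
    """Recompute the cross until the book is unchanged during a calculation."""
    for attempt in range(1, max_attempts + 1):
        with book_lock:
            snapshot = dict(order_book)
        time.sleep(0.001)  # stand-in for the (slower) price calculation
        with book_lock:
            if dict(order_book) == snapshot:  # the validation check
                return attempt
        # The book changed while we computed: discard the result and retry.
    return None

threading.Thread(target=churn_orders, daemon=True).start()
result = calculate_cross()
stop.set()
if result is None:
    print("Cross never converged: the validation check failed on every attempt.")
else:
    print(f"Cross converged on attempt {result}.")
```

Run as written, the churn outpaces the calculation and the cross never converges, which is precisely the kind of loop that cannot be escaped by pushing harder on the original plan.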

In his comments on NASDAQ’s attempt to salvage the IPO, Steven Lofchie of Cadwalader noted the psychological challenge of conceding defeat (if only temporarily) by halting an ongoing process rather than attempting a heroic rescue on the fly. In cognitive psychology, this phenomenon is called plan continuation bias: the difficulty humans have in recognizing that conditions have changed such that a planned course of action is no longer appropriate. When coupled with confirmation bias, the tendency to discount information that contradicts our established conclusions, it becomes difficult to change course even in the face of unexpected adverse events.

Stress compounds this challenge, making it even harder to recognize that a situation has changed. It is well established that elevated stress combined with the high cognitive load of a complex decision leads to decreased performance (Figure 1). This is exactly the situation NASDAQ’s decision makers found themselves in as they wrestled with the IPO’s delay and pushed forward with their plan to initiate trading in Facebook.

But merely understanding that biases and stress affect decision making is not a prescription for reliable performance. Rather, specific tools are needed to equip organizations and employees to make sound decisions in tumultuous circumstances.

[Figure 1: Decision-making performance as a function of stress and cognitive load.]

Train employees to make effective decisions in stressful situations. Repeated training in complex tasks can shift them from a “difficult” cognitive regime to a simpler one. For example, airline pilots train for taxing emergency situations in simulators to reduce the impairment that stress causes. In the context of an exchange like NASDAQ, such training might consist of verbally simulating “emergency” scenarios and evaluating the specific decisions made in response. Additionally, engineers could deploy failure scenarios in test environments that their colleagues have to diagnose and fix in real time, as in the sketch below (for an example of how a technology-savvy company like Google does this, see our discussion on Realistically Evaluating Contingency Plans).
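
As one illustration of such a drill, here is a minimal sketch of a “game day” harness. The failure modes listed, and the commented-out inject_fault helper, are hypothetical placeholders of my own; a real harness would act only on an isolated test environment, never on production systems:

```python
import random
from datetime import datetime

# Hypothetical "game day" drill harness: pick a hidden fault, note when the
# drill starts, and record how long the on-call team takes to diagnose it.

FAILURE_MODES = {
    "stale-feed": "replay yesterday's market data into the test feed",
    "slow-match": "add 500 ms of latency to the matching engine's ack path",
    "dropped-acks": "silently drop 10% of order acknowledgements",
}

def run_drill(seed=None):
    rng = random.Random(seed)
    mode, description = rng.choice(sorted(FAILURE_MODES.items()))
    started = datetime.now()
    print(f"[{started:%H:%M:%S}] Drill started; a fault has been injected.")
    # inject_fault(mode)  # hypothetical helper that acts on the test env only
    input("Press Enter once the on-call team has diagnosed the fault... ")
    elapsed = datetime.now() - started
    print(f"The fault was '{mode}': {description}")
    print(f"Time to diagnosis: {elapsed}. Record this for the debrief.")

if __name__ == "__main__":
    run_drill()
```

The value of such an exercise lies less in the tooling than in the debrief: comparing the decisions the team actually made under time pressure against the decisions it would endorse in hindsight.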

Explicitly enable decision makers to stop a process they do not understand. Specific training can help decision makers understand which types of problems are more easily solved by stepping back and stopping rather than by improvising a plan on the spot. Such an understanding helps organizations explicitly acknowledge stopping a process (temporarily conceding defeat, as Mr. Lofchie would say) as a valid, and equal, procedural option. When backed by conclusions gleaned from managing simulated (or actual) failures, such a decision may be more defensible to regulators, whose traditional stance demands that systems function at all times, even if that is neither practical nor optimal (as seen, for example, in requirements for business continuity plans; it is worth noting that such a regulatory stance may not be the most productive approach: by analogy to aviation, it would be fraught for the FAA to pressure an airline to minimize delays).

Incorporate blockers in crisis decisions. Blockers are designated “devil’s advocates” who explicitly challenge a group’s decisions to ensure that the best decision is being made. Blockers help a group turn away from more obvious, but potentially incorrect, paths by breaking the cognitive lock that can result from groupthink. In the securities industry, organizations already have natural blockers in roles such as the Chief Risk Officer, Chief Compliance Officer, and General Counsel. These roles should be explicitly empowered as blockers at the start of any crisis: as NASDAQ’s handling of the IPO shows, even what seems like a purely technical decision has a multitude of economic, litigation, and regulatory implications.

Explicitly structure decision-making teams. When a crisis arises, structure decision making around functional, rather than corporate or hierarchical, roles. Operational decision-making power should be allocated fluidly to actors with relevant expertise rather than to those with seniority, ensuring that teams have the expertise necessary to make sound decisions. For example, the detailed emergency management plans developed by hospitals restructure decision making around functional, rather than administrative, roles. During the Facebook IPO, such a structure would have explicitly empowered NASDAQ’s engineers to evaluate the “big picture” rather than narrowing their focus to the question of how to continue the IPO. Without understanding the implementation details (i.e., how the code works) of such a technical system, it is hard for senior management to make informed decisions about how to manage the system in a crisis (“Prior to receiving [the SEC release] on May 18, the SVP/INET was unaware of the existence of the validation check.” SEC Release No. 69655, May 29, 2013, p. 6).

Conclusion

The lessons presented here are common practices in fields that require rapid and precise crisis response (for a detailed discussion of how stress and cognitive biases affect commercial airline pilots, see Benjamin Berman’s book, The Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents). Teaching operators specific crisis management skills, empowering them to stop a risky process in progress, incorporating blockers, and organizing decision making around functional roles enable organizations to manage dynamic and complex situations effectively. As the complexity of exchanges and the systems involved in electronic trading increases, companies like NASDAQ can use these tools to create a culture that enables them to regain control of a potential crisis, perhaps by putting a temporary halt to operations, before charging ahead into what may become an unrecoverable error.

Chris Clearfield is a principal at System Logic, an independent consulting firm that helps organizations manage issues of risk and complexity.

Lofchie Comment

It occurs to me that the process Chris describes for coordinating decision-making teams to deal with crises is very similar to the process that many firms already have in place to approve new products. That is, firms often have a product approval process that prevents going ahead with a new product without sign-off from each relevant function within the firm: operations, legal, credit, compliance, the business line, and so on. Translating this new-product approval process into a crisis management process seems moderately straightforward.

Steven Lofchie is the co-head of the Financial Services Department at Cadwalader, Wickersham & Taft.