As with any organization seeking to inform and guide decision-making, the United States military has relied on data analysis many times in its history. In many cases, this reliance led to successful outcomes. In others, the data itself was unreliable, or it was employed in ways that led decision-makers astray. Three such cases are the recent talent-based branching effort for new officers (“The Good”), the misinterpretation of airplane battle damage in WWII (“The Bad”), and the reliance on enemy casualty figures as a barometer of success in the Vietnam War (“The Ugly”). A brief exploration of each will help us understand the promises and pitfalls of using data to drive organizational behavior.
As the familiar observation goes, you don’t know what you don’t know. For aspiring Army officers, one largely uninformed choice made before commissioning day will have a lasting impact: the selection of branch. Unfortunately, not knowing what life as an infantryman or signal officer truly entails leaves cadets choosing largely blind on branch selection day. For the Army, meanwhile, inefficiencies arise when minimally informed individuals enter career fields for which they are poorly suited. This information gap creates downstream problems such as attrition, poor morale, and the cost of retraining officers who elect to switch branches later in their careers.
Fortunately, the Army’s Training and Doctrine Command (TRADOC) has recently moved to correct this situation. TRADOC collected such pertinent inputs as trait-based data from the branches, new-entrant tests, academic performance numbers, and more to create “talent-based profiles” of those individuals most likely to succeed in the various Army career fields. It then moved the career field education and branch selection process earlier, so that it now begins in year one of the commissioning journey rather than year four. The collection and distribution of pertinent information to the right audience at the right time is already drawing rave reviews from senior Army leaders, and represents a key step in modernizing Army talent management for the 21st century. Although the program is in its infancy, the effort mirrors practices at tech giants like Google that have proven quite successful.
The Army Air Corps (AAC) was desperate to keep planes in the sky during WWII. Battle losses had tactical, operational, and strategic impacts: the AAC could not win dogfights without fighters, could not mount operations without swarms of bombers pummeling ground defenses into submission, and could not support broader strategy while replacing downed aircraft sapped the nation’s manufacturing resources. One major area of analysis for the AAC was the armoring of its aircraft to withstand the hazards of the unfriendly skies. Naturally, a key input into that analysis was the damage sustained by the planes that made it back to base. Officials hoped that by evaluating the surviving airframes, they could make informed decisions on where to add armor in an optimal way so as not to overburden engines and slow planes down.
A good theory, perhaps, but in practice the AAC’s data collection and interpretation went badly wrong. Why? The sample itself was biased. The airframes that made it safely back to base showed damage concentrated in common areas, so the AAC’s answer was to add armor plating to those areas, on the presumption that they were the most vulnerable to attack. Enter the analyst Abraham Wald. Wald knew they were all wrong, because the data included only the survivors. If planes riddled with damage in those areas were still making it home, then those areas could evidently absorb hits; the aircraft struck in the undamaged areas, by contrast, never returned to be counted. Wald therefore inferred that the unscathed areas were the ones that needed armor, since they were responsible for the safe return of the aircraft being studied. Today it is thought Wald saved thousands of U.S. lives by correcting this misuse of data, now known as survivorship bias, before it could take hold.
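Wald’s insight can be illustrated with a small simulation. This is a hedged sketch: the damage zones and per-zone loss probabilities below are invented for illustration, not historical figures. Each plane takes one hit in a random zone, and we tally damage only on the planes that survive, as the AAC did.

```python
import random

random.seed(42)

ZONES = ["engine", "cockpit", "fuselage", "wings"]
# Assumed probability that a hit in each zone downs the aircraft
# (hypothetical values chosen so engine/cockpit hits are usually fatal).
LOSS_PROB = {"engine": 0.8, "cockpit": 0.7, "fuselage": 0.1, "wings": 0.15}

def surviving_damage_counts(n_planes: int) -> dict:
    """Tally hit locations among surviving aircraft only."""
    observed = {zone: 0 for zone in ZONES}
    for _ in range(n_planes):
        zone = random.choice(ZONES)            # each plane takes one hit
        if random.random() >= LOSS_PROB[zone]: # plane survives the hit...
            observed[zone] += 1                # ...and only then is counted
    return observed

counts = surviving_damage_counts(10_000)
print(counts)
```

The naive reading of `counts` is to armor the zones with the most recorded damage, which here will be the fuselage and wings. But those tallies dominate precisely because hits there are survivable; the planes struck in the engine and cockpit never returned to enter the dataset. The right response, as Wald saw, is to armor the zones that appear least damaged among the survivors.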
Perhaps the military’s greatest misapplication of data on a grand scale occurred during the Vietnam War. It was during that conflict that the great political commentator Walter Lippmann’s theories on the link between rising casualty figures and declining U.S. public support for war played out in newspaper headlines and protests in the streets. Meanwhile, U.S. military and civilian leaders were enamored with a related notion: that enemy casualty figures could serve as a sort of antivenom for U.S. public morale, helping burnish their case that the military was winning the campaign against the Vietcong through attrition. It was a tragic, if understandable, flaw in logic for the U.S. side. After all, the context with which Presidents Kennedy, Johnson, and Nixon and their advisors were most familiar was conventional war. In that sort of fighting, enemy casualties were frequently weighed against those of the friendly side, and together with other metrics, such as the comparative amount of territory held or the number of sorties flown, a sense of the war’s larger tide might be gleaned.
Vietnam was different. The “search and destroy” method of the war’s early years did more harm than good. As LTC (ret.) John Nagl observed in his seminal counterinsurgency study Learning to Eat Soup with a Knife, despite numerous indications that the U.S. could not kill its way out of the conflict, the instrumentalization of enemy casualty figures was too tempting a ploy for leaders to set aside. As Nagl notes, this was often despite clear internal warnings: “It is hardly surprising that [the Department of International Security Affairs’] complete repudiation of the Military Assistance Command, Vietnam strategy was not popular in the military high command.” Those warnings went unheeded, and the misapplication of enemy casualty figures served as gasoline atop the already blazing fire of wrong choices and imminent defeat.
Data is a sword in a scabbard. It does not believe in a higher power. It is only in the hands of those seeking to make or destroy a case that it comes alive to serve a higher purpose. For military professionals, it is critical that the higher purpose be moral, ethical, sensible, and wise. Those criteria satisfied, it is then necessary to determine precisely what the data tells us. Are we engaging in sophistry and selectively picking out pieces of the data that support our current position? Are we instrumentalizing bad data to avoid having to change course? Or rather are we doing what’s right and casting fresh light on a challenge so we can improve, as with matching new officers with their branches? Sometimes it takes a brilliant and unconventional mind like Abraham Wald’s to divert the well-intentioned from disaster. As Wald knew, it often starts with understanding what’s right in front of our eyes.