Data Driven Decisions: the Good, the Bad, and the Ugly

As with any organization seeking to inform and guide decision-making, the United States military has relied on data analysis many times in its history. In many cases, this led to successful outcomes. In other cases, faults existed in the reliability of the data or the manner in which it was subsequently employed. Three such cases are recent talent-based branching efforts for new officers (“The Good”), the misinterpretation of airplane battle damage in WWII (“The Bad”), and the reliance on enemy casualty figures as a barometer of success in the Vietnam War (“The Ugly”). A brief exploration of each will help us understand the promises and pitfalls of using data to drive organizational behavior.


As the familiar observation goes, you don’t know what you don’t know. For aspiring Army officers, one largely uninformed choice made before commissioning day will have a lasting impact: the selection of branch. Unfortunately for these individuals, not knowing what life as an infantryman or signal officer truly entails leaves them poorly equipped on branch selection day. Meanwhile, the Army suffers inefficiencies when minimally informed individuals enter career fields for which they are poorly suited. This information gap creates future problems such as attrition, poor morale, and the resource cost of retraining officers who elect to switch branches later in their careers.


Fortunately, the Army’s Training and Doctrine Command (TRADOC) has recently moved to correct this situation. TRADOC collected such pertinent inputs as trait-based data from the branches, new-entrant tests, academic performance numbers, and more to create “talent-based profiles” of the individuals most likely to succeed in the various Army career fields. It then initiated the career field education and branch selection process earlier, at the beginning of cadets’ journeys rather than the end: the process now starts in year one rather than year four of the commissioning journey. The collection and distribution of pertinent information to the right audience at the right time is already drawing rave reviews from senior Army leaders, and represents a key step in modernizing Army talent management for the 21st Century. Although the program is in its infancy, the effort mirrors practices at tech giants like Google that have proven quite successful.


The Army Air Corps (AAC) was desperate to keep planes in the sky during WWII. Battle losses had tactical, operational, and strategic impacts: without fighters the AAC could not win dogfights, without swarms of bombers pummeling ground defenses into submission it could not mount operations, and every airframe lost sapped the nation’s manufacturing resources. One major area of analysis for the AAC was the armoring of its aircraft to withstand the hazards of the unfriendly skies. Naturally, a key input into that analysis was the damage sustained by the planes that made it back to base. Officials hoped that by evaluating the surviving airframes, they could decide where to add armor in an optimal way, without overburdening engines and slowing the planes down.


Perhaps a good theory, but in practice, the AAC’s data collection and interpretation went all wrong. Why? Officials were drawing the wrong lessons from the patterns they noticed. The airframes that safely made it back to base shared common areas of damage, and the AAC’s answer was to add armor plating to those areas, on the presumption that they were the most vulnerable to attack. Enter the analyst Abraham Wald. Wald knew they were all wrong. He inferred instead that the areas that had not sustained damage were the ones that should be armored. Why? The sample consisted only of survivors. A plane could take hits in the damaged areas and still fly home, so those areas were in fact the least critical to the aircraft’s functionality. Planes struck in the undamaged areas, by contrast, never made it back to be counted, meaning those were precisely the spots where a hit proved fatal and where armor was needed. This insight, now a textbook illustration of survivorship bias, is thought to have saved thousands of U.S. lives by correcting the misuse of data before it could take hold.


Perhaps the military’s greatest misapplication of data on a grand scale occurred during the Vietnam War. It was during that conflict that the great political commentator Walter Lippmann’s theories on the link between rising casualty figures and declining U.S. public support for war played out in newspaper headlines and protests in the streets. Meanwhile, U.S. military and civilian leaders were enamored with a related notion: that enemy casualty figures could serve as a sort of antivenom for U.S. public morale, helping burnish their case that the military was winning the campaign against the Vietcong through attrition. It was a tragic, if understandable, flaw in logic for the U.S. side. After all, the context with which Presidents Kennedy, Johnson, and Nixon and their advisors were most familiar was conventional war. In that sort of fighting, enemy casualties were frequently held up against those of the friendly side, and alongside other metrics such as the comparative amount of area held and the number of sorties flown, a sense of the war’s larger tide might be gleaned.


Vietnam was different. The “search and destroy” method of the war’s early years did more harm than good. As LTC (ret.) John Nagl observed in his seminal counterinsurgency study Learning to Eat Soup with a Knife, despite numerous indications that the U.S. could not kill its way out of the conflict, the instrumentalization of enemy casualty figures was too tempting a ploy for leaders to set aside. As Nagl notes, this was often despite clear internal warnings: “It is hardly surprising that [the Department of International Security Affairs’] complete repudiation of the Military Assistance Command, Vietnam strategy was not popular in the military high command.” Those warnings went unheeded, and the misapplication of enemy casualty figures served as gasoline atop the already blazing fire of wrong choices and imminent defeat.


Data is a sword in a scabbard. It does not believe in a higher power. It is only in the hands of those seeking to make or destroy a case that it comes alive to serve a higher purpose. For military professionals, it is critical that the higher purpose be moral, ethical, sensible, and wise. Those criteria satisfied, it is then necessary to determine precisely what the data tells us. Are we engaging in sophistry and selectively picking out pieces of the data that support our current position? Are we instrumentalizing bad data to avoid having to change course? Or rather are we doing what’s right and casting fresh light on a challenge so we can improve, as with matching new officers with their branches? Sometimes it takes a brilliant and unconventional mind like Abraham Wald’s to divert the well-intentioned from disaster. As Wald knew, it often starts with understanding what’s right in front of our eyes.



Praising Kane: An Economic Look at Tim Kane’s Vision for DoD Personnel Reform

Tim Kane’s book Bleeding Talent struck a nerve in the national security community upon its release in 2012. In it, Kane laid out a compelling case that the United States military was losing many of its best people to inefficient personnel systems, morale problems, and stifling bureaucracy. If that book was the diagnosis, Kane’s newly released paper Total Volunteer Force provides his prescription. Subtitled “A Blueprint for Pentagon Personnel Reform”, the paper could serve exactly that purpose should senior DoD figures wish to take on large-scale personnel change. It is most notable for its attempts to introduce private-sector market forces to DoD: practices honed in competitive settings to make the best possible use of available labor.

The DoD labor market is influenced by both macroeconomic and microeconomic forces. Macroeconomically, unemployment in the general U.S. economy is an important force bearing on both DoD’s labor pool, and the resources available to hire, train, and retain talent (i.e. federal tax revenues). Microeconomically, service members will continue to seek the highest possible utility from their lives and careers. Kane’s paper provides twenty specific recommendations, and he does a commendable job factoring in both macro and micro forces. A few examples will help illustrate why Kane’s paper represents more than just a “common sense” approach to DoD personnel reform, and is largely grounded in solid economic theory, making it more likely to actually succeed.

One of Kane’s more novel recommendations is to allow veterans and reservists to apply for active duty jobs. As Kane says, “the current lack of permeability eliminates from military jobs millions of fully qualified citizens who have already served honorably.” Although his recommendation will have secondary effects in such areas as compensation and force structure, the central premise is logical. An efficient labor market is one in which talent can flow most freely and hiring agencies are best able to seek and retain the best qualified applicants.  Kane’s recommendation achieves both goals, and has the additional macroeconomic benefit of driving down overall labor costs, as a reservist would only be paid at an active duty rate when their skill set is most needed in the active component.  For example, in a ten year period in which the reservist is on active duty for three years and in reserve status for seven, a much better value would be generated for DoD versus paying an active duty salary for the full decade.
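Kane’s reservist arithmetic can be sketched with a toy calculation; the pay figures below are hypothetical placeholders chosen for illustration, not actual DoD pay tables:

```python
# Illustrative cost comparison for Kane's permeability recommendation.
# Both annual figures are assumed placeholders, not real pay data.
ACTIVE_ANNUAL = 90_000   # assumed fully burdened annual cost of an active-duty officer
RESERVE_ANNUAL = 18_000  # assumed annual cost while in reserve status

def decade_cost(active_years: int, reserve_years: int) -> int:
    """Total ten-year personnel cost for a given active/reserve mix."""
    assert active_years + reserve_years == 10
    return active_years * ACTIVE_ANNUAL + reserve_years * RESERVE_ANNUAL

full_active = decade_cost(10, 0)   # career spent entirely on active duty
mixed = decade_cost(3, 7)          # Kane's example: 3 active years, 7 in reserve
savings = full_active - mixed
print(f"All active: ${full_active:,}; mixed: ${mixed:,}; savings: ${savings:,}")
```

Under these assumed figures the mixed career costs less than half as much over the decade, which is the "better value" Kane's example points to; the real savings would of course depend on actual pay, benefits, and mobilization costs.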


As might be expected, Total Volunteer Force recommends several compensation-specific measures. These include a shift from current “tenure” based pay toward a “role and responsibility” pay structure. Kane argues that “there is no reason to pay a senior O-3 in an easy job more than a junior O-3 in a demanding job.” It’s a convincing point, and aligns with labor principles such as incentivizing those positions upon which the success of the total organization rests.  For DoD, this often means command. One critique of this recommendation is that it fails to take into account the prestige and promotion potential associated with those key positions, and thus risks over-incentivizing them.

Another compensation-based recommendation is to expand pay flexibility. Kane suggests paying servicemembers more to serve in remote and unglamorous assignments. The offered compensation would increase the longer a position sits vacant, until a qualified applicant claims it. Although this logic holds in the private sector, as with the Alaska-based deep-sea fishing job Kane references, DoD has the luxury of assigning personnel where they are needed, regardless of personal preference. Thus, although Kane’s recommendation makes some sense, it could also result in unnecessary expenditures to incentivize labor already within DoD’s purview to employ.
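One way to picture the escalating-pay mechanism is as a simple function of vacancy duration; the base pay, monthly increment, and cap below are illustrative assumptions of mine, not parameters Kane specifies:

```python
# A minimal sketch of an escalating-pay mechanism for hard-to-fill billets.
# The monthly bump and the cap are assumed, illustrative parameters.
def offered_pay(base: float, months_vacant: int,
                monthly_bump: float = 0.02, cap: float = 1.5) -> float:
    """Pay offered for a billet rises each month it sits vacant, up to a cap."""
    multiplier = min(1.0 + monthly_bump * months_vacant, cap)
    return base * multiplier

# A remote billet with a $60,000 base, vacant for a year:
print(offered_pay(60_000, 12))   # roughly 74,400: 60,000 * (1 + 0.02 * 12)
print(offered_pay(60_000, 40))   # cap binds at 1.5x the base
```

The cap is the critique in miniature: without one, a mechanism like this keeps bidding up the price of labor the department could simply assign.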

Kane also wishes to return a significant amount of control over servicemembers’ careers to the individual. One recommendation he makes in this area is the ability to opt out of the “up or out” system of promotion and retention. This would enable a greater degree of individual specialization in certain vocations, such as pilots and tank crewmen, while drawing down labor costs by avoiding unneeded promotions and the corresponding pay increases. This recommendation is one of the most viable Kane gives us, as it is within the services’ current authorities to execute. Moreover, the economic logic seems sound. Why pay someone more money to fill a position they do not wish to fill when they are content at present with less money?


While this is indeed true, economists at MIT have written about the additional necessity of tying promotions to the difficulty of the tasks being performed, and to the worker’s efficiency at performing them. As they note, “the productivity gain of assigning a skilled worker to the difficult task is greater than the cost of the worker obtaining skills.” Conversely, skilled servicemembers who opt out of further promotions will lose the incentive to become more skilled, and the service will forgo the corresponding productivity gains. For example, the Air Force may find itself, for the first time in memory, paying a large number of airmen to remain merely average. This sort of dramatic cultural change must be addressed should the services take on this particular recommendation.
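The condition the MIT economists describe can be written compactly; the notation here is mine, with $y_S^D$ and $y_U^D$ denoting the output of a skilled and an unskilled worker on the difficult task, and $c$ the worker’s cost of acquiring skill:

```latex
\underbrace{y_S^D - y_U^D}_{\text{productivity gain on the difficult task}} \;>\; \underbrace{c}_{\text{cost of acquiring skill}}
```

When opting out of promotion removes the payoff for bearing $c$, the worker no longer invests in skill even where the inequality holds for the service as a whole, and the productivity gain goes unrealized.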

Finally, Kane addresses one of the most pressing labor issues facing DoD in the 21st Century: the critical need for skilled cyber personnel. The scramble for cyber talent in a labor market dominated by tech firms able to pay exorbitant salaries has been covered in many venues, and Kane agrees with the common refrain that DoD must adopt a different set of rules for the cyber workforce. Specifically, he proposes an exemption from Defense Officer Personnel Management Act (DOPMA) standards, which prescribe fixed wage tables and promotion timelines. This recommendation is somewhat challenging to justify in an economic sense.

In economic theory, such a productivity proposition would be articulated using such terms as marginal revenue product, marginal physical product, and marginal cost of the worker. These concepts don’t map well to the DoD cyber workforce, however, since they are based on a capitalistic, for-profit model in which the cost of acquiring skilled workers is all about productivity and profits. That isn’t to say that Kane is necessarily off base, merely that DoD is an outlier due to its lack of a profit motive. Economic theory does support two other important aspects of Kane’s recommendation, the current skill premium associated with the cyber workforce, and the corresponding inelastic supply of highly skilled cyber personnel.
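For reference, in the standard for-profit model these terms fit together as follows (textbook notation: $MPP_L$ is the marginal physical product of labor, $MR$ marginal revenue, and $MC_L$ the marginal cost of the worker); a profit-maximizing firm hires until the marginal revenue product of labor equals its marginal cost:

```latex
MRP_L = MR \times MPP_L, \qquad \text{hire until } MRP_L = MC_L
```

Because DoD sells no output, $MR$, and with it $MRP_L$, has no natural analogue, which is precisely why the mapping breaks down.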

Overall, Tim Kane’s set of recommendations is both practical and well reasoned. As a recent interview with economists at West Point in this very blog pointed out, the military’s industrial age personnel system is ripe for an overhaul. Kane’s so-called “blueprint” is one of the most convincing and cohesive versions set forth thus far. Whether DoD has the appetite to take on such an ambitious project at this stage is anyone’s guess, but it would certainly do well to use Kane’s work as an architectural vision, if not its final blueprint.


Study to Win: The Economic Pre-History of Three Prominent Strategists

As the saying goes, “You don’t buy insurance when you need it.”  The same can be said for education.  One never knows when a particular bit of information, a theory, or an experience will prove important later.  For some of history’s greatest strategists, the economic education and experience gained in their formative years made a significant difference in their later performance.  Sometimes this education was formal, as in the case of Edward Luttwak, the prominent writer and consultant to the U.S. military who also studied at the London School of Economics.  Sometimes it resembles kismet, as in the case of military strategist Antoine-Henri Jomini’s earlier participation in the Congress of Vienna, in which the power balance of the European continent was established along geographic, military, political, and economic axes.  Today we will investigate the formal and informal economic preparation of three prominent strategic thinkers: Winston Churchill, Julian Corbett, and Fabius Maximus.


Perhaps the most famous name on this list, Churchill seemingly fit a dozen fascinating lives into just one: Prime Minister of Britain, prominent writer, co-architect of victory in WWII, painter, soldier, Minister of Defense…the list goes on.  It comes as little surprise that he had several notable brushes with economic education (and practice) during his remarkable life.  Although we could look to his service in Parliament and time spent poring over budgets, taxation, and defense spending as a means of explaining his later strategic genius, we will instead start earlier, beginning with his structured self-education as a young officer in India.

Churchill’s success as a strategist is all the more striking given his observation that, “I am always ready to learn, but I do not always like to be taught.”  Fortunately for Churchill, he found in himself a brilliant teacher.  While in Bangalore, he devoured such works as Plato’s Republic and The Decline and Fall of the Roman Empire, a tome about which biographer Jonathan Rose writes, “[Churchill] found the huge book a useful guide to how not to run an empire.”  (Emphasis mine.)  Rose notes that Churchill was forever influenced by one particular lesson in the book: that one must preserve peace through constant preparation for war.


Additional self-education in the work of economic theorists such as Adam Smith, John Stuart Mill, and Frederic Bastiat spurred Churchill to think and write about economic issues, developing, as Rose asserts, a deep aversion to monopoly capitalism and a belief in antitrust measures as a means of preserving individual freedom against “formidable combinations of capital.”  His interest and education in economics didn’t end in India, however, reaching their pre-war zenith during his 1924-29 tenure as Chancellor of the Exchequer, the British equivalent of the U.S. Secretary of the Treasury.  In this role, Churchill had ample opportunity to see economic theory meet practice, despite his observation regarding economics that he was “quite uneducated on the subject, but had to argue about it all my life.”  For a novice, he acquitted himself quite well amid the deep uncertainty of WWII, famously arguing against an early invasion of the European mainland for fear that insufficient resources existed in 1942 to achieve a lasting victory, and preparing for the postwar re-integration of servicemembers into the British economy…two years before the Allied victory!


Unlike Churchill, Julian Corbett’s contributions as a strategist remain rooted in the world of theory.  A fellow Brit, Corbett followed a highly improbable path to immortality as a strategic thinker.  For one, he never served a day in the British navy, living out life as an attorney until 1896, when fate intervened in the form of a request from historian John Knox Laughton to edit documents dealing with the Royal Navy’s combat performance in the 16th-century Spanish War.  It was at that point that Corbett found his calling, soon transitioning to influential treatises on naval strategy and lectures at the British Royal Naval College in Greenwich.  One overlooked aspect of his transition from legal also-ran to prominent naval strategist was his time as a staff writer for the Pall Mall Gazette, a daily London newspaper of the era.

It was during this time that Corbett was dispatched to cover the Dongola Expedition occurring in Sudan.  Interestingly, his travel companion for the journey was Sherlock Holmes’ creator, Arthur Conan Doyle.  Corbett and Doyle witnessed a conflict heavily influenced by economic circumstances, as the British Empire sought to assert its dominion over Sudan, largely via its influence in the Egyptian government.  The conflict persisted over a considerable period, with the victorious British and their Egyptian proxies establishing an “Anglo-Egyptian Sudan” in 1899.  The prize for the British?  A straight north-south line of economic influence running from the Suez Canal, along the Nile in Egypt, and onward through Sudan where the Blue and White Nile converge.


In retrospect, then, it is no wonder that Corbett’s subsequent writing about naval strategy included a heavy emphasis on the linkage between commerce on water and operations on land.  Corbett, perhaps more directly than his American counterpart Alfred T. Mahan, argued the need for maritime and land based forces to work in a mutually supporting manner, recognizing that blockades and other maritime tactics only succeeded as a part of a broader campaign.  Fourth Generation Warfare theorist William Lind argues that the modern U.S. Navy would do well to heed Corbett’s advice, though in his opinion the Navy still prefers the Mahanian model of large ships projecting power globally, with grey-hulled titans perpetually preparing for decisive battle.  Lind frames the dichotomy thusly, “Were the U.S. Navy really to turn to Corbett, it would build lots of ships designed for operations in coastal waters and on rivers, often with troops on board. But such ships are small ships, and the U.S. Navy hates small ships.”  Would Corbett, having personally witnessed an empire struggle to secure the flow of commerce in far off lands, endorse this tendency?  We can only speculate…


The birth of economic theory is often traced to Adam Smith’s highly influential treatise of 1776, The Wealth of Nations.  This of course does not mean that economic behavior was absent before its publication, merely that the tools to describe and analyze such behavior were not yet fully defined.  Fabius Maximus, one of history’s great captains, may have missed the birth of economics by almost 2,000 years, but his understanding of its principles was excellent.  Most evident in his campaign against Hannibal Barca was his mastery of the most basic concept of them all: supply and demand.

The Fabian Strategy as we know it today is often referred to as a “war of attrition”: the gradual exhaustion of the enemy’s will or ability to fight.  This contrasts with a direct approach wherein adversaries’ relative strength is tested in decisive battle.  During the Second Punic War, Fabius understood that his army was no direct match for Hannibal’s, so he fashioned himself into a pest, opportunistically preying on Hannibal’s extended supply lines and setting fire to crops to sap his opponent’s ability to fight.  It was a brilliant approach.  In addition to the materiel weakness it inflicted, Hannibal’s seemingly unstoppable army was left unable to exert its prowess in pitched combat, most notably as it attempted to draw Fabius into battle in the seemingly Roman-favorable terrain of Apulia (the Puglia region of modern Italy).  Fabius’ strategic patience was rewarded for a lengthy period, until his dictatorship ended and Rome succumbed to tactical temptation at Cannae, where the consuls Varro and Paullus were lured into the very sort of decisive fight Fabius had long avoided.

So did Fabius fumble his way into his famous stratagem out of necessity?  Has history conferred too much credit on a cowardly commander, as his critics argued?  Probably not.  First, it is worth noting that Fabius was a well-seasoned hand by the time he was named dictator of the Roman Republic (an emergency position imbued with strong, central control).  Fabius had served in several governmental positions before developing his eponymous strategy, most notable of which was his tenure as Roman Censor.

As the name implies, Censors were responsible for conducting a popular census, but also with numerous financial responsibilities.  These included the collection of taxes, the sale of real property owned by the Republic, and the development and oversight of new construction projects.  In all, a tremendously complex undertaking for a vast empire, and one constantly concerned with sustainment: of finances, of material resources, of economic prestige, and of military might.  Is it any wonder then that when in command Fabius conducted a quick calculation of his and Hannibal’s relative combat power and elected to match Roman strengths with Carthaginian weaknesses?  His service as Censor certainly granted him an uncommon foresight and ability to “budget” his combat power across time.  As Sun Tzu observed, “The general who wins the battle makes many calculations in his temple before the battle is fought.”  Fabius had been calculating far longer than most.


The views expressed in this blog post are those of the author and do not reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. Government.

Outrageous Fortune: Spears and Arrows

In February of 2015, the dais was packed with Iranian officials called together to witness an exclusive event.  They were among the select few invited to a highly orchestrated naval exhibition meant to demonstrate the viability of Iranian “swarm” tactics against a U.S. aircraft carrier.  Their patience was rewarded with a mock engagement in which the static, knockoff carrier was beset on all sides by fast attack craft, its self-defense confounded by the multiple angles of attack.  Although the results offered a highly optimistic assessment of Iranian capabilities, the broader concept at play generated much analysis.  Pundits asked: is the era of “large, expensive, and few” military platforms drawing to a close?  If so, why?

Upon examination, there are separate tactical and economic issues driving combatants to explore asymmetric tech.  Tactically, it remains to be seen if sophisticated platforms can truly be overwhelmed by multiple, smaller attackers.  Economically, the world’s militaries are pondering whether investments in large and expensive platforms such as main battle tanks and advanced piloted fighters still deliver comparatively good value versus cheaper, more rapidly produced pieces.  I’ll call these spears (big, expensive) and arrows (smaller, cheaper).


Significantly, the second dilemma above does not necessarily hinge on the first.  Regardless of the shifting tactical strengths of spears and arrows due to changes in technology, advanced militaries may also independently decide that the procurement, manufacturing, manning, and sustainment of spears is not a good investment compared with the cumulative effect of many arrows.  Lose one spear, and you’ve suffered a significant tactical and economic hit.  Lose multiple arrows, and the attack continues.  Thus, we have tandem considerations, tactical and economic, that must be squared.
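The asymmetry between losing one spear and losing some arrows can be made concrete with a toy portfolio comparison; the unit costs and loss probability below are assumed, illustrative figures:

```python
# Illustrative "spear vs. arrow" portfolio math; all costs and loss
# probabilities are hypothetical figures chosen for the sketch.
SPEAR_COST = 1_000_000_000   # one large, exquisite platform
ARROW_COST = 10_000_000      # one small, cheap platform
BUDGET = 1_000_000_000       # identical buy for either portfolio

spears = BUDGET // SPEAR_COST    # 1 spear
arrows = BUDGET // ARROW_COST    # 100 arrows

p_loss = 0.3  # assumed chance any single platform is lost in an engagement

# The proportion of the force expected to survive is the same either way,
# but the chance of losing ALL capability in one engagement is not:
p_all_lost_spear = p_loss ** spears    # a single hit ends the fight
p_all_lost_arrows = p_loss ** arrows   # vanishingly small: the attack continues
```

The expected attrition is identical in proportion for both portfolios; what differs is the variance, and with it the risk that one engagement wipes out the entire investment.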

Unfortunately, much of what is written about the spear vs. arrow phenomenon focuses exclusively on the tactical aspect, with less attention paid to the broader economic context.  This is an incomplete picture.  Consider the current Russian frustrations with the next generation stealth fighter.  Production issues have created multiple delays to date, order quantities have been slashed, and economic sanctions have forced Moscow into alternate, more disruptive means of financing.  These frustrations have occurred before a single aircraft has taken flight or been tested against a potential swarm attack.  Might these compounding economic hindrances force Moscow toward an arrow solution?

The forward thinkers at DARPA are already seeking to marry solid tactical tools with economically supportable production by harnessing the effect of current sophisticated manned aircraft at a fraction of the cost.  A Popular Science article quotes a DARPA project manager on the advantages of firing scores of arrows at an adversary: “We wouldn’t be discarding the entire airframe, engine, avionics and payload with every mission, as is done with missiles, but we also wouldn’t have to carry the maintainability and operational cost burdens of today’s reusable systems, which are meant to stay in service for decades.”  Thus, the modern marriage of viable tools and economic practicality is drawing near.


Source: T.N. Dupuy, “The Evolution of Weapons and Warfare”

A note on the “modern” part.  Viewed from a historical perspective, it is clear that the Iranian military, Russia, and DARPA are not facing unprecedented challenges.  Strategic thought suggests that victors align strategy, operations, and tactics in mutually supporting fashion, with tactics subordinate to strategy.  History also suggests another, more direct imperative: the marriage of one’s tactical tools and one’s economy.  This is not an all encompassing strategic ends-ways-means formulation, but a more direct reconciling of one’s tools, their viability in a modern context, and one’s ability to generate and sustain them.

The Vietcong were quite proficient in their understanding of this principle.  In the book Inside the VC and the NVA, authors Michael Lee Lanning and Dan Cragg explore dozens of VC attributes, from organization to rations to equipment.  The lessons of the Tet Offensive were particularly effective in teaching the North Vietnamese the important harmony between effective tactical tools and economics.  The authors recount, “[Tet] was a reminder that it was better to be occasionally hungry or short of ammunition in a jungle hideaway than to have a full stomach and extra ammo and attempt to slug it out with Allied units toe to toe”.

North Vietnam was not awash in internal financial resources.  It also lacked the numbers and the equipment to fight organized U.S. formations in the open.  Thus, it selected an effective tactical and economic union – an affordable mix of land mines, small arms, and booby traps – to help wear down U.S. resolve.  Commitment to this approach was fully nested throughout the economy to maximize what advantages North Vietnam did possess.  Labor was one asset, with defense plant workers toiling 12 to 16 hours a day for ten days at a stretch.  External economic support from the People’s Republic of China, some $225 million in 1967 alone, also helped.  Despite these advantages, the North Vietnamese successfully resisted the temptation to push their tools and tactics into unsustainable areas.

The linkage of tools and economics is not just an imperative for the economically disadvantaged.  Germany in WWII had a robust economy, larger as of 1939 than that of any other combatant besides the U.S., yet its military still made multiple blunders.  The German ballistic missile program provides a fascinating illustration of one such miscue.  Michael J. Neufeld’s book The Rocket and the Reich details the “munitions crisis” that Germany brought upon itself by simultaneously pursuing the tank-intensive Polish campaign, massive domestic construction projects, and long-range rocket production.  The constraining factor?  Steel.  There simply wasn’t enough supply to meet the demands of the ambitious German agenda.  The missile program in particular was a dangerous obsession for German leaders, who were more concerned with whether they could develop reliable ballistic missiles in time to win the war than with whether, given competing demands, they should.  Thus, in contrast to the VC, there was a central disharmony between the tools deemed necessary to win and the economic output required to supply them.

The lessons we can draw are simple, yet as important as ever for planners to understand.  The factors of tools, technology, and economics engage in a perpetual dance, luring combatants to emphasize one ahead of the others.  In fact, all must be considered in concert.  A seemingly can’t-miss advancement in technology should not be pursued if it will lead to economic ruin.  Alternatively, a robust economy does not immunize one against the technologically simple tools of an adversary.  Planners must balance tools, technology, and economics to determine an optimal mix, ready to trade reliable but outmoded spears for arrows when the next conflict dictates.



Interview with a Personnel Visionary

I recently had the chance to interview Mr. Mike Colarusso of the United States Army’s Office of Economic and Manpower Analysis (OEMA), a leading organization in the study of military-economic issues.  Colarusso, a retired lieutenant colonel, historian and research analyst, has co-authored a compelling series of papers for the U.S. Army War College’s Strategic Studies Institute with Colonel Dave Lyle, a professor of economics at West Point and director of OEMA.  Their publications apply economic principles to the military’s industrial age personnel system. 

Topics include talent-based branching of newly commissioned officers, the optimization of retention tools and bonuses, and the management of senior officer talent.  All are worthy of serious study for those interested in how the military must adapt its personnel system to meet the demands of the 21st Century.  MILopoly wishes to thank LTC Raymond Kimball for his assistance in setting up the interview. 

The views expressed are personal in nature and do not reflect those of the Department of Defense, U.S. Army, or USMA.

MILopoly: Your writing focuses on optimization of the human capital of the Army, though much of the focus is on the officer cohort.  Could you talk a little about your vision on how that group is optimally complemented by Army NCOs, Warrants, and Civilians?

Colarusso: Our initial focus upon active component officers began almost a decade ago.  It stemmed from an analysis of the challenges confronting that particular segment of the Army workforce, challenges that both Congress and the larger defense establishment were keen to understand and meet.  Because active duty officers exercise an outsized influence upon policies across the total force, we believed this was a good place to begin. Getting talent management right with this population would create program and policy leaders who could extend the benefits of talent management to the other segments of the Total Army workforce – Guard, Reserve and Civilians.

That said, while many of the talent management principles we prescribe for officers generalize to any workforce, each Army workforce segment is confronted with different challenges, so attempting to apply talent management solutions uniformly to all of them could do as much harm as good.  Optimizing performance via talent management begins and ends with a root cause analysis of the problems confronting a group of workers – what’s making them less productive, and why? The Army is now engaged in that type of granular analysis of its other workforce segments.

What are your thoughts on the lateral hiring of civilian executive talent into senior Army flag officer billets?  Would Army culture support such a move?

While we think there are tremendous opportunities to increase lateral entry into the officer ranks as late as mid-career, we wouldn’t recommend it in competitive category positions at the most senior levels.  It’s not a question of whether the culture would accept it or not, but rather an acknowledgment that the Army is a land combat profession.  Its senior leaders are the stewards of that profession. If they aren’t onboarded into the profession as young professionals, if they aren’t given the opportunities to internalize the values and ethos of the profession and to understand and master the tenets of military leadership, we believe that they would be severely disadvantaged as senior leaders.

This long employment experience also shapes the utility senior officers derive from the intrinsic rewards of military service, something we believe would markedly differ with laterally entered civilian executives. Lastly, “just in time” hiring of senior executives would in some ways remove the institutional incentive to thoughtfully develop and manage officers for a career of service culminating in senior leadership.  What we would advocate to increase expertise in the senior ranks is to end the practice of treating all generals as “generalists” and afford them the developmental opportunities and continuing higher education needed to make them more effective enterprise leaders. We also think that succession planning and increased tenure in positions of strategic import would give flag officers the time span of discretion needed to implement strategies and lead change.

Could you expound on what you mean by enterprise leaders?  Would this mean grouping by “tracks”, similar to those junior officers follow within their branches?

By enterprise leaders, we mean those who run the Army and are stewards of the Army Profession.  Above and beyond the ability to lead or think strategically, this level of work also requires deep functional expertise in budgets, logistics, economics, law, public affairs and outreach, acquisition, research and development, human capital management, politics, statecraft, civil-military relations, foreign affairs, society and culture, regional and global threats, etc.

To get the right leaders to the top of the profession requires inventorying officer talents from commissioning forward, which over time could then be better aligned within the branches, functional areas and career fields.  But we’d suggest not allowing such formal designations to handcuff the officer management system.  If officers’ innate abilities, personal experiences, family or cultural background, volunteer endeavors, hobbies or education suited them particularly well for work outside of a formal branch or career field, the Army should be able to assign them accordingly. You could think of this as “talent-based” succession planning – not choosing officers for positions, but instead selecting them for the work at hand.


In your article on military compensation, you suggest a new rubric for optimizing the military pay structure.  As you developed that product, what were your assumptions on how the external labor market influences officers’ career choices at the Colonel and General Officer levels?  Do you believe that most officers tend toward inertia and “riding it out” by awaiting promotion results before considering career moves, or do you feel that there is a constant mental calculation about whether to jump ship for the civilian sector after a certain point in one’s career?

I’m the non-economist of our group, so please consider that as you read my answer, but at the moment it’s hard to know which senior officers are a flight risk due to the information asymmetry disadvantaging the Army.  As an institution, it doesn’t know its individual employees particularly well, which makes wage contract negotiations perilous.  The Army can eliminate this asymmetry via diligent talent management, but until then it can only guess at what’s in the minds of its senior officers.  To make that guessing game a little less risky, we should consider three things:

First – the officer population is highly heterogeneous. Therefore, do not assume that officers value all benefits equally (or cash above all).

Second – rational choice theory suggests that to retain officers with high opportunity costs, intrinsic rewards matter, as senior officer pay and benefits are generally outstripped by private sector compensation packages.  Talent alignment, allowing officers to do meaningful work that they are actually suited for, goes a long way towards providing the intrinsic rewards of military service that other employers would be hard pressed to match.

Lastly – redesign the current military pension and benefits system because it actually encourages human capital flight once an officer becomes retirement-eligible: the benefits kick in too early.

Also in that article you suggest a more rigorous approach to Professional Military Education (PME), in both selection and curriculum.  Do you feel that there is a risk of burnout for officers already expected to navigate a challenging sequence of Key Developmental Positions en route to compete for senior positions?

No. We actually speak about this at more length in Chapter 6 of Senior Officer Talent Management: Fostering Institutional Adaptability (SOTM, 2014), and we tie it to other talent management proposals. In a nutshell – the notion of using accredited, degree-granting institutions as a “take a knee” moment in an otherwise fraught career timeline is the polar opposite of what’s needed in a profession.

An institution of higher learning with virtually a 100% graduation rate is either doing an outstanding job of screening its attendees or is doing a lackluster job and then dropping course standards to the lowest common denominator.  In the case of PME, evidence suggests it’s the latter (see SOTM).  PME is an incredible opportunity to learn a great deal about our people’s talents while simultaneously putting them through their intellectual paces.  It’s also a missed credentialing opportunity: as signaling theory explains, a credential with no rigor tells us nothing about the productive potential of the person holding it.  Professions certify and credential their people.


While on the topic of PME, I have long thought that an education in the principles of economics provides an excellent platform for military officers to conceptualize the decisions they face.  From understanding the economics of a country or region in which they are operating, to understanding the concept of opportunity costs as they relate to plans and programs, economics is a multifaceted tool.  Given the opportunity, would you inject more economics into the PME curriculum?  If so, where and how?

Specific PME curriculum design is not something we’ve devoted an outsized share of our thinking to, but yes, we’d agree that in the Information Age the ability to “think like an economist” is useful, even necessary. Increasing access to high quality data makes it possible for senior leaders to have a better command of the facts when making decisions. Thinking analytically, intellectual curiosity, the ability to develop and test hypotheses or to conduct root-cause analysis of problems – these are talents we should cultivate in our senior leaders.

We want them to exercise their subjective judgment (if not, we’d suggest having robots run the Army). But we want them to have access to objective, methodologically valid data when doing so.  And we want them to challenge underlying assumptions by asking the right questions.  PME redesign that engenders or unlocks these talents would be welcome, although it’s possible to hone these abilities in other ways.  In particular, we think resident, graduate-level education at top-tier civilian universities should become a more integral part of “professional military education.”

Your article on creating a more effective regional alignment strategy was released in 2014.  Since that time, the priorities of the Army have shifted toward a heavy emphasis on readiness, even as the service remains mired in an industrial age personnel management model.  Given the challenges you enumerate, and the seeming disconnect between ready units and conducting non-Mission Essential Task List (METL) regional engagement activities, do you feel that the Regionally Aligned Force (RAF) construct is sustainable?

As I think we touched upon in that article, in the last thirty years the Army has alternated between two approaches – the regionally aligned forces model during the reasonably steady equilibrium of the post-WW II Cold War era, and the modular “plug and play” approach of the post-9/11 world.  Today’s RAF approach in some ways seems to be a hybrid of these earlier approaches. Is it sustainable in a world that changes as rapidly as it does today?  I don’t know.  I do know, however, that any approach to meeting regional threats has an exponentially better chance of success if the Army has a full inventory of the talents already resident in its labor force.  This is a critical risk management tool in an uncertain world.  If you can see your talent, you can organize it rapidly to meet unforeseen global contingencies.

In that same article, you state that “ARFORGEN fails to appreciate that despite standardization, each BCT is a unique collection of indi­viduals. Its outsized focus upon ‘plug and play’ in­terchangeability fails to leverage that uniqueness.”  How can the Army address the institutional need for some degree of unit standardization (i.e. doctrinal support, training standards) while getting the most out of its people at the individual level?

In the talent management taxonomy that we’ve proposed for the Army, we suggest that there are in essence two tiers of talent acquisition, development, employment and retention. The first tier is baseline – if you want to be an infantryman, for example, you need to be able to shoot, move and communicate, to have a level of physical fitness almost on par with a professional athlete, and to multi-task and problem-solve in the “crowded hour.” But to undertake an infantry military training team mission in Ukraine, a second tier of necessary talents might include Eastern European cultural fluency, an understanding of the history and geo-politics of the region, experience conducting operations in marshlands like those of Ukraine’s Pripyat, etc.

A uniformed service is always going to have a baseline of uniform standards, but the complexity of the global environment demands we layer specialized talents upon these. As a nation of immigrants that’s continental in scope and possessed of the world’s best university-level education system, we’re pretty fortunate.  Our Army is more likely than those of our adversaries to possess the heterogeneous talents needed to respond to complex regional challenges.  But we need to be able to see those talents, and we really can’t at the moment.

Out of curiosity, in your research have you come across examples of other militaries, today or yesterday, that have gotten the talent management proposition “right”?  Something for the U.S. to borrow from?

Well, when you consider the way General Marshall managed officers while presiding over a 40-fold increase in the force during World War II, you definitely see some of the talent management principles we espouse in operation.  Valuing continuing education, differentiating officers by talent and then assigning them to positions based upon those talents rather than seniority, looking down the bench to mid-career officers for potential generals, pushing through significant personnel management adaptations in response to an existential threat – these are hallmarks of a talent management approach.

As for other militaries, most western armies manage their people in somewhat similar fashion to ours, although there are of course innovations here and there that we could certainly benefit from.  But there’s no one army that stands out as the “talent management exemplar,” at least not in the research we’ve done so far.  It’s pretty clear, however, that potential adversaries are taking new approaches to human capital management.  The evidence is in their increased military capabilities.  The Chinese are pivoting from a mechanization focus to an information age approach, with heavy emphasis upon STEM education.  The Russians are leading in certain aspects of armor, missile and fighter aircraft design. The Iranians can hack into RQ-170 Sentinels. The North Koreans are developing a true nuclear ballistic missile capability.  And all of these nations also possess sophisticated cyber warfare capabilities.

These developments show that potential threat forces are operating closer to the military technology frontier than ever before.  While open markets and espionage have certainly contributed to their technological advancements, the Chinese and Russians in particular do not seem content with mimicry of western technologies.  They are working to create truly innovative militaries, thought leaders in their own right.  The true revolution in military affairs isn’t a “mil-tech” one but a “skill-tech” one. We can’t maintain our technological ascendancy unless we maintain and exploit the human capital advantage a free society always enjoys. That’s why a talent management approach is so critical.

Finally, the Army has been notoriously bad at predicting the next conflict.  How can we best select, train, and educate the next generation of officers to operate in a future yet unwritten?

A couple of things leap to mind. One, provide more higher education – teach officers how to think, not what to think. Two, provide career paths tailored to each individual. Don’t keep running to developmental corner solutions, or you’ll end up with an officer corps so homogenous in thought and ability that it can’t meet a range of challenges outside of its comfort zone.  A variety of talent is the best way to reduce risk in an uncertain world.


The Problem with 4%

As the U.S. presidential debates approach, some familiar budgetary figures will re-emerge in the public consciousness.  One of these is the common expression of military spending as a percentage of Gross Domestic Product (GDP).  For nearly 30 years this figure has hovered within a point or two of the 4% mark (usually much closer), regardless of revenue fluctuations or military spending trends elsewhere in the world.  Think tanks are fixated on this linkage, devoting multiple editorials to the topic.  But is defense spending as a percentage of GDP a good metric to use as an immutable “standard”?  There are multiple reasons to suggest that this reductionism is a bad idea.

The first problem with the GDP-Defense link is that GDP provides a deceptively simple glimpse into actual outlays.  Why?  The seemingly straightforward figure, the almighty 4%, actually fluctuates significantly when expressed in normalized dollars spent.  This chart helps to portray the divergence:


Note the green line.  Since 1988, it has veered fairly widely despite the relative stability of the GDP percentage (blue line).  Immediately we can see that the perceived stability of 4% is an illusion when considering the actual expenditures going out the door.  The difference is attributable to the size of the overall economy, a moving figure.  After all, 4% GDP of a hypothetical “boom” year for the U.S. economy will be greater than a “bust” year the next.  Thus, using 4% as a reference point across multiple years is an imprecise technique at best.  Here is GDP size since the 1960s for reference.  It is quite easy to see how different the value of 4% would be across this period of time!
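The underlying arithmetic is easy to verify for yourself.  The GDP figures below are hypothetical round numbers chosen purely for illustration, not actual economic or budget data:

```python
# Illustrative only: how a fixed 4%-of-GDP share translates into very
# different dollar outlays as the economy grows or contracts.
# The GDP figures are hypothetical round numbers, not actual data.

DEFENSE_SHARE = 0.04

gdp_by_year = {        # hypothetical GDP in trillions of dollars
    "boom year": 20.0,
    "bust year": 17.0,
}

for label, gdp_trillions in gdp_by_year.items():
    outlay = DEFENSE_SHARE * gdp_trillions * 1000  # convert to billions
    print(f"{label}: 4% of ${gdp_trillions:.1f}T GDP = ${outlay:.0f}B for defense")

# The same "stable" 4% share yields a $120B swing in actual spending.
```

A $3 trillion swing in the economy moves the "constant" 4% outlay by $120 billion, which is why the green and blue lines in the chart diverge so sharply.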


Second, the external security environment is glossed over when we default to a 4% barometer.  The so-called “peace dividend” at the end of the Cold War is one such example of a sea change in the external environment that led to a reevaluation of national priorities.  Defense spending was seen as more fungible at the time given the collapse of the lone external great power threat, bringing about a 2.2 percentage point reduction as a share of GDP, the last “major” move.  One external event that resulted in virtually no reconsideration, meanwhile, is NATO’s commitment that each member spend at least 2% of GDP on defense (admittedly, most members currently fall well short of the target).

Looking forward, an external threat could emerge requiring increased commitments of defense materiel and manpower.  Alternatively, a replay of the 2008 global financial crisis could constrain resources while driving stimulus spending outside of defense.  Either scenario would change the interplay between U.S. security needs and the resources necessary to meet them.  Meanwhile, the 4% benchmark would prove a drawback in two ways: ignoring the volatility in the environment while serving as a political football that constrains hard choices.

In a related area, the final problem with the 4% link is the inertia that it breeds.  Robert Mihara wrote a thought-provoking critique in the always great Infinity Journal of the Army’s threat-based planning and resourcing model, offering a “value-based” proposition drawn from the auto industry as a feasible alternative.  As Mihara states:

“The Army’s strategic planners must not only capture the unique contribution of the Army vis-à-vis its sister services, but the planners must also evaluate the Army’s contribution to the national interest against those of civilian governmental and non-governmental organizations. It is here that the articulation of the Army’s relative value brings its leaders back to properly consider the opportunity costs their nation ought to pay in exchange for the kind of security uniquely obtained through Army landpower.”

Mihara’s argument can be applied a step above the service level as well.  Shouldn’t DoD be more in the habit of continually reevaluating its value proposition for the nation in relation to the other slate of agencies, departments, and even NGOs or businesses operating in its sphere of influence?

To be clear, this is not meant as an endorsement of DoD budget increases or decreases, but rather as a challenge to avoid the kind of prescriptive, status quo thinking that a “4% mentality” inevitably creates.  Granted, any DoD budget request outside of this now-standard 4% norm would require a significant messaging effort and intragovernmental approach, but the nation should expect no less than a serious, continual reevaluation of one of its most trusted, and best resourced, governmental agencies.

Number Crunch


The views expressed in this blog post are those of the author and do not reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. Government.

The Problem with Narrow Framing


Union Major General David Hunter could scarcely believe the sight he encountered on the afternoon of June 17th, 1864.  His Confederate rival, Lieutenant General Jubal Early, appeared to have amassed a much larger defensive force than anticipated, with Early’s resupply trains even now humming along at a brisk pace late in the day, noisily offloading ever more grey-clad troops and gear.  Hunter even thought he could hear the roar of Confederate bands playing away inside the city, an odd bit of nonchalance given the stakes: control of the whole Shenandoah Valley.  It seemed Hunter’s intended target of Lynchburg, Virginia would be a costlier prize than he had bargained for.  Not only were Confederate troops more concentrated and better supplied than the token force he expected, their defenses seemed better prepared as well.

Still, he decided to plunge ahead with the attack on the 18th, personally leading a charge around the redoubts arrayed southeast of the city.  He sent Major General George Crook’s division to probe westward, but Crook encountered unfavorable terrain and large numbers of Confederate infantry and got nowhere.  Worse still, Early’s forces saw an opening amid the confusion and launched a counterattack that chased Hunter’s forces until nightfall before withdrawing.  Seeing no benefit in potentially losing more men and time against long odds, Hunter declined to press Lynchburg’s defenses again and began the long march into West Virginia.  As darkness swallowed the scarred terrain, Union losses outnumbered the Confederates’ by a factor of 10 to 1.

We know now that Hunter was duped.  Early’s men had been instructed to do everything possible to make their numbers seem larger, a ruse that paid off even better than expected.  Hunter, relying on inaccurate information, ignoring the status of the overall campaign, and overvaluing the importance of keeping his force at full strength, had based all of his subsequent actions on false foundations.  Today we would observe that he had succumbed to a kind of narrow framing: the weighing of one’s own choices separately from the bigger picture.  Unfortunately for Hunter, the stakes of the engagement were not isolated, and his timidity helped the Confederacy cling to life for a number of months as Early’s forces operated freely in the vital Shenandoah Valley corridor.



Behavioral economist Dr. Richard Thaler of the University of Chicago researches narrow framing, and was instrumental in classifying its character and causes.  In his 2015 book Misbehaving, he cites a number of diverse examples, including a staff lecture he gave to the executives of a large company in the print media industry.  Briefly summarized, the midlevel execs in the room were each presented with an opportunity that would either make a profit of $2 million or lose $1 million, each with a 50% likelihood.  Independently, the execs proved extremely risk averse, with only 3 of 23 taking the bet.

When the CEO in the back of the room was asked how many of the ventures he would accept, his answer was conclusive: all of them!  Astute statisticians can appreciate why: each bet carries an expected value of $500,000, so accepting all 23 would be expected to net a tidy $11.5 million.  The midlevel execs were overly compartmentalized in their perception of the total risk level and unable to look beyond an immediate fear of losing “their” bet.  The CEO, meanwhile, viewed the risk-reward proposition across a broader spectrum and was happy to capitalize on a strong overall likelihood of success.
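The CEO's logic can be checked in a few lines.  This sketch uses only the payoffs stated in Thaler's anecdote:

```python
# Expected-value arithmetic behind the CEO's answer, using the payoffs
# from Thaler's anecdote: each venture wins $2M or loses $1M at 50/50 odds.
from math import comb

WIN, LOSS, P = 2_000_000, -1_000_000, 0.5  # payoffs and win probability
N = 23                                      # number of independent bets offered

ev_per_bet = P * WIN + (1 - P) * LOSS
print(f"Expected value per bet:   ${ev_per_bet:,.0f}")
print(f"Expected value of all {N}: ${N * ev_per_bet:,.0f}")

# Chance the whole portfolio loses money: the total is negative only
# when fewer than a third of the N bets are winners (k wins, N - k losses).
p_portfolio_loss = sum(
    comb(N, k) * P**k * (1 - P)**(N - k)
    for k in range(N + 1)
    if k * WIN + (N - k) * LOSS < 0
)
print(f"Chance the {N}-bet portfolio loses money: {p_portfolio_loss:.1%}")
```

Each executive, looking at a single bet, faces a 50% chance of a loss; pooled across all 23 bets, the probability of an overall loss collapses to under 5%, which is precisely the broader frame the CEO applied.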



So how does narrow framing impact military command?  As Maj. Gen. Hunter’s unfortunate experience demonstrates, it is first important for commanders to understand not just their own circumstances, but how they fit into the larger picture.  A counterpoint to Hunter’s shortsightedness can be found at Gettysburg, specifically Joshua Chamberlain’s leadership of the 20th Maine at Little Round Top.  Chamberlain, in contrast to Hunter, knew the vital importance of his place in the overall Union line, and heeded at all costs his commanding officer’s admonition to hold his unit’s position.

Through charge after charge, the 20th Maine fought, finally counterattacking with bayonets drawn when ammunition was depleted.  This was more than just a demonstration of intestinal fortitude.  Chamberlain had rightfully elevated his perception beyond the narrow engagement with the Confederates assaulting his position, instead looking out and around at the bigger picture, and adjusting his risk calculation accordingly.  Had Chamberlain framed his win-loss proposition purely from the perspective of his own unit, his decision to spring headlong into the Confederate line would have been illogical; given the broader picture it was essential.

This highlights the second lesson of narrow framing: overall risk must be continuously recalculated when contemplating a new risk-reward proposition.  Rarely are these propositions as “all or nothing” in nature as Chamberlain faced.  In fact, commanders most often deal in incremental change.  Thus, it is important to remain continually mindful of how one’s decisions imperceptibly tip the scales over time toward or away from desired outcomes.  So too must the actions of one’s friendly forces be weighed, and in turn offset.

The final lesson on narrow framing is simple in theory but more challenging in execution: understand your biases and how they square with the reality of your circumstances.  The behavioral economics literature tells us that loss aversion is a stronger motivating factor than the desire to gamble for favorable results.  Is this true for you?  Your boss?  Your adversary?  All are important to consider when seeking to optimize outcomes.  Imagine if the most innately talented baseball players among us acted on their “loss aversion” given a higher likelihood of failure (an out) than success (a hit).  A 67% failure rate might seem like a discouraging figure, enough to consider taking up another sport.  In reality, though, a batter hitting .333 would be an asset to most teams, and a 33% success rate shouldn’t discourage participation.  Understanding biases and the extent to which they conform to the situation is essential to making optimal decisions.
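A few lines of arithmetic make the bias concrete.  The hit value, out cost, and loss-aversion coefficient below are illustrative assumptions, not measured baseball valuations; behavioral studies typically find losses weighted roughly twice as heavily as gains, which is where the factor of 2 comes from:

```python
# Toy illustration of loss aversion using the batter example above.
# All payoff values are assumed for illustration only; LAMBDA ~ 2 reflects
# the rough magnitude of loss aversion reported in behavioral economics.
HIT_RATE = 1 / 3      # a .333 hitter
HIT_VALUE = 3.0       # assumed value of a hit to the team
OUT_COST = 1.0        # assumed cost of an out
LAMBDA = 2.0          # losses loom about twice as large as gains

objective = HIT_RATE * HIT_VALUE - (1 - HIT_RATE) * OUT_COST
perceived = HIT_RATE * HIT_VALUE - (1 - HIT_RATE) * OUT_COST * LAMBDA

print(f"Objective value per at-bat:  {objective:+.2f}")   # positive: worth playing
print(f"Loss-averse perceived value: {perceived:+.2f}")   # negative: feels like quitting
```

Under these assumed payoffs, stepping to the plate is objectively a winning proposition, yet a sufficiently loss-averse batter would perceive it as a losing one. That gap between objective and perceived value is exactly the bias commanders must recognize in themselves and their adversaries.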




The views expressed in this blog are those of the author and do not reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. Government.

Economic Potential Energy

A recent Foreign Affairs article enumerates the many reasons that the United States will continue to enjoy its position as the world’s lone superpower, despite China’s gains in some areas.  The central argument of the piece is that the U.S. fosters innovation and economic incentives in a more efficient and effective manner than the PRC.  Of particular interest, and what many commentators miss when contrasting strengths of the two countries, is the underlying “potential energy factor” at play [my term].  By this, I mean that the U.S. is not only currently in a position of economic strength, but that its various inner workings, when fully harnessed, could continue to multiply its advantages in ways unavailable to the competition.

In physics, potential energy refers to “the energy that something has because of its position or the way its parts are arranged.”  When mapped to the world of international relations, the potential energy of nations means assessing precisely how positions and parts are arranged relative to other states or international bodies.  One’s position can be generally assessed against the rest of the world, or measured against a specific “other.”

The UN’s inclusive wealth index provides one such measure of potential energy, taking into account three factors: “(i) manufactured capital (roads, buildings, machines, and equipment), (ii) human capital (skills, education, health), and (iii) natural capital (sub-soil resources, ecosystems, the atmosphere).”  As the FA authors note, this measurement is far more illustrative than GDP alone, and produces a stark divide between the U.S. (~$144 trillion) and China (~$32 trillion).  Furthermore, the core U.S. advantages that underpin those figures are unlikely to erode in the near to mid future.
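The contrast between the two measures is easy to quantify.  The inclusive wealth figures below are those cited above; the GDP figures are my own rough mid-2010s approximations, added purely for comparison:

```python
# How the U.S.-China balance looks under two different measures.
# Inclusive wealth figures (trillions of USD) are those cited in the text;
# the GDP figures are rough mid-2010s approximations added for comparison.
inclusive_wealth = {"United States": 144, "China": 32}
gdp = {"United States": 18, "China": 11}  # approximate, illustrative

for label, measure in [("GDP", gdp), ("Inclusive wealth", inclusive_wealth)]:
    ratio = measure["United States"] / measure["China"]
    print(f"{label}: U.S. lead over China is roughly {ratio:.1f}x")
```

GDP alone suggests a lead of well under 2x; inclusive wealth suggests one closer to 4.5x. The choice of measure, not the underlying reality, drives much of the disagreement among commentators.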


Unfortunately, most examinations of the economies of the two states miss this important angle.  This is partially the result of analysis hamstrung by a dogged insistence on utilizing GDP as the sole measure of economic vitality.  In fact, a far more complete (and complex) picture emerges when we balance an examination of relative strengths between what is presently identifiable (via metrics such as GDP) and what ripples beneath the surface (as in the inclusive wealth project).  By using the latter measure, one is able to more fully grasp the present context while examining the engines whirring away under the surface.  It is often through this additional breadth of analysis that gravity, nuance, and vulnerability present themselves.

Inclusive wealth, while an incomplete indication of the true balance of potential economic power due to its failure to consider such issues as reserve currency, trade alliances, or political climate, is nonetheless useful.  In the defense world, parallels exist with the net assessment methodology pioneered by the legendary Andrew Marshall at the Pentagon’s Office of Net Assessment, which advocated assessing military might through means beyond simple quantitative counts of tanks, troops, and airplanes.  In economics or defense, decisions are best supported by a view more holistic and integrated than any single measure alone can provide: the difference between a video and a photograph.

As a thorough read of the excellent intellectual biography of Mr. Marshall reveals, however, it is not enough merely to tack on additional metrics and average the sum.  A macro analysis of which factors are being considered, why they are selected, how they are measured, the psychology and culture at play, and more must be undertaken to ensure full analytic rigor.  After all, in defense or economics, a snapshot, no matter how vivid, can only capture a point in time.  It is the analyst’s job to bring that photograph to life, allowing the viewer to perceive a full range of motion… and perhaps even discern what might happen next.


The views expressed in this blog are those of the author and do not reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. Government.

The Lure of First Mover Advantage

Economics, unlike most other social sciences, can seem like magic.  How can months of heavy rain in Florida impact the share price of a paper company in New Jersey?  Like an illusionist revealing a trick, an economist can untangle the knots to reveal the connection:

  1. U.S. OJ manufacturers rely on product from Florida.
  2. The excess rain means an optimal orange yield in Florida.
  3. More, juicier oranges mean more product to bring to market.
  4. More cartons of OJ mean more business for the paper industry.
  5. New Jersey’s Paper Company “A” acts to absorb the coming excess product before its competitors do, purchasing wood pulp at low rates.
  6. Paper Company “A” meets demand at the lowest cost and highest profit margins among its peers; its share price jumps.

When the links are revealed, a logical flow becomes apparent.  The economist is merely the interpreter of a series of events that might seem unrelated to the outsider.
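The chain above can even be sketched as a toy calculation.  All of the numbers below are invented purely for illustration, and the companies are hypothetical; the point is only to show how an early, cheap input purchase (step 5) flows through to the margin gap in step 6:

```python
# A toy sketch of the causal chain above (all figures are invented for
# illustration; they are not real market data).

def margin(unit_revenue, pulp_cost, other_costs):
    """Profit margin per carton of packaging sold."""
    return unit_revenue - pulp_cost - other_costs

# Step 5: Company "A" locks in cheap wood pulp before the orange surplus
# drives up packaging demand; a slower rival buys later at a higher price.
early_pulp, late_pulp = 0.10, 0.18      # pulp cost per carton
unit_revenue, other_costs = 0.50, 0.25

margin_a = margin(unit_revenue, early_pulp, other_costs)   # Company "A"
margin_b = margin(unit_revenue, late_pulp, other_costs)    # the slower rival

print(f"Company A margin per carton: {margin_a:.2f}")  # 0.15
print(f"Rival margin per carton:     {margin_b:.2f}")  # 0.07
```

Step 6, the share-price jump, is simply the market pricing in that persistent margin advantage.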

This alchemy is at its most compelling when an organization, seemingly out of nowhere, is able to capitalize on an advantage.  Much like in the example above, however, the result we’re seeing isn’t random at all.  The economist might point to a phenomenon known as the “First Mover Advantage” (FMA) to explain how the organization achieved such an optimal position relative to its competitors.  In industry, international affairs, and beyond, FMA can differentiate survivors from failures.



Perhaps the most poignant example of FMA in the modern era of defense is the development of nuclear weaponry.  When the United States “came to market” with the atomic bomb in 1945, its competitors in that arena were at an immediate, insurmountable disadvantage.

Japan, a nuclear aspirant in its own right, quickly faced an entirely new competitive relationship with its enemy.  Militaries around the world continue to seek such a silver bullet, nuclear or not, as a means of reframing the power balance with their enemies.  But is this always a wise course of action?  Let us consider four points:

First, FMA requires significant early investment.  The Manhattan Project was an unprecedented collection of the greatest scientific minds available at the time.  Even then, there was no guarantee of success.  Modern militaries, therefore, must make very well informed judgments about when and where to apply resources to seek so-called “game-changing technology” that may or may not pay dividends.


Manhattan Project members

Second, the FMA approach carries risks.  Irrelevancy at the proving hour is one.  Consider the Maginot Line, France’s attempt to forestall German aggression before it occurred.  The set of operating assumptions that led to its creation was rendered shockingly moot at the moment of execution, as German offensive planning and capabilities had long since “solved” the problem sets the French were attempting to impose.

Third, and related to the previous point, one’s competitors are not static.  Just as you are attempting to secure an early and powerful advantage, often so too are they.  Even if they are not pursuing an FMA gambit, changes to their doctrine, technology, or the operating environment itself can easily undermine the viability of an FMA approach.

Fourth, beware the potential myopia of FMA.  It is easy to understand the enthusiasm that an organization’s leaders must feel when they appear to have found a way to “subdue the enemy without fighting” in the finest traditions of Sun Tzu.  This shouldn’t excuse the organization from prudent contingency planning in the event of a failure, or from pursuing alternative areas of potential competitive domination.


Despite the frequently hopeful words of policymakers, the lure of a “king of the mountain” position in the defense realm remains a dangerous thing.  Information spreads rapidly.  Technology can be stolen.  Countermeasures can be rapidly engineered.

The drone phenomenon is but one example.  Just a few years ago, it seemed as if the United States’ longstanding technological core competency would pay dividends in the race to weaponize drones.  Quickly, however, low-cost/high-utility countermeasures came under development that significantly diluted the once promising space.

The scenarios facing today’s defense futurists are essentially twofold:

Scenario A: any true “game changer” must be so revolutionary, and carry such high barriers to entry for competitors, that it can stand alone and unchallenged long enough to achieve decisive results in the contested space (Examples: the atom bomb in WWII, chariots in Mesopotamia).

Scenario B: since the prior scenario is so unlikely, if attempting to position oneself as the First Mover and capitalize on a potential competitive front, it is most prudent to think and plan for a series of short duration advantages, exploitable both independently and in concert with one another, to gain incremental favorable position relative to competitors.  (Examples: China’s island creation in the S. China Sea today, China’s development of gunpowder in the 8th Century).

To be sure, the First Mover Advantage of seizing not just uncontested but often unknown areas to secure a position of strength is a powerful lure. From oranges to atom bombs, however, it is best to think prudently not just about the chances of success, but of the likely duration of that success relative to the investment required.








The Royal Navy and Economies of Scale

I must go down to the seas again, to the lonely sea and the sky,
And all I ask is a tall ship and a star to steer her by,
And the wheel’s kick and the wind’s song and the white sail’s shaking,
And a grey mist on the sea’s face, and a grey dawn breaking.

-John Masefield


The concept of ECONOMIES OF SCALE is familiar to any business owner who has attempted to compete with a larger, deeper-pocketed competitor. Among the advantages for the larger party are a more sizable labor pool, a larger array of technologically superior machinery, and access to a greater supply network of raw materials. Over time, the cumulative advantages enjoyed by the larger competitor branch out, deepen, and compound, overwhelming the smaller competitor’s ability to compete head on. The larger competitor amplifies its strengths in every key area its size will facilitate, from the higher quality labor pool it is able to attract, to the more rapid production enabled by its sophisticated machinery, to the fewer “bad” products it produces, given the quality assurance and quality control measures it can afford.

Most important of all, however, and the central reason for pursuing an economy of scale, is the ability of a large manufacturer to churn out its product at a lower per-unit cost. This is the point of separation that ultimately puts one’s competitors out of business, as they become unable to compete with a comparable product that costs less and less to produce and thus generates a higher profit margin on each unit sold. Despite this competitive advantage, it is nonetheless possible to drive one’s seemingly dominant business into the ground. This is due to the higher total costs of running a larger scale operation, from the sizable payroll to the cost of investment in newer and better manufacturing equipment. This is no small concern. With just one failure to adapt to consumer preferences or market conditions, today’s economy of scale can be tomorrow’s massive and costly liability.
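The per-unit cost logic, and the liability lurking inside it, can be made concrete with a minimal sketch.  The figures are illustrative only, assuming a single fixed cost spread over output plus a constant variable cost per unit:

```python
# Minimal sketch of per-unit cost under economies of scale: fixed costs
# (plant, machinery, payroll overhead) are spread over more units as
# output grows.  All numbers are illustrative.

def average_cost(quantity, fixed_cost, variable_cost_per_unit):
    """Total cost per unit at a given production volume."""
    return fixed_cost / quantity + variable_cost_per_unit

FIXED = 1_000_000          # cost of the factory, whatever the output
VARIABLE = 2.00            # materials and labor per unit

for q in (10_000, 100_000, 1_000_000):
    print(f"{q:>9} units -> ${average_cost(q, FIXED, VARIABLE):.2f}/unit")
# prints 102.00, 12.00, and 3.00 per unit respectively
```

The same arithmetic shows the danger the paragraph warns about: if demand collapses back to 10,000 units, the big plant’s per-unit cost balloons to $102 while a small rival with modest fixed costs is barely affected.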

Notwithstanding this risk, when might it be prudent to restructure one’s existing organization in order to create an economy of scale where one does not currently exist? American business historian and Pulitzer Prize winner Alfred Chandler succinctly addressed the logic of this proposition when he noted that “structure should follow strategy”, meaning one should only reconfigure one’s organization if by doing so it is able to more efficiently pursue the ends/ways/means calculus that gives an organization its purpose. For British naval strategists of the 19th Century, the question of whether to set about creating an economy of scale on the high seas was a vexing one indeed. The Royal Navy’s advantages were already significant. But, they wondered, could they become insurmountable for potential competitors should they assemble a fleet double the size of their two nearest competitors? The prevailing strategy of the Empire and its navy seemed to suggest its necessity, wherein “for the first time in history, the defence of the Empire was treated as a whole”, but would Britain’s finances allow it? Just as importantly, what would its potential adversaries do to stop it?

Economies of Scale on the High Seas, and When and How to Pursue Them

Britain’s Royal Navy (RN) was one clear winner of the Industrial Revolution, enjoying the distinct resourcing advantages that emanated from the British manufacturing juggernaut ashore. The reason is clear: as British advantages in manufacturing during the Revolution led to a further extension of its global economic power, the critical guarantor of that power, the RN, was naturally seen as an important area for investment. Britain’s reasons for redoubling its focus on the Navy in the 19th Century went beyond the obvious surface reason of merely protecting its merchant ships, for as Greg Kennedy points out, the RN was more like the potter’s wheel upon which the British Empire’s overall fortunes were built:

The navy was a useful way for the government to redistribute money into the national economy; it provided economic security through the demands made on the nation for the upkeep of the navy, a navy which in turn ensured a steady flow of raw materials and access to markets throughout the empire; it provided a psychological deterrent to other nations who might desire to destabilize that imperial system; and finally it was a living experiment for the introduction and application of the new technologies issuing forth from Britain’s industrial revolution.


Thus, rolled up in the RN was a catalyst for spending, an underwriter of future stability, a reliable client for the nation, a means of distribution, an enforcer of monopoly, and an incubator for innovation. As it pulled farther away from its nearest competitors, these various roles underscored precisely why the RN’s entrenched position of advantage among global navies had become more pronounced over the decades. For some within the British elite, the period of relative calm in the mid to late 19th Century was the opportune time to cement and systematically exploit that advantage.

Their logic was compelling. For a maritime power like the British, the tie between its economic and military levers was clear cut, as the symbiosis between the RN fleet and the economic fortunes of the British Empire had grown inextricable. As a maritime state, the new line of Mahanian thinking went, Britain had to secure the power it had amassed as a maritime nation by producing and manning a sufficient number of naval ships to protect its commercial supply lines so that it could afford to build more ships and further expand its influence.

Although this circular logic might lead one to wonder about the possibility of spending beyond the point of optimal return, a compelling economic counterpoint was put forth by George Hamilton, First Lord of the Admiralty, who noted before the House of Commons that through rapid and massive expansion the British navy would be “able to associate an increase of strength with a decrease of expenditure.” Imagine: Spending more to save more, all while investing in the very tool that made Britain so exceptional! Hamilton was understandably hooked on the dream of the RN achieving an economy of scale in its industry.

Hamilton’s thoughts on the way ahead didn’t stop there. He went on to say that in fact “it would be […] dangerous to pursue a course in naval warfare in which we should assume that all unprotected towns and commerce would be unmolested by an enemy.” This is a telling observation, as it suggests that Hamilton, as the chief advocate of the RN in Parliament, had become so absorbed by the notion of British dominance of its maritime competition that he had become correspondingly uninterested in the hard work of deciding where to assume risk, instead simply preferring to leave nothing undefended.

Despite this astonishing leap of faith, the Mahanian-monopolistic line of thinking that Hamilton represented finally became British policy in the form of the Naval Defence Act of 1889, Britain’s landmark naval program that called for amassing a fleet equal in size to the combined fleets of its two nearest competitors over a period of five years. Publicly, the Act was easily sold as a prudent means of deterring potential adversaries like France and Russia, reworking the same strategic calculus echoed before and after by many of history’s empires. In practice, however, this massive build up quickly led to a set of unintended consequences and a poor return on Britain’s investment.

The Fallout and Its Ramifications

First, Russia, France, the United States, and Germany clamored to meet the unspoken challenge put forth by the British. Their concern soon translated into similar increases in naval spending, “increases so large and for such long periods in advance as to alter the complexion of the whole very materially.” Admittedly, while the British gambit at first compounded its numerical advantages on the high seas, it soon warped the state of affairs between the RN and its maritime competitors into a dynamic that was less than favorable for the RN. Instead of finding itself exploiting the advantages of an economy of scale, the RN was left to contend with more competition for dominance of the seas. How could this have happened? Economic theory would suggest that the power gap between the dominant outfit and its smaller competitors would become more pronounced as per unit (per ship) costs declined and efficiencies were maximized, right?

The disconnect between economy of scale theory and its application in the defense world lay in Britain’s poor reading of the security environment, combined with an unshakeable fixation on the advantages of lowering per unit costs relative to the diminishing returns they represented. Lord Hamilton was seemingly so absorbed with the idea that Britain could create a fleet so massive that no two powers could touch it that he failed to think realistically about the likelihood that those powers would ever challenge the RN’s primacy. Furthermore, the efficiencies of increasing production, while tempting, had blinded him to the possibility that British production would quickly reach a point of diminishing returns, with each subsequent naval vessel guaranteeing less and less security for the empire. It is easiest to understand these fallacies through the prism of the behavior of private companies and states.
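The diminishing-returns point can be sketched numerically.  Suppose, purely as an illustrative assumption, that the “security” a fleet buys grows concavely with its size (a square-root curve is an arbitrary stand-in, not a claim about real naval power):

```python
# Hedged sketch of diminishing marginal returns on fleet size.
# security(n) = sqrt(n) is an arbitrary concave curve chosen only to
# illustrate the shape of the argument.
import math

def security(ships):
    return math.sqrt(ships)

def marginal_security(ships):
    """Extra security gained by adding ship number `ships`."""
    return security(ships) - security(ships - 1)

print(f"10th ship adds:  {marginal_security(10):.4f}")
print(f"500th ship adds: {marginal_security(500):.4f}")
```

Under any such concave curve, the early hulls buy a great deal of security and the late ones buy almost none, which is exactly the dynamic Hamilton’s fixation on per-unit efficiencies obscured.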


Companies are, unless they are bankrolled by wealthy people with endless resources, dependent upon profits to remain competitive. This profit dependence helps to focus organizational behavior: sell as many widgets for as high a price as the market will bear while keeping production costs down, and you can live another day. This same logic holds for manufacturers of Product X, both large and small, though it is nearly always better to be big for several reasons. For one thing, become large and dominant enough and gain enough market share, and you make staying in business impractical for competitors. Even better, after they cease operations you can soon amass their customers and markets, gaining momentum along the way. Size begets size.

Lord Hamilton made the mistake of extrapolating much of this same size-based logic to the seas. Whereas commercial manufacturers of like products might share similar external interests with respect to raw materials, potential consumers, markets, and the like, global navies’ interests can diverge to a much larger degree. For many navies of the 19th Century, merely protecting their share of global maritime commerce was the goal, never expansion. This is different from a truly wanton, free-market security marketplace of the type that would threaten the interests of the British Empire. In other words, the navies that lagged behind the British represented states that were simply not threatening in a way that necessitated the RN’s doubling the strength of its nearest competitors, much less a monopolistic control of global maritime supply lines.

This imperfect application of economic principles led to the kind of mismatch that Chandler warned against when he cautioned that structure should follow strategy. In practical terms, Hamilton also failed to consider one other important point when setting his force size: the remaining capacity of his two nearest competitors to increase production, an oversight that must have given him pause when it became apparent that France and Russia had nearly maximized their output by the end of the century, yet were still unable to match British production.

This disparity is evident in the states’ production numbers in the period following the passage of the landmark Act: “1890-1900 showed England with 715,150 tons added to her naval strength, against her adversaries’ but 495,611.” Translation? Although France and Russia ramped up production in response to the RN buildup, they were still no match for British production, which continued to spiral well beyond what was required to secure its maritime trade. Thus, Britain brought upon itself a situation wherein it had incentivized its competitors to ramp up materiel production out of alarm, while committing itself to keep more than doubling the fleets of the very competitors it was forcing to expand. Precisely none of which translated into a net return on investment in terms of its overall security!
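The quoted tonnage figures make the overshoot easy to check with simple arithmetic:

```python
# Quick check of the 1890-1900 construction figures quoted above.
british_tons = 715_150
adversaries_tons = 495_611   # the two nearest competitors combined

# The two-power standard aimed at matching the two nearest fleets
# combined; on new construction alone, Britain outbuilt both together.
ratio = british_tons / adversaries_tons
surplus = british_tons - adversaries_tons
print(f"British construction was {ratio:.2f}x the two rivals combined,")
print(f"a surplus of {surplus:,} tons.")  # 219,539 tons
```

Roughly 1.44 times the output of both rivals together: tonnage well past even the Act’s own ambitious standard, and every surplus ton purchased at full cost.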

As a part of its misreading of the security environment, Britain had structured the RN as if Russia and France, non-maritime countries with diversified economic interests, were more like industry competitors fighting for a finite set of resources. In fact, these countries were preoccupied instead with the longer term German threat, as a close read of the Franco-Russian Alliance bears out. Regardless, Britain doubled down on its maritime power in an effort to support a strategy of “strength everywhere” that went well beyond what was truly required. This was a wildly inefficient course of action, as evidenced by the 155 British ships that were scrapped at the close of the period. By 1904, the year before he was appointed Foreign Secretary, Sir Edward Grey finally made the failure of the growth initiative public:

It is quite true that policy determines armaments, but armaments have also something to do with determining policy. We still have the Two-Power standard for the Navy – I think that is the official standard, that our Navy is to be equal to the Navy of any other two Powers. Yes, but the Two-Power standard does not mean what it did when it was first introduced. […] It has come to this, therefore, that while we must keep up our Navy to make us safe against any probable combination against us, yet, at the same time, with the great increase in the navies of the world, it is, in my opinion, necessary for us as a nation to depart from our old policy of splendid isolation.



More recently, modern militaries have continued to ask whether an economy of scale-esque positioning relative to their competitors is truly required to achieve strategic ends. For the U.S. in its protracted Cold War with the Soviet Union, the pursuit of such a structure was certainly in the service of its strategy of deterrence, though perhaps a true domination of the global defense landscape would have been impossible. The comparative costs imposed on the adversary were acceptable for the return they produced, however, and resulted in a hastening of the USSR’s collapse under its own inefficient weight. For the individual NATO countries of that same period, the spending calculation was much different. In that particular security marketplace, banding together and pooling resources to counter a common adversary was the only feasible approach.

In areas like the Korean peninsula, the picture is murkier. To the South, a modern and efficient economy. To the North, a hermit kingdom unwilling to advance into the 21st Century. Defense spending in South Korea is roughly six times that of its neighbor to the North, yet no hypothetical economy of scale between the two countries seemingly exists that would render continued competition on the part of the North Koreans inadvisable. Reasons such as information asymmetry, the tacit or public backing of partner states such as China and the United States, and the upending of the “rational actor” theory of state behavior in the case of North Korea complicate matters.

Nonetheless, South Korea has forged ahead anyway, announcing a 4% increase in defense spending for 2016 as a means of “deterring aggressive action” and “opening more dialogue” between the nations. Like the British before them, it remains to be seen whether this financial investment will result in a corresponding increase or a decline in security. As Hamilton’s folly reminds us, when seeking to scale up to a degree unmatchable by one’s competition, the questions of can we and should we must always be considered separate matters.


