A Tale Of Two Navies. Anthony Wells
based on persistent presence. There continues to this day a major requirement for forward-deployed naval intelligence assets; we will explore and analyze it later. Security and need-to-know are the guardian angels of these operations.
Let us return to the overall picture of post–World War II centralization, what this meant, and how it impacted both navies. We should also address absolutely core questions of whether the vast changes that occurred were indeed necessary: Were there better ways to do things? Would both navies be in better or worse shape today without them? Are the national security interests of both countries, individually and collectively, well served by the organizational changes since 1960? Now that you can appreciate the changes that occurred with the new organizations, let us form an analytical framework for looking at these questions.
The Western democracies have based their national-security policies on clearly identifiable values and strategic considerations. Primary among them has been the need to protect citizens from invasion and threats that challenge their core geographic and political integrity and identities: the right to live in peace and harmony; the right to make choices via cherished democratic institutions; and the right to exist as an independent nation, free from oppression or threat. Deep-rooted historical, cultural, ethnic, linguistic, and economic factors bond such nations. The strategic requirements that arise from the efforts of each and every nation to maintain its national identity and self-determination vary across time and geographic space. These factors have determined, for example, the reactions by the United States and the United Kingdom to a changing balance of power, whereby the status quo of free, well-established, independent states has been threatened.
A central generic lesson of the first half of the twentieth century may be summarized as the need to anticipate and prepare for defense when imminent threats indicate that the world is changing in ways that cause deep concern for future security. The clearest and best example of this is the rise of the Nazi Party after the German elections of 1933, leading to Germany’s ever-increasing belligerence from 1936 to the outbreak of World War II in September 1939. Unpreparedness may lead an aggressor to perceive actual weakness and embolden it to challenge the status quo. Such challenges in today’s environment may not be classical territorial violations, with invasion the worst-case scenario, but economic and resource challenges, the acquisition of monopolistic trade in key raw materials, or the exploitation of cyberspace, water, and energy-source rights. The ultimate expression of this primary strategic requirement is national survival.
Second, nations have developed, mainly but not totally since the nineteenth century, the need to ally themselves for self-protection with other nations. Conversely, nations with belligerent and often expansionist intent have allied with nations where they perceived opportunities for gain. The Nazi-Soviet Pact, the Nazi-Japanese-Italian Axis, and Germany’s later abrogation of the Nazi-Soviet Pact are good twentieth-century examples of realpolitik played out by adversary nations that perceived gain in making and breaking alliances.
Third, the leading twentieth-century democratic powers, the United States and the United Kingdom, have championed self-determination and, in the case of the United Kingdom, decolonization and the right of self-government. President Woodrow Wilson was the father of the post–World War I League of Nations, and both nations were at the heart of the founding of the United Nations organization and the North Atlantic Treaty Organization (NATO). The former was conceived to foster international cooperation and prevent future wars, the latter to preserve peace in ways that built on the lessons learned from the failure of the League of Nations to maintain international order. NATO’s strength lay in its military cohesion, organization, and capabilities, which aimed to deter, not threaten.
The cultural underpinnings of the United States and the United Kingdom and the need to show strength by clear military capability, national resolve, and cooperation in well-organized alliance structures point to a clearly definable thread that runs through American and British strategic thinking. It is that preparedness for a changing threat environment is paramount. The United States and the United Kingdom have found that the tools of international-security diplomacy and the use of power in the pursuit of peaceful outcomes constitute a very mixed bag. The former includes diplomatic pressure and multinational applications of economic sanctions, isolation, and restrictions on the flow of goods, materials, and capital. Where they have failed, the use of force has tended to be the tool of last resort, whether in the shape of blockade, mining, increased levels of war preparedness, or, in the worst case, open and declared war.
In certain cases the United States and the United Kingdom have been constrained in the use of these tools, because the overall strategic situation and balance of power were not in their favor. The Soviet invasions of Hungary, Czechoslovakia, and Afghanistan showed how a combination of circumstances can render the United States, the United Kingdom, and their allies impotent, an unhealthy state of affairs. The sphere of influence of the Soviet Union in all three cases was such that NATO could not react in any meaningful way, only protest. There is a deep and abiding lesson in those three cases, not least that military power as an instrument of foreign and overall national-security policy has limits. Understanding those limits is crucial.
Let us now return to the issue of US and UK centralization and its impact on both countries’ navies. World War II was undoubtedly the greatest conflict fought in human history. What is quite astounding about it is that neither the United States nor the United Kingdom fundamentally changed its defense organization during the conflict. There was tighter control and enforced cooperation, but none of that was opposed, let alone resisted, by any of the military services. World War II was complex for the British and Americans at every level, particularly in the quite amazing necessity to build in short order a massive industrially based war machine and to innovate technologically on extraordinarily short time lines. But one thing is very clear: the system worked. Nothing is perfect, but the US and UK World War II defense organizations performed brilliantly. Changes were made on the fly; bureaucratic inertia went out the window, and those who stood in the way of change or defied direct orders were soon removed. Any form of incompetence or inability to perform was rectified.
The question therefore arises, why change? Furthermore, why did change, from individual service centricity to centralization, take place at all? None of the US and UK military services during or after World War II can fairly be accused of not being team players or, at worst, of playing service politics in pursuit of self-serving goals; nothing could be farther from the truth. The political leadership and the service chiefs and their staffs agreed on grand strategy and then allocated the service resources required to execute it. Interservice rivalry was a matter not of deeply fought-over divisions of the resource pie but of rivalry to perform, to excel, indeed to show worthiness in all regards: a hugely healthy state of affairs. The US Navy and the Royal Navy were never in bitter, contentious battles with the other services over resources and over who would do what to execute the grand strategy. During the Battle of the Bulge, General George Patton’s Third Army was never so pleased as when it saw the US Army Air Forces appear to provide air-to-ground support once the weather was clear enough, and on countless occasions surface naval forces welcomed overhead Liberators or Short Sunderlands arriving to attack surfaced U-boats. Interservice rivalry was about combined mutual effectiveness, not internecine competition.
World War II proved three axioms about defense organizations: they have to be relevant, they have to be efficient, and they have to be effective. What emerged from World War II was a desire for greater integration and top-level control, on the belief that centralization would lead to greater efficiency and less rivalry. What happened in reality, however, was that a massive bureaucracy with a significant political overlay was placed on top of the existing structure. Both countries’ defense infrastructures grew. Once the basic political changes took place, underpinned by legislative action, the Office of the Secretary of Defense and the chiefs of staff structure in the United States and, later, the Ministry of Defense and the Central Defense Staff in the United Kingdom all grew exponentially. These changes incurred massive costs. The key question, again, is: Was it all worthwhile, given that what both countries had during World War II, aside from some lessons learned, worked well?
Several prominent post–World War II figures were centralists. In the United Kingdom the greatest advocate of integration was, perhaps surprisingly, Admiral of the Fleet the Lord Louis Mountbatten. He personally oversaw the creation