A Spy Among Friends: Kim Philby and the Great Betrayal by Ben Macintyre

This is the remarkable story of the Cambridge Five, five graduates of Cambridge University who worked for the British government from the 1930s to the 1950s while they spied for the Soviet Union. Kim Philby was the most successful of the group and is the author’s principal subject. Philby was a double agent for 20 years, working for the British security services while delivering massive amounts of information to the Russians. The secrets he passed to the Russians resulted in many operations being blown and many people being killed. Eventually, he fled to the Soviet Union (although it’s very possible that Britain’s MI6 encouraged him to leave in order to save the British government a great deal of embarrassment).

Philby and his fellow spies (Donald Maclean, Guy Burgess, John Cairncross and Anthony Blunt) all became convinced at Cambridge that the Soviet Union had the best available political system. That made it relatively easy to recruit them into the service of the Russians. Three of them lived out their lives in Moscow. None of them were ever prosecuted for spying.

The principal theme of the book is that the Cambridge Five were able to remain undiscovered for so long because they were comfortable members of the British ruling class. The security services and the Foreign Office were primarily run by other members of the upper class who presumed that the men they worked and drank with were gentlemen and would never betray their country.

After Philby confessed to spying for the Russians, he could have been returned to England for prosecution or even assassinated. But he was permitted to circulate freely until he defected one night, boarding a freighter bound for Odessa. Other spies weren’t treated so gently:

I mention the fate of less favored traitors who did far less than Philby but spent years in prison for it.

“Ah well, Vassall – well, he wasn’t top league, was he?”

(John Vassall, homosexual son of an Anglican parson and clerk to the naval attaché at the British Embassy in Moscow, was sentenced to eighteen years for spying for the KGB.)

Mr. Vassall had not attended Westminster School or Cambridge, as Mr. Philby had, and never belonged to the right gentlemen’s club.


Simply Napoleon by J. David Markham and Matthew Zarzeczny

Napoleon Bonaparte was one of the most important people who ever lived. I’ve been curious about him but haven’t wanted to read an 800-page biography. That’s why I got a copy of this brief one. It’s part of the “Simply” series of short biographies for the general reader. Other titles in the series include Simply Freud, Simply Dickens and Simply Tolstoy.

I now have a better understanding of Napoleon’s life, but do not recommend this book. It’s a second-rate production. It covers the major events in Napoleon’s military and political career, but provides little insight into his thinking or character. It lists precise statistics for losses in battles fought more than 200 years ago without ever indicating that the numbers aren’t necessarily to be trusted. For example, the book states that in one battle 243 Spaniards were killed and 735 wounded, while some 2,200 Frenchmen were killed, 400 wounded and 17,635 captured. Is it plausible that almost 10 times as many Frenchmen were killed while almost twice as many Spaniards were wounded? In addition, a number of the illustrations appear as black splotches.
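For what it’s worth, the two ratios I’m questioning follow directly from the figures quoted above; here’s a trivial sketch, using only the numbers as the book reports them:

```python
# The casualty figures quoted above, as reported in the book.
spanish_killed, spanish_wounded = 243, 735
french_killed, french_wounded = 2_200, 400

# The two comparisons made in the paragraph above.
print(f"French killed vs. Spanish killed:   {french_killed / spanish_killed:.1f} to 1")   # roughly 9 to 1
print(f"Spanish wounded vs. French wounded: {spanish_wounded / french_wounded:.1f} to 1") # roughly 1.8 to 1
```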

Furthermore, the subjects emphasized are sometimes bizarre. One paragraph covers Napoleon’s seizure of the French government and the creation of a new constitution in 1799. That’s immediately followed by almost nine pages devoted to the slave revolt in Haiti and its repercussions.

If you want to read something short about Napoleon, you might try Napoleon: A Very Short Introduction and The Napoleonic Wars: A Very Short Introduction. I haven’t read those two, but the “Very Short Introduction” books from Oxford University Press tend to be quite good.

To Fight Against This Age: On Fascism and Humanism by Rob Riemen

The author is a Dutch writer and “cultural philosopher”. The dust jacket says To Fight Against This Age was an international best seller. The book has two parts: “The Eternal Return of Fascism” and “The Return of Europa”.

The first part argues convincingly that fascism is a recurring tendency in Western civilization. The second argues that a united Europe could be much more than it has turned out to be, which is “nothing other than an Economic Union, where the terms soul, culture, philosophy, and live in truth are as impossible as a palm tree on the moon” [167].

Since the situation in the United States is more urgent, I found the discussion of fascism more engaging. We hesitate to apply the word “fascist” to the right-wing extremists who have gained ground in America (and in some parts of Europe), mainly because they haven’t taken total control of society and spread bloodshed in the manner of Hitler and Mussolini. Riemen, however, says we should use the term to make clear how extreme these movements are and also to make it easier to stop them:

… the fascist bacillus will always remain virulent in the body of mass democracy. Denying this fact or calling it something else will not make us resistant to it…. If we want to put up a good fight, we first have to admit that it has become active in our social body again and call it by its name: “fascism” [34].

In the twenty-first century, no fascist would willingly be called a “fascist”. Fascists aren’t that stupid, and it fits with their mastery of the skill of lying. Contemporary fascists are recognizable partly through what they say, but just as important is how they operate…. Fascist techniques are identical everywhere: the presence of a charismatic leader; the use of populism to motivate the masses; the designation of the base group as victims (of crises, or elites, or of foreigners); and the direction of all resentment toward an “enemy”. Fascism has no need for a [small “d”] democratic party with members who are individually responsible; it needs an inspiring and authoritative leader who is believed to have superior instincts (making decisions that don’t require supporting arguments), a faction leader who can be obeyed and followed by the masses [83-84].

Sound familiar?

Where Does the Weirdness Go? (Why Quantum Mechanics Is Strange, But Not As Strange As You Think) by David Lindley

If you want an introduction to quantum mechanics, this is a very good book to read. I didn’t get some of it, but I don’t blame the author, who does an excellent job. He was a theoretical astrophysicist before he began editing science magazines. Since the book was published in 1996, some of it may be out of date, but not enough to make a difference to the general reader.

The title “Where Does the Weirdness Go?” refers to a puzzle. Since events at the quantum level are weird, why doesn’t that weirdness show up at the level of our ordinary experience? Reality looks fairly well-defined to us. We don’t see the things around us as probabilities. The chair you’re sitting on is right there under you; it’s not possibly there and possibly not there. Electrons and photons may be in an indeterminate state, possibly here and possibly there, but that probabilistic weirdness disappears when it comes to higher-level stuff.

I think the book’s subtitle (“Not As Strange As You Think”) refers to the puzzle’s answer. Lindley explains that, roughly speaking, quantum weirdness disappears when something called “quantum coherence” turns into “quantum decoherence”. When a quantum state is “coherent”, its properties are mere probabilities. But that can only be the case if the quantum system is isolated from other quantum systems. Here’s how Wikipedia puts it:

… when a quantum system is not perfectly isolated, but in contact with its surroundings, coherence decays with time, a process called quantum decoherence. As a result of this process, the relevant quantum behaviour is lost.

The quantum behavior referred to here is the weirdness (things like “is it a particle or is it a wave?” and “spooky action at a distance”). Since quantum systems (photons, electrons, paired particles) are rarely, if ever, appropriately isolated inside objects like chairs, clouds and chickens, those types of things don’t behave weirdly. The constant atomic and sub-atomic turmoil inside everyday objects means that their properties are defined or definite, not probabilistic. The stuff we see around us doesn’t display any quantum weirdness because there are trillions upon trillions of quantum-level interactions occurring at every moment.
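To make the decoherence idea a bit more concrete, here’s a minimal sketch in Python (my own toy illustration, not anything from Lindley’s book), using the common textbook simplification in which contact with the surroundings makes a system’s coherence decay away while its ordinary probabilities are left untouched:

```python
import numpy as np

# A single two-state quantum system ("qubit") in an equal superposition, written
# as a density matrix. The diagonal entries are ordinary probabilities; the
# off-diagonal entries are the "coherence" behind interference and other weirdness.
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)

# A simplified "pure dephasing" model of contact with the environment: the
# off-diagonal coherence decays exponentially, while the probabilities don't change.
decoherence_rate = 1.0  # arbitrary units, chosen only for illustration

for t in [0.0, 1.0, 5.0, 20.0]:
    damping = np.exp(-decoherence_rate * t)
    rho_t = rho.copy()
    rho_t[0, 1] *= damping
    rho_t[1, 0] *= damping
    probs = rho_t.diagonal().real
    print(f"t={t}: probabilities={probs}, coherence={abs(rho_t[0, 1]):.4f}")
```

The probabilities never change, but the coherence (the part responsible for the weirdness) quickly becomes negligible once the system is in contact with its surroundings, which is the rough sense in which the weirdness “goes away”.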

One thing the book makes clear is that there’s nothing special about quantum states being measured. Nor does human consciousness have any special role in quantum mechanics. In fact, measurement is an example of decoherence. When a physicist measures an electron, it is no longer isolated. In order to be measured, the electron has to interact with something else at the quantum level. That results in the electron’s possible position or momentum becoming real, not probabilistic. So when we hear about the importance of measurement in quantum mechanics, it only means that something at the quantum level is interacting with something else at that level. Most such interactions have nothing at all to do with us humans. 

Something (among many) I don’t understand: Once an electron has lost its probabilistic nature by interacting with some other quantum-level thing, do any of its properties ever become probabilistic again? If not, it would seem like every electron or photon in the universe would eventually have well-defined properties. 

I’ll say one more thing about the book. The author subscribes to what’s known as the “Copenhagen interpretation” of quantum mechanics. Apparently, most physicists do. The Copenhagen interpretation is a response to questions like “what’s really going on at the quantum level?” and “is it possible to explain why quantum events are so weird?” The answer given by the Copenhagen interpretation is: “Don’t bother trying to understand what’s happening. We can’t explain what’s happening and there is no sense in trying, because there is no definite reality to be explained at that level until measurement (or quantum-level interaction) occurs. This is just the way the world is.”

The author concludes by asking “will we ever understand quantum mechanics?” Here’s his answer:

But we do [understand it], don’t we? As an intellectual apparatus that allows us to figure out what will happen in all conceivable kinds of situations, quantum mechanics works just fine, and tells us whatever … we need to know….

[But] quantum mechanics clearly does not fit into any picture that we can obtain from everyday experience of how the world works… It throws us off balance… Physics, and the rest of science, grew up with the belief in objective reality, that the universe is really out there and that we are measuring it…. And the longer the belief was retained, the more it came to seem as if it must be an essential part of the foundation of physics….

Then quantum mechanics came along and destroyed that notion of reality. Experiment backs up the axioms of quantum mechanics. Nothing is real until you measure it [or it comes into contact with something else!], and if you try to infer from disparate sets of measurements what reality really is, you run into contradictions….

A true believer might conclude that objective reality must still be there somewhere, beneath quantum mechanics. That’s what Einstein believed…. [But] if quantum mechanics does not embody an objective view of reality, then evidently an objective view of reality is not essential to the conduct of physics…

[But] quantum mechanics, despite its lack of an objective reality, nevertheless gives rise to a macroscopic world that acts, most of the time, as if it were objectively real… And so, almost paradoxically, we can believe in an objective reality most of the time, because quantum mechanics predicts that the world should behave that way. But it’s because the world behaves that way that we have acquired such a profound belief in objective reality — and that’s what makes quantum mechanics so hard to understand [222-224].

The American Pragmatists by Cheryl Misak

This is an entry in a series called The Oxford History of Philosophy, written by an expert on the philosophical school known as “pragmatism”. Here’s how Oxford University Press describes the book:

Cheryl Misak presents a history of the great American philosophical tradition of pragmatism, from its inception in the Metaphysical Club of the 1870s to the present day. She identifies two dominant lines of thought in the tradition: the first begins with Charles S. Peirce and Chauncey Wright and continues through to Lewis, Quine, and Sellars; the other begins with William James and continues through to Dewey and Rorty. This ambitious new account identifies the connections between traditional American pragmatism and twentieth-century Anglo-American philosophy, and links pragmatism to major positions in the recent history of philosophy, such as logical empiricism. Misak argues that the most defensible version of pragmatism must be seen and recovered as an important part of the analytic tradition.

According to Professor Misak, “the most defensible version of pragmatism” is the version initiated by C. S. Peirce and Chauncey Wright in the 19th century and carried forward by C. I. Lewis, W. V. Quine and Wilfrid Sellars in the 20th. She argues that it is more defensible because it considers truth to be less subjective. In the caricature or simplification of pragmatism as set forth by William James and criticized by G. E. Moore and Bertrand Russell, true statements are those that “work for us”. If religious beliefs make your life better, for example, they’re true. By contrast, the tradition that began with Peirce treats truth more objectively. Statements may “work for us” even though they’re false. The Peircean pragmatists see a stronger relationship between truth and how the world is, regardless of human goals or interests.

It isn’t easy to briefly explain what pragmatism is, but Prof. Misak gives it a try in the Preface:

Pragmatists are empiricists in that they require beliefs to be linked to experience. They want their explanations and ontology down-to-earth (natural as opposed to supernatural) and they require philosophical theories to arise out of our practices. As Peirce put the pragmatic maxim, we must look to the upshot of our concepts in order to understand them….

[But] pragmatists reject the part of empiricism that says that all of our beliefs originate in experience and that our beliefs can be linked in an atomistic way to discrete experiences…. They reject any naturalism that gives ontological [priority] to matter or physicality — they want to consider whether value, generality, chance, etc. might be part of the natural world. They are holists, taking their view to encompass all of science, logic, mathematics, art, religion, ethics and politics. Unlike most of their empiricist predecessors, they fence off no realm of inquiry from the principles they set out.

In the Conclusion, she adds:

The core pragmatist thought is about the human predicament. We must try to explain our practices and concepts, including our epistemic norms and standards, using those very practices, concepts, norms and standards. This is the pragmatist’s task and we have found that, within the pragmatist tradition, there are different ways of trying to fulfill it.

I’ll finish with a brief example of pragmatist thinking. The great Scottish philosopher David Hume is sometimes viewed as a skeptic (e.g. he believed there is no rational basis for ever thinking that one event causes another). The pragmatist John Dewey, however, saw Hume as a predecessor:

While in his study, Hume finds skepticism compelling, but as soon as he leaves that secluded place of theoretical philosophizing, skepticism loses any force it might have had. The skeptic’s doubts, as Peirce would put it, are paper doubts [107].

According to the pragmatists, what matters, even from a philosophical perspective, is how our ideas connect with our lives outside the philosophy class.

Lincoln at Gettysburg: The Words That Remade America by Garry Wills

The brilliant author Garry Wills did a public service when he wrote this book about Abraham Lincoln’s “Gettysburg Address”. Chapters on 19th century oratory, the “rural cemetery” movement and Lincoln’s choice of words provide context, but those aren’t the parts of the book that make it important.

Wills’s principal thesis is that Lincoln’s focus on the idea of equality as stated in the Declaration of Independence (“all men are created equal”) changed our understanding of the Constitution and America itself:

The Gettysburg Address has become an authoritative expression of the American spirit — as authoritative as the Declaration itself, and perhaps even more influential, since it determines how we read the Declaration. For most people now, the Declaration means what Lincoln told us it means, as a way of correcting the Constitution itself without overthrowing it. It is this correction of the spirit, this intellectual revolution, that makes attempts to go back beyond Lincoln so feckless. The proponents of states’ rights may have arguments, but they have lost their force, in courts as well as in the popular mind. By accepting the Gettysburg Address, its concept of a single people, dedicated to a proposition, we have been changed. Because of it, we live in a different America (146-147).

As originally written, the Constitution not only accepted the existence of slavery but gave preferential treatment to the slave states. Lincoln, however, forcefully proclaimed that “our new nation” was “dedicated to the proposition that all men are created equal”. Furthermore, he challenged us to continue “our unfinished work” to ensure that America’s government would truly be, by implication, of all the people, by all the people and for all the people. Lincoln’s brief remarks at the dedication of the Soldiers’ National Cemetery in Gettysburg, Pennsylvania, a few months after the cataclysmic Battle of Gettysburg, helped make our country a different and better place. Garry Wills’s excellent book explains why and how that happened.

Other Minds: The Octopus, the Sea and the Deep Origins of Consciousness by Peter Godfrey-Smith

Peter Godfrey-Smith is an Australian professor of philosophy who has spent many hours scuba-diving in order to observe the behavior of octopuses and cuttlefish. The book is an attempt to trace the evolution of mental activity from its earliest beginnings hundreds of millions of years ago, when bacteria began reacting to their surroundings. The author believes that mind and consciousness didn’t suddenly spring into existence; they developed gradually through millions of years. But he admits that nobody knows for sure.

Neither do we know what it’s like to be an octopus. We don’t even know for certain that it’s like anything at all. Maybe octopuses go about their business without feelings or anything like consciousness. Godfrey-Smith, however, argues that it’s reasonable to believe that creatures of many sorts feel pain when they are injured. But where to draw the lines (if there are any lines) between bacteria that simply react, animals that feel pain and creatures like us who are self-conscious is a mystery.

Octopuses are especially interesting because our last common ancestor with them lived about 500 million years ago. Octopuses developed complex nervous systems, arranged differently from ours, independently of most other animals, including us. That means, in Godfrey-Smith’s words, “meeting an octopus is, in many ways, the closest we’re likely to get to meeting an intelligent alien”. It’s really too bad that they can’t tell us what it’s like to be them.

I wish the book had ended with a summation of the author’s conclusions. I do remember the idea that nervous systems first evolved in order to respond to a living thing’s surroundings, and then to monitor its internal states and control its movements. And I remember a lot about the interesting behavior of octopuses and their close relations, cuttlefish. But I can’t say I came to any solid conclusions about the deep origins of consciousness. If the author reached any conclusions, he should have reminded his readers what they were.