Summing Up

Image: https://commons.wikimedia.org/wiki/File:Lamb_Horace_bw.jpg

In 1932, at the end of a 60-year career studying hydrodynamics, Sir Horace Lamb addressed the British Association for the Advancement of Science.

“I am an old man now,” he said, “and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather more optimistic.”

One for All?

Image: Flickr (https://www.flickr.com/photos/dave_hale/860846402)

Suppose that there’s a power outage in your neighborhood. If anyone calls the electric company, it will send someone to fix the problem, and the whole street will benefit. This puts each of you in a dilemma: Making the call costs a bit of time and trouble, so if someone else makes it, you’ll benefit without having to do anything. But if no one calls, then you’ll all remain in the dark, which is the worst outcome:

volunteer's dilemma payoff matrix

This is the “volunteer’s dilemma,” a counterpart to the famous prisoner’s dilemma in game theory. Each participant has a greater incentive for “free riding” than acting, but if no one acts, then everyone loses.
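To make the incentive structure concrete, here’s a minimal sketch in Python. The payoff numbers (a benefit of 10 for having power restored, a cost of 2 for making the call) are illustrative assumptions rather than part of the original story; any values with the cost positive but smaller than the benefit produce the same dilemma.

    # Volunteer's dilemma sketch: each resident chooses to call or to wait.
    # The numbers are assumed for illustration only.
    BENEFIT = 10  # value of having the power restored
    COST = 2      # personal cost of making the call

    def payoff(i_call, someone_else_calls):
        """Payoff to one resident, given their own choice and the others' behavior."""
        if i_call:
            return BENEFIT - COST   # power restored, but I bore the cost
        if someone_else_calls:
            return BENEFIT          # free ride: power restored at no cost to me
        return 0                    # no one calls: everyone stays in the dark

    # If someone else will call, free riding beats volunteering ...
    assert payoff(False, True) > payoff(True, True)
    # ... but if no one calls, everyone does worse than a lone volunteer would.
    assert payoff(False, False) < payoff(True, False)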

A more disturbing example is the murder of Kitty Genovese, who was stabbed to death outside her New York City apartment in 1964. According to urban lore, many neighbors who were aware of the attack chose not to contact the police, trusting that someone else would make the call but hoping to avoid “getting involved.” Genovese died of her wounds.

In a 1988 paper, game theorist Anatol Rapoport noted, “In the U.S. Infantry Manual published during World War II, the soldier was told what to do if a live grenade fell into the trench where he and others were sitting: to wrap himself around the grenade so as to at least save the others. (If no one ‘volunteered,’ all would be killed, and there were only a few seconds to decide who would be the hero.)”

The Guinness Book of World Records lists the Yaghan word mamihlapinatapai as the “most succinct word.” It’s defined as “a look shared by two people, each wishing that the other would initiate something that they both desire but which neither wants to begin.”

(From William Poundstone, Prisoner’s Dilemma, 1992.)

Kindling Trouble

Now, for something a bit more serious: I am starting a new religion. Care to join? As with the Catholic religion, my religion has an index of forbidden books. There is only one book that the index forbids. Can you guess which? You probably have! It is the index itself!

— Raymond Smullyan, “Self-Reference in All Its Glory!”, conference “Self-Reference,” Copenhagen, Oct. 31-Nov. 2, 2002

Podcast Episode 121: Starving for Science

Image: https://pixabay.com/en/wheat-grain-crops-bread-harvest-1530316/

During the siege of Leningrad in World War II, a heroic group of Russian botanists fought cold, hunger, and German attacks to keep alive a storehouse of crops that held the future of Soviet agriculture. In this week’s episode of the Futility Closet podcast we’ll tell the story of the Vavilov Institute, whose scientists literally starved to death protecting tons of treasured food.

We’ll also follow a wayward sailor and puzzle over how to improve the safety of tanks.

See full show notes …

A Cognitive Illusion

Image: Flickr (https://www.flickr.com/photos/minhimalism/5708719581)

Given these premises, what can you infer?

  1. If there is a king in the hand then there is an ace, or if there isn’t a king in the hand then there is an ace, but not both.
  2. There is a king in the hand.

Practically everyone draws the conclusion “There is an ace in the hand.” But this is wrong: We’ve been told that one of the conditional assertions in the first premise is false, so it may be false that “If there is a king in the hand, then there is an ace.”

But almost no one sees this. Princeton psychologist Philip Johnson-Laird writes, “[Fabien] Savary and I, together with various colleagues, have observed it experimentally; we have observed it anecdotally — only one person among the many distinguished cognitive scientists to whom we have given the problem got the right answer; and we have observed it in public lectures — several hundred individuals from Stockholm to Seattle have drawn it, and no one has ever offered any other conclusion.” Johnson-Laird himself thought he’d made a programming error when he first discovered the illusion in 1995.

Why it happens is unclear; in puzzling out problems like this, we seem to focus on what’s true and neglect what might be false. Computers are much better at this than we are, which ironically might lead a competent computer to fail the Turing test. In order to pass as human, writes researcher Selmer Bringsjord, “the machine must be smart enough to appear dull.”

(Philip N. Johnson-Laird, “An End to the Controversy? A Reply to Rips,” Minds and Machines 7 [1997], 425-432.)

10/18/2016 UPDATE: Readers Andrew Patrick Turner and Jacob Bandes-Storch point out that if we take the first premise to mean material implication (and also allow double negation elimination), then not only can we not infer that there must be an ace, but we can in fact infer that there cannot be an ace in the hand — exactly the opposite of the conclusion that most people draw! Jacob offers this explanation (XOR means “or, but not both”, and ¬ means “not”):

I’ll use the shorthand “HasKing” to be a logical variable indicating whether there is a king in the hand.
Similarly, “HasAce” is a variable which indicates whether there is an ace in the hand.

We’re given two statements:

#1: (HasKing → HasAce) XOR ((¬HasKing) → HasAce).

#2: HasKing.

#2 has just told us that our “HasKing” variable has the value “true”.

So, we can fill this in to #1, which becomes “(true → HasAce) XOR (false → HasAce)”.

I’ll call the sub-clauses of #1 “1a” & “1b”, so #1 is “1a XOR 1b”.

1a: “(true → HasAce)” is a logical expression that’s equivalent to just “HasAce”.

1b: “(false → HasAce)” is always true — because the antecedent, “false”, can never be satisfied, the consequent is effectively disregarded.

Recall what statement #1 told us: (1a XOR 1b). We now know 1b is true, so 1a must be false. Thus “HasAce” is false: there is not an ace in the hand.

Jacob also offered this demonstration in Prolog. Many thanks to Andrew and Jacob for their patience in explaining this to me.
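For anyone who wants to check the updated reading mechanically, here is a small brute-force sketch in Python (my own illustration of the material-implication reading, not Jacob’s Prolog demonstration). It enumerates both possible values of HasAce and keeps only those consistent with the two premises:

    # Brute-force check of the premises, reading "if ... then" as material implication.
    def implies(p, q):
        return (not p) or q

    def xor(p, q):
        return p != q

    has_king = True  # premise 2
    consistent = [
        has_ace
        for has_ace in (False, True)
        # premise 1: one of the two conditionals holds, but not both
        if xor(implies(has_king, has_ace), implies(not has_king, has_ace))
    ]
    print(consistent)  # [False] -- the premises force "there is no ace in the hand"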

Sky-High

Image: http://commons.wikimedia.org/wiki/File:Pile_ou_face.png

A memory of Lewis Carroll by Lionel A. Tollemache:

He was, indeed, addicted to mathematical and sometimes to ethical paradoxes. The following specimen was propounded by him in my presence. Suppose that I toss up a coin on the condition that, if I throw heads once, I am to receive 1d.; if twice in succession, 2d.; if thrice, 4d.; and so on, doubling for each successful toss: what is the value of my prospects? The amazing reply is that it amounts to infinity; for, as the profit attached to each successful toss increases in exact proportion as the chance of success diminishes, the value (so to say) of each toss will be identical, being in fact, 1/2d.; so that the value of an infinite number of tosses is an infinite number of half-pence. Yet, in fact, would any one give me sixpence for my prospect? This, concluded Dodgson, shows how far our conduct is from being determined by logic.
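Written out, with the k-th prize read as 2^(k-1) pence won with probability (1/2)^k (one natural reading of Dodgson’s description), the expectation is a sum in which every term is worth exactly the half-penny he mentions:

    E = (1/2)(1d) + (1/4)(2d) + (1/8)(4d) + … = 1/2d + 1/2d + 1/2d + …

and the sum grows without bound, so the “value” of the prospect is infinite.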

Actually this curiosity was first noted by Nicholas Bernoulli; Carroll would have met it in his studies of probability. Tollemache wrote, “The only comment that I will offer on his astounding paradox is that, in order to bring out his result, we must suppose a somewhat monotonous eternity to be consumed in the tossing process.”

(Lionel A. Tollemache, “Reminiscences of ‘Lewis Carroll,’” Literature, Feb. 5, 1898.)