Altamura Man


About 150,000 years ago, a Neanderthal man was exploring the Lamalunga Cave in southern Italy when he fell into a sinkhole. Too badly injured to climb out again, he died of dehydration or starvation. Over the ensuing centuries, water running down the cave walls gradually incorporated the man’s bones into concretions of calcium carbonate. Undisturbed by predators or weather, they lay in an immaculate state of preservation until cave researchers finally discovered them in 1993.

This is a great boon for paleoanthropologists — “Altamura Man” is one of the most complete Paleolithic skeletons ever discovered in Europe — but there’s a downside: The bones have become so deeply involved in their matrix of limestone that no one has found a way to remove them without destroying them. So, for now, all research must be carried out in the cave.

Good Turns

https://pixabay.com/en/britain-cab-car-city-classic-19228/

In order to get a license, London taxicab drivers must pass a punishing exam testing their memory of 25,000 streets and every significant business and landmark on them. “The Knowledge” has been called the hardest test of any kind in the world; applicants must put in thousands of hours of study to pass a series of progressively more difficult oral exams that take, on average, four years to complete. The guidebook for prospective cabbies says:

To achieve the required standard to be licensed as an ‘All London’ taxi driver you will need a thorough knowledge, primarily, of the area within a six-mile radius of Charing Cross. You will need to know: all the streets; housing estates; parks and open spaces; government offices and departments; financial and commercial centres; diplomatic premises; town halls; registry offices; hospitals; places of worship; sports stadiums and leisure centres; airline offices; stations; hotels; clubs; theatres; cinemas; museums; art galleries; schools; colleges and universities; police stations and headquarters buildings; civil, criminal and coroner’s courts; prisons; and places of interest to tourists. In fact, anywhere a taxi passenger might ask to be taken.

Interestingly, licensed London cabbies show a significantly larger posterior hippocampus than non-taxi drivers. Psychologist Hugo J. Spiers writes, “Current evidence suggests that it is the acquisition of this spatial knowledge and its use on the job that causes the taxi driver’s posterior hippocampus to grow larger.” Apparently it’s not actually driving the streets, or learning the information alone, that causes the change — London bus drivers don’t show the same effect; nor do doctors, who must also acquire vast knowledge; nor do cabbies who fail the exam. Rather it seems to be the regular use of the knowledge that causes the change: Retired cabbies tend to have a smaller hippocampus than current drivers.

While driving virtual routes in fMRI studies, cabbies showed the most hippocampal activity at the moment a customer requested a destination. One cabbie said, “I’ve got an over-patched picture of Peter Street. It sounds daft, but I don’t view it from ground level, it was slightly up and I could see the whole area as though I was about 50 foot up. And I saw Peter Street, I saw the market and I knew I had to get down to Peter Street.” Non-cabbie volunteers also showed the most activity when they were planning a route. “Thus,” writes Spiers, “the engagement of the hippocampus appears to depend on the extent to which someone thinks about what the possible streets they might want to take during navigation.”

(Hugo J. Spiers, “Will Self and His Inner Seahorse,” in Sebastian Groes, ed., Memory in the Twenty-First Century, 2016.)

Summing Up

https://commons.wikimedia.org/wiki/File:Lamb_Horace_bw.jpg

In 1932, at the end of a 60-year career studying hydrodynamics, Sir Horace Lamb addressed the British Association for the Advancement of Science.

“I am an old man now,” he said, “and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather more optimistic.”

One for All?

https://www.flickr.com/photos/dave_hale/860846402
Image: Flickr

Suppose that there’s a power outage in your neighborhood. If someone calls the electric company, they’ll send someone to fix the problem. This puts you in a dilemma: If someone else makes the call, then you’ll benefit without having to do anything. But if no one calls, then you’ll all remain in the dark, which is the worst outcome:

[Image: volunteer's dilemma payoff matrix]

This is the “volunteer’s dilemma,” a counterpart to the famous prisoner’s dilemma in game theory. Each participant has a greater incentive for “free riding” than acting, but if no one acts, then everyone loses.
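To make the incentives concrete, here is a minimal two-player sketch in Python; the BENEFIT and COST figures are illustrative assumptions chosen for this sketch, not values from the post:

    # Illustrative two-player volunteer's dilemma. BENEFIT and COST are
    # assumed values used only for this sketch.
    BENEFIT = 10  # value to each neighbor of having the power restored
    COST = 3      # personal cost of being the one who makes the call

    def payoff(i_call: bool, other_calls: bool) -> int:
        """My payoff given both players' choices."""
        restored = i_call or other_calls
        return (BENEFIT if restored else 0) - (COST if i_call else 0)

    for other_calls in (True, False):
        print(f"other calls={other_calls}: "
              f"call -> {payoff(True, other_calls)}, "
              f"don't call -> {payoff(False, other_calls)}")
    # If the other person calls, free riding pays 10 against 7, so each
    # player is tempted to wait; but if both wait, each gets 0, the
    # worst outcome of all.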

A more disturbing example is the murder of Kitty Genovese, who was stabbed to death outside her New York City apartment in 1964. According to urban lore, many neighbors who were aware of the attack chose not to contact the police, trusting that someone else would make the call but hoping to avoid “getting involved.” Genovese died of her wounds.

In a 1988 paper, game theorist Anatol Rapoport noted, “In the U.S. Infantry Manual published during World War II, the soldier was told what to do if a live grenade fell into the trench where he and others were sitting: to wrap himself around the grenade so as to at least save the others. (If no one ‘volunteered,’ all would be killed, and there were only a few seconds to decide who would be the hero.)”

The Guinness Book of World Records lists the Yaghan word mamihlapinatapai as the “most succinct word.” It’s defined as “a look shared by two people, each wishing that the other would initiate something that they both desire but which neither wants to begin.”

(From William Poundstone, Prisoner’s Dilemma, 1992.)

Kindling Trouble

Now, for something a bit more serious: I am starting a new religion. Care to join? As with the Catholic religion, my religion has an index of forbidden books. There is only one book that the index forbids. Can you guess which? You probably have! It is the index itself!

— Raymond Smullyan, “Self-Reference in All Its Glory!”, conference “Self-Reference,” Copenhagen, Oct. 31-Nov. 2, 2002

Podcast Episode 121: Starving for Science

https://pixabay.com/en/wheat-grain-crops-bread-harvest-1530316/

During the siege of Leningrad in World War II, a heroic group of Russian botanists fought cold, hunger, and German attacks to keep alive a storehouse of crops that held the future of Soviet agriculture. In this week’s episode of the Futility Closet podcast we’ll tell the story of the Vavilov Institute, whose scientists literally starved to death protecting tons of treasured food.

We’ll also follow a wayward sailor and puzzle over how to improve the safety of tanks.

See full show notes …

A Cognitive Illusion

https://www.flickr.com/photos/minhimalism/5708719581
Image: Flickr

Given these premises, what can you infer?

  1. If there is a king in the hand then there is an ace, or if there isn’t a king in the hand then there is an ace, but not both.
  2. There is a king in the hand.

Practically everyone draws the conclusion “There is an ace in the hand.” But this is wrong: We’ve been told that one of the conditional assertions in the first premise is false, so it may be false that “If there is a king in the hand, then there is an ace.”

But almost no one sees this. Princeton psychologist Philip Johnson-Laird writes, “[Fabien] Savary and I, together with various colleagues, have observed it experimentally; we have observed it anecdotally — only one person among the many distinguished cognitive scientists to whom we have given the problem got the right answer; and we have observed it in public lectures — several hundred individuals from Stockholm to Seattle have drawn it, and no one has ever offered any other conclusion.” Johnson-Laird himself thought he’d made a programming error when he first discovered the illusion in 1995.

Why it happens is unclear; in puzzling out problems like this, we seem to focus on what’s true and neglect what might be false. Computers are much better at this than we are, which ironically might lead a competent computer to fail the Turing test. In order to pass as human, writes researcher Selmer Bringsjord, “the machine must be smart enough to appear dull.”

(Philip N. Johnson-Laird, “An End to the Controversy? A Reply to Rips,” Minds and Machines 7 [1997], 425-432.)

10/18/2016 UPDATE: Readers Andrew Patrick Turner and Jacob Bandes-Storch point out that if we take the first premise to mean material implication (and also allow double negation elimination), then not only can we not infer that there must be an ace, but we can in fact infer that there cannot be an ace in the hand — exactly the opposite of the conclusion that most people draw! Jacob offers this explanation (XOR means “or, but not both”, and ¬ means “not”):

I’ll use the shorthand “HasKing” to be a logical variable indicating whether there is a king in the hand.
Similarly, “HasAce” is a variable which indicates whether there is an ace in the hand.

We’re given two statements:

#1: (HasKing → HasAce) XOR ((¬HasKing) → HasAce).

#2: HasKing.

#2 has just told us that our “HasKing” variable has the value “true”.

So, we can fill this in to #1, which becomes “(true → HasAce) XOR (false → HasAce)”.

I’ll call the sub-clauses of #1 “1a” & “1b”, so #1 is “1a XOR 1b”.

1a: “(true → HasAce)” is a logical expression that’s equivalent to just “HasAce”.

1b: “(false → HasAce)” is always true — because the antecedent, “false”, can never be satisfied, the consequent is effectively disregarded.

Recall what statement #1 told us: (1a XOR 1b). We now know 1b is true, so 1a must be false. Thus “HasAce” is false: there is not an ace in the hand.

Jacob also offered this demonstration in Prolog. Many thanks to Andrew and Jacob for their patience in explaining this to me.
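For anyone who wants to check the update mechanically, the same reasoning can be brute-forced; here is a minimal Python sketch (an illustration of the argument above, not Jacob's Prolog program):

    # Enumerate both possible values of HasAce, given HasKing (premise #2),
    # and keep the ones consistent with premise #1 read as material
    # implication combined with exclusive or.
    def implies(p: bool, q: bool) -> bool:
        return (not p) or q

    has_king = True  # premise #2

    consistent = []
    for has_ace in (True, False):
        clause_1a = implies(has_king, has_ace)      # "if king then ace"
        clause_1b = implies(not has_king, has_ace)  # "if no king then ace"
        premise_1 = clause_1a != clause_1b          # XOR: exactly one holds
        if premise_1:
            consistent.append(has_ace)

    print(consistent)  # [False]: the only consistent case has no ace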