MLB's Contact Crisis

This is very much a first draft / work-in-progress. Please point out typos, confusing bits, or outright mistakes in the comments! -EMV

Part I: The Problem

A spectre is haunting North America: the spectre of declining baseball attendance. MLB attendance peaked in 2007 at 32,696 per game; last year it was down 12% from that high, at 28,768. This year to date, it’s 27,640.

That the audience for baseball may be aging is a natural fear. And the notion that Americans might eventually lose interest in the sport in favor of less subtle and more lurid and violent ones seems reasonable as well. Theodore Sturgeon’s 1964 “How to Forget Baseball” remains the only science fiction story ever published in Sports Illustrated, and posits a future where baseball is played only in Amish-like enclaves. The underrated 1969 novel The Last Man is Out is set in a future where MLB has contracted significantly and moved to smaller cities.

The good news is that there’s no evidence that either the aging of the audience or the imagined growing irrelevance of the game itself is behind the attendance drop.

In fact, it’s possible to model annual attendance in the post-PED era with terrific accuracy (adjusted r-squared = .92) with just two broad on-the-field factors. [1]
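For the statistically curious, here's what that kind of model looks like in practice. This is a minimal sketch only: the two real on-the-field factors are the ones named in the footnote, so the predictors below ("factor1" and "factor2", with made-up numbers) are purely hypothetical stand-ins, used just to show how a two-factor ordinary-least-squares fit and its adjusted r-squared are computed.

```python
import numpy as np

# Hypothetical illustration only: the post's two real factors are in
# footnote [1]; "factor1" and "factor2" here are stand-ins with
# invented values.
rng = np.random.default_rng(0)
n = 14  # roughly the number of post-PED-era seasons

factor1 = rng.normal(20.0, 2.0, n)    # stand-in on-field factor
factor2 = rng.normal(0.30, 0.03, n)   # stand-in on-field factor
attendance = 35000 - 400 * factor1 + 20000 * factor2 + rng.normal(0, 300, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), factor1, factor2])
beta, *_ = np.linalg.lstsq(X, attendance, rcond=None)
resid = attendance - X @ beta

# R-squared, then adjusted R-squared for k = 2 predictors
ss_res = np.sum(resid ** 2)
ss_tot = np.sum((attendance - attendance.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2 - 1)
print(round(adj_r2, 3))
```

The adjustment penalizes R-squared for each predictor added, which is why it's the honest statistic to quote for a small-sample, multi-factor fit like this one.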


The Future of Physics!

A few days ago I had the pleasure of briefly meeting Adam Becker, the author of What is Real?: The Unfinished Quest for the Meaning of Quantum Physics, and hearing him give a terrific introductory talk on the topic at Harvard's Science Center. Becker's isn't the first book to give maverick physicist David Bohm a fair shot, but it rather blew my mind (and thrilled me) that he first mentions Bohm in his Prologue. Bohm has gone from being omitted from popular books on physics to being the hook.

One of the things the talk did was explode the myth that Bohr won his debates with Einstein--a point I made in the first letter of mine that New Scientist ever published (the other two have been about neural correlates to consciousness):

"It seems grossly premature to declare any sort of victory for Niels Bohr in his great debate with Einstein over the meaning of quantum mechanics (26 February, p 36). Yes, the violation of Bell’s inequality in experiments by Aspect and others appears increasingly likely to rule out Einstein’s favored interpretation, local realism. But many physicists—going back to John S. Bell himself—would unhesitatingly put their money on the eventual acceptance of a non-local hidden variable theory such as David Bohm’s. And if that proves to be the case, then Einstein was correct about realism, correct about determinism (versus inherent randomness), but wrong about locality—while Bohr was wrong on all three counts."
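For readers who want to see the numbers behind the Bell-inequality claim in that letter, here is a short sketch of the standard quantum-mechanical prediction for the CHSH version of the inequality. For a spin singlet, the correlation between measurements at analyzer angles a and b is E(a, b) = -cos(a - b); local realism caps the CHSH quantity S at 2, while quantum mechanics (and the Aspect-style experiments) reaches 2√2.

```python
import numpy as np

# Quantum correlation for spin measurements on the singlet state:
# E(a, b) = -cos(a - b), where a and b are analyzer angles.
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH angle choices (radians) that maximize the violation
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2*sqrt(2) ≈ 2.828, exceeding the local-realist bound of 2
```

The point of the letter survives the arithmetic: the violation rules out local realism, but says nothing against non-local hidden-variable theories like Bohm's, which reproduce the same value of S.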

Becker admitted that he's not a Bohmian. I think everyone should be. So here, from an answer on Quora, is what I think is a knockdown argument for something fundamentally Bohmian being part of the eventual TOE.


Q: How long do you think it will be before a single interpretation of quantum mechanics beats the others (because of scientific evidence, not popular opinion)?

It won’t be scientific evidence that identifies the winner. It will be theoretical success.

I think that QM will be reconciled with relativity via the production of a deeper theory that produces both. The deeper theory will incorporate something close enough to Bohmian mechanics to declare it the winner. This follows necessarily from seven facts laid out in the original answer; an eighth has just been discovered and is appended below.

  1. Relativity cannot be the last word on the nature of space and time, because it cannot accommodate or explain the subjective experience of the passage of time, which is to say, consciousness. We are conscious of only what is happening in the present moment of time, yet in the Minkowski block universe there’s no such thing. There therefore must be a deeper theory of space and time from which relativistic theory emerges, and which can accommodate consciousness by featuring a real time.

  2. There’s a second, entirely independent argument that time must be real, and that we hence need a deeper theory. We’ve gotten used to the counter-intuitive idea that special relativity (SR) says that time is an illusion, but people miss that it also says that interactions (causal force) are illusory, too. Time has a fundamentally puzzling aspect—the conflict between theoretical reversibility and what we observe—that makes a counter-intuitive reframing seem quite plausible. This is not the case at all with interactions, which are the bedrock of physics. It’s extraordinarily counter-intuitive that when we’re struggling to separate two powerful magnets, all that “really exists” is the arrangement of matter in spacetime according to a set of mathematical rules, rather than an actual causal force.

  3. The seeming impossibility of reconciling general relativity and QM suggests that both QM and relativity emerge from a deeper theory. In this paradigm, the way you explain what happens very close to black holes or in the moments after the big bang is not via a theory of “quantum gravity”—the very concept of which is revealed to be a sort of category mistake—but by directly using the deeper theory, the one that is applicable in conditions before QM and relativity separate conceptually.

  4. Only Bohmian mechanics holds relativity (specifically, SR) to be other than fundamental; it calls for an absolute standard of simultaneity and frame of reference, neither of which we can ever detect (these would, however, be features of the deeper theory).

  5. Bohmian mechanics cannot be made compatible with SR, which some hold to be its fatal flaw, but a careful analysis reveals that this is a feature, not a bug. Ordinary QM was made compatible with SR only through the enormous effort and work-around of renormalization. Such efforts have failed for QM and GR.

    Reframe this: the more complete version of QM, Bohm’s, is incompatible with both SR and GR, which is consistent. The version of QM that’s missing a key component (a mechanism for entanglement, which seems to violate SR) can be jury-rigged for SR compatibility precisely by dint of its incompleteness. But now the fact that SR can be made compatible with QM, while GR cannot, is inconsistent, and hence problematic.

  6. The ubiquitous quantum entanglement that uniquely characterizes Bohmian mechanics keeps on showing up elsewhere. It follows if you try to derive spacetime from information (which suggests that it follows if you try to derive it from anything more fundamental), and it also follows if you try to create a version of QM that avoids complex numbers (real-vector-space QM). Since complex numbers cannot refer to anything real and are just a mathematical shortcut, they need to be absent from the deeper theory.

  7. Bohmian mechanics removes QM’s fundamental randomness. A deeper theory has the potential to remove all of the weirdness of QM, by deriving the wave function as a genuine statistical mechanics and explaining how wave / particle duality arises.

  8. Update: On September 18, 2018, the world of physics was shaken by the publication in Nature Communications of a gedanken experiment by Daniela Frauchiger and Renato Renner showing that quantum theory “cannot consistently describe the use of itself.” The paradox results when quantum measurements have been made, but observers ignorant of those measurements model the system containing them as still being in a superposition (as the QM formalism demands they do). I think there is a solid argument that only in Bohmian mechanics is there no paradox, because the philosophical import of Bohm is that it’s not legitimate to use QM to evaluate a system where a quantum measurement may have taken place. Superpositions and state reduction (“wave function collapse”) in Bohm are epistemic, not ontological; they reflect only our knowledge and not reality. Therefore, QM cannot be used recursively.
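Since several of the points above turn on what Bohmian mechanics actually adds to the formalism, here is a minimal numerical sketch of its central equation, the guidance equation v = (ħ/m) Im(ψ′/ψ), evaluated for a Gaussian wave packet carrying momentum p0 (my choice of units sets ħ = m = 1). The velocity comes out to exactly p0 at every point, illustrating how Bohm supplies definite particle trajectories where ordinary QM supplies none.

```python
import numpy as np

# Bohmian guidance equation in 1-D: v(x) = (hbar/m) * Im( psi'(x) / psi(x) ).
# Units with hbar = m = 1. The wave function is a real Gaussian envelope
# times a plane wave exp(i*p0*x); the envelope contributes nothing to the
# imaginary part, so the guidance velocity is exactly p0 everywhere.
hbar = m = 1.0
p0 = 2.0      # packet momentum
sigma = 1.0   # packet width

x = np.linspace(-3, 3, 7)
psi = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * p0 * x)

# Analytic derivative of psi (product rule on envelope and phase)
dpsi = (-x / (2 * sigma**2) + 1j * p0) * psi

v = (hbar / m) * np.imag(dpsi / psi)
print(v)  # every entry equals p0 = 2.0
```

For a superposition of packets the velocity field becomes position-dependent, which is where the theory's characteristic non-locality enters; this single-packet case is just the simplest worked instance.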

Bohm himself believed in a deeper reality beneath our observable one. One of his many ideas about the relationship of this “implicate order” to our “explicate order” was that the latter is a projection from the former. This idea predated the holographic theory (a finding from black hole thermodynamics which shows that the contents of any volume of spacetime can be projected from its surface) by many years. It seems to me like a surefire candidate for an element of the deeper theory.

I’d say five to ten years before we get something that looks workable.

Modern Indie SF Films: The Definitive Critical List

I think it's easier to point people to this labor of love by posting the link here (opens in new window).

You'll note that the IMDb plot summaries, which in their list format play an important role, are woefully inconsistent in quality. I actually wrote (which is to say, rewrote) the summaries for #'s 4, 43, and 48, and have submitted improvements for 1 and 4 that they haven't used ... I may submit a bunch of others at some later point.

The Science of Free Will, a Talk at the Watertown Library ...

... on a date and time in August to be determined by you, the potential attendees!

Take the best-time survey at

Here's the description of the talk.

Got Free Will?

We feel we have free will, because we feel we have the ability to choose what we do. Science says that’s an illusion: our choices may seem to be free, but they are in fact utterly determined by the past and the laws of nature. Our brains are calculating what to think and do next, with no more freedom of will than our smartphones.

Most philosophers agree. Either “free will” means something other than what we think it does, or it doesn’t exist at all.

I say: it’s real … and it’s spectacular.


The problem of free will is likely the thorniest in all of philosophy. Yet it is a fundamentally scientific problem, and the philosophical debate largely reads like science rather than philosophy. (Geeks rejoice!)

We all possess an innate understanding of free will. It derives from our apparent capacity to “choose otherwise,” and seems to be necessary for moral responsibility. As just mentioned, however, science tells us that the laws of nature seem to preclude free will. There seems to be no actual capacity to choose otherwise. This is the problem of determinism. (The repetition of “seems” in this paragraph seems to be necessary, too.)

If human beings do have free will, then one of two things must be true. Either science is wrong about determinism, or our innate concept of free will and moral responsibility is incorrect—but correctable. And these are problems in physics and psychology, respectively. More science!

I open this talk with a summary of the second chapter of my work-in-progress, Mind as Matter: A Comprehensive Theory of Phenomenal Consciousness and Actual Free Will. I explain the basics of the philosophical and scientific debate, and outline the essential arguments of the opposing philosophical camps—those who believe the science of determinism must be wrong, and those who believe that “free will” must mean something other than what we think it means.*

Once I’ve done that, the talk takes an unexpected direction.

“Only for a conscious agent is there a problem of free will, and if free will does exist, it can only exist in conscious agents.”—John Searle.

And I would add: the only thing that only consciousness can do is provide a mechanism for free will.

In the talk’s second half, I explain how consciousness works, and how it serves as a way to introduce genuine free will into an otherwise deterministic universe.

Oh, and if I have time, I’ll explain how quantum mechanics makes sense.

* Immanuel Kant (1781) famously disapproved of the latter option: “Some try to evade [determinism] by … a comparative notion of freedom … as we call the motion of a clock a free motion, because it moves its hands itself … so although the actions of man are necessarily determined by causes which precede in time, we yet call them free … This is a wretched subterfuge with which some persons … think they have solved, with a petty word-jugglery, that difficult problem, at the solution of which centuries have labored in vain.”

The (Secret) Philip K. Dick Reading Plan, Part I

This Monday evening, at the Coolidge Corner Theatre in Brookline, I gave a short introduction to their presentation of Blade Runner: The Final Cut (the first entry in their "Big Screen Classics" series for this year). That intro, which explained the influence of the film on Philip K. Dick's literary reputation, is on my Facebook page, and a longer version will appear here next. But I also promised to post a PKD reading plan that would also provide the secret to becoming a Dickhead Dickophile.

And the secret is this: the sum of all of PKD's best novels is way, way greater than its parts. The novels talk to each other in your brain, and they do so whether you're thinking about them or not.

As a result, it's very common for folks to read three or four PKD novels, thinking, this guy is really good. (Like other really good authors they've encountered.) And then, on the fourth or fifth novel, a tipping point is crossed. The novels start that internal conversation, and suddenly you want to read them all.

I said this in the talk: I can think of no major literary figure with a broader range of characteristic styles and structures, and a wider set of major thematic concerns. PKD was asking so many big questions—what’s the nature of reality, what makes us human, why is there evil, just to start with—that you couldn’t address them all in any one book.

In fact, what is presented in the novels is a fairly complete and comprehensive worldview. But by its very nature, it is necessarily distributed across many books.

And it’s not only that so many different questions are addressed, or themes explored (e.g., the innate human capacity to deal with adversity, even on a cosmic level). Each of Dick’s major concerns is too big to cover completely in a single novel. Why is there evil in the world? And what should our response to it be? You could devote an entire literary career to exploring that theme. And indeed there are key reflections on the problem of evil in The Man in the High Castle, Do Androids Dream of Electric Sheep, and A Scanner Darkly (to begin with), but no one of those contains the full argument. And a similar breakdown could be given for each of his other major concerns.

So, here's a PKD reading plan, in which I'll address the other remarkable feature of his oeuvre—the structural and stylistic variety.

There are seven novels that made his reputation, and six of them fall into three well-matched pairs. The seventh is Androids, and that has always been my choice as the best place to start.

Here are the other six, with the dates they were completed.

The Man in the High Castle (11/61)
Martian Time-Slip (10/62)
The Three Stigmata of Palmer Eldritch (3/64)
[Do Androids Dream of Electric Sheep? (6/66)]
Ubik (12/66)
A Scanner Darkly (4/73)
VALIS (11/78)

The Man in the High Castle won Dick the Hugo Award for Best Novel, and was, on and off, the only PKD novel in print, other than his most recent, at the time of his death in 1982. It is being adapted by Amazon as a mini-series; the pilot is already streaming for free on Amazon Prime, and it’s terrific.

It and Martian Time-Slip (which the great British sf author and major PKD-phile Brian W. Aldiss once adapted for an unproduced BBC mini-series) form a pair. Both novels were written with PKD’s preferred working method, which was to spend four to six months taking hundreds of thousands of words of notes, in ever increasing detail, until he had the entire plot of the novel in his head. (He apparently discarded the notes along the way; none were saved, and David G. Hartwell, PKD’s last editor, has testified that Phil sold him The Transmigration of Timothy Archer by telling him the entire story, scene by scene, in a recitation so detailed it lasted hours, without notes.)

Both novels use PKD’s favorite early-sixties structure, which many film buffs will recognize from movies like P.T. Anderson’s Magnolia and Alejandro Iñárritu’s Babel. Multiple, seemingly separate story threads, each told from the point of view of a different character, intersect in all sorts of ways, from head-on collisions to hugely ironic glancing blows, of which the characters themselves are often unaware.

As a result of this combination—careful advance plotting, but extraordinary complication—the novels are very well written but nevertheless do not really show off Dick’s considerable prowess as a prose stylist. And that’s because PKD almost never rewrote his novels. He typed them up and sent the first drafts to his agent. (Hartwell says he received the finished manuscript of Timothy Archer ten weeks after that conversation.)

The Three Stigmata of Palmer Eldritch and Ubik form another pair (even though their composition was separated by a dark period in PKD’s life, in which he wrote a few of his least satisfying novels.) John Lennon wanted to film The Three Stigmata; Michel Gondry recently abandoned his effort to adapt Ubik, calling it unfilmable (by which he meant that the one screenwriter he hired to tackle it couldn’t crack it).

This pair of novels used his backup working methodology, implemented when he was desperately short of cash in the early sixties. And that was to take handfuls of amphetamines, sit down at the typewriter, and plot a novel as he typed, in the space of four or five weeks. (We know that six weeks would typically pass between the receipt of one manuscript by his agent and the next, and I’m guessing ten days to rest a bit and decide what the next book would be about, in general.)

As a result, these books have writing that’s at times so sloppy it’s hard to believe they’re by the same author as the earlier or (especially) later books. And yet there are flashes of startling prose in these novels. Here’s the appearance of an immense, inimical alien spaceship in Now Wait for Last Year:

It was huge enough, he thought, to feed forever; even from the spot where he stood, at the very least a mile from it, he could see that it consisted of a limitless, appetitive self that would begin any time now to gulp down everything in sight.  

Of course, they also have plots so good that it strains credulity that they were composed on the fly. Furthermore, they have an exhilarating breakneck pace, in rather jaw-dropping contrast to the measured pace of his earlier books. There aren’t many writers who have written novels that some readers will, not unfairly, find too slow to get through—and who have other novels that those readers can be pointed at with the warning that they may need to be read at a single sitting. Ubik, in particular, is a book I’ve called “The Space Mountain of literature,” in comparison to other books that are mere roller-coaster rides.

Finally, we have a third pair of novels, A Scanner Darkly, quite faithfully adapted, via rotoscoped animation, by Richard Linklater, and VALIS, turned into a quite beautiful opera by Tod Machover. Both are to some degree autobiographical (the first half of VALIS almost completely so), dealing respectively with PKD’s experience in the ’70s drug culture, and the strange, quasi-religious experiences he had in February and March of 1974 and which obsessed him until the end of his life. And they contain his very best writing, again accomplished in a single draft. I actually discovered the handwritten draft of most of the opening chapter of VALIS in the PKD papers stored in (literary estate executor) Paul Williams’ garage; a comparison to the published text revealed a single changed preposition.

The wild card in the plan is the existence of Dr. Bloodmoney, or How We Got Along After the Bomb (2/63), which has a reputation nearly as great as these seven, and is a third great novel in the style of High Castle and Martian Time-Slip, written immediately after those two (something I correctly divined despite the publication dates suggesting otherwise). Paul had always planned to someday go to the offices of Dick’s agents and check the dates of receipt of all the manuscripts, but my suggestion that doing so would make much better sense of PKD’s career prompted him to make it a priority. He published the findings in his book on PKD, Only Apparently Real, but he sent me the results right away as a reward for my correct guess.

In my next post, I’ll outline the specific reading plan.

A Massively Cool Interstellar Insight (Spoilers, Duh!)

How does "quantum data" solve the puzzle of gravity? Here's a post from IMDB's message boards (interesting that the style here is a bit more conversational than elsewhere on the blog) ...

I've been thinking about this a bit, and as is so often the case in Nolan's movies, there's really cool stuff behind the scenes that you can safely infer happened.

So daddy Brand reached a point in his theoretical work on gravity where he was stymied because "he couldn't reconcile gravity and quantum mechanics." But that's exactly where physics stands today: we don't have a theory of quantum gravity. There must have been more to that report of his, more scientific detail about the specific roadblock he ran into, that Nolan leaves out of the movie because you don't ever want it sounding like a science lecture.

I'll come back to that in a moment.

But we can infer that he believed that the roadblock could be overcome if they had data for the way things behave quantum mechanically near a singularity. Because we don't know the answer to that. The theories don't make sense when used together in a massive gravitational field.

They clearly set up an experiment for TARS to run if he (it?) ever got the opportunity. Murph would be able to find the details of that experiment, once she knew that it was necessary. She would know what data to expect. And if that data were a simple 2-D graph, the way some property X behaves as a function of time or distance, it's very credible that Cooper could transmit the results to her. It could be several such graphs, of course. (Alternatively, Murph gets the data first and then finds the key to unlock it when she knows to look for it. But the key is that the experiment was planned and that she can find a breakdown of what data would constitute the results.)

So, in what way was Prof. Brand stymied? I came up with a bunch of possibilities, but one stands out because it's so cool.

There are currently a bunch of different attempts to reconcile quantum mechanics with gravity (i.e., general relativity). There's string theory, there's loop quantum gravity ... well, if you're interested, Google it!

A lot of physics will get done between now and the time Interstellar happens. Clearly the quantum gravity problem has still not been solved.

I believe that Brand has determined that there are three rival theories, each of which explains all the known data, but which cannot be discriminated among. (This happens all the time in physics: very different theories make subtly different predictions, which can only be measured with great difficulty.) The three theories would lead to entirely different ways of creating anti-gravity. Each would take all of the remaining scientific resources on Earth -- IOW, each one requires the building of a machine the size of the Large Hadron Collider, but it's three different machines.

IOW, the situation Brand found himself in is precisely the one that the crew of the ship find themselves in when they get out of the wormhole.

You might ask, why not pick one of the theories at random and build that machine? One in three odds of saving mankind are better than none in three. But it's quite possible that each of the theories had tunable parameters (this is actually true in string theory). IOW, even in this broad theory, it works with many different combinations of, say, the fundamental strength of gravity and some aspect of the dimensionality of spacetime. Even if you got lucky and built the right sort of machine, without the data you wouldn't know how to tune it to get it to work. You would have to spend years trying different variations, by trial and error. Still hosed.

So the real plan A was to try to get the quantum data. I'll have to see the film again to see if Brand's behavior is consistent with that, but I think it is.

-- Why claim to still be working on the theory? Because that sounds much more hopeful. The odds of actually having TARS run the needed experiment, and getting the data back to Earth, were very small. Plan A was real, but it was a huge longshot, essentially a Hail Mary pass with no time on the clock. If you were going to tell the truth about that, it's close to admitting that there is no likely solution. Better to say, I'm working on it and I vow to complete it.

-- That Brand does vow to continue to make an effort to solve the problem is now explained. That was sincere, and not a lie. He vows to wait for the quantum data, should it ever come, and use it to complete plan A. His lie only involves the odds of success.

-- His deathbed despair is well explained by the fact that they have not heard from the ship in years, and communication from it seems impossible. Plan A has failed, and he did lie about it.

The other thing that's so cool about this is that it requires both Brand and especially Murph to be much less brilliant than if there were no theory at all. It totally strains credulity that Cooper's daughter is a Hawking-level genius. But if she's merely an excellent physicist, one good enough to get a position at a major college (if there were any left), that's entirely credible, and she would be able to use the data, figure out which theory it pointed to, and determine the proper parameters -- all tough, challenging work, but not requiring an improbable level of genius.

Now, if I can figure all of this out in 36 hours, I certainly think the Nolans must have done so over the three years they worked on the movie. And I think it's wonderfully cool.

Alzheimer's Susceptibility and the MBTI Intuitive / Sensing Trait

A copy of a letter just sent to New Scientist. They've printed two of mine already--both on the interpretation of quantum mechanics, a topic yet to appear here.

Jessica Smith of the UK Alzheimer’s Society points out the need to “tease out the difference between those with type 2 diabetes who develop Alzheimer’s and those that don’t” (6 December, p 6). I believe a hugely important clue to Alzheimer’s susceptibility was discovered by the Nun Study of Aging and Alzheimer’s Disease, but its practical significance has gone unrecognized.

The Nun Study compared age-22 autobiographical essays written by those who did and did not go on to develop the disease, and discovered a remarkable difference in what the Study analysts have characterized as linguistic density and complexity (high levels being protective). That difference, however, can be better characterized by the “intuitive / sensing” trait dichotomy of Jung’s personality theory; the nuns at risk had characteristically produced mere lists of facts about themselves, while those who proved to be immune featured a “complexity of interrelated ideas.” (This interpretation was doubtless missed because the commercialization of Jung’s theory as the Myers-Briggs Type Indicator has made it unfashionable, and because it is mistakenly believed to be necessarily in conflict with the better-established Big Five model, when it in fact attempts to describe a deeper level of traits).

In a term paper for a Harvard graduate seminar in 2001, I presented a wealth of evidence to argue that this trait (better called “interpretive / empirical”) is fundamental, and is mediated by the basal forebrain cholinergic system, whose paradigmatic role is the inhibition of the brain’s default associative spread. (The paper received an A grade from Professor Mark Baxter, now of the Icahn School of Medicine at Mount Sinai, and one of the pioneering researchers on the role of acetylcholine in regulating attention.) The cholinergic system is well known to innervate the pyramidal cells that in Alzheimer’s are destroyed by beta-amyloid plaques, and to suffer catastrophic damage itself, beginning in the early stages of the disease.

My hypothesis in 2001 was that interpretive types, with their innate low levels of cholinergic regulation, were somehow immune to the disease process. The apparent mechanism is now clear. In pyramidal cells, amyloid precursor protein (APP) is subject to well-established regulation by acetylcholine. Deficits in this regulation are known to increase the production of beta-amyloid, which in turn is neurotoxic to the cholinergic cells. This is one of the destructive feedback loops characteristic of the disease process, and it may underlie the extraordinarily elevated disease risk of individuals with Down syndrome. Trisomy 21 produces an abnormal surplus of APP—seemingly too much for ordinary levels of acetylcholine to regulate properly.

It would therefore seem to be impossible for interpretive types, with their low innate levels of cholinergic innervation, to have anything but correspondingly low levels of APP in the cells involved in the disease process. These levels are apparently so low that aberrant production of beta-amyloid, regardless of originating factor, can never cross the threshold where any of the components of the disease process can begin. (While the function of APP is far from fully characterized, it is believed to be involved in synapse formation, which could be part of a mechanism for coordinating normal APP levels with their regulatory innervation.)

This hypothesis is attractive, because the interpretive / empirical trait is immensely easier to test for than the linguistic density and complexity that it mediates. One simply asks for endorsement of a statement such as “things remind me of other things all the time”; a 4 or 5 answer on a 5-point Likert scale indicates a low or extremely low disease susceptibility. (That particular question has achieved highly significant correlations with predicted behavioral responses in a preliminary version of my own trait instrument.) Verifying this would be a great boon to ongoing research. Researchers who would like a copy of the 2001 paper may contact me by commenting on the [copy / longer version] of this letter at

Peter Jackson's Lord of the Rings, Film Criticism, and Tarsem Singh's The Fall

Over at the Boston Globe, film critic Peter Keough is in the process of soliciting entries for next Sunday's "Cinemania" column: all-time best fantasy adventures. Here's the letter I wrote him.

As a World Fantasy Award nominee (essentially for critical work) and a cinephile, I feel it’s my holy duty to chime in here. My choices are easy: the complete Extended Edition of Peter Jackson’s The Lord of the Rings; The Wizard of Oz; and, if you regard them as adventures (and why not?), Spirited Away and La belle et la bête.

But I have more on my mind than simply voting. I want to heap outlandish praise on a film that, at first glance, doesn’t seem to need it, and to champion one that doesn’t technically qualify and to which you gave a lukewarm review.

The last time I checked, the various installments of The Lord of the Rings had the highest user ratings of any films at Netflix. They were rapturously reviewed by contemporary critics, and buried in Oscars. And yet they combined for just a single vote in the last Sight & Sound poll of critics and directors, which is to say one less than Borat. Is it possible that a film so universally regarded as exquisite entertainment has no deep value as art?

Of course not. In fact, it’s my opinion that the 11 hours of The Lord of the Rings constitute the single greatest achievement in the history of cinema. Now, that’s an opinion you might expect to hear from a fanboy who hasn’t seen much else, but the film that LOTR edges out of the top spot in my heart is the five-hour original cut of Fanny and Alexander. My list of all-time favorites (besides the usual suspects like Vertigo and 2001) continues with the likes of A Separation, O Lucky Man!, The Rules of the Game, and The Passion of Joan of Arc, and if you put a gun to my head I might choose Ugetsu over Seven Samurai as my favorite Japanese film. So, yes, I’ve seen all of the contenders. I’ll still take the Jackson, thank you.

The reason why LOTR has garnered no deep critical respect, I think, is simple: it’s too long. If its brilliance could be boiled down to three hours so that critics could watch it one night and Seven Samurai the next, there’s no question in my mind that it would have comparable stature. But to grasp just how great the film is, you have to watch all three Extended Editions (and I’m sure that many critics haven’t seen them at all), and preferably in one day. And then you have to do that at least a few more times to discover that the film indeed gets better and deeper with each screening (I’m up to 11 times, including five day-long marathons). Who has that time when every new week brings a Hangover Part III and R.I.P.D. to review?

I do think the film will eventually acquire the reputation it deserves. Sharp critics who revisit the film while open to the idea that it is high art and not just a wonderful diversion will begin to sense how deep the characters are and how radical the narrative is, and acquire the desire to explore further. How many films of this scope change the protagonist in the middle of the third act? I rather doubt that the last two reels of the last Hunger Games installment will be from the point of view of Katniss’s gardener. (This is of course inherent in the source material, but Jackson does an extraordinary job of underlining it.)* How many films of heroism and triumph end with essentially all of the characters suffering some degree of heartbreak?

In its novel form, this is the most popular story of all time, and there’s a wealth of critical literature justifying that response. Jackson’s adaptation is by no means flawless, but as often as not it improves on the source material (e.g., the character of Boromir). Of course it’s a masterpiece.

And the film that needs to be mentioned in this discussion, even though it’s not a fantasy per se, is Tarsem Singh’s The Fall. As you may remember, this is a film about someone telling a fantasy story. It is not, like The Princess Bride, a fantasy story with an extrinsic frame (I believe you made this error of interpretation, as did many other critics). The story told by Lee Pace’s crippled stuntman character is not just secondary to the surrounding narrative; it is also not very well told, which is to say the incoherence that bothered many critics is intentional (in a way that’s always pointed and often comic). What’s more, it is sometimes misunderstood by its audience, for whom English is a second language! And yet, once you grasp the complex relationship between the teller, the listener, and the tale, their combination is astonishingly moving, despite these handicaps. I know of no better narrative exploration (in film or in literature) of the power and methodology of story in general, and of fantastic story specifically. It’s one of my fifteen favorite films of all time. You should really give it another chance; you may find it a revelation.

* In the novel, Frodo says to Sam “I am glad that you are here with me” as they await their death at Mt. Doom; Jackson gives this line to Frodo at the end of the first film and has Frodo say “I’m glad I’m here with you, Samwise Gamgee” at the end. And once you understand that the POV has shifted, the notorious “multiple endings” make sense. There’s really only one false ending, and it’s as essential as Vertigo’s: they get back to the Shire, but instead of the expected fairy tale ending, Frodo is not happy.
Peter replied graciously:

Well argued, though I don't agree 100%. I still think the books are much better, simply because it allows me to imagine a lot of it, instead of having it all shown. True of all films, perhaps, but more so in this.

Your other choices are also intriguing.

Too bad I don't have room to reprint this.
My reply:

I’ve read the novel 17 or 18 times; along with John Crowley's Engine Summer, it's my favorite book in the world. I find that every time I see the film, I let go of the novel’s brilliance more, and can better appreciate the film as an equally great alternative. And that ends up solving the problem you mention; what I imagine when I read the novel has its own life (e.g., Legolas is more “elfin” yet somehow not nearly as pretty).

Where Jacoby Ellsbury's Power is Going (Part 2 of 2)

At the time of the last e-mail in Part 1, Jacoby Ellsbury was leading off and hitting .283 / .366 / .404. He remained in the leadoff spot through July 25th, hitting .220 / .254 / .273 in the intervening 30 games to drop his season line down to a much less satisfying .259 / .327 / .355.

They then dropped him to the bottom of the order for 9 of his next 10 starts (7 games hitting 9th and 1 each hitting 7th and 8th). He hit .429 / .459 / .629 in those 10 games (and two PA off the bench), while going 0/4, HBP in his sole game hitting leadoff.

They then returned Ellsbury to the leadoff spot, and he hit .300 / .333 / .438 the rest of the way.

Ellsbury's performance when dropped down in the order was precisely as I had predicted, but there was weak evidence of a changed overall approach upon his return to the top of the lineup. He was better, but not so much better that I've ever been motivated to test the improvement for significance.

Nevertheless, I felt convinced at the time that this interpretation of his performance and power upside was correct, and I looked forward to the day when he would start attacking the ball whenever he was in a pitcher's count, and blossom as a home run hitter.

Again, I have no idea how these e-mails actually fit into Ellsbury's history. At one extreme, it's possible that they tried getting him to change his approach immediately, and couldn't convince him to try it for two years. At the other extreme, the e-mails may have been neglected and forgotten (although I had been thanked and told they were interesting), even when they finally noticed for themselves that Ellsbury's approach in most hitter's counts was criminally defensive.

What we do know is that he started hitting home runs in 2011. If you look at his 2011 splits, you'll see that he hit .388 / .512 / .711 when ahead in the count, an OPS relative to league (sOPS+) of 154. In 2009, he had hit .319 / .455 / .476 when ahead, which sounds good, but is actually well below average (sOPS+ of 86). His sOPS+ when behind in the count went up from just 170 to 180.
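A quick aside on sOPS+ for anyone unfamiliar with it: it indexes a player's OPS in a split to the league's OPS in that same split, via the standard OPS+ construction. Here's a minimal sketch; the league ahead-in-the-count line below is a made-up placeholder, not the actual 2011 AL split:

```python
def sops_plus(obp, slg, lg_obp, lg_slg):
    # Standard OPS+ construction applied to a split:
    # 100 * (OBP / league OBP + SLG / league SLG - 1)
    return 100 * (obp / lg_obp + slg / lg_slg - 1)

# Ellsbury's 2011 ahead-in-the-count line was .388 / .512 / .711.
# The league line here (.430 OBP, .500 SLG) is purely illustrative.
print(round(sops_plus(0.512, 0.711, 0.430, 0.500)))
```

With the real league baseline, the same arithmetic yields the 154 quoted above.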

And you know what? It was incredibly obvious. He was sitting on pitches and hammering them, and he'd never done that before. It caught my eye almost immediately and made me very happy. I think even Jim Rice noticed it and pointed to it as the cause of his improvement.

What's truly interesting is Ellsbury's percentage of balls hit in the air. Prior to 2011: .475, .483, .499, .507 ... and the next term in that series is .570. That, folks, is best explained as the product of a somewhat different swing. A swing that a guy who believed he might be able to hit more home runs by sitting on more pitches might adopt. So we have good reason to believe that Ellsbury changed his approach in two different, complementary ways in 2011.

And then he suffered a serious shoulder injury seven games into 2012, and was out until the All-Star break. He ended up with .533 balls in the air, easily his second highest career mark, but a decline in HR/FB from .167 to .047. In other words, the shoulder injury killed his power, but his swing was still close to his 2011 version, and since he wasn't hitting the ball on the ground, where BABIP thrives, he ended with a dismal slash line of .271 / .313 / .370.

His 2013 divides into two very interesting chunks. On July 3 he was hitting .298 / .361 / .404, with a career-low .474 balls in the air, and a microscopic .013 HR/FB. This is a guy who recognizes his shoulder is still hurting him, has gone back to hitting the ball on the ground, and is having success with the style.

From July 4th on, Ellsbury hit .298 / .346 / .458, with a .518 balls in the air and, more tellingly, a .140 HR/FB. Now, there's no question in my mind that his 2011 numbers were greatly boosted by the reluctance of opposing pitchers to take him seriously as a power threat (that's just based on watching every game and often saying "I can't believe they threw him that pitch" while grinning). I think his 2013 second-chunk .140 HR/FB is precisely the same pop as his .167 from 2011; the difference is tougher pitches.

So the next question when we try to project Ellsbury's power going forward is his balls-in-air percentage. I don't think it's too easy or even desirable to change your swing too much mid-season, so I think it's quite possible that his .474 to .518 change in 2013 doesn't represent what he's capable of, if he begins a season feeling his shoulder is 100% (as he ought to in 2014) and is in a home-run hitting frame of mind (ditto). I think .540 or .550 is reasonable.

Finally, there's Yankee Stadium. In 2011, it would have turned his 15 Fenway homers into 17 (adding five to right and subtracting three from left), but in 2013, it would have turned 4 into 11. It's hard to derive a prediction from that, and no one has HR/FB Park Factors by batter handedness. But my most recent edition of the Bill James Handbook tells me that LHB hit about 75% more HRs in Yankee Stadium than in Fenway.

So, what happens if Ellsbury maintains his second-half HR/FB, improves his balls-in-air percentage to .550, and plays 120 games for the Yankees? He hits 25 HR. If he plays 140 games, he hits 30.
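The arithmetic behind those numbers can be sketched directly. The balls-in-play-per-game figure below is my own rough placeholder (it wasn't part of the original reasoning); the air rate and HR/FB are the ones argued for above:

```python
def project_hr(games, bip_per_game, air_rate, hr_per_fb):
    # HR = games x balls in play per game x share hit in the air x HR per fly ball
    return games * bip_per_game * air_rate * hr_per_fb

# ~2.7 balls in play per game is a hypothetical value for a full-time
# leadoff hitter; .550 and .140 are the rates discussed above.
print(round(project_hr(120, 2.7, 0.550, 0.140)))  # 25
print(round(project_hr(140, 2.7, 0.550, 0.140)))  # 29
```

With these placeholders the 120-game case lands right on 25; the 140-game case comes out at 29, within rounding of the 30 above.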

I think he's going to be worth the money.

Do I wonder whether I helped him earn even a tiny part of it? I suppose I do.

Where Jacoby Ellsbury's Power Came From (Part 1 of 2)

In June of 2009 I was still receiving severance pay from the Boston Red Sox after being laid off as a baseball operations consultant unexpectedly at the start of the year—a move dictated by upper management as a response to the recession. I was grateful for this unexpected generosity, and when we parted ways I told them I’d send them anything I found truly interesting and important, as a freebie.

The previous year I had noticed that Jacoby Ellsbury had astounding splits depending on the identity of the following hitter, and they were the opposite of the conventional wisdom: he was amazingly better with a weak hitter behind him.

In this pair of long e-mails to the team, I explain that phenomenon, and advise them on a simple change Ellsbury could make in his approach—one that would unlock his power potential.

Note that I am not claiming responsibility for Ellsbury’s blossoming as a hitter. In fact, it's quite possible that this argument was ignored in the short run, and that his hitting coaches eventually noticed what I did, and gave him the same advice. But I believe it completely explains where Ellsbury’s 2011 power spike (usually regarded as a bizarre fluke) came from. More about that in Part 2.

June 14: A Freebie (Ellsbury Solved)

I’ve got 5 different lines of evidence, all independent, that point to the same conclusion:

When Jacoby Ellsbury thinks it’s his job to get on base (which is most of the time), he does no guessing, no sitting on pitches in hitter’s counts, and just reacts to what’s thrown. And he’s not very good. When he doesn’t think that’s his job, he does guess and does look for pitches to drive—and is a vastly better hitter as a result.

Here’s the direct evidence that he usually is just reacting. It’s from this year’s pitch/fx data and is about a week old. I’ve been looking all year at pitches thrown right down the middle (center third of the 20” of the plate plus the black, center third of the zone top to bottom as defined by pitch/fx). Here’s Jacoby’s rank among the 9 regulars:

1st in percentage seen (of all pitches)
7th in % taken (only Ortiz and Bay have swung at more)
2nd in lowest % of swings and misses (only Lowell is better)
2nd in lowest % of fouls per swing (barely trailing Lowell)
and hence a huge #1 in % of pitches down the middle hit fair

He is 5th in BA on those balls hit fair, but next to last (ahead of only Green) in Iso and SA.

This was puzzling. He swings at way more pitches down the middle than average and rarely misses them, but doesn’t hit them hard at all.

Well, why would a player take a pitch right down the middle? Because he was expecting something else. Why would he swing and miss at one? Because he was expecting something else and didn’t recognize the difference.

That Ellsbury rarely takes or misses a pitch down the middle suggests that, on the whole, he’s not expecting anything. He’s just up there reacting. Not that big league hitters guess on every pitch, but I think the good ones all do it, to a degree, especially when ahead in the count.

Is there more evidence to support this? Take his career splits by count (as of last week). He actually has a higher BA and SA after falling behind 0-1 (.305 / .401) than getting ahead 1-0 (.278 / .398). The reverse split is just random, but the lack of the expected positive split (last year the league was .240 / .363 after getting behind 0-1, .280 / .452 after getting ahead) indicates that he’s usually not using the count to help predict what he might see. The entire reason hitters hit better when ahead in the count is that the opposing pitching repertoire is narrowed, allowing more effective guessing / sitting on pitches.

Here’s his SA after various counts compared to the AL ‘08 average, sorted by the difference:

Count          Ellsbury   AL ‘08    Diff
After 0-2        .326      .274    +.052
After 0-1        .401      .363    +.038
After 1-2        .309      .289    +.020
After 2-1        .431      .426    +.005
After 1-1        .393      .389    +.004
After 2-2        .302      .319    -.017
After 1-0        .398      .452    -.054
After 2-0        .433      .487    -.054
Full             .288      .376    -.088
First pitch      .441      .544    -.103
After 3-0        .385      .508    -.123
After 3-1        .256      .488    -.232

Just a massive failure to take advantage of being ahead in the count, despite better-than-average numbers while being behind.
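For what it's worth, the sort in that table is trivial to reproduce; here's a sketch using a handful of the rows (values transcribed from the table above, not recomputed):

```python
# (count, Ellsbury SA after that count, AL 2008 SA after that count)
splits = [
    ("After 0-2", .326, .274),
    ("After 3-1", .256, .488),
    ("After 1-0", .398, .452),
    ("First pitch", .441, .544),
    ("After 0-1", .401, .363),
]

# Sort by Ellsbury minus league, best to worst, as in the table.
ranked = sorted(splits, key=lambda row: row[1] - row[2], reverse=True)
for count, ells, lg in ranked:
    print(f"{count:<12} {ells:.3f} {lg:.3f} {ells - lg:+.3f}")
```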

This insight allows us to reinterpret my last year’s analysis. And that gives us two more lines of evidence. As you recall, he had a split where he was immensely better with a weak hitter behind him than with an elite one. And he has been much better in his career with 2 outs than with 0.

I thought that both of these were about the pitcher’s different approach (pitch around him vs. challenge him). But now we see that it’s his own approach that changes. Hitting 8th with a Lugo type behind him, he’s not putting a premium on setting the table and starts to think about driving the ball. So he starts to guess, sit on pitches in hitter’s counts. Same thing when he’s up with 2 outs versus 0.

I would have sent this a week ago, but I wanted to test the theory with another line of evidence, another set of numbers compiled after I came up with the idea. I was going to break down his performance by count depending on following hitter, but that’s an enormous amount of work and I’m busy.

Then tonight he comes up with a four-run lead in the 9th, is obviously sitting on a pitch, and hits it out. And I realize, there’s another split to look at. He should be better with the Sox leading comfortably, when getting on base is not at a premium, than with them trailing, when it’s everything. And here are the numbers from last year:

Sox trailing, 181 PA, .272 / .320 / .361 (and 4 GDP)
Up 3 or more runs, 121 PA, .297 / .355 / .495 (and 1 GDP).

Just tell him that he should always be up there with an idea of what he’s going to see, always guessing and looking for a pitch to drive when he gets ahead, regardless of score, outs, men on base, or the hitter on the on-deck circle. And he should be terrific.


June 18: RE: Ellsbury Solved

Thanks much for [an opportunity to buy tickets at face value]. And as a reward (or punishment), the final word on Ellsbury, including, for the first time ever, why he has a 1000+ OPS hitting in front of Lugo, Lowrie, Green, Crisp, and Cash (not just combined as a group, but in front of each and every one of them!). I can’t believe it took me so long to figure out…
Numbers are through Tuesday night.

I. Admonition

Tell Ellsbury to stop trying to move the runners over!

He has 28 career PA with runners on 1st or 1st and 2nd, 0 outs, and a good hitter on deck. He’s hit .111 / .143 / .111 with 6 GDP. That’s 31 outs in 28 PA. And a 21.4% GDP rate. And he’s not improving at it—in fact, he’s 1 for his last 15, HBP, 5 GDP going back to last July 11.

He has 10 GDP in his other 207 career GDP opportunities—a 4.8% rate. The difference in rates has a 1 in 937 chance of happening randomly. He also has a .036 K+BB% in these PA vs. .182 otherwise, which is also statistically significant, so he’s exhibiting no patience at all.
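I no longer remember exactly which test produced the 1-in-937 figure, but an exact binomial tail in the same spirit — treating his 4.8% rate everywhere else as the null — lands in the same neighborhood:

```python
from math import comb

def binom_tail(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p), summed exactly
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 6 GDP in his 28 move-'em-over opportunities, against a null rate
# of 10/207 (his GDP rate in all other situations)
p_value = binom_tail(6, 28, 10 / 207)
print(p_value)  # about 0.002
```

That's on the order of 1 in 500 by this method; the original figure presumably came from a somewhat different formulation, but either way the gap is far too large to be chance.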

II. Facts

Next, here’s what I think are his relevant career splits [HRC = HR / Contact, XIP = XBH / Balls in Play, 1B% = 1B / (1B + Outs in Play)]:

Split                                           PA    BA    OBP    SA    OPS   RC/27    K%   BB%   HRC   XIP   1B%
Move 'em Over                                   28  .111   .143  .111   .254   -0.24  .036  .000  .000  .000  .115
Other gd next, 0 outs or less than 2 run lead  634  .262   .309  .353   .662    4.00  .125  .057  .014  .053  .251
Gd next, 1 or 2 outs and 2+ run lead           170  .340   .398  .431   .829    7.41  .112  .082  .000  .088  .323
Bad next, 0 outs or trail                       95  .384   .442  .500   .942    8.51  .095  .074  .026  .053  .380
Bad next, 1 or 2 outs, tied or leading          73  .464   .513  .791  1.310   20.34  .123  .055  .083  .109  .429

You can see that his strike zone command is better in rows 3 and 4 than row 2. In the last row, his K% and BB% revert to baseline (quite possibly a SSS fluke) but he has a massive spike in HR power; 36% of his career HR have been hit in these 7% of his career PA. He also has his best splits for XBH / BIP and for 1B / (1B + Outs in Play).

His overall good next / bad next split is now .272 / .322 / .360 in 832 PA (4.15 RC/27) vs. .419 / .474 / .632 in 168 PA (12.78 RC/27).

The good hitters are almost all Pedroia: 739 PA vs. 39 for Youkilis, Ortiz, Drew, and Lowell. In case you think the Bad next splits are driven by one guy, here’s all the guys with 10 or more PA:

Next Batter    PA    BA    OBP     SA    OPS   RC/27
Lugo           61  .382   .452   .564  1.015   11.16
Lowrie         39  .486   .538   .743  1.281   14.95
Green          20  .500   .545   .500  1.045   16.42
Crisp          11  .500   .545   .900  1.445   27.64
Cash           10  .500   .500  1.100  1.600   24.30

III. Interpretation

There are two related things going on here.

  • He has two different approaches depending on game situation, but the one he uses most of the time is very counter-productive.

  • As a result, his overall numbers and reputation do not reflect his actual ability as a hitter. And when he bats in front of a weak hitter, the opposing pitchers have no fear of him or the next guy and are not working to get him out at all. They’re just throwing it right over the plate, and he’s killing it.

Two approaches. The first we’ll call “just get on base” [GOB] and the second “try to do some damage” [DD]. He just tries to get on base whenever there’s 0 outs or whenever the team is trailing. He tries to do some damage when there’s 1 or 2 outs and the team is up by 2 or more runs.

When there’s 1 or 2 outs and the score is tied or we’re up by a run, his approach depends on the next hitter. He tries to just get on if the hitter is good, do some damage if he’s bad.

All of this makes absolutely perfect sense. The problem is, of course, that his GOB approach sucks. Just trying to get on knocks 90 points off his OBP.

I earlier identified one aspect of the two approaches: that he’s just reacting when he’s GOB, while he’s guessing and sitting on pitches when he’s DD. There may of course be other differences. The hypothesis is backed up by the pitch/fx data: this year he’s taken 22% of pitches down the middle in GOB situations but 38% in DD. The odds against that being random are just 5 to 1 but it is absolutely in the expected direction.
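The 5-to-1 figure is just a two-proportion comparison. Here's a sketch of the calculation with hypothetical pitch counts (the actual sample sizes aren't reproduced here; these are stand-ins chosen to give rates of roughly 22% and 38%):

```python
from math import erf, sqrt

def take_rate_test(k1, n1, k2, n2):
    # One-sided normal-approximation test that rate 2 exceeds rate 1;
    # returns the z score and the odds against the gap being chance.
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p = 0.5 * (1 - erf(z / sqrt(2)))  # one-sided p-value
    return z, (1 - p) / p

# Hypothetical: 6 of 27 middle-middle pitches taken in GOB situations
# (~22%) vs. 5 of 13 taken in DD situations (~38%).
z, odds = take_rate_test(6, 27, 5, 13)
print(round(z, 2), round(odds, 1))
```

With these stand-in samples the odds against chance come out around 6 to 1, the same order as the 5 to 1 quoted above; small samples are exactly why the result is suggestive rather than conclusive.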

No respect. How a pitcher attacks a hitter is clearly more complex than simply challenging him vs. pitching around him, which really only applies to good hitters. Bad hitters are essentially always challenged, but when they’re protected by a good hitter, they are pitched much more carefully. When Ellsbury hits in front of Pedroia they take him very seriously and work to get him out. When Julio Lugo or Nick Green is up next, they get way too overconfident and treat him like the Punch-and-Judy hitter that he isn’t.

IV. Advice.

You don’t really need to know the components of the two approaches. Just verify that he does change his approach according to the game situation, and tell him to abandon the GOB approach and just try to do damage all the time. If he does that, he should go on a tremendous tear hitting 8th.

In terms of batting order, you paradoxically want to leave him there as long as they’re disrespecting him and he’s killing the ball. It’ll probably take a few weeks before teams notice that he’s not a weak and easy out and stop feeding him meatballs. But from that point onward he should still be a .380 OBP, .430 SA guy at a minimum if he sticks with the DD approach. (And his walk rate will go up—it’s true that he sees more 3-ball pitches in the strike zone than almost anyone on the team.) So you would move him back to the top of the order when he appears to “cool down” to normal.