The Antepenultimate Truth

A Massively Cool Interstellar Insight (Spoilers, Duh!)
ericmvan
How does "quantum data" solve the puzzle of gravity? Here's a post from IMDb's message boards (the style here is a bit more conversational than elsewhere on the blog) ...

I've been thinking about this a bit, and as is so often the case in Nolan's movies, there's really cool stuff behind the scenes that you can safely infer must have happened.

So daddy Brand reached a point in his theoretical work on gravity where he was stymied because "he couldn't reconcile gravity and quantum mechanics." But that's already the situation today: we don't have a theory of quantum gravity. There must have been more to that report of his, more scientific detail about the roadblock he ran into, which Nolan leaves out of the movie because you never want it sounding like a science lecture.

I'll come back to that in a moment.

But we can infer that he believed that the roadblock could be overcome if they had data for the way things behave quantum mechanically near a singularity. Because we don't know the answer to that. The theories don't make sense when used together in a massive gravitational field.

They clearly set up an experiment for TARS to run if he (it?) ever got the opportunity. Murph would be able to find the details of that experiment, once she knew that it was necessary. She would know what data to expect. And if that data were a simple two-dimensional graph, the way some property X behaves as a function of time or distance, it's very credible that Cooper could transmit the results to her. It could be several such graphs, of course. (Alternatively, Murph gets the data first and then finds the key to unlock it when she knows to look for it. But the point is that the experiment was planned and that she can find a breakdown of what data would constitute the results.)

So, in what way was Prof. Brand stymied? I came up with a bunch of possibilities, but one stands out because it's so cool.

There are currently a bunch of different attempts to reconcile quantum mechanics with gravity (i.e., general relativity). There's string theory, there's loop quantum gravity ... well, if you're interested, Google it!

A lot of physics will get done between now and the time Interstellar happens. Clearly the quantum gravity problem has still not been solved.

I believe that Brand has determined that there are three rival theories, each of which explains all the known data, but which cannot be discriminated among. (This happens all the time in physics: very different theories make subtly different predictions for experiments, and the differences can only be measured with great difficulty.) The three theories would lead to entirely different ways of creating anti-gravity. Each would take all of the remaining scientific resources on Earth -- IOW, each one requires the building of a machine the size of the Large Hadron Collider, but it's three different machines.

IOW, the situation Brand found himself in is precisely the one that the crew of the ship find themselves in when they get out of the wormhole.

You might ask, why not pick one of the theories at random and build that machine? One in three odds of saving mankind are better than none in three. But it's quite possible that each of the theories had tunable parameters (this is actually true in string theory). IOW, even in this broad theory, it works with many different combinations of, say, the fundamental strength of gravity and some aspect of the dimensionality of spacetime. Even if you got lucky and built the right sort of machine, without the data you wouldn't know how to tune it to get it to work. You would have to spend years trying different variations, by trial and error. Still hosed.

So the real plan A was to try to get the quantum data. I'll have to see the film again to see if Brand's behavior is consistent with that, but I think it is.

-- Why claim to still be working on the theory? Because that sounds much more hopeful. The odds of actually having TARS run the needed experiment, and getting the data back to Earth, were very small. Plan A was real, but it was a huge longshot, essentially a Hail Mary pass with no time on the clock. If you were going to tell the truth about that, it's close to admitting that there is no likely solution. Better to say, I'm working on it and I vow to complete it.

-- That Brand does vow to continue to make an effort to solve the problem is now explained. That was sincere, and not a lie. He vows to wait for the quantum data, should it ever come, and use it to complete plan A. His lie only involves the odds of success.

-- His deathbed despair is well explained by the fact that they have not heard from the ship in years, and communication from it seems impossible. Plan A has failed, and he did lie about it.

The other thing that's so cool about this is that it requires both Brand and especially Murph to be much less brilliant than if there were no theory at all. It totally strains credulity that Cooper's daughter is a Hawking-level genius. But if she's merely an excellent physicist, one good enough to get a position at a major college (if there were any left), that's entirely credible, and she would be able to use the data, figure out which theory it pointed to, and determine the proper parameters -- all tough, challenging work, but not requiring an improbable level of genius.

Now, if I can figure all of this out in 36 hours, I certainly think the Nolans must have done so over the three years they worked on the movie. And I think it's wonderfully cool.

Alzheimer's Susceptibility and the MBTI Intuitive / Sensing Trait
ericmvan
A copy of a letter just sent to New Scientist. They've printed two of mine already--both on the interpretation of quantum mechanics, a topic yet to appear here.

Jessica Smith of the UK Alzheimer’s Society points out the need to “tease out the difference between those with type 2 diabetes who develop Alzheimer’s and those that don’t” (6 December, p 6). I believe a hugely important clue to Alzheimer’s susceptibility was discovered by the Nun Study of Aging and Alzheimer’s Disease, but its practical significance has gone unrecognized.

The Nun Study compared age-22 autobiographical essays written by those who did and did not go on to develop the disease, and discovered a remarkable difference in what the Study analysts have characterized as linguistic density and complexity (high levels being protective). That difference, however, can be better characterized by the “intuitive / sensing” trait dichotomy of Jung’s personality theory; the nuns at risk had characteristically produced mere lists of facts about themselves, while those who proved to be immune featured a “complexity of interrelated ideas.” (This interpretation was doubtless missed because the commercialization of Jung’s theory as the Myers-Briggs Type Indicator has made it unfashionable, and because it is mistakenly believed to be necessarily in conflict with the better-established Big Five model, when it in fact attempts to describe a deeper level of traits.)

In a term paper for a Harvard graduate seminar in 2001, I presented a wealth of evidence to argue that this trait (better called “interpretive / empirical”) is fundamental, and is mediated by the basal forebrain cholinergic system, whose paradigmatic role is the inhibition of the brain’s default associative spread. (The paper received an A grade from Professor Mark Baxter, now of the Icahn School of Medicine at Mount Sinai, and one of the pioneering researchers on the role of acetylcholine in regulating attention.) The cholinergic system is well known to innervate the pyramidal cells that in Alzheimer’s are destroyed by beta-amyloid plaques, and to suffer catastrophic damage itself, beginning in the early stages of the disease.

My hypothesis in 2001 was that interpretive types, with their innate low levels of cholinergic regulation, were somehow immune to the disease process. The apparent mechanism is now clear. In pyramidal cells, amyloid precursor protein (APP) has a well-established regulation by acetylcholine. Deficits in this regulation are known to increase the production of beta-amyloid, which in turn is neurotoxic to the cholinergic cells. This is one of the destructive feedback loops characteristic of the disease process, and it may underlie the extraordinarily elevated disease risk of individuals with Down syndrome. Trisomy 21 produces an abnormal surplus of APP—seemingly too much for ordinary levels of acetylcholine to regulate properly.

It would therefore seem to be impossible for interpretive types, with their low innate levels of cholinergic innervation, to have anything but correspondingly low levels of APP in the cells involved in the disease process. These levels are apparently so low that aberrant production of beta-amyloid, regardless of originating factor, can never cross the threshold where any of the components of the disease process can begin. (While the function of APP is far from fully characterized, it is believed to be involved in synapse formation, which could be part of a mechanism for coordinating normal APP levels with their regulatory innervation.)

This hypothesis is attractive, because the interpretive / empirical trait is immensely easier to test for than the linguistic density and complexity that it mediates. One simply asks for endorsement of a statement such as “things remind me of other things all the time”; a 4 or 5 answer on a 5-point Likert scale indicates a low or extremely low disease susceptibility. (That particular question has achieved highly significant correlations with predicted behavioral responses in a preliminary version of my own trait instrument.) Verifying this would be a great boon to ongoing research. Researchers who wish a copy of the 2001 paper may contact me by commenting on the [copy / longer version] of this letter at ericmvan.com.

Peter Jackson's Lord of the Rings, Film Criticism, and Tarsem Singh's The Fall
ericmvan
Over at the Boston Globe, film critic Peter Keough is in the process of soliciting entries for next Sunday's "Cinemania" column: all-time best fantasy adventures. Here's the letter I wrote him.
 ----------
As a World Fantasy Award nominee (essentially for critical work) and a cinephile, I feel it’s my holy duty to chime in here. My choices are easy: the complete Extended Edition of Peter Jackson’s The Lord of the Rings; The Wizard of Oz; and, if you regard them as adventures (and why not?), Spirited Away and La belle et la bête.

But I have more on my mind than simply voting. I want to heap outlandish praise on a film that, at first glance, doesn’t seem to need it, and to champion one that doesn’t technically qualify and to which you gave a lukewarm review.

The last time I checked, the various installments of The Lord of the Rings had the highest user ratings of any films at Netflix. They were rapturously reviewed by contemporary critics, and buried in Oscars. And yet they combined for just a single vote in the last Sight & Sound poll of critics and directors, which is to say one less than Borat. Is it possible that a film so universally regarded as exquisite entertainment has no deep value as art?

Of course not. In fact, it’s my opinion that the 11 hours of The Lord of the Rings constitute the single greatest achievement in the history of cinema. Now, that’s an opinion you might expect to hear from a fanboy who hasn’t seen much else, but the film that LOTR edges out of the top spot in my heart is the five-hour original cut of Fanny and Alexander. My list of all-time favorites (besides the usual suspects like Vertigo and 2001) continues with the likes of A Separation, O Lucky Man!, The Rules of the Game, and The Passion of Joan of Arc, and if you put a gun to my head I might choose Ugetsu over Seven Samurai as my favorite Japanese film. So, yes, I’ve seen all of the contenders. I’ll still take the Jackson, thank you.

The reason why LOTR has garnered no deep critical respect, I think, is simple: it’s too long. If its brilliance could be boiled down to three hours so that critics could watch it one night and Seven Samurai the next, there’s no question in my mind that it would have comparable stature. But to grasp just how great the film is, you have to watch all three Extended Editions (and I’m sure that many critics haven’t seen them at all), and preferably in one day. And then you have to do that at least a few more times to discover that the film indeed gets better and deeper with each screening (I’m up to 11 times, including five day-long marathons). Who has that time when every new week brings a Hangover Part III and R.I.P.D. to review?

I do think the film will eventually acquire the reputation it deserves. Sharp critics who revisit the film while open to the idea that it is high art and not just a wonderful diversion will begin to sense how deep the characters are and how radical the narrative is, and acquire the desire to explore further. How many films of this scope change the protagonist in the middle of the third act? I rather doubt that the last two reels of the last Hunger Games installment will be from the point of view of Katniss’s gardener. (This is of course inherent in the source material, but Jackson does an extraordinary job of underlining it.)* How many films of heroism and triumph end with essentially all of the characters suffering some degree of heartbreak?

In its novel form, this is the most popular story of all time, and there’s a wealth of critical literature justifying that response. Jackson’s adaptation is by no means flawless, but as often as not it improves on the source material (e.g., the character of Boromir). Of course it’s a masterpiece.

And the film that needs to be mentioned in this discussion, even though it’s not a fantasy per se, is Tarsem Singh’s The Fall. As you may remember, this is a film about someone telling a fantasy story. It is not, like The Princess Bride, a fantasy story with an extrinsic frame (I believe you made this error of interpretation, as did many other critics). The story told by Lee Pace’s crippled stuntman character is not just secondary to the surrounding narrative, it is not very well told, which is to say the incoherence that bothered many critics is intentional (in a way that’s always pointed and often comic). What’s more, it is sometimes misunderstood by its audience, for whom English is a second language! And yet, once you grasp the complex relationship between the teller, the listener, and the tale, their combination is astonishingly moving, despite these handicaps. I know of no better narrative exploration (in film or in literature) of the power and methodology of story in general, and of fantastic story specifically. It’s one of my fifteen favorite films of all time. You should really give it another chance; you may find it a revelation.

* In the novel, Frodo says to Sam “I am glad that you are here with me” as they await their death at Mt. Doom; Jackson moves this line to the end of the first film and has Frodo say “I’m glad I’m here with you, Samwise Gamgee” at the end of the trilogy. And once you understand that the POV has shifted, the notorious “multiple endings” make sense. There’s really only one false ending, and it’s as essential as Vertigo’s: they get back to the Shire, but instead of the expected fairy tale ending, Frodo is not happy.
-------
Peter replied graciously:

Well argued, though I don't agree 100%. I still think the books are much better, simply because it allows me to imagine a lot of it, instead of having it all shown. True of all films, perhaps, but more so in this.

Your other choices are also intriguing.

Too bad I don't have room to reprint this.
----------
My reply:

I’ve read the novel  17 or 18 times; along with John Crowley's Engine Summer, it's my favorite book in the world. I find that every time I see the film, I let go of the novel’s brilliance more, and can better appreciate the film as an equally great alternative. And that ends up solving the problem you mention; what I imagine when I read the novel has its own life (e.g., Legolas is more “elfin” yet somehow not nearly as pretty).

Where Jacoby Ellsbury's Power is Going (Part 2 of 2)
ericmvan
At the time of the last e-mail in Part 1, Jacoby Ellsbury was leading off and hitting .283 / .366 / .404. He remained in the leadoff spot through July 25th, hitting .220 / .254 / .273 in the intervening 30 games to drop his season line down to a much less satisfying .259 / .327 / .355.

They then dropped him to the bottom of the order for 9 of his next 10 starts (7 games hitting 9th and 1 each hitting 7th and 8th). He hit .429 / .459 / .629 in those 10 games (and two PA off the bench), while going 0/4, HBP in his sole game hitting leadoff.

They then returned Ellsbury to the leadoff spot, and he hit .300 / .333 / .438 the rest of the way.

Ellsbury's performance when dropped down in the order was precisely as I had predicted, but there was weak evidence of a changed overall approach upon his return to the top of the lineup. He was better, but not so much better that I've ever been motivated to test the improvement for significance.

Nevertheless, I felt convinced at the time that this interpretation of his performance and power upside was correct, and I looked forward to the day when he would start attacking the ball whenever he was in a pitcher's count, and blossom as a home run hitter.

Again, I have no idea how these e-mails actually fit into Ellsbury's history. At one extreme, it's possible that they tried getting him to change his approach immediately, and couldn't convince him to try it for two years. At the other extreme, the e-mails may have been neglected and forgotten (although I had been thanked and told they were interesting), even when they finally noticed for themselves that Ellsbury's approach in most hitter's counts was criminally defensive.

What we do know is that he started hitting home runs in 2011. If you look at his 2011 splits, you'll see that he hit .388 / .512 / .711 when ahead in the count, an OPS relative to league (sOPS+) of 154. In 2009, he had hit .319 / .455 / .476 when ahead, which sounds good, but is actually well below average (sOPS+ of 86). His sOPS+ when behind in the count went up from just 170 to 180.
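(A quick aside on the stat, since raw slash lines can mislead: as I understand Baseball-Reference's sOPS+, it's the usual OPS+ formula applied within the split, so 100 means league average in that same situation. Here's a minimal sketch; the league numbers are placeholders, not the actual ahead-in-the-count league splits.)

```python
# Sketch of how a split OPS gets graded against the league's OPS in the same split.
# Assumption: sOPS+ uses the standard OPS+ formula, 100 * (OBP/lgOBP + SLG/lgSLG - 1).

def sops_plus(obp, slg, lg_obp, lg_slg):
    """Split OPS relative to league; 100 = league average in that split."""
    return 100 * (obp / lg_obp + slg / lg_slg - 1)

# A .455 / .476 line when ahead in the count looks gaudy in isolation, but against a
# hypothetical league ahead-in-the-count split of .430 / .540 it grades out below average:
print(round(sops_plus(0.455, 0.476, 0.430, 0.540)))   # ~94, i.e. below 100
```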

And you know what? It was incredibly obvious. He was sitting on pitches and hammering them, and he'd never done that before. It caught my eye almost immediately and made me very happy. I think even Jim Rice noticed it and pointed to it as the cause of his improvement.

What's truly interesting is Ellsbury's percentage of balls hit in the air. Previous to 2011: .475, .483, .499, .507 ... and the next term in that series is .570. That, folks, is best explained as the product of a somewhat different swing. A swing that a guy who believed he might be able to hit more home runs by sitting on more pitches might adopt. So we have good reason to believe that Ellsbury changed his approach in two different, complementary ways in 2011.

And then he suffered a serious shoulder injury seven games into 2012, and was out until the All-Star break. He ended up with .533 balls in the air, easily his second highest career mark, but a decline in HR/FB from .167 to .047. In other words, the shoulder injury killed his power, but his swing was still close to his 2011 version, and since he wasn't hitting the ball on the ground, where BABIP thrives, he ended with a dismal slash line of .271 / .313 / .370.

His 2013 divides into two very interesting chunks. On July 3 he was hitting .298 / .361 / .404, with a career-low .474 balls in the air, and a microscopic .013 HR/FB. This is a guy who recognizes his shoulder is still hurting him, has gone back to hitting the ball on the ground, and is having success with the style.

From July 4th on, Ellsbury hit .298 / .346 / .458, with a .518 balls in the air and, more tellingly, a .140 HR/FB. Now, there's no question in my mind that his 2011 numbers were greatly boosted by the reluctance of opposing pitchers to take him seriously as a power threat (that's just based on watching every game and often saying "I can't believe they threw him that pitch" while grinning). I think his 2013 second-chunk .140 HR/FB is precisely the same pop as his .167 from 2011; the difference is tougher pitches.

So the next question when we try to project Ellsbury's power going forward is his balls-in-air percentage. I don't think it's too easy or even desirable to change your swing too much mid-season, so I think it's quite possible that his .474 to .518 change in 2013 doesn't represent what he's capable of, if he begins a season feeling his shoulder is 100% (as he ought to in 2014) and is in a home-run hitting frame of mind (ditto). I think .540 or .550 is reasonable.

Finally, there's Yankee Stadium. In 2011, it would have turned his 15 Fenway homers into 17 (adding five to right and subtracting three from left), but in 2013, it would have turned 4 into 11. It's hard to derive a prediction from that, and no one has HR/FB Park Factors by batter handedness. But my most recent edition of the Bill James Handbook tells me that LHB hit about 75% more HRs in Yankee Stadium than in Fenway.

So, what happens if Ellsbury maintains his second-half HR/FB, improves his balls-in-air percentage to .550, and plays 120 games for the Yankees? He hits 25 HR. If he plays 140 games, he hits 30.
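For what it's worth, here's a back-of-envelope version of that projection in Python. The paragraph above only gives the end points, so the intermediate rates here (PA per game, ball-in-play rate, the share of air balls that count as fly balls, and a park bump folded into HR/FB) are illustrative assumptions rather than the actual arithmetic; the point is just that plausible values land right around the 25 and 30 HR figures.

```python
# Back-of-envelope HR projection: chain a few per-PA rates together.
# Every input is an assumed rate; none of these numbers comes from the text above.

def project_hr(games, pa_per_game=4.4, bip_rate=0.80,
               air_rate=0.55, fb_share_of_air=0.65, hr_per_fb=0.165):
    """Estimate HR from games played and a chain of assumed rates."""
    pa = games * pa_per_game                      # plate appearances
    balls_in_play = pa * bip_rate                 # batted balls, HR included
    air_balls = balls_in_play * air_rate          # the .550 balls-in-air figure
    fly_balls = air_balls * fb_share_of_air       # flies only, no liners or popups
    return fly_balls * hr_per_fb                  # HR/FB with a park bump baked in

for g in (120, 140):
    print(g, "games ->", round(project_hr(g)), "HR")   # 25 and 29, close to the figures above
```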

I think he's going to be worth the money.

Do I wonder whether I helped him earn even a tiny part of it? I suppose I do.

Where Jacoby Ellsbury's Power Came From (Part 1 of 2)
ericmvan
In June of 2009 I was still receiving severance pay from the Boston Red Sox after being unexpectedly laid off as a baseball operations consultant at the start of the year—a move dictated by upper management as a response to the recession. I was grateful for this unexpected generosity, and when we parted ways I told them I’d send them anything I found truly interesting and important, as a freebie.

The previous year I had noticed that Jacoby Ellsbury had astounding splits depending on the identity of the following hitter, and they were opposite from the conventional wisdom: he was amazingly better with a weak hitter behind him.

In this pair of long e-mails to the team, I explain that phenomenon, and advise them on a simple change Ellsbury could make in his approach—one that would unlock his power potential.

Note that I am not claiming responsibility for Ellsbury’s blossoming as a hitter. In fact, it's quite possible that this argument was ignored in the short run, and that his hitting coaches eventually noticed what I did, and gave him the same advice. But I believe it completely explains where Ellsbury’s 2011 power spike (usually regarded as a bizarre fluke) came from. More about that in Part 2.

June 14: A Freebie (Ellsbury Solved)

I’ve got 5 different lines of evidence, all independent, that point to the same conclusion:

When Jacoby Ellsbury thinks it’s his job to get on base (which is most of the time), he does no guessing, no sitting on pitches in hitter’s counts, and just reacts to what’s thrown. And he’s not very good. When he doesn’t think that’s his job, he does guess and does look for pitches to drive—and is a vastly better hitter as a result.

Here’s the direct evidence that he usually is just reacting. It’s from this year’s pitch/fx data and is about a week old. I’ve been looking all year at pitches thrown right down the middle (center third of the 20” of the plate plus the black, center third of the zone top to bottom as defined by pitch/fx). Here’s Jacoby’s rank among the 9 regulars:

1st in percentage seen (of all pitches)
7th in % taken (only Ortiz and Bay have swung at more)
2nd in lowest % of swings and misses (only Lowell is better)
2nd in lowest % of fouls per swing (barely trailing Lowell)
and hence a huge #1 in % of pitches down the middle hit fair

He is 5th in BA on those balls hit fair, but next to last (to Green) in Iso and SA.
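(As a rough illustration, a filter like that can be coded directly against the raw pitch/fx coordinates. The field names px, pz, sz_bot, and sz_top are the standard pitch/fx ones, and the 20-inch width follows the plate-plus-black definition above; read the exact cut points below as a sketch, not the precise query behind these rankings.)

```python
# Sketch of the "right down the middle" filter: center third of the ~20" plate-plus-black
# horizontally, center third of the batter's recorded strike zone vertically.

PLATE_PLUS_BLACK_FT = 20.0 / 12.0              # ~1.67 ft wide
HALF_CENTER_THIRD = PLATE_PLUS_BLACK_FT / 6.0  # center third = within 1/6 of the width from center

def down_the_middle(px, pz, sz_bot, sz_top):
    """px: horizontal distance from plate center (ft); pz: pitch height (ft);
    sz_bot / sz_top: the batter's strike-zone bounds as recorded by pitch/fx."""
    horizontal_ok = abs(px) <= HALF_CENTER_THIRD
    third = (sz_top - sz_bot) / 3.0
    vertical_ok = (sz_bot + third) <= pz <= (sz_top - third)
    return horizontal_ok and vertical_ok
```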

This was puzzling. He swings at way more pitches down the middle than average and rarely misses them, but doesn’t hit them hard at all.

Well, why would a player take a pitch right down the middle? Because he was expecting something else. Why would he swing and miss at one? Because he was expecting something else and didn’t recognize the difference.

That Ellsbury rarely takes or misses a pitch down the middle suggests that, on the whole, he’s not expecting anything. He’s just up there reacting. Not that big league hitters guess on every pitch, but I think the good ones all do it, to a degree, especially when ahead in the count.

Is there more evidence to support this? Take his career splits by count (as of last week). He actually has a higher BA and SA after falling behind 0-1 (.305 / .401) than after getting ahead 1-0 (.278 / .398). The reverse split is just random, but the lack of the expected positive split (last year the league was .240 / .363 after getting behind 0-1, .280 / .452 after getting ahead) indicates that he’s usually not using the count to help predict what he might see. The entire reason hitters hit better when ahead in the count is that the opposing pitcher’s repertoire is narrowed, allowing more effective guessing / sitting on pitches.

Here’s his SA after various counts compared to AL ‘08 average, sorted by the difference:


Count          JE      AL '08    Diff
After 0-2     .326      .274     .052
After 0-1     .401      .363     .038
After 1-2     .309      .289     .020
After 2-1     .431      .426     .005
After 1-1     .393      .389     .004
After 2-2     .302      .319    -.017
After 1-0     .398      .452    -.054
After 2-0     .433      .487    -.054
Full          .288      .376    -.088
First pitch   .441      .544    -.103
After 3-0     .385      .508    -.123
After 3-1     .256      .488    -.232

Just a massive failure to take advantage of being ahead in the count, despite better-than-average numbers while being behind.

This insight allows us to reinterpret my last year’s analysis. And that gives us two more lines of evidence. As you recall, he had a split where he was immensely better with a weak hitter behind him than with an elite one. And he has been much better in his career with 2 outs than with 0.

I thought that both of these were about the pitcher’s different approach (pitch around him vs. challenge him). But now we see that it’s his own approach that changes. Hitting 8th with a Lugo type behind him, he’s not putting a premium on setting the table and starts to think about driving the ball. So he starts to guess, sit on pitches in hitter’s counts. Same thing when he’s up with 2 outs versus 0.

I would have sent this a week ago, but I wanted to test the theory with another line of evidence, another set of numbers compiled after I came up with the idea. I was going to break down his performance by count depending on following hitter, but that’s an enormous amount of work and I’m busy.

Then tonight he comes up with a four-run lead in the 9th, is obviously sitting on a pitch, and hits it out. And I realize, there’s another split to look at. He should be better with the Sox leading comfortably, when getting on base is not a premium, than with them trailing, when it’s everything. And here are the numbers from last year:

Sox trailing, 181 PA, .272 / .320 / .361 (and 4 GDP)
Up 3 or more runs, 121 PA, .297 / .355 / .495 (and 1 GDP).

Just tell him that he should always be up there with an idea of what he’s going to see, always guessing and looking for a pitch to drive when he gets ahead, regardless of score, outs, men on base, or the hitter on the on-deck circle. And he should be terrific.


--------------------------------

June 18: RE: Ellsbury Solved

Thanks much for [an opportunity to buy tickets at face value]. And as a reward (or punishment), the final word on Ellsbury, including, for the first time ever, why he has a 1000+ OPS hitting in front of Lugo, Lowrie, Green, Crisp, and Cash (not just combined as a group, but in front of each and every one of them!). I can’t believe it took me so long to figure out…
Numbers are through Tuesday night.

I. Admonition

Tell Ellsbury to stop trying to move the runners over!

He has 28 career PA with runners on 1st or 1st and 2nd, 0 outs, and a good hitter on deck. He’s hit .111 / .143 / .111 with 6 GDP. That’s 31 outs in 28 PA. And a 21.4% GDP rate. And he’s not improving at it—in fact, he’s 1 for his last 15, HBP, 5 GDP going back to last July 11.

He has 10 GDP in his other 207 career GDP opportunities—a 4.8% rate. The difference in rates has a 1 in 937 chance of happening randomly. He also has a .036 K+BB% in these PA vs. .182 otherwise, which is also statistically significant, so he’s exhibiting no patience at all.
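(One standard way to check a split like this is Fisher’s exact test on the 2x2 table of GDP vs. no GDP in the two situations. The sketch below is illustrative and isn’t necessarily the exact calculation behind the 1-in-937 figure, but it also comes out very small.)

```python
# Fisher's exact test on the GDP split: 6 of 28 in "move 'em over" spots
# vs. 10 of 207 in all other GDP opportunities.
from scipy.stats import fisher_exact

table = [[6, 28 - 6],       # move-'em-over PA: 6 GDP, 22 without
         [10, 207 - 10]]    # other GDP opportunities: 10 GDP, 197 without

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"one-tailed p = {p_value:.4f}")   # small p-value: very unlikely to be pure chance
```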

II. Facts

Next, here’s what I think are his relevant career splits: [HRC = HR / Contact, XIP = XBH / Balls in Play, 1B% = 1B / (1B + Outs in Play)]


Split                                             PA    BA    OBP   SA     OPS    RC/27   K%    BB%   HRC   XIP   1B%
Move 'em Over                                     28   .111  .143  .111   .254   -0.24   .036  .000  .000  .000  .115
Other gd next, 0 outs or less than 2 run lead    634   .262  .309  .353   .662    4.00   .125  .057  .014  .053  .251
Gd next, 1 or 2 outs and 2+ run lead             170   .340  .398  .431   .829    7.41   .112  .082  .000  .088  .323
Bad next, 0 outs or trail                         95   .384  .442  .500   .942    8.51   .095  .074  .026  .053  .380
Bad next, 1 or 2 outs, tied or leading            73   .464  .513  .791  1.310   20.34   .123  .055  .083  .109  .429

You can see that his strike zone command is better in rows 3 and 4 than row 2. In the last row, his K% and BB% revert to baseline (quite possibly a SSS fluke) but he has a massive spike in HR power; 36% of his career HR have been hit in these 7% of his career PA. He also has his best splits for XBH / BIP and for 1B / (1B + Outs in Play).

His overall good next / bad next split is now .272 / .322 / .360 in 832 PA (4.15 RC/27) vs. .419 / .474 / .632 in 168 PA (12.78 RC/27).

The good hitters are almost all Pedroia: 739 PA vs. 39 for Youkilis, Ortiz, Drew, and Lowell. In case you think the Bad next splits are driven by one guy, here are all the guys with 10 or more PA:


Next Batter    PA    BA    OBP    SA     OPS    RC/27
Lugo           61   .382  .452   .564   1.015   11.16
Lowrie         39   .486  .538   .743   1.281   14.95
Green          20   .500  .545   .500   1.045   16.42
Crisp          11   .500  .545   .900   1.445   27.64
Cash           10   .500  .500  1.100   1.600   24.30

III. Interpretation

There are two related things going on here.


  • He has two different approaches depending on game situation, but the one he uses most of the time is very counter-productive.

  • As a result, his overall numbers and reputation do not reflect his actual ability as a hitter. And when he bats in front of a weak hitter, the opposing pitchers have no fear of him or the next guy and are not working to get him out at all. They’re just throwing it right over the plate, and he’s killing it.

Two approaches. The first we’ll call “just get on base” [GOB] and the second “try to do some damage” [DD]. He just tries to get on base whenever there’s 0 outs or whenever the team is trailing. He tries to do some damage when there’s 1 or 2 outs and the team is up by 2 or more runs.

When there’s 1 or 2 outs and the score is tied or we’re up by a run, his approach depends on the next hitter. He tries to just get on if the hitter is good, do some damage if he’s bad.

All of this makes absolutely perfect sense. The problem is, of course, that his GOB approach sucks. Just trying to get on knocks 90 points off his OBP.

I earlier identified one aspect of the two approaches: that he’s just reacting when he’s GOB, while he’s guessing and sitting on pitches when he’s DD. There may of course be other differences. The hypothesis is backed up by the pitch/fx data: this year he’s taken 22% of pitches down the middle in GOB situations but 38% in DD. The odds against that being random are just 5 to 1 but it is absolutely in the expected direction.

No respect. How a pitcher attacks a hitter is clearly more complex than simply challenging them vs. pitching around them, which really only applies to good hitters. Bad hitters are essentially always challenged, but when they’re protected by a good hitter, they are pitched much more carefully. When Ellsbury hits in front of Pedroia they take him very seriously and work to get him out. When Julio Lugo or Nick Green is up next, they essentially get way too overconfident and treat him like the punch-and-Judy hitter that he isn’t.

IV. Advice

You don’t really need to know the components of the two approaches. Just verify that he does change his approach according to the game situation, and tell him to abandon the GOB approach and just try to do damage all the time. If he does that, he should go on a tremendous tear hitting 8th.

In terms of batting order, you paradoxically want to leave him there as long as they’re disrespecting him and he’s killing the ball. It’ll probably take a few weeks before teams notice that he’s not a weak and easy out and stop feeding him meatballs. But from that point onward he should still be a .380 OBP, .430 SA guy at a minimum if he sticks with the DD approach. (And his walk rate will go up—it’s true that he sees more 3-ball pitches in the strike zone than almost anyone on the team.) So you would move him back to the top of the order when he appears to “cool down” to normal.

See Upstream Color, If You Can
ericmvan
As you may know, Upstream Color is the long-awaited second film from Shane Carruth, the autodidact auteur behind 2004's extraordinary Primer. To say I'd been looking forward hugely to this film would be an understatement.

To say that I was not let down would be one, too. It's better than I dared dream.

I adore narratives that demand repeat exposure and reveal more of themselves with every iteration. That of course describes the work of both Gene Wolfe and Christopher Nolan, but it's also Primer (some would say almost to a fault). That's all that I hoped for from Carruth's new film: an emotionally resonant text that would, above all, set those "oh my god I think I understand this" bombs going off in my head, and do so in different ways each time I saw it. (At some point I'm going to propose that the acronymic omgitiut should be recognized as a full-fledged film genre; if you recognize the source of the phrase you know that such films can be domestic dramas as well as sf puzzle-boxes.)

What I got was something much more. Imagine that Terrence Malick made one of these films, and you've got something like Upstream Color. Indeed, critics who are indifferent to narrative challenge for its own sake are swooning over this, and asserting that they love it despite feeling no need to solve the problems it presents.

There is, I think, almost a precise parallel here in the career of Darren Aronofsky. Pi was a terrific debut, though nowhere near as good as Primer. After making the even better and much more conventional Requiem for a Dream, Aronofsky spent years trying to make a film that would tell a challenging sf story with glorious visuals, using SFX to achieve the sort of aesthetic rapture you get from a Malick film, rather than the sense-of-wonder that you'd get from Kubrick or the opening shot of Star Wars. When the funding fell through, he made the film anyway, after rewriting it as a small-budget film. And that was The Fountain.

After Primer, Carruth spent years trying to make an sf film called A Topiary, and when the funding fell through, he made Upstream Color instead. In both its artistic aims and thematic concerns, it seems to me as close as any film could be to The Fountain (and vice versa). I liked The Fountain quite a bit and I'm looking forward to seeing it again some day. After one exposure to each, though, I'd make the following comparison: the science in Upstream Color is much more interesting and feels more like it has complexity, internal logic, and consistency; and in every way, Upstream Color is the more accomplished film. Yes, I'm asserting that as good as Darren Aronofsky is, he's not in Carruth's class as a writer or director. And Carruth is his own cinematographer, composer, co-editor, and first camera operator, and excels in each role, and he's more than solid as the male lead.

What Carruth has done here is almost unprecedented in film history. And I'm not talking about doing everything but the catering -- that goes without saying. It's this: spectacular debuts are almost never followed by a significantly better film. Have you even heard of Gran Casino, In This Our Life, Stage Struck, A Woman is a Woman, There's Always Vanilla, Alex in Wonderland, The Last Movie, or Crimewave? They are, respectively, second films by Bunuel (following L'Age d'Or), Huston (The Maltese Falcon), Lumet (12 Angry Men), Godard (Breathless), Romero (Night of the Living Dead), Mazursky (Bob & Carol & Ted & Alice), Hopper (Easy Rider), and Raimi (The Evil Dead). (Two more recent examples of the principle: Andrew Niccol's Gattaca and S1m0ne, and Florian Henckel von Donnersmarck's The Lives of Others and The Tourist.) Even when a director hits home runs his first two times out, the first film is usually superior: Citizen Kane and The Magnificent Ambersons, Pather Panchali and Aparajito, The 400 Blows and Shoot the Piano Player; or regarded as more or less its equal: Badlands and Days of Heaven, or Being John Malkovich and Adaptation.

I can come up with only four instances where a director or directors topped a classic first film with an even better second effort, and two of those have asterisks. Gene Kelly followed his first collaboration with Stanley Donen, On the Town, with Singin' in the Rain -- but Donen made several films in the interim (his first being Royal Wedding). The Coen Bros. followed Blood Simple with Raising Arizona, but that is regarded, I think, as an incremental improvement rather than a dramatic one -- still impressive, but not eye-opening. And that leaves us with Mike Nichols following Who's Afraid of Virginia Woolf? with The Graduate, and the comparison that I think is closest to the bone: Reservoir Dogs and Pulp Fiction. That's the one time in film history where a director's first two films give me the same sense of "we knew this guy was great, but, really, we had no idea."

Upstream Color comes out as a Blu-Ray / DVD combo pack on May 7, but it deserves to be seen on a big screen. Proceeds go to financing Carruth's next film. Anita and I will be back to see it a second time next weekend, dragging friends along. See it if you can.

Trailer #2!
ericmvan
The list of blog posts I intend to write keeps growing ... but I'm managing to keep myself too busy to get to any of them.

A fellow named Peter Keating is doing a story for ESPN Magazine on, more or less, "life after sabermetrics" -- what happens to folks who get laid off by professional baseball clubs? He's raising the question of whether pro teams really understand how to get the most benefit from folks like myself. The article's about three former Red Sox consultants: Mike Gimbel, Voros McCracken, and (largely, it seems) me. I've talked to Peter for hours on the phone, and had the pleasure of not only meeting him in NYC (he lives in Jersey) but taking him to his first Mission of Burma show (my 267th). He's a great guy; our conversations have taken long detours as he volunteered the opinion that Buffy the Vampire Slayer's "Once More With Feeling" is the greatest thing in the history of television (I concur) and quizzed me on my favorite Star Trek: TOS episodes -- because, of course, he needed to know. Last Tuesday a three-person photo crew spent the afternoon here with two SUVs full of gear and shot 342 photos of me, which may match my previous lifetime total. I hope at least one or two came out well.

I'm guessing the article will appear in April, and it should provide an interesting counterpoint to pages 160-2 of Francona: the Red Sox Years, which talks about my role with the club (and gets the typical amount of facts wrong, but not in any kind of damaging or insulting way).

In the meantime, after a six-month hiatus I've finally resumed working on "the book," which is to say A Nature of Consciousness, which is to say the scientific paper "A Testable Theory of Phenomenal Consciousness and Causal Free Will" from which it will be adapted. I couldn't be more pleased with how the work is going, or more terrified that it will make me more famous than I care to be. Which, I'm learning, may not be a high bar: after being the relentless center of attention for the photo crew for four hours, I told the photographer that I may be sending future requests for photos in his direction.

And in the other meantime, I added 38 movies to the list of 2011 movies I wanted to see, to bring the total to 166; I've got twenty left to see, and when I'm done I'll do a massive data analysis in an attempt to build a model predicting my own rating from Netflix's guess and a slew of other numbers. (In case anyone questions the sanity of so thorough an approach, my current Top 10 includes two movies that I had initially decided not to bother with, and the very last batch added to the queue has already produced a Top 35 film.) I hope to get the full 2011 rundown online in late March or early April, together with whatever I can glean about the relationships among critical and audience tastes after crunching all those numbers.

And that's why none of the following has been written yet:


  • A review of The Hobbit: An Unexpected Journey for TheOneRing.net, and the promised 4th part of my series for them

  • An essay for film buffs on the nature and meaning of Slipstream as a genre (one of 2012's best films, Holy Motors, is quintessential slipstream, but no one in the film world knows that concept)

  • A solution of a major psychopharmacological riddle: the mode of action of the super-stimulant Provigil (modafinil)

  • The final attention theory post

  • Most importantly, a series of posts entitled This is Your Brain at the Movies, including the results of a survey I constructed where three of my proposed fundamental personality traits can be shown to explain about 50% of how much someone likes Cloud Atlas. Bits and pieces of this have been scattered all over the Web in the form of comments to reviews written long ago.

I'll be accepting bets on whether my next entry will be a) one of those, b) something else entirely, or c) another meta-entry. But in the meantime, for those thirsting for actual content, I'll leave you with this quick list of favorite 2012 films, so far, in order:

Top 10: Cloud Atlas, The Dark Knight Rises, Lincoln, Moonrise Kingdom, Seven Psychopaths, Zero Dark Thirty, Holy Motors, Amour, The Master, Silver Linings Playbook. HM: The Avengers, End of Watch, Beasts of the Southern Wild, Monsieur Lazhar, The Turin Horse, Barbara, The Cabin in the Woods.

(Significant films not yet seen: Once Upon a Time in Anatolia, Cosmopolis, Magic Mike, Killer Joe, Searching for Sugar Man, Wuthering Heights, Killing Them Softly, How to Survive a Plague, Compliance, Kahaani, Headhunters, Sound of Noise.)

Present and Coming Attractions!
ericmvan
Over at TheOneRing.Net, I'm a guest writer. You don't want to know how many hours I put into this piece (and its sequels), but I'm very proud of the result. Thanks to TORn for running it!

In the meantime, this space should feature, in mid-January, an epic "2011: The Film Year in Review." Yes, 2011, because it takes a full year to catch up to the previous year's obscure movies as they come out on DVD. Last night, for instance, I watched a very satisfying Bollywood road trip epic, Zindagi Na Milegi Dobara, which ranked #171 at the U.S. box office (my original assertion that it was never released in the US is a function of BoxOfficeMojo's broken search function) and didn't crack the top 235 in critics' Top 10 list mentions. I've tentatively ranked it as my 41st favorite movie of the year--out of 111 that I've seen so far.

The full recap will rank 128 movies, from least good to best, with full information such as Rotten Tomatoes, Metacritic, and IMDB and Netflix user ratings ... and a pithy spoiler-free review of each.

Oh, yes, and the long-delayed Part IX of the Attention-Switching Model post. And hopefully, much else.

A Model for Attention-Switching, Part VIII: Norepinephrine in Humans
ericmvan

We’ve seen that there is not only a potent evolutionary rationale for the evolution of norepinephrine (NE) as the neuromodulator regulating attention, but that our hypothesis about its role gives us remarkable insight into the behavior of our earliest vertebrate ancestors.  But you probably don’t have any friends with file drawers full of unfinished projects who are also ray-finned fishes, let alone sharks.  So let’s see what sense our hypothesis makes of human behavior.

And let’s forget our NE hypothesis for a moment and just start with DA. We’ve proposed that it turns on phenomenal consciousness, especially the experience of emotion. This means that high-DA people are passionate and low-DA people are dispassionate. (This explains, incidentally, why highly opinionated people tend to talk loudly and gesture with their hands: remember that DA also controls motor activity through a second pathway.) In terms of cognition, high-DA people tend to be highly emotionally invested in the things they’re thinking about, while low-DA people tend to be less invested—more, well, dispassionate.

Is there a cognitive trait that would make sense to correlate with these two different emotional relationships to the contents of our thoughts? Well, sure: the more emotionally invested you are in what you’re thinking about, the longer you’d want to stay thinking about it. The more dispassionate you were, the more likely you’d be to move on and start thinking about something else. So it makes perfectly good sense to correlate passion with length of attention span—and that means DA with NE.

And which of the four combinations of the two traits might be a particularly bad idea? Someone with high DA and low NE will be passionate about the contents of their thoughts, but flighty and prone to attentional shifts. Since attentional shifts can lead to creativity, that doesn’t sound like a bad combination. But low DA and high NE would give you someone dispassionate about the contents of their thoughts, but prone to linger on them. That doesn’t sound good: that sounds like a recipe for boredom.

Note that we’re not saying that there aren’t dispassionate people with healthy attention spans; we are talking here about excluding the combination of the extremes of the traits. What you won’t see—what, in fact, we don’t see, except perhaps in rare variants of AD(H)D—is a dispassionate person who has trouble tearing themselves away from what they’re thinking about, as passionate people sometimes do.

You would think that that would be built into the brain: the dispassionate person would never have trouble tearing themselves away, because at some point they’d just get bored. But that’s begging the question or thinking circularly. What does it mean to be bored, cognitively? What controls boredom, chemically?

What we’ve found here is that the level of passion is controlled by one thing and the ability to tear oneself away is controlled by another completely independent thing. If in fact we observe that dispassionate people almost never have the problem of being unable to tear themselves away, we need to explain why that combination almost never exists. And our hypothesis about the roles of DA and NE explains that perfectly. Dispassionate people are that way because they have relatively inactive DA-producing enzymes, and that guarantees that they will never have levels of NE so high that they are prone to hyperfocusing. (It’s conceivable that someone dispassionate might have problem with hyperofocusing because of other defects in the attentional hardware, which is why I don’t rule out this being a rare form of ADD.)

So this correlation works really well for humans, and in fact helps explain why the vast majority of people who are prone to hyperfocus are passionate about thinking.

In the last installment of the model proper (there’ll be a second set of posts on implications), we’ll look at the history of the understanding of the role of NE in attention.


A Model for Attention-Switching, Part VII: The Evolution of Norepinephrine
ericmvan

So, by a process of elimination we’ve decided that norepinephrine (NE) controls attention by sending a signal that multiplies the salience tags in active memory, thus controlling the salience gradient. The more NE, the more likely we are to attend to the most salient potential attendums.

How much sense does this make? We’ll look at this two ways: chemically and historically (personally, even). And the chemical argument will divide into two parts: one about evolution, and one looking at traits in humans.

The key chemical fact about NE is that it’s very close to dopamine (DA) structurally. In fact, in the synthesis of NE from the amino acid tyrosine, DA is an intermediate step; the brain actually makes DA, for a moment, in the process of making NE.

Note that I’m not saying that the brain “makes NE out of DA,” although that’s technically true (and you may read that elsewhere). But that implies that some of an existing, usable cache of DA is being converted to NE, and that’s not at all true. In fact, if that were true, the levels of DA and NE would be inversely correlated; if you had a lot of one, you would have only a little of the other. But in fact the levels of the two neuromodulators are positively correlated: if you have a lot of one, you tend to have a lot of the other. The DA-producing cells all have a pair of enzymes which make DA out of tyrosine. The NE-producing cells add a third enzyme, dopamine β-hydroxylase, which turns the DA into NE. If you have alleles (“genes”) for especially active or inactive versions of either of the first two enzymes, you will thus tend to have high or low levels of both DA and NE.

Furthermore, if you think about this chemical chain, you will see that there’s nothing to prevent someone from having very high levels of DA production but very low levels of NE—you’d just need very active variants of the DA-making genes and a very weakly productive version of the NE-making one. But the opposite would be impossible. If you have very low levels of DA production, that sets an upper limit on how much NE you can produce; the NE-making cells just don’t have enough DA to convert to NE even if the NE-making enzyme is very active. So folks who make very little DA are forced to make relatively little NE as well.
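(A toy way to see that asymmetry, with completely made-up numbers; this is a cartoon of the logic above, not biochemistry.)

```python
# Cartoon of the constraint: NE output can't exceed the DA supply feeding
# dopamine beta-hydroxylase, so "high DA, low NE" is possible while
# "low DA, high NE" is not. All numbers are arbitrary.

def neuromodulator_levels(da_synthesis_activity, dbh_activity):
    """da_synthesis_activity: combined strength of the two DA-making enzymes.
    dbh_activity: strength of the NE-making enzyme (dopamine beta-hydroxylase)."""
    da_supply = da_synthesis_activity
    ne_level = min(dbh_activity, da_supply)   # capped by the available DA
    return da_supply, ne_level

print(neuromodulator_levels(10, 2))   # (10, 2): high DA, low NE -- allowed
print(neuromodulator_levels(2, 10))   # (2, 2): NE capped by the meager DA supply
```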

So, evolution has selected for these two relationships: in general, DA and NE levels are correlated, and specifically, low DA and high NE is a forbidden combination. Does this make sense in terms of our hypothesized roles for each?

The first thing you might want to know is at what point in evolution this relationship was established. And it so happens that four of the five neuromodulators of the control brain go way, way down the evolutionary ladder, and are found in invertebrates. NE is the exception. Invertebrates don’t have NE; they instead have octopamine (OA) serving an apparently analogous role. OA is in the same family of chemicals but does not have DA as a precursor.

So in the original neuromodulatory paradigm, which is incredibly ancient, the five chemicals had unrelated manufacturing pathways. But at some more recent evolutionary point (not necessarily when the vertebrates evolved; I’m not sure anyone has ever examined the neurochemistry of hagfish and lampreys, which sit on either side of the invertebrate / vertebrate division), this substitution happened:

Tyrosine -> [one enzyme] -> tyramine -> [dopamine β-hydroxylase] -> OA

became

Tyrosine -> [two enzymes] -> DA -> [dopamine β-hydroxylase] -> NE

It’s important to note the conservation of the dopamine β-hydroxylase enzyme. The neuromodulator filling our hypothesized attention-controlling role went from OA to NE because a different substrate was provided for this enzyme.

We can infer a surprising amount about the behavior of our early vertebrate ancestors from this knowledge. Let’s begin by reminding ourselves of the original purpose for varying the salience gradient: to adapt the strength of attention to the current environment, on the fly. It’s highly adaptive if you have the ability to keep attention focused once you have identified a predator threat, and nearly as adaptive if you can keep it focused after having identified a food source or potential mate. And it’s highly adaptive if you can instead keep attention volatile when there is no potential threat, food, or mate in sight, and the environment needs to be scanned and searched for same. So it’s no wonder that some sort of attentional control goes back essentially to the earliest animals.

What can we infer from the substitution of DA for tyramine as the substrate for the enzyme that produced the attention-controlling neuromodulator? The apparent evolutionary purpose of this substitution was to correlate the levels of DA and that chemical, so that organisms with a high or low supply of one chemical would tend to have the same sort of supply of the other. That immediately tells us something crucial: that there were already different functional alleles for the enzymes involved in the synthesis of DA and OA. Because if every organism had the same allele for each of the four synthesizing enzymes, the levels would already be correlated: they’d be the same for every individual. And a little thought reveals that we would in fact need variation in both the levels of DA and of OA in order to make correlating them meaningful.

Now, it’s not necessarily true that evolution would accommodate different alleles for these enzymes, and hence different levels of the neuromodulators. If there were a single best level of OA to have, any mutation of the enzymes involved in its synthesis would have rendered the organism less able to compete. The mutation would have been selected against, weeded out. But we know from the shift from OA to NE that more than one allele was present in the population: there were (at the least) high-OA and low-OA organisms, and even though their behavior would be different as a result, neither had an evolutionary advantage.

And what can we make of that? If we’re right about the role of OA / NE, we are talking about organisms with different innate attention spans. And if different attention spans and hence different behaviors were equally adaptive, then we are talking about the organisms filling different behavioral niches. And in fact it’s not hard to imagine that an organism with a different attention span than its conspecifics would have an evolutionary advantage when hunting or being hunted, which could even mean evolutionary pressure to select for a wide variety of OA levels. Each distinct level of OA would correspond to a different behavioral niche in the great predator / prey dance.

So this tells us a remarkable amount about the sophistication of the behavior of the earliest vertebrates: their environments and predator / prey interactions were complex enough to accommodate multiple behavioral niches. There were organisms with short attention spans and ones with long attention spans. And there were organisms with low DA levels and with high DA levels. And whatever was mediated by that trait, it was evolutionarily advantageous for the low DA organisms to have a short attention span (and, to a lesser extent, for the high-DA organisms to have a longer one).

So what was the DA trait involved in this adaptation? We’ve hypothesized that DA turns on phenomenal consciousness, especially the experience of emotion, and hence the intensity of pleasure responses (and almost certainly pain responses as well). But that’s not the only thing DA does. DA initiates movement, and its current relative level represents the organism’s energy reserve or capacity for action. And as we saw quite a while ago, DA holds information in working and active memory, since that provides the simplest way of setting the correct salience tag.

This last use of DA would seem to be the best candidate for the trait needing correlation, since it’s the one involved in attention. Individuals with high levels of DA would have larger stores of working and active memory. Let’s imagine four types of hunting behavior, derived from the four combinations of DA level and attention span (when hunted, behavioral differences melt away, as every organism gets its attention span driven up to transient high levels).

High-DA, long attention: Their ability to keep attention focused on a potential prey situation is rewarded by their large capacity to store potentially relevant details of that environment.

High-DA, short attention: Their propensity to shift attention is rewarded by their ability to store potentially relevant details of multiple different environments.

Low-DA, short attention: Their propensity to shift attention is compatible with their relatively limited ability to store information about the environment. Once they’ve observed as much as they can absorb, if there’s no prey found, they move on. (The reason why low DA does not put them at an evolutionary disadvantage is that it confers advantages unrelated to attention, such as a diminished conscious experience of pain.)

Low-DA, long attention: OK, this one doesn’t work. They’d be attending to their environment past the point where they could extract and store additional information about it. They’d all get outcompeted by the other three types, and they’d starve.

And that explains why OA was replaced by NE. By making the size of working and active memory a prerequisite to the strength of the attention span, you eliminate individuals who have the ineffective combination of a low memory capacity but long attention span. And that would confer an evolutionary advantage on the mutation that caused the correlation. If we start with two equally sized populations of sharks, one of which still uses OA to control attention and one which has the mutation that substitutes NE, in the next generation the NE sharks will be more prevalent, since some of the OA sharks will have starved. With each passing generation the population imbalance will increase, and eventually, the OA sharks would become extinct.

 And at this point you probably wonder what this means for people. That’s the next post.

