Episode 156: Carsploitation

The Overthinkers tackle Pixar’s Cars 2, the meaning of exploitation, Magic: The Gathering, and listener feedback.

Matthew Wrather hosts with Peter Fenzel, Mark Lee, Josh McNeil, and David Shechner to overthink Pixar’s Cars 2, the meaning of exploitation, Magic: The Gathering, and listener feedback (!).


→ Download Episode 156 (MP3)

Want new episodes of the Overthinking It Podcast to download automatically? Subscribe in iTunes! (Or grab the podcast RSS feed directly.)

Tell us what you think! Leave a comment, use the contact form, email us, or call (203) 285-6401 to leave a voicemail.

20 Comments on “Episode 156: Carsploitation”

  1. cat #

    The Overthinking It Podcast is highly inappropriate. Cars 2 was obviously far too provocative of a starting point. :)

    As far as I know, high school students are still programming their TI-89s. I never added anything to mine except formulas, but I remember having to learn basic programming.

    Possibly the best easter egg of the OTI podcast.


  2. Brian #

    The OTI tag line is great: it's not too broad or banal, and it doesn't use jargon as an exclusive weapon in defense of nerd tribalism. Maybe it's not the best of all possible tag lines, but it's definitely a great one, functional and accurate with a hint of humor. I don't know how much funnier you could get before going too broad and losing the analysis angle.

    On calculators, I never took a math class that required the TI-89, but my friends did, and they downloaded (I doubt they programmed) a game called "Fall-Up," where you control a ball through a maze that scrolls at ever-increasing speed. We played it all through English class, to the point where my vision was distorted afterward and everything looked scrolling and blurry. I can't imagine that being appealing today, with proper, ergonomically designed games available in full color on your cell phone, or in freaking 3D on a Nintendo 3DS, but programming the calculator might still be interesting.


  3. Howard #

    The Overthinking It Podcast: Smartsploitation

    The OTI Podcast: We have to go deeper

    The OTI Podcast: The popular culture has layers. Like an onion.


    • cat #



  4. Chris #

    I like the tagline. Long live the tagline!

    You were given scientific calculators to use? I had to buy one, which is to say I had to have one bought for me. I got by with a regular calculator (Calculator Classic, if you will) until Honors Physics in 12th grade, when suddenly there was work I couldn't do without a calculator that could make graphs. Although, I think that maybe in some of my math classes they were made available. That seems plausible, but I don't have any definitive memory of such a situation, though I do know that I had used scientific calculators prior to my aforementioned Physics class. My point is this: I never used them in any way other than how they were intended, because I did not have the ability to do anything resembling programming.


  5. Anthony Abatte #

    The OTI tagline is perfect. It serves as a mission statement and grabs the audience intended. Don’t feel compelled to change it. I find myself saying it along with everyone at the end of the show every week.

    My memories of TI calculators mainly involve playing Tetris and Dope Wars, a text-based game that taught us more about the stock market than our teachers did. The perks of being in advanced or "honors" classes meant that we were able to finish our work in a timely fashion and have fun with calculators.

    One last thing. Matt Wrather’s description of the actor who played Shredder reminded me of this pic. Wonder if he would sign this one?



  6. shechner OTI Staff #

    The perks of being in advanced or “honors” classes meant that we were able to finish our work in a timely fashion and have fun with calculators.

    Truly, the greatest perk of being in advanced or “honors” classes was that it meant you were the sort of person who could have fun with calculators.

    …Well, other than the innate fun anyone gets by calculating 24 * 2417, and then rotating the display 180°. Heheh.


  7. Skef #

    There is one problem with your tag line, which is that it makes it sound as if you’re doing popular culture a favor by analyzing it, which is … questionable, and a little pretentious.

    I think the appropriate word is not “deserve”, but “warrant”.


    • Stokes OTI Staff #

      “www.overthinkingit.com: questionable, and a little pretentious,” might be a more faithful description, truth be told.


  8. Gab #

    I’d marry Ruth Bader Ginsburg.

    Now see, I wonder whether Cars 2 is just relatively bad, as in sub-par compared to all of the other Pixar movies, while still ranking high in the general pool of kids’ movies. Did any of you actually see it?

    On Exploitation: One of my favorite political philosophers, Iris Marion Young, says, “The injustice of exploitation consists in social processes that bring about a transfer of energies from one group to another to produce unequal distributions, and in the way in which social institutions enable a few to accumulate while they constrain many more.” I’d say the definition of exploitation more appropriate for this is the watered-down version that ignores social institutions. See, the lite version of Young’s definition is that exploitation is when someone is taken advantage of. So in the case of Cars 2, Disney is taking advantage of a pre-existing market of consumers by making a product Disney knows will get consumed. Think about it: if you listen to the podcast again, pretty much every time “exploitation” or one of its variants gets used (apart from when Fenzel gives his specific definition), it can be switched out with a form of “take advantage.” So I guess what I’m saying is yeah, the film industry is exploitative in a more… colloquial…?… way of thinking. Now you could turn it into a more Marxist definition if you treat the time audiences spend watching as the labor put in by the workers, and attach to that the money spent on tickets and merchandise; the capitalists are the theater companies, retailers, and, of course, the studios.

    The thing about Magic cards is that a card’s monetary value is supposed to be determined by its “value” in-game, as in how powerful it is, as well as how many of them are around. What happens sometimes with WotC is they’ll incorrectly estimate a card’s value when they release it. And this, then, gets reflected in the card “type,” meaning common, uncommon, or rare. My favorite example is Loxodon Warhammer. It first came out in Mirrodin as a common for a few prints, then was switched to an uncommon. Then in I think Tenth Edition, it was changed to rare. Why? Well, +3/+0 and lifelink for only 3 mana? And if the creature it’s attached to dies, it just stays on the board to get attached to something else? That’s HIGHLY powerful; I’ve heard it described in play as “cheap” with a negative connotation (because the mana required is so low). I have it in every type, but I tend not to use it because it irks people so much. Which adds another layer to it, since the in-play “value” of a card or an action is supposedly demonstrated by how much mana is required for its use, but those are not always “fair,” as many still think of Loxodon Warhammer: just because it’s a rare now doesn’t mean it’s any “cheaper” to play once it’s in your hand or on the board. (Sidenote: The game was invented at my undergrad college. One of the students who helped Richard Garfield, the professor given the most credit for it, test and come up with rules and cards at first came back to talk to our Magic club when I was a junior. So awesome.) Anyhoo, I’ve gone off track here…


  9. Howard #

    Regarding polling and margins of error, it actually is meaningful if candidate A is +2 over candidate B, even if the margin of error is 2 (and Shechner, I’m surprised you didn’t talk about this). I believe the margin of error reported in most opinion polls is actually the sampling error: the absolute error goes as the square root of the sample size, so the relative error goes as its inverse. For example, if I take a sample of 100 people, the error is 10, and therefore 10%. For 1000 people, the typical size of an opinion poll, the error would be around 30, so around 3%. Sampling error is an attempt to account for the fact that you are taking a discrete subset of a large population.
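    Howard's rule of thumb is easy to check numerically. Here's a small sketch: the function names are hypothetical, and the 1.96 factor (a 95% confidence interval on a proportion near 50%) is an added assumption, not something from the comment.

```python
import math

def rough_relative_error(n):
    """Rule-of-thumb relative sampling error: 1/sqrt(n)."""
    return 1 / math.sqrt(n)

def margin_of_error_95(n, p=0.5):
    """95% margin of error for a polled proportion p with sample size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(rough_relative_error(100))    # 0.1: the "10%" for 100 people
print(rough_relative_error(1000))   # ~0.032: roughly 3% for 1000 people
print(margin_of_error_95(1000))     # ~0.031, the familiar +/- 3 points
```

    The rule-of-thumb and the textbook formula land in the same place for a 50/50 race, which is why "about 3 points for a 1,000-person poll" is such a durable heuristic.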

    What margin of error does not mean is that, if candidate A’s support is polled at 51% with a margin of error of 2, the true level of his support must be between 49 and 53. It’s actually more like a normal distribution, peaked at 51, with a standard deviation of 2 (and it’s been a while since I’ve done these kinds of statistics, so there may be a more sophisticated way to do it). The rule of thumb for normal distributions is that 68% of the curve falls within one sigma of the mean, 95% within two sigma, and 99.7% within three sigma. Applying that here, there’s a 68% chance candidate A’s true level of support is between 49 and 53, a 95% chance it’s between 47 and 55, and a 99.7% chance it’s between 45 and 57.

    What does that mean for the opinion poll? There are a couple of complicating factors that make the math more difficult, but ultimately you have two normal distributions peaked at different values. The degree to which the peaks are separated relative to the margin of error tells you how probable it is that A is ahead of B. However, the fact that A is peaked at a higher value than B is indicative that A is “probably” ahead of B.
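    One way to put a number on "probably ahead" is a quick Monte Carlo draw from the two distributions described above (51 with sigma 2 vs. 49 with sigma 2). Treating the two candidates' numbers as independent Gaussians is a simplifying assumption (in a real two-candidate poll they are strongly anti-correlated), so this is only a sketch:

```python
import random

random.seed(0)

def prob_a_leads(mean_a=51, mean_b=49, sigma=2, trials=100_000):
    """Monte Carlo estimate of P(A's true support > B's), modeling each
    candidate's poll number as an independent Gaussian."""
    wins = sum(
        random.gauss(mean_a, sigma) > random.gauss(mean_b, sigma)
        for _ in range(trials)
    )
    return wins / trials

print(prob_a_leads())  # roughly 0.76: "probably ahead," but far from certain
```

    So a 2-point lead with a 2-point margin of error works out to roughly a three-in-four chance of actually leading, under these assumptions: neither a coin flip nor a sure thing.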


    • shechner OTI Staff #

      Sheckie can only talk about *so* much math in a given podcast, Howard. I got to use the phrase “If we take as axiom…” so that basically met quota.

      I’m not sure that there’s any widely-adopted standard for the reporting or analysis of opinion polls. Which is to say, I’m sure there are standards that *should* be widely adopted, or which are *purported* to be widely adopted, but who can really tell the degree or character of mathematical rigor with which the producers of Tyra approach the results of their “Which potential First Lady has a more youthful-looking nasal septum?” poll? I’d hazard a guess that a great number of polling agencies just report the standard deviation, standard error of the mean or sampling error, without thought of, say, propagation of their error between different recording media, or the 95% confidence interval of the measurement itself.

      Besides, the other OverThinkers’ commentary is, pragmatically speaking, accurate. A 51%:49% race with a 2% “margin of error” (regardless of how it’s calculated) means that the two candidates are in a near dead heat. Another way of saying this is that the statistics cannot give the reader significant assurance that one candidate or the other is leading the race. Even if, for example, a Student’s t-test* proved that the means of the two populations (“I’m voting for Baltar!” vs. “I’m voting for Team Edward!”) were significantly different, the significance couldn’t be that substantial. After all, if each population follows a classical Gaussian, then the probability that the apparent 49% candidate (Team Edward) actually carries a majority of the vote is fairly high.

      I’m conflicted as to whether we should feel grateful that such news agencies (of which I would count Tyra) even report the margin of error at all. On the one hand, it somehow conveys to the audience the general concept of measurement error, a fundamental issue underlying scientists’ trust of their own experiments. On the other, it also conveys a possibly (likely?) false sense of confidence in the reporters’ honesty and acumen. Indeed, they can’t be lying to us; they’ve apparently used some basic mathematical process to confirm the validity of their results… [Shifty-eyes.]

      So, as Benjamin Disraeli is famed to have said, “There are three kinds of lies: Damned Lies, Statistics, and Things-Written-About-My-Momma’-on-the-House-of-Lords’-Bathroom-Stall-Walls.” Twain later punched this up with a bit of clever editing.

      *-This, too, brings up a more pervasive point, even in the Sciences: which statistical test was used in generating a p-value? I think at one point or another many of us end up being (often unwittingly) the perpetrators of a statistical con game…


      • Howard #

        Fair enough. The only polls I’ve ever really paid attention to are political polls, and there are plenty of pretty well-respected pollsters there.

        I guess the issue is that I just don’t have a good intuition with statistics, and really, I don’t think the general population does either. If it’s coming down to the wire and poll averages show 51-49 in favor of Team Edward over Team Jacob, then what are the chances Edward actually wins? Yeah, my gut says it’s close to 50-50, but it’s probably more like 55-45, or maybe even 60-40. I still don’t think it means that when you’re within the margin of error you can’t say ANYTHING about the data.

        Basically, I just trust whatever Nate Silver of FiveThirtyEight fame says.


        • shechner OTI Staff #

          Well, I doubt that my stats are any better than yours: I’ve never had any formal training, I’ve just picked up a bit from working with computational biologists. That said, you’re absolutely right about the country’s general (lack of) statistical knowledge: we’re probably way out in at *least* the 98th percentile of people who, well… who know what a percentile is. :)

          My sense about the problem at hand, though, is that there’s a difference between measurement error and statistical significance. The measurement error (the “margin of error”) is a measure of how accurately you’ve measured your numbers. However, what’s really informative in a case like this is the statistical significance of the *differences* you’ve measured. That is: what is the statistical support for calling 49 and 51 different numbers, given the experimental noise? The measurement error (the noise) has influence over the significance (we probably wouldn’t be having this discussion if Matt had said that the margin of error was +/- 20%, or +/- 0.0002%), but it’s not a direct readout of the assay (poll)’s analytical power. For that, you would need to do some sort of statistical test to confirm that, given your data and your measurement errors, you can confidently detect a 2-point difference between the different camps. Sometimes you can, sometimes you can’t.
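          The distinction shechner draws can be made concrete with a one-sample test of whether a 51% poll result is distinguishable from a 50-50 tie, given the sample size. This is a sketch using the normal approximation to the binomial; the sample sizes (1,000 and 20,000) are assumed values for illustration, not figures from the thread.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_value_above_half(p_hat, n):
    """One-sided p-value for H0: true support = 0.5,
    using the normal approximation to the binomial."""
    se = math.sqrt(0.25 / n)      # standard error under the null
    z = (p_hat - 0.5) / se
    return 1 - normal_cdf(z)

print(p_value_above_half(0.51, 1000))    # ~0.26: not significant
print(p_value_above_half(0.51, 20000))   # ~0.002: same 2-point gap, now detectable
```

          The same observed gap flips from undetectable to highly significant purely as a function of sample size, which is exactly the "sometimes you can, sometimes you can't" point.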


  10. Wade #

    I don’t see what the big deal over the tagline is. I think it’s perfectly evocative of what the website and podcast are all about. It’s kind of a zinger, kind of longwinded, but entirely appropriate and charming nonetheless. I’ve wound up using the tagline to describe OTI to friends on multiple occasions, simply because it just works. It just works, dammit!

    Also a suggestion: Perhaps a happy balance between doing a listener feedback episode every once in a while and not doing one at all would be to do what you guys did this week. Dedicate the last 5-10 minutes or so to a listener email or two. It may not be particularly efficient, but it would probably shut us up for a while. Isn’t that the hallmark of any great bureaucracy?


    • Chris #

      I agree. One piece of listener feedback a week could be a good thing.


  11. McNeil OTI Staff #

    I think Wade may have inadvertently solved the tagline question…
    OverthinkingIt.com: The Hallmark of Great Bureaucracy

    Then we can finally move the site to a higher level: writing sappy but sassy greeting cards to people in various state and federal agencies. It could also tie in with the article I’ve been trying to finish for years: “The Influence of 19th Century Noh Theater on the IRS 1040 Individual Income Tax Form”


    • shechner OTI Staff #

      Spoiler alert: the Earned Income Credit worksheet derives from a post-Meiji attempt to revitalize Noh by infusing kabuki elements.


Add a Comment