Updated: Jan 4
Written by Dashiell Young-Saver
One of my top students looked up from his paper, where he had neatly written a new set of conditions for what seemed like the 800th significance test we learned that semester. He asked me, “So, like, what does this mean?”
I started giving him an explanation of why we check conditions given the distributional assumptions of…
“No. What I mean is: What’s the point of all this state-plan-do-conclude stuff?” he asked.
I paused, then replied: “To properly make inferences from data.”
He responded, “But sir, if I was given some data – like a spreadsheet or something like that – and someone had a question about it… I don’t think I’d know what to do with it. Even with this stuff, I’m writing conclusions, but I don’t really know what they mean… or, even, why they’re important. You know?”
I must have looked shell-shocked because, after he said that, the whole class stopped and stared at me. His words hit me like a ton of bricks. I realized I had taught inference all wrong.
It was my first year of teaching stats, before I made my lessons interactive and relevant. I spent that spring semester teaching inference the same way I learned it: as a set of procedures that followed from pre-packaged summary statistics about contrived contexts. I reduced the beauty of data analysis into worksheet fodder. So, of course, my students didn’t really know what they were doing or the significance (pun intended) behind it.
After that year, I vowed never to fall into the same trap. I made my lessons simulation- and experience-based (EFFL) and grounded them in data from relevant contexts – topics my students genuinely cared about. As a result, my students’ understanding of inference changed dramatically.
For example, I taught an inference lesson on race and policing. I knew my students cared about the topic, so they would have a real stake in the data and the conclusions they drew. When I asked them about the validity of the sampling method (random condition), about concerns with doing multiple tests, about the difference between statistical significance and practical importance, and about the meaning and generalizability of their conclusions, they cared about the answers to those questions. They felt that their analysis could meaningfully contribute to their understanding of the world. This extra engagement drove them to further understand their work at a conceptual level.
When you combine the greater intuition from EFFL with the extra engagement from relevant contexts, there’s no stopping what your students can do. That’s why Stats Medic and Skew The Script have teamed up once again to bring you our first batch of inference lessons (inference for proportions).
Again, I find myself indebted to Lindsey and Luke for their kindness, energy, and thoughtfulness as we built these lessons. I know I can’t wait to use them in my classroom, and I’m already looking forward to more collaborative inference lessons to come (for means, chi-square, and maybe even regression!).
Here’s the first batch. Enjoy!
LESSON: German Tank Problem
Topic: Introducing Sampling Distributions
Answer Key: PDF
What we like about this lesson: it’s not just using a mean to estimate a mean. Instead, students estimate a maximum with serial numbers. This gets them thinking outside the box at a conceptual level.
As Josh Tabor beautifully noted in an earlier blog post, avoid getting lost in creating new statistics. To better hit the learning targets, move quickly into having students analyze bias and variability in the provided statistics.
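If you'd like to preview the bias/variability comparison before class, a quick simulation helps. This is a minimal sketch (the function name, sample size, and trial counts are my own choices, not from the lesson) comparing the plain sample maximum with the classic “max plus average gap” estimator:

```python
import random
import statistics

def simulate_estimates(n_tanks=342, sample_size=5, trials=10_000, seed=1):
    """Sample serial numbers 1..n_tanks (without replacement) many times
    and record two estimators of the population maximum."""
    rng = random.Random(seed)
    max_only, max_plus_gap = [], []
    for _ in range(trials):
        sample = rng.sample(range(1, n_tanks + 1), sample_size)
        m = max(sample)
        max_only.append(m)
        # "max + average gap" estimator: m + m/k - 1 (k = sample size)
        max_plus_gap.append(m + m / sample_size - 1)
    return max_only, max_plus_gap

max_only, max_plus_gap = simulate_estimates()
print("sample max estimator: mean =", round(statistics.mean(max_only), 1))
print("max + avg gap:        mean =", round(statistics.mean(max_plus_gap), 1))
```

Across many trials, the plain sample maximum comes out well below 342 (biased low), while the max-plus-gap estimator centers near the true maximum – which makes bias a concrete, visible idea for students.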
Topic: Confidence Interval for One Proportion
Answer Key: PDF
What we like about this lesson: it demonstrates how to carefully consider evidence about a topic that is often discussed through incendiary and overly simple claims in popular discourse.
We used immigration data from the EU, since it’s more reliable than US immigration data. The EU context may throw students off a bit, so it’s important to show how the debates around immigration in the EU mirror the debates in the US. It’s also important to have students think carefully about the generalizability of their results.
Ensure that roughly 44% of the beads represent males, as this is the best estimate of the population proportion among immigrants to Madrid in the available data.
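If you want to see why drawing beads from a roughly 44% population gives students trustworthy intervals, you can simulate the interval’s capture rate. This is a sketch, not part of the lesson: the sample size and function names are illustrative, and it uses the standard one-proportion z-interval.

```python
import math
import random

def one_prop_ci(successes, n, z=1.96):
    """95% one-proportion z-interval: p_hat +/- z*sqrt(p_hat(1-p_hat)/n)."""
    p_hat = successes / n
    moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - moe, p_hat + moe

def capture_rate(p=0.44, n=50, trials=5_000, seed=1):
    """Fraction of simulated samples whose interval captures the true p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        successes = sum(rng.random() < p for _ in range(n))
        lo, hi = one_prop_ci(successes, n)
        hits += lo <= p <= hi
    return hits / trials

print("capture rate:", capture_rate())
```

The simulated capture rate lands near (a bit under) 95%, which is a nice talking point: “95% confidence” describes the method’s long-run behavior, not any single interval.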
LESSON: Flint Water Crisis
Topic: Significance Test for One Proportion
Answer Key: PDF
Population Data: PDF
What we like about this lesson: it shows how citizen-driven statistics can be used to effectively test claims made by public officials.
The EPA standard is a bit strange at first read (10% of homes having high lead levels is “ok” according to the EPA?!). But it’s the real standard! And Flint clearly surpassed that threshold. So, try to avoid getting sidetracked into conversations about the standard itself. Such conversations tend not to hit the lesson targets.
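If you want a quick way to check the class’s test by hand, the one-proportion z-test against the 10% standard is easy to script. A minimal sketch – the counts below are hypothetical placeholders, not the lesson’s actual data:

```python
import math
from statistics import NormalDist

def one_prop_z_test(successes, n, p0=0.10):
    """One-sided z-test of H0: p = p0 vs. Ha: p > p0."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error computed under H0
    z = (p_hat - p0) / se
    p_value = 1 - NormalDist().cdf(z)
    return z, p_value

# Hypothetical counts for illustration (NOT the lesson's real data):
# 45 of 271 sampled homes with high lead levels
z, p_value = one_prop_z_test(45, 271)
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```

With any counts well above the 10% threshold, the p-value comes out tiny – convincing evidence against the official claim, which is exactly the punchline of the lesson.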
LESSON: Race-Resume Lesson
Topic: Significance Test for Two Proportions
Answer Key: PDF
Lakisha Resume: PDF
Emily Resume: PDF
What we like about this lesson: it demonstrates how causal inference about important issues is possible when experiments are well-designed.
Make sure to present the CVs to students under the framing of a real applicant to the math department of your school. Also, ensure assignment of CVs is random and that students evaluate the CVs independently. If doing this activity online, have students use a random number generator. Then, give them a link to the proper CV (as a PDF on Google Drive, for example) based on their random draw. Check out this example.
It probably won’t be possible to do this activity and the gender-CV activity in the same school year, since students will know what’s happening the second time. So, you’ll have to choose one!
It’s important to relate in-class results (whichever direction they end up) to results from the original study. Then, raise a discussion around generalizability: to which population are your class’s results generalizable? To which population are the original study’s results generalizable? How do both experiments influence your understanding of hiring discrimination?
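One way to connect the class’s results to the logic of random assignment is a randomization (permutation) test: shuffle the name labels many times and see how often chance alone produces a callback gap as large as the observed one. A sketch under stated assumptions – the function name and the callback counts are made up for illustration:

```python
import random

def randomization_test(callbacks_a, n_a, callbacks_b, n_b,
                       trials=10_000, seed=1):
    """Two-sided permutation test of H0: callback rates are equal.
    Returns the fraction of shuffles with a gap at least as extreme
    as the observed gap."""
    rng = random.Random(seed)
    observed = callbacks_a / n_a - callbacks_b / n_b
    # Pool all responses (1 = callback, 0 = no callback), then reshuffle
    pool = ([1] * (callbacks_a + callbacks_b)
            + [0] * (n_a + n_b - callbacks_a - callbacks_b))
    count = 0
    for _ in range(trials):
        rng.shuffle(pool)
        diff = sum(pool[:n_a]) / n_a - sum(pool[n_a:]) / n_b
        if abs(diff) >= abs(observed):
            count += 1
    return count / trials

# Hypothetical counts: 30/100 callbacks for one name, 5/100 for the other
print("p-value:", randomization_test(30, 100, 5, 100))
```

Because each shuffle re-enacts the random assignment of CVs, this makes the meaning of the p-value concrete: “If the names truly didn’t matter, how often would random assignment alone produce a gap this big?”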