# Response Bias Project Makeover

**Leigh Nataro teaches elementary statistics, math for business, and math for teaching at Moravian University in Bethlehem, PA. Leigh has been an AP Exam Reader and Table Leader and was on the AP Statistics Instructional Design Team, where she helped to tag items for the AP Classroom question bank. In addition to leading AP Statistics workshops, Leigh leads in-person and virtual Desmos workshops with Amplify. Leigh can be reached on Twitter at __@mathteacher24__.**

A picture of a pizza is shown to a student and then the student is randomly assigned to answer one of two questions.

Question 1: Would you eat this pizza?

Question 2: Would you eat this vegan pizza?

Does telling a person that a food is vegan affect their response? This was the experiment created by a pair of my students for one of the most engaging projects done in AP Statistics - __The Response Bias Project__. Students create an experiment to see if they can purposefully bias the results of a question.

Does inserting the word “vegan” into this question while showing the exact same picture bias the results? Here are the results obtained by the students. What do you think?

In my original iteration of this project, students would create various colorful (and often glitter-laden) graphs to compare the results.

**But how likely would it be to get these results by chance if no bias was present?**

To answer this question, The Response Bias Project needed a makeover.

**The Makeover: Include a Simulation**

One of the most challenging AP Stats topics for students is estimating p-values from simulations. This is one of the reasons we run a playing-card simulation on the first day of class called “Hiring Discrimination: It Just Won’t Fly”. (An online version of the scenario and simulation can be found here: __Hiring Discrimination__.) Several times during the first semester, we use playing cards to perform simulations and then interpret the results to determine if a given claim is convincing. Although we never formally call the results a p-value, examining the dotplots of simulated results sets the stage for tests of significance in the second half of the course. Making over this project with a simulation leads to more robust conclusions and reinforces the idea of a p-value estimated from simulation.
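The card-shuffling logic can be sketched in code: pool all the responses onto "cards," shuffle the deck, and deal the cards back out to the two question groups, just as students do by hand. The counts below are hypothetical, for illustration only, not the students' actual data:

```python
import random

# Hypothetical counts (not the students' actual data): 20 students saw
# each question; suppose 16 said yes to "pizza" and 10 said yes to
# "vegan pizza".
yes_plain, n_plain = 16, 20
yes_vegan, n_vegan = 10, 20

# Under "no bias" the wording doesn't matter, so pool all 26 yes and
# 14 no responses onto cards.
cards = (["yes"] * (yes_plain + yes_vegan)
         + ["no"] * ((n_plain - yes_plain) + (n_vegan - yes_vegan)))

def one_trial():
    """Shuffle the deck and deal 20 cards to the vegan group."""
    random.shuffle(cards)
    return cards[:n_vegan].count("yes")

# Each value is one "dot" on the dotplot: a yes-count for the vegan
# group that could occur by chance alone if wording had no effect.
simulated = [one_trial() for _ in range(200)]
print(min(simulated), max(simulated))
```

Plotting the 200 simulated counts as a dotplot mirrors what students build in CODAP; the observed vegan-group count can then be located on that plot.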

Originally, my students used the free trial of __Fathom__ to create their simulations. However, in recent years Fathom has not been supported on newer Mac operating systems. This led me to investigate the Common Online Data Analysis Platform, or __CODAP__. CODAP is a free online data-analysis tool designed specifically for use with students in grades 6-14. Students can save their work in Google Drive, but an account is not required to access CODAP.

To understand what students need to do to create their simulation, I share an instructional video related to one of the projects. Here is the video that goes with the vegan pizza project: __CODAP for Response Bias Project__ and the CODAP file: __Vegan Pizza Simulation__.

Based on their simulation results, students determine whether adding "vegan" biased the results or whether results like those from their experiment could plausibly occur by chance alone. Students display their work on posters that are hung around the room. (A sample of posters is included at the end of the blog.) Then, half the class stands by their posters and presents to the few students in front of them. This takes about five minutes. Then, the students move on to another poster and pair of presenters. Students give their presentation three or four times to different small groups of their peers. There are no PowerPoints and no notecards, which means less nervousness and stress for students, and the format gives them practice with the more informal presentations they might need to give at some point in the future. Another benefit is that students see multiple instances of simulated results, which helps lay the foundation for the __concept of a p-value__ in future units of the course.

**Why Is This Makeover Helpful?**

Although students learn about some common AP Statistics topics earlier in their math careers, simulation is a topic that is new and often challenging for many students. Creating the simulation in CODAP helps students understand __what each dot in the simulation represents__ and how the overall distribution shows more likely and less likely outcomes. Students also identify where the value of their statistic (the count of yes answers) falls on the dotplot. They are essentially answering the test of significance question: assuming there was no bias, how often did the observed outcome, or a more extreme outcome, occur by chance alone? Reading about the results on their classmates' posters and hearing about them in multiple presentations solidifies the concept of using probabilities from simulations to draw conclusions.
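Once the simulated yes-counts are collected, the p-value estimate is simply the fraction of dots at least as extreme as the observed statistic. A minimal sketch, where the function name and the dotplot values are hypothetical rather than taken from any student's project:

```python
def estimated_p_value(simulated_counts, observed, direction="low"):
    """Fraction of simulated trials as extreme or more extreme than observed."""
    if direction == "low":
        extreme = [c for c in simulated_counts if c <= observed]
    else:
        extreme = [c for c in simulated_counts if c >= observed]
    return len(extreme) / len(simulated_counts)

# A hypothetical dotplot of 100 simulated yes-counts under "no bias";
# suppose the experiment observed only 10 yes answers in the vegan group.
dots = [11, 12, 12, 13, 13, 13, 14, 14, 15] * 11 + [10]
print(estimated_p_value(dots, observed=10, direction="low"))  # -> 0.01
```

Here only 1 of the 100 dots is as low as the observed count of 10, so a result that extreme would rarely occur by chance alone if the wording had no effect.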

#### The Concept of Simulations from the AP Statistics Course and Exam Description

**Skill 3.A:** Determine relative frequencies, proportions, or probabilities using simulation or calculations.

**UNC.2.A.4:** Simulation is a way to model random events, such that simulated outcomes closely match real-world outcomes. All possible outcomes are associated with a value to be determined by chance. Record the counts of simulated outcomes and the count total.

**UNC.2.A.5:** The relative frequency of an outcome or event in simulated or empirical data can be used to estimate the probability of that outcome or event.

Note: On the 2023 AP Statistics Operational Exam, determining a probability based on data from a simulation was part of __free-response question 6__.

To understand how students are expected to use simulations on the AP Exam, consider Free Response Question 5 from the __2022 AP exam__. Students were asked to use results from a simulation to estimate a p-value from a dotplot. The two key ideas needed to calculate the correct p-value were understanding what each dot represented and determining where the sample statistic of 5.66 fell relative to the dots on the plot. On the dotplot, only 3 of the 120 simulated differences in means were 5.66 or greater, giving an estimated p-value of 3/120 = 0.025. Students then needed to compare this p-value to 0.05 and state their conclusion in context.
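The arithmetic behind that question is the same fraction students estimate in the project, just computed from the released dotplot:

```python
# From 2022 FRQ #5: 3 of the 120 simulated differences in means
# were 5.66 or greater.
extreme_count = 3
total_trials = 120

p_value = extreme_count / total_trials
print(p_value)         # -> 0.025
print(p_value < 0.05)  # -> True: convincing evidence at the 0.05 level
```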

*2022 #5: estimating a p-value from a dotplot*