Probability Distributions and Expected Value

SWBAT develop a basic understanding of the graphs of probability distributions and how to calculate and use expected values.

Big Idea

As the semester nears its end, we make beautiful graphs, and briefly consider some deeper ideas that students might see in their future study of statistics.

Flip That Coin On Your Table: What's Going to Happen?

1 minute

The purpose of today's opener (on the second slide of the lesson notes) is just to get kids talking.  As students arrive, they'll find a penny at each table, and they'll see their instructions projected on the board.  This opener is supposed to be a little irreverent, and kids have as much fun reacting to it as I have watching them react.  If anyone asks, "Wait, for real?" I say, "YES!  The pressure is on -- what are you going to flip?"

Of course, there's something else going on behind this.  As kids get to talking in their groups, I'm listening: what kinds of probability are they talking about?  Do these informal conversations indicate that students grasp the distinction between empirical and theoretical probability?

In the next part of this lesson, we'll continue to develop that distinction.

Class Discussion and Guided Notes: Creating Graphs

20 minutes

As students flip their pennies and talk about what's happening, I distribute this "Four Probability Graphs" handout. Today, students will use some of the data representations they studied in Unit 1 to represent the different outcomes of probability experiments.  That's what standard S-MD.1 says students should be able to do:

Define a random variable for a quantity of interest by assigning a numerical value to each event in a sample space; graph the corresponding probability distribution using the same graphical displays as for data distributions.

This standard allows students to revisit a skill from earlier in the semester. When it comes to standards-based grading, I frame this as an opportunity for students to improve their grades on earlier learning targets if they need to. Although we won't fully treat the standard - I won't have time to formally define the idea of a "random variable" like I would in an AP Stats course - the big idea for today is that we can use graphs that we're already familiar with to represent probability.

This will be a Guided Notes lecture.  I will guide students in creating each of these graphs, as we talk about what's happening as we go along.

Graph #1: Flipping Pennies

On the first graph we create (see the handout and slide #3 of the lesson notes), we will flip a coin and plot the accumulated percentage of coins coming up "heads" after each flip of our experiment. Graphs like these are popular in statistics textbooks; I find that students gain a deep understanding of the value of running multiple trials, and randomness in general, by creating one as a group.

There is space on the graph to record data for 65 flips: after each of the first ten flips, and then after every five from there.  I explain to students how this graph is set up, and then I ask for a volunteer to help me keep track of the data on the side board.  Then I ask for a volunteer to go first, by flipping the penny at their table, noting that everyone will get two or three chances to contribute a flip.  

Now, after that first flip, the accumulated percentage of heads is going to be either 0% or 100%.  We plot that, and to connect this back to the opener, I say, "Does this mean that the coin will come up heads 100% of the time?"  After two flips, the graph can only be at 0%, 50%, or 100%, and I ask students to note the possibilities after each step.  Here is an example of what the graph might look like after five flips.  At this point, we note that this is discrete data - there are no "partial flips" in between the integers - but that in order to make the graph easier to read and interpret, we can decide to connect the dots.

We continue, with students taking turns to flip the coins, and developing the graph as we go (10 flips, 40 flips, and, when we're done, 65 flips).  I like to help students make a few observations about streaks.  The first is that, in general, they will happen.  The only thing weirder than a coin coming up heads four or five times in a row would be flipping a coin hundreds of times and never seeing that happen.  The second, and more important, is that as our number of flips increases, a streak has less influence on the graph.

We'll see this idea again on Graph #2 -- the more trials we use, the more our data will look like we expect, even though on a small scale, anything can happen!
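For teachers who want to preview the shape of this graph before class, the running-percentage experiment is easy to simulate.  This is a hypothetical Python sketch, not a tool used in the lesson; the function name and seed are my own inventions:

```python
import random

def running_heads_percentages(num_flips, seed=None):
    """Simulate fair coin flips and return the accumulated
    percentage of heads after each flip."""
    rng = random.Random(seed)
    heads = 0
    percentages = []
    for flip in range(1, num_flips + 1):
        if rng.random() < 0.5:  # each flip is heads with probability 1/2
            heads += 1
        percentages.append(100 * heads / flip)
    return percentages

percentages = running_heads_percentages(65, seed=1)
# After one flip the graph can only sit at 0% or 100%;
# after two flips, only at 0%, 50%, or 100%.
print(percentages[0], percentages[1])
```

Plotting the 65 values against the flip number reproduces the jumpy-then-settling curve the class builds by hand.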

Graph #2: Flipping 100 Pennies

Now that we've flipped 65 pennies and observed the results, we'll move on to Graph #2 (see the handout and slide #4 of the lesson notes).  Again, we're going to create this graph by running an experiment on the fly, but this time we'll ask, "What if a 'trial' isn't one flip, but a set of 100 flips?"  We're also going to enlist a computer to help out with this task.  "Even though we could flip these pennies hundreds of times," I say, "my goodness, do we really want to?"  No: this is exactly the kind of task best left to computers.  In this screencast, I share an overview of what I show to students.  If you're interested, here is the Java code I use, but as I note in the video, you can also find plenty of tools online to accomplish the same task.

I tell everyone that this graph will be a dot plot.  On the handout, I've provided empty space, but I leave it to students to decide how they're going to set it up.  I'll often start out by setting up my graph on the board like this, and asking, "Do you guys think this is enough?"  Then, after running just a few trials - often just one - we note that, "Nope, that's not going to cover it."  It doesn't take long to press the "run" button on the computer 50 times, and to plot each result.  After 50 trials, the dot plot might look like this.  The questions I ask at this point are specific to our results, like, "Does this mean that we're more likely to get 53 or 54 heads in 100 flips than 49 or 50?"
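If you'd rather not use the Java code, the same experiment takes only a few lines of Python.  This is a sketch under my own assumptions (fair coins, 50 trials, a fixed seed so it's repeatable), not the author's program:

```python
import random
from collections import Counter

def heads_in_100_flips(rng):
    """One 'trial': flip 100 fair coins and count the heads."""
    return sum(rng.random() < 0.5 for _ in range(100))

rng = random.Random(2024)  # seeded so the sketch is repeatable
trials = [heads_in_100_flips(rng) for _ in range(50)]

# Tally the results the way the class stacks dots on the dot plot.
counts = Counter(trials)
for heads in sorted(counts):
    print(f"{heads:3d} heads: {'*' * counts[heads]}")
```

The text tally printed at the end is a crude stand-in for the dot plot the class draws on the board.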

Both of the first two graphs help us review empirical vs. theoretical probability and the importance of multiple trials.

Graph #3: Rolling Two Dice

The third graph will be purely theoretical, and as we transition to it, I ask students to think about the distinction between what we've just done on the first two graphs, and just "imagining our way through an experiment."  I continue, "Graph #3 is about dice, but I'm not going to give you any dice to roll.  I want you to work with your group to imagine rolling two dice, and all the possible sums of the two numbers that you might roll."  

After a few minutes, I ask students to identify the size of the sample space for the experiment they're doing in their minds.  To do so, students must recognize that these two dice are independent of each other, and use the fundamental counting principle to recognize that six outcomes on each die will yield a total of 36 outcomes for their sum.  Many students have seen this before, and they're confident in how easy this feels after stretching their minds on the first two graphs.  "Now, we're most interested in creating a graph of this sample space," I say.  "Try to create a histogram representing all possible outcomes of rolling two dice."

From here, some groups of students need more help and others don't need any.  If everyone is rolling with the task, I'll invite students to sketch their work on the board.  If students are having a hard time, I'll help out.  My goal is to help students as little as I have to, but not to withhold anything.  Giving notes via a running conversation as we work together to create a graph always yields the kinds of insight I hope for students to have.  After we're satisfied that the histogram is complete, we might annotate it - note that this is another figure that's popular in stats textbooks - and talk about the patterns we see.
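The 36-outcome sample space students imagine can also be enumerated directly.  A small sketch of the counting behind the histogram:

```python
from collections import Counter

# Enumerate the full 36-outcome sample space for two fair dice.
sums = Counter(a + b for a in range(1, 7) for b in range(1, 7))

assert sum(sums.values()) == 36  # fundamental counting principle: 6 x 6

# Print the theoretical histogram of sums (frequency out of 36).
for total in range(2, 13):
    print(f"sum {total:2d}: {'#' * sums[total]}  ({sums[total]}/36)")
```

The printout shows the familiar triangle: a single way to roll 2 or 12, and six ways to roll 7.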

Graph #4: Rigged Dice

The final graph of this activity once again involves theoretical data, but this time the dice are rigged as described on slide #7 of the lesson notes.  Again, I'll help as little as possible, but I'm prepared to show students how to create an addition table displaying the possible outcomes of this scenario, and to guide them in creating the graph.  I tell students that they have the option of creating a box plot or a histogram.  You can see that, like on Graph #2, I've provided minimal guidance on the handout.

This graph will provide a simple counterpoint to the normal-looking curves that we produce by running coin-tossing trials or rolling standard six-sided dice.

New Notes: Interpreting Graphs and Expected Value

30 minutes

We all just worked hard to create those four graphs, and hopefully everyone had plenty of good ideas. Now it's time to give a basic working introduction to the idea of expected value. As with many of the ideas I've shown students in this course, we won't give expected value the full treatment it deserves. My goals are to expose students to this idea, because it's fascinating and useful, and to help them review some key algebraic concepts along the way.

I tell everyone that I've gone a few steps further on Graph #2. I used my handy computer to run 101, 1000, and 100,000 trials of the coin-flipping experiment. The results of each are on slides #8-10 of the lesson notes.  Do I sound like too much of a nerd when I say that it's a dramatic reveal when we get to the plot of 100,000 trials? Maybe - but take my word for it: even some of the most resistant students gasp with delight when they see it. There is no prescribed formula for talking about these graphs.  My goal, as it has been throughout the unit, is to help students appreciate the way a bigger, more predictable picture takes shape as we run a greater number of trials.

With the plot of 100,000 trials projected on the screen, I ask students where they've seen this before. Students are quick to point out that it looks very similar to Graph #3: Rolling Two Dice. "Ah, so we have experimental data looking more like theoretical data if we run enough experiments," I say. "But what else does it look like?" And maybe someone remembers the normal curve that we studied in Unit 1. I flip to slide #11, and say, "Remember this?" Then, on slide #12, we return to a problem from earlier in the year, about using the normal curve and standard deviation to determine the percentage of adult men that are over or under a certain height. This time, I want students to recognize that we're using the language of probability when we say that a certain percentage of the population is of a certain height.

Ear Infections

Next, we look at a real-world example of a probability distribution. The graph on slides 13 and 14 was pulled from the online resources of the Biostatistics in Dentistry course at the University of Washington in Seattle, which is exactly the kind of course students are taking when they contact me years later to laugh and say that they're using what they saw in this course.

First, we just look at the graph to see how graphs like the ones we've spent the day creating might be used in the context of real data. Then, we notice that it does seem to follow a "curve," but that it looks more like the rigged-dice example, because it is "off-center". Then, on slide #15 we look at the corresponding data table, and the kind of question we're able to answer by looking at data like this:  How many ear infections can I “expect” a newborn to get by the time they turn 4?

I define the idea of expected value for students, and show them how to use this data to calculate the expected value by multiplying each possible outcome by the probability of that outcome, and then taking the sum of those products. The solution is 2.038. So how do we interpret that result?
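The calculation itself is just a sum of products.  Here's a minimal sketch; the distribution below is a hypothetical placeholder of my own, not the actual slide #15 data:

```python
def expected_value(distribution):
    """E[X] = sum of (outcome * probability) over the distribution.
    `distribution` maps each outcome to its probability."""
    total_prob = sum(distribution.values())
    assert abs(total_prob - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(outcome * prob for outcome, prob in distribution.items())

# Hypothetical illustration only -- NOT the slide #15 ear-infection table.
hypothetical = {0: 0.15, 1: 0.25, 2: 0.30, 3: 0.20, 4: 0.10}
print(expected_value(hypothetical))
```

Running the actual table through the same multiply-and-sum steps is what produces the 2.038 result.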

Expected Value, Intuitively

"My child might get two ear infections, or they might get three," I say. "They also might get more or less than that. But is it possible for an individual child to get exactly 2.038 ear infections?"

Finally, we come to one more big idea: expected value doesn't tell us exactly what will happen for a specific case.  Rather, it serves more as a mean for the larger population.  It's just like when we hear that the average American family has 2.2 children. No one has exactly that many kids, but if you gather ten families, it's likely enough that they'll have a total of 22 children among them. And the larger our sample grows, the closer we'll get to that mean.

For one more way to look at the same idea, we return to the idea of dice on slide #17. Using the expected value calculation, which in this case involves finding the average of six equally-likely outcomes, we get that the expected value is 3.5, which we can't ever actually roll. But if we roll the die ten times, it's reasonable to predict that the sum of those ten rolls will be somewhere around 35, and again, as that sample size continues to grow, we'll move closer and closer to this mean.
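The die calculation can be checked in a couple of lines.  A sketch, using exact fractions so rounding doesn't muddy the arithmetic, plus a seeded ten-roll simulation of my own to illustrate the "around 35" prediction:

```python
import random
from fractions import Fraction

# Each face is equally likely, so E[X] = sum(face * 1/6) = 21/6 = 3.5.
ev = sum(face * Fraction(1, 6) for face in [1, 2, 3, 4, 5, 6])
print(ev)  # prints 7/2, i.e. 3.5 -- a value no single roll can produce

# Ten simulated rolls should sum to somewhere near 10 * 3.5 = 35,
# and larger samples drift closer to the mean.
rng = random.Random(0)
rolls = [rng.randint(1, 6) for _ in range(10)]
print(sum(rolls))
```

Re-running the last two lines with 100 or 1,000 rolls shows the per-roll average settling toward 3.5, which is the same lesson as the 100,000-trial coin plot.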

One Abstract Example

On slide #17 is one more example problem that we can use to practice calculating expected value. I usually don't use this slide until later, when students are working on the practice problems that I share in the next section, or as an opener to a follow up lesson. Whenever the time is right, it's ready when we need it.

Practice: Interpreting Graphs and Expected Value

10 minutes

For students to practice, I've prepared this problem set. Each of the five situations here highlights a different aspect of what students have seen today, over the course of the unit, and even over the course of the semester.

The first problem, "TV Watching and Computer Use," provides another real-world probability graph, this one from the CDC. I ask students first to compute the percentage of youth who spend two or more hours each day watching TV or using computers, which is an application of the addition rule. The next two questions are about expected value, but they don't phrase it that way, instead asking, "What is the average?" But the idea here is to get students making that connection between expected value and the mean of a probability distribution, which is what S-MD.2 calls for: "Calculate the expected value of a random variable; interpret it as the mean of the probability distribution."

The second problem, "Drawing Two Playing Cards From a Deck," takes us back to the deck of cards that we've used as an example throughout the unit. I provide a frequency table, which students use to create a histogram (reviewing a skill from Unit 1), consider how to represent probability on this graph, and then answer questions similar to what they did in problem #1, by adding probabilities in a certain range, and calculating expected value.

Problem #3, "Scratch-Off Coupons," revisits a scenario from the previous problem set, but this time the question is about expected value.

Problems #4 and #5 give a brief treatment to the idea of calculating expected values in games of chance. As I reflect upon in the next lesson, this is one topic that I would love to have spent more time on this semester, but there's never enough time for everything! These two problems give kids an example of a popular field of study; if any students want to continue, I'll provide resources accordingly.