I’m going to be speaking in Rochester tomorrow about the use of statistical simulation in Common Core Algebra II. I think this is a deep topic that we are only beginning to understand. So far as I can tell, inferential statistics has never been taught in a standards system that clearly places simulation ahead of more formulaic approaches. Don’t get me wrong: I completely support the move toward more experiential education in probability and statistics.
The ideas that underlie the statistics standards, especially in CC Alg II, come directly from the GAISE Report (Guidelines for Assessment and Instruction in Statistics Education). I have to say, for a report with such a laborious name (notice that assessment comes before instruction), it’s a really great read. And, yes, I know how weird that sounds. Here’s the report:
The report encourages the use of statistical simulation to help students understand the connection of probability to inferential statistics. Given the nearly impossible task of assessing a student’s understanding of simulation or the use of it, we will likely have to teach the traditional statistical formulas anyway. Keep in mind, these formulas are mentioned neither in the standards nor in the NYSED clarifications of the CC Alg II standards. Yet what else can they place on a standardized test? Can’t you just wait for the mnemonic for margin of error = 2 × standard deviation / square root of n? That cannot be a pleasant tune.
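For what it’s worth, that formula is a one-liner. Here’s a minimal sketch in Python (the function name `margin_of_error` is my own; the formula is just the 2 × standard deviation / square root of n expression above):

```python
import math

def margin_of_error(std_dev, n):
    """Margin of error as the rule of thumb gives it: 2 * s / sqrt(n)."""
    return 2 * std_dev / math.sqrt(n)

# Example: a sample of 100 values with standard deviation 15
print(margin_of_error(15, 100))  # 3.0
```

No mnemonic required.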
We’ve put together web-based apps and TI-84 calculator programs to run the three major simulations suggested in the standards and in the GAISE Report. After the description of each simulation, I give a link to the web app and a link to the calculator code.
Sample Normal Distribution: In this simulation, the user specifies the mean and standard deviation for a normally distributed population. Random samples of any size are then generated a user-specified number of times, and distributions of the sample means and sample standard deviations are produced. This simulation can be used to establish how likely a particular sample mean would be from a given population.
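The idea behind this simulation can be sketched in a few lines of Python. This is an illustrative version, not the code behind our app; the function name and parameters are my own:

```python
import random
import statistics

def simulate_sample_means(mu, sigma, sample_size, num_samples):
    """Repeatedly draw samples from a normal population with mean mu and
    standard deviation sigma, recording each sample's mean and st. dev."""
    means, stdevs = [], []
    for _ in range(num_samples):
        sample = [random.gauss(mu, sigma) for _ in range(sample_size)]
        means.append(statistics.mean(sample))
        stdevs.append(statistics.stdev(sample))
    return means, stdevs

# 1000 samples of size 25 from a population with mean 100, st. dev. 15
means, stdevs = simulate_sample_means(mu=100, sigma=15,
                                      sample_size=25, num_samples=1000)
print(statistics.mean(means))  # should land near the population mean of 100
```

A histogram of `means` is the distribution students examine: an observed sample mean far out in its tails would be unlikely to come from this population.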
Sample Proportion Simulation: In this simulation, the user specifies a population proportion value ranging from 0 to 1 (population p). Random samples of any size are then generated a user-specified number of times, and the distribution of sample proportion values (p-hat) is produced. This simulation can be used to establish how likely an observed sample proportion (p-hat) would be if the population really had a given p.
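This one is just as short to sketch. Again, this is my own illustrative Python, not our app’s code; names are mine:

```python
import random

def simulate_sample_proportions(p, sample_size, num_samples):
    """Repeatedly sample from a population with true proportion p,
    recording the sample proportion (p-hat) each time."""
    p_hats = []
    for _ in range(num_samples):
        successes = sum(1 for _ in range(sample_size) if random.random() < p)
        p_hats.append(successes / sample_size)
    return p_hats

# 1000 samples of size 50 from a population with p = 0.5
p_hats = simulate_sample_proportions(p=0.5, sample_size=50, num_samples=1000)

# What fraction of the simulated p-hats are at least as large as an
# observed value of 0.62? A small fraction suggests 0.62 would be
# surprising if the population p were really 0.5.
extreme = sum(1 for ph in p_hats if ph >= 0.62) / len(p_hats)
print(extreme)
```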
Difference of Sample Means: In this simulation, the user inputs data sets from two treatments. The data sets are scrambled to produce random groupings of outcomes a user-specified number of times. For each random grouping, a difference of treatment means is calculated. A final distribution of these differences is produced. This simulation can be used to establish how likely an observed difference in treatment means would be by chance assignment alone. If chance alone rarely produces a difference that large, we have evidence that the difference is due to the treatments.
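The scrambling step is the heart of this simulation. Here is a minimal Python sketch of the idea (a randomization test); the function name, the sample data, and the one-sided tail count are all my own choices for illustration:

```python
import random
import statistics

def randomization_test(group_a, group_b, num_shuffles=1000):
    """Pool the two data sets, shuffle them into two random groups of the
    original sizes, and record the difference of group means each time."""
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    diffs = []
    for _ in range(num_shuffles):
        random.shuffle(pooled)
        new_a, new_b = pooled[:n_a], pooled[n_a:]
        diffs.append(statistics.mean(new_a) - statistics.mean(new_b))
    return diffs

# Made-up outcomes for two treatments
treatment = [12.1, 14.3, 11.8, 15.0, 13.2]
control = [10.4, 11.1, 9.8, 12.0, 10.9]

observed = statistics.mean(treatment) - statistics.mean(control)
diffs = randomization_test(treatment, control)

# How often does chance assignment alone produce a difference
# at least as large as the one we observed?
p_value = sum(1 for d in diffs if d >= observed) / len(diffs)
print(p_value)
```

If `p_value` is small, random assignment alone rarely produces a gap as big as the observed one, which is exactly the evidence the standards want students to reason about.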
I look forward to seeing how these statistics standards play out in the coming years, both in how we educate students and how we assess them. I hope that the powers that be clarify those standards soon. I’ve always felt that I can be most creative with my teaching when the rules of assessment are clearly laid out.
We are entering a great stage of education where internet-based technology, such as our web-based simulation apps, will allow students to easily experiment with the crucial (but underappreciated) connection between probability and statistics. It is a rich connection that can come alive with simulation. I hope that we have contributed both tools and lessons to get us part of the way there. We know there is still great work to be done and look forward to contributing more lessons on this topic as the year goes on.
See you all in Rochester!