I’ve been forced to go through the torturous process known as learning stats for non-specialists. Hundreds of thousands, if not millions, of students have to go through this ordeal. I’m obviously not the first one to be infuriated by the bad teaching and the poor structure, but I’ve decided to take my venting in a positive direction! I understand the hubris of implying I could teach a better course than my current teacher…but considering this is his first course, I probably could.
Edit: I have discovered an alternative way to teach/learn stats called Resampling Stats. I don’t know if it’s much better, but I’ll be testing it out using the giftware Statistics101 program, the intro text included with it under the Help menu, and the intro text for resampling stats available online. I’ll keep you posted! If it works, I may ditch the rest of these suggestions in favour of this one.
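To make the resampling idea concrete: Statistics101 expresses its simulations in its own RESAMPLING STATS language, but the same approach can be sketched in plain Python. Here’s a hypothetical example (the question and numbers are mine, not from Statistics101’s intro text) that answers a probability question by simulating it rather than by deriving a formula:

```python
import random

# Resampling-style answer to "what's the chance of 8 or more heads
# in 10 tosses of a fair coin?" -- simulate many trials instead of
# working out the binomial math by hand.

random.seed(1)
trials = 100_000
hits = 0
for _ in range(trials):
    heads = sum(random.choice([0, 1]) for _ in range(10))
    if heads >= 8:
        hits += 1

print(hits / trials)  # close to the exact answer, 56/1024 ≈ 0.0547
```

The appeal of the resampling approach is that the program *is* the model: if you can describe the chance process, you can answer questions about it without memorizing formulas.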
I would organize the whole course around questions. After reading and implementing Cal Newport‘s excellent How to Become a Straight-A Student, and realizing that my academic hero, Robert Cialdini, teaches by posing a question at the beginning of class that’s so fascinating that students stay past the end of class to find the answer, I’ve started thinking in terms of questions and answers. Researchers have questions and go out to find answers amidst the chaos of life. I would organize the whole stats course around these three questions, because that’s what stats taught to non-specialists is all about:
- How do you communicate your data to others? Science is, for the most part, a collaborative endeavor now.
- How do you discover and prove your conclusions?
- How do you spot bullshit? In your own data and in others’, with lots of examples from the media.
The main theme of the course is signal vs. noise. That’s what stats is, from what I’ve been able to figure out: how do you differentiate the “signal” of a real correlation from the “noise” of simple chance? In fact, that’s how most inferential statistical techniques, such as t-tests and ANOVA, are set up: literally as a ratio of signal to noise.
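As a rough sketch of that ratio (the sample numbers are made up purely for illustration), a one-sample t statistic really is a signal term divided by a noise term:

```python
import statistics

# The t statistic written out as signal over noise:
#   signal = how far the sample mean sits from the null value
#   noise  = the standard error we'd expect from chance alone
# (Made-up sample data, just to show the structure of the ratio.)

sample = [5.1, 4.8, 5.5, 5.0, 5.3, 4.9, 5.4, 5.2]
mu_null = 5.0  # hypothesized population mean

n = len(sample)
signal = statistics.mean(sample) - mu_null
noise = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
t = signal / noise
print(round(t, 2))  # ≈ 1.73
```

The bigger the t, the harder it is to explain the result as noise alone; the whole test is just a formalization of that intuition.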
I would use the excellent Statistics for the Behavioral Sciences by Gravetter and Wallnau, though I may opt for a less traditional textbook such as Head First Statistics.
The great thing about using the textbook by Gravetter and Wallnau is that it’s organized in a fashion that lends itself to questions. For example, the first few chapters assume that you have knowledge about the underlying population. After those chapters, I could organize the lectures around removing information we thought we had, for example: “What do you do if you don’t have the underlying population mean?” And thus help organize the various statistical techniques in students’ heads.
Why the emphasis on questions? The main reason is to serve the god of salience. People remember things that are salient and important. Questions are an easy way to make things salient and to help organize incoming information. I would also try to use as many relevant examples as I could, including data from the class itself.
Taking a page from the teaching book of the wonderful John Vervaeke, I would start with folk intuitions and show the many kinds of cognitive biases we fall prey to. For example, why bother with advanced regression techniques when you can just show the data in a scatterplot? Because you can change the scale of the scatterplot, fool our intuitions, and make it look like there’s no relationship when there is one, or vice versa. I would show some examples of both.
In the first class, each student would be asked to come up with a research question that they themselves are interested in and that could be answered with numbers. It doesn’t have to be related to any particular field; it can simply be a question they would like to know the answer to. As the class goes on, the various methods needed to find answers to those questions would be covered. This gives each student a personal stake in learning the methods needed to answer the questions they themselves generated. This again serves the god of salience. Examples could include figuring out sports stats, such as whether a player is actually in a slump after recovering from an injury, or whether video games cause violence.
The other thing I would do, which I consider extremely important, is try to give students an intuition for how various things work. I would do that by including little interactive programs that allow students to mess with the numbers. There are a few on this page, along with other good resources for teaching. I don’t have a source handy for this, but I recall reading an anecdote about a young woman who felt she was never good at math, until she was given a problem that allowed her to play with the numbers and see the results. She apparently learned the concept of the slope of a line very quickly. You could do this on paper, but the variations take much longer there than the seconds they take on a computer.
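A minimal sketch of the kind of “play with the numbers” toy I mean, here for the slope of a line (the example numbers are my own):

```python
# A tiny "mess with the numbers" toy for slope: change m and b and
# watch how the y values respond. Seconds per variation on a computer,
# versus minutes of re-plotting on paper.

def line(m, b, xs):
    """y = m*x + b evaluated at each x."""
    return [m * x + b for x in xs]

xs = [0, 1, 2, 3, 4]
print(line(1, 0, xs))   # slope 1:  [0, 1, 2, 3, 4]
print(line(2, 0, xs))   # slope 2 rises twice as fast:  [0, 2, 4, 6, 8]
print(line(-1, 4, xs))  # negative slope falls:  [4, 3, 2, 1, 0]
```

The point isn’t the code itself; it’s the feedback loop. A student who can try ten slopes in a minute builds the intuition that a single worked example never delivers.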
I would also include some of those jokes and stories, because those make things salient to human beings. And I would try to teach about the development of statistics as a discipline, because those stories are sometimes fascinating and help you realize that real human beings came up with these techniques and faced real challenges along the way.
I would also improve my own performance by keeping track of some students in the class and seeing how they perform in a practical application of the material after the course. Many disciplines have terrible records here: students of, say, physics who’ve done well in their introductory course often can’t work through a counter-intuitive example of conservation of momentum. So I would try to ensure that what I’m teaching sticks, and find ways to make it stick. This sort of feedback allows the system to improve beyond just student ratings of the course or teacher.
*sigh* I think I’m done.
PS Here are some faulty heuristics/cognitive biases:
People estimate the likelihood of a sample based on how closely it resembles the population. (If you are randomly sampling sequences of 6 births in a hospital, where B represents a male birth and G a female birth; BGGBGG is believed to be a more likely outcome than BBBBBG.) Use of this heuristic also leads people to judge small samples to be as likely as large ones to represent the same population. (70% Heads is believed to be just as likely an outcome for 1000 tosses as for 10 tosses of a fair coin.)
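This one is easy to check exactly. Under the binomial distribution, at least 70% heads is a quite plausible outcome in 10 tosses and a virtually impossible one in 1000 (my own quick check, not from the paper):

```python
import math

# Exact check of the "70% heads" intuition: the chance of at least
# 70% heads is modest for 10 tosses of a fair coin, but vanishingly
# small for 1000 tosses. Sample size matters enormously.

def prob_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

print(prob_at_least(7, 10))      # 0.171875 -- about 1 in 6
print(prob_at_least(700, 1000))  # astronomically small, far below 1e-30
```

So judging a small sample to be as representative as a large one isn’t a small error; it’s off by dozens of orders of magnitude.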
Use of the representative heuristic leads to the view that chance is a self-correcting process. After observing a long run of heads, most people believe that now a tail is ‘due’ because the occurrence of a tail will result in a more representative sequence than the occurrence of another head.
People ignore the relative sizes of population subgroups when judging the likelihood of contingent events involving the subgroups. For example, when asked the probability of a hypothetical student taking history (or economics), when the overall proportions of students in these courses are 0.70 and 0.30 respectively, people ignore these base rates and instead rely on information provided about the student’s personality to determine which course is more likely to be chosen by that student.
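Bayes’ rule shows what respecting the base rates actually looks like. With made-up likelihoods in which the personality sketch is three times as typical of an economics student, the 70/30 base rate still pulls the answer back to nearly a coin flip:

```python
# Base rates handled properly with Bayes' rule, using made-up numbers:
# 70% of students take history, 30% economics. Suppose the personality
# sketch is three times as likely for an economics type (the likelihood
# values below are assumptions for illustration). People tend to answer
# "economics" with near certainty; Bayes says it's barely better than even.

p_history, p_econ = 0.70, 0.30
likelihood_given_history = 0.10  # assumed P(sketch | history student)
likelihood_given_econ = 0.30     # assumed P(sketch | economics student)

evidence = (p_history * likelihood_given_history
            + p_econ * likelihood_given_econ)
posterior_econ = p_econ * likelihood_given_econ / evidence
print(posterior_econ)  # 0.5625
```

The personality evidence does shift the odds, but the base rate drags them most of the way back, which is precisely the step people skip.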
Strength of association is used as a basis for judging how likely an event will occur. (E.g., estimating the divorce rate in your community by recalling the divorces of people you know, or estimating the risk of a heart attack among middle-aged people by counting the number of middle-aged acquaintances who have had heart attacks.) As a result, people’s probability estimates for an event are based on how easily examples of that event are recalled.
The conjunction of two correlated events is judged to be more likely than either of the events themselves. For example, a description is given of a 31-year-old woman named Linda who is single, outspoken, and very bright. She is described as a former philosophy major who is deeply concerned with issues of discrimination and social justice. When asked which of two statements is more likely, fewer pick A (Linda is a bank teller) than B (Linda is a bank teller active in the feminist movement), even though A is more likely than B.
A related theory of recent interest is the idea of the outcome orientation (Konold, 1989a). According to this theory, people use a model of probability that leads them to make yes-or-no decisions about single events rather than looking at a series of events. For example: a weather forecaster predicts the chance of rain to be 70% for 10 days. On 7 of those 10 days it actually rained. How good were his forecasts? Many students will say that the forecaster did not do such a good job, because it should have rained on all days on which he gave a 70% chance of rain. They appear to focus on outcomes of single events rather than being able to look at a series of events: 70% chance of rain means that it should rain. Similarly, a forecast of 30% rain would mean it will not rain, and a 50% chance of rain is interpreted as meaning that you cannot tell either way. The power of this notion is evident in the college student who, on the verge of giving up, made this otherwise perplexing statement: ‘I don’t believe in probability; because even if there is a 20% chance of rain, it could still happen’ (Falk & Konold, 1992, p. 155).
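In fact, the forecaster in the example did about as well as possible: if “70% chance” is taken at face value, 7 rainy days out of 10 is exactly the most likely count under the binomial distribution (my own quick check, not from the paper):

```python
import math

# If "70% chance of rain" is a calibrated probability, then over 10
# such days, 7 rainy days is precisely the most likely outcome -- the
# forecaster in the example did about as well as a forecaster can.

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

probs = [binom_pmf(k, 10, 0.7) for k in range(11)]
best = max(range(11), key=lambda k: probs[k])
print(best, round(probs[best], 3))  # 7 is the modal count, with probability ≈ 0.267
```

A student who runs this sees the series-of-events view directly: a 70% forecast doesn’t promise rain on any single day, it promises that rainy days cluster around 7 in 10.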
From this excellent paper on how to teach stats better (PDF warning).