Hovercrafts and Aunties: Learning Statistics as a Foreign Language

To many who are not members of our Craft, and even some who are, Statistics is something of a Foreign Language, difficult to grasp without a good understanding of its grammar, or at least a whole swag of useful rules.

Stats is also difficult to teach; note our students’ looks of bored angst when we try to explain p values.

So could we teach Stats like a foreign language?

For starters, why don’t we teach statistical ‘tourists’/’travellers’/’consumers’ some useful ‘phrases’ they can actually use, like how to read Excel files into a stats package, how to do a box plot, check for odd values, do some basic recodes and so on?

Such things rarely appear in texts. Instead, we tumble about teaching the statistical equivalent of ‘the pen of my aunt is on the table’ or ‘my hovercraft is full of eels’ (Monty Python), or ‘a wolverine is eating my leg’ (Tim Cahill).

For example, as well as assuming that the data are all clean and ready to go, why do stats books persist in showing how to read in a list of 10 or so numbers, rather than reading in an actual file?
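To take the phrasebook idea literally, here is a minimal sketch of a few such ‘phrases’ in R (the file name, variable names and cut-offs below are made up for the example; readxl is just one of several packages that can read Excel files):

library(readxl)                                  # one way (of several) to read Excel files into R

surv <- read_excel("survey.xlsx")                # read an actual file, not 10 typed-in numbers
summary(surv$age)                                # quick scan for odd values (e.g. age = 999)
boxplot(surv$age, main = "Age")                  # box plot to eyeball outliers
surv$age[surv$age == 999] <- NA                  # basic recode: treat 999 as missing
surv$agegrp <- ifelse(surv$age >= 65, "65+", "under 65")   # another basic recode

Half a dozen ‘phrases’ like these will get a statistical traveller a surprisingly long way.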

Just as human languages may or may not share universal concepts, the same may apply to stats packages. The objects of R, for example, are very succinct in conception, but very difficult to explain.

Such apparent lack of universality may be why English borrows words like ‘gourmand’ (to cite from my own book chapter), as English doesn’t otherwise have a word for a person who eats for pleasure. Similarly, courgette/zucchini sounds better than baby marrow (and have you ever seen how big they can actually grow?).

Yet it’s a two-way street, with English providing words to other languages, such as ‘weekend’.

According to the old Sapir-Whorf hypothesis, language precedes or at least shapes thought (but see John McWhorter’s 2014 book The Language Hoax), so if there’s no word for something, it’s supposedly hard to think about it.

In stats package terms, instructors would have to somehow explain that it is very easy to extract and store, say, correlation values in R for further processing (putting smiley faces beside the large ones, and so on). But in SPSS and SAS we would normally have to use OMS/ODS, and think in terms of capturing information that would otherwise be displayed on a line printer, a concept that is difficult to explain to anyone under 45 or so!
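For instance, here is a rough sketch of the R side of that comparison, using R’s built-in mtcars data (the smiley cut-off of |r| > 0.8 is entirely arbitrary); in SPSS the equivalent would mean routing the CORRELATIONS output through OMS into a new dataset first:

# correlations in R are just an object, ready for further processing
r <- cor(mtcars[, c("mpg", "disp", "hp", "wt")])   # correlation matrix, stored

big <- abs(r) > 0.8 & row(r) != col(r)             # flag the large off-diagonal values
flagged <- matrix(paste0(round(r, 2), ifelse(big, " :)", "")),
                  nrow = nrow(r), dimnames = dimnames(r))
print(flagged, quote = FALSE)                      # smiley faces beside the big correlations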

Although there are many great books on learning stats packages (something for a later post), and although I myself can ‘speak’ SPSS almost like a native after 33 years, I only know a few words of other human languages (and, funnily enough, only a few “words” of R).

If you’ll excuse me, my aunt and her pen are now going for a ride on a hovercraft.
(I hope there are no eels!)

REFS

Sapir-Whorf Hypothesis

https://www.princeton.edu/~achaney/tmve/wiki100k/docs/Sapir%E2%80%93Whorf_hypothesis.html

Counter to the Sapir-Whorf Hypothesis

http://global.oup.com/academic/product/the-language-hoax-9780199361588

Hovercraft, Gourmands and Stats Packages

McKenzie D (2013) Chapter 14: ‘Statistics and the Computer’ in
http://www.elsevierhealth.com.au/epidemiology-and-public-health/vital-statistics-paperbound/9780729541497/

 

“AIC/BIC not p”: comparing means using information criteria

A basic principle in Science is that of parsimony, or reducing complexity where possible, as typified in the application of Occam’s Razor.

William of Occam (or Ockham), a philosopher friar named after the English village he came from, said something to the effect of ‘pluralitas non est ponenda sine necessitate’ (‘plurality should not be posited without necessity’). In other words, don’t increase, beyond what is necessary, the number of entities needed to explain something.

Occam’s Razor doesn’t necessarily mean that ‘less is always better’; it merely suggests that more complex models shouldn’t be used unless they are required, to increase model performance, for example. As the saying commonly, but probably mistakenly, attributed to Albert Einstein has it, ‘everything should be made as simple as possible, but not simpler’.

 

Common methods of measuring performance or ‘bang’, taking into account the cost, complexity or ‘buck’, are the Akaike Information Criterion (AIC), Bayesian or Schwarz Information Criterion (BIC), Minimum Message Length (MML) and Minimum Description Length (MDL).

Unlike the standard AIC, the latter three techniques take sample size into account, while MDL and MML also take the precision of the model estimates into account; but let’s just stick to the comparatively simpler AIC and BIC here.
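(For the record: for a model with k estimated parameters and maximised likelihood L, fitted to n observations, AIC = 2k - 2ln(L) and BIC = k ln(n) - 2ln(L). The two differ only in the penalty term, and the ln(n) factor is precisely where BIC lets the sample size in.)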

An excellent new book by Thom Baguley, ‘Serious Stats’ (serious in this case meaning powerful rather than scary), http://seriousstats.wordpress.com/ shows how to do a t-test using AIC/BIC in SPSS and R.

I’ll do it here using Stata regression, the idea being to compare a null model (e.g. just the constant) with a model including the group. In this case we’re looking at the difference in headroom between American and ‘Foreign’ cars in 1978 (well, it’s Thursday night!).

Here are the t-test results:

(1978 Automobile Data)

Group      |  Obs       Mean   Std. Err.   Std. Dev.   [95% Conf. Interval]
-----------+---------------------------------------------------------------
Domestic   |   52   3.153846   0.1269928   0.9157578    2.898898    3.408795
Foreign    |   22   2.613636   0.1036760   0.4862837    2.398030    2.829242

 

Domestic has a slightly bigger mean headroom (but also larger variation!). The p value is 0.011, indicating that the probability of getting a difference in means as large as, or larger than, the one above (0.540), IF the null hypothesis that the population means are actually identical holds, is around 1 in 100.

 

Using the method shown in Dr Thom’s book (Stata implementation on my Downloadables page) we get:

 

Akaike’s information criterion and Bayesian information criterion

Model        |  Obs    ll(null)   ll(model)   df        AIC        BIC
-------------+---------------------------------------------------------
nullmodel    |   74   -92.12213   -92.12213    1   186.2443   188.5483
groupmodel   |   74   -92.12213   -88.78075    2   181.5615   186.1696

 

AIC and BIC values are lower for the model including group, suggesting in this case that increasing complexity (the two groups) also commensurately increases performance (i.e. we need to take into account the two group means for US and non-US cars, rather than assuming there’s just one common mean, or universal headroom).
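For anyone without Stata handy, the same null-versus-group recipe takes only a few lines in R; here is a rough sketch using R’s built-in mtcars data (comparing mean mpg across transmission types, since the 1978 auto data ships with Stata rather than R):

nullmodel  <- lm(mpg ~ 1, data = mtcars)            # one common mean
groupmodel <- lm(mpg ~ factor(am), data = mtcars)   # a separate mean per group
AIC(nullmodel, groupmodel)                          # lower is better
BIC(nullmodel, groupmodel)

As with the Stata output, the model with the lower AIC and BIC is preferred.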

Of course, things get a little more complex when comparing several means, when variances differ, and so on (as in the example above, although the means are still “significantly” different when the difference in variances is taken into account using a separate variance t-test). Something to think about; more information on applying AIC/BIC to a variety of statistical methods can be found in the references below, particularly 3 and 5.

 

Further Reading (refs 2, 3, 4 and 5 are the most approachable, with Thom Baguley’s book, referred to above, more approachable still)

      1. Akaike, H., A new look at the statistical model identification. IEEE Transactions on Automatic Control, 1974. 19: p. 716-723.
      2. Anderson, D.R., Model based inference in the life sciences: a primer on evidence. 2007, New York: Springer.
      3. Dayton, C.M., Information criteria for the paired-comparisons problem. American Statistician, 1998. 52: p. 144-151.
      4. Forsyth, R.S., D.D. Clarke, and R.L. Wright, Overfitting revisited: an information-theoretic approach to simplifying discrimination trees. Journal of Experimental & Theoretical Artificial Intelligence, 1994. 6: p. 289-302.
      5. Sakamoto, Y., M. Ishiguro, and G. Kitagawa, Akaike information criterion statistics. 1986, Dordrecht: D. Reidel.
      6. Schwarz, G., Estimating the dimension of a model. Annals of Statistics, 1978. 6: p. 461-464.
      7. Wallace, C.S., Statistical and inductive inference by minimum message length. 2005, New York: Springer.
      8. Wallace, C.S. and D.M. Boulton, An information measure for classification. Computer Journal, 1968. 11: p. 185-194.

 


Electric Stats: PSPP and SPSS

Most people use computer stats packages when they want to perform statistical or data analysis. One of the most popular packages, particularly in psychology and physiotherapy, is SPSS, now known as IBM SPSS. Although there is room for growth in some areas, such as ‘robust regression’ (regression for handling data that may not follow the usual assumptions), IBM SPSS has many jazzy features and options, such as decision trees, neural nets and Monte Carlo simulation, as well as all the old faves like ANOVA, t-tests and chi-square.

I love SPSS and have been using it since 1981, back when SPSS analyses had to be submitted to run after 11 pm (23:00) so as not to hog the ‘mainframe’ computer resources. Alas, as with Minitab, SAS, Stata and others, SPSS can be expensive if you’re not a student or academic. An open source alternative that is free as in sarsaparilla and free as in speech is GNU PSPP, which has nothing whatsoever to do with IBM or the former SPSS Inc.

PSPP has a syntax or command-line / program interface for old-school users such as myself, *and* a snazzy GUI or Graphical User Interface. Currently it doesn’t have all the features that 1981 SPSS had (e.g. two-way ANOVA), let alone the more recent ones, although it does have logistic regression for binary outcomes such as depressed / non-depressed. PSPP is easy to use (easier than open source R, and perhaps even R Commander, although nowhere near as powerful).

PSPP can handle most basic analyses, and is great for beginners and for those working at a site where SPSS is not installed but who need to run basic analyses or test syntax. The PSPP team is to be congratulated!

http://www.gnu.org/software/pspp/   free, open-source PSPP

http://www-01.ibm.com/software/analytics/spss/  IBM SPSS

(students and academics can obtain less expensive versions of IBM SPSS from http://onthehub.com)