At the last Maths Conference in Kettering I presented my ideas on the link between how we assess pupils and their perception of that assessment, and how this impacts pupils' interest in maths. It's a fairly long and tortuous chain of logic, and some of the links are certainly less robust than others.

This set of blogs is my attempt to make my way through the sticky maze of theory and practice.

In this first blog I will set out the first steps of my argument.  If you find the fatal flaw, please tweet me @bettermaths before I swap the cow for a handful of magic beans!


Here is a maze drawn by a member of the Dangerous and Brilliant Maths group. They are primary-aged and a very nimble mathematician. It is a solvable maze, but I'll leave you to work it out!

It makes the point nicely I think.

My basic 'compound disinterest' argument was catalysed by two pieces of research. The first, from Plymouth University, concerns teachers' attitudes to annual testing and the effect it was having on the relationships between staff. The second was some of my own digging into the distribution of teachers' subject backgrounds in schools and in SMT/headship (but more on this later).

My argument boils down to this:

Current policy and practice in my school means that the way I teach my pupils tends to encourage them to make short term, quantifiable and measurable progress, and to perform well in end of unit and year tests, but this does not embed the habits they need to be successful mathematicians.

I find myself doing activities that are efficient in terms of meeting these goals, but not planting the seeds of maths wonder and awe I would like. This is common to each teacher across a pupil's school experience, from Year 2 to Year 11. Each year this compound disinterest does what compounds do: it compounds.
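The compounding analogy can be made concrete with a toy calculation. A minimal sketch, assuming (purely for illustration, not from any measured data) that each school year erodes a fixed fraction of a pupil's engagement:

```python
def remaining_interest(annual_loss: float, years: int) -> float:
    """Fraction of initial engagement left after `years` of compounding loss.

    The rates below are hypothetical, chosen only to show how small
    per-year losses compound over the ten years from Year 2 to Year 11.
    """
    return (1 - annual_loss) ** years


if __name__ == "__main__":
    for rate in (0.02, 0.05, 0.10):
        left = remaining_interest(rate, years=10)
        print(f"{rate:.0%} lost per year -> {left:.0%} of engagement left after 10 years")
```

Even a modest 5% annual loss leaves only around 60% of the original engagement after ten years, which is the whole force of the "compound" in compound disinterest.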


Step 1: Competitive

I think teacher assessment used to be a positive-sum game. We all looked at longer-term goals, such as performance at GCSE or A-level, getting to university or into employment, or becoming passionate students and great advocates for the school.

However, it is no longer a positive-sum game – no longer about the future prospects of the pupils, the 'we are all in this together' spirit. My assessment of my pupils this year affects their expected progress next year. Plymouth found evidence that staff were not scoring some pupils as they thought they ought to, due to pressure from the 'next teacher in the chain' to give those pupils more scope to show progress.

Competition between schools has always been evident – see these email extracts for example:

“Having better GCSE pass rate than ‘school B’ isn’t everything, it’s the only thing.”

“Good luck on open day, and remember, winners are grinners.”

“If we are simply willing to do what they won’t then we can [increase recruitment].”

What has changed now, though, are the conversations we have about pupils' performance. It's 'our' data. Have you been asked, 'What are you going to do about your data?' The accountability has changed too. We look at winning rather than playing well. The result is more important than the performance. Getting good results in the six-weekly test is the measure, not being good at maths. It's the difference between always being a good driver and merely being able to pass your driving test. And I think this affects pupils' attitude to their learning and their own self-efficacy.

Step 2: Efficient

Step 2 is much shorter and, I think, more secure. We find the most efficient route to the end point. If the end point is marked books, or feedback, or test results, or lesson plans… then we get there, efficiently. This is also how we talk to and about pupils regarding their performance. They take these performance cues from us and attempt to meet them well.