Ronseal Assessment Part 2

Assessment that ‘does what it says on the tin’

The first five questions can be found in Ronseal Assessment Part 1.

6. Are you assessing whether learning has actually been learned?

There is a difference between ‘learning’ and what is actually ‘learned.’ 

A 2013 OECD study, reported here, ranked England 22/24 in numeracy and literacy and found that England was the only country in which 55-65 year olds performed better than 16-24 year olds.  Research on The levels of attainment in literacy and numeracy of 13-19 year olds in England, 1948-2009 found that 17% of 16-19 year olds have poor literacy and 22% have poor numeracy.  For these students, ‘learning’ has not translated successfully into ‘learned’. 

Tim Oates points out that “each and every child [should be]... able to understand all of the curriculum...all children get access to all elements.”  He argues that, “a distinctive feature of high-performing systems is a radically different approach to pupil progression... crude categorisation of pupil abilities and attainment is eschewed in favour of encouraging all pupils to achieve adequate understanding before moving on.” 

Stephen Tierney writes here that an assessment system should, “Help close the gap for each student between current and expected learning as well as closing the gap between different groups of students,” while Daisy Christodoulou points out here that, “There are pupils out there who are really struggling with basic skills.  Flawed assessment procedures are hiding that fact from them and their teachers, and therefore stopping them getting the help they need to improve.  Worse, the kinds of assessment processes which would help to identify these problems are being completely ignored.”   

How will you know that every student has “deep secure learning of key constructs” [Oates]?  

In Is it possible to get assessment right?, David Didau refers to the use of Mastery Pathways, in which foundational skills and knowledge are taught and repeated until students have mastered them.  As David points out, “This provides real clarity over what students know and don’t know at a given point in time and it can be used to identify students who don’t make progress.” 
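One way to picture the ‘real clarity’ David describes is as a simple record of each student’s latest result against a list of foundational objectives, with anyone below a mastery threshold flagged for reteaching before the class moves on.  The sketch below is a minimal illustration only: the objective names, scores and 80% threshold are assumptions for the example, not part of any published Mastery Pathways scheme.

```python
# Minimal sketch of a mastery record: each student's latest quiz result per
# foundational objective, plus a report of who still needs reteaching.
# Objective names, scores and the 0.8 threshold are illustrative assumptions.

MASTERY_THRESHOLD = 0.8  # assumed proportion correct needed to count as "mastered"

objectives = ["number bonds to 20", "column addition", "column subtraction"]

# Latest result per student per objective (proportion of questions correct).
results = {
    "student_1": {"number bonds to 20": 0.95, "column addition": 0.60, "column subtraction": 0.40},
    "student_2": {"number bonds to 20": 0.85, "column addition": 0.90, "column subtraction": 0.85},
    "student_3": {"number bonds to 20": 0.70, "column addition": 0.55, "column subtraction": 0.30},
}

for student, scores in results.items():
    gaps = [obj for obj in objectives if scores.get(obj, 0.0) < MASTERY_THRESHOLD]
    if gaps:
        # These objectives need reteaching and reassessment before moving on.
        print(f"{student} has not yet mastered: {', '.join(gaps)}")
    else:
        print(f"{student} has mastered all current objectives")
```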

However, Phil Stock finds here that “in practice, a mastery approach to assessment can be time-consuming for teachers to implement and can detract from planning better lessons.”  

Strategies

  1. The benefits of mastery assessment @Pragmatic Education
  2. Ideas on incorporating mastery into the assessment framework @must do better...
  3. Use assessment to look ahead not back: create a benchmark of brilliance @Reflecting English
  4. What might mastery learning look like in History @Improving Teaching

7. Permanently?

Tim Oates criticises the “relentless transformation into high stakes” which distorts the curriculum, prioritising “undue pace” over “secure learning.”  

Are you assessing fluent, permanent learning rather than rapidity?

Nuthall points out that “as learning occurs so does forgetting.”  Research here finds that “a particularly effective way to enhance student learning is to engage in repeated, retrieval-based practice tests that are followed by restudy and that are distributed across time.”  This ‘overlearning’ (which Willingham argues should be by 20%) creates fluent, automatic understanding and transfers learning to long-term memory, allowing permanent learning.

At Michaela Community School, the focus is on students achieving permanent mastery of content; the strategies below show how.

 Strategies

  1. Find out more about Michaela’s strategies @Pragmatic Education
  2. Use Knowledge Frameworks to help students hold onto knowledge @...to the real
  3. Knowledge Frameworks in Maths @...to the real
  4. Using Knowledge Organisers in Science @Class Teaching
  5. Build a retrieval curriculum @Belmont Teach
  6. Permanent Learning @Love Learning Ideas

If you like Knowledge Organisers, James Theo has set up a Google Drive folder to share them here.

8. Is your assessment accurate?

In Measuring Up: What Educational Testing Really Tells Us, Koretz argues that, “Test scores usually do not provide a direct and complete measure of educational achievement. Rather they are incomplete measures, proxies for the more comprehensive measures that we would ideally use but that are generally unavailable to us.”

Assessment should:

  • Be reliable

  • Sample the domain that it is testing effectively

  • Allow valid inferences to be drawn 

  • Be meaningful

As Daisy Christodoulou explains here, “in order to make a valid inference about a pupil’s ability in a particular domain, you have to make sure the test adequately samples that domain.”  Tests can fall prey to construct underrepresentation when they inadequately sample the domain (for example, teaching to the test), or to the opposite danger of construct-irrelevant variance when they test beyond the domain (for example, written Maths problems assessing literacy as well as Maths ability).

These aims can be contradictory.  For example, assessing students’ ability to apply knowledge beyond context is meaningful, but introduces more construct-irrelevant variance than a more narrowly focused test.  As Daisy Christodoulou points out, this “make[s] it much harder to identify the specifics of what a pupil is good or bad at.”

She argues that, “certain kinds of test give more reliable scores.  For example, multiple choice tests often give more reliable information than performance assessments...[in comparison] marking extended writing is extremely complex and difficult to do reliably.”  

Therefore authenticity might only be achieved at the expense of reliability.  Grant Wiggins, quoted in the now sadly defunct Webs of Substance, writes that, “Authentic tests are representative challenges within a given discipline.  They are designed to emphasize realistic... complexity... In doing so, they must necessarily involve somewhat ambiguous, ill-structured tasks or problems.”  Research here found that “assessment which encourages students to think for themselves – such as essay questions, applications to new contexts, and problem-based questions – shifts students... towards a deep [learning] approach” generally related to high levels of academic achievement.  Similarly, Andy Tharby writes here that “the accuracy of assessment should play second fiddle to what we should be attending to, which, to borrow John Hattie’s metaphor, is helping students to become ‘unstuck’...surely, these decisions are where teacher expertise really lies.”

Daisy Christodoulou recommends using complex performance-based assessments “only where the relevant construct cannot be adequately tapped using other forms of assessment.”  Her advice when marking essays, because we are better at making comparative than absolute judgements, is to focus "more on comparison of essays to other essays and exemplars, and less on comparison of essays to the mark scheme" so that we have "something that sits behind the criterion, giving it meaning."

[Image: example from Daisy Christodoulou’s Using pupil work instead of criteria]
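Her point about comparative rather than absolute judgement is the basis of comparative-judgement marking: judges repeatedly pick the better of two essays, and a simple statistical model turns those pairwise decisions into a quality scale.  The sketch below is one possible illustration of that idea using a basic Bradley-Terry fit; the essay names, judgement data and iteration count are assumptions for the example, not a description of any particular comparative-judgement tool.

```python
# Minimal sketch of comparative-judgement ranking: each judgement records
# which of two essays a judge preferred; a Bradley-Terry fit converts the
# pairwise wins into a single relative-quality scale.
from collections import defaultdict
from itertools import chain

# Illustrative judgements only: (preferred essay, other essay).
judgements = [
    ("essay_A", "essay_B"),
    ("essay_A", "essay_C"),
    ("essay_B", "essay_C"),
    ("essay_C", "essay_D"),
    ("essay_B", "essay_D"),
    ("essay_A", "essay_D"),
]

essays = sorted(set(chain.from_iterable(judgements)))
wins = defaultdict(int)         # total wins per essay
pair_counts = defaultdict(int)  # number of comparisons per unordered pair
for winner, loser in judgements:
    wins[winner] += 1
    pair_counts[frozenset((winner, loser))] += 1

# Iterative Bradley-Terry fit: each essay's strength is re-estimated from
# its wins and the strengths of the essays it was compared against.
strength = {e: 1.0 for e in essays}
for _ in range(100):
    updated = {}
    for e in essays:
        denom = sum(
            n / (strength[e] + strength[other])
            for pair, n in pair_counts.items()
            if e in pair
            for other in pair - {e}
        )
        updated[e] = wins[e] / denom if denom else strength[e]
    total = sum(updated.values())
    strength = {e: s / total for e, s in updated.items()}  # normalise the scale

# Essays with no wins (essay_D here) sit at the bottom of the scale.
for essay, s in sorted(strength.items(), key=lambda kv: -kv[1]):
    print(f"{essay}: relative quality {s:.2f}")
```

The point of the sketch is only that the ranking comes from comparisons between pieces of work, not from matching each piece against a mark scheme.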

Controversially, she also suggests here that, “Teaching which focuses on past papers and test prep is not teaching to the domain. It’s teaching to the sample. The improvements it generates are not likely to be genuine.”

Phil Stock argues here that, “assessment is more robust if it draws upon a range of different forms and provides multiple opportunities for that learning to be demonstrated e.g. MCQ, essay, short answers.”

Top 20 Principles from Psychology for PreK-12 Teaching and Learning finds that reliability of assessment improves when teachers:

  • Carefully align assessments with what is taught 
  • Use a sufficient number of questions overall and variety of questions and types of questions on the same topic 
  • Use item analysis to target questions that are too hard or too easy and are not providing sufficient differentiation in knowledge (a minimal sketch of this follows the list) 
  • Are mindful that tests that are valid for one use or setting may not be valid for another 
  • Base high-stakes decisions on multiple measures instead of a single test 
  • Monitor outcomes to determine whether there are consistent discrepancies across performance or outcomes of students from different groups
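
The item-analysis bullet above is straightforward to act on from an ordinary question-level mark book.  The sketch below shows one way of flagging questions that are too easy, too hard, or failing to separate stronger from weaker students; the mark book, question names and flag thresholds are illustrative assumptions, not recommended values.

```python
# Minimal sketch of question-level item analysis from a class mark book.
# Facility = proportion of the class answering correctly; discrimination =
# gap between the stronger and weaker halves of the class on that question.

mark_book = {  # 1 = correct, 0 = incorrect (illustrative data only)
    "student_1": {"Q1": 1, "Q2": 1, "Q3": 0, "Q4": 1},
    "student_2": {"Q1": 1, "Q2": 0, "Q3": 0, "Q4": 1},
    "student_3": {"Q1": 1, "Q2": 1, "Q3": 1, "Q4": 1},
    "student_4": {"Q1": 1, "Q2": 0, "Q3": 0, "Q4": 0},
    "student_5": {"Q1": 1, "Q2": 1, "Q3": 0, "Q4": 1},
    "student_6": {"Q1": 1, "Q2": 0, "Q3": 1, "Q4": 0},
}

questions = sorted(next(iter(mark_book.values())))
totals = {s: sum(marks.values()) for s, marks in mark_book.items()}

# Split the class into stronger and weaker halves by total score.
ranked = sorted(mark_book, key=totals.get, reverse=True)
half = len(ranked) // 2
upper, lower = ranked[:half], ranked[-half:]

for q in questions:
    facility = sum(mark_book[s][q] for s in mark_book) / len(mark_book)
    discrimination = (
        sum(mark_book[s][q] for s in upper) / half
        - sum(mark_book[s][q] for s in lower) / half
    )
    flags = []
    if facility > 0.9:
        flags.append("too easy")
    if facility < 0.2:
        flags.append("too hard")
    if discrimination < 0.2:
        flags.append("weak discrimination")
    note = ", ".join(flags) if flags else "ok"
    print(f"{q}: facility {facility:.2f}, discrimination {discrimination:+.2f} ({note})")
```

The same columns feed directly into the question-level analysis discussed under question 11 below.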

And don't forget your own cognitive biases which might impede accuracy, explained in Chapter 2 of David Didau's excellent new book: What if everything you knew about education was wrong?

 Strategies

  1. 47 questions to ask of any assessment here
  2. Research reported by UK Ed Chat on the reliability of teacher assessment
  3. Using multiple choice assessment @The Wing to Heaven 
  4. Using questions instead of criteria to promote accuracy @The Wing to Heaven

9. Are you assessing authentically as well?

Hattie comments here that, “we have a very very narrow concept of excellence.”  Similarly, Harry Fletcher-Wood writes here that, “Our current measures offer high grades without recognising or expecting students to adopt [an]... altered view of the world; I’m not sure this can be described as success.”

In Knowledge and the Future School, Michael Young writes, “whatever we ask our students to do, there should be some idea in our mind that it has a ‘truthful’ outcome.”  Similarly, Martin Robinson argues in Trivium that, “schools should enable us to work towards something over time in an authentic way.”

At Northern Rocks 2015, Dame Alison Peacock described how The Wroxham School use Learning Review to allow students to assess their learning successes and challenges.

Take the opportunity to deconstruct what excellence as a subject practitioner looks like.  Michael Fordham describes this here as judging students “against the domain.”  Stephen Tierney suggests here: “start from what would excellence look like... define excellence and put it within the context of an assessment.”

Strategies

  1. Develop a deeper view of your subject @Improving Teaching
  2. Teaching KS3 English through the Trivium @Surrealanarchy
  3. Assessment to create subject practitioners @Radical History
  4. The power of true standards @Christopher Waugh
  5. Using assessment to elevate learning rather than rank students

10. Is assessment actually improving learning?

Given the 10 benefits of testing for learning, should we consider using more testing in our assessment?

Research here suggests that the following methods, all of which could be incorporated into assessment, improve learning:

  • Distributed testing forces students to think harder and works best “when the lag between sessions [is] approximately 10-20% of the desired retention interval” (so, for material that needs to be retained across a 30-week run-up to an exam, that means review sessions spaced roughly 3-6 weeks apart)
  • Interleaved testing strengthens memory retrieval
  • Elaborative interrogation enhances learning by integrating new information with prior knowledge
  • Self-explanation helps students understand processes

Strategies

  1. Build a retrieval curriculum @Belmont Teach
  2. Revisiting past content @Bodil’s Blog

11. And teaching?

To have the greatest impact, assessment should be sustainable and focused on classroom practice.  

Michael Tidd criticises the “unnecessary work” created by APP here.  As he points out, “an assessment system needs to record meaningful information (note, not data) about how students are progressing through the required learning.”

Similarly, Harry Fletcher-Wood decries “over-simplified, time-consuming junk data masquerading as assessment” here.  He argues for, “frequent, useful assessment...[of] students’ knowledge of individual concepts and ideas, and their capacity to use that knowledge... Question-level analysis in departments provides usable insights.... [from which] departments can create short-term solutions...alongside longer-term ones.”

Phil Stock suggests here that, “most assessment should primarily aim to inform the next steps, whether in the classroom or more widely across a department or year group [and] any inferences drawn from assessment should be acted upon as quickly as possible.”

In his Northern Rocks presentation, Stephen Tierney argued that assessment needs to be "designed around the learner... Find out what they don't know and teach it them."  The timing of assessment is also vital: "the teacher has to catch [the students] at that moment they can't do something."  He argues here that the current assessment system has been "designed by leaders to feed the accountability monster rather than teachers to inform the learning of children."  Part of the problem is that the 'grain size' of assessment varies across subjects, therefore "a whole school assessment policy has to be the composite, within guiding principles determined at a school level, of the individual subjects' assessment regimes." 

Stephen Tierney and Ross McGill suggest using an Achievement Plan to facilitate assessment-driven teaching.

Top 20 Principles from Psychology for PreK-12 Teaching and Learning finds that effectiveness of assessment improves when teachers:

  • Focus systematically on setting goals for their students
  • Determine whether students have met these goals
  • Consider how to improve their instruction in the future
  • Keep the length of time between assessment and subsequent interventions relatively short; this is when effects on student learning will be strongest

 Strategies

  1. Identify learning gaps quickly: What do you not understand? @Reflecting English
  2. Use a Progress Dialogue pro forma to identify learning gaps and strengths from assessment @Love Learning Ideas
  3. Use question-by-question mark book analysis.  Example here from @Leading Learner

David Didau here on assessment DON’Ts:

  • Display an ignorance of how students actually learn
  • Assume progress is linear and quantifiable, with learning divided into neat units of equal value
  • Predefine the expected rate of progress
  •  Limit students to Age Related Expectations that have more to do with the limits of the curriculum than any real understanding of cognitive development
  •  Invent baseless, arbitrary, ill-defined thresholds (secure, emerging etc.) and then claim students have met them
  • Use a RAG rating (Red, Amber, Green) based on multiples of 3 to assess these thresholds
  • Apply numbers to these thresholds to make them easy to manipulate
  • Provide an incentive to make things up

Next steps and recommendations

  1. How to design assessment without levels @Class Teaching
  2. Designing post-levels assessment from scratch @Belmont Teach
  3. Principled Assessment Design by Dylan Wiliam
Posted on June 25, 2015.