Once you have identified the big ideas, seminal works, major concepts and habits of mind within your subject you need to define what excellence looks like.
If you want excellence you must define it in your own mind and in the minds of your students. This is the work done by the designer or architect of the learning at an early stage in the planning process. These are the criteria by which you will judge whether excellence has been achieved. The description of excellence develops with the increasing maturity and stage of learning of the student. For example, in looking at how writers elicit engagement and a response from their readers, there are a number of literary devices that can be used. These devices become increasingly complex and varied as a student's capability within English or a language develops.
The following YouTube clip is called Austin's Butterfly. Be warned: you've got to like your excellence with lots of cheese. If you haven't seen it before it's well worth watching. It shows the power of clear feedback and a willingness to keep going until excellence has been achieved.
Note the idea of breaking the feedback into two parts: the shape of the wings and then the colouring. This is important in two respects: first, both crucial elements required to produce an excellent scientific drawing of the butterfly are identified; second, it allows step-by-step feedback to be given and acted on without Austin becoming overwhelmed.
I worry that we too often settle for mediocrity from our students.
Writing Great Success Criteria
Writing great success criteria, which clarify what excellence requires of the student's outcome, can be difficult:
- First, you have to be clear in your own mind what excellence looks like
- Second, you have to communicate this to the students with absolute clarity
If you want clarity in the classroom you need it in the staff room first.
Last year I was working with some staff looking at a particular example of a success criterion. It was related to learning about the different types of religious orders and why people choose one over the other.
The starting success criterion was:
“State the type of religious order joined e.g. apostolic or contemplative”
My question was very simple: "If I do that, do I get an A*?" Success criteria must be challenging and direct students towards excellence. I just kept asking the same question and adding to the success criteria until it was made clear to students what excellence would look like in their learning.
We ended up with:
“Compare and contrast apostolic and contemplative orders explaining why people join one based on personal preference, scriptural quotes, chosen purpose and challenges of life.”
Now I know what I need to do to get an A*. Success as defined by the initial and revised criteria is very different.
Success criteria need to be:
Specific – It’s important to be clear about the elements that are required for excellence – the “perfect solution” in Mathematics or inclusion of “personal preference, scriptural quotes, chosen purpose and challenges of life.” Clarity comes in part through specificity.
Extensive – This is linked to specificity but requires all the main elements of the excellent answer to be included. The issue of balance is raised here, as students won't necessarily be helped by a long tick list: what are the main elements that are the key to excellence?
Challenging – Keep asking yourself, “Would this produce an A* answer?”
Once we have determined what excellence looks like, the next stage is considering how this will be evidenced.
Gathering the Evidence
The process of gathering evidence goes to the core of our assessment systems both formative and summative. There are two key points in learning – the starting point and the end point. I think we are often pretty poor at assessing starting points.
If we do not assess a student’s starting point we have the potential to create two different but significant problems for ourselves and our students:
- Students do not possess the prior learning required to access the new learning. They cannot connect the new learning to prior learning and fall into a learning black hole.
- Students already know the new learning or substantial elements of it. The work is too easy and time is wasted teaching them what they already know.
The Post Levels World of Assessment
There is an element of freedom in our post-levels assessment world. We must use this to reconnect assessment to the learning and the learner rather than to data production for leaders. If you decide to adopt the SOLO Taxonomy as a theory of learning for use in your classroom, then it makes sense to use it as the basis for assessing that learning.
Assessing students’ learning in this way and then recording it gives a teacher and her/his students really useful information about what has been learnt and next steps.
Looking at your current mark book, can you determine what students actually know and understand?
Do you know what each student’s next step is?
Assessment needs to be constructed around the key milestones in the learning. Think about a series of hinge point questions that would act as milestones or markers on a particular part of the learning journey:
Have students grasped the key facts – Uni- & Multi-structural Level?
Have students connected these facts into a more coherent idea(s) – Relational Level?
Have students connected together ideas to form a coherent concept or schema which they can now apply – Extended Abstract Level?
Using the spreadsheet below to record data (imagine Concept 1 is about expansion of solids): it looks like Debbie, Amjad, Damian & Dan understand the concept and can apply it; Mary, Ross, Stephen and Mel have a grasp of the idea that when a solid expands the particles move further apart but cannot yet apply their understanding; whereas Mark and John know that solids expand when heated but can't yet explain why.
Looking across the various concepts it is clear Stephen is struggling. Is greater intervention required by the teacher to support his learning?
If you now look at Concept 3 (think Particle Theory) then it is clear most of the students' learning is at an early stage. Is there a need for the re-teaching of this particular topic? It may also require a bit of reflection from the teacher about how it was originally taught. It's easy to become defensive about this and see it as a criticism, but try "being fascinated" instead. It opens up a world of possibilities.
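A mark book like the one described can be modelled very simply. The sketch below is illustrative only, assuming a hypothetical set of students and concepts in the spirit of the example above; the SOLO levels and the "flag for intervention" rule are my own assumptions, not a prescribed method.

```python
# Hypothetical mark book: each student's recorded SOLO level per concept.
SOLO = ["Pre-structural", "Uni-structural", "Multi-structural",
        "Relational", "Extended Abstract"]

mark_book = {
    "Debbie":  {"Concept 1": "Extended Abstract", "Concept 3": "Uni-structural"},
    "Mark":    {"Concept 1": "Multi-structural",  "Concept 3": "Uni-structural"},
    "Stephen": {"Concept 1": "Multi-structural",  "Concept 3": "Pre-structural"},
}

def needs_support(student, threshold="Relational"):
    """Flag a student if most of their recorded levels fall below the threshold.

    The majority rule here is an assumption for illustration; a teacher
    would apply professional judgement rather than a fixed cut-off.
    """
    levels = mark_book[student].values()
    below = sum(SOLO.index(lvl) < SOLO.index(threshold) for lvl in levels)
    return below > len(levels) / 2

for student in mark_book:
    if needs_support(student):
        print(f"{student}: consider targeted intervention")
```

Scanning down a concept's column in the same structure would answer the re-teaching question: if most students sit at the Uni- or Multi-structural level for Concept 3, the topic needs revisiting with the whole class.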
Where Are You & How Are You Going?
Imagine you are a pilot. You know where you are taking off from and where you intend to land. During the journey you are continually assessing where you are and making any necessary adjustments. It's the same on the learning journey. You need to keep assessing where the students are up to and how they are going. As John Hattie would say, "Make the Learning Visible."
There are a whole series of absolutely brilliant examples of how this could be done from the University of York Science Education Group. The example below has been taken from it and used here with the kind permission of Robin Millar and Mary Whitehouse (@MaryUYSEG).
The question is focussed on a key scientific concept about how we see. It includes a common misconception students have about the source of light and its direction of travel. At a glance a teacher can see what a student is thinking and how secure their thinking is. I will look at the potential for using confidence grids as part of a classroom assessment routine in the final post of this series: Pedagogy & Practice. The important thing to grasp at this point is: plan the assessments early in the planning process. They define the milestones and end points of each stage of the learning journey.
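The logic of a confidence grid can be sketched in a few lines. This is a minimal illustration of the general idea, not the York Science materials themselves: the question, answer options, student names and category labels below are all hypothetical.

```python
# Illustrative confidence-grid item: students choose an answer AND say how
# sure they are, so a teacher can separate secure understanding from guessing.
CORRECT = "Light travels from the lamp to the book to the eye"

responses = [
    ("Ava",  CORRECT, "I'm sure I'm right"),
    ("Ben",  CORRECT, "I think I'm right"),
    ("Cara", "Light travels from the eye to the book", "I'm sure I'm right"),
]

def classify(answer, confidence):
    """Combine correctness and confidence into a next-step category (assumed labels)."""
    right = answer == CORRECT
    sure = confidence.startswith("I'm sure")
    if right and sure:
        return "secure understanding"
    if right:
        return "right but unsure - reinforce"
    if sure:
        return "confident misconception - address directly"
    return "unsure and wrong - re-teach"

for name, answer, confidence in responses:
    print(name, "->", classify(answer, confidence))
```

The point of the extra confidence dimension is visible in Cara's row: a confidently held wrong answer signals a misconception to tackle head-on, which a simple right/wrong mark would miss.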
Whilst this is focussed on Science I think the principle cuts across all subjects. It has certainly inspired some new thinking about writing schemes of learning at St. Mary’s Catholic Academy:
Mary Whitehouse (2014) Using a Backward Design Approach to Embed Assessment in Teaching
Teaching to the Test
The expression "teaching to the test" is often used as shorthand for everything that is wrong with education today: the curriculum is too narrow, the extreme level of accountability has become perverse, or we are teaching students to jump through hoops rather than to love and cherish our subjects. Assessment drives the curriculum. I don't have a problem with this so long as what we are assessing, and how we are assessing it, is of significant value in our subject or discipline.
The issue isn't that teaching to the test is wrong per se; the problem is when the tests aren't worth teaching to.
Any end of topic test has to be carefully constructed to assess whether students know the basic facts taught within the topic, on which the deeper learning is built. The test should then go on to assess whether students have linked the facts to create ideas and then connected these to form the concepts at the heart of the topic. These end of topic tests should be formative in nature. They give feedback to the teacher and students about who has learnt what and what will need re-teaching.
Summative assessment to assign a grade to a student is different and some key principles are covered in the resource which can be downloaded using the link at the bottom of the post. This doesn’t preclude summative assessment also being used for formative purposes.
There is a growing body of evidence that testing helps learning. It links to the revision and revisiting of information that assists automaticity of recall. This recalled learning can then be used as a basis for future learning.
This resource contains the following information:
A PDF copy is available to download here:
Thanks to a few people for their thoughts and feedback on some early drafts of this section:
The other posts and booklets in the DIY Teaching CPD Series are:
DIY Teaching CPD: Structure & Sequence
DIY Teaching CPD: Pedagogy & Practice
Bambrick-Santoyo, P. (2012) Leverage Leadership. Jossey-Bass (Kindle Edition)
Hattie, J. (2012) Visible Learning for Teachers. London: Routledge
Hattie, J. & Yates, G. (2014) Visible Learning & the Science of How We Learn. London: Routledge
Millar, R. & Whitehouse, M. (2014) York Science: Developing Formative Assessment in Science. A presentation to the ASE.
Oates, T. (2013) "Using International Comparisons to Refine the National Curriculum". A speech by Tim Oates, Group Director of Assessment Research and Development, Cambridge Assessment, to the Mayor's Education Conference, November 2013