Our previous articles (part 1 and part 2) have looked at how problem solving is used in STEM subjects and the skills that can be transferred to other subjects to improve learning.
In part 3 we look at how debugging can be used in other lessons, as well as posing questions that we as teachers should all be thinking about…
Debugging – evaluation and testing in action
In computing, more often than not, a problem is not solved first time; it is through debugging that improved solutions are built. Seymour Papert, the ‘father of computational thinking’, and later Klahr and Carver, argued that the strategies pupils apply when debugging can transfer, helping them deal with mistakes more positively across the rest of the curriculum.
Some evidence points to debugging being a more powerful and transferable learning experience than programming alone. CAS’s Miles Berry has written about debugging as encouraging a ‘growth mindset’ in students, supporting success across the curriculum.
Debugging is not a unique activity in schools: evaluating actions and outcomes is key in all STEM disciplines, and it is something students often find difficult. Might a common approach between departments help them?
Scientists evaluate data and the conclusions drawn from it, reflecting on their procedure and the weight of evidence. In design and technology, evaluation comes through the use of products and the collection of user feedback, something that applies in computing projects too.
In computing, a deeper evaluation of algorithmic solutions can answer key questions and strengthen thinking skills. Is this the best and most efficient way to solve the problem? If not, what could be improved? Again, the key question, one that features in many ‘metacognitive’ approaches to learning, is: how do we know if we have succeeded? Success criteria are vital in each STEM subject, whether it is testing a hypothesis, functional testing, or simply checking your working.
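To make that concrete, here is a minimal sketch (in Python, with invented values) of how success criteria might be made explicit in a simple classroom programming exercise, so that pupils can see at a glance whether they have succeeded and what still needs debugging:

    # Success criteria written down before the code, as simple tests
    def average(values):
        """Return the mean of a list of numbers."""
        return sum(values) / len(values)

    assert average([4, 6]) == 5          # a typical case
    assert average([7]) == 7             # a single value
    assert average([1, 2, 3, 4]) == 2.5  # a non-whole-number answer
    print("All success criteria met")

If any of the checks fails, the pupil knows exactly which criterion has not yet been met, turning ‘have I succeeded?’ into a question with a testable answer.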
Another powerful, transferable concept is the trade-off. This involves a hard-headed assessment of what is needed and to what extent it should be pursued. In science, how many measurements of growing cress seeds are needed to get reliable results? How can materials cost be offset against product value in design and technology? What degree of accuracy is needed in mathematics? These questions are all driven by weighing cost against benefit, just as algorithms are evaluated in computer science, where the cost is measured in time or computing power.
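As an illustration of that cost in time, here is a short, hypothetical Python sketch comparing two ways of finding a name in a sorted list, counting the comparisons each needs (the names and list size are invented for the example):

    def linear_search(items, target):
        comparisons = 0
        for item in items:          # check each item in turn
            comparisons += 1
            if item == target:
                break
        return comparisons

    def binary_search(items, target):
        comparisons = 0
        low, high = 0, len(items) - 1
        while low <= high:          # repeatedly halve the search range
            mid = (low + high) // 2
            comparisons += 1
            if items[mid] == target:
                break
            elif items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return comparisons

    names = sorted(f"pupil{i:04d}" for i in range(10_000))
    target = names[-1]  # the worst case for a linear search

    print("Linear search comparisons:", linear_search(names, target))   # 10000
    print("Binary search comparisons:", binary_search(names, target))   # 14

The faster algorithm costs more effort to write and to get right, and for a short list the simpler one is perfectly adequate: deciding which to use is exactly the kind of cost-versus-benefit judgement described above.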
While there are few easy answers, I hope this short series of articles has given food for thought.
Can you commit part of a training day to working this through with colleagues in mathematics, science and design and technology? Can your thoughts be distilled into an ‘elevator pitch’ that sparks their interest in cross-departmental ways of working?
It is doable and worthwhile, so what’s stopping you?