The Protégé effect
It's pretty well-accepted that teaching is a great way to learn. It's not just a nice thing that people say; it's a well-studied phenomenon called the Protégé effect.
There are a few mechanisms in play:
- Preparation: If I know I will need to teach a person a thing, then I will put effort into learning that thing. I might put more effort into learning than I usually would simply because I don't want to seem like an idiot.
- Recall: If I'm teaching a person a thing, I will need to remember the thing. This is a form of spaced repetition.
- Assessment: The person being taught is likely to have questions, and some of those questions might be unexpected. This has a high chance of dispelling illusions of competence. Low-stakes skills assessment is a very powerful teaching tool, and a teaching session often doubles as a quiz.
- Elaboration: By explaining concepts and handling unexpected questions, a teacher is likely to come up with new ways to describe them. New analogies, new angles, new lenses. This helps with something called structure building: a process of mental modeling, organization, and categorization of information. People with strong structure-building abilities do better at recalling information and applying skills in new situations.
Teaching to learn is powerful. And it's backed by science.
Applications in a code school
There are many, many ways to apply teaching to learn. I'm going to describe one mechanism that I set up quite recently. It's looking very promising.
Step 1: Get learners to rate themselves on a bunch of different skills
Step 2: Put the learners into groups. Every group should have a topic or skill to focus on and one "strong" learner acting as the teacher. The goal of the group is to make sure that everyone in the group knows their stuff by the end.
Step 3: Wait a while and set some expectations. For example, the learners should have group study sessions at least twice weekly for two weeks. Once everyone has mastered the allocated skill, they can move on to another skill.
Step 4: Get the learners to rate themselves and their peers on different skills. Calculate a final rating.
Step 5: Go back to Step 2
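Steps 1 and 2 can be sketched in code. This is a minimal illustration, not the system I actually run: the names, the rating scale, and the data shape are all hypothetical.

```python
# Sketch of Steps 1-2: each learner self-rates each skill, and the
# highest-rated learner for a topic becomes that pod's teacher.
# All names and ratings below are made up for illustration.

ratings = {
    "alice": {"loops": 4, "functions": 2},
    "bob":   {"loops": 2, "functions": 5},
    "carol": {"loops": 3, "functions": 3},
}

def pick_teacher(topic, ratings):
    """Return the learner with the highest self-rating for a topic."""
    return max(ratings, key=lambda learner: ratings[learner].get(topic, 0))

def form_pod(topic, ratings):
    """A pod is the topic's teacher plus everyone else as learners."""
    teacher = pick_teacher(topic, ratings)
    learners = [name for name in ratings if name != teacher]
    return {"topic": topic, "teacher": teacher, "learners": learners}

pod = form_pod("loops", ratings)  # alice (rating 4) teaches loops
```

In practice the "final rating" in Step 4 would blend self-ratings with peer ratings rather than trusting self-assessment alone, since self-ratings are the least reliable input.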
The group's "teacher" ends up preparing for lessons, and they field questions that they don't know the answers to. The fact that the learners stay in a group for a couple of weeks means that if the "teacher" discovers a hole in their knowledge, they have time to fix it.
All the good things happen. People's skill ratings rise over time.
Here are a few direct quotes from a few of the student-teachers in these little groups (we called them pods):
"Taking on the pod leader role has been instrumental in my personal development. While my delivery skills may not have been strong initially, this opportunity has provided a platform for me to practice and refine them. I am proud to share that I have witnessed noticeable improvement in my ability to effectively communicate my knowledge compared to just a few weeks ago."
"Explaining something to my peers allows me to solidify my own understanding of the topics at hand. Sometimes one is not really sure if they fully understand the topic they have been given but relaying it to other people makes you understand the topic further."
"I have significantly enhanced my communication and leadership skills."
"I had to do a bit of research on how to do certain things and be able to come up with examples to demonstrate how things work in that particular sub-topic. That helped me level up on loop topics. You get to learn how to communicate with ease."
"I was a little nervous at first, but as the sessions went on, I improved and the more I discussed the topics we had covered with my group, the better I understood them myself."
And it goes on and on... the feedback has been universally good.
It's also quite clear that the learners are reporting on growth outside of the technical skills that they are focused on. They confront things that scare them; they work on communication and leadership skills; they build confidence.
It's lovely to see.
The challenges
Of course, this is not a perfect, easy thing to get right. Here are a few things that need to be dealt with:
The ratings might be inaccurate
Learners who appear strong might be lacking critical skills. Personally, I don't trust the system enough to just leave learners to teach themselves and each other. It's likely that they'll miss some things; there are likely to be unknown unknowns.
The answer to this is to have some professional teachers sniff around and do spot checks. This will also tell us how effective the system really is.
A professional teacher is someone who:
- has mastered the work. In a code school, this would mean a dev who ideally has experience working as an actual developer
- has been trained as an educator. Teaching skills and technical skills are different skills. Simply expecting an experienced dev to have the skills of an experienced teacher is unrealistic
Professional teachers act as a safety net and validate the system instead of teaching everyone everything themselves.
Struggling learners might get left behind
If a learner's ratings are consistently low, they may not be benefiting from the system, or they may be dodging the teacher role because it's uncomfortable.
Again, this problem is solved by using expert teachers. These folks can spend time with those who appear to be struggling and assist or diagnose any problems.
The strong learners get all the benefits
Since the biggest beneficiary of this system is the teacher, and the teachers are the strongest learners already (or at least seem like they are), it is possible that they will keep being allocated the teacher role.
When grouping learners, it's essential to set simple rules and parameters so that different people get a shot at teaching and nobody lingers on one topic for too long. For example, you can say that a person can't be a teacher twice in a row, or that a teacher can't teach the same topic more than once in three iterations.
Initially, these rules come from gut feeling. As the data rolls in, they can be optimized.
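Rules like these are easy to encode as a simple eligibility check. Here is one hypothetical way to express the two example rules above; the data shape (a list of per-round teacher-to-topic assignments) is an assumption, not how any particular school stores this.

```python
# Rotation rules from the text: a person can't be teacher twice in a
# row, and can't teach the same topic more than once in three
# iterations. `history` is a list of dicts mapping teacher -> topic,
# one dict per round, newest last.

def can_teach(candidate, topic, history):
    """Return True if the candidate is allowed to teach this topic now."""
    if history and candidate in history[-1]:
        return False  # rule 1: taught in the previous round
    for round_assignments in history[-3:]:
        if round_assignments.get(candidate) == topic:
            return False  # rule 2: taught this topic within 3 rounds
    return True

history = [{"alice": "loops"}, {"bob": "functions"}]
can_teach("bob", "loops", history)    # False: bob taught last round
can_teach("alice", "loops", history)  # False: alice taught loops recently
can_teach("alice", "git", history)    # True: both rules pass
```

Filtering candidates through a check like this before picking the highest-rated one keeps the teacher role circulating.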
Scale
If you are running this kind of system with 20 learners, you can quickly sort them into groups by hand. However, automation becomes necessary if you scale up the number of learners or the number of different rules and parameters in play.
This shouldn't be a big deal. If you are running a code school, there should be some coders who can help with this sort of thing. I made use of a kind of box-packing algorithm.
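I haven't described that algorithm in detail, but a simple greedy variant gives the flavour: rank learners by rating, hand the top slots to teachers, then deal the rest round-robin into pods. This sketch is an illustration of the general approach, not the exact algorithm I used.

```python
# Greedy sketch of packing learners into fixed-size pods for one
# topic, one teacher per pod. Names and pod sizes are hypothetical.

def pack_pods(ratings, topic, pod_size=4):
    """Sort by rating (descending); top N become teachers, rest fill pods."""
    ranked = sorted(ratings, key=lambda n: ratings[n].get(topic, 0), reverse=True)
    n_pods = -(-len(ranked) // pod_size)  # ceiling division
    teachers, rest = ranked[:n_pods], ranked[n_pods:]
    pods = [{"topic": topic, "teacher": t, "learners": []} for t in teachers]
    for i, learner in enumerate(rest):
        pods[i % len(pods)]["learners"].append(learner)  # round-robin fill
    return pods

ratings = {
    "a": {"loops": 5}, "b": {"loops": 4}, "c": {"loops": 3},
    "d": {"loops": 2}, "e": {"loops": 1}, "f": {"loops": 0},
}
pods = pack_pods(ratings, "loops", pod_size=3)  # 6 learners -> 2 pods
```

A real version would layer the rotation rules and multiple topics on top of this, which is where it starts to resemble a constrained box-packing problem.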
Monitoring
If this is a compulsory part of a code school's teaching mechanisms, then it is essential to track what is happening. Relying on self-reports is not enough, and it gets tricky if you are working remotely. It's important to use the right tools for the job.
End
I don't have much more to say besides:
- teach people stuff, it's good for you
- if you interact with learners, professionally or otherwise, try to get them to teach people stuff. It's good for them
- if you run a school, try out the mechanism I outlined above. If you try it out please let me know how it goes!
Want to learn from me?
I'm running some technical workshops and long-term mentorship over at Prelude. These are damn fine learning experiences for individuals and teams.
The training covers skills such as: Python, Django, HTMX, AlpineJS, Git, Tailwind, Playwright and more.