Mathematicians Ask for Help

Lately I’ve been struggling to finish a piece about growth mindset research, a topic that I can’t seem to leave alone. I always come back to it, for reasons that aren’t entirely clear to me.

Summer is ending, and teachers are putting their classrooms back together. A lot of classrooms — if social media is to be believed — have bulletin boards that look like this:

[Image: a classroom bulletin board]

There were no bulletin boards like this at the high school I attended. I don’t remember if there were any bulletin boards at all — there must have been, but only for announcements and intramural schedules. No teacher would have dreamed of decorating their classrooms in this way. We wouldn’t have taken it seriously; it probably would have been destroyed the second the teacher turned their back to us.

****

I only had two math teachers in all of high school. Rabbi Weiss covered 9th, 11th and 12th Grade math for the honors track. He hated geometry, so he found someone else to cover that. Rabbi Weiss also taught me Talmud/Halacha in 10th Grade, so he was my teacher for all of high school.

Yeshivas, in my experience, are incredibly competitive places. (All-male yeshivas, I mean.) Who would make it to the top class? Who would be offered advanced placement in the Israeli schools? Who would win Torah Bowl? (Yep! Torah Bowl.)

Rabbi Weiss gave long, difficult exams in both math and Talmud. There were two competitions on every exam: who would finish first? who would finish last? Because Rabbi Weiss gave you as much time to finish these monster tests as you needed, and you could look up any sources you wanted (for Talmud — you were on your own for math). You could win for speed or you could win on endurance.

I won the speed competition on the first Talmud exam in 10th Grade, but that was the last time I won that. For every other test that year I was one of the last to finish. I’m not sure what changed.

(Come to think of it, grades were totally a part of our competition too. Getting a perfect score on one of those exams was another thing we fought for.)

There’s more to say about all of this: about Rabbi Weiss’ pedagogy, how badly I miss the summer Talmud classes that met in his basement, his sense of humor, and how even though all of us were highly competitive we were also best of friends, studying together and nudging each other along.

I have to say a bit more about Torah Bowl. I was made captain of a team, and we made it to the championship. I wasn't the fastest and never had a scary-good memory for trivia, but I drafted well and our crew was formidable. I could tell you about the legal question — from Bava Kamma, if you care — that we were asked in sudden death, to crown a winner. It was about damages: is such-and-such more like starting a fire, or digging a treacherous pit? And while I don't remember the answer, I remember that I raised my hand and answered wrong, losing the contest.

****

I signed up for Multivariable Calculus in my first semester at college. I had just come from studying in a yeshiva in Israel for a year. I had a great time, but there was no question: I returned from Israel with a bad case of angst and melodrama. I was obsessed with questions of self-worth, all of which had been highlighted by the constant talk of “who’s a genius?” that permeated that world. This was the state of mind I was in when I started college.

Here’s what I wanted to do: I wanted to show up in tough classes and kick some ass, because otherwise what are you worth? You can only contribute that unique something if you have that unique something.

I wasn’t really prepared for Multivariable Calculus. Rabbi Weiss taught strictly through note-giving and homework-reviewing. It wasn’t terrible for us — along with a bunch of my classmates, I aced the AP exams — but it left me with relatively shallow reserves to draw on in my first college class.

Most importantly, though, I saw Multivariable Calculus as a referendum on me. I didn’t ask questions in class. It was only near the end of the semester that, sheepishly, I arrived at my teacher’s office for help. I profusely apologized for, like, a whole minute before my teacher (whose English wasn’t great) made it clear through intense eye-rolling that I was being ridiculous. Of course he was right — I was ridiculous.

At the end of the year, my classmates managed to get this guy a teaching award. I walked around campus rolling my eyes — haha, my turn now! — because they were giving an award to this guy? The guy who frequently stopped class to ask for English translations of mathematical terms? The guy who, I felt, had given me nothing, no life-vest, no rope, no help?

The big, big thing I was missing was that all the non-grumps in the class liked him precisely because he would ask questions. In doing so, he made everyone else feel as if they could ask too. That was the whole thing.

****

A few years ago I taught a 9th Grader who came with a warning: his teacher last year had been able to get nothing out of him. He shuts down, I was told, and this was absolutely confirmed by what I saw in class during the first few weeks.

At the start of my career, I would have diagnosed him with a struggle-allergy. He wasn’t willing to dig in; he was used to things being handed to him in math; he didn’t know that struggle is normal, a sure sign that learning is happening.

I don’t want to dismiss all of this, but I’ve found a different strategy more helpful. It’s simpler too — which is good, because I don’t do well with complex. I need simplicity in my teaching, as much as possible.

Here’s what I did for my 9th Grader: I told the entire class, “I want you all to ask me questions. Lots of questions. When you’re feeling stuck: ask me for help.”

And, then, when my 9th Grader didn’t ask me questions I walked over to him: “I really want you to ask me some questions if you’re stuck.”

When that didn’t work (“I’m doing fine Mr. P”) I went back to him and I said: “You’re going to start having an easier time with these problems when you start asking me some questions.”

And, finally, when he asked me a question, I answered it as best I could and said, “This was great — please keep asking questions.”

At risk of driving home the point a bit too strongly: I really, really wanted him to ask me questions.

When a student is working on their own — tinkering away, seemingly content — it might not be that they’ve embraced struggle. It might be that they’re embarrassed to ask for help. Kids sometimes end up thinking that you’re supposed to deal with problems on your own, and that in fact dealing with issues on your own is a sign of intelligence and academic worth. It’s certainly what I thought, sitting in the back of Rabbi Weiss’ class or in my professor’s office hours.

It’s the thing I look for, most of all, in evaluating how a student is doing. If they’re asking questions, they expect to learn. If they aren’t, it could very well be that they’ve given up, or are considering it.

I’m not great at classroom culture — kids like me OK, I think — but this is one thing I know that I do. It’s one thing, nothing complicated, but I beg kids to ask me questions. It’s how you grow.

****

On and off for the past seven years, I’ve been trying to learn more math. Not just to solve problems, but to learn a new discipline of math, or to relearn my college material in more depth.

Each summer I sign myself up for a new mathematical project; each year I fail. What I’ve realized, though, is that I can’t do this on my own. I need to ask for help. This summer has been my most exciting summer for learning math, and it’s entirely because I’ve realized that I just need help to learn new stuff.

(Shout out to Anna, Ben, Ben, David, Evelyn and anybody else who has helped me out with math over the past few months! Thank you.)

A lot of teachers — myself included — find it helpful at times to talk about the nature of mathematical work with students. So: mathematicians prove things; mathematicians struggle; mathematicians make mistakes; etc.

The thing is, though, that mathematicians do a lot of things. We get to pick and choose which aspects of mathematical culture we want to promote with kids. Mathematicians prove things, sure, but they also invent discriminatory algorithms. (Put that on a poster!) So we make choices.

It’s a choice to emphasize struggle, mistake-making and individual effort in our classes. What we’re trying to do is emphasize that one’s success in class is in one’s control. And that’s often true, but I don’t think that it mostly happens by trying harder on problems, which is what our growth mindset messages seem to emphasize.

Mathematicians struggle, it’s true, but mathematicians also ask for help. And when it comes to helping kids who have given up, I don’t find it helpful to emphasize the normality of struggle and frustration in math. (We’re all frustrated! might not be the most compelling sales-pitch on behalf of our subject to these students.)

I do find it helpful to beg kids to do this one thing: ask, ask, ask. It’s how you get someone to help you learn. Ask!

****

Barry Mazur (another of my math teachers) helped prove Fermat’s Last Theorem:

KEN RIBET: I saw Barry Mazur on the campus, and I said, “Let’s go for a cup of coffee.” And we sat down for cappuccinos at this cafe, and I looked at Barry and I said, “You know, I’m trying to generalize what I’ve done so that we can prove the full strength of Serre’s epsilon conjecture.” And Barry looked at me and said, “But you’ve done it already. All you have to do is add on some extra gamma zero of m structure and run through your argument, and it still works, and that gives everything you need.” And this had never occurred to me, as simple as it sounds. I looked at Barry, I looked at my cappuccino, I looked back at Barry, and I said, “My God. You’re absolutely right.”

Mathematicians ask questions. Sometimes these questions are fun and playful, but other times the questions are more straightforward: can you help me understand this?

I wish that a teacher had told me — no, begged me — to ask questions that weren’t aimed at impressing anybody. Maybe then I could have been better equipped for math in college, and I wouldn’t have had to run away from it after that first taste. There’s so much more that I could have learned during those years if I had been more comfortable seeking clarity from those who had it.

So, put it on a poster: When you feel stuck, ask for help.


A Quick One, On Politics and Teaching

I’m watching Grace’s talk (which you should watch too) and thinking about her question:

Is teaching necessarily political?

This is a question that I find tremendously tricky — though I sometimes feel alone in finding it so, and I often do a terrible job explaining my trouble. I’ll try again here.

In watching Grace’s talk, I see a difference between two ways of arguing for viewing teaching through a political lens:

  1. You should adopt a political lens because it will help your students, and because it’s the right thing to do.
  2. You must adopt a political lens because teaching is political, and you have to open your eyes up to reality.

The second way of putting it is behind talk of being “woke.” Right? It’s saying that things just are a certain way. You need to see teaching as political just as you must see the world as round. Wake up!

This reminds me of a favorite passage from Maimonides’ treatise on sin and recovery:

“Ye that sleep, bestir yourselves from your sleep, and ye slumbering, emerge from your slumber, examine your conduct, turn in repentance, and remember your Creator!”

To see teaching as non-political is to slumber; to realize that it is political is to open your eyes.

For whatever reason, though, this language feels wrong to me. It’s the first way of putting things that I’m much more comfortable with. Not that teaching is necessarily political, but that we can choose to see it as such, and that we should because it’s the right thing to do.

(I feel nervous sharing these rough thoughts. Some might accuse me of getting caught up in language, but what can I say? The question is one of language, and I’m caught up in it.)

In a comment on one of Grace’s incisive posts, I tried to draw an analogy between teaching as necessarily political and teaching as necessarily spiritual to try to make sense of this all. I’ll quote it here, but definitely go and read Grace’s post in its entirety:

Is teaching spiritual? Well, to someone who sees the world through spiritual lenses it certainly is! Every interaction — each moment — is stuffed with spiritual potential. Our sense for the spiritual is, arguably, tied up with the experiences of kindness, connection, understanding. We’re also capable of casual cruelty, and that mundane disregard for other people is the opposite of what it means to be spiritually engaged in a moment. In short, each moment in teaching is potentially spiritual, so let’s go out and say it: teaching is spiritual work. (Even when you fail to sense it, or treat the moment as mundane.)

At the same time, the classroom is not a religious center and there is a great deal of spiritual activity that would be inappropriate in a classroom context. In that sense, teaching is not spiritual, i.e. there is not widespread agreement among parents, students, educators and other stakeholders that there ought to be spiritual activity in the classroom. (Certainly not that there ought to be any particular sort of spiritual activity present.)

So is teaching inherently spiritual? It depends what you mean.

(a) A spiritual person (I guess I am) could say, yes, absolutely. Teaching is, or it can be, spiritual work. (And the absence of spiritual meaning is taking a sort of spiritual stand, too.)
(b) On the other hand, spirituality is not an agreed-upon purpose of schools or schooling. So you can bring spirituality to the fore of your classroom, but there are risks involved. (Like losing your job, or offending someone who has a strong opposition to spirituality or your particular spiritual message.)

We might also ask, SHOULD schools be more spiritual?

All of this feels as if it’s closely parallel to what we talk about when we talk about whether teaching is political.

The way of thinking about this that I find most natural is that teaching is not necessarily political, though it’s possible to see all of teaching through a political lens, and I really think that you should. 

Why see teaching through a political lens, if it’s not necessarily political? Because it’s the right thing to do for your students. It’ll sensitize you to a host of issues that — whether or not they help increase test scores or get kids into college — will make your classroom a more humane place for your students. People need to be loved and understood; your students are people. A political perspective helps.

But I admit to being entirely unsure of this, and confused as to whether there is really any real difference here. Is there anything important at stake between these two ways of arguing for seeing teaching as political? Are these just two ways of saying the same thing, or two fundamentally different perspectives on politics and teaching?

I don’t know, and I don’t know if I’ve articulated where I’m at in a way that can convince you that I’m not trying to stir up shit or to cause trouble, and I also don’t know if I’ve convinced anyone that this is coming from a place of really sincere concern for doing right by my students. I don’t know why this question feels important and elusive to me, but it does.

And now go watch Grace’s talk! It really is great.

Writing is allowed to be hard

What makes this post weird, for me, is that it started with having something to say. Lately, this is not how I write. Here is the origin story of my last several posts:

And so on. Now, I don’t want to be facetious. It’s not like I start these projects without any thought about what I’m going to say. Usually it’s sort of a nascent take. It’s often extremely tentative: maybe I’ll end up saying…

The point isn’t that I go into a piece of writing without anything in mind. The point is that all these recent posts have required active development. Through a combination of research, drafting and editing, I figure out what the post is about well after I decide to write it.

I mention all of this because I’ve been talking to people recently about why they stopped (or never started) blogging. Before you misunderstand my purpose, there’s nothing wrong with not blogging. Seriously: do whatever you want. I never want to be the guy to criticize someone for not doing something. As long as nobody’s getting hurt, don’t-do to your heart’s content.

Here’s the thing. A lot of people were telling me that they don’t blog because they don’t have ideas, or because they’ve already said what they want to say, or they don’t have the time, and so on and so on. These are all entirely legitimate reasons not to write — along with the very best reason, which is “I don’t feel like it.”

I worry, though, that in the online math teacher community (mtbos) the dominant, default view about writing is that it’s supposed to be easy. The expectation in our community is that writing about teaching is most appropriate as either an organic expression of your views or as a casual, nearly-personal record of your professional practice.

Now: this isn’t such a big deal! There is no crisis in the mathtwitterblogosphere — the community is growing, and pretty much everyone is having a fun, meaningful time. I certainly don’t see myself as a dork Cassandra.

(OK fine, just a bit.)

Here’s what I think might’ve happened. Blogging was a fantastic medium on which to build a math education community. The community’s initial growth was enabled by a particularly flexible type of writing — relatively quick posts that shared a brief, relatively unsexy thing about teaching. This wasn’t the only way to blog, but it was a fantastic, accessible genre for teachers who were new to the community. It was easy to dive in, and a lot of generous engagement resulted while knowledge and resources accumulated.

Along with this success, the community developed a series of (totally reasonable and beneficial!) norms around accessibility. Blogging doesn’t need to be anything fancy, and you don’t even need to worry about a reader — write for yourself, and if other people find it helpful? Hey, that’s a bonus.

People are justifiably sensitive about this point so let me say it again: I am not critiquing this view on blogging, or even its prominence in the blog-o’-land. It’s a message that maximizes accessibility, and that is probably the most important value of our community.

I think that now might be an especially good time to remind people that there’s another way to write in this community, which is to slowly, painstakingly, dutifully carve out posts. And — thinking entirely personally here — it’s just so, so much fun to write like that. You should try it! Taking writing seriously is a hoot.

Let’s get the costs out of the way: I spend a ton of my free time reading and writing. Call it whatever you want — hobby, avocation, craft — but it’s time-consuming. It’s also sometimes unnatural, in the sense that I have to search for something to say, and I need to figure out how to say it. (I still fire off a quick sharing post from time to time, but I’m drifting away from it.) And, because I work hard on this stuff, I sometimes get frustrated when my work is ignored or when I see myself as having failed.

So much for costs. The benefits: seriously, it’s a blast. I learn so much more from crafting a piece than from a post like this one, where I’m sort of just yapping. And, if the past is any indication, I’ll probably be a bit disappointed with the response to this post. Some folks will like it, others won’t, and that’ll be that. My longer, more complex pieces, though, have generated incredibly meaningful responses. I’m blown away by the comments people have left on these posts, and my email correspondence has been rich as well. And that’s all I’ve ever really wanted from this blogging thing — to get to write and to have it mean something real to my peers.

(It’d be nice to have writing in legit publications so my parents could have something to talk about, but that would just be a cherry on top of my current situation.)

What I’ve found, after a lot of stumbling and searching, is that an especially fruitful genre for me is review. Some of the most fun I’ve had writing (generating the most exciting responses) has come when I read a difficult book or article as best I can and try to make sense of it in writing.

I would love to read more complex, critical writing about reading, especially from teachers: won’t someone humor me?

Another type of post that I’m finding especially fun is the research/practice post. I find it a tricky balance. You need to tell two stories at once, taking care to weave them together without subordinating experience to research or dismissing serious findings. This type of piece also gives me that awesome feeling I had when I started blogging and people were still sharing the unsexy things — the feeling that, potentially, any classroom moment could be transformed into a post and thereby be significant beyond the moment itself.

This is another type of post that, while I suppose anyone in education could write it, is especially interesting to me coming from people in classrooms.

There’s a third type of post that I’ve been trying to figure out how to handle. I really want to get better at writing straight math. I want to learn how to apply what I know about teaching to the sort of content that I’m interested in learning about. And I’m also interested in using writing as an engine and discipline for learning new mathematics. My experience with the history of algebra essay was totally energizing; I’m ready for more.

But I’m also eager to read more writing about mathematics from the people who know the most about helping other people make sense of it. It’s a type of writing that is particularly apt for teachers to do, and yet I don’t see much of it.

These three areas — the review essay, the research/practice post, straight math — are some of my favorite types of writing to read, and I am especially interested in reading them from teacher-writers. My purpose here isn’t to nay-say what anyone else is doing. I just want to share how much fun, how rewarding it’s been to explore these areas in my own writing, and to try to entice someone else to start down a similar path.

These kinds of writing will always be hard and time-consuming. But so is making incredible math videos or putting together a presentation. I think there’s a community of writers out there in mtbos interested in playing around with writing, but I don’t think it’s come together quite yet. And maybe there are some people that are looking for a way in on blogging, but haven’t figured out how to make it click yet.

My message, then, is that writing is allowed to be easy, but it doesn’t need to be. Writing can be an effortful process that ends, but doesn’t start, with having something to say. It can involve research, months of planning, asking friends for editing and revising, revising, revising. And, when everything clicks, this sort of writing yields rewards different in kind from the rewards of the more common modes of blogging.

Blogging can be very, very hard but so much more fun.

What I’ve Learned About Practicing Multiplication Facts

I. 

During this past school year, I started practicing math facts in a new way with my 3rd and 4th Graders. The name I came up with for the routine was “Forwards and Backwards Practice.”

Like all my classroom ideas, it was lazy and simple. I handed a piece of blank paper to each kid. I told everyone that we’d be doing an activity in two rounds, that they should write “Round 1” at the top of their papers. Then I wrote the “forwards” and “backwards” problems on the board.

The “forwards” problems were pretty familiar to my kids. Solve the equation; put a number in the blank to make the equation true:

4 x ___ = 28, 8 x 4 = ___, ____ x 7 = 42

The “backwards” questions were more open-ended. On the board, I simply wrote three numbers:

21; 42; 81

I explained that for these I wanted the kids to write as many multiplications as they could remember that equaled each number. Accurate “backwards” answers for 21 would be 3 x 7, 1 x 21, etc.

As kids were wrapping these questions up, I called attention back to the board. If there was a common mistake, this is when I mentioned it. I shared accurate answers to each question, emphasizing what I wanted to emphasize.

Then, I erased the board. I told kids that there would be a second round of questions in a minute that were very closely related. Take a minute, I said, and study the multiplication we just reviewed. Try to remember as many of these as you can. When a minute is up, you'll flip your page over to the blank side for Round 2.

Here’s what I did, basically: I swapped the forwards and the backwards questions. The backwards questions were now forwards equations, and the forwards were now put in backwards form.
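
For example, a Round 2 built from the numbers above might look like this (one version of it, anyway; the exact problems come from whatever was on the board in Round 1):

3 x ___ = 21, ___ x 6 = 42, 9 x 9 = ___

…as the new “forwards” questions, and 28; 32; 42 as the new “backwards” numbers.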

[Image: my planning notebook]

That means that the corrections and practice that the kids got in Round 1 are relevant for Round 2. If a kid is just starting to practice 7 x 3, then they get a chance to study it and try to remember it for a problem that is coming right up, moments after they study.

That’s why I like this routine. It packs a pretty virtuous cycle into a fairly quick package:

  • Think about what you already know
  • Get some explicit instruction in response to your work
  • Study
  • Try to remember it

One thing I like about this routine is that it solves a problem I was having with other whole-group practice: some kids were finishing much sooner than others. I didn't want to end the activity, but the quick finishers needed something to do. Backwards practice is something that sort of "naturally" differentiates. Its end-goal is vague; kids interpret it according to their understanding of multiplication, so each student tends to find appropriate math to work on, and my speed-demons don't force me to call a quick end to the activity.

Depending on the group and their confidence, knowledge, etc., I might vary how closely the questions in Round 2 resemble the questions in Round 1. If kids are really at the beginning of their learning of multiplication, the Round 2 questions might very closely resemble the ones in Round 1. Or we might keep them all “forwards” practice, just knock out different numbers in the equation. Or change the direction (i.e. from 3 x 7 to 7 x 3).

II. 

I also use flashcards with my 3rd and 4th Graders.

When I first introduced them, I was very, very nervous. I tend to worry about the most anxious kids in my class, and I had two nervous wrecks in my 3rd Grade group. (One was receiving medical care for his stress.) How would they react to all this? They were already shutting down when I gave out worksheets. Flash cards would only be worse.

So, I introduce flashcards. I get each kid a little plastic deck, and I get a ton of colorful index cards. (It turns out that you need both of these things to make this work, because otherwise kids lose their cards or mix them up with other decks. I tried to pull this off with envelopes and white flash cards last year and it was a total disaster.)

We slowly start filling out cards with multiplication (and addition) problems. I ask kids to practice, and I explain what practicing means, and I tell them what good practice looks like. (“None of this stuff where you’re shouting out answers while someone else is thinking. We don’t want to take away someone else’s chance to think.”) And then I give them a good chunk of time to start practicing.

Things looked good in class, but you never know for sure, so I asked kids at the end of class to write a bit about how they liked practicing math with their decks. I’m very interested in what one of my high-anxiety kids thinks, so I grab him at the end. What did you write, O? What were you thinking?

What he tells me is really interesting. He says that he really prefers the cards because they only show one problem at a time. When he sees a page with a ton of problems on it he gets overwhelmed, distracted, stressed out. But cards are significantly less stressful for him.

The year goes on. There are a few groups that are getting a bit competitive when they practice, which I come to think is fine as long as I keep an eye on it. I do maintenance on their practice: be nice; you can write another problem as a “starter problem”; make sure everyone you’re practicing with has a chance to answer; you can do this by yourself; throw out a few cards that are too easy. I ask questions: are the cards too easy? are they too hard?

Are you enjoying this practice? I ask that often, because I’m sort of surprised by how much they’re enjoying themselves. But they are, really.

Flashcards are just great for practice. The answer is right there — if you get it wrong you get a correction and a nudge in the right direction. (Math facts are the sort of thing it really helps to get quick corrections on.)

There are other benefits too. Like O said, you only see one problem at a time. You can go fast, you can go slow. You can turn the cards over and do “backwards practice.” You can take the deck home and practice by yourself. You can quickly take it out if you finish an activity quickly — it can go on the menu.

One challenge I’ve had with flashcards is that some kids persist in using really inefficient strategies when practicing with their decks. This is because they are basically choosing how long to spend on each card in their decks. This is attenuated somewhat by kids practicing together but it’s something that I had to keep an eye on while they were practicing.

III.

When I wanted a bit more control over which fact families my students practiced, I used dice games:

[Image: a dice game worksheet. Roll a die for the top left box, then for the top right, then for the middle left, middle right, etc.]

It’s another dumb, easy thing. The only problem here is that there are no corrections when kids are practicing. I had my students write down their results for this sort of practice, but I often couldn’t catch mistakes quickly enough to be useful for their practice.

IV. 

I want to help my students commit as many multiplication facts as they can to memory. I don't want to fetishize math fact automaticity — some kids do OK without this knowledge — but it's really useful knowledge for learning more math. Why wouldn't I try to help my kids commit their math facts to memory?

What’s the best way to do this? Well, you need a theory as to how kids come to commit facts to memory. As I’ve written about before, my perspective is you learn what you practice. If you want to remember facts, you have to practice remembering them. And if you don’t practice remembering them — if you only ever practicing skip-counting to derive them — you’ll probably never come to memorize them.

This helps me navigate the world of multiplication practice, where controversy abounds.

Take, for example, speed practice. Daniel Willingham and Daniel Ansari recently wrote a post defending speed practice. I left a comment arguing that we needed to know why speed can help kids in their practice before we defend it:

In one study I read (about fluency software) I learned that students with learning disabilities did not improve their addition fluency through untimed practice. Why? Because during untimed practice, the students simply DERIVED the facts rather than trying to RECALL them. In other words, you'd see a lot of kids in front of a screen counting out 3 + 9 with their fingers instead of trying to recall it from memory. The kids were already pretty good at using this strategy, and the untimed practice allowed them to keep doing what they were good at.

I see this in my own students too. It’s not so much that timed practice is helpful for learning directly, as much as it creates a context in which kids practice the things you’d like them to practice.

A solution is timed practice with immediate fact instruction. (You got 3 + 9 wrong? OK, 3 + 9 = 12. Try again.)

[…]
The worst case scenario is that teachers give kids a full worksheet of problems, and kids can't directly recall ANY of them. Instead, kids work on using strategies to derive the facts. The teacher says to solve as many as you can, but students can only answer that many questions by direct recall — deriving each fact with a strategy takes too long. Time pressure (along with the long list of problems) generates anxiety, which makes it harder still to answer problems correctly. None of this produces fact fluency.

Based on talking to colleagues and other math educators, this worst case scenario is in fact prevalent in US classrooms. These “Mad Minute” activities could be used appropriately, but they are instead often given to novices who are not prepared to draw on their mostly memorized facts for the activity. And, I think, this probably does generate feelings of helplessness and anxiety.

As a result of all this, when I think about fact practice I end up asking myself this question all the time: Will the kids be practicing derivation or recall? 

And here’s a fundamental follow-up: Kids can’t practice recall unless they are being prompted with the correct answers during the practice.

I really don’t like Mad Minute activities because they don’t prompt you with corrections or instruction in the fact during the activity. So you can’t really learn anything from the activity unless you’re “almost there.” Maybe it helps you practice pulling out the fact from memory, but it can’t help you learn that fact with automaticity without some sort of prompting during practice.

That’s why I like splitting up practice into two rounds, as I do during “forwards/backwards” practice. I get to give prompting/instruction in between the rounds, and then kids get a chance to practice with it during Round 2.

It’s also why I like practice with flashcards, especially if kids are reminded to try to figure out the answer as quickly as they can. (They basically do this anyway.) While I worried that this would be stressful for my kids, I’ve actually found the opposite. Flashcards, the way I use them in class, tend to be less stressful than other conventional practice activities (like long problem sets).

The absence of prompting/corrections is a downside of my dice practice, though it’s attenuated somewhat by the way the problems will reappear as kids cycle through the different boxes and repeat factors. Still, it’s a form of practice that probably would be better at helping kids have a chance to practice strategies rather than remembering.

I think it’s important to be thoughtful here. Math facts aren’t the be-all of school math, but they do make a difference for kids’ future learning.

The fundamental disagreement I have with a lot of people in math education is that I don’t think that practice using a strategy helps kids commit facts to memory. (Though I do believe that having efficient strategies does help kids commit facts to memory. Both knowing efficient strategies and recall practice are important for developing automaticity. I have citations for this. See also the Willingham/Ansari piece.)

And my fundamental displeasure about the debate is how rarely it gets into the classroom details. So, you’ve got a position on how multiplication should be taught? Does it fit on a slide? Do people take pictures of it with their phones during conferences? Tweet it, retweet it, like it?

That’s great, seriously, but let’s talk the nitty gritty. What are your activities? What does your class look like? What is it that you do?

When measures of steepness disagree

I.

My students know a lot more about skiing than I do. I grew up in Skokie, IL — an exceptionally flat place; we went sledding down a pile of garbage called ‘Mount Trashmore’ in Evanston — but a lot of my students go on vacations to resorts and stuff in the winter.

Once or twice, a Jewish youth group took me to Wisconsin to ski. Wisconsin sort of has hills. A midwestern ski resort is the sort of place where you can choose whether to slide down a hill on skis or an inflatable tube. It is also home to the tamest “Black Diamond” slopes in the country — colder but otherwise not much different than the slides my son plays on at the park.

Anyway, that’s what I know about skiing. Glad to get that off my chest.

II.

Towards the beginning of my trigonometry unit — after studying the tangent ratio for several days — I showed this picture to my geometry class. In whole-group, I asked my students to notice as much as they could, and after that I asked the class to try to figure out what all the numbers represented:

[Image: a ski slope steepness chart, from here, h/t @mathyvisuals]

When I teach trigonometry, one of my first goals is to help kids see that angles and the tangent ratio both are measures of steepness. Trigonometry is the art of moving between these two different measures. With a trig table or a calculator you can take an angle and look up its associated ratios, and you can look things up the other way (ratios to angles) too. This is true for all the trig functions, and my students encounter it first in the context of the height-to-width ratio.

If you’re trying to describe the steepness of a ski slope — again, not a major concern growing up in Skokie — you could talk about the height:width ratio, or you could talk about the angle of inclination. That chart above rates the difficulty of ski slopes in terms of the angle, but it just as well could have done it in terms of ratios. (I asked my students to draw slopes with heights and widths in each zone.)

The Americans with Disabilities Act describes the appropriate steepness of a ramp in terms of both measures:

ADA Ramp Specifications Require a 1:12 ramp slope ratio which equals 4.8 degrees slope or one foot of wheelchair ramp for each inch of rise. For instance, a 30 inch rise requires a 30 foot handicap wheelchair ramp.
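
(Checking the quoted numbers against the tangent ratio, with arithmetic of my own: a 1:12 slope has a height:width ratio of \frac{1}{12} \approx 0.083, and \tan^{-1}(\frac{1}{12}) \approx 4.76^\circ, which is the 4.8 degrees in the spec. And a 30 inch rise at 1:12 needs 30 \times 12 = 360 inches of run, which is the 30 feet of ramp.)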

Every ramp, hill, slide or mountain has a steepness. To bring that physical concept into the realm of mathematics, we have to measure it. But there are many ways to measure steepness, and often we want to be able to move between them. That’s a big part of what trigonometry is.

III.

Before really launching into the trig unit, I task kids with a series of “Which Is Steeper?” problems.

[Image: Which ramp is steeper?]

Along with everything else, these problems also really help kids use the height:width ratio as a measure of steepness.

What I’m looking for is for kids to fluently use three little micro-skills:

  • when two ramps are the same height (or the same width), the ramp with less width (or more height) will be steeper
  • when both the height and the width are different, scale one ramp until one of its dimensions matches the other’s, and then directly compare the remaining dimension
  • in general, compare the steepness of two ramps by dividing the height by the width and comparing the ratios
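
For instance, with numbers of my own: compare a ramp that rises 3 over a width of 4 with a ramp that rises 5 over a width of 7. The ratios are \frac{3}{4} = 0.75 and \frac{5}{7} \approx 0.71, so the shorter ramp is actually the steeper one.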

The way I see it, these micro-skills are important background knowledge to support the procedures for finding missing sides of triangles using trig — especially if you come into this work without a lot of comfort with ratios and setting up and solving equations like \frac{5}{x} = \frac{17}{19}.

(I do a lot to help kids with ratios, but I don’t usually focus on setting up and solving the equations. Maybe I should.)

IV.

Once I think my kids are getting comfortable using the height:width ratios to find missing sides of right triangles, I show them the physical trig table. There is so much for kids to learn from the trig table — I think it’s a shame when students move straight to looking up values on the calculator.

[Image: a trig table]

The most amazing thing about the trig table — at least it’s my favorite thing, and kids often get excited by it — is what happens as we approach 90 degrees. The sine and cosine functions change a bit, of course, but the tangent values just explode:

[Image: the trig table near 90 degrees]

Kids often are surprised by this, but it makes a lot of sense. Adding another degree of steepness always makes the height:width ratio larger, but not always by the same amount. If your ski slope is very, very flat, then going up by a degree doesn’t increase the ratio very much. If your slope is a double black diamond, though, upping the steepness by a degree leads to a radical change in the ratio, a huge increase.

I always try to use this as an opportunity to introduce some important language to my students: the relationship between steepness ratios and angles is non-linear; a small change in the angle doesn’t always have the same impact on the ratio’s size.
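
Some rounded values from a calculator (not the table pictured above) make the point:

\tan 1^\circ \approx 0.017, \quad \tan 2^\circ \approx 0.035

\tan 45^\circ = 1, \quad \tan 46^\circ \approx 1.036

\tan 88^\circ \approx 28.6, \quad \tan 89^\circ \approx 57.3

Each pair is one degree apart, but the jump in the ratio grows from about 0.02, to about 0.04, to nearly 29.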

When I think of multiple ways of measuring things, I usually think of pairs of measures that stand in a linear relationship. The nurses measure my newborn daughter’s weight in terms of grams and pounds. When you lose a pound of weight you’re losing 453.59 grams — always. It doesn’t matter how much or how little you weigh. A pound and 453.59 grams are simply interchangeable.

But a lot of pairs of things in the world vary in non-linear ways. In a sense, an additional year of investment is worth more in the future than it’s worth now; a falling ball drops faster as time goes on. I don’t know how many opportunities there are to study this in terms of measurements, but it seems a fruitful arena for chipping away at the assumption that everything is linear.

V.

And, now, we get to the question that has been bugging me for the last few months: How much steeper is an 89 degree ramp than an 88 degree one? A lot or a little?

Remember: whether with ski-slopes or with ramps, there are two ways to measure steepness. You can measure it in terms of the angle or in terms of the ratio.

From the point of view of angles, the 89 degree ramp is just as different from the 88 degree ramp as a 21 degree ramp is from a 20. Which is to say, a bit steeper.

But look at the ratios! Maybe we should think in terms of height:width, in which case the 89 degree ramp is much steeper than the 88 degree ramp, especially compared to what happens when you add a degree of steepness lower down the trig chart.

I have no idea how to think about this at all.

One way out of this conundrum would be to assert that one of the measures of steepness is the actual, true measure of steepness. But any choice seems arbitrary. Both angles and ratios seem perfectly fine. Why choose one over the other?

(Maybe we’d try to further plant things on a human foundation; how much more effort would it take to climb up each of these ramps? Let’s run experiments that measure physical exertion; maybe we could use physics to model this. Steepness would just then be an expression of human exertion. This is a weird idea.)

Another way out could be to deny that there is any single thing that we’re measuring at all. Maybe steepness isn’t one single thing — it has an angle dimension and a ratio dimension. But what does that mean?

I really have no idea what to think. As we near 90 degrees it seems that the two measures of steepness disagree on how much of a difference a small change makes. Which means that we’re measuring the same quantity (steepness) with tools that are fundamentally incompatible.

What does it mean for two measures to be incompatible? What other measures are like this?

In trying to sort this all out — and I hope it’s clear that I’m awfully confused — I’ve been also thinking about something Freddie deBoer wrote about educational testing:

Incidentally, it’s a very common feature of various types of educational, intelligence, and language testing that scores become less meaningful as they move towards the extremes. That is, a 10 point difference on a well-validated IQ test means a lot when it comes to the difference between a 95 and a 105, but it means much less when it comes to a difference between 25 and 35 or 165 and 175. Why? In part because outliers are rare, by their nature, which means we have less data to validate that range of our scale.

Could that help us think about what’s going on with steepness? Clearly there is no such validation problem when it comes to the steepness of right triangles — we can always draw more! — but maybe there is something analogous going on. We might say: it just doesn’t mean very much to get precise about how steep a very steep ski slope is. Numbers break down, our measures of steepness fall apart, and all we can say about very steep things is just the tautological thing — they’re pretty damn steep.

That is, there just is no way to precisely talk about the steepness of a very steep ramp, as the measures disagree.

But that seems weird too, and I’m led to the conclusion that I don’t understand this very well at all.

 

High School Algebra in Ancient Mesopotamia

I.

On an online forum for discussing math, a user named Mr. Javascript  (his bio: “If you’ve ever gone to the doctor, purchased insurance, or used a credit card, my code may have been executed.”) took a swing at polynomial factoring:

The wife and I are sitting here on a Saturday night doing some algebra homework. We are factoring polynomials and we both had the same thought at the same time: when are we going to use this?

Polynomial factoring — as those of us steeped in high school algebra know — is the art of “unmultiplying” an algebraic expression. One of the tricks for unmultiplying an expression is the difference of squares identity. My favorite uses of it involve arithmetic:

25 - 4 \rightarrow (5 + 2)(5 - 2)

100 - 1 \rightarrow (10 + 1)(10 - 1)

400-9 \rightarrow (20 + 3)(20 - 3)

In school math, however, the difference of squares is typically used in the context of algebraic factoring exercises:

x^2 - 9 \leftrightarrow (x + 3)(x - 3)

a^2 x^2 - 9b^2 \leftrightarrow (ax + 3b)(ax - 3b)

\frac{a^2 x^2}{100} - \frac{9b^2}{121} \leftrightarrow (\frac{ax}{10} + \frac{3b}{11})(\frac{ax}{10} - \frac{3b}{11})

And children are often asked to commit to memory the general form of this rule:

a^2 - b^2 = (a + b)(a - b)

It’s these algebraic factoring exercises that frustrate people like Mr. and Mrs. Javascript.

Part of the problem is that factoring is too much of one thing, not enough of another. It’s typically introduced to students as a method for solving polynomial equations. But it’s never the only method taught. If you hate or fear algebraic manipulation, are you going to solve an equation by factoring? Not if you can graph it. And if algebraic manipulation is your speed, why bring a spoon to a knife fight? The quadratic formula or completing the square could be your go-to.

So, nobody’s students like factoring. (Sit down, Honors Algebra.) It seems frivolous and useless. Which is why I was a bit surprised to see it coming up again and again while reading about ancient mathematics. How could factoring be useless if it played such a large role in ancient mathematics?

I’ve been on a bit of a math history kick lately. I started with The Beginnings and Evolution of Algebra, a book I found while scanning the shelves at school for some summer reading. Beginnings and Evolution seems to heavily rely on van der Waerden’s dry but important Geometry and Algebra in Ancient Civilizations. A search for an up-to-date, well-written version of all this led me to Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century, which has been the best of the bunch for my needs.

“Using the history of algebra, teachers of the subject can increase students’ overall understanding of the material.” This is from Katz and Parshall, at the start of Taming the Unknown. Could Mesopotamian scribes show us how to teach factoring? What exactly can a modern teacher glean from mathematical history?

II. 

Not many people have five words in their name, but most people aren’t Bartel Leendert van der Waerden. Though a student of Emmy Noether (who was Jewish), he managed to hold on to his university position in Germany under Nazi rule. (True, to the Nazis he made a point of his “full-blooded Aryanness”. In correspondence, though, he was disposed against the regime. He’s clearly guilty of cowardice and self-interest, but it’s hard to know quite how harshly to judge the past.)

He wrote the first comprehensive textbook on modern algebra, and later turned to the history of mathematics. In both Scientific Awakening and Geometry and Algebra in Ancient Civilizations, he put ancient sources in conversation with a modern mathematical perspective. Sometimes he reported finding modern theorems lurking in the work of the ancients. These included various identities that today we would teach as factoring, including the difference of squares.

Our knowledge of Mesopotamian mathematics comes from clay tablets found in Iraq. Some of the tablets (like Plimpton 322) contain calculation tables, while others are collections of word problems with solutions. Intriguingly, we think most of these documents are pedagogical artifacts, either used for instruction or practice. (Some of them have errors!)

Here’s a “real-world” problem from a clay tablet called MS 5112:

“The field and 2 ‘equal-sides’ heaped [added together] give 120. The field and the same side are what?”

This is equivalent to the modern-day equation x^2 + 2x = 120. Van der Waerden’s claim about the difference of squares formula — that the Mesopotamians knew and used it — largely depends on how they solved problems such as those found on MS 5112.

Modern algebra students learn how to use the difference of squares to solve equations, but not for equations like x^2 + 2x = 120. Modern students would only use the difference of squares when the equation is explicitly presented as a difference of squares, e.g. x^2 - 9 = 0 or 100 - 4x^2 = 0. These ancient sources are using the difference of squares transformation as their go-to move for solving quadratic equations.

When presented with a problem such as x^2 + bx = c, the Mesopotamians would typically transform the x^2 + bx expression into a difference of two squares.

Pictorially, the right chunk of this rectangle — the bx — is cut in half down the middle…

[Diagram: the bx strip of the rectangle cut in half down the middle]

…and pasted at the bottom of the left chunk, creating a difference-of-squares arrangement:

[Diagram: the half-strip pasted below, leaving a large square with a small square missing from its corner]

This was the fundamental step in their solution of a quadratic equation.

And then things get rolling: the area of the full square is \frac{b^2}{4}+c; the side length is \sqrt{\frac{b^2}{4}+c}; the missing length, x, is \sqrt{\frac{b^2}{4}+c} - \frac{b}{2}. We have just come very, very close to deriving the quadratic formula, and we’ve done so by seeing x(b + x) as a difference of squares.
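
Run that recipe on the MS 5112 problem above, x^2 + 2x = 120, and you get x = \sqrt{\frac{2^2}{4} + 120} - \frac{2}{2} = \sqrt{121} - 1 = 10, and sure enough, 10^2 + 2 \times 10 = 120. In general, \sqrt{\frac{b^2}{4}+c} - \frac{b}{2} = \frac{-b + \sqrt{b^2+4c}}{2}, which is the positive root that the quadratic formula gives for x^2 + bx = c.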

To me, this is a surprising connection. I’ve known about this method for solving equations for years, but have never seen it through the lens of the difference of squares identity. Factoring may seem frivolous, but van der Waerden argues that it was a central part of how Mesopotamians did mathematics.

III.

All the above — the “real world” word problem and its solution — comes to us in the language of geometry: fields, squares, lengths, areas. Van der Waerden, of course, noted this:

From the very beginning, algebra has always been closely connected with geometry. In Babylonian problem texts, the unknown quantities are very often called “length” and “width”, and their product “area”. The product of a number by itself is called “square”, the number itself “side” (of the square).

For van der Waerden, this is all beside the point; it’s just a geometric sheen over an algebraic essence:

We must guard against being led astray by the geometric terminology. The thought processes of the Babylonians were chiefly algebraic. It is true that they illustrated unknown numbers by means of lines and areas, but they always remained numbers.

He also writes that “in ancient civilizations geometry and algebra cannot well be separated,” but that is because algebra was being performed in a thoroughly geometric context. Modern students may use symbols where the ancients used shapes, but all are doing algebra.

These days, most historians of math do not agree with this picture — they see the Mesopotamian work as essentially geometric, not algebraic. True, it was algorithmic — there was a definite procedure that was repeatedly used — but what the Mesopotamians passed on were methods for manipulating areas and lengths, not numbers.

The current perspective is the result of historians taking a fuller view of the ancient world than that taken by the earlier generation of researchers. Current historians know a lot about the Mesopotamians: about their geography, culture, society, economy, etc. The first generation of historians of Mesopotamian mathematics, in contrast, were mainly mathematicians-turned-historians who had narrower interests — people like good-old Nazi-tolerating van der Waerden.

Mathematicians tend to see math as a set of truths universally held and recognized. (Carl Gauss may or may not have suggested communicating with aliens by etching an enormous Pythagorean Theorem diagram into the Siberian tundra, but they don’t tell stories like that about chemists.) It’s only natural that when mathematicians turned to the past (another alien world) they would see algebraic continuity, not difference.

Current historians see the difference, though. Through a better understanding of Mesopotamian language they have arrived at translations that attempt to better represent the mathematics as it was, not as it is. What an early mathematician-historian translated as “coefficient” is now translated as “projection,” a subtle change with important implications: “When expressed in these very concrete terms, Old Babylonian algebra becomes not arithmetical but geometrical and metric: concerned not with abstract numbers but with measured lines, areas, and volumes,” Eleanor Robson writes.

It’s exciting to look at the past and seek insight into modern teaching dilemmas. But, if their mathematics was fundamentally different from ours, is this project even possible?

IV. 

There is another instance of factoring the difference of squares appearing in discussions of ancient mathematics. It involves a connection between the Pythagorean Theorem and the difference of squares. Here too, the connection was made by an earlier generation of scholars and has more recently been challenged by contemporary historians.

Like van der Waerden, Otto Neugebauer also began his career as a mathematician in Germany. When the Nazis asked him to sign a loyalty oath, though, he refused and was suspended from work. He continued on in Europe until 1939, when the Nazis took over his mathematical journal and he made his way to the United States.

Neugebauer is especially known for his work with Mesopotamian clay tablets. More than any other scholar, he was responsible for uncovering mathematics in these ancient records.

Plimpton 322 is a clay tablet containing a carefully organized table of numbers:

 

[Image: Plimpton 322. We used to think these were Pythagorean triples.]

At first, nobody thought Plimpton 322 was special. But Otto Neugebauer took another look at the table and announced that this was actually a mathematical treasure: a Babylonian record of Pythagorean triples (i.e. whole numbers that could be sides of a right triangle, like 3/4/5 or 5/12/13).

How did these ancient mathematicians produce this table? This is where, for Neugebauer, factoring the difference of squares comes in.

We typically introduce the Pythagorean Theorem as a sum of squares relationship:

A^2 + B^2 = C^2

But it’s equally true that the Pythagorean Theorem is saying something about a difference of squares:

A^2 = C^2 - B^2

Which means that you could just as well put it like this:

A^2 = (C + B)(C - B)

It’s not obvious that (C + B) and (C - B) must both be square numbers, but they are. Call the first square number s^2 and the second t^2. Which means that the following two equations are true:

C + B = s^2

C - B = t^2

Add those two equations together, and you get a new one.

2C = s^2 + t^2

Subtract them, and you get an equation for B.

2B = s^2 - t^2

So, there you have it. Pick two numbers, swap them in for s and t, and you get yourself values for B and C (you can get A too), and you have an A^2 + B^2 = C^2 triple. Tada: the ancient Mesopotamian method for finding Pythagorean triples!
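
To see the recipe in action, with numbers of my own: take s = 3 and t = 1. Then 2C = 9 + 1 = 10, so C = 5, and 2B = 9 - 1 = 8, so B = 4. Since A^2 = (C + B)(C - B) = s^2 t^2, we get A = st = 3, and that’s the 3/4/5 triple. Taking s = 5 and t = 1 instead gives 5/12/13.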

Once again, though, this historical connection has been questioned. Eleanor Robson wrote a fantastic article challenging Neugebauer’s view. She argues on both mathematical and contextual grounds that this table can’t represent Pythagorean triples. For her, this is just another example of mathematicians not understanding Mesopotamia on its own mathematical and social terms.

Part of the problem, again, is that Neugebauer’s idea is intensely algebraic, whereas in ancient Iraq the mathematics was chiefly geometric. Part of the problem is also that Neugebauer didn’t know what these sorts of tables were typically used for in Mesopotamia, so he misunderstood their cultural use.

Whether or not it reflects history, the mathematics here is solid.  The Pythagorean Theorem is connected to factoring a difference of squares, just as the factoring connects to solving x^2 + bx = c.

The historical question is whether this mathematics would have been meaningful to the ancients. The pedagogical question is whether it could be meaningful to our students.

V.

So: can studying the past help us better teach factoring?

It’s tempting to cull specific ideas from this history. The connections between factoring the difference of squares, solving quadratics, and the Pythagorean Theorem are still knocking around my head. I don’t know if there’s a way to bring these connections to my students, and I also don’t know if they’d enjoy them as much as I do. I don’t know yet — I’m going to have to think on this for a while more.

I’m wondering, though, if there’s maybe a more general lesson about teaching algebra to take from all this.

The mistake of the early mathematician-historians was to see too much algebra in the cut-and-paste geometry of the Mesopotamians. What they failed to understand was how thoroughly geometric this ancient math was: geometry all the way down.

It seems weird, then. Why didn’t the Mesopotamians make the leap to algebra? And why don’t our students make these same connections?

In the history of education there have been people who have made very strong claims about the similarity of children’s development to the historical development of cultures. This is wrong — and often racist and colonialist, as it assumes that other cultures are further behind in an inevitable path towards the present.

But historians of mathematics have a more nuanced view of Mesopotamia now. It’s not that ancient cultures knew — or failed to know — algebra, so much as they had their own sort of algorithmic geometry. It made sense to them, and it needs to be understood in its own context and time.

All of this, though, makes me a little bit more pessimistic about the usefulness of geometry for helping students learn algebraic concepts. The geometry of cut-and-paste really is different from the algebra of factoring. It’s only when you understand both that you can look back and see the connections between them, as van der Waerden did.

When faced with a tough topic, math teachers often like to change the context — add a story, move to pictures, put things in geometric terms. A lesson from this history of algebra could be that we should be very, very worried about whether these more comprehensible contexts are really aids for understanding the difficult things.

Each context is its own little world, and the sense that we can make of it is not easily bridged to some other area. In particular, there is nothing simple about moving from geometry to algebra.

Why Mythbusting Fails: A Guide to Influencing Education With Science

By Michael Pershan and Benjamin Riley. We wrote it together, and Ben posted it over at the Deans for Impact blog. I think a lot more educators than scientists read my blog, but pretty much everything except the details applies to an educator interested in helping scientists understand classroom teaching better. Or to your daily life, political persuasion, arguments with a friend, etc. – MP

The persistence of neuromyths

“If it disagrees with experiment, it’s wrong,” physicist Richard Feynman said. “In that simple statement is the key to science.”

By this measure, the learning-styles hypothesis has failed too many times to count. Experiment after experiment has shown that matching the form of instruction to a student’s preferred “style” of learning – such as auditory or visual – does not improve a student’s understanding. As a result, the vast majority of cognitive scientists are certain that learning styles have been debunked.

And yet many educators still believe learning styles are important. What gives? Why don’t they trust the science?

One possibility is that educators are simply unaware of the research undermining learning styles’ usefulness. If this is the problem, then there is a simple solution: just spread the word!

To that end, recently a group of 30 learning scientists – including Steven Pinker, Hal Pashler and others – published a letter to inform teachers that learning styles is a “neuromyth” that “create[s] a false impression of individuals’ abilities, leading to expectations and excuses that are detrimental to learning in general, which is a cost in the long term.”

But what if educators are presented with this information…and they still don’t change their minds?

Take Terry J., a lifelong public educator from Canada. Terry read the scientists’ letter, but it failed to convince her to change her practice. “I know that I am a visual learner,” she said. “There is research that supports the idea that learning styles matter, and there is research that says learning styles are bunk.”

Perhaps Terry just “hates scientists.” But this plainly isn’t true – Terry is very interested in science and research that can inform education practice. In fact, she helps design professional development for her province, and she and her team use educational research to plan their workshops. She even consults with a research foundation located just down the hall from where she works.

Terry’s seen the reports, but – in a striking bit of symmetry – she believes that when it comes to learning styles, it’s the research community that is uninformed. The perspective of the letter-writing neuroscientists, she said, is just “based on different assumptions and interpretations.”

The scientists who signed this letter would of course find much to critique in Terry’s views. But if their letter was intended to persuade anyone, surely Terry sits squarely within the target market. She is a public servant, protected from profit motives, with a longstanding interest in applying educational research in practice.

If these scientists can’t persuade Terry, who can they?

The challenge of cultural cognition

Terry’s resistance to scientific authority might seem bizarre – certainly it will cause some learning scientists to shudder. Yet this sort of resistance is not unique to education. In fact, it’s easy to find examples of resistance to scientific insights even in fields that are thought to be highly scientific, such as medicine. And, even more paradoxically, this resistance to scientific evidence can coexist with a strong trust in science and scientists.

How can this be? The answer may lie with what some researchers describe as “cultural cognition.”

Cultural cognition describes how we interpret certain facts and evidence through the lens of our existing values. Usually, we accept scientific claims as true because, overall, most of us trust science and scientists. But – in rare but notable cases – our stance on a scientific matter comes to take on a larger, much more personal meaning. Beliefs about science can become entangled with our self-identities, even if they didn’t start out that way.

Take climate change. Despite mounting evidence and a clear scientific consensus on the relationship between human activities and the rise of global temperatures, beliefs about the cause of global warming are growing more polarized in the U.S., rather than less. From the cultural-cognition perspective, this is largely because beliefs about global warming have become statements about who we are – namely, whether we self-identify as environment-protecting liberals or industry-defending conservatives. This means that increasingly Americans not only disagree on the level of risk posed by global warming, but whether there is even a scientific consensus.

This outcome was not inevitable. But many climate-science advocates emphasize the (perceived) ignorance or anti-science attitudes of those who don’t understand global warming or its causes. In doing so, these advocates effectively insult precisely the people they wished to persuade. The tragic result? The normal bipartisan trust in science has become “polluted” on climate change, in part because of communication strategies that have antagonized existing values, and activated cognitive defenses.

The perils of threatening teacher autonomy

So now let’s return to education and learning styles. What we want to suggest – tentatively and with caveats – is that we run the risk of polluting the environment on learning sciences in the same way people have polluted the climate-change communication environment.

In particular, we worry that some researchers do not fully appreciate the importance that educators place on their own autonomy. Teachers are the ultimate deciders of what takes place in their classrooms, an autonomy that provides them with a major source of professional satisfaction. Teachers may not receive high wages or status, but they do receive tremendous psychic rewards when students appear to learn as a result of their decisions. And educators possess a great deal of (reasonable) sensitivity about protecting this autonomy, as there is a long history of “outsiders” seeking to tinker with what happens within districts, schools and classrooms.

So the scientific consensus that learning styles do not exist will become irrelevant if educators come to see their beliefs about learning styles as critical to their professional autonomy. And one way to heighten the risk of that happening is through talk of what science demands teachers do or believe.

Now, the caveats. We don’t have any direct evidence that teachers currently believe in learning styles because they see it as necessary to establish their professional identities. Nor are we familiar with any evidence tracking changes in educator beliefs about learning styles over time. For all we know, educators who read the recent letter from 30 scientists denouncing learning styles as a “neuromyth” are busy reshaping their beliefs and changing their practice.

But suppose that our analysis is correct. This poses a very delicate dilemma for advocates of learning science (ourselves included). After all, scientific evidence should have weight, and if educator autonomy extends to believing in myths, well, that’s undeserved autonomy.

Is there no hope for change?

Teaching learning science to educators

Let’s return to Terry J. She read the letter by leading scientists denying the existence of learning styles, but nevertheless continued to believe in the debunked hypothesis. Attempts to persuade Terry to abandon this neuromyth may backfire if they emphasize her obligation to accept the burden of scientific evidence, especially if Terry sees this as a threat to her professional autonomy. This is a vexing challenge.

There is a way forward. We need more teaching – and less preaching – to influence the beliefs of educators such as Terry. To achieve this, advocates of learning science should borrow from the playbooks of good science teachers. These teachers do not prioritize getting students to reject their existing beliefs, but instead seek to foster new scientific knowledge in their students. They replace scientific misconceptions, rather than debunk them.

But how should learning science be taught to teachers? We urge learning-science advocates to ask three questions before attempting to influence teachers.

1. What do educators already believe about how learning takes place, and why?

A bedrock principle of cognitive science is that we learn new ideas by reference to what we already know. Effective teachers are eager to understand their students’ existing beliefs so that they (the teachers) can use prior beliefs and understanding to develop new knowledge.

We think advocates of learning science should be more curious about why teachers believe what they believe, including learning styles. Math teacher Dylan Kane provides a great example of this curiosity in action. He recently conducted a short, non-scientific poll of his followers on Twitter – many of them educators – to learn more about where enthusiasm for learning styles might stem from.

The contrast suggests that while learning styles are popular, what’s really popular is instruction involving multiple modalities. Perhaps some teachers who express a belief in learning styles “really mean that they try to use a variety of representations and activities in class,” as Kane wrote in a subsequent blog post. This is something to encourage in education; good teachers know the value of teaching their students in more than one way.

Of course, Kane’s poll was not scientific, and we don’t know how many of the respondents are practicing teachers. But we suspect most advocates of learning science advance their arguments against learning styles with even less data regarding the existing beliefs of their intended audience (educators). If so, their attempts to build new knowledge in educators may be premised on a misunderstanding of what educators believe, or why. That’s a recipe for an unproductive dialogue.

2. What scientific insights about learning are important for educators to understand?

Another principle from cognitive science is that our decisions are guided by mental models and representations. The most common forms of science communication focus on making evidence-based information available, and assume this information will be incorporated into the recipient’s mental model as a matter of course. Usually, it isn’t, and recipients simply retain their existing ways of seeing the world.

Instead of simply sharing evidence or information, we suggest advocates of learning science spend more time helping teachers understand models of learning they can employ in the classroom. Happily, many scientific principles are useful for teaching, but here we consider just one: dual coding.

Dual-coding theory states that the mind processes words and pictures along different pathways. Researchers have found that instructors can present more information to students by distributing it across both words and pictures. Instruction that does this in an artful, complementary fashion will often reach more students than instruction that does not.

As some learning scientists have aptly observed, dual coding covers similar territory as learning styles. For that reason, advocates of learning science would be wise to introduce dual coding as an alternative to learning styles. We doubt teachers will immediately reject learning styles – students rarely discard their initial (mis)conceptions right away – but over time, if dual coding proves effective in the classroom, teachers may find that learning styles has lost its appeal. Once again: it is better to replace ideas than to debunk them.

3. How might we create opportunities for teachers to practice their understanding of learning science?

Practice is essential for learning, but not all practice is equally effective. A great science teacher (or teacher of any subject, for that matter) provides students with many opportunities to practice their new understanding in structured ways.

This presents a real challenge for advocates of learning science. Blog posts, op-eds, and social media all have a role to play in raising scientific awareness, but we know that real learning requires more. How can we foster opportunities for teachers to try new science-informed practices – and receive useful feedback as they do?

There are no easy answers to this question, but our hunch is that at a minimum it will require learning scientists to approach educators with more humility. Instead of attacking myths, scientists need to approach educators as professional colleagues. As colleagues, teachers and scientists have much to learn from and teach each other about what works in a classroom.

Learning scientists can help practicing teachers improve by inviting them to attend learning-science conferences, collaborating on rapid-cycle research projects, and by providing direct professional development in the local schools where teachers teach. And whenever these interactions take place, we hope learning scientists will listen to teachers, and learn from their experiences in the classroom to inform future research.

This sort of approach might reach Terry J. in a way that info-spreading never could. Terry loves science, research and education. It’s hard to imagine her saying “no” to an opportunity to collaborate with scientific researchers in a collegial way. Will anyone offer her the opportunity?

Building bridges between science and teaching

We believe we are at a crucial moment in the relationship between learning science and education. More than ever, there are organizations and individuals seeking to share the fruits of scientific research with educators. We suspect there have never been as many books about learning science in the hands of teachers as there are today. We should celebrate this development.

At the same time, we worry about the danger of backlash. “Teachers must reject the learning styles ‘neuromyth’” is a provocative headline. It’s also polarizing. The more teachers are told what they must do or believe, the greater the risk that they will become antagonistic toward learning science, or even research generally. We should endeavor to prevent that from happening.

Scientists know a lot about the cognitive processes that can lead to learning – or not. Educators know a lot about the instructional processes that can lead to learning – or not. By building on their respective knowledge bases, and treating each other with mutual respect, we can foster the scientific profession of teaching.

Reading Research: The Case of Mrs. Oublier

I.

A Revolution in One Classroom: The Case of Mrs. Oublier (link) is an oft-cited piece of education research by David K. Cohen. It’s a case study of just a single teacher (Mrs. O) and her math teaching, at a time (the ’80s) when California lawmakers sought to radically transform math teaching in the state.

Mrs. Oublier is a pseudonym, oublier meaning “to forget” in French. She’s earned this pseudonym for thinking her teaching had undergone a revolution, though in the eyes of Cohen she hardly changed any of the important stuff. I guess the point is that she oublier-ed to make these changes? Or that reformers didn’t help her make them?

Anyway, a lot of the fun of the piece is seeing the funhouse-mirror ways in which Mrs. O interprets those cutting-edge ideas about manipulatives, small group work, and estimation. And Cohen has serious things to say about why policy-makers never quite reached Mrs. O in the way they intended to, though I might question some of his conclusions.

Another thing that’s interesting about this piece is what it’s not: a representative sample from the teaching population. It’s the story of one teacher. Cohen tells us that Mrs. O’s story matters, but why should we believe him?

There’s no denying that Cohen tells a good story. But isn’t research supposed to be more than a good story?

II. 

Mrs. O has been teaching second grade math for four years. The kids like her; colleagues like her; administrators think she’s doing a great job.

As a student, Mrs. O hadn’t liked math much, and she didn’t do too well in school. When she got to college, though, she started doing better. What changed? “I found that if I just didn’t ask so many why’s about things that it all started fitting into place,” she tells Cohen. So, that’s not a great start.

And yet, Mrs. O tells Cohen that she’s interested in helping her students really understand math. She also tells him that she’s experienced a real revolution in her teaching, a departure from the traditional, worksheet+drill methods she used when she began. On the basis of his observations, Cohen is strongly inclined to agree with her on this.

In the centerpiece episode, Cohen catches Oublier in the midst of a fairly ridiculous lesson. Oublier wants to teach her students about place value (so far so good). To do this, she wants to introduce another base system (debatable, but not necessarily a disaster). So Oublier gives each kid a cup of beans and a half-white/half-blue board.

Mrs. O had “place value boards” given to each student. She held her board up [eight by eleven, roughly, one half blue and the other white], and said: “We call this a place value board. What do you notice about it?”

Cathy Jones, who turned out to be a steady infielder on Mrs. O’s team, said: “There’s a smiling face at the top.”

On a personal note, I have been teaching 3rd and 4th Graders for four years and the idea of giving kids those little cups of beans gives me minor terrors. What if a cup spills? How early do you have to get to school to set up the beans? What if a kid eats a bean?

Anyway, after Mrs. O has ensured that all the kids noticed that their boards are half-white and half-blue, she starts the game. The game is supposed to be about grouping and regrouping in place value systems, but it’s really entirely about beans. She calls out a command, and the kids add a bean. At no time does she connect the beans to numbers.

According to Cohen, this was no accident, as Mrs. O wasn’t really a fan of making numbers explicit in her activities:

This was a crucial point in the lesson. The class was moving from what might be regarded as a concrete representation of addition with regrouping, to a similar representation of subtraction with regrouping. Yet she did not comment on or explain this reversal of direction. It would have been an obvious moment for some such comment or discussion, at least if one saw the articulation of ideas as part of understanding mathematics. Mrs. O did not teach as though she took that view. Hers seemed to be an activity-based approach: It was as though she thought that all the important ideas were implicit, and better that way.

Oublier is a huge believer in manipulatives — in fact, the transition from worksheets to manipulatives seems to be a big part of what her “revolution” entailed. For Mrs. O, kids learn through the physical manipulation of the objects. As in, learning is the direct result of touching beans:

Why did Mrs. O teach in this fashion? In an interview following the lesson I asked her what she thought the children learned from the exercise. She said that it helped them to understand what goes on in addition and subtraction with regrouping. Manipulating the materials really helps kids to understand math, she said. Mrs. O seemed quite convinced that these physical experiences caused learning, that mathematical knowledge arose from the activities.

Oublier tells Cohen that she relies heavily on a textbook, Mathematics Their Way, and that this text was the major source of some of her new ideas about physical activities and teaching math. From poking around, it looks like the whole text has been posted online, including the lesson that Mrs. O was caught teaching. Here’s what the bean-counting activity looks like in the text:

[Screenshot from Math Their Way: the bean-counting activity]

OK, now the next page of that activity:

[Screenshot from Math Their Way: the next page of the activity]

But you won’t believe what’s on the page after that:

[Screenshot from Math Their Way: yet another nearly identical page]

This is sort of getting repetitive so I’ll just skip ahead five pages:

[Screenshot from Math Their Way: five pages later, more of the same]

Cohen comes down pretty hard on this curriculum, and on Mrs. O for using it:

Math Their Way fairly oozes the belief that physical representations are much more real than symbols. This fascinating idea is a recent mathematical mutation of the belief, at least as old as Rousseau, Pestalozzi, and James Fenimore Cooper, that experience is a better teacher than mere books. For experience is vivid, vital, and immediate, whereas books are all abstract ideas and dead formulations.

I’ve focused on the manipulative episode, but that’s just one part of her teaching that’s detailed in the piece. According to Cohen, Oublier generally seems to adopt the exterior of cutting-edge math teaching while sort of missing the point. She asks kids to estimate, but doesn’t give them chances to think or share ideas. She uses manipulatives, but doesn’t really ask kids to think much with them. She puts kids into small groups, but basically uses this as a classroom management structure. She avoids numbers and abstraction wherever possible.

This was certainly not what California’s math reformers had in mind.

III. 

The point, for Cohen, is that California’s math reformers let Mrs. O down. But how, exactly?

I found myself needing more context for the California reforms than Cohen provides. Fortunately, the journal issue in which Mrs. O originally appeared was entirely dedicated to the California math reforms. (In fact, every piece in that issue was a different in-depth case study like Mrs. O.)

Cohen actually leads off the issue with a helpful summary of the aims and methods of the 1985 math reforms (link). At their center was a document, the California Math Framework. The Framework called for a transformation of math teaching away from rote memorization and drill, and towards a focus on conceptual understanding, teaching kids to communicate about math, problem solve, work in groups, make sense of math, etc.

So far, nothing new. Reform groups like NCTM have been pumping out these documents for a century.

What was new was the muscle California chose to employ. The state education office said that they would only reimburse districts for textbooks that met the standards of the Framework. And then they actually followed through by rejecting all the texts that publishers initially submitted. Eventually, the state got what they wanted and created an approved list of textbooks for districts to choose from.

(As Alan Schoenfeld notes in his Math Wars piece, California — along with Texas and New York — determine what gets published nationally because of the size of their markets. The publishers basically design their books for the big states, and the rest of the country gets dragged along. So California’s reform muscle had national implications.)

This was half the plan. The other half was to change the state tests for kids so that they also reflected the vision of the Framework. The idea was that if textbooks and tests were in place, teachers would come around all on their own.

I missed this the first few times, but this is why Cohen dwells so much on Oublier’s textbook choice. Oublier’s favored Math Their Way text was not an accepted California text, and Oublier’s district had adopted something else. Oublier likes Math Their Way, though, so she just uses that in her classroom instead. None of her superiors seems to mind either.

In other words, that entire “change teaching by making a list of textbooks” plan was sort of stupid. It failed to account for the ability of teachers to get other textbooks if they wanted to.

The fundamental assumption of the policy seemed to be that teachers need permission, or perhaps incentives, to teach in new ways. As Cohen points out — over and over — this is not the case. Teaching in fundamentally different ways implies believing that you should teach differently as well as knowing how to do so.

It’s pretty simple, actually: if you want to change teaching, you can’t ignore the teachers.

IV. 

Even as Cohen critiques the California reforms, he still seemed to me pretty cheery about the potential for policy to change teaching.

First, he really does seem to give a lot of agency to math textbooks. He keeps on talking about the influence of the Math Their Way book on Mrs. O. On the one hand, the book’s influence on her comes at the expense of the Framework’s reach. At the same time, if a textbook can really have such a strong impact on a teacher, then the premise of the California reforms has been upheld. If you’re a reformer reading Cohen, I imagine that your mind starts wandering: imagine what would’ve happened if we could’ve gotten the right book in her hands!

Beyond Cohen’s implicit optimism about textbook reform, he also wonders aloud about the possibility that a bit of incentive-engineering could have steered someone like Mrs. O towards better teaching:

“The only apparent rewards were those that she might create for herself, or that her students might offer. Nor could I detect any penalties for non-improvement, offered either by the state or her school district.”

These two sources of optimism, when put in context, seemed a bit dated to me. Cohen published this article in 1990, just after NCTM published its Curriculum and Evaluation Standards for School Mathematics in 1989. This was, in many ways, a higher-profile go at California’s Framework, and (surprisingly to all involved) it took off, becoming a blockbuster for NCTM.

In the 90s, NSF would fund the development of new math texts that were aligned with the NCTM standards. My sense is that they didn’t live up to the expectations of the textbook-optimists. The texts were just texts, tools that teachers could use well or poorly depending on their understanding of math and of teaching.

It turns out: textbooks can’t transform teachers.

(Textbooks, it also turns out, can become highly visible targets of controversy, and nearly all use of the reform textbooks became contentious in the 90s. So that seems like it needs to be part of the textbook-reform calculus.)

Cohen seems to think that Math Their Way transformed Mrs. O, but he also thinks that she didn’t really revolutionize her teaching. The changes were cosmetic. And there’s a huge difficulty in determining how the text impacted her, because of the plain fact that she chose this curriculum. Presumably, she chose it because she was disposed to. It fit with her understanding of math and of teaching. It didn’t fundamentally challenge her, and I see no reason to think that a text has any such power over a teacher, even when imposed.

Cohen’s other musing — about incentives — has echoes in No Child Left Behind and performance pay reforms. These reforms have also failed to live up to the dreams of the reformers, as all reforms do, and teaching chugs along, mostly as it has.

At times, it seemed to me that Cohen believes that the fundamental problem, for Mrs. O, is that her views on the nature of math remain unchanged:

…however much mathematics she knew, Mrs. O knew it as a fixed body of truths, rather than as a particular way of framing and solving problems. Questioning, arguing, and explaining seemed quite foreign to her knowledge of this subject. Her assignment, she seemed to think, was to somehow make the fixed truths accessible to her students.

I’m not particularly sympathetic to this critique. Math, among other things, is a fixed body of truths (theorems, facts, relationships) that we ought to help students know.

But forget that for a moment. Cohen sometimes seems to think that this isn’t just a problem for Mrs. O, but the root problem. If we could just help Oublier see that math isn’t quite as she thinks it is — that it’s dynamic, a source of puzzles, it’s about thinking and not just about knowing — then her teaching really would undergo a real revolution.

This seems to be where we are, right now, in math education reform. We’re not trying to save the world with NSF-funded textbooks, and we’re not hoping to incentivize great teaching. We believe, like Cohen, that the fundamental problem is one of learning, and that it really is fundamental: some ambitiously big thing that, if we can help teachers attain it, will let the rest of their teaching fall into place.

Right now, one version of the “fundamental problem” is productive struggle. NCTM has included this in their latest set of reform standards, the Principles to Actions standards. And if you’re in Baltimore this July, you can attend a three-day summer institute focused on productive struggle. The workshop promises to show how productive struggle is tied to every dimension of effective math instruction, from planning to feedback to wider advocacy.

I don’t think I believe in this sort of reform either. Cohen keeps drawing comparisons in this piece between teacher and student learning — both are challenging, he says, both take time. And that’s true. But imagine if we treated students like teachers. In other words, imagine if instead of teaching math to kids we had a workshop a few times a year where we tried to fundamentally alter their conceptions of math, and then sort of hoped that the rest of their math learning would just fall into place.

I know the comparison isn’t exactly direct, or fair, but I don’t believe that any knowledge can be altered by changing one fundamental element. Knowledge isn’t really structured that way, it seems to me. It’s not built on a foundation. To alter teaching you’d have to alter it broadly, not centrally. And broad change just can’t happen in a three-day workshop.

The final source of optimism that Cohen raises is that maybe Mrs. O represents progress for math reform. Though she hasn’t seemed to internalize the message of the reform, this sort of messy progress is what progress actually looks like.

I have no way of knowing if that’s true, but it certainly strikes me as possible. I haven’t read more recent work of Cohen’s. I wonder if, looking back on the last 30 years of reform, he’s still as optimistic.

V.

Hey, wait a second! This is just a single case study. We were swept along in this gripping tale (aptly summarized) and assumed she represented some larger trend, but that’s just the illusion of focus. Cohen’s fooled us, then, hasn’t he? Maybe Mrs. O means nothing at all. (Or, at least, nothing beyond her own case.)

There are two things that temper this sort of skepticism. First, the journal that published Mrs. O also published four other case studies in the same issue (open version). So in addition to the case of Mrs. O, you also get the case of Carol Turner, Cathy Swift, Joe Scott, and Mark Black.

(Unclear if the other pseudonyms are also supposed to be deeply meaningful. Mark Black, because policymakers treat him like a black box. Cathy Swift, because the reforms were too fast! The other two stump me. Maybe they’re anagrams? Joe Scott = COOT JEST.)

Five case studies are only a bit better than one, but these other four cases present a lot of the same mixed-success-at-best themes as Mrs. O’s case. That helps.

The other thing that tempers skepticism about Mrs. O’s relevance is that Cohen actually also identified the “forgotten teacher” problem in a very different piece of research.

That other piece is called Instructional Policy and Classroom Performance: The Mathematics Reform in California. This time around, Cohen and his team do pretty much the opposite of “sit in the back of a classroom and watch.” They survey 1,000 California elementary teachers. They ask teachers to rate how frequently they employ various instructional activities in class. Hey, they ask, wouldn’t it be nice if all these teacher responses really pointed to two types of teachers? We could call them “traditional” and “reform-friendly”…

Err, did I say “traditional”? I meant “conventional”:

[Screenshot: the survey’s “conventional practice” items]

 

Anyway, Cohen’s group also asked teachers what professional learning opportunities they had, in relation to the math reforms. (I love that ‘Marilyn Burns’ is an option.)

[Screenshot: the survey’s list of professional learning opportunities]

 

What they find basically supports Cohen’s take in his Mrs. O piece — reform is possible, but only when it focuses on professional development that targets teacher learning:

Our results suggest that one may expect such links when teachers’ opportunities to learn are:

  • grounded in the curriculum that students study;
  • connected to several elements of instruction (for example, not only curriculum but also assessment);
  • and extended in time.

Such opportunities are quite unusual in American education, for professional development rarely has been grounded either in the academic content of schooling or in knowledge of students’ performance. That is probably why so few studies of professional development report connections with teachers’ practice, and why so many studies of instructional policy report weak implementation: teachers’ work as learners was not tied to the academic content of their work with students.

Some people love the Mrs. O piece, but hate the sort of study that we previously read here, the one about teacher-centered instruction for first graders. First, because such studies rely on teacher responses to survey questions, and how much can you really learn from that? Second, because the statistical work can hide researcher assumptions that then become tricky to dig out. Third, because with scale come quality control issues. You really no longer know what you’re dealing with.

To which, we might ask: why did Cohen produce exactly this kind of study when it came to evaluating the success of California’s reforms?

I talk to just as many people, though, who hold the complete opposite view. To them, something like the Mrs. O study is useless, as it doesn’t help us identify the causal forces at work. Maybe the reform failed Mrs. O, but compared to what? There are no controls, and without some sort of random assignment to a treatment can we really be sure that a focus on teacher-learning would make the difference Cohen said it would?

Is it too soft of me to say that both critiques are right?

It’s not my job to study teaching, but it sure seems hard. Every research approach has trade-offs. The way I see things, it’s best to use multiple, incompatible approaches to study the same things in teaching. Why? Because teaching itself invites wildly different, incompatible perspectives.

At one point, Cohen points out that Mrs. Oublier seemed comfortable living in contradiction:

Elements in her teaching that seemed contradictory to an observer therefore seemed entirely consistent to her, and could be handled with little trouble.

But there really isn’t anything strange here at all. Everyone is willing to live with some contradictions in their lives. Contradictions can be unlivable, but they can also be productive — in teaching, in life, but also in research. Intellectually incompatible perspectives can be desirable.

Anyway, enough about all this. What should we read next?

Reading Research: What Sort of Teaching Helps Struggling First Graders The Most?

I always get conflicted about reading an isolated study. I know I’m going to read it poorly. There will be lots of terms I don’t know; I won’t get the context of the results. I’m assured of misreading.

On the other side of the ledger, though, is curiosity, and the fun that comes from trying to puzzle these sort of things out. (The other carrot is insight. You never know when insight will hit.)

So, when I saw Heidi talk about this piece on Twitter, I thought it would be fun to give it a closer read. It’s mathematically interesting, and much of it is obscure to me. Turns out that the piece is openly available, so you can play along at home. Let’s take a closer look.

I. 

The stakes of this study are both high and crushingly low. Back in 2014 when this was published, the paper caught some press that picked up on its ‘Math Wars’ angle. For example, you have NPR‘s summary of the research:

Math teachers will often try to get creative with their lesson plans if their students are struggling to grasp concepts. But in “Which Instructional Practices Most Help First-Grade Students With and Without Mathematics Difficulties?” the researchers found that plain, old-fashioned practice and drills — directed by the teacher — were far more effective than “creative” methods such as music, math toys and student-directed learning.

Pushes all your teachery buttons, right?

But if the stakes seem high, the paper is also easy to disbelieve, if you don’t like the results.

Evidence about teaching comes in a lot of different forms. Sometimes, it comes from an experiment; y’all (randomly chosen people) try doing this, everyone else do that, and we see what happens. Other times we skip the ‘random’ part and find reasonable groups to compare (a ‘quasi-experiment‘). Still other times we don’t try for statistically valid comparisons between groups, and instead a team of researchers will look very, very closely at teaching in a methodologically rich and cautious way.

And sometimes we take a big pile of data and poke at it with a stick. That’s what the authors of this study set out to do.

I don’t mean to be dismissive of the paper. I’m writing about it because I think it’s worth writing about. But I also know that lots of us in education use research as a bludgeon. This leads to educators reading research with two questions in mind: (a) Can I bludgeon someone with this research? (b) How can I avoid getting bludgeoned by this research?

That’s why I’m taking pains to lower the stakes. This paper isn’t a crisis or a boon for anyone. It’s just the story of how a bunch of people analyzed a bunch of interesting data.

Freed of the responsibility of figuring out if this study threatens us or not, let’s muck around and see what we find.

II. 

The researchers lead off with a nifty bit of statistical work called factor analysis. It’s an analytical move that, as I read more about, I find both supremely cool and metaphysically questionable.

You might have heard of socioeconomic status. Socioeconomic status is supposed to explain a lot about the world we live in. But what is socioeconomic status?

You can’t directly measure someone’s socioeconomic status. It’s a latent variable, one responsible for a myriad of other observable variables, such as parental income, occupational prestige, the number of books lying around your parents’ house, and so on.

None of these observables, on their own, can explain much of the variance in student academic performance. If your parents have a lot of books at home, that’s just it: your parents have a lot of books. That doesn’t make you a measurably better student.

Here’s the way factor analysis works, in short. You get a long list of responses to a number of questions, or a long list of measurements. I don’t know, maybe there are 100 variables you’re looking at. And you wonder (or program a computer to wonder) whether these can be explained by some smaller set of latent variables. You see if some of your 100 variables tend to vary as a group, e.g. when income goes up by a bit, does educational attainment tend to rise too? You do this for all your variables, and hopefully you’re able to identify just a few latent variables that stand behind your big list. This makes the rest of your analysis a lot easier; much better to compare 3 variables than 100.

That’s what we do for socioeconomic status. That’s also what the authors of this paper do for the instructional techniques teachers use with First Graders.
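Just to make the mechanics less mysterious, here’s a toy sketch of the idea in Python, using scikit-learn’s off-the-shelf FactorAnalysis. It’s my own illustration: the fake “survey items” below are not from the actual study, and the authors used their own statistical machinery.

```python
# A toy illustration of factor analysis, not the procedure from the paper.
# We invent 100 fake "teachers" answering 6 survey items and ask whether
# a couple of latent factors can explain how the answers vary together.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_teachers = 100

# Two hidden "styles" drive the responses (this is the latent structure).
style_a = rng.normal(size=n_teachers)   # e.g., lots of individual practice
style_b = rng.normal(size=n_teachers)   # e.g., lots of group/real-life work

# Six observed survey items, each a noisy reflection of one hidden style.
items = np.column_stack([
    style_a + rng.normal(scale=0.5, size=n_teachers),  # worksheets
    style_a + rng.normal(scale=0.5, size=n_teachers),  # routine drill
    style_a + rng.normal(scale=0.5, size=n_teachers),  # textbook problems
    style_b + rng.normal(scale=0.5, size=n_teachers),  # mixed-ability groups
    style_b + rng.normal(scale=0.5, size=n_teachers),  # real-life problems
    style_b + rng.normal(scale=0.5, size=n_teachers),  # peer tutoring
])

fa = FactorAnalysis(n_components=2)
fa.fit(items)

# Each row is a latent factor; large-magnitude entries ("loadings") show
# which survey items move together under that factor.
print(np.round(fa.components_, 2))
```

If the factor analysis does its job, the first three items load mostly on one factor and the last three on the other (up to sign and rotation). That’s the whole trick: lots of observed variables, a couple of latent ones standing behind them.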

I’m new to all this, so please let me know if I’m messing any of this up, but it sure seems to me tough to figure out what exactly these latent variables are. One possibility is that all the little things that vary together — the parental income, the educational attainment, etc. — all contribute to academic outcomes, but just a little bit. Any one of them would be statistically irrelevant, but together, they have oomph.

This would be fine, I guess, but then why bother grouping them into some other latent variable? Wouldn’t we be better off saying that a bunch of little things can add up to something significant?

The other possibility is that socioeconomic status is some real, other thing, and all those other measurable variables are just pointing to this big, actual cause of academic success. What this ‘other thing’ actually is, though, remains up in the air.

(In searching for other people who worried about this, I came across a piece from History and Philosophy of Psychology Bulletin called ‘Four Queries About Factor Reality.’ Leading line: ‘When I first learned about factor analysis, there were four methodological questions that troubled me. They still do.’)

So, that’s the first piece of statistical wizardry in this paper. Keep reading: there’s more!

III.

Back to First Graders. The authors of this paper didn’t collect this data; the Department of Education, through the National Center for Education Statistics, ran the survey.

The NCES study was immense. It’s longitudinal, so we’re following the same group of students over many years. I don’t really know the details, but they’re aiming for a nationally representative sample of participants in the study. We’re talking over ten-thousand students; their parents; thousands of teachers; they measured kids’ height, for crying out loud. It’s an awe-inspiring dataset, or at least it seems that way to me.

As part of the survey, they ask First Grade teachers to answer questions about their math teaching. First, 19 instructional activities…

[Screenshot: the 19 instructional activity survey items]

…and then, 29 mathematical skills.

[Screenshot: the 29 mathematical skill survey items]

Now, we can start seeing the outlines of a research plan. Teachers tell you how they teach; we have info about how well these kids performed in math in Kindergarten and in First Grade; let’s find out how the teaching impacts the learning.

Sounds good, except HOLY COW look at all these variables. 19 instructional techniques and 29 skills. That’s a lot of items.

I think you know what’s coming next…


FACTOR ANALYSIS, BABY!

So we do this factor analysis (beep bop boop boop) and it turns out that, yes, indeed some of the variables vary together, suggesting that there are some latent, unmeasured factors that we can study instead of all 48 of these items.

Some good news: the instructional techniques only got grouped with other instructional techniques, and skills got grouped with skills. (It would be a bit weird if teachers who teach math through music focused more on place value, or something.)

I’m more interested in the instructional factors, so I’ll focus on the way these 19 instructional techniques got analytically grouped:

[Screenshot: factor loadings for the 19 instructional techniques]

The factor loadings, as far as I understand, can be interpreted as correlation coefficients, i.e. higher means a tighter fit with the latent variable. (I don’t yet understand Cronbach’s Alpha or what it signifies. For me, that’ll have to wait.)

Some of these loadings seem pretty impressive. If a teacher says they frequently give worksheets, yeah, it sure seems like they also talk about frequently running routine drills. Ditto with ‘movement to learn math’ and ‘music to learn math.’

But here’s something I find interesting about all this. The factor analysis tells you what responses to this survey tended to vary together, and it helps you identify four groups of covarying instructional techniques. But — and this is the part I find so important — the RESEARCHERS DECIDE WHAT TO CALL THEM.

The first group of instructional techniques all focus on practicing solving problems: students practice on worksheets, or from textbooks, or drill, or do math on a chalkboard. The researchers name this latent variable ‘teacher-directed instruction.’

The second group of covarying techniques are: mixed ability group work, work on a problem with several solutions, solving a real life math problem, explaining stuff, and running peer tutoring activities. The researchers name this latent variable ‘student-centered instruction.’

I want to ask the same questions that I asked about socioeconomic status above. What is student-centered instruction? Is it just a little bit of group work, a little bit of real life math and peer tutoring, all mushed up and bundled together for convenience’s sake? Or is it some other thing, some style of instruction that these measurable variables are pointing us towards?

The researchers take pains to argue that it’s the latter. Student-centered activities, they say, ‘provide students with opportunities to be actively involved in the process of generating mathematical knowledge.’ That’s what they’re identifying with all these measurable things.

I’m unconvinced, though. We’re supposed to believe that these six techniques, though they vary together, are really a coherent style of teaching, in disguise. But there seems to me a gap between the techniques that teachers reported on and the style of teaching they describe as ‘student-centered.’ How do we know that these markers are indicators of that style?

Which leads me to think that they’re just six techniques that teachers often happen to use together. They go together, but I’m not sure the techniques stand for much more than what they are.

Eventually — I promise, we’re getting there — the researchers are going to find that teachers who emphasize the first set of activities help their weakest students more than teachers emphasizing the second set. And, eventually, NPR is going to pick up this study and run with it.

If the researchers decide to call the first group ‘individual math practice’ and the second ‘group work and problem solving’ then the headline news is “WEAKEST STUDENTS BENEFIT FROM INDIVIDUAL PRACTICE.” Instead, the researchers went for ‘teacher-directed’ and ‘student-centered’ and the headlines were “TEACHERS CODDLING CHILDREN; RUINING FUTURE.”

I’m not saying it’s the wrong choice. I’m saying it’s a choice.

IV. 

Let’s skip to the end. Teacher-directed activities helped the weakest math students (MD = math difficulties) more than student-centered activities.

[Screenshot: results comparing instructional factors for students with and without MD]

The researchers note that the effect sizes are small. Actually, they seem a bit embarrassed by this and argue that their results are conservative, and the real gains of teacher-directed instruction might be higher. Whatever. (Freddie deBoer reminds us that effect sizes in education tend to be modest, anyway. We can do less than we think we can.)

Also ineffective for learning to solve math problems: movement and music, using calculators to get answers instead of figuring them out, and “manipulatives.” (The researchers call all of these “student-centered.”)

There’s one bit of cheating in the discussion, I think. The researchers found another interesting thing in the teacher survey data. When a teacher has a lot of students with math difficulty in a class, they are more likely to use activities involving calculators and movement/music than they otherwise might be:

[Screenshot: class percentage of students with MD vs. instructional activities]

You might recall that these activities aren’t particularly effective math practice, and so they don’t lead to kids getting much better at solving problems.

By the time you get to the discussion of the results, though, here’s what they’re calling this: “the increasing reliance on non-teacher-directed instruction by first grade teachers when their classes include higher percentages of students with MD.”

Naming, man.

This got picked up by headlines, but I think the thing to check out is that the “student-centered” category did not correlate with the percentage of struggling math students in a class. That doesn’t sound to me like non-teacher-directed techniques get relied on when teachers have more weak math students in their classes.

The headline news for this study was “TEACHERS RELY ON INEFFECTIVE METHODS WHEN THE GOING GETS ROUGH.” But the headline probably should have been “KIDS DON’T LEARN TO ADD FROM USING CALCULATORS OR SINGING.”

V. 

Otherwise, though, I believe the results of this study pretty much without reservation.

Some people on Twitter worried about using a test with young children, but that doesn’t bother me so much. There are a lot of things that a well-designed test can’t measure that I care about, but it certainly measures some of the things I care about.

Big studies like this are not going to be subtle. You’re not going to get a window into the most effective classrooms for struggling students. You’re not going to get details about what, precisely, is ineffective about ineffective teaching. You’re not going to get nuance.

Then again, it’s not like education is a particularly nuanced place. There are plenty of people out there who take the stage to provide ridiculously simple slogans, and I think it’s helpful to take the slogans at their word.

Meaning: to the extent that your slogan is ‘fewer worksheets, more group work!’, that slogan is not supported by this evidence. Ditto with ‘less drill, more real life math!’

(I don’t have links to people providing these slogans, but that’s partly because scrolling through conference hashtags gives me indigestion.)

And, look, is it really so shocking that students with math difficulties benefit from classes that include proportionally more individual math practice?

No, or at least based on my experience it shouldn’t be. But what the headlines get wrong is the idea that this sort of teaching is simple. It’s hard to find the right sort of practice for students. It’s also hard to find classroom structures that give strong and struggling students valuable practice to work on at the same time. It’s hard to vary practice formats, hard to keep it interesting. Hard to make sure kids are making progress during practice. All of this is craft.

My takeaway from this study is that struggling students need more time to practice their skills. If you had to blindly choose a classroom that emphasized practice or real-life math for such a student, you might want to choose practice.

But I know from classroom teaching that there’s nothing simple about helping kids practice. It takes creativity, listening, and a lot of careful planning. Once we get past some of the idealistic sloganeering, I’m pretty sure most of us know this. So let’s talk about that: how we help kids practice their skills in ways that keep everybody in the room thinking and engaged, and that don’t make children feel stupid or like math hates them.

But as long as we trash-talk teacher-directed work and practice, I think we’ll need pieces like this as a correction.