Feedbackless Feedback


Not all my geometry students bombed the trig quiz. Some students knew exactly what they were doing:

[screenshot of student work: a correct solution]

A lot of my students, however, multiplied the tangent ratio by the height of their triangle:

[screenshot of student work: the tangent ratio multiplied by the height]

In essence, it’s a corresponding parts mistake — the ’20’ corresponds to the ‘0.574’. The situation calls for division.
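In numbers (assuming, since the screenshots aren't reproduced here, that the 20 is the height and the unknown is the base of the triangle):

```latex
\tan\theta = \frac{\text{height}}{\text{base}}
\quad\Longrightarrow\quad
0.574 = \frac{20}{\text{base}}
\quad\Longrightarrow\quad
\text{base} = \frac{20}{0.574} \approx 34.8
```

Multiplying instead gives 20 × 0.574 ≈ 11.5, which is the mistake above.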

Half my class made this mistake on the quiz. What to do?


Pretty much everyone agrees that feedback is important for learning, but pretty much nobody is sure what effective feedback looks like. Sure, you can find articles that promise 5 Research-Based Tips for great feedback, but there’s less there than meets the eye. You get guidelines like ‘be as specific as possible,’ which is the sort of Goldilocks non-advice that education seems deeply committed to providing. Other advice is too vague to serve as anything but a gentle reminder of what we already know: ‘present feedback carefully,’ etc. You’ve heard this from me before.

As far as I can tell, this vagueness and confusion accurately reflects the state of research on feedback. The best, most current review of feedback research (Valerie Shute’s) begins by observing that psychologists have been studying this stuff for over 50 years. And yet: “Within this large body of feedback research, there are many conflicting findings and no consistent pattern of results.”

Should feedback be immediate or delayed? Should you give lots of info, or not very much at all? Written or oral? Hints or explanations? If you’re hoping for guidance, you won’t find it here. (And let’s not forget that the vast majority of this research takes place in environments that are quite different from where we teach.)

Here’s how bad things are: Dylan Wiliam, the guy who wrote the book on formative assessment, has suggested that the entire concept of feedback might be unhelpful in education.

It’s not looking like I’m going to get any clarity from research on what to do with this trig quiz.


I’m usually the guy in the room who says that reductionist models are bad. I like messy models of reality. I get annoyed by overly-simplistic ideas about what science is or does. I don’t like simple models of teaching — it’s all about discovery — because I rarely find that things are simple. Messy, messy, (Messi!), messy.

Here’s the deal, though: a reductionist model of learning has been really clarifying for me.

The most helpful things I’ve read about feedback have been coldly reductive. Feedback doesn’t cause learning. Paying attention, thinking about new things — that leads to learning. Feedback either gets someone to think about something valuable, or it does nothing at all. (Meaning: it’s affecting either motivation or attention.)

Dylan Wiliam was helpful for me here too. He writes,

“If I had to reduce all of the research on feedback into one simple overarching idea, at least for academic subjects in school, it would be this: feedback should cause thinking.”

When is a reductive theory helpful, and when is it bad to reduce complexity? I wonder if reductive theories are maybe especially useful in teaching because the work has so much surface-level stuff to keep track of: the planning, the meetings, all those names. It’s hard to hold on to any sort of guideline during the flurry of a teaching day. Simple, powerful guidelines (heuristics?) might be especially useful to us.

Maybe, if the research on feedback were less of a random assortment of inconsistent results, it would be possible to scrape together a non-reductive theory of it.

Anyway, this is getting pretty far afield. What happened to those trig students?


I’m a believer that the easiest way to understand why something is wrong is usually to understand why something else is right. (It’s another of the little overly-reductive theories I use in my teaching.)

The natural thing to do, I felt, would be to mark my students’ papers and offer some sort of explanation — written, verbal, whatever — about why what they did was incorrect, why they should have done 20/tan(30) rather than 20*tan(30). This seems to me the most feedbacky feedback possible.

But would that help kids learn how to accurately solve this problem? And would it get them to think about the difference between cases that call for each of these oh-so-similar calculations? I didn’t think it would.

So I didn’t bother marking their quizzes, at least right away. Instead I made a little example-based activity. I assigned the activity to my students in class the next day.


I’m not saying ‘here’s this great resource that you can use.’ This is an incredibly sloppy version of what I’m trying to describe — count the typos, if you can. And the explanation in my example is kind of…mushy. Could’ve been better.

What excites me is that this activity is replacing what was for me a far worse activity. Handing back these quizzes focuses their attention completely on what they did and what they could have done to get the question right. There’s a time for that too, but this wasn’t a time for tinkering, it was a time for thinking about an important distinction between two different problem types. This activity focused attention (more or less) where it belonged.

So I think, for now, this is what feedback comes down to. Trying to figure out, as specifically as possible, what kids could learn, and then trying to figure out how to help them learn it.

It can be a whole-class activity; it can be an explanation; it can be practice; it can be an example; it can be a new lesson. It doesn’t need to be a comment. It doesn’t need to be personalized for every student. It just needs to do that one thing, the only thing feedback ever can do, which is help kids think about something.

The term ‘feedback’ comes with some unhelpful associations — comments, personalization, a conversation. It’s best, I think, to ignore these associations. Sometimes, it’s helpful to ignore complexity.


14 thoughts on “Feedbackless Feedback”

  1. At NCTM, Dylan Wiliam talked about how a better way to talk about formative assessment might be to talk about pedagogies of engagement and pedagogies of responsiveness, because FA can be interpreted in so many more ways that don’t actually support learning very well.

    I wonder if something similar could be said of feedback. This sounds like a great pedagogy of responsiveness to me, and I wonder how my teaching would change if I thought about pedagogies of responsiveness instead of feedback.

    But maybe it’s just semantics and doesn’t matter.


    1. I’m not sure if it matters or not, but I have a hard time seeing the advantage of those terms. If teachers interpret ‘formative assessment’ in too many ways, why can’t they misinterpret ‘pedagogies of responsiveness’ too?

      I think the real issue with all these teaching concepts — feedback, formative assessment, pedagogies of responsiveness — is that they’re too ambitious. ‘Feedback’ captures too much of what we do in the classroom to be useful. There is no way to think about how to give effective feedback, because that would have to be a theory of how to respond to students while they’re practicing in class, discussing a new example, failing a quiz, trying a problem-solving task, etc. You’re always giving feedback, of one sort or another, in the classroom. It’s not like poking a rat in a sealed box; it’s teaching.

      Formative assessment seems like it comes closer, but it also doesn’t seem ‘ready for scale.’ It still captures too much of teaching. When are we not assessing students in the classroom? When are we not listening for gasps of comprehension, and using that info to guide our next decision? Any pedagogical concept that encompasses both exit tickets and Shell Center tasks is talking about too much of teaching. We’re going to lose ourselves in the process.

      The concepts we need for teaching — it seems to me right now — need to be much, much more narrowly focused.

      I like this post, but it would probably be useless to a history teacher. That’s fine, and probably the way it should be. We’ll communicate best about teaching when our talk is more closely hitched to content.

      And, while I don’t claim to have any certainty here, I think we’ll do best to theorize about teaching in much narrower chunks. Let’s have a theory of how to effectively give exit tickets, and another bit of theorizing about what those Shell Center activities can do. We don’t need to talk about ‘how to give effective feedback.’ Instead, let’s talk about how to respond to kids’ work after a quiz, and then let’s have AN ENTIRELY SEPARATE CONVERSATION about how to respond to kids who have a bunch of mistakes in their work during class. And maybe even this is too high-level a discussion; maybe this needs to be connected to specific content if we’re going to make sense to each other.

      Filed under: nitty-gritty theorizing


      1. I think you’re right on about more specific terms being far more useful. I’m sure we could spend all day exploring this. I have one more point — I like pedagogies of responsiveness in part because it communicates a value of teaching — that teaching should be responsive to what students know and don’t know, and that day-to-day pedagogy should honor that. Formative assessment seems to communicate less of a value.

        Here’s another example. I really like the phrase backwards design. I think it communicates a value about how curriculum is constructed — that it begins with the end in mind. Lots of other phrases about curriculum don’t communicate that value.

        I’m splitting hairs here, but I think there’s something to phrases like this — “pedagogies of engagement”, “pedagogies of responsiveness”, “backwards design”, “assessment that moves learning forward”. Early in my teaching, I literally just didn’t do most of that. My assessment was formulaic, I planned activities day by day trying to do cool stuff, I didn’t respond to what students knew and didn’t know. I think the fact that those phrases communicate values is useful. Doesn’t make them as useful as more specific ways of talking about teaching, but adds something.


  2. Formative feedback at scale is possible, depending, of course, on definitions. I teach an intro programming course for business undergrads. They do 40 hands-on projects (plus exams) per semester. Students are allowed to resubmit exercises that aren’t up to standard. That means 2,000+ manually graded submissions per semester for a class of 45 or so.

    I wrote s/w to make it practical. Each exercise has a rubric for formative feedback. Each rubric item (like “Input validation”) has canned responses (like “Put user input into a string variable, check for numeric, and only then store in a numeric variable.”) Clicking on rubric item responses makes grading fast (enough) to be practical at scale. Perfect? No, but not bad.
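    The click-a-rubric-item idea can be sketched in a few lines. Everything below (the dictionary, the item names beyond the two quoted above, the function name) is illustrative, not the actual s/w:

```python
# Sketch of rubric-based canned feedback (illustrative names, not the real tool).
# Each rubric item maps to a canned response the grader can attach with a click.
RUBRIC = {
    "Input validation": (
        "Put user input into a string variable, check for numeric, "
        "and only then store in a numeric variable."
    ),
    "Variable naming": "Use descriptive variable names instead of single letters.",
}

def feedback_for(clicked_items):
    """Assemble the canned responses for the rubric items the grader clicked."""
    return [RUBRIC[item] for item in clicked_items if item in RUBRIC]
```

    In the real tool, clicking a rubric item would correspond to something like `feedback_for(["Input validation"])`; the point is that the grader composes feedback by selection rather than by typing.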

    It would be harder with math. Program source code more-or-less exposes student thinking, making it relatively easy to find mistakes. In math, it would be harder to identify thought errors that lead to solution errors, I’m guessing.

    There are some short movies showing the feedback system at It’s open source, but I wouldn’t recommend the current version. I’m rewriting it, based on lessons learned from a few years of use.


  3. I know you’re blogging about feedback and learning science and whatnot, but you used trig as an example so I’m just going to hijack your comments now…

    A couple things that I wanted to say about the content:
    1) Is this the sort of mistake you described recently where, when you’re learning something new, you over-apply the pattern that looks right? Often one’s first trig problem is finding a missing opposite side with a known hypotenuse, and so the final calculation is hyp * sin(angle). Makes sense that kids would think “trig problems look right when they are side * ratio(angle)”. So this is a predictable error.

    2) I have no idea what trig lessons preceded this one, but I’ve been interested lately in the role of learning to do calculations reliably before algorithms, symbols, and shortcuts are introduced. I think we would still see kids overgeneralize side * ratio(angle) whenever we introduced formal trig function notation, but I’ve been interested in just how long we could afford, in high school, to have kids looking at a table of similar triangles, and then a trig ratio table, and getting good at all of the calculation and proportion work in an explicitly proportion-y context, before introducing details like the ratio-as-single-number, how to use a calculator to find the ratio, and function notation.

    In some sense the mistake here is that kids aren’t using their proportion schema (where, by now, it’s usually okay to have #/x = #/# as well as x/# = #/#); they’re using a new and buggy trig problem schema (buggy in part, perhaps, because they’re grappling with ratio-as-single-number, function notation, and calculator steps all in a solving-proportions context, and proportions aren’t always kids’ best thing anyway).

    It seems to me like kids have forever to get good at adding, subtracting, multiplying, and dividing; then we speed up on exponents, zoom through radicals, and make them master the concept, calculations, and algorithms for logs and trig ratios all in one day, instead of doing concepts, then calculations for a good long time, and then algorithms and notation.


    1. “Is this the sort of mistake you described recently where, when you’re learning something new, you over-apply the pattern that looks right?”


      “I’ve been interested lately in the role of learning to do calculations reliably before algorithms, symbols, and shortcuts are introduced.”

      Me too! Especially with trig. Here was my sequence, and email me if you’d like to see my materials:
      (1) Compare the steepness of various ramps, and identify h:w ratio as an important steepness comparison.
      (2) Review similarity and scaled figures.
      (3) Give students h:w ratios and ask them to find missing sides of triangles.
      (4) Really emphasize that a h:w ratio in decimal form (e.g. .5672) can be thought of in fraction or ratio form (e.g. .5672:1 or .5672/1).
      (5) Then, give a trig table, and do the same exercises but just looking up h:w ratios in the ‘tan’ column.
      (6) At the end, introduce problems where h:hypotenuse ratios are useful, and use those on the trig table.
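      Step (5) comes down to a table lookup plus a proportion. A toy sketch (the table values are rounded, and the names are mine, not from the unit materials):

```python
# Tiny slice of a trig table: angle in degrees -> 'tan' column (h:w ratio).
# Rounded, illustrative values only.
TAN_TABLE = {20: 0.364, 30: 0.577, 45: 1.0}

def missing_width(angle, height):
    """height : width = ratio : 1, so width = height / ratio (the quiz's division case)."""
    return height / TAN_TABLE[angle]

def missing_height(angle, width):
    """height = width * ratio (the multiplication case)."""
    return width * TAN_TABLE[angle]
```

      The two functions are exactly the two problem types the quiz confused: one calls for dividing by the ratio, the other for multiplying.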

      My sense, from this unit, is the overgeneralization happened before I introduced tan at all. It happened when we were just using h:w ratios. For these kids, their proportions schemas weren’t working quite right, before the trig came into play.

      So, without denying the phenomenon you’re describing, I’m not sure that’s what happened here.


  4. I looked at the bird problem again, and the second lot didn’t check the answer.
    11.5 + 20 < 46, so not even a triangle.
    "Check your working" is almost valueless if one doesn't check the solution.

