Monday, November 16, 2020

Student Perception Survey (Fall 2020)

Three times a year I give my students a Student Perception Survey to see how they're experiencing my class. I wrote in depth here (link) about what the surveys do and how I use them. To process and reflect on my first administration of the survey last week, I thought it'd be helpful to publish my reflections here.

LOW-INFERENCE ANALYSIS OF RESULTS

  • Altogether, I got 96 responses out of my 126 students across 5 sections.
  • All 5 sections completed the survey at approximately the same rate.
  • For the most part, the students completed the survey if and only if they were present on the day I gave the survey. It was open on Google Classroom, so a couple submitted it after.
    • On the day we gave the survey, there were 96 students present (out of 126).
    • For the 96 who completed, this is when they completed the survey
      • 5 = day before (I posted to Classroom a day early)
      • 73 = day we did it in class together
      • 17 = 1-12 days after (as of this writing)
  • Most students who started the survey responded to all the items, though there were a few students who skipped some questions.
  • Performance by category, aggregated across all sections
    • Scores
      • Captivates 73%
      • Challenges 78%
      • Cares 73%
      • Clarifies 77%
      • Classroom Management 89%
      • Consolidates 76%
      • Confers 77%
    • I notice that except for Classroom Management, everything else falls within a pretty narrow range of 73-78%.
  • Performance by section, aggregated across all items
    • My overall rate of teacher-favorable responses in each section was 75%, 78%, 79%, 80%, 81%.
  • Looking at category averages for each section
    • My lowest-performing category/section was 64% in Captivates for one section.
    • My highest-performing category/section was 93% for Classroom Management in a different section.
  • Looking at performance on individual items, with data aggregated across all sections:
    • Highest: 97%: Students in this class treat the teacher with respect (Classroom Management)
    • Lowest: 45%: My teacher seems to know if something is bothering me (Cares)
  • I had comparable performance on the questions that are positively framed ("My teacher knows when the class understands, and when we do not") and the questions that are negatively framed ("When s/he is teaching us, my teacher thinks we understand even when we don't"), getting 79% and 77% teacher-favorable, respectively.
  • When comparing class averages for each item, looking at the spread between different sections (a sketch of how I tally these numbers follows this list):
    • Biggest spread: "My teacher wants me to explain my answers--why I think what I think." The five sections responded 63%, 78%, 81%, 81%, 100% --> Spread = 37%.
    • Smallest spread: "My teacher in this class makes me feel that s/he really cares about me." The five sections responded 87%, 89%, 89%, 90%, 90% --> Spread = 3%.
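
For anyone curious about the mechanics behind these numbers, here is a minimal sketch, in Python, of the kind of tally I'm describing. The data layout, the field names, and the single sample row are all hypothetical (the real survey export is shaped differently), but the favorable-rate and max-minus-min spread calculations are the same arithmetic I used above.

    from collections import defaultdict

    # Hypothetical layout: one row per (student, item) response.
    # The real survey export is shaped differently; this is just
    # to illustrate the arithmetic.
    responses = [
        {"section": 1, "category": "Cares",
         "item": "My teacher seems to know if something is bothering me",
         "favorable": False},
        # ... one dict per student per item ...
    ]

    def favorable_rate(rows):
        # Percent of responses that are teacher-favorable.
        return 100 * sum(r["favorable"] for r in rows) / len(rows)

    # Category scores, aggregated across all sections.
    by_category = defaultdict(list)
    for r in responses:
        by_category[r["category"]].append(r)
    for category, rows in sorted(by_category.items()):
        print(f"{category}: {favorable_rate(rows):.0f}%")

    # Per-item spread: max minus min of the per-section averages.
    by_item = defaultdict(lambda: defaultdict(list))
    for r in responses:
        by_item[r["item"]][r["section"]].append(r)
    for item, sections in by_item.items():
        rates = [favorable_rate(rows) for rows in sections.values()]
        print(f"{item}: spread = {max(rates) - min(rates):.0f}%")

Max-minus-min is a crude measure of spread, but with only five sections it tells me what I want to know: which items land very differently in different rooms.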

MAKING INFERENCES

  • Usually Classroom Management is one of the lower categories for me, whereas now it's especially high. Given that we are fully virtual, and there isn't so much of a "classroom" to manage, I guess this isn't surprising?
    • Consider the classic situation: we're waiting for the whole class to be quiet so I can share some important info; it's taking a long time, so I start to use my body language to communicate that I'm frustrated, concerned, or disappointed. Eventually everyone quiets down, we get started, and we move on with our lives. But that experience might trigger responses to any of these items that are not teacher-favorable:
      • Students in this class treat the teacher with respect.
      • Student behavior in this class is a problem.
      • I hate the way that students behave in this class.
      • My classmates behave the way my teacher wants them to.
      • Student behavior in this class makes the teacher angry.
    • It's a lot easier to mute a kid on Zoom, or push them out to the waiting room. To be honest, though, I've only had to do that once. Usually for students to be engaged in off-task behavior, there has to be some kind of off-task behavior to engage in. I'm finding that there is just less engagement, less exchange, on Zoom, and so we aren't getting disruptions from students, or negativity from me in response.
  • Of the three items in the Cares section, two of them are pretty good.
    • First two items
      • 89%: My teacher in this class makes me feel that s/he really cares about me.
      • 86%: My teacher really tries to understand how students feel about things.
    • But the third item 
      • 45%: My teacher seems to know if something is bothering me.
    • Last year, my colleagues and I spent a lot of time thinking about this item. It raised a lot of questions around whether this item is a reflection of the teacher, at least in the same way that the other items are. And I agree it's a complicated question.
  • Even though my completion rate was decent (96/126 = 76%), it's still a biased sample, because it captures only the students who were present, and who are typically present, on a random Monday. A more complete survey would capture the experiences of all the kids, including those who don't attend class or complete work consistently. This is a consequence of administering the survey through the exact same channels we deliver classwork (in class, through a link I post, etc.).

FOCUSING IN: PROBLEM OF PRACTICE

  • In this first cycle of the student perception survey, I'd like to focus on the category "Clarifies." This is because it contains two low-performing items (57% and 68%) while also containing two high-performing items (89% and 91%).
    • This is especially interesting because the low items and the high items seem to contradict each other, at least at first. It's as if my students are saying, "Mr. St. checks to see if we understand, and if he thinks we don't understand he'll explain it another way... BUT, even though he's checking, he can't always tell when we don't understand."
    • The flow of assessment should be: there is a learning opportunity, the teacher assesses the students, and then the teacher analyzes student responses. At that point, the teacher makes a judgment that students either "got it" or "don't got it."
    • Student responses point out a gap in that flow: I may be categorizing more students as having "got it" (or categorizing the class as a whole as "got it") than is actually the case, and then moving forward. So where could this perception be coming from?
  • Possible explanations
    • I might be setting the bar of "ok and ready to move on" lower than students are comfortable with.
      • This might make some sense. I'm feeling some pressure to move through the curriculum at a given pace, and so I want to keep things moving. I often feel like I'm already slowing things down too much (and so boring students); in fact, only 62% of students responded favorably to "This class does not keep my attention--I get bored."
      • On the other hand, 79% of students agree that "In this class, we learn a lot almost every day." Since 79% > 62%, that suggests I have some room to slow things down a bit, and trade a little between those two?
    • Students might be more ready to move forward than they feel.
      • Students aren't getting papers back with scores on them. I am very regular, consistent, and frequent with using our gradebook to communicate progress and performance. But if students aren't attending to that data being provided in their gradebook, it's not going to actually lead them to adjust their perceptions of themselves and their own understanding.
      • How can I help students see the progress that they ARE making? I'll often have moments where we all go into our gradebooks, look at the grades there, and then process them, trying to internalize what we need to be taking care of. But that takes a HUGE amount of time, EVERY time, and it DOESN'T feel like we usually get much out of it. Do I have evidence that this practice has little impact? During Review Week, when we did this all week, it resulted in very little overall improvement of grades, signaling little positive impact that week. Is this something where we'll just have to bank on a longer-term impact?
    • My assessments might not be fully aligned to the work itself, resulting in a higher rate of false positives. So even if students perform well enough on an assessment to signal readiness to move forward, that might not be a reliable indicator.
      • As I mentioned above, I often feel discouraged by how slow it feels like we're moving through things, and how hard-won every little bit of progress is. It is possible that this pushes me to see what modicum of progress we HAVE made, and over-reward that with higher grades.
      • The goal is to be encouraging, and to recognize that school is harder now than it has been in a long time. But the flip side of that practice is that it creates data interference, reducing the reliability with which the data signals readiness and understanding. This is a big part of the distinction between grades as generated by a traditional model, and "grades" as generated by standards-based grading (which I have committed to, at least in the way that my school has).
  • Action Steps
    • Be more transparent with students about the degree to which assessments indicate a need to go back over a topic, or to continue to study a topic. In particular, I can:
      • When a week's work has provided evidence that we need to extend the pacing, I can communicate that to students, and so help them see how what they do impacts what we do as a class.
      • "Hey folks, I was looking over our performance from last week, and it looks like we're going to need to spend a little more time working through [TOPIC]. So we've got this week's problem set that is going to look at this idea in another way."
    • Be clearer (with myself and my students) about how I am grading.
      • Our standards-based grading system allows for assignments that are rooted in standards, as well as other assignments that are considered "practice." These practice assignments are less targeted than standards-based assignments, and a bit more like the traditional grading model.
      • Let the standards-based assessments reflect performance on the standards. Let the "practice" assignments reflect "practice," and if necessary use those assignments to buoy morale, while allowing standards-based grades to guide instruction.
If you have any questions, thoughts, or feelings about any of this, I'd love to hear them. I'll report back some time in February with the second round of survey results!
