“Non-Stress Tests,” Assessment, and the Body

I am a good student; most of us who make a career in academia are. I have always known how to prepare for classroom assessment. Because I attended a strict private school in England growing up, I knew how to draw up “revision timetables” and begin studying months in advance, color-coding notes and setting up intermittent reward systems. When my sister and I played school as children, I always wanted to be the teacher, grading her work and putting stickers at the top. I still take pleasure in grading my students’ work. I remember laughing, reading a lively writer’s voice in bed one evening, when my husband, a college instructor at the time as well, looked over at me and asked, “Are you reading your students’ papers for fun?”

I was. At the time, I had faith that students, if they worked hard enough and made use of all of the resources available to them, would succeed in my class.

This belief changed after a high-risk pregnancy, when I was subjected to testing twice a week for much of my third trimester. These tests consisted of growth scans, in which a machine measured my baby’s body parts, as well as the “non-stress test,” a test that decides whether or not a baby is in distress by measuring the baby’s heart rate. The non-stress test gets its name from the more violent test it replaced, the stress test, in which women were put under bodily stress (induced contractions) in order to measure the baby’s response. Eventually, physicians figured out that these methods actually induced labor and replaced the stress test with the non-stress test. At the time, I was told by ultrasound technicians that growth scans estimating a baby’s weight could be off by two pounds, and after minimal research, I discovered that both tests are often inaccurate, if not totally ineffective, in indicating whether or not a baby is in distress. Despite the known failures of these assessment practices, pregnant people become subject to more expensive testing, more monitoring, and, in many cases, scheduled inductions and surgery, based on the results of these tests. I knew all this, but the stakes were high, and I was continually told that these tests, despite their failures, were necessary and best for my baby and me, and that, as a responsible mom-to-be, I was required to show up and take them.

Why risk it?

So I dutifully showed up for my appointments, with blind faith in what the authority figures in my life were telling me, and when my baby’s heartbeat didn’t meet the required criteria after about an hour, the busy Brooklyn hospital admitted me to the labor and delivery unit at thirty-four weeks pregnant. I remember wondering if I was going to deliver, unable to reach my mom or husband. The woman who came in to double-check my insurance information seemed uncomfortable, to say the least. After all, I was paying for this experience. The nurses kept promising my doctors would show up; they never did. Eventually, a nurse put an IV in me, “to wake the baby up,” and my baby’s heart did what was required. Evelyn’s heart rate never indicated she was “in distress,” but it wasn’t meeting the criteria for release. When she and I were finally allowed to leave, we were given a sheet with its “follow-up recommendations” section left blank. This entire, cautionary hospital stay cost upwards of $2,000, before insurance.

Ever since, I cannot help but notice parallels in the ways students are assessed and accommodated (or not) at the university level. Academic testing, writing exams and papers, is a bodily experience, and we assume all bodies will be able to fit the criteria and standards we have set. When students give me accommodation paperwork (more often than not they apologize when doing so), I have always assured them, like the nurses, doctors, and authority figures that surrounded me, that everything will be fine; that if they work hard enough, they will succeed. But no matter how kind or supportive college instructors are, to use my husband’s words after Evelyn almost failed another “non-stress test” two weeks later, “sometimes these tests are bullshit.” Sometimes, assessment practices fail students, no matter the amount of effort they are able to put in.

So, what can we, college instructors, do about it? How are we holding ourselves responsible for how assessment practices fail, let alone the physical and mental impact even the best testing has on our students? Like physicians, educators do work to revise the ways in which we assess students, on individual, departmental, and national levels. But still, when I talk to my writing students about placement tests, the Common Core, the SAT, all of the standardized tests that judge who they are as students and writers, we quickly come to the conclusion that these tests fail to say much about their capabilities, abilities, voices, and unique, individual skills as writers, for a variety of reasons. Just as the monitors Evelyn kept kicking off my belly, and the criteria made up for “most babies,” failed to account for her, there is no way standardized tests and criteria won’t leave some students behind.

The stakes are high for our students, just as they were for me and my baby. Assessment matters: to where our students end up academically and professionally; to their mental and emotional health; and to how they interpret and feel about their own capabilities.

I think a partial answer to how college instructors might address this huge issue is flexibility. Flexibility in revising and adapting assessment tools to student needs; in the timeframes we give students to complete work; and in how we think about, and talk to students about, what grades mean. In all cases, this flexibility comes in response to listening to our students. If there’s one thing I felt in all hospital assessment settings, it’s that I wasn’t being heard when I wanted to have a conversation about how these tests were inaccurate, or unfair, or stressful.

To start, college instructors can build flexibility into their assignments. I start each class period with low-stakes reading quizzes, for example. The questions are open-ended, but students know they need to provide specific and concrete details from the readings to pass. I tell them, “Come in ready to write about the most memorable moments from the readings, offering specific details, choosing whichever questions you feel most comfortable answering.” The pedagogical goal of these quizzes is to see how well students know course material, but obviously students cannot demonstrate everything they know about the reading or the course in one assignment, so why not let them discuss what most interested or intrigued them?

Another way to offer flexibility and accommodate students has to do with time. I was not afforded this luxury by the hospital: Evelyn and I could stay hooked up to the monitor for an hour, maximum, before being admitted to triage. Had we been given more time, she would’ve eventually passed. In each of my college classrooms, I come in fifteen minutes before class starts and allow students to begin their reading quizzes early if they wish. For me, the pedagogical goal of reading quizzes is to determine whether students are reading and how much of the assigned reading they comprehend, not how fast they can read and respond to quiz questions. I find that this extra time helps a wide range of students, including English language learners; students with test anxiety; and students with learning disabilities who need more time to comprehend questions and formulate answers.

I understand not all college instructors agree with flexibility when it comes to assessment, for various reasons. For one, students might encounter trouble in later, less flexible, course assessment practices. In addition, departments are often demonized for grade inflation, English departments in particular. In College Writing, a course in which students are required to draft their work multiple times and spend time on their writing (a process that best mimics what professional writing actually looks like), grades often end up being higher. Despite this, one department handbook I was given, for example, warns instructors, “Even though adjunct or untenured faculty may feel pressure to inflate grades, it is important to avoid doing so. They are a disservice to students and an embarrassment to the English department when faculty from other departments see students with weak skills and high grades.” In a grading workshop I attended, the First Year Writing co-director running the workshop presented tables and charts that demonstrated grade inflation in College Writing as compared to later writing courses, and, while acknowledging that the nature of College Writing might have something to do with the higher grades assigned (built-in drafting processes, scaffolded assignments, and paper revisions), she urged instructors to be mindful not to assign inflated grades. One way to ensure this, she suggested, is not to grant paper extensions.

I understand the many structural pressures that contribute to these unforgiving approaches to assessment, from non-stress tests to writing courses, but ultimately this lack of flexibility derails what assessment, at its best, is meant to do: offer an opportunity for students to demonstrate what they have learned so that, as educators, we can then provide feedback about this learning. In the effort to avoid grade inflation, or to avoid missing a baby in distress, testing becomes more rigid and unforgiving, causing immeasurable stress on the bodies in question.

Most important to remember, then, is that when we assess people, we are in a position of power, and we should be transparent about the purpose of the assessment and about our goals. I struggled, when pregnant with Evelyn, to get a doctor to have a transparent conversation with me about the tests I was forced to undergo. I now feel an even greater responsibility to do what I can to create assessment practices and spaces that are conducive to my students’ success, that take into account the many ways in which assessment tools fail, and that are crafted from a place of empathy. I know what it felt like in my body to fail ineffective tests that said little about my body or my baby. A little empathy and flexibility would’ve gone a long way for Evelyn and me. The same holds true for our students: keeping in mind what each assessment tool is meant to measure, and how any assignment might be adapted to measure that skill or area of knowledge more flexibly and empathetically for each student, would make material differences in the lives of our students and their experiences with the academy.