Assessment
“I am puzzled why some errors should excite…seeming fury while others, not obviously different in kind, seem to excite only moderate disapproval. And I am puzzled why some of us can regard any particular item as a more or less serious error, while others, equally perceptive, and acknowledging that the same item may in some sense be an ‘error,’ seem to invest in their observation no emotion at all.”
– Joseph Williams, “The Phenomenology of Error” (1981)
The same year that I was born, Newsweek magazine published a controversial story titled “Why Johnny Can’t Write.” The article sparked a great debate about high school and college writing curricula, as well as funding for education in the United States, but it also contributed to something that I and many other new writing instructors struggled with at the outset of our careers in the classroom: To what extent should our assessment of student writing be based on narrowly defined objectives?
Before I came to the academic world, I would have probably sided with the notion that writing should be assessed based on a strict set of empirical criteria; however, the very first time I ever graded a student paper – an 800-word personal narrative – I spent nearly 90 minutes and left 74 comments in the margins…and then realized I had 49 more papers left to grade! More troubling, though, was the realization that when I looked back on my suggestions to the student, I could see that even if he did revise the paper per my feedback, the final product would be more my writing than his. That was the beginning of a very long, sometimes difficult, but absolutely essential conversation I had and still have with myself about the need to help students meet real-world objectives while still maintaining an authentic voice as an individual possessor of rhetorical agency.
The same “rhetorical philosophy” of pedagogy I discuss in my Teaching Philosophy applies to the way I assess student work. What is the exigence? Who is the audience (or who are the audiences)? What is the context? Who is the student, and what role do they play within that context? What are the conventions of the genre or medium? What larger rhetorical capabilities will be served by whatever smaller skills this assignment seeks to impart?
The answers to those questions depend upon a number of factors, and I reserve the right to tailor my evaluative criteria based on, among other things, the larger rhetorical context of the course, its stated objectives, my personal pedagogical objectives with regard to how I teach it, the actual assignment in question, the competencies the assignment was designed to cultivate, and the potential contribution those competencies can make to a student’s greater rhetorical agency.
The following examples can help clarify how I assess work in each of the courses I regularly teach:
Assessment in Composition (ENGL 1101 & 1102)
Over the years, those of us who teach composition have been tasked with teaching a great deal of content in this two-course sequence: writing, rhetoric, critical thinking, research, information competency, documentation, multimodal composition, sometimes literature, and the list goes on. For the purposes of my composition pedagogy, I certainly look at the basics (the five rhetorical canons, grammar/usage, research and documentation, etc.); however, I situate my assessment of each of these aspects of student work within the larger rhetorical context of what the assignment at hand is trying to teach.

Certainly, there is a grammatical baseline, and the need for standard forms is indeed important. Therefore, if a student has basic writing problems that seriously compromise any hope that student’s work has of being rhetorically effective, I will assess that work as insufficient and work to remediate those issues. On the other hand, if a student has minor typographical errors or didn’t proofread carefully enough, but the overall work is excellent, I am inclined to count off less. By the same token, if a student has arrangement problems (something that happens frequently with the rhetorical analysis essay assignment), I consider the degree to which those problems stand in the way of the reader receiving the full effect of the student’s analytical thinking. If the arrangement problems do not affect cohesiveness, I am less inclined to count off much more than a letter grade (for just that issue). If, however, they are of a nature that leaves the reader struggling to follow the student’s line of reasoning (or so distracted that they cannot fully grasp everything), then I count off a great deal more.

Needless to say, I look at the argumentative/persuasive research essay through the lens of, “Is this truly persuasive?” A beautifully written argumentative paper that clearly and cleanly expresses tepid, ineffective, or fallacious arguments would never earn more than a high C or low B in my class (because the exigence was ignored and the student wrote primarily an expository piece). A paper with strong arguments but compromised writing, research, or arrangement would also suffer grade-wise, because even sound, persuasive arguments lose their force when they are not delivered effectively.
Assessment in Introduction to Rhetoric & Composition (ENGL 3050)
In many ways, this course feels like a graduate-level survey, and since most of the students are extremely interested in the subject matter, I let in-class presentations augment my coverage of the “high points” (which I assess with short-answer unit tests) and help diversify the course content. I evaluate these in-class presentations less by the student’s ability to deliver a presentation and more by the degree to which they have researched and brought out knowledge of the topic above and beyond what their textbook conveyed or what I said in class. Questions on in-class exercises and the comprehensive final exam are geared more toward applying rhetorical theories in hypothetical situations, and in these cases, I assess answers based on the degree to which students fully comprehend the rhetorical theory or concept and how it works in actual discourse. For example, I may have a question that asks a student to describe a time when a friend misunderstood them, and then asks them to use I.A. Richards’ theory of “rhetoric as misunderstanding” to determine which words used in the conversation suffered from mismatched metaphors between the two parties. The more words/metaphors the student mentions in the answer – or the deeper the student goes into why only one or two words/metaphors were mismatched between the two people – the higher the grade.
Assessment in Business Writing (ENGL 3130)
A big part of the Business Writing course involves genres and the textual conventions inherent to those genres. Rhetorically, much of the composing done in this course centers on what audiences of business writing typically expect. Therefore, I grade projects in this class based on how well students dedicate themselves to learning what is appropriate in each business writing context, as well as how effectively they replicate those conventions. For example, let’s say I have a quiz question involving what type of medium is appropriate for a particular business communication scenario, why it’s appropriate, and how such a message should look. A student who answers “text message” and then constructs a brief text message including text messaging abbreviations and an emoji would score well on this question if she could provide a good rationale for why this medium and the type of message she composed would, indeed, be appropriate in the situation the question described.
Assessment in Editing for Publication (ENGL 3140)
In a course that works with existing texts, assessment is particularly difficult because texts can often be augmented or improved in multiple ways, and there can be numerous “right answers” for how to deal with something. The prevalence of varied style conventions (and even house style guides at some individual organizations) also works to undermine strict objectivist assessment in this course. As an alternative, my primary framework for grading assignments in Editing for Publication is to evaluate how well students are making wise editing moves and cultivating good editing instincts. For in-class presentations, this means that less of the student’s grade comes from how polished they are as a presenter and more from how well they convey the larger editing significance of their chosen book or topic to the rest of the class (i.e., what depth they go into, how vividly they illustrate application of the knowledge, etc.). Regarding the self-editing assignment, I work with students to reconceptualize or reimagine their existing text for a slightly different purpose than the original one, and then I grade them on the extent to which they have made content, arrangement, and style revisions based on their new purpose. Finally, for the collaborative editing assignment, I grade the process by which each group and each student collaborates to accomplish editorial tasks more heavily than I do the final product they deliver (because the goal of the assignment is for them to learn to edit effectively with others).