Quizzes are a part of my life and my students’ lives.
And though I do believe language acquisition is personal and every person, even with the same input, comprehends, internalizes, and processes language at a different pace, I find myself required to quiz every student at the same time, in the same way.
However, I have tried to make those quizzes as proficiency-based and comprehensible as possible, giving students the greatest chance to show what they actually know rather than what I think they should know.
I used to write pretty straightforward vocabulary quizzes: I give a Latin word, I expect an English meaning.
Let’s just say those quizzes were overly easy to grade, pretty easy for students (especially those good at last-minute cramming) to ace, and super easy for students to completely forget the next day. They required nothing more than short-term memory recall. No real understanding. No lasting ability to remember the words, much less use them in context.
Then, after a few years, I thought I had gotten smarter: I give a Latin word with one principal part left out, and I expect not only the missing principal part spelled correctly, but also an English meaning.
Not much improvement over the original vocabulary quiz.
Not long after, still convinced I could improve the original in some magical way that would not only make it harder to cram for, but also encourage students to internalize both the grammar of the dictionary entry and the larger meaning, I came up with this doozy:
- Part One: illustrate the vocabulary word given (usually four words were given)
- Part Two: fill in the missing principal part of the word and its literal translation (amavi = I loved)
- Part Three: write one or more English sentences in which three of five vocabulary words (student’s choice) are used, but instead of writing each vocabulary word in English, put it into its correct Latin form
Bad idea. This third and more complex iteration of the original was nothing but complicated, both for me to grade and for the students to complete. Whatever benefit it offered in the illustration of the word (and the illustrations were still up to me to interpret), it lost in Parts Two and Three. Part Two still encouraged memorization, and though the higher-level students got somewhat better at recognition in context, the lower-performing students never really improved. And in Part Three, no matter how many grammar notes and charts I threw at the students, they consistently used a single principal part and added nonsensical endings, if any endings at all. I spent a lot of time trying to figure out where and when to give partial credit.
Now, I do things differently.
My quizzes today are in two parts. Part One is a story or informational reading passage with words underlined throughout. Students are asked to give a meaning or illustration of the underlined words. Literal translations earn a happy face, but non-literal translations are still counted correct (happy-face: familiae = of the family / still correct: familiae = family). Part Two is fill-in-the-blank sentences.
Doesn’t sound too different, does it? Ah, but the difference lies in the comprehensibility of the context.
The story or reading passage is written as comprehensibly as possible, often making the meaning of the words clear from context. The words and syntax of the passage are repeats of those heard and seen in class in recent days. This is not new input. This is input we’ve seen and dealt with until we are all almost sick of it.
And the fill-in-the-blanks? Those sentences, too, are pulled directly from similar CI activities done in the previous days. Blanks are left where the most creativity and options are available. The only thing students can’t fill in? The exact sentence, if it happens to appear, from the story or reading passage above.
What I’ve found most interesting with these new vocabulary quizzes is just how much of the grammar (cases, persons, tenses, etc.) the students have picked up on and include in either Part One or Part Two. As the year progresses and the students receive more input, their unconscious use and understanding of the grammar is impressive.
For now, I’m pleased. If I have to give quizzes, this gives me the best idea of how much students are comprehending and internalizing.
Informal Performance Assessments (IPAs = my “version”)
Designed to be informal, though graded, these are never entered in the gradebook as points earned divided by points possible. I assign and grade these IPAs as a record of where each student currently is and, over time, how far they’ve come.
I want students to show me what they comprehend. There aren’t really any right or wrong answers – some answers indicate better or greater comprehension, while others show semi- or partial comprehension. I’ve set these IPAs up to “quiz” the students in multiple modalities – listening, reading, and writing. In addition, I want the students to share ideas and discuss what they understood with peers to piece together larger pictures of comprehension. There is nothing wrong with applying current knowledge to new knowledge and comprehending more, especially if it leads to a greater overall understanding.
I still don’t know what is best to include in an IPA, or how best to include some of its parts. I like having students watch and listen to a video in Latin about a related topic. I like having students find words, phrases, clauses, or even sentences in context. I think it is a good idea to have students use the language to support facts. What I don’t like is figuring out the best order for these skills and activities, where and when to invite peers into the discussion, and at what point I’m being too repetitive. Nor can I work out how much comprehensible input in this format is useful and when I’m asking for too much too soon.
I also haven’t found a great way to integrate larger cultural perspectives into this type of assessment (or even if I should) while not losing too much time off-topic or asking for too many skills at once.