Most of the time, ESL courses and activities are considered non-credit by their host institutions. Private language schools and larger corporations usually run some form of progress assessment for their students, but rarely are these associated with grades. In some rare cases, ESL courses may be graded and may also count for credit, especially in the case of EAP (English for Academic Purposes). To many, the fact that they are “non-credit” courses seems to imply that they are taken less seriously or are “less important” than for-credit courses. Regardless of how your language institution handles success in language courses, timely and “aligned” formative evaluations are a must in the teaching and learning process. The “non-credit” label given to ESL courses may lead to inadequate assessment and/or feedback for students, which can ultimately lead to lower achievement. Below are a few guidelines for creating assessments that promote learning and self-reflection as well as target the language skills you are trying to develop in your students.
The concept of “aligned assessments” revolves around ensuring that the learning outcomes set out by your course or student expectations are properly assessed through “authentic” assessments. How many ESL conversation courses have you taught in which the final assessment was a paper-and-pencil grammar exam? The learning outcomes of such a course probably state that the student will learn to “speak” at a certain level with a certain set of linguistic concepts. Briefly, students need to speak. So why test them on how well they can conjugate verbs on paper? Speaking implies an application level as per Bloom’s cognitive dimensions. It also implies a certain psychomotor mastery at the “precision” level as defined by Bloom. Grammar exams, on the other hand, imply at most an “understand” level on the cognitive spectrum and virtually no psychomotor ability. So, in order to assess speaking, the evaluation task should involve speaking.
Gate-keeping vs. Enabling
In high-stakes courses (difficult, with few but heavily weighted assessments), a gate-keeping perspective is often put in place in order to “weed out” the weakest students. This happens a lot at the undergraduate level, where instructors may do very little to help students develop in the hope that only the most talented students move on to graduate studies. Unfortunately, this kind of perspective does not help students in the least. If our goal as instructors is to “facilitate” the learning process, our assessments should do the same. In other words, assessments should not be used to limit success, but rather to improve it. So, once an assessment is finished and “graded”, effective feedback should follow. Also, allowing students to evaluate themselves based on a clear evaluation rubric can go a long way in developing self-reflection in their learning.
Frequent Formative Feedback
Assessments do not need to be associated with grades to be effective, but they do need to be followed by effective, formative feedback. Most ESL students care primarily about their development and actual ability rather than a numerical score. Providing them with many little assessments (low stakes) throughout the course, coupled with effective and timely feedback, can curb bad approaches and completely change how a student perceives their linguistic ability. So, weight your tests lightly, provide many small, aligned tasks, and give your students feedback all the time.
Do you have any preferred strategies for providing formative feedback? Please share your ideas in the comments below.
10 thoughts on “Aligning your Assessments to Improve Learning”
If institutions receive grant money for their ESL programs, they are always taken seriously.
Thanks for your comment Jose! Yes, I am sure that institutions take these courses seriously. It is the public perception that I was writing about. There is a general assumption that “for-credit” courses are somehow more serious than “not-for-credit” courses. This may be true in terms of pre-requisites and program requirements, but it is absolutely irrelevant in terms of quality teaching and learning. An example I can offer is from my own experience. My institution used to receive large sums of money to run full-time non-credit ESL intensives. The relationship with the ministerial body responsible for allocating these funds was seen as being of utmost importance for the department in charge of running the courses/programs. However, assessment was left entirely to the discretion of the hired instructor. The ministerial patron was only interested in receiving a final grade for each student. The what, how, when and why of assessment was completely left out of the discussion and was relegated to the instructor. At the same educational institution, for-credit literature courses contained assessments that were developed by a committee of teachers, overseen by the department head, transferred to course outlines and applied consistently across all sections of the same course.
I love CATs (Classroom Assessment Techniques), and there are many. My favourites are the One-Minute Paper and the Muddiest Point. Angelo and Cross wrote a great book about this, titled “Classroom Assessment Techniques: A Handbook for College Teachers.”
Thanks for your comment Jennifer! The one-minute paper is one of my favorites as well! It is great at developing student meta-cognition.
I absolutely agree with providing constructive feedback to our ESL learners, and that can be done after any product, but I think we need some form of scoring when it comes to assessment, as scoring is the major difference between evaluation and assessment.
Thanks for your comment Mina!
It is not clear that numerical scoring helps with learning. I believe that numerical scoring is an operational requirement more than a pedagogical one. If you follow the Hattie Ranking (see the link below), it is clear that the most effective form of evaluation, numerical or not, is auto-evaluation. In other words, students should be evaluating themselves, based on clear criteria, and providing these evaluations to their instructors in defense of their course work, as often as possible. It is also clear that large final exams with little to no prior assessment or feedback do not help with learning. https://aschofield.files.wordpress.com/2011/03/list-of-hatties-analyses-by-rank-order.pdf
Thanks for your insightful post. I agree that assessment must be aligned with learning and that there is a tendency to be a gate-keeper when it comes to higher education EAP courses. Last summer, I taught in an undergraduate certificate program and the instructors were highly encouraged to fail students who were not deemed worthy of proceeding to the grad level. I much prefer the concept of facilitating learning as much as possible so that the institution helps form well-rounded learners.
Another point from your post that resonated with me is your description of how some testing is administered; e.g. a grammar quiz to test speaking abilities. In the past, my colleagues and I resorted to oral interviews or had students record and upload their answers to our learning platform. It is time-consuming, but a step in the right direction. Also, I think about what Case (2013) wrote about Bloom’s Taxonomy, i.e. students do not tend to progress at the same pace through every stage of the taxonomy, but rather move in any and all directions; for instance, a student may excel at critical thinking but not at grammar, so they may provide an excellent oral response even if it is marred by imprecise grammar usage and structures. This does not mean they deserve a failing mark in speaking ability and critical thinking.
I am curious to know more about how you incorporate informal, but formative feedback into your practice.
Case, R. (2013). The unfortunate consequences of Bloom’s Taxonomy. Social Education, 77(4), 196–200.
Thanks for your comments! I love continuing these discussions. I had a quick read of the Case article and agree somewhat, but I am not sure Bloom’s taxonomy has really been distorted in the way the author claims. Perhaps in some cases educators believe that students must progress from Remember, up through Apply, and then into Create, but this is not the case. Bloom’s taxonomy (as Case suggests) was meant to classify learning outcomes, not dictate learning approaches. In this, Bloom’s taxonomy is still solid. Yes, you can start learning at the “create” level – which I would propose is far more powerful, but also more daunting for less confident students.
In terms of informal formative feedback, I typically incorporate the following into my teaching practice:
1) Student Teachers – Instead of “presenting” or transmitting language concepts to students, I ask them to research and prepare to teach during homework time. When in class, student groups present a topic, prepare and distribute a focused practice activity to students, consolidate the responses to the questions/problems and finally collect feedback from their peers and teacher (me). When presenting, these students are unsure of themselves, yet the feedback they receive from me is powerful in that it lets them know whether they have erred or have understood the concept.
2) LMS Discussions & Forums – I will often use an LMS such as Moodle or Brightspace for my courses. Most “units”, “topics”, or “learning outcomes” will have an associated low-stakes (5% or less) discussion forum or other online activity. Often, I might ask students to a) access the forum, b) write 10 sentences and 10 questions in the simple past, c) access at least 2 of your colleagues’ forum threads and try to find errors in their sentences, d) reply to at least 2 colleagues and provide them with feedback based on the following rubric (rubric provided). The goal here is to get them learning outside of class, encourage them to do so with a few points, and get them to evaluate each other for concept mastery.
3) Lots of Low-Stakes – I gawk when I see course outlines with the following evaluation scheme:
40% mid-term (multiple choice)
40% final exam (multiple choice)
20% oral presentation
I would much rather see:
40% – 8 oral interviews at 5% each, retries possible
40% – 4 grammar quizzes at 10% each, retries possible
20% – 4 student teacher activities at 5% each.
This immediately transforms a gate-keeping styled course into one that enables learning. Formative feedback should be linked to grades, but weighted lightly, with numerous opportunities to retry the evaluations in order to raise student mastery (and grades).
Sorry about the late response! Good point about the Taxonomy and thanks for making me think deeper about it.
I appreciate how you embed feedback into your class. I have in the past tried seminaries led by students (great success!), student-teacher conferences, and discussion fora (not a great success, but I may try again after I make a few changes).
What I would like to do is what you suggested: allow students to resubmit their work. However, I tried it once and it completely backfired, because everyone wanted to resubmit to obtain a higher grade and I found myself struggling to give everything back before the end of term.
Do you have any suggestions in dealing with re-submissions, so the workload does not get out of hand?
I truly appreciate this conversation!
Oh, the woes of a master teacher! I completely understand the “extra” workload bit. There are ways to manage it, but unfortunately, this extra workload is what makes you a truly excellent teacher! As Ramsden (2003) suggests, master teachers spend most of their time working before and after class. Class itself is merely time for the teacher to observe and take notes, rather than transmit.
Using this as our guiding tenet, let us consider ways to reduce the extra workload that comes with student-centered, feedback rich classes.
1) Use class time for peer-, self- and teacher grading. If you can, don’t grade at home. Evaluate student work during class time. Ensure that all students are engaged in a task that takes some time to complete, and monitor progress. Allow students to submit their work for on-the-spot grading (feedback). If they have failed the task, provide time to try again.
2) Use auto-evaluations as often as possible. A very powerful way of developing mastery is to have students evaluate themselves, give themselves a grade, and argue why they deserve it based on clear criteria. Students will realize their work is sub-par before submitting it – this may lead to a decrease in the number of submissions.
3) (Most Important) – Put in place “performance-based” evaluations that cannot be cheated on or “graded” in a typical sense. For example: “ask me ten questions in the present perfect in less than 5 minutes”. That’s it – that’s the test. Pair students up in class and have them ask and answer each other while you observe. Five minutes later, it will be clear to everyone whether the students have been successful or not. If so, award a grade and provide feedback on the spot – no at-home grading!