Determining the effectiveness of testing formats
How do you determine how effective the test format you used was in enabling students to demonstrate their knowledge and/or skills?
I continually go back and quiz students on material learned at the start of the course. Sometimes they learn it just for the test and then forget it.
As we are a CDL school, the summative test is a demonstration of the student's ability in front of DMV personnel. To determine whether our methods are effective, we closely monitor our pass rates. Our learning is almost exclusively kinesthetic.
If the student taking the test scored appropriately and at the level you set for them, then the method of testing for that student was correct. If they were lost, confused, gave incomplete answers, and sounded like they were guessing, then maybe the test method you chose was inappropriate for that particular student.
I debrief the students after the completion of the course to see if they feel the test was an adequate measure of their knowledge, and I also take three-month averages of the test scores of all the students who have taken it in that period to see if they meet the goal I have set.
I teach at a trade school. We used to have a standard test. We found that if a student was a good test taker, they could pass the class without knowing the critical information. We have changed to a three-part final that requires course knowledge to be demonstrated before students are able to take the final written test. Our recent graduates are much more confident, knowledgeable, and employable. We are producing a better product.
By the depth of their responses, their ability to apply content, and their use of critical thinking to answer and/or evaluate the question.
This is where my students hate me. I have a BS in History. We did not take multiple choice or true/false tests. We wrote essays. Why? Applied knowledge. Anybody can memorize things, but being able to apply the knowledge is a different skill set.
I teach CNA classes, and most of the questions on the tests are multiple choice. I inherited these tests and have been working on writing new ones. I also look at the percentage of students who select an incorrect answer to a question. If over 50% select the incorrect answer, I look to see if I covered the information correctly and completely. I then look at the question, reading it from the student's point of view. I ask the class about their thought process in selecting the answer that was not correct. I get good feedback from the class and usually learn something about the perception of the question.
In a career college setting, I think one has to look at the success of the student in multiple ways. We use all multiple choice exams at our college, and the board examination the students take is also multiple choice. If our formative and summative exams are effective, one would expect students to perform well on the board examination as well. It would also be important to evaluate students on their performance in externship. If a student passes exams, does that truly reflect the fact that we have prepared them for the job?
I feel it gives students the opportunity to learn the most from tests and the course by allowing the students to correct their wrong answers.
I have tests put out by the textbook publisher and use their question banks for classroom tests. I also have labs that include hands-on tests of what students learn in class. We combine these to come up with the students' test grades.
I am a clinical nursing instructor, so skills assessments are what is used to test students on the learning from the various courses taught throughout the term. Students have a skills checklist and are graded accordingly.
They are then taken from the skills lab into the job arena for further assessment.
Mariann Urbancsik, BSRN
I usually include a reflective aspect in the test, typically an essay or short-answer section that assesses the content-specific information and my overall goal for that particular section of the course.
Hi Kareme, I just "threw out" a question from one of my tests for the same reasons!
Susan Polick
It seems to me that it is harder to determine if the format itself is effective and easier to determine if the element (question) is effective. I'll explain.
For many of the reasons stated in this module, I've used both true/false questions and multiple choice questions to evaluate student learning. I have not had any indication that either method was terribly effective or ineffective. But, from time to time it is clear that a particular question was not valid OR that the material it covered was not clearly articulated during lecture or class activity. For example, if a majority of students missed the same question, I will "throw it out" because it tells me that the question is not valid or that I didn't properly cover the material.
My guess would be that if the same question came up as a throw-away for more than one group of students, I would need to evaluate the question itself. If it came up as a throw-away only once, then I probably did not sufficiently cover the material.
I determine it by actually watching the skills I taught them, through something called a competency.
This is a really good idea. So far, I ONLY think about what I'd like them to learn from the lesson, but I don't think about how I'd word it for them in a way that would show me they learned it.
Hi David, Thanks for your post to the forum. I am also a believer in the use of frequent short quizzes as they provide students and myself with useful feedback. Best wishes for continued success in your teaching career.
Susan Polick
I find that if the students are not prepared, the anxiety level in the class tends to go up, but doing a little pre-test the day before helps greatly. Just stay cool.