I determine the effectiveness of testing formats by how well my students perform on a test. If students consistently get a question wrong, I will either revise the question or throw it out entirely.
Renee Bipes
I review the number of students who answered each question incorrectly. I discuss the questions and answers after grading to get feedback. Before using the test the next time, I make any appropriate changes.
I also analyze test results, but I don't look at every single question. It's more of a spot check to see whether any questions were glaringly problematic.
You need to tie your test to your objectives. If the test measures your objectives, it is effective. If the results on the test are poor, you need to determine whether the problem is the test or the presentation, and then improve accordingly.
I grade the test and hand it back to the students. I create a spreadsheet with one row per question and one column per student. I have the students upload their column to the "cloud" (dropbox.com), with 1's for correct and 0's for incorrect responses. I gather the columns into one spreadsheet and count, in each row, how many students got that question right. Then I sort the rows from least to greatest, and that way I know which questions/topics need to be reviewed more carefully.
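To make that count-and-sort step concrete, here is a minimal sketch in Python. It assumes the gathered columns have been saved as a CSV with one row per question and one column per student; the file name "responses.csv" and the layout are illustrative assumptions, not part of the original workflow.

```python
import csv

# Minimal sketch, assuming (hypothetically) a file "responses.csv" with one
# row per question and one column per student: 1 = correct, 0 = incorrect.
with open("responses.csv", newline="") as f:
    rows = [[int(cell) for cell in row] for row in csv.reader(f)]

# For each question, count how many students answered correctly.
counts = [(i + 1, sum(row)) for i, row in enumerate(rows)]

# Sort from least to greatest so the weakest questions/topics surface first.
for question, correct in sorted(counts, key=lambda pair: pair[1]):
    print(f"Question {question}: {correct} of {len(rows[0])} correct")
```

Sorting ascending puts the hardest-hit questions at the top, which matches the review order described in the post.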
Wow Melissa - that is quite a range! Our evening and online students tend to be older but I also see a pretty good mix on occasion. Best wishes - Susan
Most of my students are in their early 20's, but it does range from 17 to 50+! It can make for an interesting mix...
Hi Melissa - Thanks for your post to the forum. Just curious - are your students primarily adult or traditional age? Best wishes for continued success in your teaching career. Susan
I do find that I have to tweak my testing methods for each group of students. I find that the different groups that come through have different levels of vocabulary skills, comprehension, and prior knowledge. Also, some need more "hand-holding"; if something isn't worded the same as in lecture, they get thrown off! But I feel that comes from the desire to do well rather than a lack of comprehension.
Hi Laura - Thanks for your post to the forum. You are doing an excellent job of managing your student assessments! Best wishes for continued success in your teaching career. Susan
I constantly review and adapt each test I give. I change formats and ways that I ask each question. I listen to student feedback about the clarity of the test questions (by how they complain about some questions!).
I have experimented with different formats and questions. Each format and question is evaluated by me, the students, and sometimes other faculty. What I am looking for is whether the test scores adequately reflect the learning of the individual students.
Hi Karen - Thanks for your post to the forum. You have a very reasonable approach to monitoring your assessments. Best wishes for continued success in your teaching career. Susan
I have a test that offers a choice of essay questions as well. I have found that students choose a variety of the questions, though; it's not that everyone chooses what I would see as the "easiest" questions.
I think I had a couple of those instructors!
I will check the effectiveness of tests by seeing whether the scores correlate with how well students have done on other tasks during the term, such as discussions or written papers. There isn't a perfect match, but if the scores are all over the place in comparison to grades on other assignments, I will look into the questions and see if certain ones are not assessing overall learning well.
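As a rough sketch of that sanity check, a Pearson correlation between exam scores and term-work grades surfaces a mismatch quickly. The lists and numbers below are hypothetical, not real class data.

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical parallel lists: each index is one student (made-up numbers).
test_scores  = [88, 72, 95, 61, 79, 84]   # exam scores
other_grades = [90, 70, 92, 65, 75, 88]   # discussions / written papers

# A value near 1.0 suggests the test tracks term-long performance; a value
# near 0 is a cue to re-examine individual questions.
r = correlation(test_scores, other_grades)
print(f"correlation between test and other work: {r:.2f}")
```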
In skill-based testing, results are a great indicator. Results are scored using a rubric to assess all students against the same standard.
Hi Scott - Thanks for your post to the forum. You are doing a great job of effectively monitoring your assessments! Best wishes for continued success in your teaching career. Susan
I use several different methods for testing: standard written tests and skills application tests. I keep track of the average score on my written test to gauge how easy or difficult it is. The average is currently around 85%, which says to me my written final exam could be a little more difficult. I also track which questions are most frequently missed. Some of those questions I like, because they tell me which students are paying close attention and thinking. Other frequently missed questions I have had to rewrite; my verbiage was confusing or misleading. To judge the effectiveness of my skills application tests, I also consider the score. I try to make these tests as close to an industry experience as possible, and use more of an industry standard when grading.
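A minimal sketch of those two checks, the class average and the most-frequently-missed questions, might look like the following; all numbers and the miss-rate threshold are illustrative, not the poster's actual class data.

```python
# Hypothetical written-test scores, one per student.
scores = [92, 85, 78, 88, 90, 81, 74, 95, 83, 87, 79, 91]
missed = {"Q1": 2, "Q7": 9, "Q12": 5}    # question -> students who missed it
class_size = len(scores)

# Class average: a high average (e.g. ~85%) may mean the exam is too easy.
average = sum(scores) / class_size
print(f"class average: {average:.1f}%")

# Flag questions missed by more than half the class as rewrite candidates.
for question, n in sorted(missed.items(), key=lambda kv: kv[1], reverse=True):
    if n / class_size > 0.5:
        print(f"{question}: missed by {n}/{class_size} students")
```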
Most of the students within a Graphic Design program are visual and kinesthetic learners. The testing procedures are more about performing tasks and then explaining the ideas and execution. When looking at the results, I will seek holes in the information, missing steps for example. However, most processes in the visual arts are rules of thumb, which are intended only as guides. In the eight years of my higher education, I believe there were only two written tests, one of which was Art History. That test was a combination of multiple choice, short answer, and essay. I feel that short answer and essay are best suited for this type of hands-on and creative process industry.
I would further state that true/false tests would not be appropriate for this degree program because, other than Art History and a few basic theory topics such as color theory, there are no absolute right or wrong answers. Review of student work is often subjective, so the measuring devices break down into craftsmanship and uniqueness of concept.