I have always used multiple choice or completion exams. These seem to work well for students to demonstrate their knowledge base. True/False never seemed like a good idea to me because of the 50/50 chance students have even if they do not know the information. I haven't tried essay questions but am now considering them. I liked the suggestion of having a fellow instructor review the exam.
I select the material to be tested based on the learning objectives of the course. I often prepare tests using a combination of techniques that allow me to conduct a preliminary diagnostic of the strengths and weaknesses of my students. I use multiple choice, true/false, matching, and short-answer questions that help me target specific areas of study.
I agree, Erik. Making the object of testing to show students what they know, as opposed to what they don't, puts a much more positive spin on assessment.
Hi Susan,
As part of a Design/Production team, we cover a lot of material in a variety of ways. There are a lot of terms that students need to know, as well as hands-on experience that they should be able to replicate. Abstract thinking also takes place when we are looking at subjective material (i.e., is it right for the demographic we're looking at?).
I find my tests use a variety of the methods described in the module. This makes sense to me since I'm looking at a mix of learning objectives. I also think this variety helps students succeed in applying various types of knowledge to many situations, not to mention completing a test successfully.
If student success is what we're about, and showing students what they know as opposed to what they don't, this mix of testing methods levels the playing field, allowing a variety of learning styles to succeed.
I have used the essay to assess student learning along with intermittent short tests. With this format I could determine how the students were learning and what I needed to reteach; students could also see what they needed to work on.
josh
A very informative response, Stephen - thank you!
At our school, all of our written testing is standardized through a curriculum department. The tests are constructed using mainly multiple choice and true/false questions. The true/false questions comprise approximately 20% of the questions on the test. Practical testing is also accomplished through student demonstration of a procedure. While the written tests are standardized, they are developed by individuals with the technical background in the specific areas of the course. We as instructors review the tests for validity and relevance. Should we find questions that are vague or ambiguous, we have the ability to submit revisions for changes. A good thing about standardized tests is that we are testing based on relevance to industry standards and not gearing a test to what a particular class or group of students can absorb. It makes the instructor responsible for imparting the information that is required for students to be successful in the industry they are going to be part of.
I test them in a couple of ways. When we go over the project sheet, I make little mistakes on purpose to see if anyone catches them. If I ask whether it makes sense and they say "YES," I explain that they have to reread it and see if it still makes any sense. Then I show them where the mistake is. I want to teach them to be very thorough and to ask questions if they do not understand something.
Plus, I would rather give them projects with certain components that have to be included; those components are what they just learned and told me they understood why and how it should be done right. Anyone can tell you that they can do it, but I would rather have them prove to me they can do it.
I first assess my students' abilities, and it also depends on what I am teaching. I can then see what my students are best capable of understanding. That is how I choose a test method.
As a math instructor, I design my tests as multiple choice questions combined with matching and problem-solving so that I can accurately assess the scope of my students' knowledge on the subject and application of skills learned.
I try to define learning objectives or goals I want my students to meet. Based on the class and type of information I teach, I decide what testing format will be most effective. Consequently, I sometimes use completion tests when memorization is important. I value short-answer tests, and they work well in my literature classes. Essay tests are best as summative evaluations at the end of the class; I use them in composition/literature classes. When teaching critical thinking, I use a variety of testing methods. I find multiple-choice tests useful in all classes, provided they are well written and they challenge the students to think critically and apply the knowledge rather than just recall it.
I determine the effectiveness of my tests based on students' results. I also value students' feedback.
The criteria I use in selecting a particular testing format depend on the subject matter students are being tested on.
For example, if testing them on terminology, I usually start with matching, then test them again later in the semester with a completion format.
For practical skills I use a rubric; students are tested on several different points, demonstrating their abilities or skills.
I combine different formats for their final grades, because certain students do well in one format and not so well in another.
Hi Julia - I think that good schools work at finding ways to improve, and looking at assessment is necessary for continuous improvement.
I like multiple choice tests for cognitive assessment and practical evaluations for skill practice or demonstration. Usually 70% is a passing score for the cognitive assessments. I use a scoring instrument for practice teaching. Each student is given a copy of the scoring criteria so they know exactly what areas will be assessed. Right now there are only three options for each item: satisfactory, improvement recommended, and not observed. So the assessment is go/no-go. There is a narrative area on the back of the form to provide specific constructive feedback. I may move to a 5-point Likert scale in the future, but for now I want to stick with the established form until the instructors are better at constructing their lesson plans. Too much change at one time is not good for staff motivation.
You bring up a good point, Barbara - some students may indeed have the theory down well but cannot apply it. Nevertheless, if the competency requires demonstration, that is what the student must deliver.
The format that I utilize is practical application (hands-on) to establish (1) that the students are competent in the objectives presented; and (2) that there is a definite measurement of their results.
For example, in software application classes, the theory, terms, etc., are presented throughout the course. However, some students are excellent at retaining theory but not as proficient in critical thinking or demonstrating their knowledge.
I use a combined test in my Clothing Construction class for the midterm. The test has two parts. Part 1 is hands-on: the students have to sit with me individually and sew a particular sample. It is timed, and they can't refer to notes or the book. It really helps me see how much of the demos they have retained and whether they have been practicing outside of class to improve their skills. Part 2 is written: the students have to write down the step-by-step instructions for how to construct a particular seam or sample. This shows me whether the student has an understanding of the "order of construction" as well as their critical thinking skills.
The class I teach uses fill-in-the-blank tests, and the students are used to taking multiple choice tests. When informed of the type of testing, the students often complain, but after taking it they usually find it easier, and I know that for the most part they know the information and didn't just guess.
As far as knowledge, we use standardized tests. Skills are measured on actual task completion meeting industry standards. During skill testing, however, instructors are required to verify knowledge by questioning.
In my opinion, and I may be wrong:
The most successful keys I have to offer here are: “There is no such thing as a dumb question” and “If you don’t understand, ASK.” I like to use a wide variety of formats. Matching I like when I direct students toward terminology, and I support those with multiple choice to keep them thinking in the same direction. In the health care arena there are some questions that could be ambiguous if not clarified prior to the question. By using memory joggers the students get into their mode of “thinking,” and then when you give them a problem-oriented question from a series of must-knows, through redundancy the material is not only known but programmed. Questions never follow the sequence of the matching. I’m sure a majority of instructors here have found that redundancy, or continual exposure, is what dominates success in their class, whether it is math or medical terminology. “Practice does enhance perfection.” Essay questions are great for finals; I like them. Some instructors fuss… I don’t… It’s a tool that helps me. I like to think the effort I put into a question deserves effort in return from the student. If you have reached them, you are rewarded. It also helps me gauge where the weaknesses are and what I can use or do to enhance those areas.