Unit 3: Construction of Teacher-Made Tests

By Notes Vandar

3.1 Concept of teacher-made test

A teacher-made test is an assessment tool specifically designed by a teacher to evaluate students’ understanding of the material covered in a classroom setting. Unlike standardized tests, which are created by external organizations and administered on a large scale, teacher-made tests are tailored to the specific content, objectives, and needs of a particular class or course.

These tests are an essential part of the instructional process, as they allow teachers to gauge student learning, identify areas of improvement, and adjust their teaching strategies accordingly. Teacher-made tests vary in format, including multiple-choice questions, essays, true/false questions, short answer questions, and other forms of assessment based on the subject matter and learning objectives.


Key Features of Teacher-Made Tests

  1. Customized to Curriculum:
    • Teacher-made tests are specifically designed to align with the curriculum and learning objectives of a particular course. They focus on the material that has been taught in class, ensuring relevance to the students’ experiences.
  2. Flexibility:
    • Teachers have full control over the content, format, and difficulty level of the test. This flexibility allows them to create assessments that reflect their teaching style and meet the needs of their students.
  3. Immediate Feedback:
    • Teacher-made tests offer the opportunity for immediate feedback, helping teachers assess student performance quickly and make adjustments to their instruction as needed.
  4. Formative or Summative:
    • These tests can be used for both formative assessments (to monitor student progress during a unit) and summative assessments (to evaluate learning at the end of a unit or course).
  5. Varied Question Types:
    • Teachers can include a variety of question types depending on the skills they want to assess, such as critical thinking, problem-solving, or memorization.

Importance of Teacher-Made Tests

  1. Assessment of Specific Learning Objectives:
    • Teacher-made tests allow for the assessment of specific skills or knowledge that were targeted in the classroom, providing a more accurate measure of student learning based on what has actually been taught.
  2. Adaptation to Student Needs:
    • Teachers can modify their tests based on the learning needs, language proficiency, or learning styles of their students, ensuring that the assessment is fair and accessible.
  3. Encouraging Active Learning:
    • By creating their own tests, teachers can encourage active learning by asking students to apply concepts, think critically, and engage with the material in meaningful ways.
  4. Identifying Learning Gaps:
    • Teacher-made tests help identify areas where students may be struggling, enabling teachers to provide targeted instruction or additional support.
  5. Promoting Teaching and Learning Alignment:
    • Since teacher-made tests are aligned with classroom instruction, they provide a clear link between what is taught and what is assessed, promoting coherence in the learning process.

Advantages of Teacher-Made Tests

  1. Relevance:
    • The test is directly related to the content taught in class, making it highly relevant to the students’ learning experiences.
  2. Immediate Adjustments:
    • Teachers can quickly adjust the test or their teaching strategies based on the results to address student learning gaps.
  3. Greater Teacher Control:
    • Teachers can choose the specific learning outcomes they want to assess and the best methods for doing so, tailoring the test to fit their educational philosophy.
  4. Personalization:
    • The test can be customized to the diverse abilities, backgrounds, and needs of the students, allowing for differentiated instruction.

Limitations of Teacher-Made Tests

  1. Subjectivity:
    • Teacher-made tests may introduce subjectivity, especially in the case of essay questions or open-ended responses, depending on how clear and objective the grading criteria are.
  2. Inconsistency:
    • The quality and difficulty of teacher-made tests can vary greatly from one teacher to another, making it difficult to compare student performance across different classes or schools.
  3. Time-Consuming:
    • Developing a high-quality test that aligns with instructional goals can be time-consuming for teachers, especially when ensuring that the test is both valid and reliable.
  4. Potential for Bias:
    • There is a risk of bias in question selection, where teachers may unintentionally focus on certain topics while neglecting others, which can lead to unbalanced assessments.

 

3.2 Purposes of testing: Instructional, grading, diagnostic, selection, placement, counseling, curricular decisions and policy making

 

Testing serves a variety of purposes in educational settings, from guiding instructional decisions to influencing policy making. Each purpose addresses specific needs, ranging from evaluating student performance to informing broader educational strategies.


1. Instructional Purposes

Instructional testing refers to the use of tests to assess student learning and to provide feedback that can improve teaching and learning. It helps teachers identify how well students understand the material and adjust their teaching strategies accordingly.

  • Purpose: To guide day-to-day teaching and help teachers assess the effectiveness of their instruction.
  • Example: Quizzes or formative assessments used during lessons to gauge student comprehension and adjust teaching methods.
  • Outcome: Better alignment between teaching and learning, with adjustments made to instructional methods based on test results.

2. Grading

Grading tests provide a summative evaluation of a student’s performance over a given period. These tests assign scores or grades that summarize student achievement.

  • Purpose: To provide a final evaluation of student learning, often at the end of a course or unit.
  • Example: Final exams or end-of-term assessments that determine student grades.
  • Outcome: Provides students with a sense of their overall performance and progress, which can influence their academic and career choices.

3. Diagnostic Purposes

Diagnostic testing aims to identify students’ strengths and weaknesses in specific subject areas. This information can be used to tailor instruction or interventions to meet the individual needs of students.

  • Purpose: To identify learning difficulties or areas of weakness that need targeted support.
  • Example: Pre-tests or assessments designed to identify gaps in knowledge before instruction begins.
  • Outcome: Enables personalized interventions to address students’ learning difficulties, improving overall learning outcomes.

4. Selection

Selection tests are used to determine eligibility for entry into specific programs, schools, or courses. These tests are often competitive and aim to select the most qualified candidates.

  • Purpose: To identify individuals who meet certain criteria for admission or selection.
  • Example: College entrance exams (e.g., SAT, ACT) or job aptitude tests.
  • Outcome: Ensures that candidates selected for a program or position possess the necessary skills and knowledge.

5. Placement

Placement testing helps place students in appropriate levels or courses based on their current knowledge and abilities. It ensures that students begin their learning at the level that best suits their needs.

  • Purpose: To determine the most appropriate starting point for a student’s learning.
  • Example: Language placement tests or math assessments to place students in the correct level of courses.
  • Outcome: Students are placed in courses that match their proficiency levels, ensuring optimal learning environments.

6. Counseling

Testing for counseling provides information to guide educational and career decisions. These tests help students understand their strengths, weaknesses, interests, and aptitudes so they can make informed choices about their future.

  • Purpose: To support students in making educational and career decisions.
  • Example: Career aptitude tests or personality assessments that help students identify potential career paths.
  • Outcome: Students receive guidance on academic or career choices based on their test results, leading to more informed decisions.

7. Curricular Decisions

Testing for curricular decisions involves using test data to make decisions about curriculum development or modification. These tests can reveal gaps in the current curriculum or indicate areas where changes are needed to improve student learning outcomes.

  • Purpose: To inform changes in the curriculum based on student performance data.
  • Example: Statewide assessments that highlight areas where students are underperforming, leading to curriculum adjustments.
  • Outcome: Ensures that the curriculum is responsive to student needs and helps improve overall educational quality.

8. Policy Making

Testing for policy making refers to the use of large-scale test data to inform educational policies and decisions at local, state, or national levels. Policymakers use test results to shape strategies that improve education systems.

  • Purpose: To guide educational policies and reforms based on evidence from test data.
  • Example: National assessments (e.g., NAEP) used to inform decisions about funding, curriculum standards, or teacher evaluations.
  • Outcome: Data-driven decisions that aim to improve educational outcomes at a systemic level, promoting equity and quality in education.

 

3.3 Types of test items
3.3.1 Subjective test: types, construction and uses
3.3.2 Objective test items: types, construction and uses

 

3.3.1 Subjective Test: Types, Construction, and Uses

Subjective tests are assessments where students are required to produce responses that involve more extensive thought, explanation, or demonstration of higher-order thinking skills. Unlike objective tests, which have definitive right or wrong answers, subjective tests rely on the student’s ability to organize, express, and defend their ideas. The scoring of these tests often involves interpretation and judgment on the part of the examiner.


Types of Subjective Test Items

  1. Short-Answer Questions:
    • Format: Students are asked to respond in one or two sentences to a specific question.
    • Purpose: Assesses basic understanding and factual recall but may also require some explanation or justification.
    • Example: “What is the capital of France?” or “Explain the difference between weather and climate.”
    • Use: Suitable for assessing conceptual understanding, factual recall, and brief explanations.
  2. Essay Questions:
    • Format: Students are prompted to respond to a question or topic by writing a detailed, structured response.
    • Purpose: Assesses complex understanding, ability to organize thoughts, and critical thinking skills. Essays require students to synthesize information, analyze arguments, and provide evidence for their views.
    • Example: “Discuss the causes and consequences of the Industrial Revolution.”
    • Use: Ideal for assessing students’ ability to reason, argue, explain, and think critically about complex topics.
  3. Reflective Journals or Logs:
    • Format: Students write reflective entries over a period, responding to prompts or documenting their learning process.
    • Purpose: Assesses personal reflection, growth in understanding, and the ability to link theory to practice.
    • Example: “Write a weekly journal reflecting on how the concepts you learned in class apply to real-world situations.”
    • Use: Used in subjects like education, nursing, or psychology to encourage self-reflection and deeper engagement with the material.
  4. Case Study Analysis:
    • Format: Students analyze a real-world situation or case, applying theoretical knowledge to suggest solutions or offer critical evaluations.
    • Purpose: Assesses students’ ability to apply knowledge to real-world scenarios, critical thinking, and problem-solving skills.
    • Example: “Analyze the following business case and recommend strategies for improving employee retention.”
    • Use: Commonly used in fields like business, law, and social sciences to evaluate the application of theoretical knowledge.
  5. Problem-Solving Exercises:
    • Format: Students are presented with a complex problem and must explain how they would solve it, often involving multiple steps or solutions.
    • Purpose: Assesses analytical thinking, procedural knowledge, and the ability to work through complex processes.
    • Example: “Solve the following algebraic equation and explain each step in your process.”
    • Use: Typically used in mathematics, engineering, and the sciences to assess procedural knowledge and problem-solving abilities.

Construction of Subjective Test Items

When constructing subjective test items, it’s essential to ensure clarity, fairness, and alignment with the learning objectives. Here are steps for constructing effective subjective test items:

  1. Clearly Define the Learning Objectives:
    • Ensure that the subjective test questions align with the goals of the lesson or course. Identify what you want students to demonstrate through their responses, whether it’s analytical thinking, factual recall, or problem-solving.
  2. Provide Clear, Unambiguous Prompts:
    • Subjective test questions should be clear and precise to avoid confusion. Vague or ambiguous prompts can lead to incomplete or misdirected responses.
    • Example: Instead of “Discuss climate change,” ask “Explain three causes of climate change and evaluate their potential long-term impact on global ecosystems.”
  3. Ensure a Range of Cognitive Skills is Tested:
    • Design questions that challenge students at different cognitive levels. Include questions that assess recall (e.g., definitions), comprehension (e.g., explain), and higher-order thinking (e.g., analyze, evaluate, create).
  4. Allow Flexibility in Responses:
    • Subjective test items should allow for multiple perspectives or solutions, encouraging students to think creatively and critically.
    • Example: An essay question might ask students to argue for or against a particular viewpoint, providing room for different perspectives.
  5. Create a Rubric or Scoring Guide:
    • Since subjective tests rely on interpretation, it’s crucial to have a well-constructed rubric for consistent grading. A rubric should outline the key points or criteria for evaluating the response, such as clarity of argument, use of evidence, organization, and accuracy.
    • Example: For an essay question, the rubric could allocate points for structure, argument development, use of examples, and grammar.
  6. Ensure Fairness:
    • Ensure that the questions are accessible to all students, taking into account different learning styles and backgrounds. Avoid culturally biased or overly difficult questions that could disadvantage certain students.

Uses of Subjective Test Items

Subjective test items are used to assess a wide range of cognitive and practical skills. They are particularly useful in evaluating:

  1. Higher-Order Thinking Skills:
    • Subjective tests require students to analyze, evaluate, and synthesize information rather than merely recall facts. These items are ideal for assessing deep understanding and critical thinking.
  2. Communication Skills:
    • Essays and short-answer questions assess the students’ ability to communicate their ideas effectively. Writing a coherent argument, organizing thoughts, and using appropriate evidence are all essential skills that subjective tests help develop.
  3. Problem-Solving Abilities:
    • Case studies and problem-solving exercises assess the students’ ability to apply knowledge to new situations and come up with solutions, a key skill in fields like engineering, law, and business.
  4. Creativity and Original Thought:
    • Subjective items allow students to express original ideas and creative solutions. They are especially useful in humanities, social sciences, and arts-related subjects where diverse viewpoints and creativity are valued.
  5. Individual Reflection and Growth:
    • Reflective journals or logs are used to track students’ learning over time, encouraging them to reflect on their progress and make connections between theory and practice.
  6. Holistic Understanding:
    • Subjective tests can assess a student’s overall understanding of a topic or course. Essays or long-answer questions often require students to demonstrate their knowledge across multiple aspects of a subject.

 

 

3.3.2 Objective Test Items: Types, Construction, and Uses

Objective test items are questions that have one correct answer, and they are scored without interpretation or judgment from the examiner. These tests are typically used to assess knowledge of facts, concepts, and procedures, and their reliability is often higher due to the consistency in scoring. The main advantage of objective test items is that they can be scored quickly and objectively, making them suitable for large-scale assessments.


Types of Objective Test Items

  1. Multiple-Choice Questions (MCQs):
    • Format: Consists of a question (stem) followed by several answer options, only one of which is correct.
    • Purpose: Used to assess a range of learning objectives, from recall of factual information to application and analysis.
    • Example: “Which of the following is the capital of France? (a) London (b) Paris (c) Rome (d) Madrid”
    • Use: Widely used in standardized testing, classroom assessments, and quizzes. Ideal for assessing broad content efficiently.
  2. True/False Questions:
    • Format: A statement is presented, and the student must decide whether it is true or false.
    • Purpose: Assesses knowledge of facts, concepts, or relationships.
    • Example: “The Earth revolves around the Sun. (True/False)”
    • Use: Useful for assessing basic factual knowledge, quick to answer and score. Commonly used in quizzes and assessments covering large amounts of material.
  3. Matching Items:
    • Format: Two columns are presented—one with prompts (e.g., terms, definitions), and the other with options that must be matched.
    • Purpose: Tests the ability to identify relationships between items, such as matching terms with definitions or causes with effects.
    • Example: Match the following terms to their definitions:
      • (a) Photosynthesis
      • (b) Mitosis
      • (c) Osmosis
      • Definitions: (1) Cell division process (2) Movement of water molecules (3) Process of converting sunlight to energy
    • Use: Good for assessing recognition of relationships, associations, or concepts.
  4. Fill-in-the-Blank (Completion) Items:
    • Format: A sentence or statement with a blank space that the student must fill in with the correct answer.
    • Purpose: Assesses the recall of specific information or facts.
    • Example: “The capital of Japan is ________.”
    • Use: Requires active recall of information, making it more challenging than recognition-based questions. Commonly used for testing terminology or key concepts.
  5. Short-Answer Questions:
    • Format: A question or statement that requires a brief, specific response.
    • Purpose: Assesses factual knowledge, definitions, or simple calculations.
    • Example: “What is the chemical symbol for water?”
    • Use: Used when concise, specific information is required. Though similar to fill-in-the-blank items, short-answer questions are more open-ended.

Construction of Objective Test Items

When constructing objective test items, the goal is to ensure that the items accurately assess the intended knowledge or skills while minimizing ambiguity and bias. Here are the key principles for constructing effective objective test items:

1. Define the Learning Objectives

  • Before writing test items, clearly identify what knowledge, skills, or abilities you want to assess. Ensure that each question aligns with these objectives.

2. Write Clear and Concise Stems

  • The question or statement (stem) should be clear, unambiguous, and focused on one main idea. Avoid using complex wording or irrelevant information that might confuse students.
  • Example: Instead of “What is the reason that trees produce oxygen?” write “Why do trees produce oxygen?”

3. Avoid Tricky or Confusing Wording

  • Avoid using negative wording (e.g., “Which of the following is NOT…”) unless absolutely necessary, as this can confuse students. If negatives are used, they should be emphasized to ensure clarity.
  • Example: Instead of “Which of the following is NOT true?” write “Which of the following is false?”

4. Create Plausible Distractors

  • In multiple-choice questions, distractors (incorrect options) should be plausible to prevent students from easily eliminating them. They should be based on common misconceptions or errors students might make.
  • Example: If the correct answer is “Paris,” the distractors could be “London,” “Rome,” and “Berlin” (all plausible capital cities).

5. Ensure One Clear Correct Answer

  • For multiple-choice and true/false items, there should be only one correct or best answer. Avoid “all of the above” or “none of the above” options, as these can lead to guessing.
  • Example: In the question “Which of the following is a mammal?” the correct answer should be the only option that fits the definition of a mammal.

6. Ensure Validity and Reliability

  • Test items should assess what they are intended to assess (validity) and produce consistent results when administered in different contexts (reliability). Avoid biased or culturally specific items that may disadvantage certain students.

7. Pilot the Items

  • Before using objective test items in high-stakes assessments, consider piloting them with a sample group of students to identify any issues with clarity, difficulty, or discrimination (how well the item differentiates between high- and low-performing students).

Uses of Objective Test Items

Objective test items are versatile and can be used in a variety of educational settings. Here are some common uses:

  1. Standardized Testing:
    • Objective test items, especially multiple-choice and true/false questions, are commonly used in standardized tests (e.g., SAT, ACT, GRE) due to their ability to efficiently assess a large number of students and provide reliable data.
  2. Classroom Assessments:
    • Teachers frequently use objective test items in quizzes, midterms, and final exams to evaluate student understanding of key concepts. They allow for quick and easy grading, especially when there is limited time for scoring.
  3. Formative Assessments:
    • Objective items are useful in formative assessments, such as practice quizzes or online tests, to provide immediate feedback to students on their performance and help guide further study.
  4. Summative Assessments:
    • Objective items are often used in summative assessments to measure students’ overall performance at the end of a course or unit. They allow for a broad coverage of content and can be scored quickly.
  5. Diagnostic Testing:
    • Objective test items can be used in diagnostic tests to identify students’ strengths and weaknesses in specific areas. This helps inform instructional decisions and interventions.
  6. Assessment of Knowledge Across Large Groups:
    • When it is necessary to assess knowledge in a large group of students, objective items are ideal because they are easy to administer, score, and analyze. They are often used in settings like placement exams or state-mandated assessments.

Advantages of Objective Test Items

  • Efficient Scoring: Objective items can be scored quickly and reliably, either manually or using automated systems.
  • Wide Coverage: They allow for a broad assessment of knowledge, as many questions can be included in a short period.
  • High Reliability: Since the answers are clear-cut, objective tests tend to have high reliability and consistency in scoring.
  • Less Subjective Bias: Scoring is objective, reducing the possibility of bias that can occur in subjective assessments.

Disadvantages of Objective Test Items

  • Limited Assessment of Higher-Order Skills: While well-constructed multiple-choice questions can assess higher-order thinking, many objective test items focus primarily on recall and recognition of facts rather than critical thinking or problem-solving.
  • Encourages Guessing: Especially in true/false or multiple-choice formats, students may guess answers, which can affect the accuracy of the results.
  • Construction Difficulty: Writing good objective items, especially multiple-choice questions with plausible distractors, can be challenging and time-consuming.

 

3.4 Taxonomy of Educational Objectives: Cognitive Domain

The cognitive domain is one of the three domains in Bloom’s Taxonomy of educational objectives, which categorizes learning into different levels of complexity and depth. Developed by Benjamin Bloom and his colleagues in the 1950s, the cognitive domain focuses on intellectual skills and is essential in assessing and developing students’ knowledge, comprehension, and critical thinking abilities.

Bloom’s Taxonomy is widely used in educational settings to design curricula, assessments, and instructional methods, ensuring that teaching is geared toward not just rote memorization but also higher-order thinking.


Bloom’s Original Cognitive Domain

The original taxonomy categorized cognitive skills into six hierarchical levels, moving from basic recall of information to more complex and abstract forms of thinking. The levels are:

  1. Knowledge (Recall of Facts and Basic Concepts)
  2. Comprehension (Understanding Information)
  3. Application (Using Information in New Situations)
  4. Analysis (Breaking Down Information into Components)
  5. Synthesis (Combining Information to Create Something New)
  6. Evaluation (Judging or Critiquing Information)

In 2001, the taxonomy was revised to reflect modern educational theory, and the categories were renamed to be more dynamic. The revised taxonomy uses action verbs, reflecting the progression from simpler to more complex thinking processes.


Revised Bloom’s Taxonomy (Cognitive Domain)

  1. Remembering: Recall facts and basic concepts.
    • Key Actions: Define, list, memorize, repeat, identify.
    • Examples:
      • “List the stages of mitosis.”
      • “Identify the main characters in the story.”
  2. Understanding: Explain ideas or concepts.
    • Key Actions: Describe, explain, summarize, interpret, classify.
    • Examples:
      • “Explain the theory of evolution in your own words.”
      • “Summarize the main points of the article.”
  3. Applying: Use information in new, concrete situations.
    • Key Actions: Solve, use, demonstrate, apply, implement.
    • Examples:
      • “Apply Newton’s second law of motion to solve a real-world problem.”
      • “Use the concept of supply and demand to explain the price of goods.”
  4. Analyzing: Break down information into parts and examine relationships.
    • Key Actions: Differentiate, organize, relate, compare, contrast, dissect.
    • Examples:
      • “Analyze the structure of a sonnet.”
      • “Compare the economic policies of two countries.”
  5. Evaluating: Make judgments based on criteria or standards.
    • Key Actions: Critique, assess, evaluate, defend, support.
    • Examples:
      • “Evaluate the effectiveness of a marketing strategy.”
      • “Critique an argument presented in a debate.”
  6. Creating: Produce new or original work.
    • Key Actions: Design, create, construct, develop, formulate.
    • Examples:
      • “Design an experiment to test a scientific hypothesis.”
      • “Develop a new business plan for a startup.”

Levels of the Revised Cognitive Domain

  1. Remembering:
    • This is the most basic level, where students are expected to recall or recognize previously learned information. It involves retrieving, recognizing, and recalling knowledge.
    • Key Examples: Identifying terms, recalling specific facts or formulas, retrieving definitions, or recognizing information.
  2. Understanding:
    • At this level, students show that they comprehend the meaning of the material. They can explain ideas, restate information in their own words, or paraphrase key concepts.
    • Key Examples: Summarizing a concept, explaining the relationship between two ideas, or interpreting a text or diagram.
  3. Applying:
    • In this stage, students take what they have learned and apply it in a real-world situation. It involves using knowledge in different contexts, solving problems, or carrying out a procedure.
    • Key Examples: Solving math problems using learned formulas, applying a scientific principle in an experiment, or using grammar rules to correct sentences.
  4. Analyzing:
    • Here, students break down complex information into its components to understand relationships, patterns, or connections. It involves differentiating, comparing, and organizing information.
    • Key Examples: Analyzing the causes and effects of historical events, comparing theories, or identifying the components of an argument.
  5. Evaluating:
    • Evaluation involves making judgments based on criteria or standards. Students at this level can assess the value or quality of an idea, argument, or solution and provide a rationale for their judgment.
    • Key Examples: Critiquing a piece of literature, evaluating the effectiveness of an experiment, or defending an opinion based on evidence.
  6. Creating:
    • The highest level of the taxonomy involves putting together elements in a novel way or creating something new. This requires synthesis of knowledge and skills to generate innovative ideas or products.
    • Key Examples: Writing a short story, developing a new theory, or designing a project.

Importance of Bloom’s Cognitive Domain in Education

  1. Structured Learning Goals:
    • Bloom’s Taxonomy helps educators structure learning goals from simple knowledge recall to complex critical thinking tasks. It provides a framework to ensure students progressively develop deeper understanding and higher-order thinking skills.
  2. Assessment and Evaluation:
    • Teachers can design assessments that target different cognitive levels. For example, quizzes or tests might start with remembering and understanding questions before moving to applying and analyzing tasks.
  3. Curriculum Design:
    • By using Bloom’s Taxonomy, curriculum developers can create lessons that scaffold learning from basic concepts to more sophisticated problem-solving and critical thinking activities.
  4. Student-Centered Learning:
    • Bloom’s Taxonomy encourages a shift from teacher-centered instruction to student-centered learning, where students are actively involved in constructing their knowledge, applying it, and thinking critically.
  5. Differentiation:
    • Teachers can differentiate instruction by providing tasks at varying levels of cognitive demand, allowing students to work at their own pace or challenge themselves to move from basic recall to higher-order thinking.

Examples of Cognitive Domain Objectives

  • Remembering: “Recall the definition of photosynthesis.”
  • Understanding: “Explain how photosynthesis contributes to the growth of plants.”
  • Applying: “Use your knowledge of photosynthesis to design a model that explains the process.”
  • Analyzing: “Compare the processes of photosynthesis and cellular respiration.”
  • Evaluating: “Evaluate the impact of deforestation on the carbon cycle and photosynthesis.”
  • Creating: “Develop a plan to enhance photosynthesis in urban farming systems.”

 

3.5 Teacher-Made Test: Construction Process

A teacher-made test is an assessment tool developed by an individual teacher to measure students’ understanding and learning of specific course content. These tests are designed to meet the specific objectives of a class or unit, allowing teachers to assess how well students have grasped particular concepts and skills. Unlike standardized tests, teacher-made tests are more flexible and can be tailored to the unique needs of a classroom.

The construction of a teacher-made test is a systematic process that involves careful planning, item creation, and review to ensure validity, reliability, and fairness. Below is the step-by-step construction process for creating an effective teacher-made test.


Steps in Constructing a Teacher-Made Test

1. Defining the Purpose of the Test

Before constructing a test, the teacher must clearly define its purpose. This involves identifying the following:

  • What the test aims to assess (knowledge, skills, attitudes).
  • Why the test is being given (formative, summative, diagnostic).
  • When the test will be administered (mid-term, end of unit, final exam).

Examples of test purposes:

  • To assess students’ mastery of a unit on photosynthesis (summative).
  • To identify students’ misconceptions about fractions (diagnostic).
  • To monitor student progress during a math unit (formative).

2. Identifying Learning Objectives

The next step is to clearly outline the learning objectives the test will assess. These objectives should align with the curriculum and reflect what students are expected to learn. Objectives can be framed using Bloom’s Taxonomy to ensure coverage of different cognitive levels (remembering, understanding, applying, analyzing, evaluating, and creating).

Examples of learning objectives:

  • “Students will be able to define key terms related to photosynthesis.”
  • “Students will be able to solve quadratic equations using the quadratic formula.”
  • “Students will be able to analyze the causes of World War I.”

3. Choosing the Test Format

Based on the learning objectives and purpose of the test, the teacher must decide on the most appropriate test format. Common types of test items include:

  • Objective items: multiple-choice, true/false, matching, and short-answer.
  • Subjective items: essay questions, open-ended questions, performance tasks.

Each type of item serves a different purpose:

  • Objective items are useful for testing factual recall and comprehension.
  • Subjective items allow for the assessment of higher-order thinking skills, such as analysis, synthesis, and evaluation.

4. Preparing a Test Blueprint (Table of Specifications)

A test blueprint, or table of specifications, is a matrix that helps teachers organize the test to ensure it covers all necessary content areas and cognitive levels. It maps out how many questions will be asked for each topic and at what cognitive level (e.g., knowledge, comprehension, application).

Example of a test blueprint for a science test:

Topic            | Knowledge | Comprehension | Application | Total Questions
Photosynthesis   | 3         | 2             | 2           | 7
Cell Respiration | 2         | 3             | 2           | 7
Ecology          | 3         | 3             | 2           | 8
Total Questions  | 8         | 8             | 6           | 22
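
A blueprint like this can also be kept as simple data so that its row and column totals are checked automatically. The following Python sketch is only illustrative; the topic names and counts simply mirror the example table above.

```python
# Illustrative sketch: the example blueprint as a dictionary, with totals checked.
blueprint = {
    "Photosynthesis":   {"Knowledge": 3, "Comprehension": 2, "Application": 2},
    "Cell Respiration": {"Knowledge": 2, "Comprehension": 3, "Application": 2},
    "Ecology":          {"Knowledge": 3, "Comprehension": 3, "Application": 2},
}

# Row totals: questions planned per topic.
for topic, by_level in blueprint.items():
    print(f"{topic}: {sum(by_level.values())} questions")

# Column totals: questions planned per cognitive level, plus the overall total.
levels = ["Knowledge", "Comprehension", "Application"]
column_totals = {lvl: sum(row[lvl] for row in blueprint.values()) for lvl in levels}
print(column_totals)               # {'Knowledge': 8, 'Comprehension': 8, 'Application': 6}
print(sum(column_totals.values())) # 22
```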

5. Writing Test Items

Once the format and content areas are defined, the next step is to write the test items. Each item should be:

  • Aligned with learning objectives: Ensure that each item measures the intended objective.
  • Clear and unambiguous: Avoid confusing wording or irrelevant information.
  • Appropriate for the students’ level: Ensure that the difficulty of the questions matches the students’ abilities.

Tips for writing different types of items:

  • Multiple-choice questions: Write a clear question (stem) with one correct answer and plausible distractors.
    • Example: “Which process is responsible for converting sunlight into chemical energy?”
  • True/False questions: Keep the statements simple and avoid double negatives.
    • Example: “The Earth orbits around the Sun. (True/False)”
  • Essay questions: Ask open-ended questions that allow students to demonstrate critical thinking.
    • Example: “Explain how photosynthesis and cellular respiration are interrelated.”

6. Assembling the Test

After the items are written, the teacher assembles the test by:

  • Organizing questions by topic or difficulty level.
  • Mixing question types if desired (e.g., starting with objective questions, followed by subjective ones).
  • Ensuring clear instructions for each section or item type.
  • Arranging space for students to answer subjective questions or show their work.

Considerations:

  • Balanced difficulty: Avoid putting too many difficult or too many easy questions together.
  • Logical sequence: Arrange questions so that they flow logically, making the test easier to navigate.

7. Reviewing and Revising the Test

Before administering the test, it is essential to review and revise it to identify any errors or ambiguities. The teacher should check for:

  • Clarity of instructions: Ensure all instructions are easy to understand.
  • Item validity: Confirm that each question tests the intended knowledge or skill.
  • Item difficulty: Review whether the difficulty level of each item is appropriate for the students.

Peer review: Asking a colleague to review the test can help catch mistakes or unclear items.


8. Administering the Test

When administering the test, teachers should:

  • Provide clear instructions to students about time limits, materials allowed (e.g., calculators), and how to respond to different types of questions.
  • Ensure a fair testing environment, minimizing distractions and ensuring that all students have equal opportunities to perform their best.

9. Scoring the Test

The scoring process depends on the type of test items:

  • Objective items: These are scored easily and consistently. For multiple-choice or true/false questions, a key can be used for efficient grading.
  • Subjective items: Essays or open-ended questions require a rubric to ensure consistent and fair scoring. Rubrics should outline specific criteria, such as content accuracy, organization, and critical thinking.

Example rubric for an essay:

Criterion         | Excellent (5 points)             | Good (4 points)  | Fair (3 points)   | Poor (2 points)
Content knowledge | Accurate and thorough            | Mostly accurate  | Somewhat accurate | Inaccurate
Organization      | Well-organized                   | Mostly organized | Lacks clarity     | Disorganized
Critical thinking | Demonstrates high-level analysis | Some analysis    | Limited analysis  | No analysis

10. Analyzing Test Results

After scoring, the teacher should analyze the test results to evaluate both student performance and the effectiveness of the test itself. Item analysis can help identify:

  • Difficulty index: What proportion of students answered each item correctly?
  • Discrimination index: How well does each item distinguish between high and low performers?

Based on this analysis, the teacher can determine whether certain questions were too difficult or easy and whether the test accurately measured the learning objectives.
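
As an illustration of how these two indices are often computed, the short Python sketch below uses the common upper/lower-group method; the response data and group sizes are invented for the example.

```python
# Illustrative item analysis for a single test item (invented data).
# Students are first ranked by total test score; the upper group is drawn from
# the top scorers and the lower group from the bottom scorers (often the top
# and bottom 27%). Each entry is 1 if the student answered the item correctly.
upper_group = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]   # 8 of 10 correct
lower_group = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # 3 of 10 correct

# Difficulty index (p): proportion of all students answering the item correctly.
p = (sum(upper_group) + sum(lower_group)) / (len(upper_group) + len(lower_group))

# Discrimination index (D): proportion correct in the upper group minus the
# proportion correct in the lower group. Values near +1 discriminate well;
# values near zero or below suggest the item needs revision.
D = sum(upper_group) / len(upper_group) - sum(lower_group) / len(lower_group)

print(f"Difficulty index p = {p:.2f}")      # 0.55
print(f"Discrimination index D = {D:.2f}")  # 0.50
```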

 

3.5.1 Planning the test
• Writing instructional objectives
• Preparing specification chart

 

3.5.1 Planning the Test

Test planning is the first and most critical step in the construction of any teacher-made test. Proper planning ensures that the test accurately assesses students’ knowledge and skills, aligns with instructional goals, and is fair and balanced. This phase involves two key activities: writing instructional objectives and preparing a specification chart (also known as a test blueprint).


1. Writing Instructional Objectives

Instructional objectives are specific statements that define what students should know or be able to do by the end of a lesson, unit, or course. These objectives form the basis of the test and guide the development of test items.

Characteristics of well-written instructional objectives:

  • Specific: Clearly describe what the student will be able to do.
  • Measurable: Can be assessed or measured through a test or performance.
  • Observable: Focus on observable behaviors or skills (e.g., “describe,” “analyze,” “compare”).
  • Aligned with Bloom’s Taxonomy: Instructional objectives should represent different levels of cognitive complexity, from basic knowledge to higher-order thinking skills.

Examples:

  • “Students will be able to define the process of photosynthesis.” (Knowledge)
  • “Students will be able to solve quadratic equations using the quadratic formula.” (Application)
  • “Students will be able to analyze the causes of climate change.” (Analysis)

2. Preparing a Specification Chart (Test Blueprint)

A specification chart, or test blueprint, is a tool used to organize the content and cognitive levels that the test will cover. It maps out the distribution of questions across various topics and learning objectives, ensuring that the test is comprehensive and balanced.

Steps to Prepare a Specification Chart:

  1. List the content areas: Identify the major topics or units that were covered during instruction (e.g., algebra, biology, history).
  2. Identify the cognitive levels: Based on Bloom’s Taxonomy, define the cognitive levels you want to assess, such as remembering, understanding, applying, analyzing, etc.
  3. Assign the number of questions: Allocate a certain number of questions to each topic and cognitive level, ensuring that more time and emphasis are given to the most important content and skills.

Example of a Specification Chart for a science test:

Content Area     | Knowledge | Comprehension | Application | Total Questions
Photosynthesis   | 2         | 3             | 1           | 6
Cell Respiration | 3         | 2             | 2           | 7
Ecosystems       | 1         | 2             | 3           | 6
Total            | 6         | 7             | 6           | 19

This blueprint ensures that the test will:

  • Cover key content areas (e.g., photosynthesis, ecosystems).
  • Assess a variety of cognitive levels, including knowledge, comprehension, and application.
  • Distribute questions fairly based on the importance of each topic.

Benefits of Planning the Test

  1. Alignment with Instruction: The test reflects the learning goals and ensures that students are assessed on what they have been taught.
  2. Balanced Coverage: The test blueprint guarantees that all essential content areas are included and that the cognitive demands of the test are appropriate.
  3. Fairness: Ensures that the test is unbiased, with no overrepresentation of one topic or cognitive level.
  4. Efficiency: Simplifies the process of test construction, making it easier to develop items that meet instructional objectives.

 

  • Writing Instructional Objectives

Instructional objectives are clear, concise statements that specify what students should know or be able to do after a lesson, unit, or course. These objectives guide both instruction and assessment, providing a framework for teachers to evaluate student progress and ensure alignment with the curriculum. When writing instructional objectives, it’s essential to be specific, measurable, and focused on observable student behavior.


Key Characteristics of Well-Written Instructional Objectives

  1. Specific: Clearly defines what students are expected to learn.
  2. Measurable: Can be assessed using tests, projects, or other evaluation tools.
  3. Observable: Focuses on actions or skills that can be observed, such as “analyze,” “define,” or “solve.”
  4. Aligned with Learning Outcomes: Instructional objectives should support broader learning goals and curriculum standards.
  5. Varied Cognitive Levels: Use Bloom’s Taxonomy to ensure that objectives address different cognitive levels, from simple recall to higher-order thinking skills.

Steps for Writing Instructional Objectives

1. Start with a Clear Action Verb

  • Use action verbs that describe observable and measurable behavior. These verbs are often derived from Bloom’s Taxonomy and reflect various levels of cognitive complexity.
  • Examples:
    • Knowledge: define, list, name, recall, identify.
    • Comprehension: describe, explain, summarize, interpret.
    • Application: solve, apply, demonstrate, use.
    • Analysis: analyze, compare, contrast, differentiate.
    • Synthesis: create, design, formulate, construct.
    • Evaluation: evaluate, justify, critique, defend.

2. Focus on the Learning Outcome

  • Specify what students will know or be able to do after the learning process. The outcome should be linked to a concept, skill, or task.

    Example:

    • “Students will be able to define the key components of the water cycle.”

3. Include Clear Conditions (if applicable)

  • State the conditions under which the student will perform the task, such as “given a map” or “with the use of a calculator.”

    Example:

    • “Given a list of biological terms, students will be able to match each term with its correct definition.”

4. Specify the Criteria for Success

  • Determine how well students need to perform to meet the objective. This may include criteria like accuracy, time limits, or the degree of correctness.

    Example:

    • “Students will be able to solve quadratic equations with 80% accuracy.”

Examples of Instructional Objectives

  1. Lower Cognitive Level (Knowledge):
    • “Students will be able to list the five main causes of the American Revolution.”
  2. Medium Cognitive Level (Application):
    • “Students will be able to apply Newton’s second law to solve problems involving force, mass, and acceleration.”
  3. Higher Cognitive Level (Analysis):
    • “Students will be able to analyze the relationship between supply and demand in various economic scenarios.”

Bloom’s Taxonomy and Writing Instructional Objectives

Using Bloom’s Taxonomy is a helpful strategy for writing instructional objectives that target different levels of cognitive skills. The taxonomy consists of six levels, from lower-order thinking to higher-order thinking:

  1. Remembering: Recalling facts and basic concepts.
    • Verbs: define, recall, list, name.
    • Example: “Students will be able to list the stages of the cell cycle.”
  2. Understanding: Explaining ideas or concepts.
    • Verbs: explain, summarize, describe, interpret.
    • Example: “Students will be able to summarize the key points of the Civil Rights Movement.”
  3. Applying: Using information in new situations.
    • Verbs: apply, solve, use, demonstrate.
    • Example: “Students will be able to apply the Pythagorean theorem to solve right triangle problems.”
  4. Analyzing: Breaking down information into parts.
    • Verbs: analyze, compare, contrast, differentiate.
    • Example: “Students will be able to analyze the similarities and differences between photosynthesis and cellular respiration.”
  5. Evaluating: Making judgments based on criteria.
    • Verbs: evaluate, critique, justify, defend.
    • Example: “Students will be able to evaluate the effectiveness of different leadership styles in business settings.”
  6. Creating: Producing new or original work.
    • Verbs: create, design, formulate, construct.
    • Example: “Students will be able to design a sustainable energy plan for a community.”

 

  • Preparing a Specification Chart (Test Blueprint)

A specification chart, also known as a test blueprint, is a vital tool used in the planning stage of a teacher-made test. It serves as a guide for constructing an assessment by mapping out how test items will align with instructional objectives and ensuring that all key areas of the curriculum are properly covered. The specification chart ensures that the test is balanced in terms of content and cognitive levels (from lower-order thinking to higher-order thinking) and that no important topic or skill is under- or overrepresented.


Steps to Prepare a Specification Chart

1. List the Content Areas (Topics)

  • Identify the key topics or units that were taught during the instructional period.
  • These could be major sections from the curriculum, textbook chapters, or important themes of the subject matter.

    Example (for a math test):

    • Algebra
    • Geometry
    • Trigonometry

2. Determine the Cognitive Levels

  • Use Bloom’s Taxonomy to define the cognitive levels you want to assess, such as:
    • Knowledge (remembering facts or information)
    • Comprehension (understanding the material)
    • Application (using knowledge to solve problems)
    • Analysis (breaking information into parts)
    • Synthesis (combining elements to form a whole)
    • Evaluation (judging the value or effectiveness of something)

    You should ensure a mix of lower-order and higher-order cognitive levels, depending on the complexity of the content being assessed.

3. Assign Weight to Each Content Area

  • Allocate a percentage or number of items to each content area based on its importance, instructional time, or the emphasis placed during the course.
  • Topics that were covered more extensively in class should have more weight on the test.

    Example:

    • Algebra (40%)
    • Geometry (30%)
    • Trigonometry (30%)

4. Assign Weight to Cognitive Levels

  • Allocate the number of questions for each cognitive level within the content areas. More straightforward topics may have more lower-order questions (Knowledge, Comprehension), while more complex topics might require higher-order thinking (Application, Analysis, etc.).

5. Determine the Number of Questions

  • Decide on the total number of questions for the test. Then, divide these questions among the content areas and cognitive levels in proportion to the weights assigned.
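
    For example, if the test will contain 20 questions in total and the topic weights from Step 3 are applied (Algebra 40%, Geometry 30%, Trigonometry 30%), Algebra would receive 8 questions and Geometry and Trigonometry 6 questions each; the total of 20 here is purely illustrative.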

Example of a Specification Chart for a Science Test

Content Area         | Knowledge | Comprehension | Application | Total Questions
Photosynthesis       | 2         | 3             | 1           | 6
Cellular Respiration | 3         | 2             | 2           | 7
Ecosystems           | 1         | 2             | 3           | 6
Total                | 6         | 7             | 6           | 19

In this example:

  • Photosynthesis has 6 questions, broken down into 2 questions for Knowledge, 3 for Comprehension, and 1 for Application.
  • Cellular Respiration has 7 questions, with a balance of questions assessing different cognitive levels.
  • Ecosystems has a mix of questions from lower to higher-order cognitive skills.

Benefits of Using a Specification Chart

  1. Balanced Coverage: Ensures all critical content areas and skills are covered, preventing any topic from being over- or under-emphasized.
  2. Fair Distribution: Helps distribute the number of questions fairly across cognitive levels, ensuring that students are tested on a range of skills.
  3. Alignment with Objectives: Ensures that the test aligns with the instructional objectives and learning outcomes that were set at the beginning of the course.
  4. Focus on Important Areas: Helps teachers focus on essential areas of the curriculum, reducing the likelihood of including irrelevant or trivial content.
  5. Guidance for Test Construction: Serves as a roadmap for constructing the test, simplifying the process of writing or selecting appropriate test items.

 

3.5.2 Preparing the test
• Preparing test items
• Preparing instructions
• Preparing scoring key and marking scheme

3.5.2 Preparing the Test

Preparing a test is a crucial step in ensuring that assessments are aligned with instructional objectives and provide an accurate measure of student learning. This process involves three key components:

  1. Preparing test items (questions)
  2. Preparing instructions
  3. Preparing a scoring key and marking scheme

1. Preparing Test Items

Test items are the core of the assessment. They must be carefully designed to match the instructional objectives and the specification chart. Test items can be objective or subjective, depending on the skills and knowledge being assessed.

A. Types of Test Items

  • Objective Items: Have a single correct answer, which allows for easy and consistent grading. Examples include:
    • Multiple-Choice Questions (MCQs): A question with several answer options, one of which is correct.
    • True/False: A statement that students must mark as true or false.
    • Matching: Pairing related items, such as terms and their definitions.
    • Fill-in-the-Blank: Students complete a sentence or statement by filling in the missing information.
  • Subjective Items: Require students to construct their own answers, which may vary in length and depth. Examples include:
    • Short Answer: Brief written responses.
    • Essay: Longer, more detailed responses that require critical thinking and synthesis.
    • Problem-Solving: Solving a problem, often used in math or science assessments.

B. Guidelines for Writing Test Items

  • Ensure Alignment: Test items should match the learning objectives and reflect the cognitive levels in the specification chart.
  • Use Clear Language: Write questions that are straightforward and free of ambiguity.
  • Avoid Clues: Don’t include hints in the wording that could inadvertently reveal the answer.
  • Consider Difficulty Level: Balance easy, moderate, and difficult questions based on the students’ abilities.
  • Test Different Cognitive Levels: Include a mix of knowledge recall, application, and analysis questions.

2. Preparing Instructions

Clear instructions are essential to ensure that students understand how to approach each section or type of question in the test.

A. General Instructions

  • Overview: Provide an overview of the test structure, the types of questions included, and the total marks.
  • Time Limit: State how much time students have to complete the test.
  • Scoring Information: Indicate the marks allocated to each section or question, helping students manage their time.

B. Specific Instructions for Each Section

  • For each section (e.g., multiple-choice, essay), provide specific instructions on how students should answer. For example:
    • Multiple-Choice Questions: “Circle the letter of the correct answer.”
    • Essay Questions: “Write a well-organized essay discussing the key factors. Provide examples to support your arguments.”

C. Example of Instructions

  • “This test consists of 20 multiple-choice questions and 2 essay questions. Each multiple-choice question is worth 2 points, and each essay question is worth 10 points. You have 60 minutes to complete the test. Answer all questions.”

3. Preparing the Scoring Key and Marking Scheme

The scoring key and marking scheme ensure consistent and objective evaluation of student responses, whether the test contains objective or subjective items.

A. Scoring Key (Objective Items)

  • A scoring key provides the correct answers for objective questions (e.g., multiple-choice, true/false). It ensures that these questions are graded quickly and accurately.
  • Example:
    • Question 1: A
    • Question 2: B
    • Question 3: C

B. Marking Scheme (Subjective Items)

  • A marking scheme defines how points will be awarded for subjective questions like essays or problem-solving tasks. It typically outlines specific criteria that the answer must meet.
  • Example of a Marking Scheme:
    • Essay Question: “Describe the impact of the French Revolution on European politics.”
      • Content (10 points): Key impacts identified and discussed (5 points), examples provided (5 points)
      • Organization (5 points): Logical structure, clear introduction and conclusion
      • Grammar/Clarity (5 points): Correct use of language and clear argumentation

C. Rubric for Subjective Items

  • In some cases, a rubric is used to provide more detailed criteria for scoring subjective items. A rubric defines levels of performance (e.g., excellent, good, fair, poor) and assigns point values based on the quality of the response.
  • Example Rubric for an Essay:
    • Excellent (10 points): The response is comprehensive, addresses all parts of the question, and provides detailed examples.
    • Good (8 points): The response is mostly complete, but lacks some detail or examples.
    • Fair (5 points): The response addresses only part of the question and lacks detail or examples.
    • Poor (2 points): The response is incomplete or inaccurate.
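
To illustrate how a scoring key for objective items and a rubric-based marking scheme come together at grading time, here is a minimal Python sketch; the answer key, point values, rubric levels, and student responses are all hypothetical.

```python
# Illustrative scoring sketch (hypothetical key, rubric, and responses).

# Scoring key for the objective items: item number -> correct option.
answer_key = {1: "A", 2: "B", 3: "C"}
points_per_objective_item = 2

# Point values for one essay item, taken from rubric levels like those above.
essay_rubric = {"Excellent": 10, "Good": 8, "Fair": 5, "Poor": 2}

def score_objective(responses):
    """Compare each response with the key and award fixed points per correct item."""
    correct = sum(1 for item, answer in responses.items() if answer_key.get(item) == answer)
    return correct * points_per_objective_item

def score_essay(rubric_level):
    """Look up the points for the rubric level the grader assigned."""
    return essay_rubric[rubric_level]

# One student's objective responses plus the rubric level awarded for the essay.
student_responses = {1: "A", 2: "C", 3: "C"}        # two of three correct
total = score_objective(student_responses) + score_essay("Good")
print(total)                                        # 4 + 8 = 12
```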

 

 

 

 
