The feedback sandwich often misses the mark and can even backfire. Instead of creating clarity, it can muddy the message and feel insincere. Let's dive into why this approach doesn't work and explore a better way to give feedback with Radical Candor.

❌ What Not to Do: "Great job! But the presentation lacked details. Still, I appreciate your enthusiasm."

✅ What to Do Instead: Use CORE:
🔸 Context: Cite the specific situation.
🔸 Observation: Describe what was said or done.
🔸 Result: Explain the consequence.
🔸 Expected nExt stEps: Outline the expected next steps.

Example of CORE Feedback: "I asked you to help us be more efficient (Context). You went above and beyond by implementing Slack (Observation). The team is now spending less time on email and more time communicating effectively (Result). We'd love for you to explore other tools to streamline communication in the office (Expected nExt stEps)."

Giving feedback is crucial for growth, but it needs to be clear, kind, and actionable.

Read more: https://bit.ly/3LhIzZ2

#ManagementTips #RadicalCandor #Leadership #Feedback #COREMethod #EffectiveCommunication #GrowthMindset
Effective Use of Educational Assessments
Explore top LinkedIn content from expert professionals.
-
Important new evidence on ChatGPT in education: Wang & Fan's (2025) meta-analysis of 51 studies shows we're at an inflection point. The technology demonstrably improves learning outcomes, but success depends entirely on implementation.

The research reveals optimal conditions: sustained use (4-8 weeks), problem-based contexts, and structured support for critical thinking development. Effect sizes tell the story: large gains for learning performance (g = 0.867), moderate gains for critical thinking (g = 0.457). Quick fixes don't work. Thoughtful integration does.

Particularly compelling: ChatGPT excels in skills-development courses and STEM subjects when used as an intelligent tutor over time. The key? Providing scaffolds like Bloom's taxonomy for higher-order thinking tasks.

As educators, we have emerging empirical guidance for AI adoption. Not whether to use these tools, but how to use them effectively: maintaining rigor while enhancing accessibility and engagement.

The future of education isn't human or AI. It's human with AI, thoughtfully applied.
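For readers less familiar with the g values quoted above: in meta-analyses of this kind, g usually denotes Hedges' g, a standardized mean difference between treatment and control groups with a small-sample correction. A sketch of the standard definition (the paper's exact estimator may differ):

```latex
g = J \cdot \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},
\qquad
J \approx 1 - \frac{3}{4(n_1 + n_2) - 9}
```

By the common rule of thumb (0.2 small, 0.5 medium, 0.8 large), g = 0.867 is a large effect and g = 0.457 a moderate one, which matches the post's reading.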
-
Here’s a harsh truth about assessments: If your exam feels like a trap, it probably is. 😵💫

Most assessment questions aren’t measuring anything; they're just checking for short-term memory. Learners deserve better. We should write assessments that teach, challenge, and reveal understanding, not confuse people with trick questions or irrelevant trivia. So I made this 👇

Here are eight techniques I use (and teach others) to write better assessment questions:

𝗔𝗟𝗜𝗚𝗡𝗠𝗘𝗡𝗧 – “This maps directly to the objective.” Every question should exist because of your learning goals, not despite them.

𝗥𝗘𝗔𝗟𝗜𝗦𝗠 – “This feels like the real world.” If it’s not something they’d do on the job, why are you testing it?

𝗦𝗧𝗥𝗨𝗖𝗧𝗨𝗥𝗘 – “I’m not thrown off by format.” Clear questions = better focus on thinking, not decoding.

𝗥𝗔𝗡𝗗𝗢𝗠𝗜𝗭𝗔𝗧𝗜𝗢𝗡 – “I’m not spotting patterns.” No more “C is always right.” Mix it up (a quick sketch of doing this programmatically follows below).

𝗔𝗩𝗢𝗜𝗗 𝗡𝗘𝗚𝗔𝗧𝗜𝗩𝗘𝗦 – “I’m not getting tripped up.” Tricky wording ≠ higher difficulty. It just creates confusion.

𝗔𝗩𝗢𝗜𝗗 𝗔𝗟𝗟 𝗢𝗙 𝗧𝗛𝗘 𝗔𝗕𝗢𝗩𝗘 – “I can’t game the system.” These are lazy distractors. Retire them.

𝗗𝗜𝗦𝗧𝗥𝗔𝗖𝗧𝗢𝗥 𝗤𝗨𝗔𝗟𝗜𝗧𝗬 – “There are just enough options.” More isn’t better. Smarter is better.

𝗔𝗡𝗦𝗪𝗘𝗥 𝗟𝗘𝗡𝗚𝗧𝗛𝗦 – “One answer doesn’t stand out.” Stop giving away the correct answer with extra detail.

👇 Save this for your next module. Tag a fellow learning designer who needs this.

#InstructionalDesign #LearningAndDevelopment #eLearningDesign #AssessmentDesign #LXD #LearningCulture
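Two of these techniques, RANDOMIZATION and ANSWER LENGTHS, are easy to enforce mechanically. A minimal Python sketch, assuming a simple dict-based item format of my own invention (not from the post), that shuffles options and flags a key that is conspicuously longer than its distractors:

```python
import random

def randomize_item(item: dict, rng: random.Random) -> dict:
    """Shuffle answer options so the key's position follows no pattern."""
    options = item["options"][:]  # copy; leave the source item intact
    correct = item["options"][item["answer_index"]]
    rng.shuffle(options)
    shuffled = {**item, "options": options, "answer_index": options.index(correct)}

    # ANSWER LENGTHS check: warn if the key is much longer than the distractors
    distractors = [o for o in options if o != correct]
    avg_len = sum(len(d) for d in distractors) / len(distractors)
    if len(correct) > 1.5 * avg_len:
        print(f"Warning: key may stand out by length in: {item['stem']!r}")
    return shuffled

# Hypothetical item for illustration
item = {
    "stem": "A learner reports the module won't load. What do you check first?",
    "options": [
        "Browser and plugin compatibility",
        "The course catalog",
        "Their completion certificate",
        "The marketing page",
    ],
    "answer_index": 0,
}
print(randomize_item(item, random.Random(42)))
```

Re-shuffling per learner (seeded per attempt) also kills the "C is always right" pattern across a whole question bank, not just within one item.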
-
My go-to method for giving feedback to my students (it's NOT a "compliment sandwich" :) 👉 It's COIN!

C - Context
O - Observation
I - Impact
N - Next steps

For example: "In our mock interview (C), I noticed that you kept circling back and repeating information when describing your design process (O). This repetition could give the impression that you're unsure of your own methods (I). Let's work on creating a clear, linear narrative for each project and practice it until you can confidently deliver it within 2-3 minutes (N)."

Why it works ↴
- It's clear and direct. No sugar-coating!
- It focuses on specific actions and their effects.
- It provides actionable next steps.
- It avoids confusion: the feedback receiver knows exactly what to work on.

What's your go-to feedback method? Share your thoughts below! 👇

#ux #uxcareers #UXLeadership #FeedbackTips
-
Published this week, final version: “The Ends of Tests: Possibilities for Transformative Assessment and Learning with Generative AI”

In "The Ends of Tests," Cope, Kalantzis, and Saini propose a transformative vision for education in the era of Generative AI. Moving beyond the limitations of traditional assessments, especially multiple-choice and time-limited essays, they advocate for AI-integrated, formative learning environments that prioritize deep understanding over rote recall.

Central to their argument is the concept of cybersocial learning, where educators curate AI systems using rubric agents, knowledge bases, and contextual analytics to scaffold learner thinking in real time. This reconfigures the teacher’s role: not diminished by AI, but amplified through new pedagogical tools.

The authors call for education systems to abandon superficial summative assessments in favor of dynamic, dialogic, and multimodal evaluations embedded in everyday learning. Importantly, this model aims to redress structural inequalities by personalizing feedback within each learner’s “zone of proximal knowledge.”

Rather than automating outdated systems, the paper imagines AI as a medium for epistemic justice, pedagogical renewal, and educational equity at scale.

Full text and video here: https://lnkd.in/efhjt6jf
-
Moving away from thinking of AI as a "cheating" machine: This post discusses the updated version of the AI Assessment Scale (AIAS), a framework for integrating generative AI ethically into educational assessments across different disciplines.

The AIAS provides five levels with varying degrees of permitted AI usage:
1. No AI: Students cannot use any AI tools.
2. AI-Assisted Idea Generation and Structuring: AI can be used for brainstorming and outlining, but the final work must be human-authored.
3. AI-Assisted Editing: Students can use AI to refine and edit their work, submitting both the original and the AI-assisted content.
4. AI Task Completion, Human Evaluation: Students use AI for components of the task but critically evaluate the AI outputs.
5. Full AI: AI can be used throughout the task at the student's or teacher's discretion.

The updated AIAS aims to provide more nuance and flexibility, and to accommodate multimodal AI across diverse fields. Examples are given for applying each level to different assessment types.

The author emphasizes the need to shift the narrative around AI in education from just "cheating" to exploring how it can enhance teaching and learning. The AIAS offers clarity to students on acceptable AI use and provides an ethical, equitable policy tool for institutions (a small sketch of encoding these levels as a course policy follows below). The post includes an abstract from the published journal article further detailing the rationale and benefits of the AIAS framework.

https://lnkd.in/ev-n_v4f
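Because the scale is explicitly pitched as a policy tool, it lends itself to being encoded in course tooling. A minimal Python sketch; the assessment names and the `permitted` helper are my own illustrations, not part of the AIAS framework itself:

```python
from enum import IntEnum

class AIAS(IntEnum):
    """The five AIAS levels, paraphrased from the framework described above."""
    NO_AI = 1
    IDEA_GENERATION = 2
    AI_EDITING = 3
    AI_TASK_HUMAN_EVAL = 4
    FULL_AI = 5

# Hypothetical course policy: each assessment gets a ceiling on AI use
course_policy = {
    "reflective essay": AIAS.NO_AI,
    "project proposal": AIAS.IDEA_GENERATION,
    "lab report": AIAS.AI_EDITING,
    "data analysis task": AIAS.AI_TASK_HUMAN_EVAL,
    "AI literacy portfolio": AIAS.FULL_AI,
}

def permitted(assessment: str, requested: AIAS) -> bool:
    """True if the requested AI use is at or below the assessment's ceiling."""
    return requested <= course_policy[assessment]

print(permitted("project proposal", AIAS.AI_EDITING))  # False: exceeds level 2
print(permitted("lab report", AIAS.IDEA_GENERATION))   # True: below the ceiling
```

Publishing something like this mapping alongside the syllabus gives students the "clarity on acceptable AI use" the post calls for, in a form that is unambiguous per assessment.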
-
When we actively recall/retrieve information, our brains put a little hashtag on it: #useful. And those tags compound with more retrievals. In addition, memories are best strengthened if they are retrieved just before we forget them, which means the time between retrievals should increase with each one. Furthermore, the fewer cues we are given at recall, the more likely we are to make associations between new information and prior knowledge. As such, learners can think analogously and apply concepts across contexts.

Strategy 1: Use low-stakes formative assessments as retrieval practice to enhance memory retention.
Strategy 2: Incrementally increase the space between retrieval practice sessions to maximize the effect (a small scheduling sketch follows below).
Strategy 3: Gradually increase the complexity of retrieval practice using the three types of recall to enhance depth of understanding.

3-4 of these retrieval events will suffice, at about 15 minutes each.

🧠 Go for recall over recognition: Don't use multiple-choice questions as a summative assessment, because in the real world learners won't be given a set of options where one is the correct answer. Forcing learners to generate the information is more effective. Free recall is more effective than cued recall and recognition, though it's prudent for learners to work their way up from recognition to recall.

🔠 Make sure the context and mode of retrieval is varied: Mix it up. One day they post a video. Next, have them write something. Later, have them create a diagram or map, etc. Generating information in multiple modes is even more powerful than being presented information in multiple representations. What's more, this also goes for practicing related information in varying combinations. See Interleaving.

🌉 Make sure retrieval practice is properly scaffolded and elaborative: Go from concrete to abstract, simple to complex, easy to difficult; from questions to answer to problems to solve. Each retrieval event along the curve should be increasingly more involved to create a Desirable Difficulty. See also Bruner's Spiraling Curriculum & Reigeluth's Elaboration Theory.

💡 Push creation of concrete examples, metaphors, and analogies: Concrete examples and analogous thinking have a high positive impact on memory, especially when learner-generated. This gives students the opportunity to put new, abstract concepts in terms of what they already know. It updates their existing schemas.

🔁 Give feedback, and time it right: If you're not giving feedback that is corrective and frequent, your learners might suffer from confusion or even start to develop bad habits. But don't wait too long to do it. Check out PREP feedback and Quality Matters' helpful recommendations. Be sure to fade feedback as students develop mastery.

#instructionaldesign #teachingandlearning #retrievalpractice
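Strategy 2's expanding gap is easy to operationalize. A minimal Python sketch, assuming a simple multiplicative schedule; the starting gap and growth factor are illustrative choices, not numbers from the post:

```python
from datetime import date, timedelta

def expanding_schedule(start: date, events: int = 4, first_gap_days: float = 2,
                       growth: float = 2.0) -> list:
    """Dates for retrieval events whose gaps grow after each retrieval."""
    dates, gap, current = [], first_gap_days, start
    for _ in range(events):
        current += timedelta(days=round(gap))
        dates.append(current)
        gap *= growth  # wait longer each time, aiming near the edge of forgetting
    return dates

# 3-4 events, as the post suggests: ~2, 4, 8, and 16 days after the lesson
for d in expanding_schedule(date(2025, 1, 6)):
    print(d.isoformat())
```

Any expanding sequence captures the idea; spaced-repetition systems tune the multiplier per learner, but a fixed doubling is a reasonable classroom default.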
-
Many of the traditional multiple-choice questions we use in assessment are abstract and measure only whether people recall facts they heard in the last 5 minutes. Converting these questions to scenario-based questions can increase the level of difficulty, measure higher-level skills, and provide relevant context.

🎯 Transform traditional recall-based quiz questions into practical scenario-based questions to test actual job skills and decision-making abilities.

💡 Before writing questions, identify when and how learners would use the information in real work situations. If you can't find a practical use, reconsider the question.

📝 Keep scenarios concise and relevant. Often just 2-3 sentences of context can shift a question from testing memory to testing application.

📊 Align assessment questions with learning objectives. If your objective is application-level, your questions should test application rather than recall.

Read more tips and see before and after question examples: https://lnkd.in/eARzjDfJ
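To make the shift concrete before clicking through, here is a hypothetical before/after pair of my own (the linked article has real examples):

```python
# Before: recall only; the stem tests a memorized label
recall_item = {
    "stem": "Which regulation governs EU customers' personal data?",
    "options": ["GDPR", "SOX", "HIPAA", "PCI DSS"],
    "answer": "GDPR",
}

# After: two sentences of context turn the same fact into a decision
scenario_item = {
    "stem": ("A customer in Germany emails asking you to delete their account "
             "and all personal data. You're unsure how quickly you must act. "
             "What do you do first?"),
    "options": [
        "Log the request and route it to the data protection officer",
        "Forward the email to billing",
        "Archive the request for the annual audit",
        "Ask the customer to call back during business hours",
    ],
    "answer": "Log the request and route it to the data protection officer",
}
```

Note the scenario no longer names the regulation at all; the learner has to recognize the situation and choose an action, which is the application-level behavior the objective presumably targets.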
-
The most impactful change I’ve made in my classroom over the past few years is a simple exercise that came out of my work in #engineering education assessment. At the start of each class period, I spend 1 minute discussing our #learning goals for class that day. On our course website, I put these goals at the top of the page for each class to remind students what they should be able to do having followed the class, done the practice problems, and read the book.

When writing these goals, I keep the following in mind:
👩🏻🏫 What do my #students need to take with them from this class?
🌏 What fundamental knowledge should they learn, and how does this relate to the real world?
👩🏻🔬 What is the “action” I want them to do? I try to state goals in a Bloom’s taxonomy framework, where their knowledge gains are hierarchical in terms of their ability to do something.

How has doing this helped my students?
🙋🏻♀️ They ask more focused questions during class that show engagement with the goals and material.
👩🏻🎓 They know the goals of their studying and have a sense of mastery when it comes to exam time.

How has this helped me as an #instructor?
🙄 I don’t need to answer that “what’s on the test” question anymore. I point them to the learning goals.
🫶 When they’re stressed, I can better target what information is unclear by asking them “do you know how to do…?” and help them focus on that material.
🧐 It forces me to craft lectures and activities that align with our goals, rather than just what’s in a textbook, making my class more engaging and streamlining material presentation.

If we're going to assess students' learning, we need to "write our own exam" by determining what they should know at the end of a course. Why not share this information with them? By letting students know the goals of the course - and thus what we're assessing them on - we empower them. This in no way tells them "how" to get an A. They still have to do the hard work of learning. But it helps them focus their studying efforts and benchmark their attainment.
-
I tried a new formative assessment technique in a workshop today that was inspired by Bonni Stachowiak's recent Teaching in Higher Ed podcast episode with Todd Zakrajsek. I called the technique Spinner Retrieval Practice. Here's how it worked (a small code sketch of the loop follows below):

1. I created a Padlet Sandbox deck with a card for each instructor. Each card was divided into six boxes that were labeled "Spin," "Retrieve," and "Rate."
2. Instructors clicked on the virtual spinner link in the "Spin" box to access an online spinner. The spinner included five topics we had just explored in the workshop, which in this case were five Classroom Assessment Techniques (CATs). Instructors spun the wheel and took note of the topic they landed on.
3. Instructors returned to their card in Padlet Sandbox, and in the "Retrieve" box, they typed the name of the topic they landed on and what they could recall about the topic from memory.
4. In the "Rate" box, instructors rated how confident they were that their response was correct by jotting down a number between 1 and 5 on the sticky note (with 1 being not confident and 5 being very confident).
5. Instructors repeated steps 2-4 for a second round.
6. Afterwards, we reviewed the responses as a group. I particularly focused on reviewing responses that anyone rated less than a 4, and in many cases, instructors were correct despite their initial doubt. This led to a good discussion about how confidence ratings can be useful as part of one's formative assessment approach.

Overall, Spinner Retrieval Practice was a fun way to randomize and slightly gamify the traditional retrieval practice approach!

#HigherEd #FacDev #EdDev #Facilitation #ProfessionalLearning #Facilitator #Assessment #FormativeAssessment #TeachingAndLearning #WorkshopShare #ProfessionalDevelopment
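The workshop ran on Padlet and an online spinner, but the Spin → Retrieve → Rate loop is easy to mimic anywhere. A minimal command-line Python sketch; the topic names are placeholders I chose, not necessarily the CATs from the workshop:

```python
import random

# Placeholder topics standing in for the five CATs covered in the workshop
TOPICS = ["Muddiest Point", "Minute Paper", "Concept Map",
          "One-Sentence Summary", "Background Knowledge Probe"]

def spin_retrieve_rate(rng: random.Random) -> dict:
    """One round: spin for a topic, free-recall it, then rate confidence 1-5."""
    topic = rng.choice(TOPICS)                                   # Spin
    recalled = input(f"Recall what you can about '{topic}': ")   # Retrieve
    rating = int(input("Confidence, 1 (low) to 5 (high): "))     # Rate
    return {"topic": topic, "recalled": recalled, "rating": rating}

def run(rounds: int = 2) -> None:
    rng = random.Random()
    responses = [spin_retrieve_rate(rng) for _ in range(rounds)]
    # Step 6: surface low-confidence responses for group review
    for r in responses:
        if r["rating"] < 4:
            print(f"Review together: {r['topic']} (rated {r['rating']})")

if __name__ == "__main__":
    run()
```

The confidence threshold of 4 mirrors the facilitation choice in step 6; the interesting discussion tends to come from exactly those low-confidence-but-correct answers.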