Event Survey Questions

Explore top LinkedIn content from expert professionals.

  • View profile for Vitaly Friedman
    216,991 followers

    ✅ Survey Design Cheatsheet (PNG/PDF). With practical techniques to reduce bias, increase completion and get reliable insights ↓

    🚫 Most surveys are biased, misleading and not actionable.
    🤔 People often don’t give true answers, or can’t answer truthfully.
    🤔 What people answer, think and feel are often very different things.
    🤔 Average scores don’t speak to individual differences.
    ✅ Good questions, scale and sample avoid poor insights at scale.
    ✅ Industry confidence level: 95%, margin of error 4–5%.
    ✅ With 10,000 users, you need ≥567 answers to reduce sample bias.
    ✅ Randomize the order of options to minimize primacy bias.
    ✅ Allow testers to skip questions, or save and exit, to reduce noise.
    🚫 Don’t ask multiple questions at once in one single question.
    🤔 For long surveys, users regress to neutral or positive answers.
    🚫 The more questions, the less time users spend answering them.
    ✅ Shorter is better: after 7–8 mins, completion rates drop by 5–20%.
    ✅ Pre-test your survey in a pilot run with at least 3 customers.
    🚫 Avoid 1–10 scales, as there is more variance in larger scales.
    🚫 Never ask people about their behavior: observe them.
    🚫 Don’t ask what people like/dislike: it rarely matches behavior.
    🚫 Asking a question directly is the worst way to get insights.
    🚫 Don’t make key decisions based on survey results alone.

    Surveys aim to uncover what many people think or feel. But often it’s what many people *think* they think or feel. In practice, surveys aren’t very helpful for learning how users behave, what they actually do, whether a product is usable, or what their specific needs are. However, they do help us learn where users struggle, what users’ expectations are, whether a feature is helpful, and to better understand users’ perception or view.

    But: designing surveys is difficult. The results are often hard to interpret, and we always need to verify them by listening to and observing users. Pre-test surveys before sending them out. Check whether users can answer truthfully. Review the sample size. Define what you want to know first. And, most importantly, decide what decisions you will and will not make based on the answers you receive.

    ---

    ✤ Useful resources:
    Survey Design Cheatsheet (PNG, PDF), by yours truly https://lnkd.in/ez9XQAk3
    A Big Guide To Survey Design, by H Locke https://lnkd.in/eJWRnDRi
    How to Write (Better) Survey Questions, by Nikki Anderson, MA https://lnkd.in/eHpzr-Q6
    Survey Design Guide, by Maze https://lnkd.in/e4cMp5g5
    Why Surveys Are Problematic, by Erika Hall https://lnkd.in/eqTd-7xM

    ---

    ✤ Books
    ⦿ Just Enough Research, by Erika Hall
    ⦿ Designing Surveys That Work, by Caroline Jarrett
    ⦿ Designing Quality Survey Questions, by Sheila B. Robinson

    #ux #surveys
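
    The "≥567 answers for 10,000 users" figure above follows from the standard sample-size formula for a proportion, with a finite-population correction. A minimal sketch in Python as a rough check; the function name and defaults are illustrative, assuming 95% confidence and a 4% margin of error:

    ```python
    import math

    def required_sample_size(population, margin_of_error=0.04, z=1.96, p=0.5):
        """Required responses for a proportion estimate at ~95% confidence
        (z = 1.96), with a finite-population correction."""
        n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
        return math.ceil(n0 / (1 + (n0 - 1) / population))    # finite-population correction

    print(required_sample_size(10_000))  # -> 567, matching the figure in the post
    ```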

  • View profile for Dr Donna M Velliaris

    TOP 30 Global Guru in Education (2023: #30, 2024: #22, 2025: #9), Schools as Cultural Systems & Inclusion Educator/Researcher

    26,295 followers

    Formative assessment is like a compass—it is ongoing, diagnostic, and designed to guide learning. It provides students with timely, actionable feedback during the learning process so they can improve before reaching the final destination. Examples include quizzes, think-pair-share, drafts, reflections, or teacher-student conferences. Formative assessments help identify misconceptions, adjust teaching strategies, and personalise support. In this way, they build student confidence and competence incrementally.

    Summative assessment is more like a snapshot—it evaluates what students have achieved at the end of an instructional period. It measures mastery against learning outcomes and is used to judge the effectiveness of instruction. Examples include final exams, projects, performances, or standardised tests. While summative assessments do not provide direct guidance during the learning process, they reflect the culmination of all the formative learning and feedback that came before.

  • View profile for Kevin Hartman

    Associate Teaching Professor at the University of Notre Dame, Former Chief Analytics Strategist at Google, Author "Digital Marketing Analytics: In Theory And In Practice"

    23,981 followers

    Remember that bad survey you wrote? The one that resulted in responses filled with blatant bias and caused you to doubt whether your respondents even understood the questions? Creating a survey may seem like a simple task, but even minor errors can result in biased results and unreliable data. If this has happened to you before, it's likely due to one or more of these common mistakes in your survey design:

    1. Ambiguous Questions: Vague wording like “often” or “regularly” leads to varied interpretations among respondents. Be specific—use clear options like “daily,” “weekly,” or “monthly” to ensure consistent and accurate responses.

    2. Double-Barreled Questions: Combining two questions into one, such as “Do you find our website attractive and easy to navigate?” can confuse respondents and lead to unclear answers. Break these into separate questions to get precise, actionable feedback.

    3. Leading/Loaded Questions: Questions that push respondents toward a specific answer, like “Do you agree that responsible citizens should support local businesses?” can introduce bias. Keep your questions neutral to gather unbiased, genuine opinions.

    4. Assumptions: Assuming respondents have certain knowledge or opinions can skew results. For example, “Are you in favor of a balanced budget?” assumes understanding of its implications. Provide necessary context to ensure respondents fully grasp the question.

    5. Burdensome Questions: Asking complex or detail-heavy questions, such as “How many times have you dined out in the last six months?” can overwhelm respondents and lead to inaccurate answers. Simplify these questions or offer multiple-choice options to make them easier to answer.

    6. Handling Sensitive Topics: Sensitive questions, like those about personal habits or finances, need to be phrased carefully to avoid discomfort. Use neutral language, provide options to skip or anonymize answers, or employ tactics like a Randomized Response Survey (RRS) to encourage honest, accurate responses.

    By being aware of and avoiding these potential mistakes, you can create surveys that produce precise, dependable, and useful information.

    Art+Science Analytics Institute | University of Notre Dame | University of Notre Dame - Mendoza College of Business | University of Illinois Urbana-Champaign | University of Chicago | D'Amore-McKim School of Business at Northeastern University | ELVTR | Grow with Google - Data Analytics

    #Analytics #DataStorytelling
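
    The randomized response tactic mentioned in point 6 can be unmasked at analysis time with simple arithmetic. A minimal sketch, assuming one common textbook design (private coin flip: heads means answer truthfully, tails means flip again and answer "yes" on heads, "no" on tails); the function name and probabilities are illustrative, not from the post:

    ```python
    def estimate_true_yes_rate(observed_yes_rate, p_truth=0.5, p_forced_yes=0.25):
        """Recover the true 'yes' rate from a forced-response randomized design.

        Assumed design: with probability p_truth the respondent answers truthfully,
        and with probability p_forced_yes they are forced to answer 'yes'. Then
            P(observed yes) = p_truth * true_rate + p_forced_yes,
        so the true rate is recovered by inverting that relationship.
        """
        return (observed_yes_rate - p_forced_yes) / p_truth

    # If 40% of respondents answered 'yes', the estimated true rate is about 30%.
    print(round(estimate_true_yes_rate(0.40), 2))  # -> 0.3
    ```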

  • View profile for Tanuj Diwan

    Top 25 Thought Leaders 2022 by ICMI | Co-founder SurveySensum | Working with Insurance, Banking, NBFCs to improve Customer Satisfaction/NPS/Renewals/Referrals.

    7,755 followers

    “Surveys are dead.” I keep seeing that on LinkedIn. Well… ours are very much alive — and they’re shaking things up. 😄

    Let me share a quick story 👇

    A US-based healthcare company with over a million customers was running their feedback program the usual way — sending quarterly surveys to customers who had purchased medical equipment in the past 3 months.

    We suggested a simple change:
    👉 Send surveys daily, right after delivery.

    Why? Because quarterly surveys tell you how you're doing overall. But they don't tell you when something goes wrong — or where to look. 🚨

    Just 21 days after launching daily surveys, a pattern emerged: NPS consistently dropped on Fridays. No obvious reason. But it raised a flag. We sliced the data by therapy type and region, and found the issue. Now their ops team is fixing it. 💡

    That’s the power of real-time feedback. From blind spots to insights. From trends to action.

    Surveys still work. Just not for those who don’t want to work on the feedback.

    #CustomerExperience #NPS #VerbatimInsights #FeedbackMatters #HealthcareCX #SurveySensum
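
    Slicing daily survey data by weekday and by segment, as described above, is a short groupby in pandas. A minimal sketch with made-up data; the column names ("responded_at", "score", "region", "therapy_type") are illustrative, not the company's actual schema:

    ```python
    import pandas as pd

    # Hypothetical post-delivery survey responses (0-10 likelihood-to-recommend).
    df = pd.DataFrame({
        "responded_at": pd.to_datetime(["2024-05-02", "2024-05-03", "2024-05-03", "2024-05-06"]),
        "score": [9, 4, 6, 10],
        "region": ["NE", "SE", "SE", "NE"],
        "therapy_type": ["CPAP", "CPAP", "Oxygen", "Oxygen"],
    })

    def nps(scores: pd.Series) -> float:
        """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
        return 100 * ((scores >= 9).mean() - (scores <= 6).mean())

    # NPS by weekday surfaces patterns like the Friday dip...
    print(df.groupby(df["responded_at"].dt.day_name())["score"].apply(nps))
    # ...then slicing by therapy type and region narrows down where it comes from.
    print(df.groupby(["therapy_type", "region"])["score"].apply(nps))
    ```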

  • View profile for Meenakshi (Meena) Das

    CEO at NamasteData.org | Advancing Human-Centric Data & Responsible AI

    16,132 followers

    This week's theme in my workshops (and, by extension, my posts to you here) is assessing data collection tools (like surveys) for inclusion and access.

    Most of my workshops start at the same place – most participants have designed at least one survey in a current or past job or in their education. And then it takes three hours and some meaningful collective learning to realize that planning a survey is much more than just a list of questions. It is an opportunity to connect with your community directly, hear their stories, and understand their experiences and expressions of engagement.

    In this post, I want to share 5 "red flag" behaviors I often see during the survey design phase:

    ● When the only questions included ask for positive feedback. We all love hearing good things, but only asking for positive feedback closes off some real growth opportunities. Example: A question like, "What did you love most about our event?" assumes your respondent loved the event and offers no room for a different experience.

    ● When questions are overloaded with complicated words or jargon that only a few will know. You know your mission inside and out, but your community might not understand the same terms you do. Speak in their language. Think of your survey as a conversation. Example: A question like, "How would you rate the efficacy of our donor stewardship activities?" assumes everyone understands the details of "stewardship".

    ● When every possible question about every possible aspect of the mission is asked – because "why not". Designing surveys – without context – that go on for more than 10–12 minutes can feel like asking for too much. Be mindful of the respondents and the needs of the data collection. Every question should have a purpose.

    ● When questions contradict anonymity. Our communities are diverse, and our surveys should hold a safe space for those communities. Ensuring accessibility – balanced with truly useful demographic questions – means not harming someone's anonymity, which makes the experience of collecting data easier and more meaningful. Example: A survey asking about racial and ethnic diversity in a group that is 99% homogeneous (making the 1% who are racially diverse nervous about a possible breach of anonymity).

    ● When questions do not offer an 'Opt-Out' option by making everything required. Some questions may feel too personal or uncomfortable for individuals to respond to, and our surveys must create space for that. Give respondents the space to skip a question if they need to. Example: A survey that requires donors to disclose their income range without offering a way to skip the question if they're uncomfortable sharing that information.

    Stay tuned for a soon-to-come post on what we can do differently. Have any other such behaviors? Share them here.

    In the meantime, try some of these resources (all designed to do good with data): https://lnkd.in/gUK-6M_Y

    #nonprofits #community
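
    One practical guard for the anonymity red flag above is to suppress demographic breakdowns whose groups are too small to report safely. A minimal sketch, assuming pandas; the threshold and column name are illustrative choices, not a rule from the post:

    ```python
    import pandas as pd

    def safe_breakdown(responses: pd.DataFrame, column: str, min_group_size: int = 5) -> pd.Series:
        """Count responses per demographic group, dropping groups so small that
        reporting them could effectively identify individual respondents."""
        counts = responses[column].value_counts()
        small = counts[counts < min_group_size]
        if not small.empty:
            print(f"Suppressed {len(small)} group(s) with fewer than {min_group_size} respondents")
        return counts[counts >= min_group_size]

    # 99% homogeneous sample: the single 'B' respondent is suppressed, not reported.
    df = pd.DataFrame({"ethnicity": ["A"] * 99 + ["B"]})
    print(safe_breakdown(df, "ethnicity"))
    ```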

  • View profile for Jeff Toister

    I help leaders build service cultures.

    81,802 followers

    Your customer survey doesn't just capture feedback on the experience. It IS the experience.

    Asking at the wrong time can be annoying. For example, many websites use pop-up surveys to ask for feedback. The timing of the pop-up should be carefully chosen to avoid interrupting the customer's workflow.

    In this example, the survey popped up on the login page. Customers on the login page are likely intent on completing a transaction within their account. The pop-up survey adds unnecessary friction to the process.

    Even worse? The survey takes a few minutes to complete. Asking customers to stop what they're doing to spend several minutes completing your survey is pretty cheeky.

    How could you make this experience better? A few ways:

    1. Improve survey timing. You could offer a survey at the end of a transaction or when a customer's dwell time (time spent looking at one page) exceeds a certain threshold.

    2. Simplify your survey. Allow customers to share feedback in one click, with an optional open comment box.

    3. Think twice. Do you really need to survey customers here? Only use a survey if you have a clear reason and a specific plan to use the data.

    LinkedIn Learning subscribers can get more survey tips from my course, Using Customer Surveys to Improve Service. ➡️ https://lnkd.in/eziCufWi

    Bottom line: Surveys are part of the customer experience. Make sure your survey ask doesn't make the experience worse.
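
    The timing rules in point 1 can be expressed as a small trigger check. A minimal sketch; the event names and the 120-second dwell threshold are assumptions for illustration, not values from the post:

    ```python
    def should_show_survey(event: str, dwell_seconds: float = 0.0,
                           dwell_threshold: float = 120.0) -> bool:
        """Decide whether a pop-up survey is appropriate at this moment."""
        if event in ("login", "checkout"):
            return False                      # mid-task: don't add friction
        if event == "transaction_complete":
            return True                       # natural end point for a quick ask
        if event == "page_view" and dwell_seconds >= dwell_threshold:
            return True                       # lingering visitor, likely not mid-task
        return False

    print(should_show_survey("login"))                         # False
    print(should_show_survey("page_view", dwell_seconds=180))  # True
    ```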

  • View profile for Sangita Sarkar

    #Talent #ISTD Member #Talent Management #Learning and Development #Competency Mapping #XLRI #IIMRohtak #Jack Welch Academy USA #Linkedin Learning #IBMS

    39,517 followers

    Various aspects to measure how agile employees are at learning in an organization:

    To measure the extent of employees' learning agility in an organization, several aspects and metrics can be evaluated. Here are the key areas to focus on:

    Learning Agility Assessment Tools: Utilize scientifically validated learning agility assessments, such as the Korn Ferry Learning Agility Tool or the Mettl Learning Agility Assessment. These tools evaluate various traits associated with learning agility, including adaptability, curiosity, and problem-solving skills.

    Learning Preferences: Identify individual learning styles and preferences through assessments that analyze how employees prefer to acquire new skills (e.g., self-learning, classroom training, mentorship).

    Performance Metrics: Monitor performance indicators such as time-to-competency in new roles or tasks, and the speed at which employees can adapt to changes in processes or technologies. This can provide insights into their learning agility in real-world scenarios.

    Feedback Mechanisms: Implement regular feedback loops where employees receive constructive feedback on their adaptability and learning efforts. This can include peer reviews, manager evaluations, and self-assessments.

    Training Participation and Outcomes: Track participation rates in training programs and the subsequent application of learned skills on the job. Evaluate whether employees are able to transfer knowledge effectively into their roles and how this impacts team performance.

    Engagement in Continuous Learning: Measure engagement levels in continuous learning initiatives, such as workshops, online courses, and cross-training opportunities. High engagement may indicate a proactive approach to learning.

    Problem-Solving Abilities: Assess employees' ability to solve complex problems by presenting them with real-life challenges and evaluating their responses and solutions. This can be indicative of their capacity to learn from experiences.

    Adaptability to Change: Evaluate how quickly employees adjust to changes within the organization, such as new technologies or shifts in strategy. This can be assessed through surveys or direct observation during transitions.

    Retention of Knowledge: Assess how well employees retain information over time through follow-up assessments after training sessions or workshops. This helps gauge both initial learning and long-term retention capabilities.

    Collaboration and Knowledge Sharing: Measure participation in collaborative projects and knowledge-sharing initiatives within teams. Employees who actively engage in sharing insights and learning from one another typically demonstrate higher learning agility.

    By focusing on these aspects, organizations can gain a comprehensive understanding of their employees' learning agility levels, which is crucial for fostering a culture of continuous improvement and adaptability.

  • View profile for Bahareh Jozranjbar, PhD

    UX Researcher @ Perceptual User Experience Lab | Human-AI Interaction Researcher @ University of Arkansas at Little Rock

    8,152 followers

    User experience surveys are often underestimated. Too many teams reduce them to a checkbox exercise - a few questions thrown in post-launch, a quick look at average scores, and then back to development. But that approach leaves immense value on the table. A UX survey is not just a feedback form; it’s a structured method for learning what users think, feel, and need at scale - a design artifact in its own right.

    Designing an effective UX survey starts with a deeper commitment to methodology. Every question must serve a specific purpose aligned with research and product objectives. This means writing questions with cognitive clarity and neutrality, minimizing effort while maximizing insight. Whether you’re measuring satisfaction, engagement, feature prioritization, or behavioral intent, the wording, order, and format of your questions matter. Even small design choices, like using semantic differential scales instead of Likert items, can significantly reduce bias and enhance the authenticity of user responses.

    When we ask users, "How satisfied are you with this feature?" we might assume we're getting a clear answer. But subtle framing, mode of delivery, and even time of day can skew responses. Research shows that midweek deployment, especially on Wednesdays and Thursdays, significantly boosts both response rate and data quality. In-app micro-surveys work best for contextual feedback after specific actions, while email campaigns are better for longer, reflective questions - if properly timed and personalized.

    Sampling and segmentation are not just statistical details - they’re strategy. Voluntary surveys often over-represent highly engaged users, so proactively reaching less vocal segments is crucial. Carefully designed incentive structures (that don't distort motivation) and multi-modal distribution (like combining in-product, email, and social channels) offer more balanced and complete data.

    Survey analysis should also go beyond averages. Tracking distributions over time, comparing segments, and integrating open-ended insights lets you uncover both patterns and outliers that drive deeper understanding. One-off surveys are helpful, but longitudinal tracking and transactional pulse surveys provide trend data that allows teams to act on real user sentiment changes over time. The richest insights emerge when we synthesize qualitative and quantitative data. An open comment field that surfaces friction points, layered with behavioral analytics and sentiment analysis, can highlight not just what users feel, but why.

    Done well, UX surveys are not a support function - they are core to user-centered design. They can help prioritize features, flag usability breakdowns, and measure engagement in a way that's scalable and repeatable. But this only works when we elevate surveys from a technical task to a strategic discipline.
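
    The "go beyond averages" point is easy to demonstrate: two segments can share a similar mean while their rating distributions tell very different stories. A minimal sketch with made-up data; the segment names and 1-7 scale are illustrative:

    ```python
    import pandas as pd

    # Hypothetical satisfaction ratings (1-7) for two user segments.
    df = pd.DataFrame({
        "segment": ["new"] * 6 + ["power"] * 6,
        "rating":  [7, 7, 2, 6, 6, 2, 5, 5, 4, 5, 5, 4],
    })

    # Means look close...
    print(df.groupby("segment")["rating"].mean())        # new: 5.0, power: ~4.67
    # ...but the full distributions differ: 'new' is polarized, 'power' is tightly clustered.
    print(df.groupby("segment")["rating"]
            .value_counts(normalize=True)
            .unstack(fill_value=0))
    ```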

  • View profile for Tom O'Reilly

    Building the Internal Audit Collective

    36,453 followers

    Increasing the Response Rate of the Post-Audit Exit Survey

    Issuing and reporting on a post-audit exit survey is a great way to receive feedback to improve IA's performance. However, even with significant effort invested in building and issuing the survey, audit customers may sometimes not complete it. When this occurs, the CAE should consider the following factors to diagnose and resolve the issue:

    1. Survey Length
    The survey might be too long.
    Recommendation: Keep the survey to no more than 6–8 questions. Use closed-ended questions for most items and reserve free-text responses for the final question. Consider making the free-text portion optional.

    2. Timing of the Request
    The request for survey completion might be poorly timed. Some teams wait until after the final report is issued—sometimes weeks or months after fieldwork—to ask for feedback.
    Recommendation: Introduce and reinforce the survey request early and often:
    - The VP should mention the post-audit exit survey to C-suite or VP-level executives during pre-planning.
    - The Director should raise the topic with VP- or Director-level audit customers at the end of the initial planning meeting.
    - The Manager should emphasize the survey’s importance during the fieldwork kickoff.
    - The audit senior or supervisor should send the survey along with the draft audit report in preparation for the audit exit meeting.

    3. Limited or No Follow-Up
    The audit leadership team may have made the survey request only once.
    Recommendation: Follow up multiple times. Audit customers may be juggling several projects, so a reminder can significantly boost response rates.

    4. Relevance of Survey Content
    The survey questions might focus solely on Internal Audit, which may not resonate with the audit customer. If the survey only asks about the audit team’s performance (e.g., team knowledge or punctuality in deliverables), it might overlook the audit team’s impact on the customer’s operations.
    Recommendation: Include questions that evaluate both the internal team’s performance and the relevance of the audit team’s output. This balanced approach makes the survey more engaging and pertinent to the audit customer.

    5. The Audit Customer is Annoyed or Dissatisfied with IA
    If the audit ran too long or if the team’s performance was subpar, the customer may want to move on from all audit-related matters.
    Recommendation: Give the executive or team a couple of weeks of space. The CAE (not anyone else) should then follow up directly to obtain their feedback. And they should be prepared to commit to following up individually with audit customers once improvements are implemented, to highlight that the time spent providing feedback was well spent and that the team took action on it.

  • View profile for Hannah Shamji

    Therapist and Consumer Psychology Researcher. Cornell & Brown alum.

    8,432 followers

    Literally the best time to ask customers for feedback is right after they take action: Opt-ins. Sign-ups. Purchases. Cancellations. Upgrades.

    These are perfect moments to ask one simple question: "What made you [sign up] today? Why now and not 6 months ago or later?"

    Why is this one question so effective? You're catching customers in the act. Their decision is fresh, emotions are high, and there's no recall bias, so you'll get more honest, engaged responses.

    Once you spot trends, you can switch to a multiple-choice question. But remember, less is more. It’s tempting to fit in more survey questions, but resist the urge. The key is to piggyback on the momentum of the action they're already taking. Your one question should feel like an extension of the process, not a separate task.

    This is how you get a higher-than-normal response rate and valuable insights without disrupting the customer experience.
