It shouldn’t be dangerous to speak your mind in any meeting or in any conversation at work. But employers know that it is dangerous, and that’s why they send out confidential employee engagement surveys. 68% of respondents to my poll on LinkedIn last week said that these surveys are not really confidential. If you know it is not safe for employees to speak their minds at work, that’s what you should focus on – solving that problem! It’s like there is a wall of toxic goo between you and your employees, and rather than get rid of the wall you simply send a little paper airplane over it: you tell employees to read what you wrote on the airplane, write something on it themselves, and find a safe way to get it back to you. That is both foolish and unethical. If you know it is not safe to speak your mind at work and you just send out a survey instead of tackling the real issue, you have failed at leadership.
Implementing Employee Surveys
Explore top LinkedIn content from expert professionals.
-
Stop pasting interview transcripts into ChatGPT and asking for a summary. You’re not getting insights—you’re getting blabla. Here’s how to actually extract signal from qualitative data with AI.

A lot of product teams are experimenting with AI for user research. But most are doing it wrong. They dump all their interviews into ChatGPT and ask: “Summarize these for me.” And what do they get back? Walls of text. Generic fluff. A lot of words that say… nothing.

This is the classic trap of horizontal analysis:
→ “Read all 60 survey responses and give me 3 takeaways.”
→ Sounds smart. Looks clean.
→ But it washes out the nuance.

Here’s a better way: Go vertical. Use AI for vertical analysis, not horizontal. What does that mean? Instead of compressing across all your data… zoom into each individual response—deeper than you usually could afford to. One by one. Yes, really.

Here’s a tactical playbook: Take each interview transcript or survey response, and feed it into AI with a structured template. Example:
“Analyze this response using the following dimensions:
• Sentiment (1–5)
• Pain level (1–5)
• Excitement about solution (1–5)
• Provide 3 direct quotes that justify each score.”

Now repeat for each data point. You’ll end up with a stack of structured insights you can actually compare. And best of all—those quotes let you go straight back to the raw user voice when needed. AI becomes your assistant, not your editor.

The real value of AI in discovery isn’t in writing summaries. It’s in enabling depth at scale. With this vertical approach, you get:
✅ Faster analysis
✅ Clearer signals
✅ Richer context
✅ Traceable quotes back to the user

You’re not guessing. You’re pattern matching across structured, consistent reads.

Are you still using AI for summaries? Try this vertical method on your next batch of interviews—and tell me how it goes. 👇 Drop your favorite prompt so we can learn from each other.
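To make the playbook above concrete, here is a minimal sketch of the vertical-analysis loop in Python. It assumes the OpenAI Python SDK and an API key in the environment; the model name, folder layout, and JSON field names are placeholders, not part of the original post.

```python
# A minimal sketch of the "vertical analysis" loop described above.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var;
# the model name and file layout are placeholders, adapt them to your own stack.
import csv
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()

TEMPLATE = """Analyze this interview response using the following dimensions:
- Sentiment (1-5)
- Pain level (1-5)
- Excitement about the solution (1-5)
Return JSON with keys: sentiment, pain, excitement, quotes
(quotes = 3 direct quotes that justify the scores).

Response:
{response_text}
"""

rows = []
for path in sorted(Path("transcripts").glob("*.txt")):  # one file per interview
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": TEMPLATE.format(response_text=path.read_text())}],
        response_format={"type": "json_object"},  # ask for parseable JSON
    )
    scores = json.loads(result.choices[0].message.content)
    rows.append({"file": path.name,
                 **{k: scores.get(k) for k in ("sentiment", "pain", "excitement", "quotes")}})

# One structured row per interview: easy to compare, with quotes traceable to the source.
with open("vertical_analysis.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["file", "sentiment", "pain", "excitement", "quotes"])
    writer.writeheader()
    writer.writerows(rows)
```

Asking the model for JSON per response is what makes the later comparison step mechanical: every interview ends up as one row with the same columns.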
-
I used Google Forms for my bachelor’s research. And now I realize I shouldn’t have. Not because I was careless, but because I didn’t know better. None of us did.

In India, almost every psych or social work student I knew used Google Forms. It was free, easy, and accessible. We thought we were doing it right. But once I started my master’s in Germany, I noticed something strange: no one here uses Google Forms. Not even for tiny surveys.

Why? Google stores form responses on servers mostly located in the U.S., meaning researchers outside the U.S. have little control over where their participants’ data goes or how it’s protected. When you’re collecting personal or sensitive information, this lack of control becomes a serious ethical and sometimes legal concern.

That hit me hard. Back then, people trusted me with their stories. And I unknowingly put that trust at risk. I’m not sharing this to blame anyone. I’m sharing it because we’re often not taught what ethical research actually looks like.

So here’s what I wish someone had told me earlier: if you’re collecting data from people, especially in psychology or social work, privacy is not optional. There are a few alternatives available:
🔹 Zoho Survey: Free, Indian company, better data protection.
🔹 LimeSurvey: Open-source, widely used in academia.
🔹 Nextcloud Forms: Privacy-first, great if your institution supports it.
🔹 SurveySparrow: Also based in India. Good if you're not collecting highly sensitive data.
🔹 Jotform: A form builder that feels like Google Forms but with more control. Just double-check where the data is stored.

And if you must use Google Forms:
• Be transparent: let participants know where their data will be stored.
• Avoid collecting sensitive info.
• Download and delete data from the platform ASAP.

Research is not just about responses. It’s also about respecting the people who respond. If you’re a student reading this, I hope it helps you take one step closer to doing research that’s not just smart, but safe.
-
🧪 How To Resolve Conflicting Data in UX Research. With practical techniques on how to triangulate and reconcile data with mixed-method UX research ↓

🤔 Data always tells a story — but it’s never just a single story.
✅ Quantitative data ← What/When: behavior patterns at scale.
✅ Qualitative data ← Why/How: user needs and motivations.
↳ Quant usually comes from analytics, surveys, experiments.
↳ Qual comes from tests, observations, open-ended surveys.

🚫 When data disagrees, it doesn’t mean that either is wrong.
✅ Different perspectives reveal different parts of a bigger story.
✅ Usually it means that there is a missing piece of the puzzle.
✅ Reconcile data: track what’s missing, omitted or overlooked.
✅ Triangulate: cross-validate data with mixed-method research.

🚫 Teams often overestimate the weight of big numbers (quant).
🚫 Designers often overestimate what people say and do (qual).
✅ Establish quality thresholds for UX research (sample size, confidence).
✅ Find new sources: marketing, support, customer success.
✅ Find pairings of qual/quant streams, then map them together.

People tend to believe what they want to believe. This goes for personal decisions, but also for any conducted research. If it shows the value of a decision already made, there will be people embracing it at full swing and carrying it forward fiercely, despite obvious blunders and weak spots. And sometimes, once a decision has been made, people find a way to frame insights from the past into their new narrative, inflating the value of their initiatives.

The best thing you can do is to establish well-defined thresholds for research — from confidence intervals (95%) and margin of error (<5%) to selecting user profiles and types of research.

Risk-averse teams tend to overestimate the weight of big numbers in quantitative research. Users tend to exaggerate the frequency and severity of issues that are critical for them. So as Archana Shah noted, designers get carried away by users’ confident responses and potentially exaggerate issues, sometimes even the wrong ones.

Raise a red flag once you notice decisions made on poor research, or hasty conclusions drawn from good research. We need both qual and quant — but we need both to be reliable. And: it’s not that one is always more reliable than the other — they just tell different parts of a whole story that isn’t complete yet.

✤ Useful resources
Mixed-Method UX Research, by Raschin Fatemi https://lnkd.in/eb3xsQ-B
What To Do When Data Disagrees, by Archana Shah https://lnkd.in/ejt2E-Cc
[More useful resources in the comments ↓]
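The thresholds mentioned above (95% confidence, <5% margin of error) map directly onto the standard sample-size formula for estimating a proportion. A small sketch, assuming simple random sampling and worst-case variance (p = 0.5):

```python
# A quick way to sanity-check the survey "quality thresholds" mentioned above:
# the classic sample-size formula for estimating a proportion.
# Assumes simple random sampling and worst-case variance (p = 0.5).
import math

def required_sample_size(margin_of_error=0.05, confidence=0.95, population=None, p=0.5):
    # z-scores for common confidence levels (avoids a scipy dependency)
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        # finite population correction for small user bases
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# ~385 responses for +/-5% at 95% confidence against a very large population...
print(required_sample_size())
# ...but only ~278 if the whole population is 1,000 customers.
print(required_sample_size(population=1000))
```

The finite population correction matters in UX work: surveying "everyone who used the feature last month" often needs far fewer responses than the textbook 385.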
-
I think about Jeff Bezos's "start with the press release and work backward" approach. Here is a future headline I would like to see: "Surveys are no longer the primary tool for gathering insights." To get there, surveys will have had to evolve into precision instruments used strategically to fill gaps in data. Let's call this the "Adaptive Survey."

With adaptive surveys, organizations can target key moments in the customer or employee journey where existing data falls short. Instead of overwhelming consumers and employees with endless, and meaningless, questions, surveys step in only when context is missing or deeper understanding is required. Imagine leveraging your operational data to identify a drop in engagement and deploying an adaptive survey to better understand and pinpoint the "why" behind it. Or, using transactional data to detect unusual purchasing behavior and triggering a quick, personalized survey to uncover motivations.

Here's how I hope adaptive surveys will reshape insight/VoC strategies:

Targeted Deployment: Adaptive surveys appear at critical decision points or after unique behaviors, ensuring relevance and avoiding redundancy.

Data-First Insights: Existing operational, transactional, and behavioral data provide the foundation for understanding experiences. Surveys now act as supplements, not the main course of the meal.

Contextual Relevance: Real-time customization ensures questions are tailored to the gaps identified by existing data, enhancing both response quality and user experience.

Strategic Focus: Surveys are used to validate hypotheses, explore unexpected behaviors, or uncover latent needs...not to rehash what’s already known.

Surveys don't have to be the blunt instrument they are today. They can be a surgical tool for extracting insights that existing data can’t reach. What are your thoughts?

#surveys #customerexperience #ai #adaptiveAI #customerfeedback #innovation #technology
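As an illustration of the trigger logic described above, here is a toy sketch in Python. Every name in it (the snapshot fields, thresholds, and survey payload) is hypothetical; a real system would pull these signals from product analytics or a customer data platform.

```python
# A toy sketch of the "adaptive survey" idea above: only trigger a short, targeted
# survey when operational data shows a gap worth explaining. Every name here
# (metrics, thresholds, the survey payload) is hypothetical, not a real product API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CustomerSnapshot:
    customer_id: str
    weekly_logins_now: int
    weekly_logins_baseline: int
    surveyed_in_last_90_days: bool

def engagement_dropped(snap: CustomerSnapshot, drop_threshold: float = 0.5) -> bool:
    """True if logins fell below half of the customer's own baseline."""
    if snap.weekly_logins_baseline == 0:
        return False
    return snap.weekly_logins_now / snap.weekly_logins_baseline < drop_threshold

def maybe_trigger_survey(snap: CustomerSnapshot) -> Optional[dict]:
    # Guardrail: don't re-survey people we've already asked recently.
    if snap.surveyed_in_last_90_days:
        return None
    if engagement_dropped(snap):
        # One focused question about the gap the data can't explain.
        return {
            "customer_id": snap.customer_id,
            "questions": ["We noticed you're using the product less lately. What changed?"],
        }
    return None

snapshot = CustomerSnapshot("c-123", weekly_logins_now=2,
                            weekly_logins_baseline=9, surveyed_in_last_90_days=False)
print(maybe_trigger_survey(snapshot))
```

The guardrail and the single focused question are the point: the survey fires only where the data-first view runs out, which is the "supplement, not main course" idea from the post.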
-
I tried to compete with AI on data analysis 🤖 Shocker, I lost. Here's what happened...

Out of habit, I started analyzing hundreds of detailed responses to an annual survey we send the team after our company retreats. Halfway through, I realized I should have leaned on AI to assist with this 🤦🏻♂️ so I decided to use this as an opportunity to measure the time saved on this routine task.

After 7 hours of meticulous manual analysis, I asked Claude (Anthropic's AI) to do the same task. It took 15 minutes to get the same results 😅 Even more impressive, I compared the quality of both analyses, and Claude's was better! It caught the exact same major themes but also spotted patterns I'd missed, probably because I had my own biases about what worked/didn't work.

This is probably common sense for many of you now, but just in case, here's how to replicate this process:
- Send a feedback form to your team (Google Forms, Typeform, etc.)
- Export the responses (CSV works best for me)
- Upload the file to your AI tool of choice (I use Claude)
- Ask it to: summarize the common themes, list top things that worked well, list areas for improvement, identify data trends, create a TL;DR, etc.
- Share insights with your team

The key takeaway for me... Many of us are still adapting to the power of the tools we have at our disposal, and I often find it easy to fall back into doing manual work out of habit or perhaps a bias toward my capabilities. But the use of AI in the modern workplace isn't about replacing human work - it's about complementing it. And it's tasks like these that provide the perfect opportunity to leverage the power of AI so we can focus our energy on implementing the conclusions it helps us create.

This was a good reminder for me, hope it's helpful for some of you as well! Photo below from this retreat in Ireland - stay tuned for another post with more details from the survey and what we learned from this retreat.
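For anyone who prefers to script the steps above rather than paste a CSV into the chat UI, here is a minimal sketch using Anthropic's Python SDK. The model ID and file name are placeholders, and the prompt simply restates the asks from the post.

```python
# A minimal sketch of the workflow above: export survey responses to CSV,
# then ask Claude for themes. Assumes the Anthropic Python SDK
# (pip install anthropic) and an ANTHROPIC_API_KEY env var; the model ID
# is a placeholder, check Anthropic's docs for current model names.
from pathlib import Path

import anthropic

responses_csv = Path("retreat_survey_responses.csv").read_text()

prompt = f"""Here are anonymous survey responses from our company retreat (CSV):

{responses_csv}

Please:
1. Summarize the common themes.
2. List the top things that worked well.
3. List areas for improvement.
4. Note any trends across responses.
5. Finish with a short TL;DR I can share with the team.
"""

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model ID
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```

If the export contains names or free-text that identifies people, strip those columns before sending the file anywhere, for the same privacy reasons raised earlier in this page.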
-
Remember that bad survey you wrote? The one that resulted in responses filled with blatant bias and caused you to doubt whether your respondents even understood the questions?

Creating a survey may seem like a simple task, but even minor errors can lead to biased results and unreliable data. If this has happened to you before, it's likely due to one or more of these common mistakes in your survey design:

1. Ambiguous Questions: Vague wording like “often” or “regularly” leads to varied interpretations among respondents. Be specific—use clear options like “daily,” “weekly,” or “monthly” to ensure consistent and accurate responses.

2. Double-Barreled Questions: Combining two questions into one, such as “Do you find our website attractive and easy to navigate?” can confuse respondents and lead to unclear answers. Break these into separate questions to get precise, actionable feedback.

3. Leading/Loaded Questions: Questions that push respondents toward a specific answer, like “Do you agree that responsible citizens should support local businesses?” can introduce bias. Keep your questions neutral to gather unbiased, genuine opinions.

4. Assumptions: Assuming respondents have certain knowledge or opinions can skew results. For example, “Are you in favor of a balanced budget?” assumes understanding of its implications. Provide necessary context to ensure respondents fully grasp the question.

5. Burdensome Questions: Asking complex or detail-heavy questions, such as “How many times have you dined out in the last six months?” can overwhelm respondents and lead to inaccurate answers. Simplify these questions or offer multiple-choice options to make them easier to answer.

6. Handling Sensitive Topics: Sensitive questions, like those about personal habits or finances, need to be phrased carefully to avoid discomfort. Use neutral language, provide options to skip or anonymize answers, or employ tactics like a Randomized Response Survey (RRS) to encourage honest, accurate responses.

By being aware of and avoiding these potential mistakes, you can create surveys that produce precise, dependable, and useful information.

Art+Science Analytics Institute | University of Notre Dame | University of Notre Dame - Mendoza College of Business | University of Illinois Urbana-Champaign | University of Chicago | D'Amore-McKim School of Business at Northeastern University | ELVTR | Grow with Google - Data Analytics

#Analytics #DataStorytelling
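Point 6 mentions randomized response. For readers who have not seen it before, here is a small simulation of one common textbook variant (forced response with a fair coin); it illustrates the general idea, not the exact protocol the post refers to.

```python
# Point 6 above mentions randomized response. A small simulation of one common
# textbook variant (the "forced response" design with a fair coin):
# heads -> answer the sensitive question truthfully, tails -> just say "yes".
# Any individual "yes" is deniable, but the aggregate rate is still recoverable:
#   P(yes) = 0.5 * true_rate + 0.5   =>   true_rate ≈ 2 * observed_yes_rate - 1
import random

random.seed(42)

def estimate_true_rate(true_rate: float, n_respondents: int) -> float:
    yes_count = 0
    for _ in range(n_respondents):
        truly_yes = random.random() < true_rate   # the respondent's honest answer
        heads = random.random() < 0.5             # private coin flip
        answered_yes = truly_yes if heads else True  # tails forces a "yes"
        yes_count += answered_yes
    observed = yes_count / n_respondents
    return 2 * observed - 1  # invert the known randomization

# With a real (hidden) rate of 30%, the estimate lands close to 0.30.
print(round(estimate_true_rate(true_rate=0.30, n_respondents=100_000), 3))
```

The trade-off is variance: the privacy-preserving noise means you need a larger sample for the same precision, which loops back to the sample-size thresholds discussed earlier on this page.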
-
Nonprofits, how do you understand if your donor survey is successful?

More times than I can count, my conversations at the start of data collection projects hover around “Can we aim for a high response rate?” or “We mostly care about a good response rate”. I get it – response rates are quantified numbers – easy to compare and track. Comparable numbers reduce a lot of “extra work” our brains would have otherwise had to do to understand unquantifiable contexts.

But I want you to see your data collection success beyond that one number. Because if you are evaluating how you did solely by how many people responded, you are missing the point of your data collection. I want us to remember that response rate alone is a misleading metric. In fact, any number that claims to reduce the entire context into digits is neither helpful nor sufficient. A high response rate means little if the data you collect doesn’t drive action or improve your donor relationships.

That means:
● building a design that focuses on your “why” from questions to every element of the user interface – questions that focus on capturing holistic experience and show you are listening.
● sending a follow-up, sharing key insights with participants, and—most importantly—taking action based on what you learn.
● ensuring your survey design is inclusive and reaches different segments of your donor base. Use accessible language and diverse channels, making it clear you value every voice.
● creating an action plan based on survey results and communicating those changes to your donors.

In the chase of "50 vs. 500", focus on the why, what, how, and everything that creates enough context for you to co-create change with your community.

#nonprofits #nonprofitleadership #community
-
Like a medical diagnosis, criticism in the workplace serves to pinpoint problems, inefficiencies, or shortcomings. It highlights areas that require attention, whether in individual performance, team dynamics, or organizational processes. However, criticism that stops at identification, without providing a roadmap for improvement, is incomplete. It can lead to frustration, demotivation, and a sense of aimlessness, akin to a patient knowing their ailment but having no means to cure it.

The transition from merely diagnosing to offering a treatment plan in the business context involves providing actionable feedback. This step requires skill, empathy, and a deep understanding of the individual or the situation at hand. Actionable feedback is specific, achievable, and relevant. It not only points out the area of concern but also offers practical steps, resources, or guidance on how to rectify the issue. This approach transforms criticism from a potentially negative interaction into a constructive and empowering one.

Incorporating actionable plans into criticism yields multiple benefits. For employees, it provides a clear path to improvement and facilitates growth. For teams, it encourages a culture of continuous improvement, collaboration, and open communication. And for organizations, it leads to improved results and a competitive edge.

Implementing this approach is not without its challenges. It requires a culture that values open communication and continuous learning. Leaders and managers must be trained to provide balanced feedback that is both honest and constructive. Additionally, there must be an understanding that the 'treatment plan' might require adjustments and flexibility, as every professional scenario is unique.

The takeaways ...

[1] When offering criticism, accompany it with a specific, measurable action plan. For instance, if an employee's performance is lacking in a certain area, don't just highlight the problem; provide clear, achievable goals and a timeline for improvement. Offer resources, if needed.

[2] Constructive criticism should not be a one-way street. Encourage employees to engage in the feedback process actively. This can be achieved by asking them for their input on potential solutions or improvements. Such an approach not only empowers the employees but also builds a culture of mutual respect and collaborative problem-solving.

[3] Criticism and action plans are not a 'set it and forget it' scenario. Regular follow-ups are crucial to ensure that the action plan is being implemented and to assess its effectiveness.

[4] Recognizing and acknowledging progress is equally important, as it reinforces positive behavior and outcomes, leading to sustained improvement and development.

✅ Share this with your network
✅ Follow me on LinkedIn for expert insights
★ DM me for a conversation to learn how we can help you grow & succeed

#business #people #leadership #management #growth #success #feedback #communication
-
Stakeholders often focus on “how many” when presented with qualitative research. Which is the wrong question to ask.

Qualitative research is about understanding the H (human) in HCI. The goal is to understand why people behave the way they do. When presenting research results: focus on showing clear patterns, supporting findings with evidence like quotes or observations, and connecting everything back to user behaviors and business goals, not sample sizes.

Also, combine qualitative with quantitative to explain the what and the why. For example:
- Quantitative shows what's happening: 72% abandon the goal-setting flow at account connection.
- Qualitative reveals why: users worry about security, are confused about account selection, and fear they can't reverse connections.
- The powerful combination: "Our drop-off problem stems from specific trust concerns and mental model mismatches. By redesigning to address these specific issues, we can reduce the 72% abandonment rate."

Beyond Numbers: How to Properly Evaluate Qualitative UX Research (9 min), by Dr Maria Panagiotidi https://lnkd.in/gbqRneY4