💝 Healthcare UX Design Playbook. With practical guides, design systems, case studies and articles on medical applications and healthcare systems ↓

✅ The healthcare industry grows 10.40% per year, but lags behind in UX.
✅ Most healthcare work focuses on policymakers, providers, insurance.
✅ Patients are often overlooked or dismissed as “secondary users”.
✅ Users often work in a highly data-heavy and interruptive environment.
✅ Healthcare UX is mostly about information design without room for error.
✅ It’s one of the few industries where users can’t choose what they use.
✅ Most systems are inherently fragmented, inconsistent, legacy-ridden.
✅ In testing, group healthcare professionals by their tasks, not job titles.
✅ Protect testers’ identities: names, images, contacts, employer, practice.
✅ Healthcare UX is an incredible opportunity to make a significant impact.

Useful resources:

Digital Health Product Design Playbook, via Alex Bilstein
https://playbook.hanno.co/
Open Health Stack Design Guidelines, by Google
https://lnkd.in/eekYjnfd
A Guide To Healthcare UX Design, by Kyjean Tomboc
https://lnkd.in/ePtfxPfJ
Healthcare UX Research: Getting Started, by Jim Ross
https://lnkd.in/ehnB-bgQ
Building a Design System For Healthcare, by Better
https://lnkd.in/eZrE7hyv
What Working In Healthcare Design Is Like, by Chris Kiess
https://lnkd.in/ez7DUXji
Healthcare and Medical Systems UX, by Creative Navy UX Agency
https://lnkd.in/eFcUAfPw
NordHealth Design System
https://nordhealth.design/
GE Healthcare Edison Design System
https://lnkd.in/ei7bk-n7
Anatomy, Boston Scientific’s Design System
https://lnkd.in/eBsx5WfU
NHS Healthcare Design System (+ Figma Kit)
Docs: https://lnkd.in/e6rNM-28
Figma kit: https://lnkd.in/e7KVjF4S
The Design System by Queensland Health
https://lnkd.in/eMm9WcYz
Healthcare.Gov Design System
https://design.cms.gov/
Health Design Thinking (book), by Bon Ku, Ellen Lupton
https://lnkd.in/ezCBZp4U
Visualizing Health and Healthcare Data (book), by Lindsay Betzendahl, Cambria Brown, Katherine Rowell
https://lnkd.in/ezbZCVtN

Highly recommended: Healthcare UX Newsletter, by Alex Bilstein
https://lnkd.in/eYMkMJv7

Healthcare systems desperately need UX designers. They are often broken, dated, inefficient and confusing — in an environment where the stakes are high and errors are intolerable. Hopefully these resources will help you better navigate healthcare UX — and make an impact that matters, at scale.

#ux #design #healthcare
User Experience Design for Healthcare
Explore top LinkedIn content from expert professionals.
-
This is a must-read for every HealthTech CEO. The UK Government’s AI Playbook outlines ten principles that ensure AI is used lawfully, ethically, and effectively.

1. Know AI’s Capabilities and Limitations
AI is not infallible. Understanding what AI can and cannot do, its risks, and how to mitigate inaccuracies is essential for responsible use.

2. Use AI Lawfully and Ethically
Legal compliance and ethical considerations are paramount. AI must be deployed responsibly, with proper data protection, fairness, and risk assessments in place.

3. Ensure Security and Resilience
AI systems are vulnerable to cyber threats. Safeguards like security testing and validation checks are necessary to mitigate risks such as data poisoning and adversarial attacks.

4. Maintain Meaningful Human Control
AI should not operate unchecked. Human oversight must be embedded in critical decision-making processes to prevent harm and ensure accountability.

5. Manage the Full AI Lifecycle
AI systems require continuous monitoring to prevent drift, bias, and inaccuracies. A well-defined lifecycle strategy ensures sustainability and effectiveness.

6. Use the Right Tool for the Job
AI is not always the answer. Carefully assess whether AI is the best solution or whether traditional methods would be more effective and efficient.

7. Promote Openness and Collaboration
Engaging with cross-government communities, civil society, and the public fosters transparency and trust in AI deployments.

8. Work with Commercial Experts
Collaboration with commercial and procurement teams ensures AI solutions align with regulatory and ethical standards, whether developed in-house or procured externally.

9. Develop AI Skills and Expertise
Upskilling teams on AI’s technical and ethical dimensions is crucial. Decision-makers must understand AI’s impact on governance and strategy.

10. Align AI Use with Organisational Policies
AI implementation should adhere to existing governance frameworks, with clear assurance and escalation processes in place.

AI in healthcare can be revolutionary if it’s done right. My key takeaways (well, some of them):

- Any AI solution aimed at the NHS must comply with UK AI regulations, GDPR, and NHS-specific security policies.
- AI models should be explainable to clinicians and patients to build trust.
- AI in healthcare must be clinically validated and continuously monitored.
- Internal AI ethics committees and compliance frameworks will be key to NHS adoption.

Is your AI truly NHS-ready?
-
I believe agentic experiences are the missing piece in healthcare transformation. So I created a resource for those building the future of healthcare experiences: the Building Agentic Experiences Playbook – For Healthcare.

Too often, our systems are reactive—responding to problems after they’ve already happened. What we need instead are agentic experiences: AI-enabled, human-centered systems that sense, predict, and act in real time. This playbook was created to help healthcare leaders, innovators, and design thinkers reimagine what’s possible.

Inside, you’ll find:
✨ Frameworks for designing adaptive, empathetic experiences
✨ Persona and journey-mapping methods tailored for healthcare
✨ Technical architectures for orchestrating multi-agent ecosystems
✨ Guidance on governance, ethics, and regulatory alignment
✨ Case studies that bring these concepts to life through real-world scenarios
✨ A roadmap for implementation, measurement, and scaling

At its core, this work is about one idea: AI should not replace humans in healthcare, but amplify care—making it more continuous, proactive, and compassionate.

This playbook is just the start—what matters most is how we put these principles into practice. If healthcare is about humanity, then agentic design is about keeping that humanity at the center.

#HealthcareAI #AgenticAI #DigitalHealth #PatientExperience #Innovation #HumanCenteredDesign
-
Rethinking Patient-Centric Care: Ask, Don’t Guess. Co-Create, Don’t Dictate.

I recently had a thought-provoking conversation with Quek Lit Sin, former CEO of Ng Teng Fong Hospital, about what it truly means to provide patient-centric care. One key insight? Patients should be actively involved in co-creating solutions for their own care.

While this may seem obvious, it’s not always the norm in our region. Too often, healthcare operates on a top-down approach, where medical professionals prescribe solutions without fully understanding how patients have coped with similar challenges before.

Think about it: doctors tell patients what to do because they assume they know best. While the advice may be sound, it often doesn’t take into account a patient’s lived experiences, personal strategies, or what they find realistically achievable. The result? Patients struggle to follow through because the solution doesn’t fit them.

A different approach is possible. I recall hearing about a UK hospital where solution-focused practitioners worked with patients suffering from brain trauma. Instead of dictating their recovery path, they asked:
✅ What has helped you manage challenges in the past?
✅ What small steps feel achievable for you right now?
✅ What would make this journey easier for you?

By listening and co-creating, they saw higher patient engagement and better recovery outcomes.

But this mindset isn’t just for healthcare. What if we applied the same approach across industries?
🔹 Education: Instead of telling students how to study, what if teachers asked, “What helps you concentrate best?”
🔹 Customer Service: Instead of service managers dictating best practices, what if they asked frontline staff, “What truly enhances the customer experience?”
🔹 Leadership: Instead of leaders assuming what motivates employees, what if they asked, “What would make you feel truly valued and engaged?”

A simple shift from telling to asking can create more meaningful, lasting impact.

💡 Ask, don’t guess. 💡 Co-create, don’t dictate.

Would love to hear your thoughts—where else do you think this approach could make a difference? 👇🏼

#smallstepstobigchanges #cocreation #leadership #culture
-
AI scribes didn’t just make notes faster: they cut burnout.

New study in JAMA Network Open (Oct 2, 2025): after just 30 days with an ambient AI scribe across 6 health systems and 263 ambulatory clinicians, burnout fell from 51.9% to 38.8% (adjusted OR 0.26).

What changed:
• Cognitive load dropped (−2.64 on a 10-point scale)
• After-hours charting shrank by ~0.9 hours/week
• More undivided attention for patients
• Slightly clearer care plans for patients reading notes
• Easier to add urgent slots when needed

Ambulatory medicine is cognitive work. For me, analyzing a patient’s history and listening deeply, both for clues to a diagnosis and for what’s not being said, can be more mentally taxing than a full day in the OR. Anything that decreases cognitive load frees up bandwidth to focus on getting the patient better. Every minute reclaimed from the EHR is a minute returned to clinical reasoning, patient connection, and recovery time: the real levers against burnout.

Caveats (and still promising): it’s self-reported, there was no control group, and the study ran only 30 days, but the signal is strong and consistent with smaller pilots. The next step is measuring: track EHR time, after-hours work, inbox load, note quality, and patient access before and after deployment.

My take as a practicing GI: the win isn’t “AI writes the note.” It’s that clinicians think more, click less, and patients get clearer plans. That’s the bridge from potential to practice.

#AIinHealthcare #DigitalHealth #PhysicianBurnout #HealthInnovation
-
This paper discusses the integration of human values into LLMs used in medical decision-making, highlighting the complexities and ethical considerations involved.

1️⃣ Human values, reflecting individual and societal principles, inherently influence AI models from data selection to deployment.
2️⃣ The incorporation of values in medical decision-making dates back to the 1950s, emphasizing the ongoing relevance of balancing probability and utility.
3️⃣ Training data can both reveal and amplify societal biases, affecting AI outputs in medical applications like radiography and dermatology.
4️⃣ A clinical example involving growth hormone treatment illustrates how values influence AI recommendations and decision-making among patients, doctors, and insurers.
5️⃣ LLMs can be tuned to reflect specific human values through supervised fine-tuning and reinforcement learning from human feedback, but this raises questions about whose values are represented.
6️⃣ Addressing value discrepancies, ensuring continuous retraining, and using population-level utility scores are vital for aligning AI with evolving human values.

✍🏻 Kun-Hsing Yu, Elizabeth Healey, Tze-Yun Leong, Isaac Kohane, Arjun Manrai. New England Journal of Medicine. May 30, 2024. DOI: 10.1056/NEJMra2214183
-
What Keeps Women Engaged in Health Apps? (Hint: It’s Not More Notifications)

A 2024 scoping review in the Journal of Medical Internet Research delved into women’s preferences for personalized digital health tools. The study emphasized that personalization isn’t just a feature; it’s a necessity for effective engagement.

✅ What Worked:
↳ Personalized Messaging: Tailored messages, especially those addressing individual health goals and using the user’s name, enhance motivation and reduce message fatigue.
↳ Customizable Notifications: Allowing users to set the frequency and timing of notifications led to higher notification interaction.
↳ User-Controlled Dashboards: Dashboards that women could customize to track metrics relevant to their evolving goals (e.g., during pregnancy or lifestyle changes) improved long-term adherence.
↳ Aesthetic and Discreet Wearables: Sleek, stylish designs that could be worn discreetly encouraged consistent use, with some users viewing them as fashion accessories.

❌ What Didn’t Work:
↳ Connectivity Issues: Dependence on stable internet connections posed challenges, especially in areas with limited coverage, leading to decreased tool usage.
↳ Software Glitches: App crashes, slow loading times, and broken links frustrated users, diminishing trust and engagement.
↳ Lack of Cultural Relevance: Tools that didn’t account for cultural dietary habits or language preferences were less effective and often abandoned.
↳ Overwhelming Data Presentation: Excessive or complex data without clear explanations made users feel overwhelmed, reducing the perceived usefulness of the tool.

📊 Engagement Boost with Personalization: Studies within this review indicated that personalization features could increase user engagement by up to 40%. For instance, customizable notifications and personalized messaging significantly enhanced user interaction with health apps.

Source: Journal of Medical Internet Research
https://lnkd.in/d6-6czg3

💡 Tools that empower choice, respect preferences, and simplify data drive real engagement. What’s worked for you to boost retention? Looking forward to your insights in the comments below.

👋 I’m Dr. Michelle Frank, specializing in women’s health advocacy. Connect with me to discuss how we can work together to overcome these societal barriers and improve women’s healthcare autonomy.

#DigitalHealth #WomensHealth #Personalization #HealthTech #UserEngagement #Innovation #CommunityManagement
-
Users don't suck, but the information provided to them can. If your IFU (instructions for use) reads like a legal contract, people won’t read it. Why? Because it’s confusing. Too wordy. Too complex. Too scattered.

A great IFU should feel like having a clear-headed expert guiding you step by step. The user needs to know what to do, how to do it, and when to do it.

Here are 20 recommendations and writing rules to improve your IFU ↴

1. Write procedures in short, identifiable steps, and in the correct order.
2. Before listing steps, tell the reader how many steps are in the procedure.
3. Limit each step to no more than three logically connected actions.
4. Make instructions for each action clear and definite.
5. Tell the user what to expect from an action.
6. Discuss common use errors and provide information to prevent and correct them.
7. Each step should fit on one page.
8. Avoid referring the user to another place in the manual (no cross-referencing).
9. Use as few words as possible to present an idea or describe an action.
10. Use no more than one clause in a sentence.
11. Write in a natural, conversational way. Avoid overly formal language.
12. Express ideas of similar content in similar form.
13. Users should be able to read instructions aloud easily. Avoid unnecessary parentheses.
14. Use the same term consistently for devices and their parts.
15. Use specific terms instead of vague descriptions.
16. Use active verbs rather than passive voice.
17. Use action verbs instead of nouns formed from verbs.
18. Avoid abbreviations or acronyms unless necessary. Define them when first used and stay consistent.
19. Use lay language instead of technical jargon, especially for medical devices intended for laypersons.
20. Define technical terms the first time they appear and keep definitions simple.

Prioritize the user while ensuring MDR/IVDR compliance.
-
Can Public Health Catch Up with Fintech on Behavioral Design?

Listening to the CEO of a global bank on The David Rubenstein Show, I was struck by his statement: we view our mobile app as our largest “branch”. Once I thought about it, it made a lot of sense: public health may want to learn from fintech when it comes to design.

Fintech platforms are often masterclasses in behavioral design—making complex actions (like saving or investing) simple, intuitive, and even rewarding. In public health, we still ask people to show up, fill out long forms, and remember complex schedules, often without support, feedback, or reminders.

But here’s the thing: the same behavioral principles that make fintech so effective can, and should, be applied to public health. The crossovers are powerful, and the stakes are even higher. Here are just a few examples:

🔹 1. Simplification
Fintech: Auto-fill forms, single-click transfers, clear dashboards.
Public Health: Appointment scheduling, consent forms, and health messages should be short, visual, and mobile-friendly.

🔹 2. Prompting at the Right Time
Fintech: Reminders before bill due dates or spending alerts.
Public Health: Timely nudges, like SMS reminders before vaccination days, can dramatically improve follow-through.

🔹 3. Goal Setting with Feedback
Fintech: Users track savings or investment progress.
Public Health: Parents could track immunization completion, or users could set and monitor steps, check-ups, or medication adherence.

🔹 4. Defaults
Fintech: Auto-enroll into savings plans.
Public Health: Why not use opt-out vaccination appointments or auto-enroll for screenings?

🔹 5. Gamification and Micro-Rewards
Fintech: Points for spending habits or streaks.
Public Health: Small incentives—tokens, airtime, badges—can reward key behaviors like attending health visits or completing education modules.

🔹 6. Social Proof & Comparisons
Fintech: “People like you saved $200 this month.”
Public Health: “Most parents in your community have vaccinated their daughters.” Descriptive norms work.

🔹 7. Personalization
Fintech: Tailored budgeting advice.
Public Health: Personalized SMS: “Fatima, your daughter’s next vaccine is due this week.” Names matter. Timing matters.

🔹 8. Progress Indicators
Fintech: Visual progress bars motivate users.
Public Health: Progress tracking toward full immunization or treatment completion builds momentum and accountability.

We don’t need to reinvent the wheel. Fintech figured out how to help people do hard things. Public health can do the same, with even greater impact.

#BehaviorDesign #BehavioralScience #Fintech #PublicHealth #DigitalHealth #BehaviorChange #GlobalHealth #HumanCenteredDesign #VaccinesWork #HealthEquity #SocialNorms #HPVVaccination #Innovation #DigitalTransformation
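To make the "Defaults", "Prompting at the Right Time", and "Personalization" principles above concrete, here is a minimal Python sketch of opt-out enrollment plus personalized lead-time reminders. This is an illustrative toy, not taken from any real public-health system; the class names, message wording, and three-day lead window are all assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Appointment:
    patient: str
    due: date
    opted_out: bool = False  # default is enrolled: opt-out, not opt-in

def reminders_for(appointments, today, lead_days=3):
    """Return personalized SMS texts for appointments due within lead_days."""
    msgs = []
    for a in appointments:
        if a.opted_out:
            continue  # respect the explicit opt-out
        days_until = (a.due - today).days
        if 0 <= days_until <= lead_days:
            # personalization: use the patient's name and a concrete date
            msgs.append(f"{a.patient}, your appointment is on {a.due.isoformat()}.")
    return msgs
```

Run daily, this sends each enrolled patient a nudge shortly before their appointment, while anyone who opted out is skipped, which is the behavioral-default pattern the post describes.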
-
👀 That moment when “one-size-fits-all” just doesn’t cut it anymore… and you’re all eyes on “personalized”.

By strategically combining AI and behavioral science, you can create the kind of impact that drops jaws 😮 That’s exactly what Amy Bucher and her team are doing at Lirio with precision nudging: hyper-personalized communications crafted from the fusion of AI tech and behavioral science expertise.

So, what’s in their secret sauce for hyper-personalization?

1️⃣ Behavioral Science Foundations
Lirio’s team builds interventions from both top-down and bottom-up research. They start with behavior change models and existing literature to pinpoint the key drivers of target behaviors. Then they blend in on-the-ground insights from stakeholders and partners to create an initial framework, or logic model.

2️⃣ Precision Content Design
They identify behavior change techniques (BCTs) and translate these into content assets — bites and visuals that combine into engaging, targeted messages.

3️⃣ AI Training & Iteration
Here, Lirio’s AI team steps in. They train AI “agents” with specialized jobs to determine the best content, timing, and channels for each person’s engagement. The AI agents work with a reward system that incentivizes various steps of the patient journey, from opening emails to scheduling and attending an appointment. Using contextual bandits (think of them as superhuman experimenters) and reinforcement learning (the aforementioned system of incentivizing behaviors), they continuously test different messages to maximize the target behavior (e.g., getting a mammogram).

The result? Hyper-personalized messaging that respects each person’s unique context and helps them complete essential health tasks.

👋 Goodbye to the old days of one-size-fits-all interventions, and hello to personalized communications that address each user’s unique situation.

#AI #BehavioralScience #BehavioralDesign

cc: Samuel Salzer Habit Weekly Nuance Behavior
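For readers curious what a contextual bandit actually does, here is a minimal epsilon-greedy sketch in Python. This is a hypothetical illustration of the general technique, not Lirio's actual system; the arm names, context keys, and reward scheme are invented for the example, and real deployments typically use model-based bandits (e.g., LinUCB or Thompson sampling) over rich feature vectors rather than a per-context lookup table.

```python
import random
from collections import defaultdict

class EpsilonGreedyBandit:
    """Pick a message variant (arm) per user context; learn from engagement rewards."""

    def __init__(self, arms, epsilon=0.1):
        self.arms = arms          # e.g., message variants to choose among
        self.epsilon = epsilon    # fraction of the time we explore at random
        self.counts = defaultdict(int)    # pulls per (context, arm)
        self.values = defaultdict(float)  # running mean reward per (context, arm)

    def select(self, context):
        """Mostly exploit the best-known arm for this context; sometimes explore."""
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[(context, a)])

    def update(self, context, arm, reward):
        """Record an observed reward (e.g., 1.0 if the patient booked, else 0.0)."""
        key = (context, arm)
        self.counts[key] += 1
        # incremental mean: new_mean = old_mean + (reward - old_mean) / n
        self.values[key] += (reward - self.values[key]) / self.counts[key]
```

In a messaging loop, each send is a `select`, and each observed outcome (email opened, appointment scheduled) feeds back through `update`, so the estimate for each (context, arm) pair converges toward the variant that works best for that kind of user.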