⏱️ How To Measure UX (https://lnkd.in/e5ueDtZY), a practical guide on using UX benchmarking, SUS, SUPR-Q, UMUX-LITE, CES, and UEQ to eliminate bias and gather statistically reliable results — with useful templates and resources. By Roman Videnov.

Measuring UX is mostly about showing cause and effect. Management, of course, wants to do more of what has already worked, and it typically wants to see ROI > 5%. But the return is more than just increased revenue: it’s also reduced costs and mitigated risk. And UX is an incredibly affordable yet impactful way to achieve it.

Good design decisions are intentional. They aren’t guesses or personal preferences; they are deliberate and measurable. Over the last few years, I’ve been setting up design KPIs in teams to inform and guide design decisions. Here are some examples:

1. Top task success > 80% (for critical tasks)
2. Time to complete top tasks < 60s (for critical tasks)
3. Time to first success < 90s (for onboarding)
4. Time to candidates < 120s (nav + filtering in eCommerce)
5. Time to top candidate < 120s (for feature comparison)
6. Time to hit the limit of the free tier < 7d (for upgrades)
7. Presets/templates usage > 80% per user (to boost efficiency)
8. Filters used per session > 5 per user (quality of filtering)
9. Feature adoption rate > 80% (usage of a new feature per user)
10. Time to pricing quote < 2 weeks (for B2B systems)
11. Application processing time < 2 weeks (online banking)
12. Default settings correction < 10% (quality of defaults)
13. Search results quality > 80% (for the top 100 most popular queries)
14. Service desk inquiries < 35/week (poor design → more inquiries)
15. Form input accuracy ≈ 100% (user input in forms)
16. Time to final price < 45s (for eCommerce)
17. Password recovery frequency < 5% per user (for auth)
18. Fake email frequency < 2% (for email newsletters)
19. First contact resolution > 85% (quality of service desk replies)
20. “Turn-around” score < 1 week (frustrated users → happy users)
21. Environmental impact < 0.3g/page request (sustainability)
22. Frustration score < 5% (AUS + SUS/SUPR-Q + Lighthouse)
23. System Usability Scale > 75 (overall usability)
24. Accessible Usability Scale (AUS) > 75 (accessibility)
25. Core Web Vitals ≈ 100% (performance)

Each team works with 3–4 local design KPIs that reflect the impact of their work, and 3–4 global design KPIs mapped against touchpoints in a customer journey. The search team works with the search quality score, the onboarding team with time to success, the authentication team with the password recovery rate. What gets measured gets better, and it gives you the data you need to monitor and visualize the impact of your design work. Once it becomes second nature in your process, not only will you have an easier time getting buy-in, you’ll also build enough trust to boost UX in a company with low UX maturity. [more in the comments ↓] #ux #metrics
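The System Usability Scale target mentioned in the post is straightforward to compute from raw questionnaire answers. A minimal sketch of standard SUS scoring, assuming each participant answers the ten items on a 1–5 agreement scale:

```python
def sus_score(responses):
    """Standard SUS scoring: exactly 10 answers, each from 1 to 5.
    Odd-numbered items contribute (answer - 1); even-numbered items
    contribute (5 - answer); the total is scaled by 2.5 to a 0-100 score."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A participant who answers 4 on every odd item and 2 on every even item:
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Averaging this score across participants gives the number to track against the "> 75" KPI above.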
Competitive Analysis In UX
-
Price benchmarking and positioning are among the most important aspects of a new fashion brand launch, even more so for an international brand launching in the diverse and competitive Indian market. The key benchmark, of course, is the brand's price positioning in its base market as a starting point. More important still is the existing India price positioning of the brand’s global competitors. The goal is to marry the outside-in and inside-out perspectives to identify that sweet spot in the market. Simply applying a multiplier to the brand’s base-market pricing for India may not be enough to cut through. It’s more nuanced than that; below are some key factors to consider:

🔸 The brand's own base-market price positioning, and aligning India pricing with it. M&S had to revise and reduce its pricing within a few years of its India launch back in 2001 to align more with the market and stay competitive.

🔸 Global competitors' pricing in India and their positioning vis-à-vis the brand’s global benchmark. For example, a European denim brand starting at €100 MRP and planning to launch in India would need to benchmark its prices against Levi's in both the European and Indian markets, and align accordingly.

🔸 Net landed cost, including customs duty, freight, etc., and the India sourcing mix required to reach ideal gross margins while maintaining global product standards and price competitiveness in the local market. Many leading international fashion brands operating in India over many years have successfully offered that at scale, and continue to grow.

🔸 Pricing based on perceived product value, core vs. fashion, categories, etc. A brand may price at a premium if needed, or sharper, to sell more at full price and less on discounts. Zara's entry-price products in India are priced much sharper than its higher-priced products relative to global price benchmarks, precisely to hit that sweet price point for its TG.
Thanks to social media, customers today are well informed about a brand's price positioning in the global market. They will compare its pricing in Dubai, Bangkok, or even the EU and US markets with its pricing in India, and make their shopping choices accordingly across whichever brands and markets are accessible. Sharing snapshots of an SS25-season men's basic t-shirt entry-price-point comparison, for a like-for-like style, across brands in India and their global base markets for perspective. Your thoughts? #Pricing #Positioning #Benchmark #Fashion #International #Brand #India #Market #Launch #Strategy
-
Brains aren’t calculators (they really aren’t). People compare, not score, so why do we keep asking for numbers when their minds work in stories and snapshots? I used to rely heavily on rating questions in UX studies. You’ve probably used them too. Rate the ease of a task from 1 to 7 or indicate satisfaction on a scale from 1 to 10. These questions feel measurable and look neat in reports, but after running enough sessions, I started noticing a pattern. A participant would finish a task and pause when asked for a score. They’d hesitate, look unsure, and eventually say something like, “Maybe a six?” followed by, “I’m not really sure what that means.” That hesitation is not about the experience itself. It’s about the format of the question. Most people do not evaluate their experiences using numbers. They judge by comparing, whether against other apps, past expectations, or familiar interactions. When I started asking questions like “How did that compare to what you’re used to?” or “Was that easier or harder than expected?” the responses became clearer and more useful. Participants shared what stood out, what surprised them, and what felt better or worse. Their answers were grounded in real impressions, not guesses. This shift from rating questions to comparison questions changed how I run research. Rating scales flatten experiences into abstract numbers. Comparison questions surface preference, context, and emotion. They help users express themselves in the way they naturally reflect on experiences. And they help researchers hear the parts of the experience that actually drive behavior. There is strong support for this in cognitive science. Tversky’s Elimination by Aspects model shows that people decide by gradually filtering out options that lack something important. Prototype theory explains that we judge how well something matches our internal image of what “good” looks like. Both models show that people think in relative terms, not fixed scores. 
Even heuristic evaluation in usability relies on comparing designs to expected norms and mental shortcuts, not isolated measurement. These models all point to the same idea. People understand and evaluate experiences through contrast. Asking them to rate something on a scale often hides what they really feel. Asking them to compare helps them express it. I still use quantitative data when needed. It helps with tracking and reporting. But when I want to understand why something works or fails, I ask comparison questions. Because users don’t think in scores. They think in reference points, in expectations, and in choices. That is what we should be listening to.
-
If you’re an AI engineer building multi-agent systems, this one’s for you. As AI applications evolve beyond single-task agents, we’re entering an era where multiple intelligent agents collaborate to solve complex, real-world problems. But success in multi-agent systems isn’t just about spinning up more agents; it’s about designing the right coordination architecture: deciding how agents talk to each other, split responsibilities, and come to shared decisions. Just like software engineers rely on design patterns, AI engineers can benefit from agent design patterns to build systems that are scalable, fault-tolerant, and easier to maintain. Here are 7 foundational patterns I believe every AI practitioner should understand:

→ 𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹 𝗣𝗮𝘁𝘁𝗲𝗿𝗻 Run agents independently on different subtasks. This increases speed and reduces bottlenecks; ideal for parallelized search, ensemble predictions, or document classification at scale.

→ 𝗦𝗲𝗾𝘂𝗲𝗻𝘁𝗶𝗮𝗹 𝗣𝗮𝘁𝘁𝗲𝗿𝗻 Chain agents so the output of one becomes the input of the next. Works well for multi-step reasoning, document workflows, or approval pipelines.

→ 𝗟𝗼𝗼𝗽 𝗣𝗮𝘁𝘁𝗲𝗿𝗻 Enable feedback between agents for iterative refinement. Think of use cases like model evaluation, coding agents testing each other, or closed-loop optimization.

→ 𝗥𝗼𝘂𝘁𝗲𝗿 𝗣𝗮𝘁𝘁𝗲𝗿𝗻 Use a central controller to direct tasks to the right agent(s) based on input. Helpful when agents have specialized roles (e.g., image vs. text processors) and dynamic routing is needed.

→ 𝗔𝗴𝗴𝗿𝗲𝗴𝗮𝘁𝗼𝗿 𝗣𝗮𝘁𝘁𝗲𝗿𝗻 Merge outputs from multiple agents into a single result. Useful for ranking, voting, consensus-building, or when synthesizing diverse perspectives.

→ 𝗡𝗲𝘁𝘄𝗼𝗿𝗸 (𝗛𝗼𝗿𝗶𝘇𝗼𝗻𝘁𝗮𝗹) 𝗣𝗮𝘁𝘁𝗲𝗿𝗻 Allow all agents to communicate freely in a many-to-many fashion. Enables collaborative systems like swarm robotics or autonomous fleets.
✔️ Pros: Resilient and decentralized
⚠️ Cons: Can introduce redundancy and increase communication overhead

→ 𝗛𝗶𝗲𝗿𝗮𝗿𝗰𝗵𝗶𝗰𝗮𝗹 𝗣𝗮𝘁𝘁𝗲𝗿𝗻 Structure agents in a supervisory tree.
Higher-level agents delegate tasks and oversee execution. Useful for managing complexity in large agent teams. ✔️ Pros: Clear roles and top-down coordination ⚠️ Cons: Risk of bottlenecks or failure at the top node These patterns aren’t mutually exclusive. In fact, most robust systems combine multiple strategies. You might use a router to assign tasks, parallel execution to speed up processing, and a loop for refinement, all in the same system. Visual inspiration: Weaviate ------------ If you found this insightful, share this with your network Follow me (Aishwarya Srinivasan) for more AI insights, educational content, and data & career path.
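The router, parallel, and aggregator patterns described above combine naturally. A toy sketch, not tied to any agent framework — the agent functions, the `ROUTES` table, and the string outputs are all illustrative stand-ins for real LLM calls:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative "agents": each is just a function from a task dict to an output.
def text_agent(task):  return f"text:{task['payload'].upper()}"
def image_agent(task): return f"image:{len(task['payload'])}px"

ROUTES = {"text": text_agent, "image": image_agent}

def route(task):
    """Router pattern: a central controller directs each task to the right agent."""
    return ROUTES[task["kind"]](task)

def run_parallel(tasks):
    """Parallel pattern: independent tasks fan out across agents concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(route, tasks))

def aggregate(results):
    """Aggregator pattern: merge the agents' outputs into a single result."""
    return " | ".join(sorted(results))

tasks = [{"kind": "text", "payload": "hello"},
         {"kind": "image", "payload": "abc"}]
print(aggregate(run_parallel(tasks)))  # image:3px | text:HELLO
```

Swapping the dictionary lookup for an LLM-based classifier, or the join for a voting step, changes the implementation without changing the coordination shape.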
-
AI is killing the UX Design role as we know it. Designers who adapt will evolve into Strategic Experience Architects who will be in high demand. While traditional designers are "pixel-pushing," a new set of designers is emerging. They're using AI to fast-track design ideas and turning prototypes into working code. A lot of what UX designers are doing manually today is exactly what AI tools are getting good at: • Rapid wireframing concepts • UI component creation • Basic user research • Persona development • Usability testing automation The ability to automate some UX tasks is already here. We have to assume that the technology will only advance quickly. I recently spoke with several Product Managers who are already replacing basic UX tasks with AI tools. When PMs can generate, iterate, and validate designs using AI, what happens to the traditional UX role? Simple products and startups will streamline. PMs with AI will be able to handle the basics. We're already seeing this shift. However, there's a big opportunity here as well. AI has a critical blind spot: it can't grasp the nuanced psychology of human behavior. It can't navigate complex stakeholder dynamics. It can't translate business objectives into meaningful user experiences. This is where the evolution happens. The future belongs to Strategic Experience Architects who: ✦ Define the right problems to solve ✦ Extract insights from human complexity ✦ Align teams around user value ✦ Guide AI with human context The market is splitting: → Basic products: UX roles blend into other roles on the team → Complex enterprises: Strategic UX roles become critical Fortunately, most valuable products are complex and human-centered. Want to stay relevant? Here's what to consider. 1. Master AI design tools But don't just use them, learn to orchestrate them 2. Evolve from maker to strategist Your value is in thinking, not in pushing pixels (AI will eventually handle this) 3. 
Develop business intelligence Connect user needs to revenue 4. Study human psychology This is your moat against AI 5. Learn systems thinking Focus on developing repeatable systems in your daily work The UX industry isn't dead, but it is transforming. -- ♻️ Share if you think this will help others ➕ Follow Jason Moccia for more insights on AI and Product Design
-
When working on new features, designers often focus purely on usability. However, usability is only one part of launching something successful. Users may never use a well-designed feature if it’s: - A feature they don’t need. - A feature you aren’t communicating well. - A feature that is completely hidden in your product. - A feature you have to kill because it doesn't make business sense. Now, assuming you have a feature that people need... In designing and launching it, try incorporating the following concepts into your thinking: 1. How do users hear about it? 2. How do users discover it? 3. How easy or difficult is it to use? 4. Does it achieve the desired outcome? 5. How does it affect key business metrics? To drive effective communication, work with your product marketing, customer success, or CRM team to inform users about the upcoming launch: - What is the name of the feature? - Is it useful to all users or only a specific subset? - What are the benefits, and how do we show them on our emails/website? To drive discoverability, think of the use case and how it fits within the existing workflow of your users: - What updates do you need to make on your navigation? - What kind of signposting do you need to do with UI elements? - Do you need to do any interlinking with existing features? To measure usability, pick the most appropriate testing method depending on how much time/capacity you have: - Pre-launch, try testing it with a prototype. - Post-launch, use a combination of user tests and in-app surveys. To measure the impact: - Make sure your data tracking events are set up. - Set up a report/dashboard that you can check on a weekly basis. - Work with your data team to calculate projected retention and LTV. — If you found this post helpful, consider reposting ♻️ #productdesign #uxdesign #uiux
-
Interview Question: "With 100 pages, do you create 100 Page Objects?" The answer reveals how well you understand design patterns in test automation. Here's my approach using patterns every automation engineer should know:

➡️ Singleton Pattern - Think of the WebDriver/configuration file as your single key. I use Singleton to ensure only one instance exists throughout the test execution. No matter how many pages I create, they all share the same instance - no confusion, no conflicts. (Quick note: if WebDriver uses the Singleton pattern, it restricts parallel execution.)

➡️ Page Components Pattern - Real websites have repeating pieces - headers, footers, search bars, product cards. I create reusable components like HeaderComponent and ProductCardComponent that can be shared across multiple pages. Think LEGO blocks - build once, use everywhere.

➡️ Feature-Based Pages - Instead of creating LoginPage, RegisterPage, ForgotPasswordPage separately, I create an AuthenticationPage that handles all login-related features. The same logic applies to ShoppingPages, AccountPages, and CheckoutPages. Group by functionality, not by URL.

➡️ Builder Pattern - When creating complex page objects or test data, the Builder pattern makes it elegant. Instead of messy constructors with 10 parameters, I chain methods: new UserBuilder().withName("John").withEmail("test@email.com").build() - much cleaner and more readable.

➡️ Fluent Interface - This makes your page interactions read like natural language: loginPage.enterUsername("user").enterPassword("pass").clickLogin().verifyDashboard() - each method returns a page object, allowing smooth chaining. (Quick note: Fluent and Builder patterns can introduce tight coupling.)

➡️ Common Utilities (BasePage & BaseTest) - BasePage contains shared functionality like wait methods, screenshot capture, and common element interactions. BaseTest handles driver setup, teardown, and reporting. These base classes eliminate duplicate code across your framework.
➡️ Page Object Model (POM) - This is your foundation pattern. Instead of scattering element locators across test methods, POM creates a clean separation where each page becomes a class with its own elements and methods. But here's the key - you don't need 100 classes for 100 pages.

➡️ The Magic Result: 100 pages become just 8-10 well-designed classes that handle everything efficiently. Your framework becomes a Swiss Army knife - compact but incredibly powerful.

Remember: Great automation isn't about having more classes - it's about having smarter patterns that scale effortlessly. What's your favorite design pattern for test automation? Share below! 👇 -x-x- Crack your next SDET Coding Round with guided video sessions: https://lnkd.in/ggXcYU2s #japneetsachdeva
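The Builder and Fluent Interface ideas above are language-agnostic. A minimal sketch in Python, with no real browser behind it — the `User`, `UserBuilder`, and `LoginPage` classes and their recorded "steps" are invented purely to show the chaining mechanics:

```python
class User:
    def __init__(self, name, email):
        self.name, self.email = name, email

class UserBuilder:
    """Builder pattern: chainable setters instead of a many-argument constructor."""
    def __init__(self):
        self._name = self._email = None
    def with_name(self, name):
        self._name = name
        return self            # returning self is what enables chaining
    def with_email(self, email):
        self._email = email
        return self
    def build(self):
        return User(self._name, self._email)

class LoginPage:
    """Fluent interface: each action returns a page object so steps read as one sentence."""
    def __init__(self):
        self.steps = []        # stand-in for real WebDriver interactions
    def enter_username(self, user):
        self.steps.append(f"username={user}")
        return self
    def enter_password(self, pwd):
        self.steps.append("password=***")
        return self
    def click_login(self):
        self.steps.append("login")
        return self            # a real framework might return a DashboardPage here

user = UserBuilder().with_name("John").with_email("test@email.com").build()
page = LoginPage().enter_username(user.name).enter_password("secret").click_login()
print(user.name, page.steps)  # John ['username=John', 'password=***', 'login']
```

The tight-coupling caveat from the post is visible here: every fluent method hard-codes which page object comes next, so restructuring a flow means touching the page classes themselves.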
-
𝐌𝐚𝐫𝐤𝐞𝐭𝐞𝐫𝐬! 𝐒𝐭𝐚𝐫𝐭 𝐓𝐫𝐞𝐚𝐭𝐢𝐧𝐠 𝐏𝐨𝐬𝐢𝐭𝐢𝐨𝐧𝐢𝐧𝐠 𝐚𝐬 𝐚 𝐁𝐮𝐬𝐢𝐧𝐞𝐬𝐬 𝐋𝐞𝐯𝐞𝐫 Many marketers view positioning simply as a step that feeds into their communication strategies, but that's a limited perspective. The belief that advertising reach and distribution alone drive business growth is outdated. Instead, effective positioning can make a significant difference. Positioning is your brand’s response to the changing marketing reality. Here’s how to do it right: 𝐈𝐝𝐞𝐧𝐭𝐢𝐟𝐲 𝐂𝐚𝐭𝐞𝐠𝐨𝐫𝐲 𝐍𝐨𝐧-𝐧𝐞𝐠𝐨𝐭𝐢𝐚𝐛𝐥𝐞: Every product category has its non-negotiables—fundamental benefits that consumers expect. For example, people look for delight in chocolates, cleanliness in detergents, and sound quality in speakers. Identifying these non-negotiables is the first step to understanding what drives your audience. 𝐈𝐝𝐞𝐧𝐭𝐢𝐟𝐲 𝐭𝐡𝐞 𝐓𝐞𝐧𝐬𝐢𝐨𝐧: To truly nail your positioning, focus on three key areas: Culture-In: Recognize the cultural shifts influencing category expectations. Changes in consumer values, like the shift from a feature-centric to a value-centric mindset, can significantly impact how products are perceived and sold. Consumer-In: Dive deep into current consumer attitudes to uncover the deeper needs within a category. For detergents, this might be stain removal or ease of use; for shampoos, it could be about hair strengthening. Category-In: Pay attention to emerging conversations and dominant themes within your category. For example, if the prevailing trend in tires is durability, and your product offers superior braking, that's a unique angle to highlight. By synthesizing insights from these areas, you can pinpoint the tensions—whether they stem from cultural shifts, consumer needs, or category evolution—that your brand needs to address. 𝐁𝐫𝐞𝐚𝐤 𝐃𝐨𝐰𝐧 𝐘𝐨𝐮𝐫 𝐏𝐫𝐨𝐩𝐨𝐬𝐢𝐭𝐢𝐨𝐧: Once you identify the tension, deconstruct it to find your unique angle. This becomes the cornerstone of your positioning strategy—what your brand offers to resolve the identified tension and stand out in the marketplace. 
Positioning is more than a mere exercise; it's a strategic tool that, when leveraged correctly, can propel your business forward. Understand your audience, identify the tensions, and use them to refine your proposition. That's how you turn positioning into a powerful business lever. This post is part of a series on Building Consumer Centric Marketing Strategy: 100 Primer on Consumer Centricity: https://lnkd.in/d_39M9Bg 101 Segmenting Your Consumer: https://lnkd.in/d6GBfQ5k 102 Identifying Cultural Trends: https://lnkd.in/gjz-xfPe 103 Building Channel Strategy: https://lnkd.in/deU9jC2y 104 Proposition VS Positioning: https://lnkd.in/dVSPM7JV #Strategy #Marketing #Advertising #ConsumerCentricity
-
In banking, “me too” is a growth killer. Most banks and #creditunions offer the same accounts, the same rates, the same experiences—just wrapped in different logos. But in a world of API-first #fintech, embedded finance, and AI-driven personalization, sameness isn’t safety—it’s irrelevance. Differentiation doesn’t mean reinventing the wheel. It means solving your customer’s pain points, not the industry’s clichés. From a digital perspective, it means you should be embedding value where others just offer transactions. And because banking is fundamentally a chore and utility for most of us, you better be designing experiences so intuitive, they feel invisible. I’ve often said that the best banking experience is one I don’t have to think about. The winners in banking won’t be the biggest or the flashiest. They’ll be the ones bold enough to build something that doesn’t blend in. Your future doesn’t demand coffee in the branch (or egg salad machines). Differentiate—or disappear. #Banking #DigitalStrategy #CX #leadership #EmbeddedFinance #ProductInnovation #FinancialServices #communitybanking
-
Here are 3 predictions for the future of UX. We talk enough about AI and the big tech trends of the 2020s, but here’s my big-picture perspective on how they’ll actually reshape UX.

🔮 Anticipating user needs before they exist
With tech like applied AI, you can ideate, test, prototype, code, and market a new app at record speed. When product moves this fast, it also means competition: look at any app store and there are about 100 different apps for any user need or niche. Everyone can build the same features in a couple of weeks. User expectations for product features and capabilities will also go up. To stand out in such a market, you’ll need to ship even faster to meet current AND future user needs. Products that win will live even more in the future and anticipate, based on users’ problems, the capabilities they WILL want, not just what they’re asking for today.

🧭 UX-ers will become strategic discovery guides
AI is already speeding up research workflows. It can transcribe, code data, and take notes faster than human assistants. Figma plugins are auto-generating UI components, and tools like Voiceflow are building conversational prototypes in minutes. Here’s our AI research assistant btw: https://bit.ly/4dAc3O0 So what’s left for the human researchers and designers? More strategy, high-impact questions, and facilitating org-wide discovery frameworks. With more cross-functional teams, UXRs will need to shift tactical, low-impact studies to non-researchers to run themselves. Research methodology may shift to more strategic questions that help teams anticipate user needs. Think product strategy and design frameworks at the decision-makers’ table, not just testing prototypes or running usability tests.

🛡️ Increasing wariness about big tech
Users are more wary of big tech in this era. Just look at the rise in privacy blockers, cybersecurity solutions, and activism around the impunity of big tech. Trust may become the new UX currency.
This could look like baking privacy and security considerations into every step of the design process. Transparency is crucial. We'll need to be designing clearer data dashboards, more intuitive permission flows, and ethical AI explainability features. UX interactions will focus heavily on building trust. Every UX decision will need to reinforce data ethics. We go into more detail on these predictions in our latest blog post on the future of UX: https://bit.ly/4dVxl8S Do you agree with my predictions? I want to know what you think! #futureofux