Mental Model Score: From user context to UX insight
As a UX designer and researcher, I’ve often felt our current toolbox is missing something crucial—methods that effectively bridge the gap between specific usability metrics and general satisfaction scores. Working with AI tools like Claude and ChatGPT, I developed the Mental Model Score (MMS) framework—a theoretical system that takes a contextual approach, evaluating applications from within the user’s context rather than focusing solely on application characteristics. This framework addresses how users’ mental models, shaped by what they think is happening rather than what actually is, create critical gaps in our understanding. This isn’t just another case study of AI assistance; it’s the story of how collaborative intelligence created something that fills a genuine methodological need.
The gap in our UX research toolbox
The more I’ve worked in UX research, the more I’ve noticed a peculiar gap in our methodological toolbox. On one hand, we have highly specific tools that measure discrete interactions (time on task, error rates, eye tracking). On the other, we have general satisfaction metrics (NPS, CSAT, SUS) that provide overall scores but little diagnostic insight.
What’s missing is a middle ground—a framework that captures users’ internal representations of a system while acknowledging the critical balance between gain and pain in their experiences. Is the pain worth the gain? This question, similar to the concept of brand equity in marketing (the associated values toward a labeled experience), often gets lost in our current methods.
The challenge is particularly acute because traditional UX evaluation methods like heuristic evaluations and SUS (System Usability Scale) primarily focus on the inherent characteristics of the application itself. They ask: “Does this application follow established design principles?” or “How usable is this system according to standardized criteria?” While valuable, these approaches often miss the critical contextual dimension of how users experience applications within their specific usage environments and in comparison to alternatives.
This insight led me to envision a new framework—one that would measure not just what users do or say, but how they internally represent systems and the balance between perceived value and effort, all within their unique usage contexts.
The contextual advantage: What makes MMS different
What sets the MMS framework apart from traditional UX evaluation methods is its contextual approach. Rather than evaluating an application in isolation against fixed standards, MMS considers the user’s entire ecosystem:
- Comparative evaluation: MMS enables comparison between different tools and substitutes by measuring mental model alignment from the user’s perspective, not just against abstract usability standards.
- Contextual understanding: The framework acknowledges that a user’s experience is shaped by their specific environment, previous tool experiences, and the alternatives available to them.
- User-centered rather than application-centered: While methods like heuristic evaluation focus on whether an application meets predefined criteria, MMS focuses on how well an application aligns with users’ contextual mental models.
- Ecosystem awareness: Traditional methods might rate an application highly on usability scales, yet miss that it fails to integrate with users’ broader tool ecosystem.
This contextual approach makes MMS particularly valuable for comparing different solutions within the same problem space and understanding why users might prefer a technically “inferior” product that better matches their mental models.
The collaborative creation process
With this challenge in mind, I turned to AI as a thought partner. Here’s how our collaboration unfolded:
- Concept exploration: I shared my observations about the gap in UX methodologies with Claude, explaining how users often judge experiences through a balance of gain versus pain within their specific contexts. The AI helped structure these intuitions into potential measurement frameworks.
- Framework refinement: Through iterative discussions with ChatGPT, we explored how to quantify users’ internal representations within their usage contexts, eventually settling on five key components:
  • Effort (E): The perceived cognitive and physical work users expend
  • Trust (T): Users’ confidence in the system’s reliability and intentions
  • Expectation Alignment (X): The gap between anticipated and actual behavior
  • Impact (I): How significantly misalignments affect the user’s gain/pain balance
  • Concern Factors (C): Specific anxiety points that weigh on the experience
- Formula development: Working with Claude’s logical reasoning capabilities, we created a mathematical representation: MMS = (w₁E + w₂T + w₃X + w₄I) – w₅C, essentially calculating whether the gains (positive factors) outweigh the pains (concerns); a code sketch of this calculation appears right after this list.
- Insight generation logic: Perhaps most valuable was the AI’s help in creating interpretive frameworks—understanding what different score patterns reveal about users’ internal representations within their usage contexts.
- Calculator implementation: Finally, ChatGPT helped develop the actual HTML/CSS/JavaScript code for a functional MMS Calculator, transforming the theoretical framework into a testable tool.
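To make the formula step concrete, here is a minimal JavaScript sketch of the scoring logic, in the spirit of the calculator described above. The function name, the equal default weights of 1, and the range check are my illustrative assumptions, not the calculator’s actual code:

```javascript
// Minimal sketch of the MMS scoring logic (illustrative; not the
// calculator's actual source). Weights default to 1; real studies
// would tune them per research context.
function mentalModelScore(ratings, weights = {}) {
  const { E, T, X, I, C } = ratings; // each rating on the 1-5 scale
  const w = { E: 1, T: 1, X: 1, I: 1, C: 1, ...weights }; // w1..w5

  // Guard against out-of-range inputs before scoring
  for (const [name, value] of Object.entries({ E, T, X, I, C })) {
    if (!Number.isFinite(value) || value < 1 || value > 5) {
      throw new RangeError(`${name} must be a number between 1 and 5`);
    }
  }

  // MMS = (w1*E + w2*T + w3*X + w4*I) - w5*C
  return w.E * E + w.T * T + w.X * X + w.I * I - w.C * C;
}
```

With equal weights, the score ranges from −1 (every positive factor at 1 and Concern at 5) to 19 (every positive factor at 5 and Concern at 1).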
Throughout this process, the AI wasn’t just a technical assistant but a conceptual collaborator, helping bridge the gap between vague observations and structured methodology.
The framework in theory
The Mental Model Score (MMS) framework aims to capture the balance between gain and pain in user experiences, based on how users internally represent systems within their specific contexts. Here’s how it works in theory:
After conducting user research sessions, researchers input findings into the MMS Calculator, assigning ratings from 1-5 for each component:
- User Effort: How much work users perceive in accomplishing tasks (1: High effort, 5: Low effort)
- Trust: Users’ confidence in the system (1: No trust, 5: High trust)
- Expectation Alignment: How well system behavior matches what users anticipate (1: Large mismatch, 5: Perfect match)
- Impact: How significantly any misalignment affects the gain/pain balance (1: Severe impact, 5: Minimal impact; like the other positive factors, higher is better)
- Concern Factors: Specific worries that weigh on the experience (1: No concern, 5: Severe concern)
Rather than judging a system against abstract standards, the MMS framework deliberately focuses on measuring how users perceive and internally represent their experiences within their specific contexts—recognizing that these contextual mental models, not objective application characteristics, drive user behavior and satisfaction.
The calculator produces an overall score that represents whether the perceived gains outweigh the perceived pains, along with detailed insights into where mental models align or diverge from system reality within the user’s specific context.
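As a hypothetical worked example (the ratings below are invented, and the function and equal weights come from the sketch earlier in this post):

```javascript
// Invented ratings from a hypothetical research session
const ratings = { E: 4, T: 3, X: 2, I: 4, C: 4 };

// (4 + 3 + 2 + 4) - 4 = 9, out of a possible 19 with equal weights
console.log(mentalModelScore(ratings)); // -> 9
```

A middling score like this, paired with the component pattern (low Expectation Alignment, high Concern), is exactly the kind of diagnostic the framework is meant to surface: the system works, but it behaves differently than users anticipate and leaves them uneasy.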
Beyond the numbers: The potential value
The MMS framework’s potential value lies in how it bridges the gap between specific and general UX metrics while enabling contextual comparison:
- Capturing the gain/pain balance: Like brand equity in marketing, MMS aims to quantify whether users feel the value gained is worth the effort invested.
- Enabling comparison across solutions: Unlike methods that evaluate applications in isolation, MMS allows for meaningful comparison between different tools and alternatives by measuring from the user’s contextual perspective.
- Respecting mental models: Rather than focusing solely on objective metrics, MMS acknowledges that users interact with systems as they believe them to be, not as they actually are.
- Avoiding rationalization traps: By focusing on internal representations rather than specific solution evaluations, MMS may help users express their true mental models rather than post-hoc rationalizations.
- Contextual insight: While traditional methods might tell you if an application is “good” according to established principles, MMS aims to tell you if it’s “right” for users in their specific contexts.
I’m looking forward to testing whether this framework actually delivers these potential benefits in real-world scenarios.
The future of human-AI collaboration in UX
Creating the MMS framework has shown me how AI can help bridge the gap between observation and methodology in UX research. While AI can’t replace human intuition about user psychology, it excels at:
- Pattern recognition: Identifying relationships between components that shape users’ internal representations
- Framework building: Translating vague observations into structured measurement systems
- Implementation: Rapidly converting theoretical constructs into testable tools
This collaboration exemplifies how humans can identify gaps in existing approaches while AI helps formalize solutions that might otherwise remain intuitive but unstructured.
Next steps
With the framework defined and the calculator built, I’m now preparing to test whether the MMS approach actually captures users’ internal representations better than existing methods. I plan to:
- Apply the framework to upcoming user research projects
- Compare MMS findings with traditional UX metrics to identify unique insights
- Assess whether the contextual approach provides valuable comparative insights between different tools
- Refine the framework based on how well it captures users’ genuine mental models within their specific contexts
I believe this approach has potential to address the methodological gap I’ve observed, but only real-world testing will determine whether it truly helps us understand how users internally represent systems within their unique usage contexts.
Disclaimer
The Mental Model Score (MMS) framework is currently a theoretical approach to evaluating users’ internal representations of systems and the balance between gain and pain in their experiences. It has not yet been tested in real-world scenarios and has not undergone formal validation. The framework and calculator are provided as experimental tools that will require significant testing and refinement.
All insights and recommendations generated by the calculator should be considered speculative until validated through practical application. The collaborative AI process described represents my personal experience in framework development.
[Embedded tool: Mental Model Score (MMS) Calculator. It takes 1–5 ratings for the positive factors (Effort, Trust, Expectation Alignment, Impact) and for the Concern Factors, then applies MMS = (w₁E + w₂T + w₃X + w₄I) – w₅C; higher scores indicate stronger mental model alignment and better user experience.]