Why behavior change—not technology—is the hardest problem in health tech
Despite billions spent on digital health innovation, real-world impact remains stubbornly flat. The culprit isn't data pipelines or AI models. It's the human brain.
Health technology continues to underdeliver not because of technical constraints, but because behavior change remains the least understood, least funded, and least rigorously designed component of digital health products. For every behavioral scientist, there are dozens of engineers, and in many organizations, none at all. As a result, the industry consistently ships technologically impressive solutions that people quickly abandon in the context of real life.
Introduction: The Behavioral Feasibility Gap
Despite rapid advances in sensors, algorithms, and data infrastructure, digital health has not produced commensurate gains in outcomes or sustained engagement. We can sequence genomes in hours, detect disease through machine learning, and deliver telemedicine to remote populations, but most health apps are abandoned within weeks of download. Wearables collect millions of data points that users stop checking, and chronic disease platforms present sophisticated dashboards that patients ignore.
This gap between what technology can do and what people actually will do in daily life is the behavioral feasibility gap. Health tech companies pour resources into engineering and data infrastructure while treating behavior change as a content layer, a marketing problem, or a set of nudges to be added near launch. The result is predictable: products that work in controlled settings and fail miserably in the messy reality of competing priorities, fluctuating motivation, and context-specific barriers.
Health Tech Optimizes on What It Can Build, Not What People Can Sustain
Most digital health products are evaluated on whether they can be built and integrated, not whether people will keep using them in real life. Product roadmaps prioritize integration with electronic health records, machine learning pipelines, and real-time analytics, while core behavioral questions ("Can users fit this into their routines? What will they reliably do on their worst day, not their best?") arrive late or not at all.
Research consistently demonstrates this gap. A 2019 study in the Journal of Medical Internet Research found that while 85% of digital health startups had technical validation, fewer than 15% had conducted rigorous behavioral pilots before scaling (Baumel et al., 2019). The consequence is predictable: products that work perfectly in controlled environments fail in the messy context of daily life, where competing priorities, variable motivation, and environmental triggers determine actual use.
Technical metrics amplify this bias. Engineering teams are rewarded for uptime, latency, and data accuracy, while behavioral feasibility is reduced to crude proxies like app opens or session length. A glucose monitoring app may deliver flawless readings, but if checking it requires five taps and opens to a confusing dashboard, behavioral feasibility fails regardless of technical performance. When teams do not explicitly design for "what people can sustainably do," they default to "what the system can technically support," and the behavioral feasibility gap widens with every release. Until that changes, health tech will continue to build technically impressive products that users abandon.
For leaders, the implication is stark: if behavioral feasibility is not a gate in your investment and launch decisions, you are systematically overestimating the value of your technology.
Behavior Change Is Architecture, Not Content
Most health products still treat behavior change as an information and communication problem: if we can educate users, send reminders, and present risks clearly enough, people will change. This belief drives products heavy on educational content, motivational messaging, push notifications, and gamified badges, but light on structural behavior design.
True behavior change design is architectural. It focuses on friction, defaults, social proof, commitment devices, and habit formation at the level of system structure, not just user messaging. For example, there is a profound difference between:
A medication app that sends daily reminders and tracks adherence; versus
A service that redesigns the refill process, automates coordination with pharmacies, builds in advance refills, and uses small social accountability groups to normalize adherence.
The first treats behavior change as communication and tracking. The second treats it as systems design: it restructures the environment so that the desired behavior is easier, safer, and more automatic than the alternatives (Fogg, 2009).
Behavioral science research demonstrates that sustainable change requires environmental restructuring, not just individual motivation, willpower, or information (Marteau et al., 2011). Yet most digital health products place the entire burden of change on individual willpower while leaving the behavioral environment unchanged. Reminders and educational content are not architecture. Gamification badges may support engagement, but they cannot substitute for a well-designed behavioral support system. What is needed is fundamental design that reduces friction, establishes new defaults, and makes desired behaviors the path of least resistance.
For executives, this distinction matters because architecture-level changes are harder to copy than content. A behaviorally engineered refill system, for example, becomes a durable competitive advantage in a way that another library of wellness articles or push notifications never will.
Why Incentives and AI Underperform on Weak Behavioral Foundations
In response to engagement challenges, health tech has embraced financial incentives, personalization engines, and AI-based coaching. These tools promise to nudge behavior more effectively by leveraging behavioral economics and real-time adaptation, yet they consistently underperform expectations because they are layered atop fragile behavioral foundations, and their impact has proven more modest and short-lived than promised.
Financial incentives illustrate this problem clearly. While monetary rewards can effectively initiate behavior change, they often fail to produce lasting effects once removed (Loewenstein et al., 2016). The reason is structural: incentives are bolted onto existing behavioral patterns rather than integrated into a comprehensive model of habit formation, identity change, and intrinsic motivation development.
The same pattern appears with AI-driven personalization. Many systems optimize for engagement metrics (e.g., click-through rates, session length, short-term retention) without explicitly tying those metrics to validated behavioral outcomes such as sustained medication adherence, regular physical activity, or long-term symptom improvement. An AI coach can keep users "engaged" in conversations that feel supportive, but if the underlying product still demands unsustainable effort or attention, engagement will erode over time.
Technology scales whatever logic you give it. When that logic ignores behavioral reality, AI and incentives simply accelerate failure. The fundamental mistake is using advanced mechanisms as substitutes for behavioral foundations rather than as amplifiers of sound behavior architectures. No level of algorithmic sophistication can compensate for:
Products that ignore basic habit formation principles.
Workflows that require constant vigilance from already overloaded patients or clinicians.
Designs that depend on initial motivation staying high instead of planning for its inevitable decline.
To unlock the full value of incentives and AI, leaders must first insist on robust, explicit behavioral models — ones that define target behaviors, sequences, triggers, maintenance strategies, and likely relapse points — before layering advanced tools on top. Advanced tools are powerful, but only when applied to behaviorally sound architectures.
Underfunded and Under-Measured: Why Outcomes Lag
Behind these product choices lies a deeper resource allocation problem. Most digital health organizations invest heavily in data science, infrastructure, and clinical validation while allocating comparatively little to behavioral research, experimentation, and long-term measurement. A 2020 analysis of digital health companies found that while 73% employed data scientists, only 12% had dedicated behavioral researchers, and fewer than 5% conducted longitudinal behavioral studies extending beyond six months (Torous et al., 2020).
Engagement analyses typically span days or weeks, while the behaviors that matter (sustained physical activity, weight loss, medication adherence, relapse prevention) unfold over months and years. The result is products optimized for short-term engagement rather than long-term behavior change, a fundamental misalignment with health outcome timelines.
The metrics that dominate dashboards mirror this bias. Success is often defined by:
App download numbers.
Short-term activity (log-ins, sessions, time-in-app).
Self-reported satisfaction.
These indicators are easy to collect and optimize but weak proxies for durable change in health behaviors or clinical endpoints. Without longitudinal measurement infrastructure, organizations cannot reliably distinguish between products that generate short-lived enthusiasm and those that build lasting habits.
Behavior change requires patient, rigorous measurement across extended timeframes. Sustainable weight loss, medication adherence, smoking cessation, and disease management all operate on timescales of months and years, not days and weeks. For executives accountable for outcomes and financial performance, under-investing in behavioral science and long-term measurement carries three risks:
Overstating ROI based on transient engagement.
Missing early warning signs that behaviors are not sticking.
Failing to build the evidence base regulators, payers, and large health systems increasingly demand.
Rebalancing this equation requires budgeting for dedicated behavioral researchers, longitudinal studies, and outcomes-linked metrics, not just engineering and AI.
The Hardest Problem to Solve: Human Behavior and Business Model Restraint
Designing for sustained behavior change runs directly against some of the strongest incentives in tech: speed, scale, and feature velocity. This represents a profound challenge for an industry built on rapid iteration, growth metrics, and continuous feature expansion. Behavior change, by contrast, demands restraint, depth, and trust.
Consider the tension between scale and depth. Behavior change is inherently high-touch, context-dependent, and relationship-based (Michie et al., 2013). It requires understanding individual barriers, establishing trust, providing consistent support through setbacks, and adapting approaches based on lived experience. These qualities resist rapid, horizontal scaling. However, health tech companies are under constant pressure to expand their user base quickly and to show rapid growth in metrics that investors recognize. Scaling under that pressure is often premature: behavioral models are not yet validated, and support systems are not yet adequate.
Similarly, feature velocity conflicts with behavioral sequencing. Sustainable behavior change often requires focusing on a few targeted actions, building mastery, and then layering in additional behaviors and complexity over time. By contrast, product roadmaps prioritize adding features to demonstrate progress and differentiate from competitors. The result is overwhelm: apps that try to address nutrition, exercise, sleep, stress, and medication adherence simultaneously, ignoring research showing that behavior change is most effective when sequenced deliberately rather than attempted all at once.
Ultimately, the hardest problems in health tech are not about adding more functionality but about deciding what not to build yet. Leaders must be willing to:
Delay scaling until behavioral models are validated in smaller, higher-touch deployments.
Sacrifice some short-term engagement metrics in favor of depth and reliability in a narrower set of behaviors.
Align business models with the timescales of meaningful health outcomes, which often extend beyond the typical product or fundraising cycle.
What Leaders Need to Do Differently
If behavior change is the core bottleneck in digital health, then it must be treated as infrastructure, not as a feature. For executives, this implies several shifts in strategy and governance:
Elevate behavior science to a core capability. Treat behavioral researchers and designers as peers to clinicians, data scientists, and engineers, with meaningful influence on product strategy and investment decisions.
Make behavioral feasibility a gate, not a nice-to-have. Require evidence that target users can and will sustainably perform the behaviors your product demands before scaling or launching expensive new features.
Redesign metrics around sustained outcomes. Complement traditional engagement metrics with validated behavioral and clinical outcomes over appropriate time horizons, and hold teams accountable for those.
Sequence for habits, not headlines. Focus on a small number of pivotal behaviors and build robust, low-friction architectures around them before expanding into adjacent domains.
Align the business model with behavior timelines. Structure contracts, pricing, and success guarantees around long-term outcomes to create the financial room for deeper, slower behavior work.
The next competitive advantage in health technology will not come from marginal improvements in algorithms or another layer of data. It will come from organizations willing to design for the full behavioral reality of human lives—including inconsistency, fatigue, relapse, and competing priorities. That requires treating behavior change as infrastructure rather than as a feature: funding it properly, measuring it longitudinally, sequencing behaviors deliberately, and aligning business models with the slow, nonlinear timelines of real health outcomes. The hardest problems in health tech are no longer waiting on better technology. They are waiting on leaders to treat behavior change as the foundation on which everything else depends.
References
Baumel, A., Muench, F., Edan, S., & Kane, J. M. (2019). Objective user engagement with mental health apps: Systematic search and panel-based usage analysis. Journal of Medical Internet Research, 21(9), e14567. https://doi.org/10.2196/14567
Fogg, B. J. (2009). A behavior model for persuasive design. Proceedings of the 4th International Conference on Persuasive Technology, Article 40. https://doi.org/10.1145/1541948.1541999
Loewenstein, G., Asch, D. A., & Volpp, K. G. (2016). Behavioral economics holds potential to deliver better results for patients, insurers, and employers. Health Affairs, 32(7), 1244-1250. https://doi.org/10.1377/hlthaff.2012.1163
Marteau, T. M., Ogilvie, D., Roland, M., Suhrcke, M., & Kelly, M. P. (2011). Judging nudging: Can nudging improve population health? BMJ, 342, d228. https://doi.org/10.1136/bmj.d228
Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., Eccles, M. P., Cane, J., & Wood, C. E. (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: Building an international consensus for the reporting of behavior change interventions. Annals of Behavioral Medicine, 46(1), 81-95. https://doi.org/10.1007/s12160-013-9486-6
Torous, J., Nicholas, J., Larsen, M. E., Firth, J., & Christensen, H. (2020). Clinical review of user engagement with mental health smartphone apps: Evidence, theory and improvements. Evidence-Based Mental Health, 21(3), 116-119. https://doi.org/10.1136/eb-2018-102891