Health apps may mean well, but technology has a way of feeding our worst instincts, says Natalie Kane
With progress comes technology, and with technology comes enhancement. With enhancement comes ways for us to be ‘fitter, happier, more productive’, to use a tired phrase (sorry, Radiohead). But more often than not we are given what I call ‘means well technology’: something that in principle has honourable and worthy intentions, yet clashes with the society and culture we live in.
The colour-changing condom, invented in 2015 by a group of teenage boys for the TeenTech competition, is a good example of this. The condom changes colour to indicate the presence of a sexually transmitted disease and to encourage the user to seek help. But we all know that’s not how it would play out in practice. Stigma and shame already ruin our sex lives and sexual health, so why introduce another way for us to feel them? With the rise of technology-for-good and technology-for-wellness, flaws like this are rife.
The problem with surveillance – and AI
The Samaritans, the fantastic outreach and support service for those who need someone to talk to, tried its hand at technological wellness in 2014 with the Radar app. The tool was a Twitter plugin that monitored the tweets of people you follow for keywords and phrases ‘that indicated someone might be struggling to cope’. An email was then sent to the monitoring user with suggestions on how to reach out and, in effect, help that person get help. Radar caused uproar in the mental health community: however benevolent the intention, this kind of surveillance raises obvious privacy and wellbeing concerns. If your friends can be notified when you’re at your lowest, then so can those who seek to exploit your distress.
Then there’s the problem of how an AI understands language. Algorithms can only gain so much context from the things we say, and are notoriously bad at classifying sarcasm, a human habit that Rana el Kaliouby, co-founder of the machine sentiment analysis company Affectiva, calls the ‘holy grail’ for an AI system. Misunderstandings arise when the algorithm doesn’t know we’re joking. For these reasons, among others, Radar was pulled just nine days after its launch.
Disregarding the backlash against Samaritans Radar, Facebook decided to roll out a similar service in 2017 because, in true Facebook fashion, it thought it could do better. Without consent, the platform’s mental health tool scans your newsfeed and messages for signs of distress, rates them on a scale of 0 to 1 (at intervals of 0.1) for risk of imminent harm, and prompts you to contact suicide prevention services. Like Radar, it also flags signs of concern in your friends’ feeds and encourages you to act. Combined with Facebook’s decision not to target and remove abuse, it feels like a cruel sticking plaster rather than a genuine attempt at help. Facebook is creating new health information about its users ‘but isn’t held to the same privacy standards as healthcare providers’, as Benjamin Goggin reports for Business Insider.
The growth of the wellness industry
This is all in the name of promoting ‘wellness’, a relatively new fad in technology that companies are scrambling to get in on. Wellness feels distinctly neoliberal in its approach: if you buy this app, this smart water, this online transcendental meditation subscription, then you’ll be less depressed about working 40-plus hours a week and less anxious about money and relationships. It can perhaps be thought of as a way to make us more efficient at being useful to capitalism, rather than a way of attending to personal wellbeing. Companies sponsor wellness weeks with free massages and yoga classes, but are unwilling to address the faults in their working culture or labour practices.
There are apps for meditation, such as the wildly popular Headspace, co-founded by former monk Andy Puddicombe (emphasis on the ‘former’, as I’m not sure Buddhism aligns with in-app purchases), as well as for everything from sleep to period tracking to forming good habits. The wellness tech phenomenon preys on an age-old idea of what we think technology is supposed to do for us: solve problems. It’s no surprise that, more than half a century later, many of us think that Eliza, the world’s first ‘therapy’ chatbot, was created to help us, rather than as a demonstration by its creator, Joseph Weizenbaum at MIT, of just how much chatbots don’t understand. Technology is not magic, and since there is no magic cure for illness or exhaustion, we should be wary of looking to the latest app for solutions to our health problems. These apps, when introduced into the world, suddenly clash with our faulty human methodologies and thought processes – why, if we’re completing our daily meditations, or eating the right amount of protein, or running every day, are we still unwell?
Manipulating behaviour through design
The interaction and user interface design of these apps comes out of a whole history of compulsive and addictive design, and here the consequences can be far more dangerous. Eating disorder charity Beat has criticised food and nutrition-tracking apps like MyFitnessPal and Lose It!, arguing they could ‘exacerbate unhealthy behaviours and make recovery harder’. The illusion of control these apps give us is deeply seductive. Combined with the messiness of our human anxieties and our varied reactions to positive and negative reinforcement – something these apps were not designed to withstand – it means that when we fail to get ‘better’, we blame ourselves.
Wellness technology, like a lot of health-tracking apps, does not account for the dynamism of the human self. Those 10,000 steps you’ve been convinced to complete every day by your Fitbit? The target is largely bad science, traceable to 1965 Japan, when the Yamasa Clock and Instrument Company launched a pedometer called the Manpo-kei – literally, the ‘10,000-step meter’ – and the figure stuck. In 2018, Public Health England’s Mike Brannan denounced the goal, arguing that ‘there’s no health guidance that exists to back it’. But none of this matters when you get that rush from hitting your targets – and when you miss them, you blame yourself rather than the realities of daily life (sometimes we don’t want to get up at 6am to do 20 burpees, and that’s OK).
Making human connections using technology
This year I was diagnosed with a vestibular disorder which, several times a month, leaves me feeling like I’m trying to balance on an incredibly choppy sea. It changes day by day, week by week, and is exhausting when I let my brain work hard to keep me upright rather than relying on the walking stick I’ve recently invested in. Technology won’t easily fix this – there are no apps for fluctuating vertigo, aphasia or brain fog – but there are useful ways of recording it, suggested to me by active, generous online patient communities.
I use the Symple symptom tracker to try to make sense of what’s happening to me, and it’s something I can share easily with my neurologist, helping me to communicate better with them. For support, nothing has been more helpful than finding others with similar diagnoses on social networks and in forums. VeDA (the Vestibular Disorders Association) has an Instagram account which, if nothing else, reminds me that there are people out there I can reach out to.
Facebook support groups raise privacy concerns of their own, given the sensitive, often emotional data shared in search of connection with others. We often choose to trade our privacy (and, in this case, our health data) for the sake of community. For now, that might not change, but I’d rather that than try to layer on a perfect concoction of figures and paid-for goals set via an app.
The original version of this article appeared in Icon 197, all about digital technology, design and society