Tech giants like Apple, Google, and Amazon are increasingly developing apps and services offering mental health treatment. The value of these products for users is dubious — but they do promise the companies lucrative new sources of highly personal data.
Waxing philosophical in a 2019 interview, Apple CEO Tim Cook raised the question of what “Apple’s greatest contribution to mankind” will ultimately be. He answered unequivocally that this contribution “will be about health.”
Cook’s promise has since manifested itself in several innovative Apple products purporting to “democratize” health care and empower individuals “to manage their health.” Recent years have also seen a number of other attempts to disrupt the health care market by Big Tech giants Amazon, Meta, and Alphabet. Most recently, it was even announced that the notorious surveillance company Palantir won a £330 million contract to create a new data platform for the British National Health Service (NHS).
COVID-19 accelerated this trend, as the pandemic left in its wake various subsidiaries, research networks, internet health services, clinics, and other ventures attempting to “redesign the future of health” (in the words of Alphabet subsidiary Verily) with smartwatches and other digital tools. Yet forays into health care by the largest technology companies in the Western hemisphere are no longer centered solely on the body. Not content with mapping lungs and limbs, their newest target is the mind.
The timing of Big Tech’s latest turn toward psychological wellbeing as part of its project to “map human health” is hardly coincidental. Headlines concerning a nationwide “mental health crisis” have recently dominated the news: suicide rates are currently at an all-time high in the United States, and as Bernie Sanders has highlighted, nearly one in three US teenagers reported on a recent Centers for Disease Control and Prevention (CDC) survey that they suffered from poor mental health.
Tech conglomerates are all too happy to build PR campaigns around these alarming facts by talking about their efforts to combat these trends or even “solve the mental health crisis” altogether. In this way, Big Tech appears to be following a tried-and-true maxim: never let a good crisis go to waste.
Apple: Determining Your Depression Level
Apple’s initial efforts to enter the health market gained momentum after the company refined its signature wearable device around 2019, turning it from an accessory for geeky, eccentric self-trackers into a chic symbol of wellness. Apple has since been busy collaborating with several research institutions, and it has launched a wide range of health studies dedicated to showing that its smartwatch is not merely a wearable fitness trainer but a “lifesaver” capable of detecting atrial fibrillation or even the onset of COVID-19.
Given its mission to offer users a “complete picture” of their overall health, Apple’s recent announcement that it will add mental health tracking to the Apple Watch is a logical next step. The new State of Mind feature of Apple’s Mindfulness app asks users to rate how they are feeling on a scale of Very Pleasant to Very Unpleasant, to indicate factors affecting their emotional states such as family and work stress, and to describe their outlook with adjectives such as Grateful and Worried. The hope, apparently, is that an entry a day will keep the shrink away.
The Mindfulness app uses this data to determine an individual’s risk of depression. Conveniently, a recent “digital mental health” study performed by UCLA researchers (and sponsored by Apple) found that 80 percent of participants who used the app on the Apple Watch reported increased emotional awareness, while 50 percent claimed it had a positive impact on their overall wellbeing — information that the company is now advertising on its website.
In the coming months, Apple will likely roll out even more mental health software. According to recent reports, it is now working on an artificial intelligence (AI)–powered health coach named Quartz, an app that will supposedly be able not only to monitor users’ emotions but also to give them medical advice.
To be sure, there is a growing mental health crisis in the United States and elsewhere, and there is an urgent need for direct, cost-effective treatment. Between 2007 and 2020, the number of emergency room visits due to mental health issues almost doubled in the United States, with younger generations particularly affected.
Yet while “smart” tools might modestly benefit some patients, the use of wearables can also exacerbate stress and anxiety, as other recent studies have shown. Moreover, the focus on short-term tech solutions risks distracting from the underlying social and political causes of psychological distress, such as workplace exploitation, financial instability, growing atomization, and limited access to quality health care, food, and housing.
It also pushes the main responsibility for dealing with mental health disorders onto individuals, in typical neoliberal fashion. As Apple’s vice president of health, Sumbul Desai, recently claimed, her company’s goal “is to empower people to take charge of their own health journey.”
Meta: Working With the NHS to Mine Your Mental Health Data
Apple is not the only Big Tech company that has taken an interest in the mental health of its consumers. And while the Cupertino behemoth at least pays lip service to data privacy, many of the others don’t even bother.
In the spring of 2023, news broke that the NHS had been sharing intimate details about patient health with Facebook. For years, the NHS had been feeding information from its website users, including search inquiries on self-harm and counseling appointments, to the social network and its parent company, Meta, through a data-harvesting tool named Meta Pixel.
In one example, Alder Hey Children’s Hospital in Liverpool gave Facebook and Meta the data of users who had visited its webpages on sexual development problems, eating disorders, and crisis mental health services, and shared information about their drug prescriptions. In another, the Tavistock and Portman mental health clinic in London provided the tech giant with the data of visitors to its webpage on gender identity development, which is specifically designed as an educational resource for children and teenagers.
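In simplified terms, a tracking pixel of this kind is just a small script embedded in the host site’s pages that reports each visit back to the tracker’s servers. The sketch below is a generic, hypothetical illustration of the mechanism (the endpoint, identifier, and function names are stand-ins, not Meta Pixel’s actual code); the crucial point is that the full page address, search terms and all, travels with the request.

```typescript
// Minimal sketch of how a third-party tracking pixel leaks page context.
// All names here (TRACKER_ENDPOINT, pixelId) are illustrative stand-ins,
// not Meta's actual API.

const TRACKER_ENDPOINT = "https://tracker.example.com/collect"; // stand-in for the vendor's endpoint
const pixelId = "123456789"; // site-specific ID the host site embeds in its pages

function trackPageView(): void {
  // Pixel scripts typically report the full URL of the page they run on.
  // On a health site, that URL can itself be sensitive, e.g.
  // "https://nhs-trust.example/search?q=self-harm"
  const payload = new URLSearchParams({
    id: pixelId,
    url: window.location.href,   // includes query strings such as search terms
    referrer: document.referrer, // the page the visitor came from
    title: document.title,       // e.g. "Crisis mental health services"
  });

  // A 1x1 image request carries the payload out as query parameters;
  // to the visitor it is indistinguishable from an ordinary image load.
  new Image().src = `${TRACKER_ENDPOINT}?${payload.toString()}`;
}

trackPageView();
```

Because the request looks like any other image load, nothing in the browser signals to a patient that details of their visit have just left the hospital’s website.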
While privacy experts such as Carissa Véliz advise health care professionals and institutions to “collect the bare minimum information that is needed to treat [patients] — nothing more,” the NHS/Facebook data breach reflects the opposite trend: not data minimization, as Véliz recommends, but data maximization, justified by the idea that more data extraction is automatically the answer to deep, socially rooted problems. In this case, the personal data was obtained without the consent or even awareness of patients in order to target them with advertisements — the core of Meta’s business model.
The scandal was merely the latest in a long line of recent PR disasters for the company, coming on the heels of the fiasco surrounding its Metaverse launch (not coincidentally, Mark Zuckerberg’s immersive future of the internet has itself been hailed as a “promising solution for mental health”). And the case was no isolated incident: in March 2023, it was revealed that the telehealth startup Cerebral shared private health data including information on mental health not only with Meta but also with Google, among others.
Alphabet: Fitbit as a Mental Coach
Google’s parent company, Alphabet, another notorious data miner, has also entered the wearables market, and, since completing its purchase of the smartwatch manufacturer Fitbit in 2021, has joined Apple in preaching the gospel of the mental health benefits of wearables.
On the heels of a study conducted by Alphabet’s life sciences research subsidiary Verily on whether smartphones can be used to detect symptoms of depression, Fitbit recently introduced a redesigned smartphone app “designed to give you a holistic view of your health and wellness with a focus on metrics that matter most to you.” Similar to Apple’s Mindfulness app, this redesign contains a feature called Log Mood that allows users to enter their emotional states.
A team at Washington University in St. Louis has used Fitbit data and an AI model to lend credence to the “feasibility and promise of using wearables to detect mental disorders in a large and diverse community.” According to Chenyang Lu, professor at the McKelvey School of Engineering and one of the study’s authors, this research has real-world relevance given that “going to a psychiatrist and filling out questionnaires is time-consuming, and then people may have some reticence to see a psychiatrist.” In other words, AI can be a low-cost, low-friction tool for managing one’s mental health.
Far from proving that wearables can diagnose depression, the study revealed several potential correlations between an inclination toward depression and wearable-based biomarkers. But this did not stop Lu from enthusing that “this AI model [developed in the study] is able to tell you that you have depression or anxiety disorders. Think of the AI model as an automated screening tool.”
This exaggeration of the empirical evidence perpetuates the dubious notion that mental health problems can be solved through technological fixes. Of course, it is also tremendously beneficial to Alphabet’s corporate interests.
But Fitbit is not the company’s only intervention in the mental health space. In addition to the suicide prevention information that Google Search has displayed above mental health–related search results for several years, the company recently announced that users who search for suicide-related terms will see a prompt with prewritten conversation starters they can send via text message to the 988 Suicide & Crisis Lifeline.
Though a tool like this may be very useful in emergencies, there’s a real concern that Google will instrumentalize the sensitive data gathered here, sharing it with advertisers so that it can be exploited and monetized along with the other data it collects. It bears mentioning that Google’s new suicide prevention measures were revealed only weeks after the suicides of three company employees gave rise to speculation about the mental health of its own workforce. Against this background, the new features might be read as a PR stunt to distract from urgent issues within the company itself.
Amazon: Signing Away Your HIPAA Rights to Amazon Clinic
Amazon is also now busy promoting itself as a mental health care provider and advocate. Though Jeff Bezos seems to be primarily occupied with dreams of space entrepreneurship and lunar industries, he hasn’t forgotten to roll out some mental health “solutions” here on Earth.
As early as 2018, Bezos announced his intention to solve America’s health care crisis by democratizing access to medical services. He bought the online pharmacy PillPack and later developed Amazon Pharmacy.
In 2019, he launched Amazon Care, an online platform offering Amazon employees comprehensive around-the-clock medical care via messaging and video chat. This initiative involved a collaboration with Ginger, an internet- and app-based psychotherapy service that bills itself as “mental healthcare for every moment” and a “complete solution to mental healthcare.”
In 2022, Amazon shuttered Amazon Care and established Amazon Clinic, a virtual health care platform with grander ambitions than its predecessor: plans have already been announced to expand the new platform to all fifty states and the District of Columbia. Unlike Amazon Care, Amazon Clinic is open to the general public. To use it, however, patients must consent to the “use and disclosure of protected health information” — waiving their rights to existing federal privacy protections under the Health Insurance Portability and Accountability Act (HIPAA) — and effectively grant the tech giant access to their most intimate information. (Whether this is legal is now being examined by the FTC.)
In February of this year, Amazon further expanded its health care portfolio by acquiring One Medical, a company offering in-person, online, and app-based primary care in over twenty US cities and metropolitan regions. One of its subprograms, Mindset by One Medical, focuses specifically on mental health, offering patients virtual help with conditions such as stress, anxiety, depression, ADHD, and insomnia through online group sessions and one-on-one coaching.
In addition to its latest moves with Amazon Clinic and One Medical, Amazon has recently broadened its employee health care offerings by partnering with Maven Clinic, the world’s largest virtual clinic for women’s and family health. With plans to expand to fifty countries beyond the United States and Canada, the partnership will grant Amazon lucrative access to some of the most intimate and vulnerable data sets imaginable.
The general dangers of such data being hoarded in commercial hands that, under certain circumstances, will happily pass it on to national or local state authorities are clear: look, for instance, at the teenage girl from Nebraska who was convicted in summer 2023 of violating her state’s abortion law after Facebook and Google provided police with her private messages and browsing data.
The Colonization of Mental Health Data
Amazon, Meta, Apple, and Alphabet’s breakneck attempts to gain a foothold in mental health go beyond mere disruption. The sheer scale of this transformation should be understood within the framework of the greatest drive to annex previously untapped resources in history: colonialism.
Under the guise of alleviating people’s mental health struggles, these corporations are carrying out a fundamental form of asset appropriation. After all, until recently, the very idea that our mental health (or rather, all the data that represents and tracks it) could be a commercial asset on a balance sheet would have seemed bizarre. But today it is becoming banal. It is one facet of what Nick Couldry and Ulises Mejias have called “data colonialism.”
All four corporations are part of a larger commercial sector focused on exploiting new definitions of knowledge and rationality aimed at data extraction. Through the habitual grabbing of sensitive data and the capture of many other social domains (health, education, and law, to name a few), we are heading toward “the capitalization of life without limit,” as Couldry and Mejias describe it.
The normalization of wearables as tools with which individuals ostensibly manage their own health (both psychic and physical) is part of this process, converting daily life into a data stream that can be appropriated for profit. Apple’s Mindfulness app and Fitbit’s Log Mood are just two examples of how Big Tech, having colonized the territory of the body, has now set its sights on the psyche.
Data colonialism, like earlier stages of colonialism, disproportionately affects those who are already marginalized. For one thing, the technologies it involves are sometimes biased against marginalized groups, as was highlighted by a recent lawsuit against Apple over the alleged “racial bias” of the Apple Watch’s blood oxygen reader.
But in addition, the idea that mental and physical health are primarily a matter of individual responsibility and tech-assisted personal management ignores the fact that health problems are often driven by systemic issues, such as exploitative and unhealthy working conditions or a lack of the time and financial resources needed to live healthily, all shaped by long-term inequalities. Data colonialism obfuscates these factors in favor of profiteering, at a time when a discussion of the socioeconomic roots of the mental health crisis is needed more than ever.
It is ironic that, just as this structural change in the management of our bodies and minds is underway, a narrowly deterministic, asocial, and individualistic account of how mental health can be managed is being pushed by the leading data extractors. Indeed, it is more than ironic: it is perhaps the perfect alibi, diverting our attention from the institutionally driven data grab now taking place.