Research conducted by Oversecured, a mobile application security company, has identified several vulnerabilities in some of the most popular mental health apps (those exceeding ten million downloads) available on Google Play. The researchers’ analysis focused primarily on applications for the management and treatment of depression, which are used in clinical studies funded by major research institutions and are also adopted by large employers, insurers, and government health agencies.
This issue deserves careful attention. The use of these applications is extremely widespread among users worldwide, who increasingly rely on such tools as a first form of therapeutic support. This trend is clearly reflected in the data reported by the researchers themselves: by 2025, the mental health app market had already reached a value of approximately ten to twelve billion dollars, with annual growth rates of 18–20%. Moreover, according to the World Health Organization, one in eight people globally (approximately one billion individuals) suffers from mental health disorders. It is therefore evident that these apps can reach a very large user base.
These applications collect a significant amount of users’ health data, including, for example: clinical test results, communications with doctors or therapists, data relating to mental and physical health, habits, traumas, and addictions.
As a general rule, Regulation (EU) 2016/679 (GDPR) prohibits the processing of special categories of personal data, including data concerning health, unless one of the conditions laid down in Article 9(2) applies. For apps of this kind, the applicable condition is typically the explicit consent of the data subject using the application (Article 9(2)(a)).
However, in most cases, users do not fully perceive the potential risks they may face if the tool does not actually guarantee the security of their information. Furthermore, data subjects are often unaware of the economic value that their health data may have on the dark web. For these reasons, users tend to share their information rather lightly and provide consent for the related processing, without considering the serious consequences that may arise if such data are accessed by unauthorized third parties.
Unfortunately, the consequences may be extremely serious, as reflected in the Mozilla Foundation’s “Privacy Not Included” project, referenced by the researchers themselves, which described the category of health apps as a “privacy nightmare”.
Indeed, the analysis conducted by Oversecured reveals a potential new frontier for cyber-espionage: health apps may inadvertently open a backdoor allowing unauthorized third parties to access a vast amount of sensitive data belonging to a very large number of users, who may be unknowingly profiled and monitored.
Security vulnerabilities create fertile ground for cyberattacks aimed at exfiltrating data and subsequently carrying out fraud or targeted phishing campaigns.
Numerous attacks of this kind have occurred. For instance, in 2020, a group of hackers unlawfully obtained the therapy session notes of 33,000 patients from a Finnish psychotherapy clinic and demanded a ransom in exchange for keeping such information confidential, contacting the victims directly. Some of those affected even took their own lives.
How are such attacks possible? The researchers explain that an attack typically begins when a user downloads an application that appears harmless and is often free of charge (for example, a calculator app). That application, however, contains hidden malicious code capable of intercepting information held by other apps (including health-related applications) without the user being able to detect it in any way.
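The interception mechanism described by the researchers can be illustrated with a deliberately simplified sketch. The Python script below is a hypothetical model, not code from any real app: it shows how code hidden in a seemingly innocuous application could silently scan a shared storage location for files left there unprotected by other apps (the directory name and file-name fragments are invented for illustration).

```python
import os

# Hypothetical file-name fragments a malicious app might look for in
# world-readable shared storage (names invented for illustration).
SENSITIVE_HINTS = ("therapy", "mood_log", "session_notes", "diagnosis")

def find_exposed_files(shared_dir: str) -> list[str]:
    """Walk a shared directory and collect files whose names suggest
    they contain health data left there unencrypted by another app."""
    hits = []
    for root, _dirs, files in os.walk(shared_dir):
        for name in files:
            if any(hint in name.lower() for hint in SENSITIVE_HINTS):
                hits.append(os.path.join(root, name))
    return hits

if __name__ == "__main__":
    # On Android this would be external storage; here any directory works.
    print(find_exposed_files("/tmp/shared_demo"))
```

The corresponding defense is equally simple in principle: health apps should keep such data in app-private, encrypted storage rather than in locations readable by other processes.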
This is only one example illustrating the concrete risks associated with the unlawful dissemination of the data contained in such applications. It is sufficient to consider that, on the dark web, medical records can be sold for thousands of dollars, fueling a lucrative illegal market.
Indeed, exfiltrated data are illegally resold to the highest bidder because of their significant intrinsic economic value. Health information, including mental health data, represents an extremely valuable asset and may become a source of discrimination, particularly in the employment, banking, and insurance sectors. In such contexts, these data may influence decisions regarding access to services (including essential services) and may lead to discriminatory outcomes.
With the development of technology and the increasing capabilities of AI, attacks exploiting such vulnerabilities are becoming progressively more sophisticated. Nevertheless, users can exercise caution and assess the reliability of the applications they intend to use. By way of example, the following elements should be taken into consideration:
- the presence of clear information regarding the processing of personal data (purposes, legal bases, categories of data processed), the identity of the controller, and the channels through which the controller may be contacted to obtain information and exercise data subject rights;
- the implementation of adequate security measures to protect users’ health data from possible breaches (pseudonymization, strong authentication, and passwords meeting appropriate security standards);
- the adequacy and completeness of the privacy notice;
- the recipients of the personal data;
- the existence of transfers outside the European Union and any safeguards pursuant to Articles 45 et seq. of the GDPR;
- the data retention policy, including following the uninstallation of the app and deletion of the user profile;
- the app’s access to additional device information (e.g., contacts, microphone, calendar, or photo gallery).
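Pseudonymization, one of the security measures mentioned above, can be sketched in a few lines. The example below is a generic illustration, not taken from any of the apps analyzed: it replaces a direct identifier with a keyed HMAC-SHA256 pseudonym, so that records concerning the same user can still be linked without storing the identifier in clear (the secret key would be held separately by the controller).

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from a direct identifier using
    HMAC-SHA256; without the key, the identifier cannot be recovered
    or linked across datasets."""
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

if __name__ == "__main__":
    key = b"key-held-separately-by-the-controller"  # illustrative only
    record = {"user": pseudonymize("patient@example.com", key),
              "mood_score": 4}
    print(record)
```

The design choice matters: a plain hash of the identifier could be reversed by brute-forcing common values (e.g., email addresses), whereas a keyed construction ties re-identification to possession of the key.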
In conclusion, precisely considering the nature of the data processed and the risks that may arise for the rights and freedoms of data subjects, developers must ensure that their products are compliant with data protection legislation by design and by default throughout the entire life cycle of the processing of health data. However, this alone is not sufficient: users must also be made aware of the extremely sensitive nature of the information they enter into these applications, so that they can act more cautiously and become more aware of the potential risks involved.