Our society has become enthralled with Artificial Intelligence (AI), which is becoming an intrinsic part of our zeitgeist. In the coming years, AI will embed itself into various professional fields. Although legal and policy measures may temper its impact, it is undeniable that our future will be enmeshed with AI; embracing and collaborating with it should be our primary focus. The medical realm has already witnessed the integration of AI into diagnostics and imaging, revolutionizing the healthcare landscape (1).

One sector of healthcare in need of serious reexamination is the mental health field. We are in the midst of a global mental health crisis, exacerbated by the pandemic. The lockdowns altered our perception of meaning and contributed to a surge in mental health disorders, with suicide rates rising 4% in 2021 after a two-year decline (2). This crisis makes it imperative to explore innovative solutions.
AI offers immense promise in suicide prevention efforts. By providing objective real-time assessments, personalized intervention strategies and resources, and improved prediction accuracy, AI could substantially enhance our ability to prevent suicides. A few AI applications are already in practice for suicide risk assessment and intervention, yielding identifiable benefits alongside potential concerns. Considering the global rise in mental health issues and the shrinking number of practicing mental health professionals, many of whom face burnout themselves (3), AI could offer both sympathy and knowledge to those seeking assistance in a severe crisis.
The Trevor Project, an organization focused on suicide prevention for LGBTQ youth, partnered
with Google.org to launch The Crisis Contact Simulator. This AI-powered training tool allows aspiring counselors to practice realistic conversations with LGBTQ youths in crisis, improving their skills and flexibility.
The platform aims to meet the increasing demand for digital mental health services, especially
during the pandemic, as suicide rates and the need for support have surged among LGBTQ youth.
Previously, instructor-led sessions were the primary training method, but leveraging AI technology increases both the number of trained counselors able to offer assistance and the convenience of scheduling. By embracing this AI-powered solution, The Trevor Project aims to reach and connect with LGBTQIA+ youth in need of its support (4).
Voice DGP is a project focused on using voice as a biomarker of health to advance understanding
of various diseases. The group aims to create a diverse and ethically sourced voice database, linked to other health biomarkers, to fuel research in voice AI. By leveraging smartphone applications and
federated learning technology to protect data privacy, they seek to collect data from multiple
institutions and develop predictive models for screening, diagnosing, and treating diseases.
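For readers curious how federated learning protects privacy in a multi-institution study like this, here is a minimal illustrative sketch (not the project's actual code, and the sites and data are hypothetical): each institution updates a shared model on its own private data, and only model parameters, never raw recordings, are sent back and averaged centrally.

```python
# Illustrative sketch of federated averaging (FedAvg): each site updates a
# shared model on its own private data and returns only the updated weights;
# the server averages them. No raw data ever leaves a site.

def local_update(weights, private_data, lr=0.1):
    """One pass of gradient descent on a site's private (x, y) pairs, y ~ w*x."""
    w = weights
    for x, y in private_data:
        grad = 2 * (w * x - y) * x   # derivative of squared error w.r.t. w
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """Each site trains locally; the server averages the returned weights."""
    local_weights = [local_update(global_w, data) for data in sites]
    return sum(local_weights) / len(local_weights)

# Three hypothetical institutions, each holding private (x, y) pairs drawn
# from roughly y = 2x; the data itself is never pooled.
sites = [
    [(1.0, 2.0), (2.0, 4.1)],
    [(1.5, 3.0), (0.5, 1.0)],
    [(2.0, 3.9), (1.0, 2.1)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, sites)
print(round(w, 1))  # converges near 2.0
```

The key design point is that the server only ever sees aggregated model parameters, which is what allows sensitive voice recordings to stay on participants' devices or within each institution.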
The project focuses on five disease categories: vocal pathologies, neurological and neurodegenerative disorders, mood and psychiatric disorders, respiratory disorders, and pediatric diseases. Voice changes associated with these diseases, including those of individuals in a psychotic episode or facing suicidal ideation, will be analyzed to improve clinical care and contribute to the development of effective interventions (5).
Another initiative, the Black Box project, examines the personal devices of deceased veterans for warning signs of suicide. Surprising findings include drafted and deleted suicide notes on the devices studied, with potential implications for both military and general populations. While the data analysis cannot predict individual suicide attempts, it may provide useful signals of heightened risk. The project gains support from family members who lend their loved ones' devices, motivated by suicide prevention and the hope of minimizing future loss (6).
Ethical considerations surrounding the use of AI with suicidal individuals must be thoughtfully weighed. Principlism, a prominent and widely adopted methodological framework in healthcare ethics, establishes a moral theory comprising four prima facie norms: nonmaleficence, beneficence, autonomy, and justice. These principles serve as the cornerstone for morally sound and well-balanced decisions in the complex realm of mental and physical health (7).
The principle of nonmaleficence anchors healthcare providers to their inherent duty of avoiding harm and the moral imperative of never wronging or neglecting their patients, a duty physicians affirm in the Hippocratic Oath (7). Similarly, the principle of beneficence underscores a profound ethos: to consistently treat patients with the fundamental creed of fostering their overall health and welfare. This commitment to ethical duties is especially crucial when addressing the needs of individuals facing existential moral crises, particularly in the sensitive domain of suicidal ideation.
It is within the nexus of empathy and clinical judgment that medical decisions find their grounding. However, as we contemplate integrating nonhuman entities such as AI into life-or-death situations, we encounter a pressing question: can something nonhuman offer genuine compassion and good healthcare outcomes? What ethical risks should be considered as we delve into the development
and implementation of AI technology in lieu of human medical professionals to save lives? Here,
beneficence assumes a paramount role, demanding that AI interventions be thoughtfully crafted to
provide authentic support and resources, with an unwavering focus on the user’s mental well-being
above all other objectives.
Regular and meticulous monitoring becomes the bedrock of preventing potential harm, requiring ongoing evaluation of AI algorithms and their biases to ensure patient safety. In this intricate landscape where ethics and technology converge, striking a delicate balance between AI innovation and human compassion is a pivotal pursuit. It is through this harmonious integration that we can invite AI into the conversation while upholding the highest ethical standards to ensure the preservation of human lives.
In the realm of mental health, an individual grappling with suicidal thoughts seeks profound intervention, as these feelings define their very real and very current existence. However, this complex subjectivity presents a challenge to providing medical guidance rooted in philosophy. Emerging AI applications designed to intervene in suicidal ideation sound promising, yet they raise another vital question: can we effectively partner with emotionless computers to offer objective interventions during moments of crisis?
The quantitatively driven nature of AI risks disregarding the subjective perspective of the autonomous individual seeking crisis care, potentially overlooking the essential elements of personal perspective, self-wholeness, and humanistic, experience-based critical thinking. Such oversight may undermine the seeker's sense of human connection, a fundamental aspect that can be lost when one experiences suicidal thoughts. To strike a balance, we must be mindful of preserving autonomy, respecting the individual's right to decide on their mental health care. Instead of replacing human support,
AI tools should augment it, providing individuals with the assurance of a known human connection throughout their journey to healing.

In certain bioethical discussions, human existence is viewed as the fundamental bedrock of vulnerability, as being inherently human can intensify susceptibility to harm and damage (ten Have, 15). This perspective raises concerns about how AI interacts with our corporeal existence and whether it leaves us vulnerable to exploitation by malicious actors. If hackers gain access to individual data or misuse AI systems to target those experiencing suicidal thoughts, they could prey on these vulnerable individuals.
The Black Box project presents another moral quandary, as obtaining informed consent for data mining becomes complex when the individuals concerned are no longer living. The project's boundaries are questioned in terms of respecting the autonomy of those who never explicitly gave consent, with family members making decisions on the decedent's behalf. While the project's potential to yield groundbreaking suicide interventions for veterans, or to scale for public adaptation, is noteworthy, deceased individuals retain a right to health privacy.
Balancing potential harms and benefits becomes an ethical dilemma, underlining the need for stringent data protection and confidentiality measures to safeguard the vulnerable, in this case the deceased.
Informed consent becomes critical, ensuring that users fully understand the capabilities and limitations of AI systems, enabling them to make informed choices about their involvement. Transparent communication about the role and limitations of AI is essential for accountability, building trust with users and the broader community, all while ensuring the protection of vulnerable individuals, patients, and those who have passed.
In conclusion, the integration of Artificial Intelligence in suicide prevention holds vast promise for addressing the global mental health crisis. As AI technology penetrates various professional fields, its potential to revolutionize mental healthcare cannot be overlooked. Collaborating with AI and embracing its capabilities could significantly enhance our ability to prevent suicide. However, as we tread down this path, it is vital to consider the ethical implications and potential risks of relying on nonhuman entities to provide guidance in life-or-death circumstances. Striking a delicate balance between AI innovation and human compassion becomes paramount in this pursuit. Augmenting human support with AI tools, rather than replacing it, preserves both autonomy and the human connection vital in moments of crisis.
Furthermore, robust data protection measures and informed consent are essential to safeguard
the vulnerable from potential exploitation and uphold the highest ethical standards while harnessing the transformative potential of AI in suicide prevention.
Kara, with 15 years of experience in communications and marketing, currently manages health programs with the Los Angeles Public Health Department, notably contributing to COVID-19 efforts. Her roles with the department and her degrees in Television & Film Production and Bioethics reflect her commitment to innovation and public welfare. Passionate about mental and behavioral health, she is excited to lend her philosophical voice to Breaking Taboo and explore ethical considerations.
- Beauchamp, Tom L., and James F. Childress. Principles of Biomedical Ethics. 7th ed. New York: Oxford University Press, 2013.
- Ten Have, Henk. Vulnerability: Challenging Bioethics. New York: Routledge, 2016.