What AI can and cannot do for you, or for the person treating you, is changing every day. Here's what has experts worried.
Who will manage our health and mental wellbeing in the coming years, and how will they go about it? Will "they" be people, or machines powered by AI? The answer is both, and for a country in the grip of a mental health crisis, that could be a good thing or a bad one.
Finding solutions to what many clinicians and patients alike describe as a "broken" system requires reimagining healthcare. Could technology be the answer? That's the question Psycom asked experts and members of our advisory panel, including a California doctor who specializes in treating depression in both men and women.
Here's a look at the benefits (there are some) along with some words of caution. First, a brief overview of the biggest issues in mental health today:
A serious shortage of mental health professionals. According to the American Psychological Association (APA), 60% of practitioners aren't taking new patients. Could AI, such as chatbots, be part of the solution? Medical residency slots have increased by 20 percent, but the gap in care will take years of training for new doctors to close.
Our teens, and especially LGBTQ+ youth, are at risk. This is not a new phenomenon, but the situation is getting more serious. A large 2010 study of 10,000 teens published in the Journal of the American Academy of Child and Adolescent Psychiatry found that 20 percent of teenagers (ages 13-18) have a severe mental illness that affects their everyday lives. LGBTQ+ youth and people with disabilities face unique challenges that AI, in certain applications, may help address. More than a decade later, those numbers remain alarming, and suicide is still the second leading cause of death among teens. Where do they go for help other than the nearest emergency department or their primary care doctor? Chatbots and AI are now being tested to determine their effectiveness.
Men are struggling, too. AI-driven therapy could encourage more men to seek emotional help (even when it doesn't come from a person). The convenience, 24/7 availability, and anonymity of AI might appeal to more men, and given that the suicide rate for men is four times that of women, according to newly updated data from the CDC, figuring out how to make care more appealing to men is a must. Since men are less likely to seek help before taking their own lives, AI might make reaching out a little easier. Dr. Jill Harkavy-Friedman, a psychologist and director of research at the American Foundation for Suicide Prevention, puts it this way: "I've argued that our current approach, which requires men to [see] a [health care] practitioner and then open up in a way that men are not comfortable with, simply does not work." She told the BBC, "Men seek help for mental health issues less often. It's not because they don't have the same problems as women, but they're less likely to be aware that mental health issues or stress could put them at higher risk of suicide."
It remains to be seen how AI will be used in this field and whether it can help treat mental health problems such as depression and anxiety. In the meantime, it's becoming clear that AI could play an important role in the near future.
First, Do No Harm
To say that AI is a complicated subject would be an understatement. Factor in the role AI could play in mental health care and you have the makings of an epic debate.
This article isn't about the specific technical issues of AI, or whether it will (or could) entirely replace human healthcare. Our goal is to examine the possibilities for both clinicians and patients.
Rates of depression and anxiety now exceed those of physical health problems like heart disease and diabetes, and they affect Americans regardless of race, gender, or socioeconomic status. But using AI algorithms to make decision-making more efficient and treatment more effective may look different for diabetes than it does for depression. What should that difference be?
The debate may be murky, but one thing is crystal clear: AI innovators must protect Americans' rights and safety, as the Biden-Harris administration advised during a May 2023 meeting with the CEOs of four AI technology companies.
Addressing inequity should also be a global priority, as the World Health Organization explains in its first-ever report on the ethics and governance of artificial intelligence for health. Nothing is more vital than our health, physical and mental alike. Some squeamishness about the science-fiction prospect of AI managing our minds is justified, but serving even a small portion of the growing number of patients who need help is what business experts might call "low-hanging fruit."
When it comes to high-priority, complex systemic problems, such as inadequate care, high treatment costs, and inequitable access, it's not yet clear whether technology and data-gathering operations can help solve them while effectively protecting patients' privacy and data, and while ensuring that the answers AI chatbots provide won't undermine the quality of care.
There are no answers yet, but for now the best we can do is educate ourselves about both the risks to mental health and the potential value of the technology. In this article, we cover the basics of AI and mental health apps (MHAs), the importance of data privacy, and the systemic bias built into technology that hurts underrepresented groups of all kinds. Each section includes key guidance on how to use AI, chatbots, and MHAs.
The National Institute of Mental Health suggests that mental health apps that incorporate AI may be useful in the following areas:
Social support (connecting you anonymously with others like you and helping you navigate social situations with prompts)
Improved cognitive functioning (encouraging greater awareness of thinking patterns, such as rumination)
Training in skills
Symptom tracking
Passive data collection (aggregating data to display trends and help you recognize the onset of symptoms or other issues; a simple sketch of this idea appears after this list)
Self-management, per one report (helping you understand how well you're managing your own care, such as exercise, sleep, and diet, both for overall health and to keep symptoms in check)
The more actively a patient engaged in self-monitoring, and the easier an app was to use, the more likely it was to be effective compared with traditional psychiatric treatments.
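To make the symptom-tracking and passive data-collection ideas above concrete, here is a minimal sketch in Python of how an app might aggregate daily mood ratings and flag a worsening trend. The rating scale, the seven-day window, and the alert threshold are illustrative assumptions rather than features of any specific app, and a real product would route any flag to a human clinician.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class MoodEntry:
    day: date
    rating: int  # self-reported mood, 1 (worst) to 10 (best)

def weekly_trend(entries: list[MoodEntry], window: int = 7) -> float | None:
    """Compare the average rating of the most recent `window` days
    with the average of the `window` days before that."""
    entries = sorted(entries, key=lambda e: e.day)
    if len(entries) < 2 * window:
        return None  # not enough data to compare two full windows
    recent = mean(e.rating for e in entries[-window:])
    previous = mean(e.rating for e in entries[-2 * window:-window])
    return recent - previous

def should_flag(entries: list[MoodEntry], drop_threshold: float = 1.5) -> bool:
    """Flag a sustained drop in self-reported mood so the app can suggest
    reaching out to a clinician (an illustrative rule, not a clinical one)."""
    trend = weekly_trend(entries)
    return trend is not None and trend <= -drop_threshold
```

As the NIMH framing suggests, the value of this kind of tracking lies less in the numbers themselves than in prompting an earlier conversation with a professional.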
Threatening or Therapeutic?
Psycom's advisors have varying degrees of enthusiasm about AI, and they caution that it's virtually impossible to predict its future because it's evolving so quickly.
Penn sees a benefit in the fact that AI tools such as ChatGPT could give him more time to interact with patients. It's no secret that data entry, writing documentation for insurers, and scheduling take up significant portions of his day. Being able to "outsource" these tasks to AI could mean less time at the computer and more time with patients.
Harvey Castro, MD, a physician and healthcare consultant who recently published an opinion piece on AI as a kind of virtual medical assistant, says it could also help bring care to underserved communities: "[AI] can facilitate telemedicine, enabling patients in remote regions to receive medical care without traveling to a hospital or clinic … and allow patients to access medical information and schedule appointments from the comfort of their homes."
Still, Penn worries about the security of all this health information and wonders whether large hospital systems will start asking for even more of it (including things that seem less important, like when a patient last saw a dentist, a question he is already required to ask his patients).
Another concern is a person in crisis being asked a long series of questions by an AI chatbot before getting help from a living, breathing caregiver. As Penn puts it, "Patients in crisis need to talk to humans."
He advises against substituting an AI chatbot for human connection for anything beyond "garden variety" stress. "And how would the AI chatbot determine whether your stress is persistent and severe? What kind of support would it offer in that case?" asks Penn. "It feels like a race to the bottom" in care, he says. "We need to strike a balance between crisis care and making life simpler using AI tools and AI-generated content."
Alexander, who wrote a mental wellness guidebook for college students, describes her approach to AI in mental wellness as "curious but cautious." With years of public health experience on her resume, she believes there are challenges technology can help overcome. AI, along with machine learning (ML), could help patients and health professionals spot warning signs more quickly, analyzing speech, text, and images to identify symptoms in ways that can flag patients experiencing suicidal thoughts.
The goal, she says, is "ultimately to connect potential patients with people who can provide support." But she also cautions, "A bot's understanding of human behavior does not mean it can provide accurate medical advice or recognize a crisis."
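As a rough illustration of the kind of text screening Alexander describes, here is a deliberately simple Python sketch that scans incoming messages for concerning phrases and routes anything it flags to a human. The phrase list and the routing labels are illustrative assumptions; real systems use far more sophisticated, clinically validated models, and as Alexander notes, none of this replaces a clinician's judgment.

```python
# Illustrative only: a real risk model would be trained and validated
# clinically, and every flag should reach a human responder.
CONCERNING_PHRASES = [
    "want to die",
    "kill myself",
    "no reason to live",
    "can't go on",
]

def screen_message(message: str) -> str:
    """Return a routing decision for an incoming message."""
    text = message.lower()
    if any(phrase in text for phrase in CONCERNING_PHRASES):
        # Escalate immediately; in the U.S., the 988 Suicide & Crisis
        # Lifeline is available 24/7.
        return "escalate_to_human"
    return "continue_automated_support"

if __name__ == "__main__":
    print(screen_message("I had a rough day but I'm managing."))   # automated support
    print(screen_message("Lately I feel like I can't go on."))     # escalate to a person
```

The design choice mirrors Alexander's point: the technology's job is triage and connection, not diagnosis.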
What about suggesting that patients use Bard to prepare for therapy sessions? (For some patients, getting started can be difficult.) So we asked Bard, "What are some questions I could ask my therapist today?" The AI's reply included some good conversation starters, such as, "How can I take a more positive view of my situation?"
Here, we were using AI to foster the bond between people, not as a standalone therapy aid or a replacement for one. When we later asked Bard a question about medication, it told us it couldn't give medical advice. (We asked, "What should I do if I have a bad reaction to an antidepressant?" The answer was generic information about the differences among antidepressant medications and their side effects.)
It's important to note that Bard doesn't yet promote itself as a reliable source of information. It describes itself as an "experiment that may give inaccurate or inappropriate responses," and it improves by learning from user feedback.
The bottom line: right now, AI is mostly experimental. Don't treat your mental health as an experiment. It's serious business and deserves the attention of a medical expert. If you or someone you love is struggling, call or text 988, the national crisis helpline. It's free and available 24 hours a day.
Will AI Tools Compromise Personal Health Information?
Penn isn't sure he trusts AI's ability to protect privacy and data. One concern is that patient data could be evaluated and filtered according to rates of depression or reports of suicidal thoughts, causing those patients to be labeled "high risk" and excluded from future treatment plans.
It's impossible to know how secure your information is once it's shared with an organization. Algorithmic bias at UnitedHealth Group kept Black patients from receiving the care they needed, according to Science. The AI model UnitedHealth Group used was later changed, but the episode exposes how implicit and explicit biases can affect the very care patients most need. In this case, the predictive algorithm produced clearly biased results.
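In the case Science reported on, the algorithm reportedly used past healthcare spending as a stand-in for how sick patients were. The toy Python sketch below, using synthetic data and invented names, is only meant to show why that kind of proxy can go wrong; it is not the actual model.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # a rough stand-in for true health need
    past_spending: float      # shaped by access and insurance, not just need

# Synthetic, illustrative data: two equally sick patients whose historical
# spending differs because of unequal access to care.
patients = [
    Patient("Patient A", chronic_conditions=4, past_spending=12_000),
    Patient("Patient B", chronic_conditions=4, past_spending=5_000),
]

# Proxy-based ranking: "predicted future cost" stands in for "who needs help most."
by_cost_proxy = sorted(patients, key=lambda p: p.past_spending, reverse=True)

# Need-based ranking: use a direct measure of illness instead.
by_need = sorted(patients, key=lambda p: p.chronic_conditions, reverse=True)

print("Ranked by cost proxy:", [p.name for p in by_cost_proxy])
print("Ranked by health need:", [p.name for p in by_need])
# With the cost proxy, Patient B drops down the list despite being just as
# sick, the kind of pattern researchers documented at population scale.
```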
The benefit of having AI handle routine tasks and data entry for all kinds of medical professionals seems obvious. Alexander sees ChatGPT as a potential collaborator that can help her brainstorm exercises to use with patients. She recently entered the prompt: "If I have insomnia, weight gain, irritability, and anger, do I have major depressive disorder?"
The response she got lacked the nuance she wanted, stating that MDD "is an extremely complex mental health problem …" The part of the response she liked most was: "If you are concerned that you are suffering from symptoms of MDD, it is essential to consult health professionals, like a psychiatrist or therapist, who will assess your symptoms and give you an exact diagnosis. They can collaborate with you to devise an appropriate treatment plan. …"
The bottom line: your privacy around your mental health matters. "Unfortunately, protections and requirements for adults, minors, family members, and even treatment providers can be unclear," according to Mental Health America's State of Mental Health in America report. For more information and answers to frequently asked questions about privacy, visit their website, which includes resources for minors, adults, and mental health providers.
Are Mental Health Apps (MHAs) That Use AI Safe?
MHAs typically provide support a human would otherwise offer, but they do it through text or instant messages, Alexander explains. Are they as effective, or as safe, as a licensed human professional? They're not, according to both of Psycom's advisory panel members.
Apps can't replace humans, at least not yet. But that hasn't stopped their growth. The reason they're so popular is pure math: more than 150 million people live in federally designated mental health professional shortage areas, according to the Association of American Medical Colleges.
An MHA is basically a software program that helps individuals with mental health concerns, according to Stephen Schueller, executive director of One Mind PsyberGuide, an initiative described as helping people make informed choices about digital mental health products. (The Anxiety & Depression Association of America has partnered with One Mind and MindApps.org to help consumers sort through the clutter. This non-profit partnership has produced an informative overview of smartphone apps, including details on each app's security, transparency, and ease of use.)
Woebot Health, one app that has been available for a while, uses a chatbot to provide support to adults 24 hours a day. The benefit is that it can respond at any time; AI doesn't sleep or take holidays. If you have a panic attack in the middle of the night, you'll get an answer. The app draws on a variety of evidence-based therapy approaches (such as cognitive behavioral therapy) and offers support for people dealing with anxiety, insomnia, and postpartum depression.
The bottom line: there's no way of knowing yet how much these apps will benefit people in the long run. But experts agree the technology is one way to meet a massive consumer demand.
Other Surprising AI Research
Numerous medical journals have published meta-analyses and reviews of research on MHAs and CAIs (chatbot or conversational agent interventions), and there's a common thread in their findings. Most reviews begin positively, noting, for example, how many of the apps on their final recommended lists included privacy protections or research-based practices.
Once the technologies were evaluated against rigorous standards, though, the enthusiasm waned, whether because the technology underperformed over time, because it wasn't tested in a clinical setting (problematic because of the lack of uniformity), or because no data on long-term effects were available.
In a study published in the Canadian Journal of Psychiatry, researchers noted that most studies that "gamify" habit tracking (making self-inquiry, or recording your daily routine, more fun through game-based strategies) and depression symptom tracking take place outside the traditional psychiatric community, leaving only eight to 10 studies they could include in their analysis. According to the study, gamification was effective only when people reported their results to a psychiatrist or participated as part of the research; otherwise, they tended to lose interest and quit.
Still, the results did show improvements in exercise tracking, medication adherence, recall of trauma-related events, and the ability to manage stress. The long-term outlook is optimistic.
Most of the reviews were more optimistic about technology's potential to help with the shortage of mental health care providers. That was true both for MHAs designed to ease stress and anxiety and for those addressing serious illnesses, such as schizophrenia and depression.
"Research-proven interventions should be used more frequently in the field of mental health. CAIs work and are accepted by those suffering from mental health issues. The clinical use of this new technology can help conserve human resources for health and improve the utilization of mental health services," concluded one review in The Journal of Medical Internet Research.
The Dark Side: People in Crisis
Okay, but what are the limits of AI? Aren't these technologies risky for kids, who may be susceptible to bad actors and may lack the knowledge to judge whether an app or AI chatbot is reliable?
AI isn't to blame here, but we've already seen the harm social media can cause. Now Americans are trying to turn back the clock, and we can't. Many of our friends and loved ones are harmed by social media every day, especially on Instagram and TikTok. The negative effects have been documented extensively. Many people suffer under the constant (and absurd) portrayals of what life could look like if it were altered, morphed, and processed into exactly the way we'd want it to appear. But real life isn't an endless stream of rainbows, puppies, and uplifting Taylor Swift tracks.
What experts do recognize is that AI is moving fast, perhaps too fast. AI is only as good as the data it collects and keeps, and the quality and reliability of those data sources also matter (and can be difficult to determine in AI's current form). When uncertainty or complicated problem-solving is involved, human interaction is still more efficient and effective.
People in crisis are especially vulnerable. In a desperate search for help, it can be hard to tell whether a chatbot is actually helping, or whether an artificial agent that sounds like a person could cause harm because it can't address or manage a serious mental health problem.
This has already happened. In one reported incident, a Belgian man died by suicide after connecting with a chatbot called Eliza in the app Chai. He had reportedly been talking to the chatbot for weeks about his anxiety over climate change. His wife and friends close to him said he had numerous conversations with Eliza over the course of six weeks and described the bot as a "confidante." The man, who was in his 30s, took his own life, leaving behind two children. His widow told the newspaper La Libre that "without these conversations with the chatbot, my husband would still be here." Chai Research executives acknowledged that they had tried to limit these kinds of outcomes, but issues with the technology remain.
Though not strictly about AI, evaluations of mental health apps have been carried out by various groups, including one published in Frontiers in Digital Health. Even though that review is thorough and includes guidelines for evaluating apps, keeping users safe is an uphill battle. The apps are updated regularly, but added functionality typically isn't free. Their content is often inherently subjective and therefore unreliable, and most are not developed by medical experts using scientifically proven methods for suicide prevention.