Empowering Care: Leveraging AI in Helplines and Telehealth

“I’m sorry, Dave, I’m afraid I can’t do that.” (2001: A Space Odyssey, 1968)

“I’m going to enjoy watching you die, Mr. Anderson.” (The Matrix, 1999)

“I’ll be back!” (The Terminator, 1984)

Our cultural imagination is packed with images of terrifying, humanity-threatening AI. While we have managed to avoid the robot apocalypse so far, typing “AI gone wrong” into your favorite search engine will surface many examples of AI tools that have caused real-world harm. One now-infamous example in the helpline space is Tessa, an eating disorder chatbot intended to replace human counselors, which was taken down after providing harmful advice.

For all the negative images, Hollywood has also given us visions of benevolent AI: R2-D2. WALL-E. The ship’s computer in Star Trek: The Next Generation reliably serving up “Tea. Earl Grey. Hot.” We’re a long way from AI ushering in a bergamot-infused utopia, but there are many examples of AI used well. The Trevor Project, a crisis hotline for LGBTQ youth, built an AI training simulator for new volunteers to accelerate training without increasing the burden on full-time staff.

AI tools are a powerful way to help meet increased needs for behavioral health support if we can avoid the dangers. So, how can you use AI for good in your organization?

I once had a boss who asked, “What problem are you solving?” every time I introduced a new idea. It was one of his more annoying qualities. However, it was a great lesson, and I now regularly ask that question of myself. If you are considering using AI in your organization, this is the first question to ask. Resist pressure from board members, bosses, donors, and peers to “do AI,” and start by identifying the problems your organization faces that AI might help solve. For example, within Aselo, our software platform for crisis helplines, we identified the time counselors spend on manual data entry as a key problem, and we’re investing in AI tools to help counselors capture data more efficiently.

As you consider how to approach AI, it’s helpful to look at two dimensions of the problem. First, where it’s used: internal productivity vs. service delivery. Second, the type of AI: generative vs. traditional.

Uses: Internal vs. Service Delivery

AI for internal productivity tends to be the quickest, cheapest, least risky path to getting started with AI applications. You and your team are most likely already using AI in ways you may not realize, such as autocomplete in email programs. Free or inexpensive tools like ChatGPT are useful for generating an initial draft, getting feedback on a document you’ve written, or quickly summarizing an article or a set of search results. Consider creating an AI policy that gives your staff guidelines for using these tools, such as never inputting confidential information and always reviewing generated content for style and factual errors.

Leveraging AI in service delivery is more challenging. It’s essential to assess the risks, and to take even unlikely risks seriously if their impact would be significant. Tessa’s builders said it deviated from its guidelines only 0.1% of the time, yet that was enough to create a public debacle.

The best current uses of AI in helpline service delivery aid human counselors rather than replace them. Børns Vilkår, the national youth helpline of Denmark, built an AI assistant that watches an ongoing conversation and recommends reference materials to the counselor, who can choose whether to use them. This delivers quality and efficiency gains while minimizing risk by keeping a trained human in the loop.

One safe way to have AI interface directly with service users is to limit it to a fixed set of responses and have it fall back to contacting a human. For example, we use a chatbot that asks a series of survey questions before a service user talks to a counselor. The chatbot can only say things from a fixed script, which minimizes risk, and if the service user has trouble using it, they are routed to a counselor.
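
To make this pattern concrete, here is a minimal sketch in Python. The questions, answer options, and retry threshold are all invented for illustration; this is not Aselo’s actual flow.

# Minimal sketch of a constrained survey bot with a human fallback.
# All questions, options, and thresholds here are hypothetical.

SURVEY = [
    ("How old are you?", {"under 18", "18-25", "over 25"}),
    ("How would you like to talk?", {"chat", "voice"}),
]

MAX_RETRIES = 2  # after this many unrecognized answers, hand off to a person

def run_survey(ask):
    """Run the fixed survey. `ask` poses a question and returns the user's
    reply. Returns the collected answers, or None to signal that a human
    counselor should take over."""
    answers = {}
    for question, allowed in SURVEY:
        for _ in range(MAX_RETRIES + 1):
            reply = ask(f"{question} Options: {', '.join(sorted(allowed))} ").strip().lower()
            if reply in allowed:
                answers[question] = reply
                break
        else:
            # The user struggled with the fixed options: stop the bot
            # and route them straight to a counselor.
            return None
    return answers

if __name__ == "__main__":
    result = run_survey(input)
    if result is None:
        print("Connecting you to a counselor now...")
    else:
        print("Thanks! A counselor will be with you shortly.")

Because the bot’s vocabulary is a closed set, it cannot generate harmful advice; the worst failure mode is a handoff to a person.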

Types: Generative vs. Traditional AI

ChatGPT is an example of generative AI built on Large Language Models (LLMs). Generative AI is designed to create new content based on patterns learned from vast quantities of text. It excels at many creative tasks, though this creativity is driven by statistical patterns, not comprehension. Custom development can make generative AI more effective for specific problem domains, such as recommending relevant articles from an internal knowledge base to a counselor during a conversation.
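
As a rough sketch of how such a recommender can work: score each knowledge-base article against the live conversation and surface the closest matches. The example below uses scikit-learn’s TF-IDF similarity to stay self-contained; a production system would more likely use LLM embeddings. The articles and conversation are invented.

# Sketch of recommending knowledge-base articles during a conversation.
# TF-IDF similarity keeps the demo self-contained; a real system might
# use LLM embeddings instead. The article text is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ARTICLES = {
    "Coping with exam stress": "exam anxiety school pressure study sleep breathing",
    "Talking to your parents": "family conflict parents communication trust",
    "Bullying at school": "bullying school peers teachers safety reporting",
}

vectorizer = TfidfVectorizer()
article_vectors = vectorizer.fit_transform(ARTICLES.values())

def recommend(conversation, top_k=2):
    """Return the titles of the top_k articles most similar to the conversation."""
    query = vectorizer.transform([conversation])
    scores = cosine_similarity(query, article_vectors)[0]
    ranked = sorted(zip(ARTICLES, scores), key=lambda pair: pair[1], reverse=True)
    return [title for title, _ in ranked[:top_k]]

print(recommend("I'm so stressed about my exams and I can't sleep"))

The counselor sees the suggestions alongside the conversation and decides whether to use them, which is what keeps the human in the loop.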

Traditional AI methods, frequently overlooked today, are often a better match for the problem you want to solve. They excel at prediction and classification tasks and usually provide more control and transparency than generative AI solutions. Predicting future staffing needs to meet demand is a great use case for traditional AI. One challenge is that traditional solutions typically require a significant amount of training data.
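
As a minimal illustration of the staffing use case, the sketch below fits a linear regression to invented monthly contact counts and turns the forecast into a staffing estimate. Real demand forecasting would need to account for seasonality and trend, and the capacity figure is an assumption.

# Sketch of forecasting next month's contact volume from history.
# The counts and per-counselor capacity are made up; real forecasting
# would handle seasonality and trend (e.g., with statsmodels).
import numpy as np
from sklearn.linear_model import LinearRegression

monthly_contacts = np.array([820, 870, 910, 905, 960, 1010, 1045, 1090])
months = np.arange(len(monthly_contacts)).reshape(-1, 1)

model = LinearRegression().fit(months, monthly_contacts)
forecast = model.predict([[len(monthly_contacts)]])[0]

CONTACTS_PER_COUNSELOR = 150  # assumed monthly capacity per counselor
print(f"Forecast: {forecast:.0f} contacts, "
      f"~{forecast / CONTACTS_PER_COUNSELOR:.1f} counselors needed")

A model like this is transparent: you can inspect the fitted trend directly, which is part of why traditional methods offer more control than generative ones.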

It’s always important to remember that bespoke AI development, whichever type you use, carries a high cost. Organizations can almost never just throw data at an AI model and get useful results. Even well-organized data needs to be prepared, and it can take multiple iterations of testing to get a model to the point where it is useful and safe. Then comes the ongoing work: feedback loops, maintenance, and training for your team. New AI tools typically cost hundreds of thousands, if not millions, of dollars, so if you only expect the tool to save tens of thousands per year, it may not be worth developing.
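
A quick back-of-the-envelope check makes the cost-benefit point; every figure below is hypothetical.

# Back-of-the-envelope break-even check; all figures are hypothetical.
build_cost = 400_000      # one-time development
annual_upkeep = 60_000    # maintenance, retraining, monitoring
annual_savings = 40_000   # staff time saved per year

net_per_year = annual_savings - annual_upkeep
if net_per_year <= 0:
    print("The tool never pays for itself: upkeep exceeds savings.")
else:
    print(f"Break-even after {build_cost / net_per_year:.1f} years.")

In this hypothetical, upkeep alone exceeds the savings, so the tool never breaks even no matter how long it runs.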

If you are not already using data analysis to support decision-making in your organization, that may be a good first step before moving on to AI. GivingTuesday, in its 2024 AI Readiness report, found that “the best predictor of AI readiness was the size at which an organization hires its first technical or Monitoring, Evaluation, Research, and Learning (MERL) person.” If you can’t afford to hire a data analyst or technologist, options like hiring a fractional CTO or bringing a tech leader onto your board can be worth exploring.

There may be less need for bespoke development in the future. More products will roll out that encapsulate AI for shared use cases, spreading the development cost across many customers. One example is ReflexAI, which offers a counselor training simulator that is customizable to different helplines’ needs without re-developing the core AI underneath.

At all times, and especially when considering third-party vendors, it’s critical to think about service users’ data privacy. Pay special attention to a vendor’s privacy and data retention policies to understand how your data will be used. Some vendors will contractually promise that your data will not be used to train publicly available models, which matters with LLMs because fragments of a person’s private conversation could later resurface in a model’s output. Some will also pledge Zero Data Retention (ZDR), meaning that not only will they not use your data to train new models, they will not store it at all once their systems have processed your request.

AI today realizes neither our greatest hopes nor our worst fears, but it can do real good or harm. Start with clear problem identification, examine the best uses and types of AI to fit your needs, consider data privacy and ethics, weigh costs and benefits, and keep staff involved to ensure the tools are safe and effective. Always keep in mind what’s best for the people you serve, or in the words of the AI film character Tron: “I fight for the users!”

Nick Hurlburt, MS, can be contacted at nick@techmatters.org. Tech Matters is a nonprofit with a mission to bring the benefits of technology to all of humanity, not just the richest 5%. To learn more about Tech Matters and its Aselo open-source contact center platform, visit techmatters.org.

References

Film and television quotes from, respectively: 2001: A Space Odyssey (1968); The Matrix (1999); The Terminator (1984); Star Trek: The Next Generation, “Contagion” (1989); Tron (1982)

GivingTuesday. (2024). AI Readiness and Adoption in the Nonprofit Sector in 2024. ai.givingtuesday.org/ai-readiness-report-2024/

McCarthy, L. (2023, June 8). A Wellness Chatbot is Offline After Its ‘Harmful’ Focus on Weight Loss. The New York Times. www.nytimes.com/2023/06/08/us/ai-chatbot-tessa-eating-disorders-association.html

Psychiatrist.com. (2023, June 5). NEDA Suspends AI Chatbot for Giving Harmful Eating Disorder Advice. www.psychiatrist.com/news/neda-suspends-ai-chatbot-for-giving-harmful-eating-disorder-advice/

The Trevor Project. (2021, March 24). The Trevor Project Launches New AI Tool To Support Crisis Counselor Training. www.thetrevorproject.org/blog/the-trevor-project-launches-new-ai-tool-to-support-crisis-counselor-training/
