Transcript
This transcript has been edited and refined for clarity.
Marianne McGee: Hi, I'm Marianne Kolbasuk McGee, executive editor at Information Security Media Group, and I'm here at the HIMSS Healthcare Cybersecurity Forum in Boston, speaking with Barbee Mooneyhan, who is vice president of security, IT and privacy at Woebot Health, an AI-powered mental health application. Hi, Barbee. For starters, for those who are not familiar with Woebot Health, please briefly describe what the company provides in terms of mental health services. And what does it mean for the company's services to be AI-driven?
Barbee Mooneyhan: Woebot Health is a mental health ally in chatbot form. We take a natural language processor and pair it with curated content under clinical oversight. That creates an experience for patients and users that gives them someone to have conversations with and lets them work through methods for improving their mental health, whether it's at 2 a.m. or 4 p.m. It's always a mental health ally in your pocket.
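To make that pattern concrete, here is a minimal sketch of a non-generative chatbot loop, assuming a hypothetical intent classifier and content library; the names and matching rules are illustrative, not Woebot's actual implementation. The natural language processor only classifies what the user said; the reply is always drawn from pre-written, clinically reviewed content, never generated on the fly.

```python
# Illustrative sketch: NLP classification paired with curated content.
# CURATED_CONTENT, classify_intent and reply are hypothetical names.

CURATED_CONTENT = {
    "anxiety": "Let's try a grounding exercise together...",
    "low_mood": "Thanks for sharing that. One CBT technique is...",
    "default": "Tell me more about what's on your mind.",
}


def classify_intent(user_message: str) -> str:
    """Stand-in for the NLP model; returns an intent label only."""
    text = user_message.lower()
    if "anxious" in text or "worried" in text:
        return "anxiety"
    if "sad" in text or "down" in text:
        return "low_mood"
    return "default"


def reply(user_message: str) -> str:
    """Every reply comes from the curated library, keyed by intent."""
    return CURATED_CONTENT[classify_intent(user_message)]


if __name__ == "__main__":
    print(reply("I've been feeling really anxious lately"))
```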
McGee: So with that said, what steps is Woebot Health taking to protect individuals' privacy and security? And how does AI change that in terms of the risks?
Mooneyhan: I think that's a good question. Because we are AI-based, there is the natural language processor, but we are incredibly thoughtful about the methods we use when it comes to AI. First and foremost, when we select our vendors, the vendors have to meet specific relationship requirements. Our contracts are thoroughly reviewed, and we make sure, from a partnership perspective, that we know exactly where our data is, where it's going to go and how it's going to be used. We also know that internally, so we move through the process of architecting our infrastructure and our AI services so that the user experience is sound but we can also control each element of it. We don't have outputs we can't account for. From a security and privacy perspective, what you say to Woebot is considered a privileged conversation. So while, yes, there are some indicators we use to make sure the experience is appropriate for the user, there are also a lot of safeguards in place to prevent the exposure of that information. Take what we call transcripts: we have a lot of conversations internally about transcripts and their appropriate usage, such as who can view them and who can't. We have least-privilege, role-based access, and the user data sits in a separate environment from all the other environments, so that we can keep deep controls on users' information while still running the application appropriately. That reduces the attack potential on user information. And we are protective of the conversations themselves.
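As an illustration of the least-privilege, role-based access model described here, the sketch below denies transcript reads by default and isolates conversation data behind its own interface. All names (Role, TranscriptStore, TRANSCRIPT_READERS) are hypothetical, chosen only to show the pattern.

```python
# Illustrative sketch: least-privilege, role-based access to transcripts.
# Transcript data lives behind its own store, and every read is checked
# against an explicit allowlist; any role not listed is denied by default.

from enum import Enum, auto


class Role(Enum):
    CLINICIAN = auto()
    SUPPORT = auto()
    ENGINEER = auto()


# Only roles listed here may view transcripts (least privilege).
TRANSCRIPT_READERS = {Role.CLINICIAN}


class TranscriptStore:
    """Stands in for the isolated environment holding user conversations."""

    def __init__(self):
        self._transcripts = {}  # user_id -> list of messages

    def read(self, requester_role: Role, user_id: str) -> list:
        if requester_role not in TRANSCRIPT_READERS:
            raise PermissionError(
                f"{requester_role.name} is not authorized to view transcripts"
            )
        return self._transcripts.get(user_id, [])
```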
McGee: So you're now using AI in your mental health services. But what other emerging use cases are you seeing or hearing the most about right now for generative AI in healthcare, in terms of promising applications that you think could benefit patients and healthcare providers, and that have to be weighed against whatever the risks are?
Mooneyhan: First, I want to make clear that Woebot does not currently use generative AI. It is strictly curated content that is served back based on our natural language processor. We've just released our first study on generative AI and the fantastic work being done there. But we have so much testing to do to make sure that what we're doing makes sense, especially with the conversation that's prevalent in the industry right now. Some of the potential use cases for a future state would be things like what we just talked about on stage: using it to take physician notes, or to create imagery of where a condition could be in three months. So if you have a disease, you could use generative AI to produce imaging of where it's going to be in three months. The use cases are endless. We could insert it into different operations: anything you do five times you could probably automate, and if you can automate it, you could probably automate it with a generative text component. However, I do think it's incredibly important that we don't just implement this into healthcare, because we're working with patients' lives. We have to have good guardrails, we have to understand what it's doing, and we have to verify and test the outputs. We have a lot of testing to do in the environment before major changes can be made.
McGee: And in terms of the good and the bad of generative AI in healthcare when it comes to data security, privacy and potential breaches, what do you see looking ahead?
Mooneyhan: There are obviously amazing things we can do with it. Fast-forward to November, when we got much-improved capabilities, and the environment has gone wild since then, with everybody trying to get to the front of the line and figure out how to use generative AI. So there's a lot of good that can happen, but there's also a lot of bad. One of the things we try to keep in mind is that there are different pieces: there are outside threats, and there's the potential for insider threats, and insider threats don't have to be malicious; they can be completely unintentional. So we have to make sure that in the long run, the security and privacy components are well defined. If you have good policies in place, you train your workforce appropriately, and you spell out what they can do, what they can't do and how to use it appropriately in your environments, you reduce the risks around security and privacy. We don't want to put our business intelligence information or our PII into these tools and end up with a privacy implication. But we also want to be able to support the innovations. A lot of it is just being prepared, being able to address issues appropriately as they come up, proactively understanding the potential ramifications, and training our workforce for them.
McGee: So those are some of the guardrails. Finally, what is your top security and privacy advice for healthcare entities deploying generative AI efforts in their organizations right now? Is there anything beyond what you've mentioned that you think is important for them to consider?
Mooneyhan: Most important is that we have to understand it, we have to test it and we have to validate it. Without that, we could be putting things into place without knowing what's going to happen, because we don't fully understand the technology yet; we are still understanding it. But I do think we have protective controls. If you're thinking through the protective controls, it means making sure you don't have text going out to a user without passing through a clearance point. So whether the user never talks to an AI directly, or you have a buffer in there that performs verification, has those good checkmarks in place and will stop the prompt before it goes back out, being able to implement those controls and test them to make sure they work appropriately is going to be the big difference between putting generative AI into place that could potentially harm and putting it into place with the safeguards and guardrails already there.
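Here is a minimal sketch of the kind of verification buffer described above: generated text never goes straight to the user, and anything that fails a clearance check is replaced with safe fallback content. The blocklist check is a deliberately simple, hypothetical stand-in for a real clinical-safety classifier, and all names here are illustrative.

```python
# Illustrative sketch: a clearance buffer between a generative model and
# the user. Output that fails verification is stopped before it goes
# back out and replaced with curated fallback content.

BLOCKED_TERMS = {"dosage", "diagnosis"}  # stand-in for real safety checks

SAFE_FALLBACK = "I'm not able to help with that, but here is a resource..."


def clearance_check(text: str) -> bool:
    """Return True only if the generated text passes all safety checks."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def respond(generated_text: str) -> str:
    """Stop the output before it reaches the user if it fails verification."""
    if clearance_check(generated_text):
        return generated_text
    return SAFE_FALLBACK


if __name__ == "__main__":
    print(respond("Here is a helpful breathing exercise."))  # passes
    print(respond("Your diagnosis is..."))                   # blocked
```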
McGee: Well, thank you, Barbee. I've been speaking to Barbee Mooneyhan, and I'm Marianne Kolbasuk McGee of Information Security Media Group. Thanks for joining us.