The excitement over Artificial Intelligence (AI) is often tempered by concerns about its potential for harm. That’s especially true in healthcare, where the potential gains run up against the principled and practical requirements of protecting patient data.
Anitha Vittal, Head of Risk and Compliance at Providence Global Center in India, tackles the topic head-on in this podcast. She sees AI as having great potential to revolutionize research, diagnosis, and treatment, provided we can create guardrails for its responsible use.
To do so, she recommends focusing on the risks. The big ones are:
- Data protection and security. AI requires huge amounts of data, which raises privacy concerns.
- Bias. If the underlying data is biased, the output will be as well.
- Transparency and accountability. AI systems can be very difficult to understand, which is why it’s essential to build transparency and accountability into the process.
Compliance teams also need to be educators, helping the AI team and businesspeople understand the ethical considerations involved. One technique she suggests is creating case studies and having participants play different roles to better understand the perspectives and risks at stake.
Listen in to learn more about managing the opportunities and risks of AI, including the importance of what she calls the Four E’s: Establish, Embed, Enforce, and Evolve.