AI In Healthcare: Protecting Your Data In A Digital World

Advances in healthcare artificial intelligence (AI) are occurring rapidly and will soon have a significant real-world impact. Several new AI technologies are approaching feasibility, and a few are close to being integrated into healthcare systems [1, 2]. In radiology, AI is proving highly useful for the analysis of diagnostic imagery [3, 4]. For example, researchers at Stanford have produced an algorithm that can interpret chest X-rays for 14 distinct pathologies in just a few seconds [5]. Radiation oncology, organ allocation, robotic surgery and several other healthcare domains also stand to be significantly impacted by AI technologies in the short to medium term [6, 7, 8, 9, 10]. In the United States, the Food and Drug Administration (FDA) recently approved one of the first applications of machine learning in clinical care—software to detect diabetic retinopathy from diagnostic imagery [11, 12]. Because of this rapid progress, there is a growing public discussion about the risks and benefits of AI and how to manage its development [13].

Many technological discoveries in the field of AI are made in an academic research environment. Commercial partners are often necessary to disseminate these technologies for real-world use. As such, the technologies frequently undergo a commercialization process and end up owned and controlled by private entities. In addition, some AI technologies are developed within biotechnology startups or established private companies [14]. For example, the diabetic retinopathy software noted above was developed and is maintained by the startup IDx [12, 13]. Because AI itself can be opaque for purposes of oversight, a high level of engagement with the companies developing and maintaining the technology will often be necessary. The United States Food and Drug Administration is now certifying the institutions that develop and maintain AI, rather than focusing on the AI itself, which will be constantly changing [15]. The European Commission has proposed legislation containing harmonized rules on artificial intelligence [16], which delineates a privacy and data principle of organizational accountability very similar to that found in the European General Data Protection Regulation [17, 18]. Other jurisdictions, such as Canada, have yet to complete regulation tailored specifically to AI [19]. AI remains a fairly novel frontier in global healthcare, and one currently without a comprehensive global legal and regulatory framework.

The commercial implementation arrangements noted above will necessitate placing patient health information under the control of for-profit corporations. While this is not novel in itself, the structure of the public-private interface used in the implementation of healthcare AI could mean that such corporations, as well as owner-operated clinics and certain publicly funded institutions, will have an increased role in obtaining, utilizing and protecting patient health information. Here, I outline and consider privacy concerns with commercial healthcare AI, focusing on both implementation and ongoing data security.