A View of Artificial Intelligence in Healthcare

Adopting AI responsibly calls for a hard look at the potential for ground-breaking advances and the many concerning barriers

First Look Online - AI | September 27, 2023

Sharon S. Gentry, MSN, RN, HON-ONN-CG, AOCN, CBCN
Program Director, AONN+

A simple explanation of artificial intelligence (AI) is the discipline of making machines that can think like humans. It is a “smart” technology that can process large amounts of data, such as numbers, text, and images, to recognize patterns and use real-time information to make decisions the way humans do. Unlike humans, it has the capacity to process enormous volumes of data, but what it cannot currently replace is the emotional depth, intuition, and unpredictability of human creativity. So, AI is capable of learning, reasoning, and making decisions or taking actions, mimicking human cognition, but it cannot capture interpersonal skills such as empathy, compassion, and active listening.

In healthcare, AI encompasses machine learning applications in medical settings that can support diagnosis and treatment recommendations, patient engagement, adherence, and administrative activities. Since AI is a collection of technologies that perform tasks associated with human intelligence, the following is an overview of select tools and their use in healthcare.

Machine Learning

Machine learning is one of the most common forms of AI and is a technique for fitting models to data. It starts with data, such as numbers, photos, or text, used to train a computer model to find patterns, make predictions, and “learn.” An example is better diagnostics: these enabled tools can analyze medical reports and images, such as identifying cancerous tumors in mammograms. This computer-aided detection (CAD) is an example of deep learning, neural network models with many levels of features, whose thousands of hidden features are uncovered by the faster processing of graphics processing units and cloud architectures, to predict outcomes.1 A less complex but equally important form of machine learning is the neural network. It views problems in terms of inputs (test results, risk factors for a specific disease), outputs (whether the individual develops the disease), and weights of variables (how strongly each input is associated with the outcome) to predict whether an individual is at risk of a certain disease. This conglomerate of data captures complex relationships that medical personnel cannot address alone.
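To make the inputs-weights-outputs idea concrete, here is a minimal Python sketch of a simple risk model. Every feature name and weight value is hypothetical, chosen only to illustrate the mechanics; real clinical models learn their weights from thousands of patient records and are far more complex.

import math

# Minimal sketch of the input/weight/output mechanics behind a risk model.
# All feature names and weight values below are hypothetical.
def predict_risk(features, weights, bias):
    """Weighted sum of inputs passed through a sigmoid, giving a 0-1 risk."""
    score = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical inputs: test results and risk factors for one individual.
patient = {"age_over_60": 1.0, "smoker": 1.0, "abnormal_test": 0.0}

# Hypothetical weights: how strongly each input is associated with the outcome.
weights = {"age_over_60": 0.8, "smoker": 1.2, "abnormal_test": 2.0}

print(f"Predicted risk: {predict_risk(patient, weights, bias=-2.5):.2f}")  # prints 0.38

In a trained neural network, those weights would be set automatically by fitting the model to historical outcomes rather than being written by hand.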

CAD is a boost for early detection. An example is Sybil from MIT and Mass General Cancer Center. This application was “trained” on low-dose chest computed tomography scans, and for patients undergoing lung cancer screening, Sybil was able to look at an image and accurately predict the risk of that patient developing lung cancer within 6 years.2 Another success comes from Boca Raton Regional Hospital, where an AI tool that estimates breast cancer risk, trained on millions of breast scans, showed a 23% increase in cancer cases identified during breast cancer screening compared with scans assessed by a radiologist.3

Natural Language Processing (NLP)

This tool includes applications such as speech recognition, text analysis, translation, and other language-related goals.1 It gives programs the ability to read, understand, and derive meaning from human language. In healthcare, statistical NLP can extract disease conditions, medications, and treatment outcomes from patient notes and other electronic health records. Examples are preparing reports on radiology examinations, recording patient interactions with speech-to-text conversion in the electronic health record, and deriving billable information from clinical notes for transfer into standardized medical codes. As a note, in everyday life this tool can assess conversations on social media and detect one’s intentions, desires, and choices in order to send a targeted advertisement.
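As a rough illustration of the extraction idea, the Python sketch below pulls condition and medication mentions out of a free-text note with simple term matching. Real statistical NLP systems rely on trained models and standardized medical vocabularies; the short term lists here are hypothetical stand-ins.

# Toy term lists standing in for real medical vocabularies.
CONDITIONS = ["hypertension", "diabetes", "lung cancer"]
MEDICATIONS = ["metformin", "lisinopril", "capecitabine"]

def extract_entities(note):
    """Return the known condition and medication terms found in a note."""
    text = note.lower()
    return {
        "conditions": [term for term in CONDITIONS if term in text],
        "medications": [term for term in MEDICATIONS if term in text],
    }

note = "Patient with diabetes and lung cancer, started on capecitabine."
print(extract_entities(note))
# {'conditions': ['diabetes', 'lung cancer'], 'medications': ['capecitabine']}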

A prominent example is the chatbot, a computer program that simulates human conversation with an end user. Chatbots can use conversational AI techniques like NLP to understand a user’s questions and automate responses to them. Penn Medicine created an augmented intelligence chatbot named Penny that used text-based, bidirectional conversational interactions to guide patients through potentially complex regimens.4 Patients with gastrointestinal cancer taking oral chemotherapy had a 70% medication adherence rate. Patient feedback described an acceptable way of interacting with the care team, an additional layer of support, and increased confidence in taking the medication across the treatment continuum of care.
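The Python sketch below shows the basic pattern of matching a patient’s message against known intents and replying automatically. It is not how Penny is implemented; production chatbots use far richer NLP, and every keyword and canned reply here is hypothetical.

# Hypothetical intents mapped to canned replies; unmatched messages escalate.
RESPONSES = {
    "took": "Great, I've logged today's dose. See you tomorrow!",
    "missed": "Thanks for letting me know. I'll share this with your care team.",
    "side effect": "I'm flagging this for your care team; a nurse will follow up.",
}

def reply(message):
    """Match the patient's text against known intents; escalate if unsure."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "I'm not sure I understood. I'll route this to a human navigator."

print(reply("I took my dose this morning"))
print(reply("Having a side effect, my hands are peeling"))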

Rule-Based Expert Systems

A practical definition comes from Akula’s chapter “Rule-Based Systems for Medical Diagnosis”5: “Rule-based expert systems are expert systems in which the knowledge is represented by production rules. A production rule, or simply a rule, consists of an IF part (a condition or premise) and a THEN part (an action or conclusion). IF condition THEN action (conclusion).” Examples are clinical decision support, where rules encoding clinical practice guidelines support patient decisions, and alerting systems that, when combined with telemonitoring, can enable more efficient clinical care through automated alerts at the earliest sign of deteriorating patient health. By combining real-world data from tools such as biosensors, watches, smartphones, conversational interfaces, and other instruments, patient behavior can be encouraged or directed in a more positive direction throughout cancer care.
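A minimal Python sketch of the IF-THEN production-rule pattern appears below. The thresholds and alert texts are hypothetical, not actual clinical guidelines; a real system would encode vetted guideline logic and connect to live telemonitoring feeds.

# Each rule pairs an IF part (a condition over patient data) with a THEN
# part (an action or conclusion). All thresholds here are hypothetical.
RULES = [
    (lambda p: p["temp_c"] >= 38.0 and p["on_chemo"],
     "ALERT: possible febrile episode; notify care team"),
    (lambda p: p["weight_loss_pct"] > 5.0,
     "FLAG: significant weight loss; consider dietitian referral"),
]

def evaluate(patient):
    """Fire every rule whose IF part matches the patient's current data."""
    return [action for condition, action in RULES if condition(patient)]

readings = {"temp_c": 38.4, "on_chemo": True, "weight_loss_pct": 2.0}
for alert in evaluate(readings):
    print(alert)  # prints the febrile-episode alert for these readings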

Physical Robots

These industrial automatons have been in factories and warehouses for years, performing predefined tasks like lifting, repositioning, or assembling objects. In healthcare, they may deliver supplies, and robotic surgery allows surgeons to improve their ability to see and to perform precise, minimally invasive surgical tasks. A related technology, healthcare robotic process automation, uses computer programs on servers to perform prior authorization, update patient records, or support billing.

AI is used throughout the cancer continuum, as the examples above show. In the prevention and risk-reduction domain, genetic risk calculations for individuals inform screening, chemoprevention, and, if needed, treatment decisions based on personal risk factors. An example in the screening phase comes from Denmark and the United States, where clinical data from 6 million patients were analyzed with machine learning models trained on the sequence of disease codes in clinical histories, that is, disease trajectories. The model was able to identify individuals at high risk of developing pancreatic cancer up to 3 years earlier by using medical records including family history and the presence of genetic mutations.6 Studies in the United Kingdom and the Netherlands showed that machine learning could predict the recurrence of lung cancer better than a model built on the TNM staging system, potentially leading to earlier retreatment and improved outcomes for patients.7

Barriers to AI

With all the positives AI can bring to healthcare, such as helping clinicians with informed, real-time decision-making, actionable insight, predictive analytics, and opportunities to avoid serious complications, the barriers must be understood to create a path forward. This relates to the adage “one gets out what one puts in” in terms of data. Tachkov and colleagues had one team perform a scoping literature review while a second team ran parallel focus group meetings with healthcare team members, during which the list of barriers identified by the scoping review was reviewed, updated, and extended on a continuous basis.8 After completing these tasks, the teams merged the lists, resolved overlaps, and categorized the barriers into 5 groups: data-related, methodological, technological, regulatory and policy-related, and human factor–related barriers.

Data hurdles included the reliability, validity, and accuracy of data, especially where data entry or self-reporting produced unstructured data, such as electronic medical records or imaging reports, that was a challenge to aggregate and analyze, along with the lack of interoperability across systems. The interoperability challenge involved electronic medical records from different service providers, as well as limited multinational data collection and analysis due to coding differences between countries. Another hurdle was systemic bias via upcoding, where hospitals bill for a more complex treatment than is reflected in the care of the patient. Racial biases in data were noted, stemming from a lack of data standards and accountability around race and ethnicity. The misuse of racial and ethnic data to inform diagnosis and treatment plans, and the implementation of algorithms that are not inclusive of race factors, can exacerbate socioeconomic inequities in healthcare. Such models may perform well, but they are not generalizable.

Noted methodological barriers were the bias of AI to favor subgroups with better information, lack of transparency in protocols for data collection methods, inability to text mine or use natural language processing algorithms due to the lack of standardized medical terms in the local language, and limited reproducibility due to the complexity of the methods. Interestingly, the lack of methodological transparency of learning models, or the “black box” phenomenon, was cited. This phenomenon is described as “any AI system whose inputs and operations aren’t visible to the user or interested party,” one that “arrives at conclusions or decisions without providing any explanations as to how they were reached.”9 If a patient is informed that an image has led to a diagnosis of cancer, the individual may want to know why.

Technological barriers revolve around costs: creating and sustaining the infrastructure to support AI; the expense to collect, secure, and store data; and the investment needed to improve data validity. This also includes building an AI-competent healthcare workforce through initial training and ongoing education as the technology develops. One mitigating factor is that healthcare providers should see a return on investment in the form of reduced administrative burdens.

Regulatory and policy-related barriers center on the sensitive nature of the data that valid AI tools require. There are compliance issues with managing a high volume of sensitive information and, on the other side, a lack of access to patient-level databases due to data protection regulations. And are decision makers open to relying on AI-based real-world evidence if they face the black box phenomenon? Patient privacy is critical for acceptance and consent by patients and medical professionals. How can regulations reassure patients, clinicians, and other stakeholders that AI is being used in an appropriate manner? Regulators are responding: the FDA has taken steps to increase oversight of AI-enabled devices, including releasing an evolving action plan in January 2021.10 The World Health Organization is calling for caution when using AI model tools and has released 6 core principles to promote the ethical use of AI.11 These principles include (1) protecting human autonomy; (2) promoting human well-being, human safety, and the public interest; (3) ensuring transparency, explainability, and intelligibility; (4) fostering responsibility and accountability; (5) ensuring inclusiveness and equity; and (6) promoting AI that is responsive and sustainable.

The human-related barriers mirror the technological barriers: there is a lack of education and training in applying AI methods, such as natural language processing and machine learning, to outcomes research, which in turn would generate AI-driven scientific evidence. There is also concern about a lack of knowledge in data governance, data ownership, and data stewardship, and a lack of expertise among decision makers regarding the methods and use of AI-driven scientific evidence.

Richardson and colleagues conducted focus groups examining patient views of diverse applications of AI in healthcare.12 Participants were supportive of healthcare AI but wanted assurances that AI tools are well-tested and accurate. They desired oversight and regulatory protections against potential harm. They were comfortable receiving recommendations generated by AI tools but wanted personal treatment decisions and the monitoring of ongoing care to remain with a human provider. Preservation of patient choice and autonomy was described as patients having the right to know that an AI tool is being used in their care and being able to opt out of AI involvement. Participants voiced concerns that AI tools might increase their healthcare costs and that recommendations could steer treatment choices based on insurance coverage. Citing personal experiences with errors they had found in their own health records, they questioned how data integrity would be ensured. Interestingly, dependency on technology was raised, with comments about system-level crashes, AI system hacking, and fear that healthcare providers could easily become overly dependent on AI tools. Tebra, a healthcare technology company, surveyed 1000 Americans and 500 healthcare professionals regarding the use of AI in healthcare, and the results echoed Richardson’s findings.13

Judy Faulkner, Epic Systems founder and CEO, shared new plans for the company’s electronic health record software to address many of the noted AI barriers.14 The plans include offering ongoing training for workers struggling with its software, as well as helping medical and nursing students learn the software. Epic will expand its patient databases so that sharing health information for research and treatment purposes becomes easier. The company is working with Microsoft to integrate AI into Epic’s software to save providers’ time; one example is AI-based summarization of recorded conversations between physicians and patients to create first drafts of reports, along with AI-assisted search of medical and research databases.

Overall, the solutions to these barriers are to standardize clinical practice and to document in a way that machines can understand. On the human side, it is a matter of trust: the AI developer community must find common working ground with frontline clinicians by identifying a problem that needs to be solved. Then a highly accurate and safe-to-use tool can be integrated into the current workflow to help reduce the cognitive burden of tasks. AI must deliver a tailored message to the healthcare professional that makes sense for the individual patient. The technologies that will be most helpful are the ones that keep patients out of hospitals and prevent unnecessary visits. And it will be necessary to curb incidents in which patients receive medical information from AI systems that they would prefer to receive from an empathetic clinician.

Vision for the Future

The reach of AI technology in healthcare is promising, powerful, and important, but it needs to be developed, deployed, and adopted into healthcare practice responsibly. Some healthcare jobs will be automated, such as those that involve handling digital information in administrative duties, as well as radiology or pathology CAD tasks. But direct patient care is still needed to operate robotic surgery, explain AI-generated results, and personalize the individual’s care. Many factors depend on the human side, such as patient and caregiver acceptance, as well as the costs of implementation, education, and regulatory compliance. As AI technology moves forward with new opportunities and experiences, hopefully it will enrich the way healthcare professionals diagnose, treat, and manage cancer across the care continuum.

References

  1. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6:94-98.
  2. Mikhael PG, Wohlwend J, Yala A, et al. Sybil: a validated deep learning model to predict future lung cancer risk from a single low-dose chest computed tomography. J Clin Oncol. 2023;41:2191-2200.
  3. Sylver A. AI Technology Helping Diagnose Breast Cancer Even Earlier. Baptist Health Boca Raton Regional Hospital. https://baptisthealth.net/baptist-health-news/ai-technology-helping-diagnose-breast-cancer-even-earlier. 2023.
  4. Siwicki B. Penn Medicine uses AI chatbot ‘Penny’ to improve cancer care. Healthcare IT News. www.healthcareitnews.com/news/penn-medicine-uses-ai-chatbot-penny-improve-cancer-care#:~:text=Nearly%204%2C000%20medication%2Drelated%20text,approximately%2093%25%20were%20accurately%20interpreted. 2023.
  5. Akula VSG. Rule-Based Systems for Medical Diagnosis. In: Kumar AVS, ed. Fuzzy Expert Systems for Disease Diagnosis. IGI Global; 2015:21-44.
  6. Placido D, Yuan B, Hjaltelin JX, et al. A deep learning algorithm to predict risk of pancreatic cancer from disease trajectories. Nat Med. 2023;29:1113-1122.
  7. The Royal Marsden. AI study “exciting first step” towards improving post-treatment surveillance of lung cancer patients. www.royalmarsden.nhs.uk/ai-study-post-treatment-surveillance-lung-cancer. 2022.
  8. Tachkov K, Zemplenyi A, Kamusheva M, et al. Barriers to use artificial intelligence methodologies in health technology assessment in Central and East European countries. Front Public Health. 2022;10:921226.
  9. Yasar K. What is black box AI? TechTarget. www.techtarget.com/whatis/definition/black-box-AI.
  10. US Food and Drug Administration. FDA Releases Artificial Intelligence/Machine Learning Action Plan. www.fda.gov/news-events/press-announcements/fda-releases-artificial-intelligencemachine-learning-action-plan. 2021.
  11. World Health Organization. WHO calls for safe and ethical AI for health. www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health#:~:text=The%206%20core%20principles%20identified,AI%20that%20is%20responsive%20and. 2023.
  12. Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digit Med. 2021;4:140.
  13. Shryock T. AI Special Report: What patients and doctors really think about AI in health care. Medical Economics. www.medicaleconomics.com/view/ai-special-report-what-patients-and-doctors-really-think-about-ai-in-health-care. 2023.
  14. Diaz N. Judy Faulkner touts new plans for Epic. Becker’s Healthcare. www.beckershospitalreview.com/ehrs/judy-faulkner-touts-new-plans-for-epic.html. 2023.