Artificial intelligence in veterinary medicine raises ethical challenges
The use of artificial intelligence (AI) is increasing in the field of veterinary medicine, but veterinary experts warn that the rush to embrace the technology raises some ethical considerations.
“A big difference between veterinary and human medicine is that vets have the ability to euthanize patients – which can be for a variety of medical and financial reasons – so the stakes of diagnoses provided by AI algorithms are very high,” says Eli Cohen, associate clinical professor of radiology at the NC State College of Veterinary Medicine. “Human AI products need to be validated before they come to market, but currently there is no regulatory oversight of veterinary AI products.”
In a review for Veterinary Radiology &amp; Ultrasound, Cohen discusses the ethical and legal questions raised by veterinary AI products currently in use. He also highlights key differences between veterinary AI and the AI used in human medicine.
AI is currently marketed to veterinarians for radiology and imaging, largely because there are not enough veterinary radiologists in practice to meet the demand. However, Cohen points out that AI image analysis is not the same as a trained radiologist interpreting images in light of an animal’s medical history and unique situation. Although AI can accurately identify some conditions on an X-ray, users need to understand potential limitations. For example, the AI may not be able to identify every possible condition, and may not be able to accurately distinguish between conditions that look similar on X-rays but have different courses of treatment.
Currently, the FDA does not regulate AI in veterinary products as it does in human medicine. Veterinary products may enter the market without oversight other than that provided by the AI developer and/or company.
“How AI works is often a black box, meaning that even the developer doesn’t know how it arrives at decisions or diagnoses,” says Cohen. “Couple that with a lack of transparency by companies in AI development, including how the AI is trained and validated, and you’re asking vets to use a diagnostic tool without determining whether or not it’s accurate.
“Since vets often get a single visit to diagnose and treat a patient, and cases are not always followed up, AI can provide erroneous or incomplete diagnoses that a vet will have limited ability to identify unless the case is reviewed or a serious outcome occurs,” says Cohen.
“AI is marketed as a substitute for, or equivalent to, a radiologist’s interpretation because there is a market gap. The best use of AI going forward, and certainly in this initial phase of deployment, involves what is called a ‘radiologist in the loop,’ where AI is used in conjunction with a radiologist, not instead of one,” says Cohen. “This is the most ethical and defensible way to employ this emerging technology: use it to help more vets and pets access radiologist consultations, and most importantly to have domain experts troubleshooting the AI and preventing adverse outcomes and patient harm.”
Cohen recommends that veterinary experts work with AI developers to ensure the quality of the datasets used to train the algorithm, and that third-party validation testing be done before AI tools are released to the public.
“Almost everything a veterinarian can diagnose on radiographs has the potential to be medium-to-high risk, meaning it leads to changes in medical treatment, surgery, or euthanasia, either because of the clinical diagnosis or client financial limitations,” says Cohen. “That level of risk is the threshold that the FDA uses in human medicine to determine whether there should be a radiologist in the loop. We would be wise as a profession to adopt a similar model.
“AI is a powerful tool and will change how medicine is practiced, but the best practice going forward will be to use it in concert with radiologists to improve access to and quality of patient care, rather than using it to replace those consultations.”