Is attention going to be the radiologist’s biggest challenge in the age of AI?

After a period of disbelief, radiology appears to be recognizing the high likelihood that Artificial Intelligence will be used in clinical practice. The details have yet to be worked out, but they will be over the next few years.

AI systems are unlikely to render final interpretations without a radiologist's supervisory review. Yet with productivity enhancement a tantalizing promise of AI implementation, radiologists will likely have even less time to review each study. It's a catch-22.

Narrow AIs designed for mission-critical purposes like medical diagnosis will have potentially severe consequences when they fail. Such systems should be conceived not only with the excitement of new technology, but also with sober recognition of the serious consequences of algorithmic missteps. Failure mode engineering should be built into an AI algorithm to cover the potential sources of failure: cascading failures, edge cases, and corner cases.
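As an illustration (not a prescription), here is a minimal sketch of one such safeguard: a triage wrapper around an AI model's output that refuses to present a silent answer on low-confidence or out-of-distribution inputs and instead escalates the study for mandatory human review. The thresholds, class names, and scores below are hypothetical stand-ins; real values would come from validation studies.

```python
from dataclasses import dataclass
from typing import List, Sequence

# Hypothetical thresholds; in practice these would be set from validation data.
CONFIDENCE_FLOOR = 0.80    # below this, never present a finding as settled
OOD_SCORE_CEILING = 0.30   # above this, treat the study as out-of-distribution

@dataclass
class Finding:
    label: str
    confidence: float

@dataclass
class TriageResult:
    findings: Sequence[Finding]
    disposition: str          # "assist" or "escalate_to_radiologist"
    reasons: List[str]

def triage(findings: Sequence[Finding], ood_score: float) -> TriageResult:
    """Decide whether AI output may be shown as assistance or must be escalated."""
    reasons: List[str] = []
    if ood_score > OOD_SCORE_CEILING:
        reasons.append(f"study looks unlike training data (ood_score={ood_score:.2f})")
    low_conf = [f for f in findings if f.confidence < CONFIDENCE_FLOOR]
    if low_conf:
        reasons.append(f"{len(low_conf)} finding(s) below the confidence floor")
    disposition = "escalate_to_radiologist" if reasons else "assist"
    return TriageResult(findings=findings, disposition=disposition, reasons=reasons)

if __name__ == "__main__":
    # Hypothetical example: one uncertain finding forces escalation to the radiologist.
    result = triage(
        findings=[Finding("pneumothorax", 0.62), Finding("no acute fracture", 0.97)],
        ood_score=0.12,
    )
    print(result.disposition, result.reasons)
```

The design choice worth noting is that the wrapper fails loudly: any edge case it cannot vouch for is pushed back to the human, rather than allowed to pass quietly into the report.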

However, there is another potential source of error besides the AI itself: the human radiologist. While good UI/UX design is imperative, even the best design still depends on human factors. And we fight a serious enemy: inattention.

Medical students fight this while reviewing cases with attendings; trainees and attendings alike quickly discover that when THEY are personally responsible for a study's interpretation, the task is intensely stimulating. That sense of responsibility is probably what drives the radiologist's heightened awareness of the visual stimuli in the study.

Could we inadvertently be creating a new cascading failure mode by changing the nature of the task? The practice of radiology is currently a high-attention task: we scrutinize minute details of every image in a study. Relying on a trusted algorithm may decrease attention. At some point, after months of positive experiences, the radiologist will get comfortable with the AI's decisions and relax their awareness and attention. The high-attention task of practicing radiology will have been turned into a low-attention one.

Super-human accuracy may thus be a danger to radiologist attention: the better the algorithm, the less attention the practitioner may commit to the study. Withdrawing the radiologist's agency, in part or in full, re-introduces a risk from inattention.

Radiologists will not be alone in this risk; any physician using AI systems for clinical decision support will face it. Add the chronic tiredness that accompanies long shifts or sleep-deprived call, and the risk of inattention rises further.

Physician falls asleep

We must approach this shift in our practice with diligence, maintaining the high standard of radiology practice that exists today. To do this, we must notice when something goes wrong, learn from these inevitable errors, and thereby improve our practice. This post only touches on the subject briefly; a more comprehensive evaluation is warranted.