SIIM 2018 – My experience

A personal review of the SIIM 2018 annual meeting and notable presentations and products I saw.

Full Disclosure: I am a member of the Society for Imaging Informatics in Medicine (SIIM).

I attended the 2018 SIIM Annual Meeting to present a research poster on data augmentation and class imbalance in medical imaging datasets. I’m planning to present an expanded version of the original abstract soon, so I won’t put the poster file up – I’d prefer to release the finished product after it has been reviewed as a conference submission.

Here are some presentations and vendors I thought were noteworthy.

Scientific Sessions:

One of the stand-out scientific sessions was from Peter D. Chang at UCSF, who gave an excellent deep learning presentation on a hybrid 3D-2D CNN ResNet model for brain hemorrhage detection, which essentially converted weak labels into strong labels.  Not only were the results impressive, with a missed-hemorrhage rate under 3%, but the framework was impressive as well.  Excellent work by Peter, who has announced himself as a welcome addition to the medical deep learning community with this.
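For readers unfamiliar with the hybrid approach, the rough idea is that a 2D network extracts features from each slice while 3D convolutions aggregate context across neighboring slices. The toy sketch below is my own PyTorch illustration of that general pattern – it is emphatically not Dr. Chang’s architecture, and every layer choice here is an assumption:

```python
# Toy sketch of a hybrid 3D-2D CNN (NOT the presented architecture):
# a 2D encoder extracts per-slice features, then shallow 3D convolutions
# mix context across slices for a study-level prediction.
import torch
import torch.nn as nn

class Hybrid2D3DNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # 2D encoder applied independently to every axial slice
        self.encoder2d = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 3D convolutions mix information across the slice dimension
        self.mixer3d = nn.Sequential(
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, slices, H, W)
        b, s, h, w = x.shape
        feats = self.encoder2d(x.reshape(b * s, 1, h, w))      # per-slice features
        _, c, fh, fw = feats.shape
        feats = feats.reshape(b, s, c, fh, fw).permute(0, 2, 1, 3, 4)
        return self.head(self.mixer3d(feats).flatten(1))

logits = Hybrid2D3DNet()(torch.randn(2, 24, 64, 64))  # 2 studies, 24 slices each
print(logits.shape)  # torch.Size([2, 2])
```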

Andrew Taylor, MD, PhD, from UCSF also gave an interesting presentation on pneumothorax detection, running an algorithm focused on discovering medium-sized to large pneumothoraces.  Their top model had an AUC of 0.96, with sensitivity/specificity of 0.87 and 0.91.  A lively discussion revealed the different priorities of a large institution (UCSF), more concerned with a missed large PTX (due to the volume of studies tied up in the queue), vs. private practitioners, who would be looking for the small pneumothorax prior to discharging a patient.  I chatted with him briefly after the presentation, and we discussed input image size and its effect on algorithmic accuracy – like me, he found that 256×256 is typically too small for this type of analysis, but that presently there is little difference between 512×512 and 1024×1024.
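For context on the numbers quoted above, here is a minimal, generic sketch of how AUC and sensitivity/specificity at an operating point are computed – toy labels and scores of my own invention, not the UCSF team’s data or code:

```python
# Generic metric computation for a binary detector (toy data).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])       # 1 = pneumothorax present
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.35, 0.2, 0.7, 0.05])

auc = roc_auc_score(y_true, y_score)
y_pred = (y_score >= 0.5).astype(int)             # assumed operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                      # fraction of true PTX flagged
specificity = tn / (tn + fp)                      # fraction of normals cleared
print(f"AUC={auc:.2f}  sens={sensitivity:.2f}  spec={specificity:.2f}")
```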

There was a fascinating presentation by Dr. Usha Raghavan of Philips Research on workflow modeling.  I was particularly interested in this presentation, as it is similar to a project I started (and put on the back burner) a few years ago.  Basically, they came to conclusions similar to mine when looking at turnaround times (TATs).  As we have optimized many TATs in the hospital patient care setting, there is a lower limit you can’t really get past.  This is because we are operating in the physical world, and some things take time – the patient has to get on the scanner table, MRI pulse sequences take a few minutes even when performed perfectly due to the physics, etc.  This squashes the bell curve up against the lower limit, while a fat tail can extend to the right – and that tail is where the opportunities for improvement lie.  This right-skewed shape is well described by a Gamma distribution, whose density for x > 0 is:

                     f(x; k, \theta) = \frac{x^{k-1}e^{-x/\theta}}{\Gamma(k)\,\theta^{k}}, \qquad \Gamma(k) = \int_{0}^{\infty} t^{k-1}e^{-t}\,dt
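To make the “physical floor plus right tail” idea concrete, here is a small sketch that fits a Gamma distribution to simulated turnaround times, with the location pinned at an assumed hard lower limit. The data, the 15-minute floor, and the parameters are all invented for illustration:

```python
# Fit a Gamma distribution to simulated turnaround times (minutes),
# pinning the location at an assumed physical lower limit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
floor_min = 15.0                                        # assumed hard lower limit
tats = floor_min + rng.gamma(shape=2.0, scale=10.0, size=1000)

a, loc, scale = stats.gamma.fit(tats, floc=floor_min)   # loc fixed at the floor
print(f"shape={a:.2f}  loc={loc:.1f}  scale={scale:.2f}")
print("median TAT:", stats.gamma.median(a, loc=loc, scale=scale))
```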

What was exciting about this presentation is that they used non-Gaussian distributions in a workflow setting.  They evaluated a number of non-standard distributions (Gamma, Lognormal, Weibull, Log-logistic), and the Log-logistic distribution outperformed all the others for their workflow modeling.  The log-logistic probability density is:

                     f(x; \alpha, \beta) = \frac{(\beta/\alpha)(x/\alpha)^{\beta-1}}{\left(1+(x/\alpha)^{\beta}\right)^{2}}

where x, α (scale), and β (shape) are all greater than zero.
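To make the model-selection step concrete, here is a generic sketch of how the four candidate distributions could be compared on TAT data by AIC – scipy calls the log-logistic distribution fisk. This is my own illustration on simulated data, not the Philips methodology:

```python
# Fit several right-skewed distributions to turnaround times and compare by AIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tats = 15.0 + rng.gamma(shape=2.0, scale=10.0, size=1000)   # simulated TATs

candidates = {
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "log-logistic": stats.fisk,       # scipy's name for the log-logistic
}
for name, dist in candidates.items():
    params = dist.fit(tats, floc=0)   # fix the location at zero
    k = len(params) - 1               # loc was fixed, not fitted
    loglik = np.sum(dist.logpdf(tats, *params))
    aic = 2 * k - 2 * loglik          # lower AIC = better fit
    print(f"{name:>12}: AIC = {aic:.1f}")
```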

In any case, the application of their model to a real-world scheduling process allowed the parent institution to save $650K annually.

One final presentation, by Mark S. Frank, MD, MBA, was a very practical one, comparing RIS data to revenue cycle generator data.  This comparison uncovered missing and unmatched charges that were impacting practice profitability.  While somewhat of a thankless job, the author recommended tying exams to revenue cycle analysis to avoid missed or lost charges.
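The underlying reconciliation is straightforward to sketch: join exams to charges on a shared key and flag exams that never matched a charge. The toy example below uses pandas; the column names and the accession-number key are my assumptions, not the presenter’s schema:

```python
# Flag RIS exams that have no matching charge in the billing extract.
import pandas as pd

ris = pd.DataFrame({"accession": ["A1", "A2", "A3", "A4"],
                    "exam": ["CT head", "CXR", "MRI brain", "US abdomen"]})
billing = pd.DataFrame({"accession": ["A1", "A3"],
                        "charge": [420.00, 1250.00]})

merged = ris.merge(billing, on="accession", how="left", indicator=True)
missed = merged[merged["_merge"] == "left_only"]     # exams never billed
print(missed[["accession", "exam"]])
```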

Vendors:

Of course, nearly every vendor present was now touting their “AI” capabilities.  That is going to mean different things to different people – most readers of this blog will recognize the claims as being more along the “AI-ready” or “AI-compatible” vein.

One of the traditional vendors I was more impressed with was TeraRecon.  TeraRecon is a middleware imaging provider that previously focused on high-quality visualization algorithms (3D multiplanar reconstructions, shaded volumetric displays, MIPs, MinIPs, image fusion for PET/CT and CT/MRI).  They were certainly a market leader in that segment, as in the past, enterprise PACS vendors did not have nearly the same level of image-processing capability that TeraRecon did.  Think Mercedes-Benz vs. Ford for a comparison.  I have used TeraRecon in the past and felt it produced superior images to competing algorithms.  However, over time, PACS vendors improved their offerings and included similar features.  And by the way, there is this AI thing on the horizon – what’s that going to do to radiologists and reads?
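As an aside for non-imaging readers, those projections reduce to simple operations on the volume: a MIP keeps the maximum voxel value along the projection axis, a MinIP the minimum. A minimal numpy sketch, with a toy random volume standing in for a real CT series:

```python
# Maximum / minimum intensity projections along the slice axis.
import numpy as np

volume = np.random.rand(120, 512, 512)   # (slices, rows, cols) toy CT volume
mip = volume.max(axis=0)                 # maximum intensity projection
minip = volume.min(axis=0)               # minimum intensity projection
print(mip.shape, minip.shape)            # (512, 512) (512, 512)
```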

I spoke with TeraRecon’s CEO, Jeff Sorenson, at some length about their product strategy, and he showed me the NorthStar Viewer.  NorthStar is marketed as “An AI-Enabled Medical Image Console”.  I would call it a PACS for AI; Jeff would strongly disagree, because it’s not really kosher/legally accurate to label it as such (the monitor is not FDA-approved for medical imaging, etc.), but I’m going by the “duck rule” here (looks like a duck, quacks like a duck…).

The user interface is beautiful – mouse-controlled (see:  multi-button mouse improves productivity) but designed for speed once a user is trained on it.  NorthStar interfaces tightly with EnvoyAI.  The EnvoyAI marketplace of algorithms, accessed through NorthStar, allows the user to choose which AI algorithm to apply to a study.  The user retains control, as they decide whether to include the results of the algorithm in the permanent medical record through the PACS (an important point in the medicolegal record).  More than one algorithm can be accessed.
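To illustrate that control flow – run an algorithm, then let the radiologist decide what enters the permanent record – here is a purely hypothetical sketch. The names and types are mine alone, not TeraRecon’s or EnvoyAI’s API:

```python
# Hypothetical sketch of the review-then-commit flow (invented names).
from dataclasses import dataclass

@dataclass
class AlgorithmResult:
    name: str
    findings: dict
    committed: bool = False    # True only once the radiologist approves

def review(result: AlgorithmResult, approve: bool) -> AlgorithmResult:
    # The radiologist, not the algorithm, decides what enters the record.
    result.committed = approve
    return result

r = review(AlgorithmResult("ptx-detector", {"pneumothorax": 0.91}), approve=True)
print(r)
```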

From a radiologist’s perspective, I can see how this would be attractive: it allows me to select the AI algorithm I want and keep the images I want, without making something a permanent part of the medical record that I don’t want.  There are multiple good reasons for this as a radiologist: you want to help your clinicians reach the correct result, you want to reduce uncertainty on the exam, etc.  This ‘radiologist-friendly’ approach might be a good piece of the puzzle to encourage AI adoption by radiologists in a friendly, as opposed to hostile, manner.

AGFA demonstrated the latest version of IMPAX.  Apparently rebuilt from the ground up, it remains a classic enterprise PACS workhorse for radiologists.  The integration between NorthStar and IMPAX appeared pretty seamless.  I’m not going to dwell on the many PACS vendors and their everything-but-radiology-images cousins, the enterprise imaging vendors, as that’s not my focus.  The dashboarders and cloud providers like Ambra Health were also there in force.

AI startups did not necessarily have booths at SIIM – a few of the more interesting players, like PhenoMX and two young Stanford grads trying to establish an “Underwriters Laboratory” for AI, came up in the many hallway conversations that happened organically.  However, Arterys was there with a booth of their own, separated from the rest of the startups.  I’ve previously written about CuraCloud, Lunit, Mindshare, Quantib, and Qure.ai.  In general, their offerings were more mature and progressive than at RSNA, with Quantib offering a neurodegenerative package and focusing on neuroimaging, and Qure.ai adding a neuro triage product to their chest offerings.

There were some new startups as well:

Pixyl appeared to be a new European AI startup in the neuroimaging space.  They seemed to be aiming at the -omics space with biomarker extraction.  Unfortunately, they were having connectivity issues when I visited, so I couldn’t see their offering.

Koios was exhibiting an ultrasound AI algorithm for breast mass characterization.  It basically takes two images from the ultrasound study and runs them through its classifier to render a BI-RADS class.
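As an illustration of the two-image pattern, here is a hypothetical two-view classifier in PyTorch – a shared encoder for both views feeding a small classification head. This is my own toy construction, not Koios’ actual model; the layer sizes and five-category output are assumptions:

```python
# Hypothetical two-view classifier: shared encoder, concatenated features.
import torch
import torch.nn as nn

class TwoViewClassifier(nn.Module):
    def __init__(self, n_categories=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(2 * 16 * 4 * 4, n_categories)

    def forward(self, view1, view2):
        z = torch.cat([self.encoder(view1), self.encoder(view2)], dim=1)
        return self.head(z)

model = TwoViewClassifier()
logits = model(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128))
print(logits.argmax(dim=1))   # predicted category index
```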

While I applaud Koios for tackling the difficult area of AI sonography, it needs to be said that this will be an operator-dependent assessment.  The sonographer’s choice of images will certainly influence the classifier’s decision.  In my clinical experience, I have seen breast cancers that looked perfectly benign on many images, with the only clue that they were cancerous being some marginal irregularity or a tail in ONE sonographic image only.  If that image is not chosen, the possibility of a false negative may exist (I can’t say for certain, because I have not clinically evaluated their algorithm).

In current practice, cine sweeps are routinely performed through breast lesions to document their characteristics.  I would hope that future iterations of Koios include a way to evaluate the entire lesion from such sweeps, as I think it would spur adoption of the product.
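One simple way such a sweep could be scored – my suggestion, not a Koios feature – is to classify each frame and aggregate so that a single suspicious frame can drive the lesion-level call:

```python
# Aggregate per-frame malignancy scores across a cine sweep.
import numpy as np

frame_scores = np.array([0.05, 0.08, 0.04, 0.92, 0.10])   # per-frame P(malignant)
max_pool = frame_scores.max()                    # most suspicious frame wins
noisy_or = 1 - np.prod(1 - frame_scores)         # any-frame-positive evidence
print(f"max={max_pool:.2f}  noisy-OR={noisy_or:.2f}")
```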

Medexprim, a French AI startup and SIIM Innovation Challenge semi-finalist, was offering a somewhat different product – a “Radiomics Enabler”.  The software suite is more of a big-data product that interfaces with the user’s PACS/RIS to query and extract-transform-load relevant images, with anonymization.  It is a research-oriented, dataset-building tool.  What’s cool about it is that it is open source!  I would hope it remains supported, as we need more open-source tools in our community (remember – open source does not mean free to rip off).
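The anonymize-on-extract step is easy to sketch generically with pydicom. This is my illustration, not Medexprim’s code, and a production de-identification pass would cover many more tags than shown here:

```python
# Blank direct identifiers in a DICOM file before export.
import pydicom

def anonymize(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)
    ds.PatientName = "ANON"
    ds.PatientID = "ANON-0001"
    ds.PatientBirthDate = ""
    ds.remove_private_tags()     # drop vendor-specific private elements
    ds.save_as(path_out)

# anonymize("study/IM0001.dcm", "export/IM0001.dcm")
```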

Oxipit, another SIIM Innovation Challenge semi-finalist, from Lithuania and founded by two Kaggle Grandmasters, had two interesting offerings: a “similar image search” algorithm that crawls a database to provide similar images for comparison, and what appeared to be a CNN-RNN-based multi-class chest X-ray classifier capable of identifying 53 different classes, trained on a database of over 500K annotated images.  What was neat about the classifier was not only the high number of classes, but also the output, which included a degree of localization (left/right, low-mid-high) – in English or Lithuanian text.
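Similar-image search is typically built by embedding every image into a feature vector and answering queries with nearest-neighbor lookup in that space. The sketch below is a generic illustration of that pattern – random vectors standing in for CNN features – and makes no claim about Oxipit’s implementation:

```python
# Nearest-neighbor search over image embeddings.
import numpy as np
from sklearn.neighbors import NearestNeighbors

embeddings = np.random.rand(500, 128)    # stand-in for CNN features of 500 CXRs
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(embeddings)

query = np.random.rand(1, 128)           # feature vector of the study being read
dist, idx = index.kneighbors(query)
print("most similar studies:", idx[0])
```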

I didn’t get to see as many of the other startups in depth as I wanted to – time is always limited.  I liked SIIM; hopefully there will be more machine learners there in time.  Now on to C-MIMI!

As before, I’m doing this on my own dime and I have no financial relationships to disclose at this time with any mentioned entities.