AI Imaging
RSNA 2017 - radiology, AI, artificial intelligence, deep learning

AI and Machine Learning Startups at RSNA 2017


RSNA is a huge trade show in Chicago, held right after Thanksgiving.  As a Floridian, I tend to avoid such climes, but the pre-meeting buzz about machine learning was significant, so I spent a day or two there (and brought the warm weather with me).

While AI was the buzz at RSNA, it was not afforded the same regard as more established disciplines in Radiology.  The Machine Learning marketplace was in the rear of the smaller exhibit hall, the Machine Learning posters were at the back of the poster exhibit room, and the Machine Learning sessions were in medium-sized rooms that filled to fire code capacity 30 minutes before they began.

There is a lot of negativity, understandably, about “vaporware” and “not ready for deployment” products.  I’m going to avoid commenting on that, and focus instead on the positive.  This post focuses on the startups – established players and resellers will be covered separately.

Zebra Medical – I became aware of Zebra over a year ago when I sat at a lunch table with them and Arterys.  Zebra has a suite of about 15 focused applications for automated AI interpretation.  Examples include fatty liver quantification, emphysema evaluation, automated coronary calcium scores, and a rather novel application of radiomics to CT of the abdomen: deriving a DEXA-equivalent score to evaluate for osteopenia.  I’ve sat in the technical presentation for the synthetic DEXA evaluation, and it has some technical teeth behind it, with a number of cascading neural networks (a sign of stronger applications).  I wasn’t able to get a look at the UI at the Zebra booth, but I did see the integrated product at Carestream’s booth, where the Zebra product was integrated into the Carestream Enterprise PACS.  The integration was very attractive and well designed.  Zebra’s business plan is obviously targeted first to extra-US facilities.  They are probably the current leader in data aggregation among the startups, reportedly with access to over a million non-US imaging studies.


Arterys: notable for being the first cloud-based SaMS (Software as a Medical Service) machine learning firm to achieve FDA approval, they are now capitalizing on it by expanding from their FDA-approved cardiac imaging product to a lung CAD product for pulmonary nodules with volumetric analysis and sequential follow-up, as well as liver CAD using the LI-RADS criteria.  Arterys has put a lot of time and effort into their UI and it shows, winning recent innovation awards.  That said, I don’t believe they have achieved the same level of enterprise PACS integration as Zebra.  Fortunately, their products are different.  Arterys is forthright about their desire to add other applications to their platform and UI, not necessarily developed in-house.

CuraCloud: CuraCloud is a Seattle-based startup with a decent number of engineers behind it.  They demoed a deep learning system built around three applications: brain bleeds, pulmonary embolism, and pneumothorax.  The system is cloud-based, with a gorgeous UI.  The brain application seemed to have unusual sharpness, in both the scan and the segmentation, making me wonder if they were using a super-resolution (resolution enhancement) algorithm.  Their captioning suggested a combined CNN-RNN model.  Of course, it’s only a demo, but this and their software engineers’ publication lists suggest a degree of technical competence.  Their data is sourced from both the US and China.

Qure.AI: Qure is out of India, with multiple cloud-based applications in the segmentation and detection arena, one or two of which hover on the threshold of radiomics.  Their flagship product is a robust chest X-ray algorithm for screening and interpretation.  The algorithm is apparently based on >200,000 studies from hospitals in India, and reportedly passed validation on the 100,000+ image ChestX-ray14 database as well.  They apply a heatmap-style algorithm to show where the model is picking up abnormalities – instead of a CAM model, they use an occlusion (“lesioning”) algorithm similar to Zeiler and Fergus.  Radiology in India is different from that practiced in the developed world, with a deficit of radiologists to read studies – a chest X-ray with a significant finding can wait up to three months to be read!  This is a home-grown approach to a need in the developing world.
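For readers unfamiliar with the Zeiler-and-Fergus style of “lesioning,” the idea is simple: mask one region of the image at a time and watch how much the classifier’s score drops.  Here is a minimal sketch, where `predict` is a made-up stand-in for a real model – any function mapping an image array to a score would do:

```python
# Occlusion-sensitivity heatmap in the spirit of Zeiler & Fergus:
# slide a masking patch over the image and record how much the
# classifier's score drops when each region is hidden.
import numpy as np

def predict(image):
    # Toy stand-in classifier: the "abnormality score" is just the
    # mean brightness of the image (a real model would be a CNN).
    return float(image.mean())

def occlusion_heatmap(image, patch=4, stride=4, fill=0.0):
    base = predict(image)
    h, w = image.shape
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # mask one region
            heat[i, j] = base - predict(occluded)      # score drop = importance
    return heat

# A bright "lesion" in the top-left corner should dominate the map.
img = np.zeros((16, 16))
img[0:4, 0:4] = 1.0
hm = occlusion_heatmap(img)
print(hm.argmax())  # flat index of the most influential patch
```

The appeal of this approach over CAM is that it is model-agnostic: it needs only forward passes, at the cost of one inference per patch position.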

Lunit: Lunit is a South Korean machine learning firm focusing on chest radiography and mammography.  The chest radiography application for pulmonary nodules displayed an attractive heatmap – a bit showy, and distracting from a radiologist’s standpoint.  They were probably applying a heatmap to the last convolutional layer before the fully connected layers, similar to what the Stanford group recently did with their CheXNet paper (which I have discussed here).  The heatmap seems very similar to those presented online elsewhere.

Late addendum: Lunit has a slick way of augmenting their data set: rads can “try out” their product by uploading CXR cases.  A creative way to build a new test set.  Well done.
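The last-conv-layer heatmap mentioned above is the Class Activation Mapping (CAM) idea: weight each feature map from the final convolutional layer by the fully connected weights for the class of interest and sum.  A minimal sketch, with made-up feature maps and weights standing in for a trained network:

```python
# Class Activation Mapping (CAM) sketch: the heatmap is a weighted
# sum of the last convolutional layer's feature maps, with weights
# taken from the fully connected layer for the target class.
import numpy as np

def class_activation_map(feature_maps, fc_weights):
    # feature_maps: (channels, H, W) activations from the last conv layer
    # fc_weights:   (channels,) FC weights for the class of interest
    cam = np.tensordot(fc_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0)                              # keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                                  # normalize to [0, 1]
    return cam

# Two 4x4 feature maps; the class weights favor the first map.
fmaps = np.stack([np.eye(4), np.ones((4, 4))])
weights = np.array([2.0, 0.5])
cam = class_activation_map(fmaps, weights)
print(cam.round(2))
```

In practice the low-resolution map is then upsampled to the input image size and overlaid as the colorful heatmap seen in the booth demos.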

LPixel is the Japanese counterpart to Lunit, with more of a focus on CT imaging and automated diagnosis, offering a suite of apps similar to Zebra’s.  LPixel’s servers were down when I visited, and the rest of the demonstration was in Japanese, which limits my ability to report further on them.

RadLogics showed a machine learning and deep learning application they call the “Virtual Resident,” demonstrated for thoracic applications.  One nice thing about the RadLogics approach is that findings from their cloud app feed forward into the Powerscribe report for radiologist review, and they interface with AGFA PACS.  They are focusing on productivity improvement, and are looking for additional apps to add to their framework as well.

Quantib: Quantib is a Dutch deep learning neuroimaging company with an application that evaluates white matter disease.  The use case I was shown was differentiating oligodendrogliomas from other brain lesions, raising the possibility of non-invasive diagnosis, which, for neuro-oncologic patients, would be a tremendous benefit.  While the use case is specific, it is very useful to the few patients afflicted with this terrible disease.  Again, bordering on radiomics territory.

Kheiron: a mammography CAD company out of the UK built on deep learning, Kheiron has the inimitable Hugh Harvey MD associated with them, so I have to take them seriously.  I’m no longer a mammographer, and therefore ill-suited to discuss the intricacies of breast CAD, but I suspect their product is solid.

DIA is an Israeli tech company led by a woman CEO whose machine learning product is focused on ultrasound.  Their product is FDA approved, and they have recently inked a partnership with GE.  Their approved products focus on echocardiographic measurements, with algorithms that take some of the technologist-dependent element of ultrasound out of the picture (for those not in the know, this is more variable than you think).  They are planning to expand into new areas, and I hope they continue with ultrasound, which is difficult to work with compared to other modalities in radiology.

Mindshare Medical: Mindshare is about as close as we came to a radiomics company at RSNA this year.  The principals have a long history in computer vision, and the Mindshare application characterizes lung masses by malignancy risk.  It appears to be more of a hand-crafted supervised machine learning entity, but I would not be surprised if it also incorporates deep learning in an ensemble (if not, hint hint).  As opposed to just using the size of the mass as a predictor of concern (cf. the Fleischner Society guidelines), I was shown some stealth nodules with high likelihoods of malignancy where the radiomics approach would prompt rapid biopsy instead of the usual “wait and see” approach.  Their UI was original and informative, but I’m not sure how well it will integrate with enterprise PACS systems.  I would like to see more applications like this, as I think it’s where Radiology needs to go in the future.

Subtle Medical – this is a new exhibitor, a Stanford-based startup which presented their algorithm as a novel way to decrease the administered dose of both CT and MRI contrast agents.  While they did not have a demo, and are probably an early-stage firm, the use case is compelling from both a financial and a patient-safety standpoint.  Wait – sorry, we’ve just decided that both iodinated and gadolinium-based contrasts are safe again, despite black box warnings a decade ago regarding NSF and perpetual renal toxicity concerns for iodinated contrast.  Until next decade, when another study comes out.  But in any case, reducing the administered contrast dose by 5x-10x would have direct cost savings for imaging facilities, particularly those in the developed world.  While I did not attend it, my contacts who are knowledgeable in deep learning enjoyed their presentation and were impressed by them.  Bracco or Bayer would be smart to buy these guys out before they cut into their profits.
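To see why the economics are compelling, a back-of-the-envelope calculation helps.  The study volume, dose, and price below are purely illustrative assumptions of mine, not figures from Subtle Medical or any vendor:

```python
# Back-of-the-envelope savings from a 10x contrast dose reduction.
# All inputs are illustrative assumptions, not vendor figures.
studies_per_year = 5000   # assumed contrast-enhanced study volume
dose_ml = 15.0            # assumed full contrast dose per study, mL
cost_per_ml = 10.0        # assumed contrast cost, USD per mL
reduction = 10            # dose reduced 10x

full_cost = studies_per_year * dose_ml * cost_per_ml
reduced_cost = full_cost / reduction
print(f"annual contrast savings: ${full_cost - reduced_cost:,.0f}")
```

Even with modest assumed numbers, the savings scale linearly with study volume, which is why a busy imaging center (or a contrast manufacturer) should pay attention.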

Google Cloud: Google was present, offering their Cloud-based storage solutions and space to run cloud-based apps.  One of the main concerns from attendees was data privacy, and the Google folks I spoke to assured me this would be respected, not only from a HIPAA standpoint but also from a “we don’t touch your silo” approach.  I believed them, particularly after their recent experience with hosting data from the NHS.  A persistent low mumble exists about Google Brain being about 6 months ahead of everyone in advanced deep learning development, and with the wealth of talent they have, I believe it.

Nvidia: Nvidia was there showcasing partnerships with GE and Nuance, and of course promoting their DGX-1 multi-GPU workstation (every deep-learner’s dream box), which is the GPU computing equivalent of a Bugatti Veyron (1200 hp, top speed 268 mph, $2.5 million).  But they have a discount for startups!  And I’m sure at least one well-heeled deep learner will eventually say, “Yeah.  I have a startup” – just to get the discount.  And bragging rights similar to Harvard’s.  And invitations to cool bay area parties.

Or they might just buy a Bentley instead.

All opinions my own – vendors welcome to contact me for a more in-depth discussion of their platforms to be shared here.







