The Study: Sure, dermatologists are great (we guess), but it takes years of seeing thousands of skin lesions to hone their diagnostic skilz. And until someone opens the spigot on residency spots, dermatologists can be hard to access. A huge dataset of dermatoscopic images spanning seven categories of benign and malignant skin lesions was generated to compare the diagnostic accuracy of human physicians and AI algorithms. The International Skin Imaging Collaboration Challenge sought algorithm submissions that could learn to place skin lesions into one of these seven diagnostic categories. Over 100 algorithms from 77 labs were submitted, with the best algorithm name going to “The Homeboy’s,” though sadly it wasn’t the best algorithm. In the other corner were over 500 human readers: derm attendings (55%), derm residents (23%), and general practitioners (16%). This group was further subdivided into an elite “expert” group (n=27) with >10 years’ experience. On a test set of 30 malignant and benign lesions, the average number correct was 17.9 for humans, 18.8 for experts, and 25.4 for AI. Will dermatology kiosks be popping up next to your local RedBox soon? TBH...maybe? At the very least, it’s a really useful decision support tool for clinicians diagnosing skin lesions.
TBL: Artificial intelligence algorithms trained to diagnose pigmented skin lesions appear to outperform even expert dermatologists in diagnostic accuracy. | Tschandl, Lancet Oncol 2019