One paper compares the best available techniques for AI-assisted skin cancer detection. Another proposes a new method for reading breast cancer scans. Together they point toward a future where specialist shortages no longer determine who gets diagnosed in time.

Survival rates for skin cancer can reach 95%. That figure is not a reflection of how treatable the disease is once it has advanced — it is a reflection of what becomes possible when it is caught early. The distance between that number and the actual survival rates recorded in many parts of the world is not primarily a medical gap. It is a gap in access, in infrastructure, and in the availability of the trained specialists on whom early and accurate diagnosis currently depends.
Dermatologists who can reliably distinguish a benign tumour from a malignant one are not evenly distributed across the global healthcare system. In regions with high patient volume, limited facilities, or significant shortages of specialist staff, the time it takes to get a diagnosis can be the difference between a cancer that is treated and one that isn’t. Artificial intelligence cannot fix healthcare infrastructure, but it can partially decouple diagnostic accuracy from specialist availability, and researchers at Effat University in Jeddah are contributing directly to that effort.
Sorting Through 17 Techniques to Find the Best One
A paper co-authored by Effat University’s Saeed Mian Qaisar approaches the question of AI-assisted skin cancer diagnosis from a practical starting point: before the field can settle on the best tools, it needs a clear-eyed comparison of what is available. The paper provides exactly that, reviewing and comparing 17 machine learning and deep learning techniques against one another.
The range of methods covered reflects how far the field has developed and how varied its approaches have become. Support Vector Machines — developed in the 1990s and long valued for their classification accuracy — are assessed alongside K-means Clustering and K-nearest Neighbours, techniques that date to the 1960s but remain in active use. A range of deep learning models is also included, among them Long Short-Term Memory networks and Deep Neural Networks.
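To make one of those classical techniques concrete: K-nearest neighbours classifies a new case by majority vote among the most similar cases already labelled. The sketch below is an illustrative NumPy implementation, not code from the paper, and the two-dimensional feature vectors (standing in for, say, lesion measurements) are invented for the example.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to each sample
    nearest = y_train[np.argsort(dists)[:k]]      # labels of the k closest samples
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]              # most common label wins

# Hypothetical 2-D feature vectors; labels: 0 = benign, 1 = malignant
X_train = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],   # benign cluster
                    [4.0, 4.2], [4.1, 3.9], [3.8, 4.0]])  # malignant cluster
y_train = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X_train, y_train, np.array([4.0, 4.1])))  # 1 (malignant)
```

The same vote-among-neighbours idea scales to the high-dimensional feature vectors extracted from real lesion images, which is why the method remains in use six decades on.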
The paper’s clearest finding is the performance advantage of Convolutional Neural Networks. CNNs, which are specifically designed to process and analyse image data, have achieved accuracy above 90% in predicting different types of skin cancer — the highest level recorded across all 17 techniques examined. In a diagnostic context where misclassification carries serious consequences, that performance gap is meaningful. For researchers, clinicians, and developers building AI-assisted diagnostic systems, the comparison provides a grounded and evidence-based guide to where the current state of the art actually sits.
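What makes CNNs suited to image data is the convolution-activation-pooling pipeline: a small learned kernel slides over the image to detect local patterns, and pooling summarises them. The following is a minimal NumPy sketch of that pipeline on a toy 8×8 image; the kernel weights are hand-written for illustration, whereas a real CNN learns them from training data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linearity: keep positive responses, zero out the rest."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the maximum in each size x size block."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy "lesion" image: a bright square on a dark background
image = np.zeros((8, 8))
image[3:6, 3:6] = 1.0

# Hand-written vertical-edge kernel (a trained CNN would learn these weights)
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])

features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # (3, 3)
```

Stacking many such layers, with learned kernels and a final classification layer, is what lets CNNs reach the above-90% accuracy the review reports.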
Breaking an Image Down to See It Better
The second line of research from Effat University is less a survey of existing tools and more a proposal for a new one. A paper co-authored by researcher Abdulhamit Subasi introduces a technique for breast cancer diagnosis from ultrasound images that works differently from conventional approaches — and, its authors argue, more effectively.
The technique is called the grid-based deep feature generator. Rather than feeding an ultrasound image into an AI model as a single input, the method first divides the image into a structured grid of rows and columns. Pre-trained CNN models are then applied to each individual cell in the grid, extracting diagnostic features from each section of the image separately before the results are combined.
The logic behind this approach is that diagnostic information is not uniformly distributed across a medical image. Features relevant to whether a tumour is malignant may appear in one part of the image and not another, and a model processing the whole image at once may not give those localised features the weight they deserve. Breaking the image into a grid and analysing each part individually is designed to address that limitation — to ensure the model captures the full diagnostic content of the image rather than averaging across it.
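The grid-then-combine structure can be sketched in a few lines. Below, a toy statistics-based extractor stands in for the pre-trained CNN models the paper uses on each cell; the function names and the 4×4 grid size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def grid_features(image, rows, cols, extractor):
    """Split an image into a rows x cols grid, extract features from each
    cell separately, then concatenate into one feature vector."""
    h, w = image.shape
    ch, cw = h // rows, w // cols
    feats = []
    for r in range(rows):
        for c in range(cols):
            cell = image[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            feats.append(extractor(cell))
    return np.concatenate(feats)

def toy_extractor(cell):
    """Stand-in extractor: simple summary statistics per cell. In the paper,
    this role is played by pre-trained CNNs producing deep features."""
    return np.array([cell.mean(), cell.std(), cell.max()])

# Toy 8x8 "ultrasound" image
image = np.arange(64, dtype=float).reshape(8, 8)

vec = grid_features(image, rows=4, cols=4, extractor=toy_extractor)
print(vec.shape)  # (48,): 16 cells x 3 features each
```

Because every cell contributes its own slice of the final vector, a localised feature in one corner of the image survives into the combined representation instead of being washed out by a whole-image average.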
The choice of ultrasound as the imaging modality matters as much as the technique itself. Ultrasound is more widely available than many other diagnostic imaging tools, particularly in lower-resource healthcare settings. A method that improves the diagnostic information extractable from ultrasound images — without requiring specialist radiological interpretation — extends the reach of reliable breast cancer detection into exactly the contexts where access to that kind of expertise is most limited.
The Honest Limits of What Has Been Built
The progress documented in both papers sits alongside limitations that the research is transparent about. The most structurally significant is the underrepresentation of diverse skin types in the datasets used to train AI diagnostic models for skin cancer. A model trained predominantly on images from certain populations will perform less reliably for patients whose skin types were not adequately represented in the training data — introducing a form of bias that is not visible in aggregate accuracy figures but that has real consequences for the patients affected by it. Building more representative datasets is one of the field’s most pressing obligations.
The adoption question is the other variable that technical progress alone cannot resolve. AI diagnostic tools only reduce cancer mortality if they are integrated into clinical practice, and that integration requires dermatologists and clinicians to approach them as instruments that extend and support their work rather than as systems that threaten it. The evidence strongly supports that framing — AI performs best as a complement to specialist expertise, extending its reach rather than replacing its judgment. Making that case convincingly to the clinical community is as important as improving the algorithms.