Robots could soon detect brain tumors with high accuracy: study

(Photo by Tima Miroshnichenko via Pexels)

By Stephen Beech via SWNS

Brain tumors could soon be detected by robots, according to a new study.

Artificial intelligence (AI) models can be trained to distinguish tumors from healthy tissue, say scientists.

AI models can already find brain tumors in MRI images almost as well as a human radiologist.

Now researchers are making sustained progress in developing AI for medical uses.

Scientists say the state-of-the-art technology is “particularly promising” in radiology, where waiting for technicians to process medical images can delay patient treatment.

Convolutional neural networks are powerful tools that allow researchers to train AI models on large image datasets to recognize and classify images.

The networks can “learn” to distinguish between pictures, and they also have the capacity for “transfer learning.”

Scientists can reuse a model trained on one task for a new, related project.
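To make that idea concrete, here is a minimal sketch of transfer learning in Python with PyTorch; the ResNet-18 backbone, the two-class head, and the hyperparameters are illustrative assumptions rather than details from the study.

```python
# A minimal transfer-learning sketch in PyTorch. The backbone choice
# (ResNet-18), the two-class head, and the hyperparameters are
# illustrative assumptions, not details taken from the study.
import torch
import torch.nn as nn
from torchvision import models

# Load a convolutional network pretrained on a generic image task.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so its learned filters are reused.
for param in backbone.parameters():
    param.requires_grad = False

# Swap in a new head for the related task (here, a hypothetical
# two-class problem such as tumor vs. healthy).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```

Freezing the backbone preserves the general visual features the network has already learned, while only the new output layer adapts to the new task.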

Although detecting camouflaged animals and classifying brain tumors involve different sorts of images, the research team believed there was a “parallel” between an animal concealed by natural camouflage and a group of cancerous cells blending in with the surrounding healthy tissue.

They explained that the learned process of generalization – the grouping of different things under the same object identity – is “essential” to understanding how the network can detect camouflaged objects.

The researchers say such training could be particularly useful for detecting tumors.

(Photo by Markus Spiske via Pexels)

In the study, published in the journal Biology Methods and Protocols, the research team investigated how neural network models can be trained on public domain brain cancer MRI data while introducing a unique transfer-learning step, based on camouflaged-animal detection, to improve the networks’ tumor detection skills.

Using MRIs of cancerous and healthy control brains from public online repositories, the researchers trained the networks to distinguish healthy from cancerous MRIs, to locate the area affected by cancer, and to identify the type of cancer.
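As a rough illustration of that two-stage setup, the sketch below pretrains a network on a camouflage task and then fine-tunes it for MRI classification. The loader names, random stand-in tensors, architecture, and hyperparameters are all placeholders; the paper’s actual pipeline is not described in this article.

```python
# Illustrative two-stage training: camouflage detection first, then
# brain MRI classification. All data here is random stand-in tensors;
# real camouflage images and MRIs would replace them.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def fine_tune(model, loader, epochs=1):
    """Generic supervised training loop reused for both stages."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

# Stand-in datasets (8 random "images", 2 classes each).
camo_loader = DataLoader(
    TensorDataset(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))),
    batch_size=4)
mri_loader = DataLoader(
    TensorDataset(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))),
    batch_size=4)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Stage 1: learn to spot camouflaged animals (present vs. absent).
model.fc = nn.Linear(model.fc.in_features, 2)
fine_tune(model, camo_loader)

# Stage 2: reuse the camouflage-tuned features for MRI classification
# (healthy vs. tumor), swapping in a fresh output head.
model.fc = nn.Linear(model.fc.in_features, 2)
fine_tune(model, mri_loader)
```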

The research team found that the networks were almost perfect at detecting normal brain images, with only one or two false negatives, and at distinguishing between cancerous and healthy brains.

The first network had an average accuracy of 85.99% at detecting brain cancer; the other had an accuracy rate of 83.85%.

The researchers said that a key feature of the network is the multitude of ways in which its decisions can be explained, allowing for increased trust in the models from both medical professionals and patients.

Deep models often lack transparency, and as the field grows, the ability to explain how networks reach their decisions becomes increasingly important.

After training, the network can generate images showing the specific areas that drove its tumor-positive or tumor-negative classification.

Scientists say that would allow radiologists to cross-validate their own decisions with those of the network, adding confidence, much like a second robot radiologist who can point to the tell-tale area of an MRI that indicates a tumor.
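The article does not name the visualization technique, but one common way to produce such explanatory images is a gradient-based saliency map; the sketch below is a generic Python example, with `model` standing in for any trained classifier like those sketched earlier.

```python
# Generic gradient-based saliency map: highlights the input pixels that
# most influence the predicted class. Works with any differentiable
# image classifier, such as `model` from the sketches above.
import torch

def saliency_map(model, image):
    """Return an (H, W) map of pixel importance for the top prediction."""
    model.eval()
    image = image.clone().requires_grad_(True)
    scores = model(image.unsqueeze(0))     # add batch dimension
    scores[0, scores.argmax()].backward()  # gradient of the top class score
    # Importance = gradient magnitude, taking the max over color channels.
    return image.grad.abs().max(dim=0).values
```

Overlaid on the original MRI, such a map shows which region drove a tumor-positive call.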

The research team believes that in the future it will be important to focus on creating deep network models whose decisions can be described in intuitive ways, so AI can occupy a “transparent” supporting role in clinical environments.

(Photo by Anna Shvets via Pexels)

The networks struggled more to distinguish between types of brain cancer.

However, the accuracy and clarity improved as the researchers trained the networks in camouflage detection.

Transfer learning led to an increase in accuracy for the networks.

While the best-performing proposed model was about 6% less accurate than standard human detection, the research successfully demonstrated the quantitative improvement brought about by the transfer-learning step.

Lead author Dr. Arash Yazdanbakhsh said: “Advances in AI permit more accurate detection and recognition of patterns.

“This consequently allows for better imaging-based diagnosis aid and screening, but also necessitates more explanation for how AI accomplishes the task.”

Dr. Yazdanbakhsh, a Research Assistant Professor at Boston University, added: “Aiming for AI explainability enhances communication between humans and AI in general.

“This is particularly important between medical professionals and AI designed for medical purposes.

“Clear and explainable models are better positioned to assist diagnosis, track disease progression, and monitor treatment.”