Experts at Massachusetts General Hospital have developed a medical imaging technique that relies on cutting-edge artificial intelligence. The AI is designed to let physicians obtain better images without gathering copious amounts of data, a major boon for clinicians given how data-hungry imaging procedures have become. Researchers at Stanford University, meanwhile, have just discussed at length the ethical implications of machine learning's growing role in how healthcare choices are made for patients, and postdoctoral scholar Catherine Stinson argues that now is the time to get philosophical about the direction and purpose of AI and deep learning.

The new AI technique empowering clinicians to obtain higher-quality medical images is called automated transform by manifold approximation, or AUTOMAP. It produces high-quality images more quickly from low-radiation X-rays, MRIs, PET scans and CT scans. Because these are ubiquitous forms of medical imaging used across a wide variety of circumstances, AUTOMAP is set to become integral to how medical imaging works from now on. According to experts at Massachusetts General Hospital, that processing speed should also measurably improve the real-time clinical decisions about imaging protocols that are commonly made while a patient is being scanned.
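To make the idea concrete, here is a minimal, hypothetical sketch of what an AUTOMAP-style "domain transform" network could look like, based only on the high-level description above: fully connected layers learn the mapping from raw sensor data (such as flattened MRI k-space) into the image domain, and a small convolutional stage refines the result. The class name, 64x64 resolution, layer widths, and activations are illustrative assumptions, not the published architecture or the authors' code.

```python
import torch
import torch.nn as nn


class AutomapLikeNet(nn.Module):
    """Toy stand-in for a domain-transform reconstruction network."""

    def __init__(self, img_size: int = 64):
        super().__init__()
        n = img_size * img_size
        self.img_size = img_size
        # Dense layers: learn the sensor-domain -> image-domain transform.
        # Input is the flattened raw measurement (real and imaginary parts).
        self.fc = nn.Sequential(
            nn.Linear(2 * n, n),
            nn.Tanh(),
            nn.Linear(n, n),
            nn.Tanh(),
        )
        # Convolutional stage: refines the intermediate image estimate.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=5, padding=2),
        )

    def forward(self, raw_measurements: torch.Tensor) -> torch.Tensor:
        # raw_measurements: (batch, 2 * img_size * img_size)
        x = self.fc(raw_measurements)
        x = x.view(-1, 1, self.img_size, self.img_size)
        return self.conv(x)


if __name__ == "__main__":
    model = AutomapLikeNet(img_size=64)
    fake_kspace = torch.randn(1, 2 * 64 * 64)  # stand-in for measured scanner data
    image = model(fake_kspace)
    print(image.shape)  # torch.Size([1, 1, 64, 64])
```

The design choice worth noting is that the entire reconstruction, from raw measurements to image, is a single learned forward pass rather than an iterative solver, which is what allows the speed gains described below.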

AUTOMAP was described in detail in a paper published last week in Nature, which compared images reconstructed by AUTOMAP with those produced by conventional approaches from the same data. "What we did was condition a neural network through machine learning to recognize what makes an image an image," says Matthew Rosen, director of the Low Field MRI and Hyperpolarized Media Laboratory. Rosen is also co-director of Massachusetts General Hospital's Center for Machine Learning, based at the Athinoula A. Martinos Center for Biomedical Imaging.

"What the network is learning are generic properties of images, not detailed properties of normal or diseased pathology," Rosen also says. He mentions that the image reconstruction speed via the machine learning algorithm is nigh-instant, occurring in mere milliseconds. That's a big deal for medical imaging and kind of a game-changer. "Some types of scans currently require time-consuming computational processing to reconstruct the images," Rosen adds. "In those cases, immediate feedback is not available during initial imaging, and a repeat study may be required to better identify a suspected abnormality. AUTOMAP would provide instant image reconstruction to inform the decision-making process during scanning and could prevent the need for additional visits."

What the experts at Stanford's School of Medicine have been discussing is the rapid pace at which machine-learning tools are developing, which they say should prompt physicians and medical scientists to cautiously review the ethical risks of incorporating these tools into healthcare decision-making. The Stanford authors of a new study, published March 15 in the New England Journal of Medicine, acknowledge the massive advantage machine learning could bring to patient health outcomes, but they caution that this advantage, which comes from using machine-learning tools to predict and hedge against negative outcomes, cannot come to fruition unless the prospective ethical pitfalls along the way are carefully weighed.

"Because of the many potential benefits, there's a strong desire in society to have these tools piloted and implemented into healthcare," Danton Char says as lead author on the published paper. Char's an assistant professor of anesthesiology, pain and perioperative medicine. "But we have begun to notice, from implementation in non-health care areas, that there can be ethical problems with algorithmic learning when it's deployed on a large scale."

Their major concerns include the fact that algorithms are built from data that may embed biases toward particular clinical recommendations. They also stress the importance of ensuring that physicians fully understand how the algorithms are developed and how they work, so that clinicians do not become overly dependent on them and, thus, proportionately more vulnerable to those biases. "We need to be cautious about caring for people based on what algorithms are showing us," Char says. "The one thing people can do that machines can't do is step aside from our ideas and evaluate them critically."

One serious risk that Catherine Stinson brings up in her opinion piece for The Globe and Mail is what she calls "nerd-sightedness: the inability to see value beyond one's own inner circle. There's a tendency in the computer-science world to build first, fix later, while avoiding outside guidance during the design and production of new technology. Both the people working in AI and the people holding the purse strings need to start taking the social and ethical implications of their work much more seriously."

[researchpaper = Reporter Cedric Dent]
