AI Medical Imagery Model Offers Fast, Cost-Efficient Expert Analysis


Researchers at UCLA have developed a new AI model that can expertly analyze 3D medical images of diseases in a fraction of the time it would otherwise take a human clinical specialist.

The deep-learning framework, named SLIViT (SLice Integration by Vision Transformer), analyzes images from different imaging modalities, including retinal scans, ultrasound videos, CTs, MRIs, and others, detecting potential disease-risk biomarkers.

Dr. Eran Halperin, a computational medicine expert and professor at UCLA who led the study, said the model is highly accurate across a wide variety of diseases, outperforming many existing, disease-specific foundation models. It uses a novel pre-training and fine-tuning method that relies on large, accessible public data sets. As a result, Halperin believes the model can be deployed, at relatively low cost, to detect different disease biomarkers, democratizing expert-level medical imaging analysis.

The researchers used NVIDIA T4 GPUs and NVIDIA V100 Tensor Core GPUs, along with NVIDIA CUDA, to carry out their research.

Currently, medical imagery experts are often overwhelmed. Patients frequently wait weeks to get their X-rays, MRIs, or CT scans reviewed before they can begin treatment.

One of the potential benefits of SLIViT is that it can expertly analyze patient data at scale, and that its expertise can be reinforced. For example, once new medical imaging techniques are developed, the model can be fine-tuned with that new data, and the updated model can then be pushed out and used in future analyses.

Halperin noted that the model is also easily deployable. Especially in places where medical imagery experts are scarce, the model could in the future make a material difference in patient outcomes.

Before SLIViT, Dr. Halperin said, it was practically infeasible to evaluate large numbers of scans at the level of a human clinical expert. With SLIViT, large-scale, accurate analysis is practical.

“The model can make a dramatic impact on detecting disease biomarkers, without the need for large amounts of manually annotated images,” Halperin said. “These disease biomarkers can help us understand the disease trajectory of patients. In the future, it may be possible to use these insights to tailor treatment to patients based on the biomarkers found through SLIViT, and hopefully make a dramatic improvement in patients’ lives.”

According to Dr. Oren Avram, lead author of a paper the UCLA researchers published in Nature Biomedical Engineering, the study uncovered two surprising, yet connected, results.

Figure 1. 3D optical coherence tomography GIF of a human retina

First, while the model was largely pre-trained on datasets of 2D scans, it accurately identifies disease biomarkers in 3D scans of human organs. Typically, a model designed to analyze 3D images is trained on 3D datasets. But 3D medical data is far more costly to acquire and thus far less plentiful and accessible than 2D medical data.

The UCLA researchers found that by pre-training their model on 2D scans, which are far more accessible, and fine-tuning it on a relatively small number of 3D scans, the model outperformed specialized models trained only on 3D scans.
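
To make the idea more concrete, below is a minimal PyTorch sketch of the slice-integration approach described above: a 2D backbone, pre-trained on abundant 2D images, encodes each slice of a volume, and a small transformer aggregates the slice features into a single volume-level prediction. This is an illustration of the general technique, not the authors' implementation; the ResNet backbone, layer counts, and dimensions are assumptions.

    import torch
    import torch.nn as nn
    import torchvision.models as tv

    class SliceIntegrationModel(nn.Module):
        """Encode each 2D slice with a pre-trained 2D backbone, then let a small
        transformer integrate the slice features into one volume-level output.
        (Illustrative sketch only; not the SLIViT reference code.)"""

        def __init__(self, num_outputs=1, feat_dim=512, num_layers=4, num_heads=8):
            super().__init__()
            # 2D pre-training stands in for the large, accessible 2D datasets described above
            backbone = tv.resnet18(weights=tv.ResNet18_Weights.DEFAULT)
            backbone.fc = nn.Identity()                   # keep the 512-d slice embedding
            self.slice_encoder = backbone
            layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=num_heads, batch_first=True)
            self.slice_mixer = nn.TransformerEncoder(layer, num_layers)
            self.head = nn.Linear(feat_dim, num_outputs)  # e.g. one disease-risk biomarker

        def forward(self, volume):
            # volume: (batch, slices, 3, H, W), each slice treated as an ordinary 2D image
            b, s, c, h, w = volume.shape
            feats = self.slice_encoder(volume.reshape(b * s, c, h, w))
            feats = feats.reshape(b, s, -1)               # (batch, slices, feat_dim)
            mixed = self.slice_mixer(feats)               # slices attend to one another
            return self.head(mixed.mean(dim=1))           # pool slices into a volume-level prediction

Under this setup, fine-tuning on a relatively small set of labeled 3D volumes mainly has to adapt the slice mixer and the head, since the slice encoder already starts from plentiful 2D data.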

The second unexpected outcome was how good the model was at transfer learning. It learned to detect different disease biomarkers by fine-tuning on datasets consisting of imagery from very different modalities and organs.

“We trained the model on 2D retinal scans, so images of your eye, but then fine-tuned the model on an MRI of a liver, which seemingly has no connection, because they’re two totally different organs and imaging technologies,” Avram said. “But we learned that between the retina and the liver, and between an OCT and MRI, some basic features are shared, and these can be used to help the model with downstream learning even though the imagery domains are totally different.”
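
That property suggests a simple cross-modality fine-tuning recipe: start from the weights learned on one modality and retrain on another, replacing only the prediction head. The sketch below continues the illustrative model above; the checkpoint name, tensor shapes, and loss are placeholders, not the authors' setup.

    import torch
    import torch.nn as nn

    model = SliceIntegrationModel(num_outputs=1)                 # sketch class from above
    state = torch.load("oct_checkpoint.pt", map_location="cpu")  # hypothetical retinal-OCT weights
    model.load_state_dict(state, strict=False)                   # reuse the shared features
    model.head = nn.Linear(512, 1)                               # fresh head for the new MRI biomarker

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()                                       # e.g. a continuous risk factor

    # One fine-tuning step on a dummy batch (2 volumes of 24 slices at 224x224),
    # standing in for a real DataLoader of liver MRI volumes and labels.
    volumes, targets = torch.randn(2, 24, 3, 224, 224), torch.randn(2, 1)
    optimizer.zero_grad()
    loss = loss_fn(model(volumes), targets)
    loss.backward()
    optimizer.step()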

Read additional news from UCLA about SLIViT.

Check out the SLIViT paper, Accurate prediction of disease-risk factors from volumetric medical scans by a deep vision model pre-trained with 2D scans.

Access the model on GitHub.
