<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by MONAI Medical Open Network for AI on Medium]]></title>
        <description><![CDATA[Stories by MONAI Medical Open Network for AI on Medium]]></description>
        <link>https://medium.com/@monai?source=rss-78140725f336------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*VgLHCBNBFtaJtjLkTRnMjA.png</url>
            <title>Stories by MONAI Medical Open Network for AI on Medium</title>
            <link>https://medium.com/@monai?source=rss-78140725f336------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 03 Apr 2026 21:31:43 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@monai/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[PanTS: A New Benchmark for AI in Pancreatic Cancer Detection]]></title>
            <link>https://monai.medium.com/pants-a-new-benchmark-for-ai-in-pancreatic-cancer-detection-52e4250e9086?source=rss-78140725f336------2</link>
            <guid isPermaLink="false">https://medium.com/p/52e4250e9086</guid>
            <category><![CDATA[medical-imaging]]></category>
            <category><![CDATA[pancreatic-cancer]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[pytorch]]></category>
            <dc:creator><![CDATA[MONAI Medical Open Network for AI]]></dc:creator>
            <pubDate>Wed, 10 Sep 2025 15:08:29 GMT</pubDate>
            <atom:updated>2025-09-10T15:08:29.699Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>Introduction: The Urgency of Early Detection</strong></p><p>Pancreatic cancer is the third leading cause of cancer-related deaths in the United States. Tragically, 80–85% of cases are diagnosed too late for effective treatment. The disease’s silent progression and anatomical complexity make early detection a formidable challenge. But early detection saves lives — and that’s where AI can make a difference.</p><p>To address this, NVIDIA partnered with Johns Hopkins University, along with contributions from 11 institutions worldwide, to launch <strong>PanTS</strong> — the <strong>Pancreatic Tumor Segmentation Dataset</strong>, the largest and most comprehensive resource ever released for pancreatic CT analysis.</p><p><strong>PanTS at a Glance</strong></p><ul><li><strong>36,390 CT scans</strong> from <strong>145 medical centers</strong> across <strong>18 countries</strong></li><li><strong>993,000+ expert-validated voxel-wise annotations</strong></li><li>Covers <strong>pancreatic tumors</strong>, <strong>pancreas subregions</strong>, and <strong>24 surrounding anatomical structures</strong>, including vascular and skeletal structures and abdominal/thoracic organs</li></ul><p>Each scan includes rich metadata: patient age, sex, diagnosis, contrast phase, in-plane spacing, and slice thickness. This diversity enables robust model generalization across populations and imaging conditions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mA1ESNvcF5AvrlMA4-qJ6Q.png" /><figcaption>Fig 1. Dataset characteristics and visualization</figcaption></figure><p><strong>MONAI-Powered Annotation and AI Training</strong></p><p>PanTS was built using <strong>MONAI Label</strong>, NVIDIA’s open-source AI framework for medical imaging. Radiologists used MONAI to perform interactive 3D segmentation, enabling scalable, human-in-the-loop annotation workflows.</p><p>This approach ensured consistency and speed across nearly a million annotations, setting a new standard for medical imaging datasets.</p><p><strong>Human-in-the-Loop Workflow</strong></p><ul><li>AI-generated segmentations using MONAI-based models (e.g., VISTA3D)</li><li>Radiologist validation and refinement</li><li>Multi-rater consensus for high-quality annotations</li></ul><p><strong>Benchmark-Leading Performance</strong></p><p>Built with MONAI, PanTS drastically improves AI accuracy in pancreatic tumor detection and sets a new benchmark for research.</p><p>Models trained on PanTS significantly outperform those trained on existing public datasets:</p><p><strong><em>Improvements over baselines:</em></strong></p><ul><li><em>Tumor Segmentation (Dice): +4.9%</em></li><li><em>Tumor Segmentation (NSD): +3.1%</em></li><li><em>Out-of-Distribution Tumor Detection (AUC): +14%</em></li><li><em>Full Anatomy vs. Tumor-Only Training (Dice): +10.3%</em></li></ul><p>These gains are directly attributable to PanTS’s scale and anatomical richness.</p><p><strong>Open Science and Developer Access</strong></p><p>PanTS is released under a non-commercial license, with a public training set and a reserved test set for third-party benchmarking. This ensures reproducibility and rigorous evaluation.</p>
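<p>For developers who want to get started, here is a minimal, illustrative sketch of loading a single PanTS-style CT/label pair with MONAI dictionary transforms. The file paths and intensity window below are hypothetical placeholders, not official PanTS tooling:</p><pre>from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, Orientationd, ScaleIntensityRanged<br><br># Hypothetical paths to one downloaded PanTS case<br>sample = {&quot;image&quot;: &quot;PanTS/imagesTr/case_0001.nii.gz&quot;,<br>          &quot;label&quot;: &quot;PanTS/labelsTr/case_0001.nii.gz&quot;}<br><br>xform = Compose([<br>    LoadImaged(keys=[&quot;image&quot;, &quot;label&quot;]),<br>    EnsureChannelFirstd(keys=[&quot;image&quot;, &quot;label&quot;]),<br>    Orientationd(keys=[&quot;image&quot;, &quot;label&quot;], axcodes=&quot;RAS&quot;),<br>    # Example abdominal CT soft-tissue window; tune for your task<br>    ScaleIntensityRanged(keys=&quot;image&quot;, a_min=-175, a_max=250, b_min=0.0, b_max=1.0, clip=True),<br>])<br>data = xform(sample)<br>print(data[&quot;image&quot;].shape, data[&quot;label&quot;].shape)</pre>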
<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6SSJujXruhg9Nr1dmJFvfg.png" /><figcaption>Fig 2: Justification of annotating 24 surrounding anatomical structures</figcaption></figure><p><strong>Global Collaboration and Impact</strong></p><p>PanTS was co-developed by NVIDIA and Johns Hopkins University, with contributions from institutions across Europe, Asia, and North America. It represents a global effort to accelerate AI innovation in oncology, radiotherapy planning, and surgical decision support.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/798/1*8vix9NlytP9kLPCCld0ZLw.png" /><figcaption>Fig 3. Global Impact of the Dataset and the Benchmark</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JcPSZOn41-UtgmZd-kh2jw.png" /><figcaption>Fig 4. PanTS dataset powering additional AI research to achieve Pancreas Tumor Segmentation Leaderboard SoTA (CancerVerse Team)</figcaption></figure><p><strong>Conclusion: Building Toward Earlier Diagnosis</strong></p><p>PanTS is more than a dataset — it’s a call to action. By enabling earlier and more accurate tumor detection, it has the potential to improve survival rates and transform pancreatic cancer care.</p><p>Whether you’re a developer building segmentation models or a researcher benchmarking AI performance, PanTS offers the tools and data to push the boundaries of medical imaging.</p><p>Let’s build the future of cancer detection — together.</p><p><strong>Core Contributors</strong></p><ul><li>Wenxuan Li, Zongwei Zhou, Alan Yuille — Johns Hopkins University</li><li>Yucheng Tang, Daguang Xu — NVIDIA</li><li>Collaborators from University of Bologna, UC Berkeley, UCSF, Peking University Third Hospital, and more</li></ul><p><strong>Core Materials</strong></p><ul><li><a href="https://arxiv.org/pdf/2507.01291v1">Pre-print paper</a></li><li><a href="https://github.com/MrGiovanni/PanTS?tab=readme-ov-file#pants-dataset">Dataset Download</a></li></ul><p><strong>Additional Materials</strong></p><ul><li><a href="https://github.com/MrGiovanni/PanTS">GitHub Repository</a></li><li><a href="https://www.nvidia.com/en-us/clara/medical-imaging/">Explore MONAI</a></li><li><a href="https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-HX-02+V1">Learn with MONAI</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=52e4250e9086" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Hacking TorchServe and Triton Inference Server for MONAI and Medical Imaging]]></title>
            <link>https://monai.medium.com/hacking-torchserve-and-triton-inference-server-for-monai-and-medical-imaging-b0564069f48e?source=rss-78140725f336------2</link>
            <guid isPermaLink="false">https://medium.com/p/b0564069f48e</guid>
            <category><![CDATA[pytorch]]></category>
            <category><![CDATA[medical-imaging]]></category>
            <category><![CDATA[monai]]></category>
            <category><![CDATA[triton-inference-server]]></category>
            <category><![CDATA[torchserve]]></category>
            <dc:creator><![CDATA[MONAI Medical Open Network for AI]]></dc:creator>
            <pubDate>Wed, 24 Apr 2024 16:01:51 GMT</pubDate>
            <atom:updated>2024-04-24T16:01:51.716Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>Authors:</strong></p><ol><li>Dr. Vikash Gupta, Center for Augmented Intelligence in Imaging, Mayo Clinic Florida</li><li>Jiahui Guan, Senior Solutions Architect, NVIDIA</li></ol><p><strong><em>GitHub URL</em></strong><em>: </em><a href="https://github.com/vikashg/monai-inference-demo"><em>https://github.com/vikashg/monai-inference-demo</em></a></p><p>MONAI has become the de facto standard in medical imaging AI, offering tools for model training and clinical integration. It provides out-of-the-box tools for the preprocessing and postprocessing of radiological images, and features a broad selection of predefined neural network architectures and training routines based on PyTorch.</p><p>The MONAI Model Zoo offers access to pre-trained models based on peer-reviewed research, allowing developers to either fine-tune these models or use them to perform inference. In the latest 1.3 release, the MONAI Model Zoo introduced a Pythonic API, simplifying the process of downloading and fine-tuning models.</p><p>On the opposite end of the spectrum is MONAI Deploy Express, which provides a comprehensive deployment solution. It uses a MONAI Application Package (MAP) to create application bundles that can be deployed. These MAPs can take a DICOM input and generate an appropriate DICOM image as output. MONAI Deploy Express is an open-source solution that can be integrated into hospital ecosystems, serving as both an inference server and a DICOM router.</p><p>MONAI thus covers two opposite ends of the workflow: MONAI Core at one end, focusing on data preprocessing and model training, and MONAI Deploy Express at the other, offering a comprehensive suite of tools for a complete clinical deployment. However, research labs often find themselves in need of a middle ground — a solution that bridges the gap between operating a full-fledged MONAI Deploy Express instance and writing new code for each inference task. The ideal solution is a local inference server that can be consistently referenced for the same purposes. The advantages of having a dedicated model inference server include:</p><ol><li>A common endpoint that can be invoked for inference tasks</li><li>A common directory to maintain models and apps</li></ol><h4>In this post, we will explore the following:</h4><ol><li>How to write a handler function for a MONAI-based application</li><li>How to write a test function for the aforementioned handler</li><li>How to deploy a MONAI model using a TorchServe inference server</li><li>How to deploy a MONAI model using a Triton Inference Server</li><li>Using a REST API for executing inference</li><li>A comparison between the two model servers</li></ol><h4>Prerequisites and Assumptions</h4><p>We assume familiarity with a few key concepts:</p><ul><li>MONAI Transforms: knowledge of how to use MONAI for transforming medical imaging data</li><li>REST APIs: understanding the basics of REST API design for interacting with web services</li><li>Deep Learning Concepts: a grasp of fundamental deep learning principles</li></ul><p>All of the necessary code, sample data, and models for this guide are available through the GitHub repository linked above.</p><h4>Creating the ModelHandler</h4><p>The ModelHandler consists of four main functions: preprocess, postprocess, inference, and handle. 
We’ll break the code down into understandable chunks.</p><ol><li>Initializing the ModelHandler, which inherits from the BaseHandler</li></ol><pre>class ModelHandler(BaseHandler):<br>  def __init__(self):<br>    self._context = None<br>    self.initialized = False<br>    self.explain = False<br>    self.target = 0<br>    self.device = torch.device(&quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot;)</pre><p>2. Write the data preprocessing function</p><pre>def preprocess(self, data_fn):<br><br>  transforms = Compose([LoadImage(image_only=True),<br>    EnsureChannelFirst(),<br>    Resize(spatial_size=(256, 256, 24)),<br>    ScaleIntensityRange(a_min=20, a_max=1200, b_min=0, b_max=1, clip=True),<br>    AddChannel()])  # adds the leading (batch) dimension<br><br>  fn = data_fn[0][&#39;filename&#39;].decode()<br>  img_fullname = os.path.join(input_dir, fn)<br>  data = []<br>  batch_size = 1<br><br>  for i in range(batch_size):<br>    tmp = {}<br>    print(img_fullname)<br>    tmp[&quot;data&quot;] = transforms(img_fullname)<br>    data.append(tmp)<br><br>  return data</pre><p>The preprocessing function, as its name suggests, takes a filename payload data_fn. It loads the file from the specified data directory and preprocesses it with the MONAI transform chain. In this particular case, the Nifti image is loaded and resized to a spatial size of 256 x 256 x 24, and the intensity values are scaled between 0 and 1. We then generate a list of data. Since we pass only a single file name, it is important that we hardcode the batch_size to 1.</p><p>3. Post-processing function</p><pre>def postprocess(self, inference_output):<br>  post_trans = Compose([Activations(sigmoid=True), AsDiscrete(threshold=0.5)])<br>  postprocess_output = post_trans(inference_output)<br>  SaveImage(output_dir, output_postfix=&#39;seg&#39;, output_ext=&#39;.nii.gz&#39;)(postprocess_output[0])<br><br>  return [1]</pre><p>Like the preprocessing function, the post-processing function applies the post-processing transforms to the model’s output. It then saves the output to the predefined data directory and, if everything goes well, returns 1 to signal success.</p><p>4. Inference function</p><pre>def inference(self, data, *args, **kwargs):<br>  with torch.no_grad():<br>    marshalled_data = data.to(self.device)<br>    results = self.model(marshalled_data, *args, **kwargs)<br><br>  return results</pre>
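<p>The handle function ties the other three together. The repository contains the authoritative version; as a rough sketch (assuming TorchServe’s standard handle(data, context) calling convention, and simplified relative to the repository code), it could look like this:</p><pre># Illustrative sketch only; see the GitHub repository for the full version<br>def handle(self, data, context):<br>  model_input = self.preprocess(data)<br>  model_output = self.inference(model_input[0][&quot;data&quot;])<br>  return self.postprocess(model_output)</pre>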
<h4>Create a Test Function</h4><p>Before creating a model server, we must ensure that the model performs inference as expected. We’ll utilize the previously created ModelHandler and create a test function to validate the workflow.</p><pre>from handler import ModelHandler<br>from ts.torch_handler.unit_tests.test_utils.mock_context import MockContext<br>from pathlib import Path<br>import os<br><br>MODEL_PT_FILE=&#39;traced_segres_model.pt&#39;<br>CURR_FILE_PATH = Path(__file__).parent.absolute()<br>EXAMPLE_ROOT_DIR=CURR_FILE_PATH<br>TEST_DATA=os.path.join(CURR_FILE_PATH, &#39;test.nii.gz&#39;)<br><br><br>def test_segresnet(batch_size=1):<br>  handler = ModelHandler()<br>  print(EXAMPLE_ROOT_DIR.as_posix())<br>  ctx = MockContext(model_pt_file=MODEL_PT_FILE,<br>    model_dir=EXAMPLE_ROOT_DIR.as_posix(),<br>    model_file=None,)<br>  handler.initialize(ctx)<br>  handler.context = ctx<br>  handler.handle(TEST_DATA, ctx)<br><br>if __name__ == &#39;__main__&#39;:<br>  test_segresnet()</pre><h4>Create a TorchServe endpoint</h4><p>To set up a TorchServe endpoint with your handler, follow these steps:</p><ol><li>Create a torch model archive (.mar) using the TorchScript model (traced_segres_model.pt) and the model handler you wrote earlier. Specify the model name, serialized file, handler path, and version.</li></ol><pre>torch-model-archiver --model-name segresnet --serialized-file ./traced_segres_model.pt --handler ./handler.py -v 1.0</pre><p>2. Move the model to the model_store folder</p><pre>mv segresnet.mar model_store</pre><p>3. Start TorchServe. Specify the model store directory, the model to deploy, and the TorchServe configuration file.</p><pre>torchserve --start --model-store model_store/ --models segresnet=segresnet.mar --ts-config ./config.properties</pre><p>4. Test that TorchServe is running properly by sending a request to the health endpoint.</p><pre>curl http://localhost:8080/ping</pre><p>Expected response:</p><pre>{<br> &quot;status&quot;: &quot;Healthy&quot;<br>}</pre><p>5. List all the models deployed on the TorchServe model server by executing the following command:</p><pre>curl http://localhost:8081/models</pre><p>Expected output:</p><pre>{<br> &quot;models&quot;: [<br>   {<br>     &quot;modelName&quot;: &quot;breast&quot;,<br>     &quot;modelUrl&quot;: &quot;breast.mar&quot;<br>   },<br>   {<br>     &quot;modelName&quot;: &quot;segresnet&quot;,<br>     &quot;modelUrl&quot;: &quot;segresnet.mar&quot;<br>   }<br> ]<br>}</pre><p>6. Make a prediction request by passing the filename as data</p><pre>curl -d &quot;filename=test.nii.gz&quot; http://127.0.0.1:8080/predictions/segresnet</pre><p>If the prediction is successful, it will return ‘1’. You can now see the AI output in the specified output directory.</p><p>This process allows for deploying and managing multiple models on TorchServe, enabling a flexible and scalable serving environment for your AI applications.</p>
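<p>Step 3 above references a config.properties file. A minimal TorchServe configuration could look like the following sketch (illustrative values; adjust the addresses and model store path for your environment):</p><pre># Minimal, illustrative TorchServe config.properties<br>inference_address=http://0.0.0.0:8080<br>management_address=http://0.0.0.0:8081<br>metrics_address=http://0.0.0.0:8082<br>model_store=model_store</pre>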
<h4>Triton Inference Server-based model server for MONAI models</h4><p>Triton Inference Server enables the deployment of any AI model. It supports multiple frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, and OpenVINO. In this blog, we will demonstrate MONAI models being deployed on the Triton Inference Server. As MONAI is built on PyTorch, we will deploy a PyTorch model on this inference server.</p><p>Steps:</p><ol><li>Convert the model to TorchScript using the following code.</li><li>Write the config.pbtxt file needed for the Triton Inference Server.</li><li>As with TorchServe, write a handler for the image client.</li></ol><p>The following code converts the PyTorch model to a traced TorchScript model; it can be downloaded from the GitHub link above.</p><pre>from model_del import ModelDefinition<br>import torch<br> <br>model_def = ModelDefinition(model_name=&#39;SegResNet&#39;)<br>model = model_def.get_model()<br>model_fn = &#39;./model/SegResNet/best_metric_model.pth&#39; # Filename<br>model.load_state_dict(torch.load(model_fn))<br>x = torch.zeros(1, 1, 256, 256, 24)  # dummy input with the expected shape<br>traced_model = torch.jit.trace(model, x)<br>traced_model.save(&#39;./model/SegResNet/traced_segres_model.pt&#39;)</pre><p>The traced model is saved in its model directory. Next, we need to write the config.pbtxt file.</p><pre>name: &quot;lv_segmentation&quot;<br>platform: &quot;pytorch_libtorch&quot;<br>max_batch_size: 2<br>input [<br>  {<br>    name: &quot;input__0&quot;<br>    data_type: TYPE_FP32<br>    dims: [ 1, 256, 256, 24 ]<br>  }<br>]<br>output [<br>  {<br>    name: &quot;output__0&quot;<br>    data_type: TYPE_FP32<br>    dims: [ 1, 256, 256, 24 ]<br>  }<br>]</pre><h4>Explanation of the config.pbtxt file</h4><p>The name field corresponds to the name of the model. The platform field refers to the deep learning library used; in the present case, we developed the model with PyTorch, hence the value pytorch_libtorch. The max_batch_size field is self-explanatory: the maximum number of images that can be inferred at once. The last two fields, input and output, each have three sub-fields: name, data_type, and dims. The name refers to the input/output node of the model; in most cases (especially for PyTorch), the node names default to input__0 and output__0, so we encourage users to use these values and to investigate further if inference fails. TYPE_FP32 is the data_type. The final field, dims, corresponds to the size of the input image and of the expected output. In this particular case, we are demonstrating a segmentation model, so the input and output dims are the same.</p><p>More information about the config.pbtxt file is available at the official GitHub for Triton: <a href="https://github.com/triton-inference-server/server/blob/main/docs/getting_started/quickstart.md">https://github.com/triton-inference-server/server/blob/main/docs/getting_started/quickstart.md</a></p><h4>Deploying on Triton Inference Server</h4><p>To deploy the models, arrange them in a directory structure like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/456/1*GAzRvr3Dx2ycvAxUdw51vg.png" /></figure><p>These are two models: breast_density and lv_segmentation. The model file model.pt sits inside the folder named 1 for both models; the 1, in this case, refers to the model version.</p>
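<p>In text form, the figure corresponds to the standard Triton model repository layout, reconstructed here for convenience:</p><pre>model_repository/models<br>├── breast_density<br>│   ├── config.pbtxt<br>│   └── 1<br>│       └── model.pt<br>└── lv_segmentation<br>    ├── config.pbtxt<br>    └── 1<br>        └── model.pt</pre>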
<p>Once the directory is set, the Triton server can be started using the following command:</p><pre>docker run --gpus=1 -p 8000:8000 -p 8001:8001 -p 8002:8002 -v ${PWD}/model_repository/models:/models nvcr.io/nvidia/tritonserver:23.12-py3 tritonserver --model-repository=/models</pre><p>The above command maps the model_repository/models directory into the Triton server container. On the first run, it will download the Docker image nvcr.io/nvidia/tritonserver:23.12-py3. Make sure that you are downloading the appropriate tritonserver image; more information is available at <a href="https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver">https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver</a></p><h4>Writing the image-client</h4><p>Now, we can focus on writing image-client.py. The image-client file mainly contains two functions: preprocess and post_process. The preprocess function looks as follows. One important thing to note is that, although we use a MONAI transform chain, the function converts the output to a NumPy array, because the Triton client sends inputs and receives outputs as NumPy arrays.</p><pre>def preprocess(img_path=&quot;MR.nii.gz&quot;):<br>  transforms = Compose([LoadImage(image_only=True), <br>    EnsureChannelFirst(),<br>    Resize(spatial_size=(256, 256, 24)),<br>    ScaleIntensityRange(a_min=20, a_max=1200, b_min=0, b_max=1, clip=True)])<br>  img_tensor = transforms(img_path)<br>  results_np = np.expand_dims(img_tensor.numpy(), axis=0)<br><br>  return results_np</pre><p>We also need to post-process the output and save it as a Nifti or a DICOM-RT file; here we save a Nifti file. In this function, we use SimpleITK to write the file, copying the “meta” information (spacing, origin, direction) from a reference image (most likely the input image).</p><pre>def post_transform(inference_output, out_dir=&#39;./&#39;, ref_image=None):<br>  post_trans = Compose([Activations(sigmoid=True),<br>    AsDiscrete(threshold=0.5), ])<br>  postprocess_output = post_trans(inference_output).squeeze()  # apply transforms, drop batch/channel dims<br>  image_itk = sitk.GetImageFromArray(np.transpose(postprocess_output, [2, 1, 0]))<br>  image_itk.SetSpacing(ref_image.GetSpacing())<br>  image_itk.SetOrigin(ref_image.GetOrigin())<br>  image_itk.SetDirection(ref_image.GetDirection())<br>  sitk.WriteImage(image_itk, os.path.join(out_dir,<br>  &quot;segmentation.nii.gz&quot;))</pre><p>Finally, the driver function for the image-client is written as:</p><pre>def main():<br>  img_path = &quot;/data/MR.nii.gz&quot;<br>  transformed_image = preprocess(img_path=img_path)<br>  client = httpclient.InferenceServerClient(url=&quot;localhost:8000&quot;)<br>  inputs = httpclient.InferInput(&quot;input__0&quot;, transformed_image.shape, datatype=&quot;FP32&quot;)<br>  inputs.set_data_from_numpy(transformed_image, binary_data=True)<br>  outputs = httpclient.InferRequestedOutput(&quot;output__0&quot;, binary_data=True, class_count=0)<br>  <br>  results = client.infer(model_name=&quot;lv_segmentation&quot;, inputs=[inputs], outputs=[outputs])<br>  inference_output = results.as_numpy(&quot;output__0&quot;)<br>  <br>  ref_img_fn = &#39;./tmp/MR/MR_preprocessed.nii.gz&#39;<br>  reader = sitk.ImageFileReader()<br>  reader.SetFileName(ref_img_fn)<br>  ref_image = reader.Execute()<br>  post_transform(inference_output, out_dir=&#39;./&#39;, ref_image=ref_image)</pre><p>As before, the complete program is available on the GitHub page. The image-client is called from the command line as</p><pre>python image-client.py</pre><p>The image-client plays the same role as the model handler does for TorchServe. The main difference is that for Triton Inference Server, the input to the model must be in NumPy format, as opposed to the meta-dictionary format used by TorchServe.</p><h4>Conclusion</h4><p>MONAI has established itself as a go-to standard for processing medical images. MONAI Deploy App SDK and MONAI Deploy provide clinical integration tools for model deployment. However, we believe there is a wide middle ground between MONAI Deploy’s full clinical integration strategy and having no deployment solution at all. Here, we took a PyTorch model trained using MONAI and deployed it with model serving frameworks such as TorchServe and Triton Inference Server.</p><p>We would like to thank Dr. Mutlu Demirer, Dr. Barbaros Selnur Erdal, and Dr. Richard D. 
White from the Center for Augmented Intelligence in Imaging at Mayo Clinic, Florida, for their guidance and support on this project.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b0564069f48e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Innovating and Integrating AI into the Medical Imaging Ecosystem: how AZ Delta built and…]]></title>
            <link>https://monai.medium.com/innovating-and-integrating-ai-into-the-medical-imaging-ecosystem-how-az-delta-built-and-287ad6d6e844?source=rss-78140725f336------2</link>
            <guid isPermaLink="false">https://medium.com/p/287ad6d6e844</guid>
            <category><![CDATA[medical-imaging]]></category>
            <category><![CDATA[pac]]></category>
            <category><![CDATA[pytorch]]></category>
            <category><![CDATA[monai]]></category>
            <category><![CDATA[academic-medical-centers]]></category>
            <dc:creator><![CDATA[MONAI Medical Open Network for AI]]></dc:creator>
            <pubDate>Fri, 24 Nov 2023 11:28:45 GMT</pubDate>
            <atom:updated>2023-11-24T11:28:45.251Z</atom:updated>
            <content:encoded><![CDATA[<h3><strong>Innovating and Integrating AI into the Medical Imaging Ecosystem: how AZ Delta built and incorporated AI at scale using Sectra Amplifier, NVIDIA, and MONAI</strong></h3><p>Academic medical centers, like AZ Delta, are on the cutting edge of medical imaging AI research. Researchers create deep learning models for all kinds of institutional needs, with workflows spanning clinical, quality assurance, and administrative use cases. With the volume of data these centers produce, there is arguably no problem that cannot be solved in this space. But what happens after a model is created?</p><p>Taking a model from research into production is fraught with challenges at every turn. There are disconnects between research and clinical practice. There is a limited availability of resources and talent to bring these solutions across the divide. And, up to now, there has been no industry-accepted best practice, forcing researchers and IT operations teams to use ad-hoc, impractical solutions. To address these challenges, there needs to be a standardized way to package AI models, integrate the applications into PACS workflows, and manage the application lifecycle. This process must be repeatable and scalable to handle the ever-increasing volume of projects necessary for the healthcare delivery of the future.</p><p>Sectra and NVIDIA have teamed up to help academic medical centers meet this challenge. Sectra Amplifier Services has established the Amplifier Platform infrastructure to bring AI applications from research into reality. NVIDIA infrastructure, coupled with the Project MONAI (Medical Open Network for AI) framework, has created the accelerated pipeline and platform to develop and deploy medical imaging AI applications at scale. With these two components brought together, academic medical centers around the world, including AZ Delta, can take advantage of these technology breakthroughs to the benefit of their patients as well as their clinical and administrative staff.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5Oxi2z4CJuXR54sB9tiH-g.jpeg" /></figure><h3>Revolutionizing AZ Delta’s integration of AI applications</h3><p>AZ Delta brought these technologies to bear to streamline the first of many workflows, with a team that included Dr. Peter De Jaeger, manager of the innovation team and IT, data scientist Dr. Nathalie Mertens, and, in close collaboration, Dr. Kristof De Smet. The application segments the psoas muscle from abdominal CT images, including computing the muscle’s length and volume. This use case was selected because the muscle is difficult to segment well: it can have different orientations and has a tapered geometry. A possible application is as an additional objective parameter reflecting a patient’s health status, applicable to any patient who undergoes an abdominal CT; further clinical research is needed to demonstrate outcomes.</p><p>“Within AZ Delta, we aim at integrating AI models into the clinical workflow to harmonize precision, efficiency, and accurate patient outcomes,” said Dr. Mertens. “This way, AZ Delta embraces a future where medical experts converge with innovation and technology to reach optimal healthcare. 
Thanks to Sectra, an approachable solution to achieve this goal has been built.”</p><h3>Sectra Amplifier Services made it easy for AZ Delta</h3><p>AI adoption will accelerate if the output is seamlessly integrated with the existing ecosystem. Whether the output comes from commercial or research applications, seamless integration in the workflow is key to both.</p><p>Sectra Amplifier Services facilitates a faster route to AI adoption in clinical practice, providing a single technical and administrative infrastructure to access commercial applications. As an enterprise imaging vendor, Sectra creates a single point of contact for contracting, deploying, and managing applications over their lifecycle of usage.</p><p>Amplifier Services is also an open platform, enabling healthcare institutions to leverage the Amplifier Platform to build and deploy their own applications.</p><h3>MONAI accelerates AI solutioning and seamlessly integrates into Sectra Amplifier Services</h3><p>Project MONAI was created to enable medical imaging AI researchers to do their life’s work. Co-founded by academic and industry partners from around the world, including NVIDIA, it simplifies and accelerates deep learning pipelines for the medical domain. It builds upon common frameworks researchers already use, including PyTorch, and enables the specialization needed to work with medical imaging, going beyond the pixels into the representations of what those pixels mean.</p><p>Creating an end-to-end medical AI solution means taking ground truth data, training a deep learning model against that truth, packaging the model into an AI application, and connecting it with the medical imaging ecosystem. Project MONAI includes SDKs that make each of these steps possible: annotation, training and fine-tuning, federated learning, and inference.</p><p>Connecting those AI applications into the Sectra Amplifier Platform completes the picture for academic medical centers like AZ Delta, making AI deployment easier and more scalable for healthcare enterprises.</p><p>Learn more about Sectra Amplifier at <a href="https://medical.sectra.com/product/sectra-amplifier-marketplace/">https://medical.sectra.com/product/sectra-amplifier-marketplace/</a>, and Project MONAI at <a href="https://monai.io/">https://monai.io/</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=287ad6d6e844" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Rapid Deployment of MONAI Application Packages (MAPs) in Radiology Workflows using the “mercure”…]]></title>
            <link>https://monai.medium.com/rapid-deployment-of-monai-application-packages-maps-in-radiology-workflows-using-the-mercure-fe7cfd77acce?source=rss-78140725f336------2</link>
            <guid isPermaLink="false">https://medium.com/p/fe7cfd77acce</guid>
            <category><![CDATA[medical-imaging]]></category>
            <category><![CDATA[mercure]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[nyu-langone]]></category>
            <category><![CDATA[monai]]></category>
            <dc:creator><![CDATA[MONAI Medical Open Network for AI]]></dc:creator>
            <pubDate>Fri, 17 Nov 2023 19:26:21 GMT</pubDate>
            <atom:updated>2023-11-17T19:36:25.970Z</atom:updated>
            <content:encoded><![CDATA[<h3>Rapid Deployment of MONAI Application Packages (MAPs) in Radiology Workflows using the “mercure” Open-Source DICOM Orchestrator</h3><p><strong>Authors: </strong><a href="https://www.linkedin.com/in/jmsocallaghan/"><strong>James O’Callaghan, PhD</strong></a><strong>, </strong><a href="https://www.linkedin.com/in/riccardolattanzi/"><strong>Riccardo Lattanzi, PhD</strong></a><strong>, and </strong><a href="https://www.linkedin.com/in/kai-tobias-block/"><strong>Kai Tobias Block, PhD</strong></a></p><p><a href="https://cai2r.net/"><strong>Center for Advanced Imaging Innovation and Research (CAI2R)</strong></a></p><p>Deploying AI applications into the radiology workflow just got easier with the introduction of <a href="https://github.com/Project-MONAI/monai-deploy/blob/main/guidelines/monai-application-package.md">MONAI Application Package (MAP)</a> support to the <a href="https://mercure-imaging.org/">mercure DICOM Orchestrator</a>. By leveraging mercure’s flexible DICOM routing and processing capabilities, extensive monitoring functions, and its open-source model, organizations can create custom workflows that are tailored to their specific needs. A user-friendly web interface makes it easy to integrate, configure, and administer MAPs and provides visualization of job status and audit trails. MONAI’s deployment standardization approach, combined with the simplicity and flexibility of the open-source mercure software, can expedite the clinical translation of AI models in radiology for a variety of use cases.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/551/1*jjbE-t11GVawPQeGrqxP-g.png" /><figcaption><em>Deploying MAPs using the mercure DICOM orchestration software</em></figcaption></figure><p><strong>Benefits of using mercure to deploy MAPs</strong></p><p>The mercure DICOM orchestrator handles the interactions and requirements of clinical environments when deploying AI models into radiology workflows. Key benefits include:</p><ul><li><strong>Simple integration</strong>: compatible with clinical infrastructure (DICOM compliant), supports cloud deployments, easy to use.</li><li><strong>Flexibility</strong>: versatile processing modules, configurable for a wide range of use cases, scalable as demand increases.</li><li><strong>Supports best clinical practices</strong>: provides notification functionality to reduce delays in the provision of care, monitoring and audit trails, standardization through MAP compatibility.</li><li><strong>Open-source</strong>: highly customizable, vendor agnostic with no required collaboration agreements, unlimited free installations so organizations can deploy AI models without incurring licensing costs.</li></ul><p><strong>Deployment is simple with mercure</strong></p><p>The intuitive web-based interface makes mercure very easy to use. Installation is automatic — by following the <a href="https://mercure-imaging.org/docs/quickstart.html">quick start guide</a>, users can swiftly get up and running with a fully functional test environment on their machine.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7T98itMa-eO5Hf2xyx2FTg.jpeg" /><figcaption><em>mercure’s web-based management interface</em></figcaption></figure><p><strong>MAPs can be configured using an intuitive web interface:</strong></p><p>· <strong>Modules page:</strong> This is where MAPs can be added as a ‘MONAI’ module type.</p><p>· <strong>Targets page:</strong> This is where details of destination devices (e.g., PACS, VNA, etc.) are provided for routing the results generated by a MAP.</p>
<p>· <strong>Rules page:</strong> This is where users specify the MAP to run, the target to send results to, and the filtering criteria that select incoming data for processing. Notifications can be configured to provide alerts and results to radiologists via email and messaging applications.</p><p>· <strong>Queue page:</strong> Enables users to monitor DICOM data sent to mercure for processing and to review audit trails.</p><p><strong>Open-source modules for rapid MAP deployment</strong></p><p>There is a growing catalogue of open-source modules and tutorials so that new mercure users can rapidly deploy their first AI applications using trained open-source models. Modules have been published on <a href="https://hub.docker.com/u/mercureimaging">Docker Hub</a> and can be run out-of-the-box simply by providing the docker tag in the modules page of the mercure user interface. Source code is available in the <a href="https://github.com/mercure-imaging">mercure GitHub repository</a>, giving users templates to build their own MAPs, as described below.</p><p><strong>MONAI Classify</strong></p><p>This module provides a template for developing MAPs that perform classification tasks. It extends the functionality of the <a href="https://monai.io/model-zoo.html">‘lung_nodule_ct_detection’ MONAI bundle</a> so that mercure can send notification emails when lung nodules are detected. It also outputs DICOM images with bounding-box labels around detected nodules.</p><p><a href="https://github.com/mercure-imaging/MAP-monaiclassify">Open-source code is available here</a>. The module can be installed in mercure using the docker tag: <em>mercureimaging/map-monaiclassify</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*j0FG5zMD47B3lHAj6k5SEg.png" /><figcaption><em>Lung nodule detection using MONAI Classify in mercure</em></figcaption></figure><p><strong>MONAI Segment</strong></p><p>This module provides a template for developing MAPs that perform segmentation tasks. It is based on the <a href="https://monai.io/model-zoo.html">‘spleen_ct_segmentation’ MONAI bundle</a>, which segments DICOM CT images and outputs DICOM CT images with spleen labels.</p><p><a href="https://github.com/mercure-imaging/mercure-monaisegment">Open-source code is available here</a>. The module can be installed in mercure using the docker tag: <em>mercureimaging/mercure-monaisegment</em></p>
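<p>Because the modules are plain Docker images, you can optionally pre-fetch them on the mercure host using the tags above (an illustrative step; providing the tag in the Modules page is sufficient):</p><pre>docker pull mercureimaging/map-monaiclassify<br>docker pull mercureimaging/mercure-monaisegment</pre>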
<p><strong>Other mercure modules</strong></p><p>· <a href="https://github.com/mercure-imaging/mercure-totalsegmentator"><strong>TotalSegmentator</strong></a><strong>:</strong> A module to deploy the open-source <a href="https://github.com/wasserth/TotalSegmentator">TotalSegmentator model</a> in mercure for segmentation of 104 classes in CT images.</p><p>· <a href="https://github.com/mercure-imaging/mercure-pyapetnet"><strong>Pyapetnet</strong></a><strong>:</strong> A module to deploy the open-source <a href="https://github.com/gschramm/pyapetnet">pyapetnet model</a> for anatomy-guided PET reconstruction.</p><p>· <a href="https://github.com/mercure-imaging/mercure-exampleinference"><strong>Example Inference</strong></a><strong>:</strong> A tutorial module that performs a simple slice-by-slice U-Net-based segmentation of the prostate and creates a colored segmentation map that is blended with the input MRI images.</p><h3>Deploy your own AI application</h3><p>The quickest way to get started with mercure and deploy your first AI application is to follow the <a href="https://github.com/mercure-imaging/mercure-monaisegment/blob/main/mercure_demonstration-RSNA2023_Rapid_Deployment_How-To.ipynb">‘Rapid deployment How-To’ Jupyter notebook tutorial</a>. This tutorial is part of an educational exhibit (INEE-47) at the RSNA 2023 annual meeting. In six simple steps, mercure is installed, the MONAI Segment module is configured, and segmentation results from the provided test data are displayed, demonstrating the simplicity of deploying AI models with mercure.</p><p><strong>Learn more</strong></p><p><a href="https://mercure-imaging.org/">mercure documentation</a></p><p><a href="https://github.com/mercure-imaging">mercure GitHub repository</a></p><p><a href="https://hub.docker.com/u/mercureimaging">mercure Docker Hub</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=fe7cfd77acce" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Simplifying 3D Medical Imaging with MONAI Auto3DSeg]]></title>
            <link>https://monai.medium.com/simplifying-3d-medical-imaging-with-monai-auto3dseg-4350d73008a7?source=rss-78140725f336------2</link>
            <guid isPermaLink="false">https://medium.com/p/4350d73008a7</guid>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[medical-imaging]]></category>
            <category><![CDATA[monai]]></category>
            <category><![CDATA[auto3dseg]]></category>
            <category><![CDATA[pytorch]]></category>
            <dc:creator><![CDATA[MONAI Medical Open Network for AI]]></dc:creator>
            <pubDate>Thu, 12 Oct 2023 05:31:23 GMT</pubDate>
            <atom:updated>2023-10-17T00:11:18.848Z</atom:updated>
            <content:encoded><![CDATA[<p>In the rapidly evolving field of medical imaging research, developers and researchers play a crucial role in advancing the state of the art. With over 60% of the latest MICCAI conference papers centered on segmentation algorithms for 3D datasets, there’s a growing demand for tools that empower developers to tackle the complex task of 3D medical image segmentation.</p><p>Auto3DSeg is the answer to this demand. Designed with developers in mind, it seamlessly bridges the gap between innovation and practical application, providing an efficient and user-friendly solution. By harnessing the capabilities of MONAI and modern GPU technology, Auto3DSeg empowers developers — both novices and experts — to achieve hassle-free, state-of-the-art performance in 3D medical image segmentation.</p><h3>Understanding MONAI Auto3DSeg</h3><p>Auto3DSeg is a MONAI-native project that aims to demonstrate best practices for common 3D segmentation workflows across several algorithms. Non-expert users can start with only a few lines of code to automatically train models on their 3D CT or MRI data. Expert users get recipes of best practices for segmentation training with MONAI components, allowing them to achieve state-of-the-art baseline segmentation performance, customize it, and build further upon it. Special effort was put into the computational performance of Auto3DSeg, focusing on minimizing training and inference time while maximizing GPU utilization.</p><h3>Key Features:</h3><ul><li><strong>Dataset Analysis</strong>: Auto3DSeg sets the stage for subsequent steps by analyzing the dataset’s intensity, size, and spacing.</li><li><strong>Algorithm Generation</strong>: Algorithm folders are automatically configured based on the initial data assessment.</li><li><strong>GPU Integration</strong>: Built-in GPU support accelerates model training, validation, and inference.</li><li><strong>Hyper-parameter Optimization</strong>: Auto3DSeg refines model parameters for optimal performance and accuracy.</li><li><strong>Model Ensemble</strong>: Auto3DSeg creates and integrates multiple models, enhancing accuracy and reliability.</li></ul>
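<p>To make the “few lines of code” claim concrete, here is a minimal sketch using the AutoRunner class from monai.apps.auto3dseg (the task file paths below are hypothetical placeholders; see the tutorials linked at the end for authoritative usage):</p><pre>from monai.apps.auto3dseg import AutoRunner<br><br># Hypothetical task description: imaging modality, a datalist JSON, and the data root<br>task = {<br>    &quot;modality&quot;: &quot;CT&quot;,<br>    &quot;datalist&quot;: &quot;./task_datalist.json&quot;,<br>    &quot;dataroot&quot;: &quot;./data&quot;,<br>}<br><br>runner = AutoRunner(input=task)  # configure the pipeline from the task description<br>runner.run()  # analyze the data, generate algorithms, train, and ensemble</pre>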
<h3>Real-world Applications</h3><p>Let’s explore Auto3DSeg’s capabilities through recent challenges. A team of NVIDIA researchers successfully applied Auto3DSeg in several recent MICCAI 2023 challenges.</p><p>The team members are <a href="https://www.linkedin.com/in/myronenko/">Andriy Myronenko</a>, <a href="https://www.linkedin.com/in/dong-yang-thu/">Dong Yang</a>, <a href="https://www.linkedin.com/in/yufan-he-991523182/">Yufan He</a>, and <a href="https://www.linkedin.com/in/daguang-xu-2b307863/">Daguang Xu</a>.</p><p><strong>BraTS 2023–Multiple 1st and 2nd Place wins: </strong>Auto3DSeg showcased its abilities, placing 1st or 2nd in multiple BraTS competitions (1st place in Brain Metastases, 1st place in Brain Meningioma, 1st place in BraTS-Africa Glioma, 2nd place in Adult Glioma, 2nd place in Pediatric Glioma). See the leaderboard here: <a href="https://www.synapse.org/#!Synapse:syn51156910/wiki/621282">https://www.synapse.org/#!Synapse:syn51156910/wiki/621282</a></p><p><strong>KiTS 2023–1st Place: </strong>Auto3DSeg excelled in the KiTS 23 segmentation challenge at MICCAI 2023, achieving top-tier performance in 3D kidney segmentation. See the leaderboard here: <a href="https://kits-challenge.org/kits23/#kits23-official-results">https://kits-challenge.org/kits23/#kits23-official-results</a>.</p><p><strong>SEG.A. 2023–1st Place: </strong>Auto3DSeg demonstrated adaptability and robustness by winning the Aorta segmentation challenge at MICCAI 2023. See the leaderboard here: <a href="https://multicenteraorta.grand-challenge.org/">https://multicenteraorta.grand-challenge.org/</a>.</p><p><strong>MVSEG 2023–1st Place: </strong>In the MVSEG23 challenge, Auto3DSeg showcased its versatility by securing the top spot in segmenting mitral valve leaflets from 3D echocardiography volumes. See the leaderboard here: <a href="https://www.synapse.org/#!Synapse:syn51186045/wiki/622048">https://www.synapse.org/#!Synapse:syn51186045/wiki/622048</a>.</p><p>Here are a few images from the segmentation challenges above:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/555/1*1nRwW9jDWSvYkpJSsD1Qwg.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/557/1*gn4t9Rodvfk6Hgy7_i-GOQ.png" /><figcaption><strong>BraTS 2023 Images</strong></figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*jnQxtmooeBEQ_BJj" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/998/0*kpl6T_XjQZhcZFYX" /><figcaption><strong>KiTS 2023 Images</strong></figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ZehzcVvccce0tHlN" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*_2AqzdwhdoLot_MC" /><figcaption><strong>SEG.A. 2023 Images</strong></figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*6UR2rbvt_VmII0iB" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1004/0*99R5FcRKSBWgKrP4" /><figcaption><strong>MVSEG 2023 Images</strong></figcaption></figure><h3>Previous Challenge Achievements</h3><p>Auto3DSeg’s track record includes:</p><ul><li>1st Place in the MICCAI 2022 challenge HECKTOR 2022 for head and neck tumor segmentation in PET/CT images.</li><li>2nd Place in the MICCAI 2022 challenge INSTANCE22 for intracranial hemorrhage segmentation on Non-Contrast head CT (NCCT), ranking first in Dice score.</li><li>2nd Place in the MICCAI 2022 challenge ISLES’22 for ischemic stroke lesion segmentation, ranking first in Dice score.</li></ul><h3>Learn More</h3><p>For a comprehensive understanding of Auto3DSeg, check out our resources:</p><p><a href="https://www.youtube.com/watch?v=wEfLVnL-7D4">YouTube Walkthrough</a>: Dive deeper into Auto3DSeg’s mechanics and advantages.</p><p><a href="https://github.com/Project-MONAI/tutorials/tree/main/auto3dseg">GitHub Tutorials</a>: Explore detailed tutorials to unlock the full potential of this transformative tool.</p><p>By streamlining 3D medical image segmentation with MONAI Auto3DSeg, developers and researchers can make significant strides in medical diagnosis, treatment planning, and research.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4350d73008a7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Project MONAI is excited to announce that its flagship framework, MONAI Core, has reached v1.0]]></title>
            <link>https://monai.medium.com/project-monai-is-excited-to-announce-that-its-flagship-framework-monai-core-has-reached-v1-0-7c2b12b691dd?source=rss-78140725f336------2</link>
            <guid isPermaLink="false">https://medium.com/p/7c2b12b691dd</guid>
            <category><![CDATA[radiology]]></category>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[medical-imaging]]></category>
            <category><![CDATA[monai]]></category>
            <category><![CDATA[pytorch]]></category>
            <dc:creator><![CDATA[MONAI Medical Open Network for AI]]></dc:creator>
            <pubDate>Mon, 26 Sep 2022 15:32:33 GMT</pubDate>
            <atom:updated>2022-09-26T15:32:33.857Z</atom:updated>
            <content:encoded><![CDATA[<p>An exciting journey started three years ago when NVIDIA and King’s College London came together during MICCAI 2019 and formed Project MONAI as an initiative to develop a standardized, user-friendly, and open-source platform for Deep Learning in Medical Imaging. Soon after that, they established the MONAI Advisory Board and Working Groups with representatives from Stanford University, National Cancer Institute, DKFZ, TUM, Chinese Academy of Sciences, University of Warwick, Northwestern University, Kitware, and Mayo Clinic.</p><p>Throughout this journey, MONAI has deepened its offering in radiology, expanded to pathology, and most recently, included support for streaming modalities starting with endoscopy. Now, three years later, MONAI has over 600,000 downloads. It is used in over 450 GitHub projects, has been cited in over 150 published papers, and academic and industry leaders are using MONAI in their research and clinical workflows.</p><p>We’re excited to announce that MONAI is <a href="https://developer.nvidia.com/blog/open-source-healthcare-ai-innovation-continues-to-expand-with-monai-1-0/">continuing to expand open-source healthcare AI innovation</a> with v1.0. With a focus on providing a robust API that is designed for backward compatibility, this release ensures that you can integrate MONAI into your projects today and benefit from the stability of an industry-leading framework into the future.</p><p>Let’s look at the features included in the MONAI Core v1.0 and MONAI Label v0.5 releases, and a new initiative called the MONAI Model Zoo.</p><h3>MONAI Core v1.0</h3><p>With the release of v1.0, MONAI Core focuses heavily on a robust and backward-compatible API design and also includes additional features like MetaTensors, a Federated Learning API, the MONAI Bundle Specification, and the Auto3D Segmentation framework.</p><h3>MetaTensor</h3><p>MetaTensor enables a metadata-aware image processing pipeline by integrating torch tensors with imaging meta-information. This combined information is essential for delivering clinically useful models, supporting image registration, and joining multiple models into a cohesive workflow.</p><p><a href="https://docs.monai.io/en/stable/data.html#metatensor">MONAI MetaTensor Docs</a></p>
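<p>As a quick illustration (a minimal sketch, not exhaustive of the API), a MetaTensor behaves like a torch tensor while carrying its metadata along:</p><pre>import torch<br>from monai.data import MetaTensor<br><br># A channel-first 3D volume with an identity affine and some illustrative metadata<br>img = MetaTensor(torch.rand(1, 64, 64, 32),<br>                 affine=torch.eye(4),<br>                 meta={&quot;filename_or_obj&quot;: &quot;example.nii.gz&quot;})<br><br>crop = img[:, :32, :32, :16]  # tensor operations work as usual...<br>print(crop.meta[&quot;filename_or_obj&quot;])  # ...and the metadata travels with the result</pre>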
<h3>MONAI Bundle</h3><p>The MONAI Bundle is a self-contained model package with pre-trained weights and all associated metadata abstracted through JSON- and YAML-based configurations. By focusing on ease of use and flexibility, you can directly override or customize these configs or utilize a hybrid programming model that supports config-to-Python-code abstraction.</p><p><a href="https://docs.monai.io/en/stable/bundle_intro.html">MONAI Bundle Docs</a></p><h3>Federated Learning</h3><p>The MONAI Federated Learning module provides a base API that defines a MONAI Client App that can run on any federated learning platform. With the new federated learning APIs, you can utilize MONAI bundles and seamlessly extend them to the federated learning paradigm.</p><p>The first platform to support these new Federated Learning APIs is <a href="https://developer.nvidia.com/flare">NVIDIA FLARE</a>, the federated learning platform developed by NVIDIA. We welcome the integration of other federated learning toolkits with the MONAI Federated Learning APIs to help build a common foundation for collaborative learning in medical imaging.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/758/0*sT-qLVoEpy4gMQXh" /></figure><p><em>MONAI and Federated Learning high-level workflow using the new MONAIAlgo FL APIs</em></p><p><a href="https://docs.monai.io/en/latest/fl.html">MONAI Federated Learning Docs</a></p><p><a href="https://github.com/NVIDIA/NVFlare/tree/2.2/integration/monai">NVIDIA FLARE + MONAI Example</a></p><h3>Auto3D Segmentation</h3><p>Auto3D is a low-code framework that allows data scientists and researchers of any skill level to train models that can quickly segment regions of interest in data from 3D imaging modalities like CT and MRI.</p><p>Developers can start with as little as 1–5 lines of code, resulting in a highly accurate segmentation model. By focusing on accuracy and including state-of-the-art models like Swin UNETR, DiNTS, and SegResNet, data scientists and researchers can utilize the latest and greatest algorithms to help maximize their productivity.</p><p><a href="https://github.com/Project-MONAI/tutorials/tree/main/auto3dseg">Auto3D Tutorial</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*MDwH9LlsVq4J8bou" /></figure><p><em>Auto3D Segmentation Training and Inference workflow</em></p><h3>MONAI Model Zoo</h3><p>We’re excited to announce the <a href="https://monai.io/model-zoo">MONAI Model Zoo</a>, a hub for sharing pre-trained models that allows data scientists and clinical researchers to jump-start their AI development.</p><p>In the first release, there are 15 pre-trained models from MONAI partners, including King’s College London, Charité University, University of Warwick, Vanderbilt University, and Mayo Clinic.</p><p>These models utilize the MONAI Bundle specification, making it easy to get started in just a few commands. With the MONAI Bundle and Model Zoo, we hope to establish a common standard for reproducible research and collaboration, and we welcome everyone to contribute to this effort by <a href="https://github.com/Project-MONAI/model-zoo/blob/dev/CONTRIBUTING.md">submitting</a> their pre-trained models for downstream tasks.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*XXwxns2Ltp2lXQN-" /></figure><p><em>MONAI Model Zoo Landing Page</em></p>
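<p>To give a flavor of those commands, here is a minimal sketch using the monai.bundle Python API (the bundle name is one published in the Model Zoo; exact return values vary by bundle, so treat this as illustrative):</p><pre>import monai.bundle<br><br># Download a published bundle from the Model Zoo into ./zoo<br>monai.bundle.download(name=&quot;spleen_ct_segmentation&quot;, bundle_dir=&quot;./zoo&quot;)<br><br># Load the bundle&#39;s pre-trained weights (see the bundle docs for config-driven training/inference)<br>weights = monai.bundle.load(name=&quot;spleen_ct_segmentation&quot;, bundle_dir=&quot;./zoo&quot;)</pre>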
<h3>MONAI Label v0.5</h3><p>MONAI Label has been updated to support MONAI Core v1.0 and continues to evolve. For radiology, we’ve focused on improving overall performance and released a new vertebra model. For endoscopy, we’ve continued to improve the CVAT viewer integration for annotation and released three new models.</p><h3>Radiology</h3><p>In this release, MONAI Label focuses heavily on improving the overall performance of radiology applications.</p><p>By utilizing caching for pre-transforms in the case of repeated inference for interaction models, we were able to speed up the overall interactive loop. There is also support for DICOM Web API responses and fixes to the DICOM Proxy for WADO and QIDO.</p><p>Additionally, MONAI Label has released a new <a href="https://github.com/Project-MONAI/MONAILabel/tree/main/sample-apps/radiology#multistage-vertebra-segmentation">Multi-Stage Vertebra Segmentation model</a> with three stages that can be used together or independently. The Vertebra model demonstrates the power of a multi-stage approach for segmenting several structures on a CT image.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*SfMyqvzmmM_bDnho" /></figure><p><em>MONAI Label’s new Vertebra model segments several structures in CT images, shown running in 3D Slicer.</em></p><h3>Endoscopy</h3><p>MONAI Label now supports 2D segmentation for endoscopy. Continuing to expand on the previous CVAT integration, MONAI Label has integrated active learning into CVAT’s automated workflow.</p><p>Three new models are being released: a Tool Tracking segmentation model, an InBody vs. OutBody de-identification classification model, and DeepEdit for interactive tool annotation.</p><p><a href="https://github.com/Project-MONAI/MONAILabel/tree/main/sample-apps/endoscopy">MONAI Label Endoscopy Sample Applications</a></p><h3>Conclusion</h3><p>This is a momentous milestone for Project MONAI, and we look forward to further serving the medical imaging community. We want to hear your feedback! Connect with us on <a href="https://forms.gle/QTxJq3hFictp31UM9">Slack</a> and <a href="https://github.com/Project-MONAI">GitHub</a>. Please share your successes and report any issues you might have with MONAI.</p><p>Interested in joining the MONAI Community? Get started on our <a href="https://www.youtube.com/c/Project-MONAI">MONAI YouTube Channel</a>, where we have tutorials, archived bootcamps, and walkthrough guides.</p><p>Stay tuned for the latest news on our hosted events! Whether you’re new to MONAI or already integrating MONAI into your workflow, the <a href="https://monai.io/">MONAI Website</a> and <a href="https://twitter.com/ProjectMONAI">Twitter</a> account are the best places to stay up to date!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7c2b12b691dd" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[MONAI continues its Pathology integration and defines a new MONAI Bundle Model Sharing Standard]]></title>
            <link>https://monai.medium.com/monai-continues-its-pathology-integration-and-defines-a-new-monai-bundle-model-sharing-standard-4e0b6d8a11b5?source=rss-78140725f336------2</link>
            <guid isPermaLink="false">https://medium.com/p/4e0b6d8a11b5</guid>
            <category><![CDATA[medical-imaging]]></category>
            <category><![CDATA[monai]]></category>
            <category><![CDATA[pathology]]></category>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[pytorch]]></category>
            <dc:creator><![CDATA[MONAI Medical Open Network for AI]]></dc:creator>
            <pubDate>Wed, 13 Jul 2022 19:08:19 GMT</pubDate>
            <atom:updated>2022-07-13T22:20:41.173Z</atom:updated>
            <content:encoded><![CDATA[<p>Project MONAI releases MONAI Core v0.9, MONAI Label v0.4, MONAI Deploy App SDK v0.4, and MONAI Deploy Informatics Gateway v0.2.</p><p>MONAI focuses on two main topics for these releases: MONAI for Pathology and MONAI Bundles.</p><p>First, MONAI is helping create a starting point for Pathologists, Data Scientists, and Researchers who want to get started using Deep Learning in their Pathology workflow. Currently, Deep Learning is not widely used in Pathology for various reasons, including a lack of integration into the typical workflow and general performance limitations.</p><p>Project MONAI is helping address these issues with transforms, data loaders, and network architectures in MONAI Core, and with integration into Pathology viewers for AI-assisted annotation with MONAI Label.</p><p>Next, the MONAI Bundle format defines a portable and standardized packaging format for storing and sharing models. By creating a standard layout and supporting MONAI Bundles throughout each framework, you’ll be able to seamlessly utilize MONAI Bundles across annotation, training, and deployment.</p><p><strong>MONAI Core v0.9</strong></p><p>Highlights include:</p><ul><li>MONAI Bundle</li><li>Object Detection in Medical Images</li><li>Swin Transformers and the Swin UNETR Architecture</li><li>DeepEdit and NuClick for Pathology</li><li>MetaTensor API Preview</li></ul><p>MONAI Bundles include all the information necessary for a model development life cycle, including training, fine-tuning, and inference. The MONAI Bundle API is defined within the `monai.bundle` module namespace and is an easy-to-use API that separates deep learning hyperparameter settings from code and decouples component details from the higher-level learning paradigms.</p><p>This release also contains the essential components for object localization and categorization workflows. These modules include handling of 2D and 3D bounding boxes, network blocks and architectures for RetinaNet, and common utility modules.</p><p>This release adds or updates three networks: Swin UNETR, DeepEdit, and NuClick. <a href="https://arxiv.org/abs/2201.01266">Swin UNETR</a> is a state-of-the-art transformer model inspired by the previous success of the UNETR model included in a prior release of MONAI Core, and <a href="https://arxiv.org/abs/2005.14511">NuClick</a> is a CNN-based framework for interactive segmentation of objects in histology images, used as a sample application with MONAI Label.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*44t6WkoEhMoi36q6bVWqWQ.png" /><figcaption>Swin UNETR Architecture</figcaption></figure><p>Last, a preview release of the MetaTensor API is being introduced. The MetaTensor refactors MONAI’s data representation to carry the metadata associated with the primary imaging modalities, metadata that is essential in many biomedical imaging applications.</p>
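<p>As a rough illustration of the preview API, a MetaTensor behaves like a regular PyTorch tensor while carrying its metadata along with it. The sketch below is minimal, and the metadata key is illustrative:</p><pre>import torch
from monai.data import MetaTensor

# a MetaTensor is a torch.Tensor subclass that carries imaging metadata
image = MetaTensor(
    torch.rand(1, 96, 96, 48),                    # channel-first 3D volume
    affine=torch.eye(4),                          # spatial affine travels with the data
    meta={"filename_or_obj": "example.nii.gz"},   # illustrative metadata entry
)

print(image.shape)   # torch.Size([1, 96, 96, 48])
print(image.affine)  # preserved and updated through MONAI transforms</pre>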
<p>For full release notes, check out the <a href="https://docs.monai.io/en/latest/whatsnew_0_9.html">MONAI Core release notes</a>.</p><p><strong>MONAI Label v0.4</strong></p><p>Highlights include:</p><ul><li>Pathology Sample Applications for Segmentation Nuclei, DeepEdit Nuclei, and NuClick</li><li>DSA, QuPath, and CVAT Integration</li><li>GraphCut and GMM-based methods for Scribbles</li></ul><p>Three new Pathology-based sample applications, Segmentation Nuclei, DeepEdit Nuclei, and NuClick, can be used as an easy starting point or as the basis for your custom annotation application.</p><p>MONAI Label now integrates with three pathology viewers: QuPath, Digital Slide Archive (DSA), and CVAT. You can choose your favorite viewer and start using MONAI Label with it today.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*TfXASNgae4ehLujD" /><figcaption><em>Pathology Viewer Integration for QuPath, DSA, and CVAT</em></figcaption></figure><p>If you’re interested in contributing feedback on the direction of our Pathology work in MONAI, we’ve created a survey to help understand the community’s needs.</p><p>For full release notes, check out the <a href="https://docs.monai.io/projects/label/en/latest/whatsnew.html">MONAI Label release notes</a>.</p><p><strong>MONAI Deploy App SDK v0.4</strong></p><p>Highlights include:</p><ul><li>MONAI Bundle inference operator</li><li>Multi-Model support</li></ul><p>With the introduction of MONAI Bundles across all of the Project MONAI frameworks, MONAI Deploy App SDK makes it easy to take those MONAI Bundles and quickly create your own AI application. The MONAI Bundle inference operator enables using a MONAI-trained model in an inference app with minimal or no coding. It supports both in-memory and file I/O, covering cases where app developers or model trainers only need to test the inference logic with a single-operator app.</p><p>We’ve also tested and confirmed multi-model support. Each bundle operator is designed to support a single model bundle, but an application can include multiple uniquely named model bundles.</p><p>For full release notes, check out the <a href="https://docs.monai.io/projects/monai-deploy-app-sdk/en/latest/release_notes/v0.4.0.html">MONAI Deploy release notes</a>.</p><p>With these new releases, Project MONAI continues to expand into new fields like Pathology while also making it easier to create reproducible and standardized ways to share medical AI models. These efforts continue to create an environment for data scientists, researchers, clinicians, and pathologists to integrate deep learning into their workflows.</p><p>Join the MONAI Community on <a href="https://github.com/Project-MONAI">GitHub</a> or <a href="https://forms.gle/QTxJq3hFictp31UM9">Slack</a>.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Pathology Image Labeling Comes to MONAI]]></title>
            <link>https://monai.medium.com/pathology-image-labeling-comes-to-monai-a033e200e587?source=rss-78140725f336------2</link>
            <guid isPermaLink="false">https://medium.com/p/a033e200e587</guid>
            <category><![CDATA[qupath]]></category>
            <category><![CDATA[pytorch]]></category>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[digital-slide-archive]]></category>
            <category><![CDATA[pathology]]></category>
            <dc:creator><![CDATA[MONAI Medical Open Network for AI]]></dc:creator>
            <pubDate>Tue, 14 Jun 2022 14:03:09 GMT</pubDate>
            <atom:updated>2022-06-21T18:16:46.609Z</atom:updated>
            <content:encoded><![CDATA[<p>Project MONAI is continuing to expand into the field of Pathology. With the release of MONAI Label v0.4, we have new features, sample applications, and viewer integrations that will help you get started with annotating Pathology images.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FJy5VTqX4_jo&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DJy5VTqX4_jo&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/f9c5201f83be56bd50efcacc78562b85/href">https://medium.com/media/f9c5201f83be56bd50efcacc78562b85/href</a></iframe><h3>Viewer Integration</h3><p>MONAI Label now integrates with three pathology viewers: <a href="https://github.com/Project-MONAI/MONAILabel/tree/main/plugins/qupath">QuPath</a>, Digital Slide Archive (<a href="https://github.com/Project-MONAI/MONAILabel/tree/main/plugins/dsa">DSA</a>), and <a href="https://github.com/Project-MONAI/MONAILabel/tree/main/plugins/cvat">CVAT</a>. You can choose your favorite viewer and quickly start labeling pathology images today.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*6p9ArcxnHc2VXs81" /><figcaption>Pathology Viewer Integration for QuPath, DSA, and CVAT</figcaption></figure><h3>Sample Applications</h3><p>MONAI Label also provides new <a href="https://github.com/Project-MONAI/MONAILabel/tree/main/sample-apps/pathology">Pathology-based sample applications</a> that can be used as an easy starting point or as the basis for your custom annotation application.</p><p>These three applications are Segmentation Nuclei, DeepEdit Nuclei, and NuClick.</p><p>Segmentation Nuclei provides a solid basis for multi-label segmentation, with labels for Neoplastic, Inflammatory, Connective/Soft Tissue, Dead, and Epithelial cells; however, it doesn’t support interactive segmentation.</p><p><a href="https://arxiv.org/pdf/2203.12362.pdf">DeepEdit</a> Nuclei combines interactive and automatic segmentation, merging all of the labels from the standard segmentation model into a single Nuclei label.</p><p>Last is an implementation of <a href="https://arxiv.org/abs/2005.14511">NuClick</a>, a U-Net-based approach that speeds up the collection of annotations for microscopic objects while requiring minimal interaction from the annotator.</p><h3>Performance</h3><p>One of the most significant pain points in the Pathology workflow is the performance of loading images and performing inference. MONAI Label focuses on performance throughout the Pathology workflow, improving time to inference on patches and whole slide images and offering the ability to use <a href="https://github.com/rapidsai/cucim">RAPIDS cuCIM</a> to speed up the loading of images.</p><h3>Inference Performance</h3><p>For performance on patches and whole slide images, both NuClick and DeepEdit perform significantly faster than typical machine learning nuclei detection. We measured performance with varying numbers of nuclei in the image, and DeepEdit runs up to six times faster than the standard NucleiDetection algorithm, as shown below. This performance increase can mean the difference between 3 minutes of GPU-based inference and 20 minutes of CPU-based inference.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*914T4Z-d2w4TPx7D" /><figcaption># of Nuclei Inference Performance</figcaption></figure><h3>RAPIDS cuCIM</h3><p>A significant portion of a typical deep learning pipeline is spent on I/O. This is especially true for Pathology, since images are typically extremely large and the process of loading and decoding them can become a significant bottleneck. That’s why MONAI has integrated <a href="https://github.com/rapidsai/cucim">RAPIDS cuCIM</a> as one of its optional image loaders.</p>
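<p>As a feel for what this looks like in code, the sketch below reads a patch from a whole slide image with MONAI’s WSIReader using the cuCIM backend; the slide path and patch coordinates are placeholders:</p><pre>from monai.data import WSIReader

# open a whole slide image with the cuCIM backend; "slide.svs" is a placeholder
reader = WSIReader(backend="cucim")
wsi = reader.read("slide.svs")

# extract a 256x256 patch at full resolution (level 0)
patch, meta = reader.get_data(wsi, location=(10000, 10000), size=(256, 256), level=0)
print(patch.shape)  # channel-first RGB patch, e.g. (3, 256, 256)</pre>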
<p>Below you’ll see a performance increase of up to 6x, depending on the number of threads used to load an image, compared to an alternative library like OpenSlide.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2s18kjGzbV5p4bB_io1qhA.png" /><figcaption>SVS File Loading Performance for OpenSlide vs. cuCIM</figcaption></figure><h3>Pathology Adoption of Deep Learning</h3><p>MONAI Label is creating a starting point for Pathologists and Data Scientists to work together and utilize the benefits of Deep Learning. By enabling a workflow that integrates directly into a Pathologist’s viewer and allowing for continuous learning, MONAI aims to accelerate the adoption of Deep Learning in Pathology.</p><blockquote>“MONAI Label will enable pathologists and scientists to build accurate models without knowing anything about AI. This is an important step in making AI a universal tool for research.”</blockquote><blockquote>— Lee A.D. Cooper, PhD — Associate Professor of Pathology, Director, Computational Pathology, Director, Center for Computational Imaging and Signal Analytics, @ Northwestern University Feinberg School of Medicine</blockquote><p>If you’re interested in contributing feedback on the direction of our Pathology work in MONAI, we’ve created a <a href="https://docs.google.com/forms/d/e/1FAIpQLSc7OAd87S2Ow_j5zgzZP1w-Vj0wcscgDtEkcAK3hwOaj1O82A/viewform">survey to help understand the community’s needs</a>.</p><p><em>Citations:</em></p><p><em>Diaz-Pinto, Andres &amp; Alle, Sachidanand &amp; Ihsani, Alvin &amp; Asad, Muhammad &amp; Nath, Vishwesh &amp; Pérez-García, Fernando &amp; Mehta, Pritesh &amp; Li, Wenqi &amp; Roth, Holger &amp; Vercauteren, Tom &amp; Xu, Daguang &amp; Dogra, Prerna &amp; Ourselin, Sebastien &amp; Feng, Andrew &amp; Cardoso, Manuel Jorge. (2022). MONAI Label: A Framework for AI-assisted Interactive Labeling of 3D Medical Images.</em></p><p><em>Koohbanani, Navid Alemi, Mostafa Jahanifar, et al. “NuClick: A deep learning framework for interactive segmentation of microscopic images.” Medical Image Analysis 65 (2020): 101771.</em></p><p><em>Jahanifar, Mostafa, Navid Alemi Koohbanani, and Nasir Rajpoot. “NuClick: From clicks in the nuclei to nuclear boundaries.” arXiv preprint arXiv:1909.03253 (2019).</em></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From AutoML powered development to cloud-native deployment, MONAI marches forward with four new…]]></title>
            <link>https://monai.medium.com/from-automl-powered-development-to-cloud-native-deployment-monai-marches-forward-with-four-new-f6009a410e7f?source=rss-78140725f336------2</link>
            <guid isPermaLink="false">https://medium.com/p/f6009a410e7f</guid>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[pytorch]]></category>
            <category><![CDATA[medical-imaging]]></category>
            <category><![CDATA[automl]]></category>
            <category><![CDATA[kubernetes]]></category>
            <dc:creator><![CDATA[MONAI Medical Open Network for AI]]></dc:creator>
            <pubDate>Mon, 29 Nov 2021 20:58:22 GMT</pubDate>
            <atom:updated>2021-11-29T20:58:22.099Z</atom:updated>
            <content:encoded><![CDATA[<h3>From AutoML powered development to cloud-native deployment, MONAI marches forward with four new releases.</h3><p>MONAI has released three new versions of its existing tools and is introducing a new tool to its deployment offering: the MONAI Inference Service. Researchers can quickly deploy and run MONAI Application Packages (MAPs) as scalable, cloud-native inference microservices on an existing Kubernetes cluster.</p><h3><strong>MONAI Core v0.8</strong></h3><p>The first new release is MONAI Core v0.8, which expands on the available learning methods with Self-Supervised and Multi-Instance Learning support. It also includes a new AutoML technique called Differentiable Network Topology Search, or DiNTS, and new visualization techniques for the various transforms already available in MONAI.</p><p>Self-Supervised Learning allows us to utilize unlabeled data by generating pre-trained weights through self-supervised tasks based on different augmentation types. MONAI Core now provides an <a href="https://github.com/Project-MONAI/tutorials/tree/master/self_supervised_pretraining">example tutorial using the TCIA-COVID-19 dataset to generate the pre-trained weights</a>. Those weights are then fine-tuned on the Beyond the Cranial Vault (BTCV) dataset.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wCUFzlXRjJFpl45kWFABkQ.png" /><figcaption>Self-Supervised Learning</figcaption></figure><p>Multi-Instance Learning (MIL) is a supervised learning technique that uses bags of labeled data rather than individually labeled data. MIL is a crucial algorithm for classifying whole slide images (WSI), which can have billions of pixels and require extraordinary computational and annotation resources. MONAI now includes a new network architecture called MILModel, which provides three Multi-Instance Learning modes — mean, max, and attention-based methods. The attention-based methods build on state-of-the-art research that helps account for dependencies in Deep Learning-based Multiple Instance Learning (https://arxiv.org/abs/2111.01556).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*CPZMxL8-Wfd172-lThZjHA.jpeg" /><figcaption>Multi-Instance Learning (MIL) for Whole-Slide Images (WSI)</figcaption></figure><p>DiNTS Neural Architecture Search has been applied to search for high-performance networks for medical image segmentation. The DiNTS method addresses common challenges of large-scale 3D image datasets with a flexible multi-path network topology, high search efficiency, and budgeted GPU memory usage. You can find example notebooks using DiNTS and the Medical Segmentation Decathlon (MSD) datasets to achieve state-of-the-art performance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/795/1*Je-CbssXgm08xq0bsANGpw.png" /><figcaption>Differentiable Neural Network Topology Search (DiNTS)</figcaption></figure><p>The MONAI Core v0.8 release includes a <a href="https://github.com/Project-MONAI/tutorials/blob/master/modules/transform_visualization.ipynb">transform visualization notebook</a> for existing MONAI transforms, including visualizing images with matplotlib via the MONAI matshow3d API, with the TensorBoard-based MONAI plot_2d_or_3d_image API, and with ITKWidgets. It also shows how to blend two images of the same shape.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HxuHx-ua2erHwuhtRnAgqg.png" /><figcaption>Transform visualization using the MONAI matshow3d API</figcaption></figure>
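<p>For a quick feel of the API, here is a minimal sketch that renders a 3D volume as a grid of slices with matshow3d; the volume is synthetic, purely for illustration:</p><pre>import torch
from monai.visualize import matshow3d

# synthetic 3D volume standing in for a CT or MRI scan
volume = torch.rand(64, 64, 64)

# lay out every 8th slice along the last dimension in a matplotlib grid
fig, _ = matshow3d(volume, every_n=8, frame_dim=-1, title="Example volume", show=True)</pre>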
<h3><strong>MONAI Label v0.3</strong></h3><p>The second new release is MONAI Label v0.3, which includes multi-label segmentation for existing applications, increased performance through multi-GPU training support, and a better Active Learning user experience.</p><p>First, multi-label segmentation support updates the existing DeepEdit and DeepGrow networks provided by MONAI Label. It includes an upgraded UI for multi-label tasks, training scripts modified to work with multi-label tasks, and a robust naming and error system for label name and number associations.</p><p>Next, to help increase the performance of the MONAI Label training loop, multi-GPU support has been added to the existing workflows, including an updated data loader. You can now indicate how many GPUs you want to use during training.</p><p>Last, user experience is an integral part of the training process. To enable a simpler and more intuitive Active Learning experience, we’ve added options that let you train specific models and let users skip images if they feel the current image selection isn’t a good one.</p><h3><strong>MONAI Deploy</strong></h3><p>The third new release is MONAI Deploy App SDK v0.2, which includes two new base operators for DICOM interactions: one for DICOM Series Selection and another for exporting DICOM Structured Report SOP instances for classification results.</p><p>Expanding on the MONAI Deploy offerings, a new component called MONAI Inference Service (MIS) has been released. This tool allows researchers to build and deploy a scalable inference server for their data using an existing Kubernetes cluster.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/880/1*NWxKbGl7qXkCu4dcF_RC4A.png" /><figcaption>MONAI Inference Service allows for deployment on Kubernetes using Helm</figcaption></figure><p>Highlights Include:</p><ul><li>Register a MAP in the Helm Charts of MIS.</li><li>Upload inputs via a REST API request and make them available to the MAP container.</li><li>Provision resources for the MAP container.</li><li>Provide outputs of the MAP container to the client who made the request.</li></ul><p>We’ve also included new MONAI Deploy tutorials that walk you through creating a MAP, deploying MIS, and pushing your MAP to MIS to be run in a Kubernetes cluster.</p><p>MONAI Deploy Tutorials: <a href="https://docs.monai.io/projects/monai-deploy-app-sdk/en/latest/getting_started/tutorials/index.html">Web-based</a> or <a href="https://github.com/Project-MONAI/monai-deploy-app-sdk/tree/main/notebooks/tutorials">Jupyter Notebooks</a></p><p>MONAI continues to expand its core capabilities throughout the Medical AI workflow. Get started by checking out the notebooks mentioned in the sections above, or head over to our GitHub repos to start contributing today!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[MONAI v0.6 and MONAI Label v0.1]]></title>
            <link>https://medium.com/pytorch/monai-v0-6-and-monai-label-v0-1-e738556b0a10?source=rss-78140725f336------2</link>
            <guid isPermaLink="false">https://medium.com/p/e738556b0a10</guid>
            <category><![CDATA[image-labeling]]></category>
            <category><![CDATA[medical-imaging]]></category>
            <category><![CDATA[healthcare]]></category>
            <category><![CDATA[pytorch]]></category>
            <category><![CDATA[deep-learning]]></category>
            <dc:creator><![CDATA[MONAI Medical Open Network for AI]]></dc:creator>
            <pubDate>Mon, 19 Jul 2021 18:24:52 GMT</pubDate>
            <atom:updated>2021-07-21T18:35:31.716Z</atom:updated>
            <content:encoded><![CDATA[<h3>MONAI v0.6 and MONAI Label v0.1 Are Now Available — MONAI Label Helps Quickly Create Annotated Datasets and AI Annotation Models</h3><p>MONAI v0.6 and MONAI Label v0.1 are now available! MONAI Label is an intelligent image labeling and learning tool that enables users to create annotated datasets and build AI annotation models quickly.</p><p>We’re excited to announce our latest release of MONAI, version 0.6. We continue to expand our APIs with a new network called UNETR, implemented in PyTorch. We’re also adding new functionality to use existing pre-trained PyTorch models created for NVIDIA Clara Train.</p><p>Alongside our core release, we’re releasing a new project that has officially hit version 0.1, called MONAI Label. MONAI Label is an intelligent open-source image labeling and learning tool that reduces the time and effort of annotating new datasets and enables the adaptation of AI to the task at hand by continuously learning from user interactions and data. We’re providing sample applications that use some of our existing PyTorch models to help you get started quickly.</p><h3>MONAI v0.6</h3><h4>UNETR: Transformers for Medical Image Segmentation</h4><p><a href="https://arxiv.org/abs/2103.10504">UNETR</a> is a transformer-based model for volumetric (3D) medical image segmentation and is currently the state-of-the-art model on the <a href="https://www.synapse.org/#Synapse:syn3193805/wiki/217752">BTCV dataset</a> leaderboard for the task of multi-organ semantic segmentation. UNETR has a flexible implementation that supports various segmentation tasks.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*3dl1vN1UUHD96_Bk" /></figure><p>You can find a tutorial for 3D multi-organ semantic segmentation using UNETR in our <a href="https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/unetr_btcv_segmentation_3d.ipynb">tutorials repo</a>.</p><h4>Decollate Batches</h4><p>We’ve included the ability to decollate batches to simplify post-processing transforms and enable flexible operations on a batch of model outputs. Building on previous work on inverse spatial transforms, decollate is an “inverse” of the PyTorch collate function. Decollating enables post-processing transforms to run on each item independently and allows randomized transforms to be applied to each predicted item in a batch. It also provides inverse operations for data items with different original shapes, since the inverted items are returned in lists instead of tensors.</p><p>A typical decollate batch process is illustrated as follows:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*R9kUYVfJ2VcjbbAa" /></figure><p>You can find a Jupyter notebook tutorial showing a typical decollate workflow <a href="https://github.com/Project-MONAI/tutorials/blob/master/modules/decollate_batch.ipynb">here</a>.</p>
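<p>In code, the idea looks roughly like the following sketch; the transform arguments reflect current MONAI naming and are illustrative:</p><pre>import torch
from monai.data import decollate_batch
from monai.transforms import Activations, AsDiscrete

# a batch of two model outputs (2 items, 1 channel, 8x8)
batch_output = torch.rand(2, 1, 8, 8)

# decollate the batch into a list of per-item tensors...
items = decollate_batch(batch_output)

# ...so post-processing can run on each prediction independently
sigmoid = Activations(sigmoid=True)
threshold = AsDiscrete(threshold=0.5)
processed = [threshold(sigmoid(item)) for item in items]
print(len(processed), processed[0].shape)  # 2 torch.Size([1, 8, 8])</pre>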
<h4>Medical Model ARchive (MMAR) support</h4><p>MONAI now includes Pythonic support for the Medical Model ARchive (MMAR) format provided by NVIDIA Clara Train. By enabling support for this format, developers can now use pre-trained models created for Clara Train directly in MONAI.</p><p>Find all of the Clara Train models on NGC <a href="https://ngc.nvidia.com/catalog/models?orderBy=scoreDESC&amp;pageNumber=0&amp;query=Clara%20Train&amp;quickFilter=&amp;filters=">here</a>. We’ve also included a <a href="https://github.com/Project-MONAI/tutorials/blob/master/modules/transfer_mmar.ipynb">new tutorial</a> showing you how to use one of the pre-trained MMAR models for transfer learning. The results are shown below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/714/0*qBjYWoiP5jcHTt4K" /><figcaption>Training from scratch (green), inference with pre-trained MMAR weights and no training (magenta), training from the MMAR model weights (blue)</figcaption></figure><h4>Metric Enhancement</h4><p>The base API for metrics has been enhanced to support both iteration-based and epoch-based metrics. With support for both methods, MONAI metrics are now more extensible and a great starting point for creating custom metrics. The APIs also support data-parallel computation; with the Cumulative base class, intermediate metric outcomes can be automatically buffered, accumulated, synced across distributed processes, and aggregated for the final results.</p><p>We’ve included a <a href="https://github.com/Project-MONAI/tutorials/blob/master/modules/compute_metric.py">multi-processing computation example</a> that shows how to compute metrics based on saved predictions and labels in a multi-processing environment.</p><h4>CUDA extension</h4><p>MONAI continues to accelerate domain-specific routines in common workflows by introducing C++/CUDA modules as extensions of the PyTorch native implementations. We now provide two ways to build a C++ extension with PyTorch:</p><ul><li>Via `setuptools` for modules including `Resampler`, `Conditional random field (CRF)`, and `Fast bilateral filtering using the permutohedral lattice`.</li><li>Via just-in-time (JIT) compilation for the Gaussian mixture module. JIT compilation allows for dynamic optimization according to the user-specified parameters and the local system environment.</li></ul><h4>Backward compatibility and enhanced CI/CD</h4><p>As we move closer to the MONAI 1.0 release, we’re focusing on creating the proper mechanisms to support fast, collaborative codebase development.</p><p>As a starting point, we’ve created some basic policies for backward compatibility. New utilities are introduced on top of the existing semantic versioning modules and the git branching model. We’re also working on a complete CI/CD solution that is efficient, scalable, and secure.</p><h3>MONAI Label v0.1</h3><p><a href="https://github.com/Project-MONAI/MONAILabel">MONAI Label</a> is a server-client system that facilitates interactive medical image annotation by using AI. As a part of Project MONAI, MONAI Label shares the same principles as MONAI and focuses on being Pythonic, modular, user-friendly, and extensible.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fm2rYorVwXk4&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dm2rYorVwXk4&amp;image=http%3A%2F%2Fi.ytimg.com%2Fvi%2Fm2rYorVwXk4%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/0703f0d8f82f6c7228ece2eff0c5fecd/href">https://medium.com/media/0703f0d8f82f6c7228ece2eff0c5fecd/href</a></iframe><p>Open-source and easy to install, MONAI Label can run locally on a single machine with one or multiple GPUs.
The server and client run on the same machine; multiple users and communication with an external database aren’t currently supported.</p><p>To quickly install and run MONAI Label using DeepEdit, it’s as easy as following the steps below:</p><pre>$ pip install monailabel</pre><pre>$ monailabel datasets --download --name Task02_Heart --output C:\Workspace\Datasets</pre><pre>$ monailabel apps --download --name deepedit_left_atrium --output C:\Workspace\Apps</pre><pre>$ monailabel start_server --app C:\Workspace\Apps\deepedit_left_atrium --studies C:\Workspace\Datasets\Task02_Heart\imagesTr</pre><p>Once you start the MONAI Label Server, by default it will be serving at <a href="http://127.0.0.1:8000/">http://127.0.0.1:8000/</a>. Opening the URL in a browser will provide you with the list of available REST APIs. You can also use the 3D Slicer extension by filling in the <em>MONAI Label Server</em> field with the serving URL.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/638/0*1P1-UySWvCOpFaFq" /></figure><h4>Who should use MONAI Label?</h4><p>MONAI Label focuses on two types of users: researchers and clinicians.</p><p>For researchers, MONAI Label provides an easy way to define a pipeline that facilitates the image annotation process. They can use the provided Slicer MONAI Label plugin or customize their own workflow to process the inputs and outputs sent to the app.</p><p>For clinicians, MONAI Label provides access to a continuously learning AI that will better understand what the end user is trying to annotate.</p><p>MONAI Label comprises the following key components: the MONAI Label Server, MONAI Label Sample Apps, MONAI Label Sample Datasets, and a 3D Slicer viewer extension.</p><h4>MONAI Label Server</h4><p>The MONAI Label server is the main integration point. It provides the <a href="https://github.com/Project-MONAI/MONAILabel/blob/main/docs/images/MONAILabel_API.png">REST API</a> that allows communication between the MONAI Label server and the client (e.g., the Slicer plugin, OHIF plugin, etc.).</p>
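<p>Because every interaction goes through this REST API, a client can be as simple as a few lines of Python. The sketch below queries a running server for its available models; the endpoint path and response fields are assumptions for illustration (the server URL above lists the actual API):</p><pre>import requests

# assumes the MONAI Label server from the quickstart above is running locally;
# the /info/ endpoint and response fields are illustrative assumptions
resp = requests.get("http://127.0.0.1:8000/info/")
resp.raise_for_status()
info = resp.json()
print(info.get("name"), list(info.get("models", {}).keys()))</pre>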
<h4>Sample Apps</h4><p>The included <a href="https://github.com/Project-MONAI/MONAILabel/tree/main/sample-apps">sample apps for MONAI Label</a> are the following:</p><ul><li>Left-atrium semantic segmentation in the heart using both DeepEdit and DeepGrow.</li><li>Spleen semantic segmentation using both DeepEdit and DeepGrow.</li><li>Multiple label segmentation (e.g., <a href="https://github.com/diazandr3s/MONAILabel-Apps/tree/main/segmentation_heart_ventricles">heart ventricles segmentation</a> and <a href="https://github.com/diazandr3s/MONAILabel-Apps/tree/main/segmentation_liver_and_tumor">liver and tumor segmentation</a>).</li></ul><p>These sample applications showcase the speed-up MONAI Label provides when creating your annotation model. For example, with the Spleen application you can begin to utilize your annotation model after only a few images are segmented.</p><p>The figure below compares the “Interactive” vs. “Standard” way of annotating and training.</p><p>In the “Interactive” way, annotation and model training complement each other. The user assists, in the form of clicks, guiding the AI model to better annotate the object of interest. This method lets you start using your annotation model quickly, giving you a significant speedup in the overall annotation process.</p><p>In the “Standard” way, the user relies on classical techniques such as paintbrush or click-based contours to annotate the image. This method requires annotating all images before training, which means a longer wait before you can begin to utilize your model.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*TwraJKMWmtYOwuai" /><figcaption>A comparison of the “Interactive” vs. “Standard” way of training an annotation model, assuming ~10 minutes per image for a skilled user to segment a spleen CT image manually.</figcaption></figure><h4>Sample Datasets</h4><p>MONAI Label uses the <a href="http://medicaldecathlon.com/">Medical Segmentation Decathlon</a> datasets to showcase how easy it is to create MONAI Label Apps using the three different paradigms: DeepGrow, DeepEdit, and automatic segmentation.</p><h4>Annotation Paradigms</h4><p>MONAI Label currently employs the following annotation algorithms:</p><ul><li><a href="https://github.com/Project-MONAI/MONAILabel/wiki/DeepGrow">DeepGrow</a> is a click-based interactive segmentation model, where the user can guide the segmentation with positive and negative clicks. Positive clicks guide the segmentation towards the region of interest, while negative clicks guide the model away from over-segmented areas.</li><li><a href="https://github.com/Project-MONAI/MONAILabel/wiki/DeepEdit">DeepEdit</a> is an algorithm that combines the power of two models in a single architecture. It allows the user to perform inference with a standard segmentation method as well as interactive segmentation using clicks.</li><li><a href="https://github.com/Project-MONAI/MONAILabel/wiki/Automatic-Segmentation">Automatic Segmentation</a> is the non-interactive paradigm available in MONAI Label. It allows the researcher to create a segmentation pipeline using a standard UNet or any <a href="https://github.com/Project-MONAI/MONAI/tree/dev/monai/networks/nets">network available in MONAI</a> (e.g., UNet, HighResNet, ResNet, DynUNet, etc.) to segment images automatically.</li></ul><h4>3D Slicer Extension</h4><p>The 3D Slicer extension handles calls and events created by user interaction and sends them to the MONAI Label server. The current version supports click-based interaction and allows the user to upload images and labels.</p><p>The MONAI Label server also supports other interaction styles, such as closed curves and ROIs. Researchers can modify this plugin to make it more dynamic or customize it to their MONAI Label Apps.</p><h3>Summary</h3><p>We’re excited to continue expanding the portfolio of projects that are part of Project MONAI. We also have two new working groups focused on Deployment and Digital Pathology, so keep an eye out for more releases later this year, including a prototype that focuses on the end-to-end Medical AI lifecycle.</p><hr><p><a href="https://medium.com/pytorch/monai-v0-6-and-monai-label-v0-1-e738556b0a10">MONAI v0.6 and MONAI Label v0.1</a> was originally published in <a href="https://medium.com/pytorch">PyTorch</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>