<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by S.Serdar Helli on Medium]]></title>
        <description><![CDATA[Stories by S.Serdar Helli on Medium]]></description>
        <link>https://medium.com/@serdarhelli?source=rss-e722ae47f29b------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*p_QbAK63x8p6WJ7L</url>
            <title>Stories by S.Serdar Helli on Medium</title>
            <link>https://medium.com/@serdarhelli?source=rss-e722ae47f29b------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 04:01:55 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@serdarhelli/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Fine-Tuning MedGemma-4B-IT on Chest X-Rays (ReXGradient) for Under $5: A Lite Evaluation Experiment]]></title>
            <link>https://serdarhelli.medium.com/fine-tuning-medgemma-4b-it-on-chest-x-rays-rexgradient-for-under-5-a-lite-evaluation-experiment-3fed61fc2d5b?source=rss-e722ae47f29b------2</link>
            <guid isPermaLink="false">https://medium.com/p/3fed61fc2d5b</guid>
            <category><![CDATA[gemma]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[medgemma]]></category>
            <category><![CDATA[fine-tuning]]></category>
            <category><![CDATA[radiology]]></category>
            <dc:creator><![CDATA[S.Serdar Helli]]></dc:creator>
            <pubDate>Sun, 17 Aug 2025 19:48:59 GMT</pubDate>
            <atom:updated>2025-08-17T19:48:59.981Z</atom:updated>
            <content:encoded><![CDATA[<h3>Fine-Tuning MedGemma-4B-IT on Chest X-Rays for Under $5: A Lite Evaluation Experiment</h3><h3>🩺 Introduction</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lTElK3GkrMtkNaSum98bgw.png" /><figcaption>A conceptual illustration of medical visual question answering — where a chest X-ray is paired with AI-driven analysis for interpretability and reasoning.</figcaption></figure><p>AI in healthcare is advancing quickly, but adapting large models to <strong>medical reasoning</strong> often requires huge compute budgets. To explore what’s possible on a shoestring, I ran a <strong>lite fine-tuning experiment</strong> on <strong>MedGemma-4B-IT</strong> — a vision-language model adapted to medicine — using <strong>less than $5 of compute</strong>.</p><p>This was not a full training run: I trained for only <strong>100 steps</strong> and evaluated on just <strong>100 samples</strong>. Still, the model reached ~83% accuracy while producing <strong>step-by-step reasoning outputs</strong>. This proof-of-concept shows how lightweight experiments can help us test ideas before scaling up.</p><h3>🔍 What is MedGemma?</h3><p><strong>MedGemma</strong> is an open-source extension of Google’s <strong>Gemma model</strong> for healthcare tasks. 
Like Gemma, it’s a <strong>vision-language model</strong>, meaning it can process both:</p><ul><li><strong>Images</strong> (e.g., chest X-rays)</li><li><strong>Text</strong> (clinical questions, findings, reports)</li></ul><p>This makes it suitable for medical VQA (Visual Question Answering) and diagnostic reasoning.</p><ul><li>📄 <strong>Reference</strong>:</li><li><em>Gemma: Open Models Based on Gemini Research and Technology</em> (Google DeepMind, 2024)</li><li><em>MedGemma</em> (open-sourced adaptation on Hugging Face: unsloth/medgemma-4b-it)</li></ul><h3>📊 The Datasets: REXGradient-160K &amp; REXVQA</h3><h3>REXGradient-160K</h3><p>A large-scale chest X-ray dataset released by the Rajpurkar Lab and Gradient Health:</p><ul><li><strong>160,000 studies</strong> from <strong>109K patients</strong> across <strong>79 clinical sites</strong></li><li>Includes <strong>273,000 images</strong> and <strong>paired radiology reports</strong></li><li>Designed for robust multimodal AI research</li><li>📄 <strong>Reference</strong>:</li><li><em>ReXGradient-160K: A Large-Scale Multi-Institutional Chest X-ray Dataset to Accelerate Medical AI Research</em> (Rajpurkar Lab &amp; Gradient Health, 2025)</li></ul><h3>REXVQA</h3><p>A benchmark built on top of REXGradient-160K to evaluate <strong>visual question answering</strong> in radiology:</p><ul><li><strong>~653,000 multiple-choice questions</strong> paired with X-rays</li><li>Covers 5 reasoning skills: <strong>presence, location, negation, diagnosis, geometry</strong></li><li>Generated from reports using radiologist-informed GPT-4o prompts, with strong clinical vetting</li><li>📄 <strong>Reference</strong>:</li><li><em>ReXVQA: A Large-scale Visual Question Answering Benchmark for Generalist Chest X-ray Understanding</em> (Rajpurkar Lab, 2025)</li></ul><h3>⚙️ Lite Fine-Tuning Setup</h3><p>Using <a href="https://github.com/unslothai/unsloth">Unsloth</a> + Hugging Face <strong>TRL</strong>, I fine-tuned 
MedGemma-4B-IT:</p><ul><li><strong>LoRA adapters</strong> → only ~0.89% of parameters (~38M) updated</li><li><strong>4-bit precision</strong> → single GPU, memory-efficient</li><li><strong>Prompting</strong> → structured as <em>Findings → Impression → Solution</em> for interpretability</li><li><strong>Training</strong> → 100 steps (~15 minutes runtime) on an A40 GPU (40 GB VRAM)</li><li><strong>Evaluation</strong> → 100 test samples from REXVQA</li></ul><p>💸 <strong>Total cost: under $5</strong>.</p><h3>📈 Lite Results</h3><ul><li><strong>Accuracy</strong>: ~83% on the 100-sample subset</li><li><strong>Outputs</strong>: Clear reasoning steps before answers, e.g.:</li></ul><pre>&lt;start_working_out&gt;<br>Findings: The heart size remains normal. Stable bibasilar scarring noted.<br>Impression: Stable scarring without acute changes.<br>&lt;end_working_out&gt;<br><br>&lt;SOLUTION&gt;<br>C - Stable bibasilar scarring<br>&lt;/SOLUTION&gt;</pre><p>⚠️ <strong>Important</strong>: These are preliminary results — not full benchmarks. 
The small evaluation is meant to illustrate feasibility, not publishable accuracy.</p><h3>🚀 Why This Matters</h3><ul><li><strong>Low barrier to entry</strong> → Quick, cheap experiments can validate ideas before large-scale training.</li><li><strong>Explainability</strong> → Structured reasoning improves trust in clinical contexts.</li><li><strong>Open resources</strong> → Anyone can reproduce or extend this work.</li></ul><h3>📂 Resources</h3><ul><li>📝 Notebook: <a href="https://github.com/SerdarHelli/TuneCraft/blob/main/notebooks/medgemma-4b-it_rexvqa_sft_lite.ipynb">GitHub — TuneCraft: medgemma-4b-it REXVQA SFT Lite</a></li><li>🤗 Model Checkpoint: Hugging Face —<a href="https://huggingface.co/SerdarHelli/medgemma-4b-it_rexvqa_sft"><em> SerdarHelli/medgemma-4b-it_rexvqa_sft</em></a></li><li>📊 Datasets:</li><li><a href="https://huggingface.co/datasets/rajpurkarlab/ReXGradient-160K">REXGradient-160K</a></li><li><a href="https://huggingface.co/datasets/rajpurkarlab/ReXVQA">REXVQA</a></li></ul><h3>💡 Final Thoughts</h3><p>This was a <strong>lite experiment</strong> — 100 steps of training, 100 samples of evaluation, &lt;$5 of compute. Even with such a small setup, MedGemma-4B-IT showed <strong>encouraging reasoning ability</strong>.</p><p>Scaling this to full training and evaluation could unlock powerful, explainable medical AI models. But the takeaway here is simple: <strong>affordable, domain-specific AI is possible today.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3fed61fc2d5b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Facial Emotion Recognition Using ModelArts]]></title>
            <link>https://medium.com/huawei-developers/facial-emotion-recognition-using-modelarts-b464e4fd6f08?source=rss-e722ae47f29b------2</link>
            <guid isPermaLink="false">https://medium.com/p/b464e4fd6f08</guid>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[modelarts]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[huawei]]></category>
            <category><![CDATA[facial-emotion-detection]]></category>
            <dc:creator><![CDATA[S.Serdar Helli]]></dc:creator>
            <pubDate>Thu, 04 Aug 2022 12:16:54 GMT</pubDate>
            <atom:updated>2022-08-08T07:04:50.034Z</atom:updated>
            <content:encoded><![CDATA[<h3>👨‍💻Facial Emotion Recognition Using ModelArts</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Ne0iQMmLXhzXRN2U.jpg" /><figcaption>Welcome to Huawei ModelArts!</figcaption></figure><p>Hello Everyone,</p><p>Today, I will develop a custom Facial Emotion Recognition CNN model using ModelArts and Huawei OBS. First, we will upload our data to a bucket using Huawei OBS. Then, we will train a CNN model on ModelArts. Finally, we will deploy a real-time AI service as an API.</p><h3>Introduction</h3><h4>What is Facial Emotion Recognition?</h4><p>Facial Emotion Recognition is a technology that analyzes emotions from many sources, including images and videos. It is a member of the group of technologies known as “affective computing,” a multidisciplinary area of study on the capacity of computers to recognize and understand affective states and human emotions that frequently relies on Artificial Intelligence.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/588/1*N4bNHOCHJOCo0aG0zidXTA.png" /><figcaption>Emotions</figcaption></figure><p>In this study, we will train a CNN model, which will be VGG19, with custom hyperparameters to recognize facial emotion. Also, we will use the FER2013 dataset to train our model. FER2013 contains approximately 36,000 grayscale facial images of different expressions with seven labels. FER2013 is a well-studied dataset and has been used in ICML competitions and several research papers. It is one of the more challenging datasets, with human-level accuracy only at 65±5%. After training our model, we will build a real-time service. [1]</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/966/1*OtMMGJ545bqHhS1PGX7V9Q.png" /><figcaption>FER2013 Examples</figcaption></figure><h4>What is ModelArts?</h4><p>A one-stop shop for AI development, ModelArts is designed for programmers and data scientists of all levels. 
You can manage full-lifecycle AI workflows and quickly design, train, and deploy models from the cloud to the edge. With important capabilities, including data preparation and auto labeling, distributed training, automated model creation, and one-click workflow execution, ModelArts supports AI creativity and speeds up AI development.</p><p>All phases of AI development are covered by ModelArts, including data processing, model training, and model deployment. ModelArts’ core technologies enable a variety of heterogeneous compute resources, giving developers the freedom to choose and employ resources as needed. TensorFlow, MXNet, and PyTorch are just a few of the well-known open-source AI development frameworks that are supported by ModelArts. Additionally, ModelArts enables you to apply personalized algorithm frameworks that are suited to your needs. For More Information:</p><p><a href="https://support.huaweicloud.com/intl/en-us/productdesc-modelarts/modelarts_01_0001.html">What Is ModelArts?_ModelArts_Huawei Cloud</a></p><h4>A Step-by-Step Implementation</h4><p><a href="#1117"><strong>1. Upload Data To OBS</strong></a>: First, we need to upload our data to our OBS bucket to train our custom model. There are various ways to upload data, but I chose OBS Utils because I find it more manageable.</p><p>· In the first step, we must create an access key to access OBS Utils.</p><p>· Then, we will create a bucket to store our data.</p><p>· Finally, we will upload our FER data using OBS Utils.</p><p><a href="#e3b0"><strong>2. 
Training a Custom Model Using ModelArts</strong></a>: After we upload our data, we can start training our custom model using ModelArts.</p><p>· First, we need to write our training code.</p><p>· Then, we will create a requirements text file to set up the necessary libraries.</p><p>· In this step, we will upload our code and requirements file to the bucket we created.</p><p>· In the final step, we will train our model using ModelArts Training Jobs.</p><p><a href="#70b8"><strong>3. Building Real-Time Service Using ModelArts</strong></a>: Now, we can build a real-time service with our trained model.</p><p>· We need to write our inference code to configure and develop the service.</p><p>· Finally, we will build the API service for Facial Emotion Recognition using ModelArts.</p><h3>1. Upload Data To OBS</h3><h4>Creating an Access Key</h4><p>First of all, we need an access key and secret key to access OBS using development tools such as APIs, the CLI, and SDKs. On the management console, hover over the username in the upper right corner and choose <strong>My Credentials</strong> from the drop-down list.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1013/1*Y-pDNnfg-ztvHJR5C-HVWg.png" /><figcaption>Management Console</figcaption></figure><p>Then, we should choose <strong>Access Keys</strong> from the navigation pane. Now, we can create an access key.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hT-aNhtWgHoL1oap8Shaow.png" /><figcaption>Creating an Access Key</figcaption></figure><p>We will click <strong>Create Access Key</strong> and enter the verification code or password. It is essential that we download the access key file and keep it safe. If the download page is closed, we will not be able to download the access key. 
However, we can create a new one.</p><p>We can quickly get our access key, secret key, and access key ID from the CSV file we downloaded.</p><h4>Creating a Bucket</h4><p>Now, we will use Huawei OBS (Object Storage Service). We will create a bucket and load our data into it. Object Storage Service (OBS) is a cloud storage service optimized for storing massive amounts of data. It provides unlimited, secure, and highly reliable storage capabilities at a relatively low cost. On the management panel, click the search field in the upper right corner, write <strong>OBS, </strong>then choose it from the drop-down list.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*RhHIkMjiguI4V7PYG28iYQ.png" /><figcaption>Creating a Bucket</figcaption></figure><p>After clicking the <strong>Create Bucket</strong> button, we will see a panel as follows. In this part, we should remember which region we choose: once a bucket is created, its region cannot be changed, and we will choose the same region in <strong>the ModelArts</strong> part to train our model. I will select the <strong>AP-Singapore</strong> region and name the bucket <strong>ferdatahw</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vB_ScJsMRvhC7if0GJabZw.png" /><figcaption>Creating A Bucket On Console</figcaption></figure><p>Now that the bucket is created, let’s open it and create the folders we will use in the training process and when building the service. On the left navigation panel, let’s click <strong>Objects</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aW-4QS4M45A1kHkd0njUhQ.png" /><figcaption>The Objects Of Bucket</figcaption></figure><p>In this section, we will create three folders. One will hold our data. Another will hold our model’s and API’s code. 
The final one will hold the model we save. I created three folders, named <strong>data, code, and Out </strong>in that order.</p><h4>Uploading Data Using OBS Utils</h4><p>As you remember, we created an access key in the first stage. Now, we will load our data into the bucket we created a few minutes ago. Before uploading, we need to download the dataset to our local machine. As you know, we will develop a Facial Emotion Recognition AI model in this study. I chose the FER2013 dataset because it has many images and its size is not huge. <a href="https://www.kaggle.com/datasets/msambare/fer2013">With this link, you can download it</a>. After downloading the dataset, let’s extract the archive to a local folder of your choice.</p><p>Huawei OBS offers many tools to manage your bucket. For example, you can upload your files manually or use the OBS Utils, the OBS Web Browser, etc. In this part, I will use <strong>the OBS Utils</strong>; it is effortless to use. Let’s download OBSUtil from the Huawei OBS console.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YrND97lK8E38ZrIdtNy_NQ.png" /><figcaption>OBS Tools</figcaption></figure><p><strong>The OBS Utils</strong> is a command line tool for accessing and managing OBS on HUAWEI CLOUD. This tool can perform common configurations on OBS, such as creating buckets, uploading and downloading files/folders, and deleting files/folders. If you are familiar with the command line interface (CLI), the OBSUtil is recommended for batch processing and automated tasks. More information and documentation:</p><p><a href="https://support-intl.huaweicloud.com/en-us/utiltg-obs/obs_11_0001.html">Introduction to obsutil_Object Storage Service_Tools Guide_obsutil_HUAWEI CLOUD</a></p><p>After downloading OBS Utils, you will get an archive. Extract it and open <strong>obsutil.exe</strong>; a command console will open. 
We must enter our access key ID, secret access key, and region endpoint to access OBS. Let’s run the following command in OBSUtil.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/590/1*MwneFCP4MMhrx7At4za1rQ.png" /><figcaption>Command Console of OBS Utils</figcaption></figure><pre><strong>obsutil config -i=your_access_id -k=your_access_key -e=obs.ap-southeast-3.myhuaweicloud.com</strong></pre><p>If your connection is successful, you will get the response “Update config file successfully!”. We are now connected to <strong>OBS</strong>, so we can transfer our data to the bucket we created previously. Let’s upload our data.</p><pre><strong>obsutil cp pathdirectory_yourextracteddata_fer2013 obs://ferdatahw/data/ -f -r</strong></pre><p>For example:</p><pre><strong>obsutil cp C:/Users/pc/Desktop/val obs://ferdatahw/data/ -f -r</strong></pre><pre><strong>obsutil cp C:/Users/pc/Desktop/train obs://ferdatahw/data/ -f -r</strong></pre><p>You can check whether the transfer was successful on the OBS console. We will upload our training code to the bucket manually, since it consists of only a few files that are easy to upload and change by hand.</p><h3>2. Training a Custom Model Using ModelArts</h3><h4>Custom Training Code</h4><p>Let’s write our main training code. When a ModelArts model reads data stored in OBS or outputs data to a specified OBS path, perform the following operations to configure the input and output data. We should parse the input and output paths in the training code. Also, we can parse hyperparameters in the training code. 
As you can see in this figure, we will select the OBS path or dataset path as the training input and the OBS path as the output in ModelArts.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ykhBjGMWs-mwpd1NPl1rAg.png" /><figcaption>Training Inputs On Training Jobs</figcaption></figure><p>Let’s Code!</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a2bfdff6e875cdd7783f3d46d9d1be4b/href">https://medium.com/media/a2bfdff6e875cdd7783f3d46d9d1be4b/href</a></iframe><h4>Creating Requirements Text File</h4><p>We will create a file named<strong> pip-requirements.txt</strong> in the code directory, and specify the names and version numbers of the dependency packages in the file. Before the training boot file is executed, the system automatically installs the specified Python packages:</p><pre>imutils==0.5.4<br>numpy==1.21.6<br>keras&gt;=2.1<br>argparse==1.1</pre><h4>Uploading Training Code And Requirements Text File</h4><p>Finally, we will upload our training boot file and <strong>pip-requirements.txt</strong> into the code directory, which we created in the previous OBS stage. We can upload the files manually: there are only two of them, so we don’t need a tool such as <strong>the OBS Utils</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LY21TwudFjdnprBvSepTZA.png" /><figcaption>OBS Utils Tools</figcaption></figure><h4>Training Using ModelArts Training Jobs</h4><p>Let’s open the <strong>ModelArts</strong> console. In this study, we don’t use <strong>the ExeML tool, ModelArts SDK</strong>, or others. We will do custom training using only TensorFlow. However, there are many different paths you can follow. 
For More Information:</p><p><a href="https://support.huaweicloud.com/intl/en-us/modelarts/index.html">Progressive Knowledge_ModelArts_Huawei Cloud</a></p><p>On the left panel, let’s go to <strong>Training Management </strong>and then choose <strong>Training Jobs</strong> from the drop-down list. Now, we will create a training job. A panel will appear; let’s examine it part by part. For More Information:</p><p><a href="https://support.huaweicloud.com/intl/en-us/engineers-modelarts/en-us_topic_0000001072729016.html">Creating a Training Job_ModelArts_User Guide (Senior AI Engineers)_Training Management (New Version)_Performing a Training_HUAWEI CLOUD</a></p><p>In the first area, we can give our training job a name and description.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/886/1*CW0zfGEzu4WDFgjrkQYIOw.png" /><figcaption>The First Field Of Training Jobs</figcaption></figure><p>In the second part, there are many fields. In the first field<strong>, “Created By,”</strong> we will choose <strong>Custom Algorithms </strong>because we want to do custom training. In the second field of this part, we will choose Preset Images. Then, as you can see, we will choose TensorFlow 2.1 because we will use the TensorFlow framework to train our model in this study. In the <strong>“Code Directory”</strong> field, we will choose the folder we created previously in the OBS stage. In the boot file field, we will select our main training code file.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*U8reoFalkfAvXPlI39Op5A.png" /><figcaption>The Second Field Of Training Jobs</figcaption></figure><p>In this part, we must enter the parameters <strong>training_url </strong>and<strong> data_url</strong>: the path where we will save our model and the path from which we will read our dataset, respectively. 
Also, we can enter <strong>Hyperparameters and Environment Variables</strong> to pass to our code:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kEKxyU3TANlNnrn0LDTYBg.png" /><figcaption>The Third Field Of Training Jobs</figcaption></figure><p>In the final part, we can configure the compute resources as we want. I chose a <strong>GPU </strong>for training.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GeGu40sTHBdIJfkFbhaipg.png" /><figcaption>The Final Field Of Training Jobs</figcaption></figure><p>Now, it is ready to submit. If you go to your training job, you will see the logs of your code.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4vTGiO64viH32Krm_WPg_w.png" /><figcaption>The Logs Of Training Jobs</figcaption></figure><p>We got a <strong>0.65 accuracy score</strong> on the FER2013 data. We can check whether our model was saved in our OBS path.</p><h3>3. Building Real-Time Service using ModelArts</h3><h4>Inference Code</h4><p>Now, we will develop an AI application as a web service. First, our model requires inference code; ensure that the code is stored in the model directory, which is <strong>Out</strong>, where we saved our model before. The file name is fixed to <strong>customize_service.py</strong>. There must be one and only one such file. Our inference code must inherit from the <strong>BaseService</strong> class. The following table lists the import statements of different types of model parent classes. 
For More Information:</p><p><a href="https://support.huaweicloud.com/intl/en-us/engineers-modelarts/modelarts_23_0093.html">Specifications for Compiling Model Inference Code_ModelArts_User Guide (Senior AI Engineers)_Model Package Specifications_HUAWEI CLOUD</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/789/1*w-tu7tWtJMUe7_TwwUI-og.png" /><figcaption>The Import Statements of Different Types of Model Parent Classes</figcaption></figure><p>Let’s code</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/8cef3048f63e906c90de94aca0ac32a9/href">https://medium.com/media/8cef3048f63e906c90de94aca0ac32a9/href</a></iframe><p>Finally, we will upload <strong>the inference code</strong> file into the directory where we saved our model. We can upload the file manually. Then, we will be ready.</p><h4>Building AI Application and a Real-Time Service</h4><p>Now, let’s go back to the ModelArts console. On the left management console, click <strong>AI Application Management</strong> and choose <strong>AI Applications</strong> from the drop-down list. Then, we will create an AI application.</p><p>We will choose OBS on the panel because we saved our model into a bucket and did custom training. Also, in the previous stage, we uploaded the inference code of our model, as you remember. We will select TensorFlow as the AI Engine and choose the version we used for training.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3IXGQ4G20ovC1_gzrxJ0eg.png" /><figcaption>AI Application Console</figcaption></figure><p>In the final stage, we will deploy our real-time service. It is the easiest part. On the left management panel, click <strong>Service Deployment</strong> and choose <strong>Real-Time Services</strong> from the drop-down list. 
Then, we will deploy a real-time service.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NGc-26wLoOwWOCSFG5UZgQ.png" /><figcaption>Real-Time Services Console</figcaption></figure><p>We should select the AI application we built. That’s it. 😊 Cheers!</p><h3>Summary</h3><p>To summarize, this is how you can easily generate your own real-time API service for Facial Emotion Recognition with ModelArts.</p><p>We trained a CNN model on the FER2013 dataset with custom hyperparameters, reaching a 65% accuracy score on the validation dataset. Then, we built a real-time service using the model. In addition, if you want a higher score, you can try data augmentation techniques, adding new datasets, or using transfer learning. This study showed that, with Huawei ModelArts, it is easy to train a model and build a real-time AI service.</p><h3>References</h3><p>Goodfellow, Ian J., et al. “Challenges in representation learning: A report on three machine learning contests.” <em>International Conference on Neural Information Processing</em>. 
Springer, Berlin, Heidelberg, 2013.</p><ul><li><a href="https://support.huaweicloud.com/intl/en-us/productdesc-modelarts/modelarts_01_0001.html">What Is ModelArts?_ModelArts_Huawei Cloud</a></li><li><a href="https://www.kaggle.com/datasets/msambare/fer2013">FER-2013</a></li><li><a href="https://support-intl.huaweicloud.com/en-us/utiltg-obs/obs_11_0001.html">Introduction to obsutil_Object Storage Service_Tools Guide_obsutil_HUAWEI CLOUD</a></li><li><a href="https://support.huaweicloud.com/intl/en-us/modelarts/index.html">Progressive Knowledge_ModelArts_Huawei Cloud</a></li><li><a href="https://support.huaweicloud.com/intl/en-us/engineers-modelarts/en-us_topic_0000001072729016.html">Creating a Training Job_ModelArts_User Guide (Senior AI Engineers)_Training Management (New Version)_Performing a Training_HUAWEI CLOUD</a></li><li><a href="https://support.huaweicloud.com/intl/en-us/engineers-modelarts/modelarts_23_0093.html">Specifications for Compiling Model Inference Code_ModelArts_User Guide (Senior AI Engineers)_Model Package Specifications_HUAWEI CLOUD</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b464e4fd6f08" width="1" height="1" alt=""><hr><p><a href="https://medium.com/huawei-developers/facial-emotion-recognition-using-modelarts-b464e4fd6f08">👨‍💻 Facial Emotion Recognition Using ModelArts</a> was originally published in <a href="https://medium.com/huawei-developers">Huawei Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Basic Classification of Thyroid Tumors on UltraSound Images using Deep Learning Methods]]></title>
            <link>https://serdarhelli.medium.com/the-basic-classification-of-thyroid-tumors-on-ultrasound-images-using-deep-learning-methods-46f812d859ea?source=rss-e722ae47f29b------2</link>
            <guid isPermaLink="false">https://medium.com/p/46f812d859ea</guid>
            <category><![CDATA[medical-imaging]]></category>
            <category><![CDATA[image-processing]]></category>
            <category><![CDATA[grad-cam]]></category>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[thyroid-cancer]]></category>
            <dc:creator><![CDATA[S.Serdar Helli]]></dc:creator>
            <pubDate>Sun, 09 Jan 2022 16:01:19 GMT</pubDate>
            <atom:updated>2022-06-26T10:14:02.110Z</atom:updated>
            <content:encoded><![CDATA[<p>Thyroid cancer is one of the most common endocrine carcinomas. Due to its higher sensitivity and its ability to distinguish between benign and malignant nodules based on pathological features, ultrasonography has become the most widely used modality for detecting and diagnosing thyroid cancer, compared to CT and MRI.</p><p>In this study, the purpose is the classification of thyroid tumors on ultrasound images into 6 different categories:</p><ul><li>1 (Benign)</li><li>2 (Benign)</li><li>4a (Malign)</li><li>4b (Malign)</li><li>4c (Malign)</li><li>5 (Malign)</li></ul><p>To this end, we will develop a deep learning algorithm using labeled ultrasound images and evaluate its performance.</p><p><a href="http://cimalab.intec.co/applications/thyroid/">The National University of Colombia presented an open-access database of thyroid ultrasound images. The dataset consists of a set of B-mode Ultrasound images, including a complete annotation and diagnostic description of suspicious thyroid lesions by expert radiologists.</a> [1]</p><p>Firstly, we will import the libraries we will use.</p><pre><strong>import</strong> os<br><strong>import</strong> xml.etree.ElementTree <strong>as</strong> ET<br><strong>from</strong> natsort <strong>import</strong> natsorted<br><strong>import</strong> pandas <strong>as</strong> pd<br><strong>from</strong> PIL <strong>import</strong> Image<br><strong>import</strong> numpy <strong>as</strong> np<br><strong>import</strong> requests<br><strong>from</strong> zipfile <strong>import</strong> ZipFile<br><strong>from</strong> io <strong>import</strong> BytesIO<br><strong>import</strong> cv2<br><strong>import</strong> matplotlib.pyplot <strong>as</strong> plt<br><strong>import</strong> tensorflow <strong>as</strong> tf<br><strong>import</strong> math<br><strong>import</strong> random<br><strong>from</strong> six.moves <strong>import</strong> xrange<br><strong>import</strong> collections<br><strong>import</strong> 
string</pre><p>Then , we will download and prepare the data.</p><pre><strong>def</strong> download_dataset(save_path):<br>    r <strong>=</strong> requests<strong>.</strong>get(&quot;<a href="http://cimalab.unal.edu.co/applications/thyroid/thyroid.zip">http://cimalab.intec.co/applications/thyroid/thyroid.zip</a>&quot;)<br>    print(&quot;Downloading...&quot;)<br>    z <strong>=</strong> ZipFile(BytesIO(r<strong>.</strong>content))    <br>    z<strong>.</strong>extractall(save_path)<br>    print(&quot;Completed...&quot;)<br><br><em># XML and Jpeg     </em><br><strong>def</strong> to_dataframe(path):<br>    dirs<strong>=</strong>natsorted(os<strong>.</strong>listdir(path))<br>    xml_list<strong>=</strong>[]<br>    img_list<strong>=</strong>[]<br>    <strong>for</strong> i <strong>in</strong> range(len(dirs)):<br>        <strong>if</strong> &#39;.xml&#39; <strong>in</strong> dirs[i]:<br>            xml_list<strong>.</strong>append(dirs[i])<br>        <strong>if</strong> <strong>not</strong> &#39;.xml&#39;  <strong>in</strong> dirs[i]:<br>            img_list<strong>.</strong>append(dirs[i])<br>    xml_list<strong>=</strong>natsorted(xml_list)<br>    img_list<strong>=</strong>natsorted(img_list)<br>    tirads<strong>=</strong>[]<br>    <strong>for</strong> j <strong>in</strong> range(len(xml_list)):<br>        tree <strong>=</strong> ET<strong>.</strong>parse(path<strong>+</strong>&#39;/&#39;<strong>+</strong>xml_list[j])<br>        a<strong>=</strong>tree<strong>.</strong>findall(&quot;./tirads&quot;)<br>        <strong>if</strong> a[<strong>-</strong>1]<strong>.</strong>text<strong>!=None</strong>:<br>            case<strong>=</strong>[xml_list[j],a[<strong>-</strong>1]<strong>.</strong>text]<br>            tirads<strong>.</strong>append(case)<br>    data<strong>=</strong>[]<br>    <strong>for</strong> k <strong>in</strong> range(len(tirads)):<br>        xml<strong>=</strong>tirads[k][0][:<strong>-</strong>4]<br>        <strong>for</strong> z <strong>in</strong> 
range(len(img_list)):<br>            <strong>if</strong> xml<strong>+</strong>&#39;_1.jpg&#39;<strong>==</strong>img_list[z] <strong>or</strong> xml<strong>+</strong>&#39;_2.jpg&#39;<strong>==</strong>img_list[z] <strong>or</strong> xml<strong>+</strong>&#39;_3.jpg&#39;<strong>==</strong>img_list[z]:<br>                m<strong>=</strong>[img_list[z],tirads[k][1]]<br>                data<strong>.</strong>append(m)<br><br>    df <strong>=</strong> pd<strong>.</strong>DataFrame(data,columns <strong>=</strong>[&#39;Jpeg_Name&#39;, &#39;Tirads&#39;])<br>    <strong>return</strong> df</pre><p>The dataset is imbalanced, and several images contain thyroid cancers that are not labeled. In addition, some cases have two thyroid ultrasound images from the same subject. The images were therefore processed before training. In data preparation, we first normalized the images and then cropped each image to its largest contour. Also, some images carry overlaid text about the classification of the thyroid tumors. 
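</p><p>To make the imbalance concrete before training, the label distribution can be counted once the DataFrame is built. The snippet below is a toy sketch using collections.Counter, with made-up counts standing in for df[&quot;Tirads&quot;]; the real distribution comes from the downloaded XML annotations.</p>

```python
from collections import Counter

# hypothetical TI-RADS labels standing in for df["Tirads"];
# the counts here are illustrative, not the real dataset distribution
labels = ["2"] * 8 + ["3"] * 22 + ["4a"] * 18 + ["4b"] * 23 + ["4c"] * 194 + ["5"] * 82

counts = Counter(labels)
# one class dominates: plain accuracy would be a misleading metric here
print(counts.most_common(1))  # [('4c', 194)]
```

<p>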
They were deleted to prevent bias.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/945/1*6IihccifZD5xkq-IU_HYBA.png" /><figcaption>Figure 1 — Original image and the cropped, resized image given to the model</figcaption></figure><p>The code:</p><pre><em># Crop function: pad the bounding box to a square</em><br><strong>def</strong> croping(img,x, y, w, h):<br>    <strong>if</strong> abs(w)<strong>&lt;</strong>abs(h):<br>        img2<strong>=</strong>np<strong>.</strong>zeros([h,h])<br>        img2[:,h<strong>-</strong>w:h]<strong>=</strong>img[y:y<strong>+</strong>h, x:x<strong>+</strong>w]<br>        <strong>return</strong> img2<br>    <strong>elif</strong> abs(h)<strong>&lt;</strong>abs(w):  <br>        img2<strong>=</strong>np<strong>.</strong>zeros([w,w])<br>        img2[w<strong>-</strong>h:w,:]<strong>=</strong>img[y:y<strong>+</strong>h, x:x<strong>+</strong>w]<br>        <strong>return</strong> img2<br>    <strong>else</strong>:<br>        <strong>return</strong> img<br><br><strong>def</strong> convert_one_channel(img):<br>    <em># some images have 3 channels although they are grayscale</em><br>    <strong>if</strong> len(img<strong>.</strong>shape)<strong>&gt;</strong>2:<br>        img<strong>=</strong>img[:,:,0]<br>        <strong>return</strong> img<br>    <strong>else</strong>:<br>        <strong>return</strong> img<br><br><em># Remove the fill area from the image and resize</em><br><strong>def</strong> crop_resize(path,resize_shape):<br>    img<strong>=</strong>plt<strong>.</strong>imread(path)<br>    img<strong>=</strong>convert_one_channel(np<strong>.</strong>asarray(img))    <br>    kernel <strong>=</strong>( np<strong>.</strong>ones((5,5), dtype<strong>=</strong>np<strong>.</strong>float32))<br>    ret,thresh <strong>=</strong> cv2<strong>.</strong>threshold(img, 0, 255, cv2<strong>.</strong>THRESH_BINARY)<br>    thresh <strong>=</strong> thresh<strong>.</strong>astype(np<strong>.</strong>uint8)<br>    a1,b1<strong>=</strong>thresh<strong>.</strong>shape<br>    
thresh<strong>=</strong>cv2<strong>.</strong>morphologyEx(thresh, cv2<strong>.</strong>MORPH_OPEN, kernel,iterations<strong>=</strong>3 )<br>    thresh<strong>=</strong>cv2<strong>.</strong>erode(thresh,kernel,iterations <strong>=</strong>5)<br>    contours, hierarchy <strong>=</strong> cv2<strong>.</strong>findContours(thresh<strong>.</strong>copy(), cv2<strong>.</strong>RETR_TREE, cv2<strong>.</strong>CHAIN_APPROX_SIMPLE)<br>    c_area<strong>=</strong>np<strong>.</strong>zeros([len(contours)])<br>    <strong>for</strong> i <strong>in</strong> range(len(contours)):<br>        c_area[i]<strong>=</strong> cv2<strong>.</strong>contourArea(contours[i]) <br>    cnts<strong>=</strong>contours[np<strong>.</strong>argmax(c_area)]    <br>    x, y, w, h <strong>=</strong> cv2<strong>.</strong>boundingRect(cnts)<br>    roi <strong>=</strong> croping(img, x, y, w, h)<br>    roi<strong>=</strong>cv2<strong>.</strong>resize(roi,(resize_shape),interpolation<strong>=</strong>cv2<strong>.</strong>INTER_LANCZOS4)<br>    <strong>return</strong> roi<br><br><br><em># TO Data Matrix</em><br><strong>def</strong> to_imgmatrix(resize_shape,path,df):<br>    path<strong>=</strong>path<strong>+</strong>&#39;/&#39;  <br>    images<strong>=</strong>crop_resize(path<strong>+</strong>df[&quot;Jpeg_Name&quot;][0],resize_shape)<br>    <strong>for</strong> i <strong>in</strong> range (1,len(df[&quot;Jpeg_Name&quot;])):<br>        img<strong>=</strong>crop_resize(path<strong>+</strong>df[&quot;Jpeg_Name&quot;][i],resize_shape)<br>        images<strong>=</strong>np<strong>.</strong>concatenate((images,img))<br>    images<strong>=</strong>np<strong>.</strong>reshape(images,(len(df[&quot;Jpeg_Name&quot;]),resize_shape[0],resize_shape[1],1))<br>    <strong>return</strong> images<br><br><strong>def</strong> prepare_data(path,resize_shape):<br>    df<strong>=</strong>to_dataframe(path)<br>    data<strong>=</strong>to_imgmatrix(resize_shape,path,df) <br>    <strong>return</strong> 
df,data<br></pre><pre>download_dataset(&quot;/content/Data&quot;)</pre><pre># We resize the images to 256x256</pre><pre>df,data<strong>=</strong>prepare_data(&quot;/content/Data&quot;,(256,256))</pre><p>Let’s see df:</p><pre>df<strong>.</strong>head()</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/174/1*5REKs72visVGoC7iRbWw0g.png" /><figcaption>Figure 2 — df.head() results</figcaption></figure><p>We need <strong>y</strong> as categorical (one-hot) data to give as an input to the model, so the code:</p><pre><em># We need numeric categories</em><br><strong>def</strong> to_categoricalmatrix(df):<br>    <em># There are only a few categories, so they are handled manually</em><br>    Y<strong>=</strong>np<strong>.</strong>zeros([len(df[&quot;Tirads&quot;])])<br>    <strong>for</strong> i <strong>in</strong> range(len(df[&quot;Tirads&quot;])):<br>        <strong>if</strong> df[&quot;Tirads&quot;][i]<strong>==</strong>&quot;2&quot;:<br>          Y[i]<strong>=</strong>0<br>        <strong>if</strong> df[&quot;Tirads&quot;][i]<strong>==</strong>&quot;3&quot;:<br>          Y[i]<strong>=</strong>1<br>        <strong>if</strong> df[&quot;Tirads&quot;][i]<strong>==</strong>&quot;4a&quot;:<br>          Y[i]<strong>=</strong>2<br>        <strong>if</strong> df[&quot;Tirads&quot;][i]<strong>==</strong>&quot;4b&quot;:<br>          Y[i]<strong>=</strong>3<br>        <strong>if</strong> df[&quot;Tirads&quot;][i]<strong>==</strong>&quot;4c&quot;:<br>          Y[i]<strong>=</strong>4<br>        <strong>if</strong> df[&quot;Tirads&quot;][i]<strong>==</strong>&quot;5&quot;:<br>          Y[i]<strong>=</strong>5<br>    <strong>return</strong> Y</pre><pre><em># to one-hot</em><br>y<strong>=</strong>to_categoricalmatrix(df)<br>y<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>utils<strong>.</strong>to_categorical(y, dtype<strong>=</strong>&#39;float32&#39;)</pre><p>Before training our model, we also need to normalize the images:</p><pre><em># normalize 
function</em><br><strong>def</strong> normalize(data):<br>  <strong>for</strong> i <strong>in</strong> range(len(data)):<br>    data[i,:,:,:]<strong>=</strong>data[i,:,:,:]<strong>*</strong>(1<strong>/</strong>np<strong>.</strong>max(data[i,:,:,:]))<br>  <strong>return</strong> np<strong>.</strong>float32(data)<br>  <br><em># we need to normalize the images</em><br>x<strong>=</strong>normalize(data)</pre><p>Let’s look at random examples from x:</p><pre><strong>import</strong> random<br>random_number<strong>=</strong>random<strong>.</strong>randint(0,len(df[&quot;Tirads&quot;])<strong>-</strong>1)<br>plt<strong>.</strong>figure(figsize <strong>=</strong> (20,10))<br>tit<strong>=</strong>&quot;Classification : &quot;<strong>+</strong>str(df[&quot;Tirads&quot;][random_number])<br>plt<strong>.</strong>title(tit,fontsize <strong>=</strong> 40)<br>plt<strong>.</strong>imshow(x[random_number,:,:,0],cmap<strong>=</strong>&quot;gray&quot;)</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/583/1*4nnNMsUmTkUrwEkufo-D7Q.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/583/1*L7XVIEYdAenluutJx___yg.png" /><figcaption>Figure 3 — Examples of cropped and resized data</figcaption></figure><p>Before training, we need to split the dataset. In total, we have 347 images.</p><pre><em># Splitting into train, validation, and test</em><br>x_train<strong>=</strong>np<strong>.</strong>copy(x[:300,:,:,:])<br>x_test<strong>=</strong>np<strong>.</strong>copy(x[313:,:,:,:])<br>x_valid<strong>=</strong>np<strong>.</strong>copy(x[300:313,:,:,:])<br><br>y_train<strong>=</strong>np<strong>.</strong>copy(y[:300,:])<br>y_valid<strong>=</strong>np<strong>.</strong>copy(y[300:313,:])<br>y_test<strong>=</strong>np<strong>.</strong>copy(y[313:,:])</pre><p>Pre-processing is the initial stage of refining image data, such as removing distortion, so that the data can be processed more effectively. 
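</p><p>The geometric part of the augmentation described next is easy to reason about in isolation: a horizontal flip, for instance, just reverses the width axis of the image array. A plain numpy illustration of the idea (not the Keras preprocessing layer itself):</p>

```python
import numpy as np

# a toy 2x3 single-channel "image"
img = np.array([[1, 2, 3],
                [4, 5, 6]])

# horizontal flip = reverse the columns (the width axis)
flipped = img[:, ::-1]

print(flipped.tolist())  # [[3, 2, 1], [6, 5, 4]]
```

<p>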
We apply several pre-processing methods in this study, including augmentation, which aims to prevent overfitting so that the model can still make accurate predictions even when it encounters small variations.</p><p>The augmentation used is a random rotation with a factor of ±0.2, random zooming in the range of ±0.2, a random contrast of 0.1, and a random horizontal flip.</p><pre><strong>from</strong> tensorflow.keras <strong>import</strong> layers<br><em># Data augmentation to prevent overfitting and improve accuracy</em><br>data_augmentation1 <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>Sequential([<br> layers<strong>.</strong>experimental<strong>.</strong>preprocessing<strong>.</strong>RandomFlip(<br>    &quot;horizontal&quot;),<br>layers<strong>.</strong>experimental<strong>.</strong>preprocessing<strong>.</strong>RandomZoom(height_factor<strong>=</strong>(<strong>-</strong>0.2, 0.2),fill_mode<strong>=</strong>&quot;constant&quot;),<br>layers<strong>.</strong>experimental<strong>.</strong>preprocessing<strong>.</strong>RandomRotation(factor<strong>=</strong>(<strong>-</strong>0.2, 0.2),fill_mode<strong>=</strong>&quot;constant&quot;),<br>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>experimental<strong>.</strong>preprocessing<strong>.</strong>RandomContrast(0.1)])<br><br>x_train1<strong>=</strong>data_augmentation1(x_train)<br>y_train1<strong>=</strong>np<strong>.</strong>copy(y_train)<br>i<strong>=</strong>1<br><br><em># augment only the training set, 20 more times, to avoid test-set leakage</em><br><strong>while</strong>(i<strong>&lt;</strong>22):<br>  x_aug<strong>=</strong>data_augmentation1(x_train)<br>  x_train1<strong>=</strong>np<strong>.</strong>concatenate((x_train1,x_aug),axis<strong>=</strong>0)<br>  y_aug<strong>=</strong>np<strong>.</strong>copy(y_train)<br>  y_train1<strong>=</strong>np<strong>.</strong>concatenate((y_train1,y_aug))<br><br>  <strong>if</strong> i <strong>==</strong> 20:<br>    <strong>break</strong><br>  i 
<strong>+=</strong> 1</pre><p>During training, the input to our VGG-19 is a fixed-size 256 × 256 grayscale image. The image is passed through a stack of convolutional (conv.) layers, where we use filters with a very small receptive field: 3 × 3. The padding is &#39;same&#39; for the 3 × 3 convolution layers. Max-pooling is performed over a 2 × 2 pixel window, with stride 2. The stack of convolutional layers is followed by three fully-connected (FC) layers: the first two have 1024 channels each, and the final layer is a softmax layer. All hidden layers are equipped with the rectification (ReLU (Krizhevsky et al., 2012)) non-linearity.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/656/1*rjivESFqrEgYNsFs_fUtMw.png" /><figcaption>Figure 2 — The Diagram of VGG-19</figcaption></figure><p>The code:</p><pre><strong>def</strong> VGG19(input_shape,filters):<br>    inputs<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Input(shape<strong>=</strong>input_shape)<br>    <br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters<strong>//</strong>16,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;)(inputs)<br>    x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Dropout(0.1)(x)<br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters<strong>//</strong>16,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;)(x)<br>    x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>BatchNormalization()(x)<br><br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>MaxPooling2D(pool_size<strong>=</strong>(2, 2))(x)<br>    x <strong>=</strong> 
tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters<strong>//</strong>8,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;)(x)<br>    x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Dropout(0.2)(x)<br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters<strong>//</strong>8,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;)(x)<br>    x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>BatchNormalization()(x)<br>    <br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>MaxPooling2D(pool_size<strong>=</strong>(2, 2))(x)<br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters<strong>//</strong>4,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;)(x)<br>    x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Dropout(0.3)(x)<br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters<strong>//</strong>4,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;)(x)<br>    x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>BatchNormalization()(x)<br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters<strong>//</strong>4,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> 
&#39;he_normal&#39;)(x)<br>    x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>BatchNormalization()(x)<br><br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>MaxPooling2D(pool_size<strong>=</strong>(2, 2))(x)<br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters<strong>//</strong>2,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;)(x)<br>    x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Dropout(0.4)(x)<br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters<strong>//</strong>2,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;)(x)<br>    x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>BatchNormalization()(x)<br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters<strong>//</strong>2,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;)(x)<br>    x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>BatchNormalization()(x)<br><br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>MaxPooling2D(pool_size<strong>=</strong>(2, 2))(x)<br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters,(3,3),activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;)(x)<br>    
x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Dropout(0.5)(x)<br>    x <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;)(x)<br>    x<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>BatchNormalization()(x)<br>    last <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Conv2D(filters,(3,3), activation <strong>=</strong> &#39;relu&#39;, padding <strong>=</strong> &#39;same&#39;, kernel_initializer <strong>=</strong> &#39;he_normal&#39;,name<strong>=</strong>&#39;top_conv&#39;)(x)<br>    <br>    model<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>Model(inputs,last,name<strong>=</strong>&quot;VGG19&quot;)<br>    <strong>return</strong> model</pre><p>In the fully-connected layers, we will use an L1-L2 regularizer to prevent overfitting.</p><pre>base_model<strong>=</strong>VGG19(input_shape<strong>=</strong>(256,256,1),filters<strong>=</strong>512)<br>x <strong>=</strong> base_model<strong>.</strong>output<br>f<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Flatten(name<strong>=</strong>&quot;flatten&quot;)(x)<br><em># To prevent overfitting on the unbalanced data, we use a 
regularizer</em><br>d2<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Dense(1024,activation<strong>=</strong>&quot;relu&quot;,kernel_regularizer<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>regularizers<strong>.</strong>l1_l2(0.00001))(f)<br>dp9<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Dropout(0.5)(d2)<br>d3<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Dense(1024,activation<strong>=</strong>&quot;relu&quot;)(dp9)<br>dp10<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Dropout(0.5)(d3)<br><br>final<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>layers<strong>.</strong>Dense(6,activation<strong>=</strong>&quot;softmax&quot;)(dp10)<br>model <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>Model( inputs <strong>=</strong>[ base_model<strong>.</strong>input], outputs <strong>=</strong> final)</pre><p>We will use categorical cross-entropy as the loss function. The model will be trained for 35 epochs with a batch size of 16. 
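</p><p>The step-decay schedule used during training can be checked in plain Python before wiring it into Keras. The sketch below simulates the same decay logic as the LearningRateScheduler callback shown in the training code (decay by 0.1 every 15 epochs, starting from 1e-4):</p>

```python
def lr_scheduler(epoch, lr):
    # multiply the learning rate by 0.1 at epochs 15 and 30
    decay_rate = 0.1
    decay_step = 15
    if epoch % decay_step == 0 and epoch:
        return lr * decay_rate
    return lr

# simulate 35 epochs, threading the returned lr back in as Keras does
lr = 1e-4
lrs = []
for epoch in range(35):
    lr = lr_scheduler(epoch, lr)
    lrs.append(lr)

# the rate stays at 1e-4 for epochs 0-14, then drops tenfold twice
```

<p>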
After every 15 epochs, we decrease the learning rate so that the model converges slowly and does not overfit.</p><pre>metrics<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>metrics<strong>.</strong>AUC(<br>    num_thresholds<strong>=</strong>200, curve<strong>=</strong>&#39;ROC&#39;,<br>    summation_method<strong>=</strong>&#39;interpolation&#39;<br>)<br><em># categorical cross-entropy loss</em><br>model<strong>.</strong>compile(optimizer<strong>=</strong>tf<strong>.</strong>keras<strong>.</strong>optimizers<strong>.</strong>Adam(learning_rate<strong>=</strong>0.0001), loss<strong>=</strong>&quot;categorical_crossentropy&quot;,metrics<strong>=</strong>metrics)<br><br><br><strong>def</strong> lr_scheduler(epoch, lr):<br>    decay_rate <strong>=</strong> 0.1<br>    decay_step <strong>=</strong> 15<br>    <strong>if</strong> epoch <strong>%</strong> decay_step <strong>==</strong> 0 <strong>and</strong> epoch:<br>        <strong>return</strong> lr <strong>*</strong> decay_rate<br>    <strong>return</strong> lr<br><br><em># after every 15 epochs, decrease the learning rate</em><br>lr_call <strong>=</strong> tf<strong>.</strong>keras<strong>.</strong>callbacks<strong>.</strong>LearningRateScheduler(lr_scheduler)<br>epochs<strong>=</strong>35<br>history<strong>=</strong>model<strong>.</strong>fit(x<strong>=</strong>[x_train1],y<strong>=</strong>[y_train1],batch_size<strong>=</strong>16,epochs<strong>=</strong>epochs,callbacks<strong>=</strong>[lr_call],validation_data<strong>=</strong>(x_valid,y_valid))</pre><p>Let’s see our training and validation loss.</p><pre>plt<strong>.</strong>figure(figsize <strong>=</strong> (20,10))<br>plt<strong>.</strong>title(&#39;Loss&#39;)<br>plt<strong>.</strong>plot(history<strong>.</strong>history[&#39;loss&#39;], label<strong>=</strong>&#39;train&#39;)<br>plt<strong>.</strong>plot(history<strong>.</strong>history[&#39;val_loss&#39;], 
label<strong>=</strong>&#39;validation&#39;)<br>plt<strong>.</strong>legend()</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/686/1*ZM0zSJO6fvMuQSIciYjbqQ.png" /><figcaption>Figure 3 — Loss and Epochs on Train and Validation Set</figcaption></figure><p>Let’s evaluate our model:</p><pre><strong>import</strong> sklearn.metrics<br>predict<strong>=</strong>model<strong>.</strong>predict(x_test)<br>auc <strong>=</strong> sklearn<strong>.</strong>metrics<strong>.</strong>roc_auc_score(y_test, predict)</pre><p>Our AUC score is 0.734. The ROC curve and the AUC score are important metrics for image classification. Below, the one-hot labels and predictions for the 34 test images are flattened so that a single micro-averaged ROC curve can be computed.</p><p>Next, the ROC curve:</p><pre>y_test<strong>=</strong>np<strong>.</strong>reshape(y_test,(34<strong>*</strong>6))<br>predict<strong>=</strong>np<strong>.</strong>reshape(predict,(34<strong>*</strong>6))<br><strong>from</strong> sklearn.metrics <strong>import</strong> roc_curve<br><strong>from</strong> sklearn.metrics <strong>import</strong> roc_auc_score<br><br><em># keep probabilities for the positive outcome only</em><br>ns_probs <strong>=</strong> [0 <strong>for</strong> _ <strong>in</strong> range(len(y_test))]<br><em># calculate scores</em><br>ns_auc <strong>=</strong> roc_auc_score(y_test, ns_probs)<br>lr_auc <strong>=</strong> roc_auc_score(y_test, predict)<br><em># summarize scores</em><br>print(&#39;No Skill: ROC AUC=%.3f&#39; <strong>%</strong> (ns_auc))<br>print(&#39;Model: ROC AUC=%.3f&#39; <strong>%</strong> (lr_auc))<br><em># calculate roc curves</em><br>ns_fpr, ns_tpr, _ <strong>=</strong> roc_curve(y_test, ns_probs)<br>lr_fpr, lr_tpr, _ <strong>=</strong> roc_curve(y_test, predict)<br><em># plot the roc curve for the model</em><br>plt<strong>.</strong>figure(figsize <strong>=</strong> (20,10))<br>plt<strong>.</strong>title(&quot;ROC Curve&quot;,fontsize <strong>=</strong> 40)<br>plt<strong>.</strong>plot(ns_fpr, ns_tpr,label<strong>=</strong>&#39;No Skill&#39;)<br>plt<strong>.</strong>plot(lr_fpr, lr_tpr, 
label<strong>=</strong>&#39;Model&#39;)<br><em># axis labels</em><br>plt<strong>.</strong>xlabel(&#39;False Positive Rate&#39;)<br>plt<strong>.</strong>ylabel(&#39;True Positive Rate&#39;)<br>plt<strong>.</strong>rcParams[&quot;font.size&quot;] <strong>=</strong> &quot;15&quot;<br><br><em># show the legend</em><br>plt<strong>.</strong>legend()<br><em># show the plot</em><br>plt<strong>.</strong>show()</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/760/1*QFL0ZhlON18jayldw-6lpQ.png" /><figcaption>Figure 4 — ROC Curve Analysis</figcaption></figure><p>Grad-CAM is a strict generalization of Class Activation Mapping (CAM). Unlike CAM, Grad-CAM requires no re-training and is broadly applicable to any CNN-based architecture. It can also be combined with existing pixel-space visualizations to create a high-resolution class-discriminative visualization (Guided Grad-CAM) [2]. In this study, we used the Grad-CAM method to examine the results.</p><p>Now, Grad-CAM:</p><pre><strong>from</strong> tensorflow <strong>import</strong> keras<br><em># Grad-CAM to inspect which regions drive the prediction</em><br><strong>def</strong> make_gradcam_heatmap(img_array, model, last_conv_layer_name, classifier_layer_names ):<br>    <em># First, we create a model that maps the input image to the activations</em><br>    <em># of the last conv layer</em><br>    last_conv_layer <strong>=</strong> model<strong>.</strong>get_layer(last_conv_layer_name)<br>    last_conv_layer_model <strong>=</strong> keras<strong>.</strong>Model(model<strong>.</strong>inputs, last_conv_layer<strong>.</strong>output)<br>    <em># Second, we create a model that maps the activations of the last conv</em><br>    <em># layer to the final class predictions</em><br>    classifier_input <strong>=</strong> keras<strong>.</strong>Input(shape<strong>=</strong>last_conv_layer<strong>.</strong>output<strong>.</strong>shape[1:])<br>    x <strong>=</strong> classifier_input<br>    <strong>for</strong> layer_name <strong>in</strong> classifier_layer_names:<br>        x <strong>=</strong> 
model<strong>.</strong>get_layer(layer_name)(x)<br>    classifier_model <strong>=</strong> keras<strong>.</strong>Model(classifier_input, x)<br>    <em># Then, we compute the gradient of the top predicted class for our input image</em><br>    <em># with respect to the activations of the last conv layer</em><br>    <strong>with</strong> tf<strong>.</strong>GradientTape() <strong>as</strong> tape:<br>        <em># Compute activations of the last conv layer and make the tape watch it</em><br>        last_conv_layer_output <strong>=</strong> last_conv_layer_model(img_array)<br>        tape<strong>.</strong>watch(last_conv_layer_output)<br>        <em># Compute class predictions</em><br>        preds <strong>=</strong> classifier_model(last_conv_layer_output)<br>        top_pred_index <strong>=</strong> tf<strong>.</strong>argmax(preds[0])<br>        top_class_channel <strong>=</strong> preds[:, top_pred_index]<br>    <em># This is the gradient of the top predicted class with regard to</em><br>    <em># the output feature map of the last conv layer</em><br>    grads <strong>=</strong> tape<strong>.</strong>gradient(top_class_channel, last_conv_layer_output)<br><br>    <em># This is a vector where each entry is the mean intensity of the gradient</em><br>    <em># over a specific feature map channel</em><br>    pooled_grads <strong>=</strong> tf<strong>.</strong>reduce_mean(grads, axis<strong>=</strong>(0, 1, 2))<br><br>    <em># We multiply each channel in the feature map array</em><br>    <em># by &quot;how important this channel is&quot; with regard to the top predicted class</em><br>    last_conv_layer_output <strong>=</strong> last_conv_layer_output<strong>.</strong>numpy()[0]<br>    pooled_grads <strong>=</strong> pooled_grads<strong>.</strong>numpy()<br>    <strong>for</strong> i <strong>in</strong> range(pooled_grads<strong>.</strong>shape[<strong>-</strong>1]):<br>        last_conv_layer_output[:, :, i] <strong>*=</strong> pooled_grads[i]<br><br>    <em># The 
channel-wise mean of the resulting feature map</em><br>    <em># is our heatmap of class activation</em><br>    heatmap <strong>=</strong> np<strong>.</strong>mean(last_conv_layer_output, axis<strong>=-</strong>1)<br><br>    <em># For visualization purpose, we will also normalize the heatmap between 0 &amp; 1</em><br>    heatmap <strong>=</strong> np<strong>.</strong>maximum(heatmap, 0) <strong>/</strong> np<strong>.</strong>max(heatmap)<br>    <strong>return</strong> heatmap</pre><pre><strong>from</strong> tensorflow <strong>import</strong> keras<br>img_array<strong>=</strong>x_test[0,:,:,:]<br><br>img_array<strong>=</strong>np<strong>.</strong>reshape(img_array,(1,256,256,1))<br>preds <strong>=</strong> model<strong>.</strong>predict(img_array)<br>last_conv_layer_name <strong>=</strong> &quot;top_conv&quot;<br>classifier_layer_names <strong>=</strong> [&quot;flatten&quot;]   <br>              <br><em># Generate class activation heatmap</em><br>heatmap <strong>=</strong> make_gradcam_heatmap(<br>    img_array, model, last_conv_layer_name, classifier_layer_names<br>)<br>img <strong>=</strong> keras<strong>.</strong>preprocessing<strong>.</strong>image<strong>.</strong>img_to_array(x_test[0,:,:,:])</pre><figure><img 
alt="" src="https://cdn-images-1.medium.com/max/804/1*cfZMwuPdkR2H4GTY6FsGOg.png" /><figcaption><em>Figure 5 — Original Image and Grad-CAM Result</em></figcaption></figure><p>As can be seen in Figure 5, the model targeted the thyroid tumor for classification; it focuses on the tumor region when making its prediction.</p><p>In this study, thyroid nodules were classified into 6 different classes: 2 (Benign), 3 (Benign), 4a (Malign), 4b (Malign), 4c (Malign), and 5 (Malign). Data augmentation is a technique used to artificially increase the amount of training data, for example by changing the width-to-height ratio, changing colors, or applying a horizontal flip. It is reported to be an essential technique for deep learning algorithms to achieve good performance. We used data augmentation in this study because we have limited and unbalanced data. While most studies classify thyroid tumors only as benign or malignant, in our study thyroid tumors were classified into 6 classes. With 6 classes, the class-imbalance issue in the dataset becomes more severe; the most important problems are the imbalance of the dataset and the lack of data for each class. Nevertheless, our proposed model shows promising results with a 0.734 AUC score, which suggests the model has some skill in classifying thyroid tumors.</p><p>The Full Code: <a href="https://github.com/SerdarHelli/The-Classification-of-Thyroid-Tumors-on-UltraSound-Images-using-Deep-Learning-Methods/tree/main">https://github.com/SerdarHelli/The-Classification-of-Thyroid-Tumors-on-UltraSound-Images-using-Deep-Learning-Methods</a></p><p><strong>References</strong></p><p>[1] Pedraza, Lina &amp; Vargas, Carlos &amp; Narváez, Fabián &amp; Durán, Oscar &amp; Muñoz, Emma &amp; Romero, Eduardo. (2015). An open access thyroid ultrasound-image Database. Progress in Biomedical Optics and Imaging — Proceedings of SPIE. 9287. 
10.1117/12.2073532.</p><p>[2] Selvaraju, Ramprasaath R., et al. “Grad-CAM: Visual explanations from deep networks via gradient-based localization.” Proceedings of the IEEE International Conference on Computer Vision. 2017.</p><p>[3] Abadi, Martín, Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., … others. (2016). TensorFlow: A system for large-scale machine learning. In Symposium on Operating Systems Design and Implementation (pp. 265–283).</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Core Neuro Art]]></title>
            <link>https://serdarhelli.medium.com/core-neuro-art-eaf811cabdc4?source=rss-e722ae47f29b------2</link>
            <guid isPermaLink="false">https://medium.com/p/eaf811cabdc4</guid>
            <category><![CDATA[sciart]]></category>
            <category><![CDATA[medical-imaging]]></category>
            <category><![CDATA[mri]]></category>
            <category><![CDATA[nft]]></category>
            <dc:creator><![CDATA[S.Serdar Helli]]></dc:creator>
            <pubDate>Fri, 12 Nov 2021 18:06:28 GMT</pubDate>
            <atom:updated>2021-11-12T20:15:59.107Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2M-YuQ8U1CCPij_ylO3FVw.png" /></figure><p>NFTs (Non-Fungible Tokens) are digital forms of actual valuables. Over the last few years, NFTs have grown considerably in popularity. Alongside countless artists, many companies are entering the realm of NFTs. These digital assets are already being referred to as <em>‘’the art of the future’’</em>.</p><blockquote><a href="https://opensea.io/collection/coreneuroart"><strong>Core Neuro Art </strong>is a one-of-a-kind digital collection of MR images that have been combined with art. These collectibles are minted on the Polygon blockchain and are displayed on the OpenSea platform. </a>Still available on OpenSea: <a href="https://opensea.io/collection/coreneuroart">https://opensea.io/collection/coreneuroart</a></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/576/1*xxtrb4IxpVMJBTToqrqZBQ.gif" /><figcaption>An overview of the <strong>Core Neuro Art</strong> collection</figcaption></figure><blockquote><em>“The brain created art itself and this is their meeting after all…”</em></blockquote><p><strong>Core Neuro Art</strong> was born from the question ‘’What would it be like if we processed art into brain MR images?’’ asked by two biomedical engineer friends working in a medical imaging laboratory. Our ability to gather new knowledge about human physiology, detect anomalies, and treat diseases is evolving alongside rapidly developing technology. It is clear that, in the near future, this will offer the opportunity to perform remote surgery, early diagnosis, and AI-assisted treatment. 
In addition to this digital hype, <strong>Core Neuro Art</strong> seeks to raise awareness of the rise of medical imaging by integrating MR images with art.</p><h4>Metaverse, Healthcare, and Core Neuro Art</h4><p>Facebook recently announced that it is renaming itself “Meta,” with a new emphasis on the “Metaverse,” an interactive shared space. In other words, Facebook CEO Mark Zuckerberg refers to the “Metaverse” as the “next-generation internet,” which also comprises virtual reality products and services. Various major international corporations, including Microsoft, have indicated that they will participate in the “Metaverse.” So, how will the healthcare business integrate into the “Metaverse,” and what exactly is<strong> Core Neuro Art </strong>attempting to convey?</p><p>In the “Metaverse,” numerous innovative concepts for the field of health can be considered. It might, for example, offer digital patient treatment and follow-up across the world, as well as medical and surgical education. Tremendous potential for the healthcare industry is bound to emerge in this evolving digital environment.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*Ai7EL4twiJYSKwxWs8Y43Q.gif" /><figcaption>Medivis -Holographic Surgical Navigation- GIF from <a href="https://www.medivis.com/">https://www.medivis.com/</a></figcaption></figure><p>The first virtual reality surgeries on live patients have been performed by Johns Hopkins neurosurgeons. “During the initial procedure on June 8, 2020, surgeons inserted six screws into a patient’s spine for spinal fusion surgery to treat the patient’s chronic, excruciating back pain,” according to a Johns Hopkins Medicine publication. On June 10, surgeons removed a cancerous tumor known as a chordoma from a patient’s spine in the second surgery. 
Both patients are in great health, according to doctors.</p><p>To highlight these developments,<strong> Core Neuro Art</strong> brings its NFT collection to the forefront of the artistic scene. After artificial intelligence was used to enhance the quality and resolution of brain MR images, the images were reworked with an artistic interpretation, resulting in a one-of-a-kind digital collection.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/828/1*vqB_pPX7Z5-heUGtP9bMBw.png" /><figcaption>Core Neuro Art #60 from the Core Neuro Art collection</figcaption></figure><h4>Core Neuro Art in the Future</h4><p>In this adventure, where we are only beginning to appreciate the human brain, <strong>Core Neuro Art</strong> has interpreted the brain from a unique perspective and aims to draw attention to the potential brought by technology. Moreover, the team also plans to generate virtual reality artwork using 3D brain MR images, even though the exact timing is unknown. As part of an NFT collection, this artwork will be waiting for us. Stay tuned!</p><p><strong>References</strong></p><ul><li><a href="https://www.hopkinsmedicine.org/news/articles/johns-hopkins-performs-its-first-augmented-reality-surgeries-in-patients">https://www.hopkinsmedicine.org/news/articles/johns-hopkins-performs-its-first-augmented-reality-surgeries-in-patients</a></li><li><a href="https://www.medivis.com/">https://www.medivis.com/</a></li></ul>]]></content:encoded>
        </item>
    </channel>
</rss>