<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Harshith Deshalli Ravi on Medium]]></title>
        <description><![CDATA[Stories by Harshith Deshalli Ravi on Medium]]></description>
        <link>https://medium.com/@harshithdr10?source=rss-4a7bbfab3726------2</link>
        
        <generator>Medium</generator>
        <lastBuildDate>Wed, 08 Apr 2026 13:30:25 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@harshithdr10/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The Intuitive Build: Vibe Coding a Portfolio!!!]]></title>
            <link>https://medium.com/@harshithdr10/the-intuitive-build-vibe-coding-a-portfolio-b21a08a8537a?source=rss-4a7bbfab3726------2</link>
            <guid isPermaLink="false">https://medium.com/p/b21a08a8537a</guid>
            <category><![CDATA[ai-agent]]></category>
            <category><![CDATA[challenge]]></category>
            <category><![CDATA[vibe-coding]]></category>
            <category><![CDATA[firebase-studio]]></category>
            <category><![CDATA[aritficial-intelligence]]></category>
            <dc:creator><![CDATA[Harshith Deshalli Ravi]]></dc:creator>
            <pubDate>Sat, 03 May 2025 05:03:26 GMT</pubDate>
            <atom:updated>2025-05-04T07:46:16.572Z</atom:updated>
            <content:encoded><![CDATA[<h3><strong>I tried Vibe Coding my web Portfolio!!!</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*YeZzPeGb9wB3Mofg" /><figcaption>Photo by Pete Sene</figcaption></figure><p>Vibe coded Portfolio link : <a href="https://www.harshithdeshalliravi.us/">https://www.harshithdeshalliravi.us/</a></p><p>Alright, so picture this: I’ve been deep in the coding trenches for a <em>while</em>, mostly navigating the backend labyrinth — that’s my zone. But throw me into the world of frontend design? Making webpages pop with slick interactions and dynamic bits? Yeah, total blank slate. Felt like trying to speak a language I’d only heard whispers of.</p><p>Then came the AI wave. Like everyone else, I’ve been totally hooked, playing with pretty much every new AI gadget that dropped over the last couple of years. It’s been wild watching it all unfold.</p><p>That’s when this whole “vibe coding” thing started buzzing. Seeing folks, some who barely code, just <em>manifesting</em> apps? Even building legit businesses pulling in serious cash? It felt… different. Like tapping into some kind of creative flow state with the machine. I <em>had</em> to feel it out for myself.</p><p>So, I dove in. Tried jamming with tools like bolt.new, messing with cline on VS Code, tinkering with lovable… but honestly? The vibe just wasn’t right. Felt clunky, couldn’t get the look or the backend hooks quite how I pictured them. My flow kept getting blocked.</p><p>Just when I was about ready to shelve the idea, I caught the Firebase Studio release. <em>Okay</em>, I thought, <em>this feels different</em>. A little spark ignited — maybe this whole vibe coding thing <em>could</em> actually work for building something real.</p><p>That lit the fire. I decided to just go for it, set myself a wild challenge: build an entire web app purely by vibing with AI. No safety net, zero lines of my own code. And honestly? <strong>Spoiler alert:</strong> The results kinda blew my mind. What emerged was way closer to my vision than I’d dared to hope.</p><p>So, what project to tackle? After bouncing a few ideas around, it clicked: why not just vibe out my own portfolio site? Perfect playground.</p><p>Going into Firebase Studio, I kept my expectations chill — figured maybe I’d get a basic structure, something functional even if it wasn’t a design masterpiece. But once I started prompting, riffing, <em>chatting</em> with the AI… my hopes shot up. Still, part of me wondered if it would just plateau, you know? Like the AI would run out of creative juice after a few rounds. Oh, and apparently, I’d jumped into some timed thing — a 3-hour clock started ticking to pull the whole project together. Talk about diving headfirst into the vibe!</p><p>So, let’s unpack those 7 hours. Stepping into this, you definitely feel that weird energy shift, right? That whole “Is AI gonna just… <em>do</em> our jobs?” buzz. It’s less about erasing humans and maybe more like… whoa, the definition of ‘coding’ is getting seriously remixed right in front of us.</p><p>Firebase Studio wanted the grand vision first. So, I basically dumped my brain into the prompt — a full 150-word download of every feature, every little interaction I was picturing for the portfolio. It chewed on that, spit out an initial game plan. We riffed on it a bit, tweaked the flow, got in sync. Then, the AI started cooking. Took a solid 10 minutes — felt longer, watching the virtual gears whir, dependencies installing — to spin up the environment and lay down the first batch of code.</p><p>And honestly? That first “zero shot” reveal was kind of electric. Even though the <em>look</em> wasn’t quite my style yet, the way it interpreted the core idea? The underlying structure? It <em>got</em> it. That initial spark was all I needed. It felt less like giving instructions and more like starting a jam session.</p><p>“Okay, cool start,” I prompted, “but let’s change the whole vibe here. Rework the skeleton, shift the design language.” And it <em>listened</em>. The iterations started flowing, and I began genuinely digging the aesthetics it was proposing. That planned 3-hour sprint? Totally forgotten. I was locked in, pushing the session to 7 hours (with a couple of sanity breaks).</p><p>This wasn’t just about painting by numbers, either. We were debugging on the fly, adding features as ideas popped up, swapping out designs. I threw it a curveball: “Go scrape my details from GitHub and LinkedIn and weave them into these pages.” And it just… did it. Pretty smoothly, too. Then came the mobile check — always the moment of truth. Looked wonky. A quick prompt later? Bam. Fixed it clean, responsive, like a seasoned pro nailing a media query first try. That feeling? Pure collaborative magic.</p><p>Now, it wasn’t all smooth sailing and pixel perfection. Getting the background design <em>just right</em>? Man, that was a struggle. My prompts felt like they were hitting a wall; the AI’s interpretation just wasn’t vibing. Felt like we were out of sync. Then, the breakthrough: instead of trying to describe the <em>feeling</em> I wanted, I just described the <em>visuals</em> I needed. Gave it the ‘what’. With that clarity? It coded the algorithm like snapping its fingers. Piece of cake. Sometimes, you gotta switch up how you communicate with your digital dance partner.</p><p>And yeah, let’s be real — it’s beta software. The flow definitely got disrupted. The session just blinked out a couple of times, forcing a VM restart. Happened three times during the 7 hours. Talk about a vibe killer! Super frustrating when you’re deep in the zone.</p><p>But one massive win? Built-in Git. When a few of my experimental prompts led down a weird path (happens!), rolling back was painless. Having that version control safety net? Absolutely crucial when you’re surfing the vibe and trying wild stuff. It lets you explore without consequence.</p><p>So yeah, from “AI replacing humans” to maybe… “AI augmenting humans”? It felt less like replacement and more like having this incredibly fast, sometimes quirky, but ultimately powerful coding partner riding shotgun.</p><p>So, wrapping this all up? Honestly, I walked away seriously impressed. This whole vibe coding journey, especially jamming with Firebase Studio, was way more powerful than I expected. It shifted my whole perspective — it’s less about AI taking over and more about finding new ways to flow, to create, to bring ideas to life faster and maybe even more intuitively than before.</p><p>If you’ve been curious about this stuff, or if you’re like me and frontend sometimes feels like a different galaxy, I genuinely think you should give it a spin. Just dive in, throw some ideas at it, and see what kind of magic you can conjure up together. You might surprise yourself.</p><p>Wanna see how my 7-hour AI jam session turned out? Check out the final result below!</p><p><strong>My Vibe-Coded Portfolio: </strong><a href="https://www.harshithdeshalliravi.us/">https://www.harshithdeshalliravi.us/</a></p><blockquote><em>Happy coding, don’t forget to clap and follow for similar content.</em></blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b21a08a8537a" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Run DeepSeek-r1 model on android locally!]]></title>
            <link>https://medium.com/@harshithdr10/run-deepseek-model-on-android-locally-f0198948905a?source=rss-4a7bbfab3726------2</link>
            <guid isPermaLink="false">https://medium.com/p/f0198948905a</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[deepseek-r1]]></category>
            <category><![CDATA[large-language-models]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[ai-privacy]]></category>
            <dc:creator><![CDATA[Harshith Deshalli Ravi]]></dc:creator>
            <pubDate>Thu, 30 Jan 2025 04:47:01 GMT</pubDate>
            <atom:updated>2025-01-31T17:17:26.762Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*l2TVwj1LODyatTP4" /><figcaption>src: <a href="https://www.reuters.com/technology/deepseek-app-unavailable-apple-google-app-stores-italy-2025-01-29/">reuters.com</a></figcaption></figure><p>As a self-proclaimed tech freak, I’ve always dreamed of running an LLM (Large Language Model) locally on my phone. Why? Because while I love tinkering with AI and use Ollama Server on my workstation, I’m also deeply concerned about privacy when using proprietary models from OpenAI, Claude, and Google. Plus, let’s face it — lugging around a laptop everywhere isn’t always practical. And when I’m out and about, my options for using AI locally shrink faster than my phone’s battery.</p><p>Luckily, with SLM (Small Language Models) improving every day, we can finally take advantage of their efficiency to run AI directly on our phones. No cloud, no data leakage — just pure, unadulterated AI goodness in our pockets.</p><h3>Why Would You Even Need AI on Your Phone?</h3><p>Some of you might be thinking, <em>“Why bother? I have the internet!”</em> But there are actually plenty of scenarios where a local AI model on your phone could be a game-changer. Let’s go through a few:</p><h4>1. Survival Mode: AI in the Wild</h4><p>Picture this: You’re camping deep in the forest, and suddenly, you need to write a poem about the beauty of nature. No Wi-Fi, no internet, but thankfully, you’ve got your trusty LLM running locally. Okay, maybe you need it for something more practical — like translating survival guides, generating emergency messages, or even debugging your own thoughts when lost in the wilderness.</p><h4>2. Privacy: Because Even Apple Has Trust Issues</h4><p>Sure, Apple <em>claims</em> to be the champion of privacy, but let’s be real — there have been plenty of allegations against them regarding data security. So, why entrust your sensitive data to any big tech company? Running a local AI model means you can draft emails, compose urgent formal messages, or even polish your grammar — all without your data ever leaving your device. AI-assisted writing without the prying eyes? Yes, please!</p><h3>How to Set Up an LLM on Your Android Phone</h3><p>iPhone users, sorry, you have no option other than trusting Apple.</p><h4>Installation Guide</h4><p><strong>1. Install Termux Application</strong></p><p>There are two ways to install it. If the first method works, don’t bother with the second one.</p><ul><li><strong>Method 1:</strong> Download directly from the Play Store and move to second step.</li><li><strong>Method 2:</strong> If the application is not listed in your country, follow these steps:</li></ul><p>Go to the <a href="https://github.com/termux/termux-app/releases">Termux GitHub releases page</a> and download termux-app_v0.119.0-beta.1+apt-android-7-github-debug_arm64-v8a.apk and install it.</p><p><strong>2. Install Requirements Before Running Ollama Server</strong></p><p>After launching Termux, you’ll see a terminal that looks just like a Linux terminal (don’t feel like a hacker yet — four years ago, I downloaded it and felt the same way 😁). Follow these steps to set up the environment for Ollama:</p><ul><li><strong>Grant Storage Access:</strong></li></ul><pre>termux-setup-storage</pre><p>This command will allow Termux to access your Android storage system. Once you run it, the Settings application will pop up. Scroll down, find Termux, and grant storage permission.</p><ul><li><strong>Update Packages:</strong></li></ul><p>Before running or installing any tools, update the packages — just like you would on Linux:</p><pre>pkg upgrade</pre><p>Enter Y when prompted.</p><ul><li><strong>Install Tools Like Git, CMake, and Golang:</strong></li></ul><p>These packages are required for downloading and building Ollama:</p><pre>pkg install git cmake golang</pre><p><strong>3. Install and build Ollama:</strong></p><ul><li>Clone the Ollama github repository</li></ul><p>If you’re maintaining Termux daily, move to the folder where you want to install it. Otherwise, you can follow the steps directly.</p><pre>git clone --depth 1 https://github.com/ollama/ollama.git</pre><ul><li>Navigate to ollama directory:</li></ul><p>After downloading move to ollama directory.</p><pre>cd ollama</pre><ul><li>Generate Go code and build ollama with it:</li></ul><pre>go generate ./..</pre><pre>go build .</pre><p>This step may take some time, so patience is key. Once completed, congratulations — you have successfully installed Ollama on your phone!</p><p><strong>4. Run DeepSeek Model or Any Other 1B or 2B Parameter Model</strong></p><p>Now, let’s see how to download and run the DeepSeek model.</p><ul><li><strong>Choose a Model:</strong></li></ul><blockquote><strong><em>Note:</em></strong><em> Models with more than 3B parameters can’t be run on a phone because they are too slow, and some can’t even be loaded into your VRAM.</em></blockquote><p>Choose a model from the Ollama models library website. Look for SLMs (Small Language Models with fewer parameters). If you find one, you’re good to go.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5ZCPbPX9cQC4fTZLxOy-9w.png" /><figcaption>src: <a href="https://ollama.com/library/deepseek-r1:1.5b">ollama library</a></figcaption></figure><p>There you will find the copy button as shown above. If you’re on a phone, select desktop view to see the copy button. Copy it.</p><ul><li><strong>Download and Run Model:</strong></li></ul><p>For now, I will show how to run the DeepSeek 1.5B model, but you can select whichever model you want. Just make sure to follow these steps.</p><p>For DeepSeek model:</p><pre>./ollama run deepseek-r1:1.5b --verbose</pre><p>For your own model:</p><pre>./&lt;copied text from ollama webpage&gt;</pre><p>Here, the --verbose flag is optional and provides logs while you&#39;re running models. This command will start downloading the model to your phone—be patient! The time it takes depends on your internet speed. (<em>If you&#39;re using mobile data, make sure you have at least 1.5GB of data remaining!</em>)</p><p>Once it’s downloaded, you will see the interaction panel on Termux itself, where you can use the LLM model just like you would on a PC. But don’t expect performance like ChatGPT — after all, you’re running it on a phone with a small model, and it will take time.</p><blockquote>Once again <strong>Congratulations</strong> on successfully installing and running llm model on your phone.</blockquote><p><strong>5. Managing Performance</strong></p><p>While testing the DeepSeek 1.5B model on my Samsung Flip 5, I got a speed of <strong>9 tokens per second</strong>. If you’re running a bigger model, your phone may hang — so stick to small models. And if you have a better processor than mine, you can expect more responsiveness.</p><p><strong>6. Revisit the Model</strong></p><p>Once you’re done using your model, close the Termux application and make sure to terminate it from your notification panel. Otherwise, it will keep running in the background.</p><p>To use it again, open Termux and run these commands (don’t worry, it hardly takes seconds):</p><pre>cd ollama</pre><pre>./ollama run deepseek-r1:1.5b --verbose</pre><p>These two commands will navigate and load the model to use.</p><p><strong>7. Optional Cleanup</strong></p><p>After using Ollama, you may want to clean up. This step is completely optional.</p><p><strong>Remove Unnecessary Files:</strong></p><pre>chmod -R 700 ~/go</pre><pre>rm -r ~/go</pre><h3>The Future is Local (and Pocket-Sized)</h3><p>With AI models getting smaller and more efficient, the dream of having a personal AI assistant running locally on our phones is becoming more of a reality. Whether for survival, privacy, or just nerding out, the ability to run LLMs on mobile devices opens up a world of possibilities. So, here’s to the future — one where AI fits in our pockets, respects our privacy, and doesn’t require us to sign away our souls in a terms-of-service agreement.</p><blockquote>Happy coding, don’t forget to clap and follow for similar content.</blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f0198948905a" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Deepseek r1 qwen1.5b model is broken!]]></title>
            <link>https://medium.com/@harshithdr10/deepseek-r1-qwen1-5b-model-is-broken-8691ccbd4025?source=rss-4a7bbfab3726------2</link>
            <guid isPermaLink="false">https://medium.com/p/8691ccbd4025</guid>
            <category><![CDATA[opensource-ai]]></category>
            <category><![CDATA[deepseek-r1]]></category>
            <category><![CDATA[large-language-models]]></category>
            <dc:creator><![CDATA[Harshith Deshalli Ravi]]></dc:creator>
            <pubDate>Tue, 21 Jan 2025 20:31:43 GMT</pubDate>
            <atom:updated>2025-01-31T17:18:11.593Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/619/1*xFvq2dxqof4DNd0cZR8_-Q.png" /><figcaption>src: <a href="https://www.deepseek.com/">deepseek.com</a></figcaption></figure><p>I was surprised when I saw the benchmarks of DeepSeek R1 release open-source LLM models. I thought, ‘I want to try this out locally,’ so I downloaded it from the Ollama official website.</p><p>I started with the 1.5B parameter model. After trying it, I felt it was worse than other models of the same size. At first, I thought I might have made a mistake while downloading the model, but then I realized it was a quantized Q4_K_M model. Despite this, its performance was so poor that it couldn’t be used effectively.</p><p>After that, I tried models with different parameter sizes, and some of them were usable and even performed well. However, the first model I downloaded was far worse and not even close to the performance of other distilled models. I’ll try to highlight the numerous issues with the first model.</p><p><strong>Problem 1:</strong></p><p>It often <strong>hallucinates </strong>even when solving smaller math questions. I tested it with numerous questions, but the answers it produced after processing were consistently different from the expected results. Despite taking time to ‘think,’ the model frequently arrived at incorrect or nonsensical conclusions, highlighting its unreliability in handling even basic mathematical tasks.</p><p><strong>Problem 2:</strong></p><p>The model takes a significant amount of <strong>time to process questions,</strong> often asking nonsensical follow-up questions and eventually providing hallucinated responses. For example, I asked, ‘Which is bigger 5⁹⁹ or 99!?’ The model took 2.5 minutes to respond, even with an inference speed of 100 tokens per second. Despite the lengthy processing time, its answer was ‘5,’ which is blatantly incorrect. I tested it with various other math questions, and it exhibited similar behavior — delivering wrong answers after an unnecessarily long thought process. This not only highlights its inefficiency but also its lack of accuracy in basic problem-solving tasks.</p><p><strong>Problem 3:</strong></p><p>It’s too <strong>sensitive </strong>to questions related to individuals, countries, or the economy. It doesn’t even answer general questions like geographical ones. For example, it refuses to provide the capital of China. I found this model to be overly restrictive.</p><p><strong>Problem 4: (major issue)</strong></p><p>After one deep-thinking process, it sometimes becomes so hallucinated that it continues to answer any future questions with the same answer in that chat. Additionally, it occasionally changes its language to Chinese or another language, and once this happens, all subsequent answers are provided in the same language.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ylLPgz0H1zgM-PpyEZI2pA.png" /><figcaption>src: Author</figcaption></figure><p><strong>Problem 5:</strong></p><p>If you use bad words in the chat, it changes to a different language and continues to answer in that language.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OwtpSeG7H4P2U8UQSKX48A.png" /><figcaption>src: Author</figcaption></figure><p>If you want to test this model, you can download it from this link — <a href="https://ollama.com/library/deepseek-r1:1.5b">ollama</a></p><p>or run the following command in your terminal:</p><pre>ollama run deepseek-r1:1.5b</pre><p><strong>Conclusion:</strong></p><p>Among all of DeepSeek’s reasoning models, most perform well, delivering reliable results across various tasks. However, this particular model is broken and fails to meet basic expectations. Its performance is significantly worse than the similar-parameter model, DeepSeek R1-Distill-Qwen1.5b, which outperforms it in both accuracy and reliability. Overall, this model does not live up to the standard set by its counterparts and should be approached with caution.</p><blockquote><em>Happy coding, don’t forget to clap and follow for similar content.</em></blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8691ccbd4025" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Installing CUDA is not that hard!]]></title>
            <link>https://medium.com/@harshithdr10/installing-cuda-is-not-that-hard-5886deff812c?source=rss-4a7bbfab3726------2</link>
            <guid isPermaLink="false">https://medium.com/p/5886deff812c</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[cuda]]></category>
            <category><![CDATA[docker]]></category>
            <dc:creator><![CDATA[Harshith Deshalli Ravi]]></dc:creator>
            <pubDate>Fri, 22 Nov 2024 00:36:43 GMT</pubDate>
            <atom:updated>2025-01-31T17:18:48.490Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/475/0*UsXjZcNrsj4Z9I4i" /><figcaption>Nvidia</figcaption></figure><p>As a Data Science student, I have been working with CUDA for years, but I feel frustrated every time I try to install it. You see, we typically use TensorFlow or PyTorch, depending on our project’s purpose. However, the developers of these libraries can be quite cruel to us because they often don’t support the same CUDA version.</p><p>So I researched is there any way to resolve this issue, and I found solution is containerization via docker. There are images easy to install to any operating system with Docker using just one command in your terminal.</p><pre>Docker run -it &lt;image_name&gt;</pre><p>Although I have used Docker for several projects, I didn’t know it was possible to do that. Let’s set these things aside. Now, in this article, I will demonstrate how to build and run a container with Nvidia GPU CUDA support.</p><p>First, you need to have Docker on your computer or laptop. You can follow these steps if you haven’t installed it yet. If you’ve already installed Docker on your system, skip this part.</p><h3><strong>Docker installation</strong></h3><ol><li><strong>For Linux users (actual developers)</strong></li></ol><ul><li>Follow steps from this article — <a href="https://medium.com/@mrdevsecops/docker-installation-on-ubuntu-88193b135b25">link</a></li></ul><p>2. <strong>For Windows users</strong></p><ul><li>Typically, Docker usually work with linux operating system, to support windows the developers have found workaround by utilizing WSL (Windows Subsystem for Linux).</li><li>Follow steps from this article — <a href="https://medium.com/@supportfly/how-to-install-docker-on-windows-bead8c658a68">link</a></li></ul><p>3. <strong>For Mac users</strong></p><ul><li>Only you have Nvidia GPU.</li><li>Follow steps from this article — <a href="https://medium.com/@supportfly/steps-for-installing-docker-on-mac-c9cb9ad06665">link</a></li></ul><p>If you are facing any issues, look out for documentation from docker official blog page.</p><h3>Note:</h3><p>There are several points to note before following these steps:</p><ol><li>Make sure you have sufficient storage left on your system, as these images are huge.</li><li>Docker is computationally intensive, so ensure you have enough computing power. This means your system should not be as old as you are.</li><li>One more thing: be sure you have an NVIDIA GPU, because CUDA only supports NVIDIA GTX and RTX GPUs.</li></ol><h3>Docker Image installation</h3><p>Docker has several images for both Tensorflow and Pytorch with CUDA support. Before selecting an image, go to any LLM model (chatbot like chatgpt) and ask the following query.</p><pre>I have &lt;Your_gpu_name&gt;, which &lt;tensorflow or pytorch&gt;, cuda and cudnn are  compatible with this GPU?</pre><p>It will give you which TensorFlow/Pytorch is compatible with your system.</p><p>Before proceeding, ensure that you update your GPU drivers.</p><p>Now, go to dockerhub and search for image which has tensorflow or Pytorch and cuda support.</p><p><strong>For Tensorflow:</strong></p><p>Go to the <a href="https://hub.docker.com/r/tensorflow/tensorflow/tags?page=1&amp;name=2.10.0">TensorFlow/TensorFlow</a> Docker Hub page and select your TensorFlow version. Then, look for the GPU option.</p><p>Or search below query in <a href="https://hub.docker.com/">docker hub</a>.</p><pre>tensorflow/tensorflow:2.10.1-gpu</pre><p>Search for the above image on Docker Hub. Make sure to replace it with your version.</p><p><strong>For Tensorflow-jupyter:</strong></p><p>If you’re a Data Scientist like me, you usually work with Jupyter Notebook. Don’t worry, Docker has you covered there as well.</p><p>Go to <a href="https://hub.docker.com/r/tensorflow/tensorflow/tags?page=1&amp;name=2.10.0">TensorFlow/TensorFlow</a> Docker Hub page and select your TensorFlow version. Then, look for the GPU option and jupyter.</p><p>Or search below query in <a href="https://hub.docker.com/">docker hub</a>.</p><pre>tensorflow/tensorflow:2.10.0-gpu-jupyter</pre><p>Search for the above image on Docker Hub. Make sure to replace it with your version.</p><p>Copy entire image name, come back to your terminal and run the below</p><pre>docker pull &lt;your_image_name&gt;</pre><p><strong>For Pytorch:</strong></p><p>Go to the <a href="https://hub.docker.com/r/pytorch/pytorch/tags">Pytorch/Pytorch</a> Docker Hub page and select your Pytorch version. Then, look for your system supported cuda and cudnn version.</p><p>Copy entire image name, come back to your terminal and run the below</p><pre>docker pull &lt;your_image_name&gt;</pre><p><strong>For pytorch-jupyter:</strong></p><p>Go to <a href="https://hub.docker.com/r/pytorch/pytorch/tags">Pytorch/Pytorch</a> Docker Hub page and select your TensorFlow version. Then, look for your system supported cuda and cudnn version with jupyter.</p><p>Copy entire image name, come back to your terminal and run the below</p><pre>docker pull &lt;your_image_name&gt;</pre><p>This method is more flexible than installing CUDA on your host machine because it is not necessary to uninstall and install different versions for different projects’ requirements.</p><p>If you enjoyed reading, be sure to give it 50 claps! <strong><em>Follow </em></strong>and don’t miss out on any of my future posts — <a href="https://medium.com/@abdur-rahman/subscribe"><strong><em>subscribe </em></strong></a>to my profile for must-read blog updates!</p><blockquote><em>Thanks for reading!</em></blockquote><blockquote><em>Happy coding, don’t forget to clap and follow for similar content.</em></blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5886deff812c" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>