<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by paritosh raval on Medium]]></title>
        <description><![CDATA[Stories by paritosh raval on Medium]]></description>
        <link>https://medium.com/@paritoshraval100?source=rss-4c5fcfd4c5c4------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*Br3zpug9vQN0cLeUlNEr_w.jpeg</url>
            <title>Stories by paritosh raval on Medium</title>
            <link>https://medium.com/@paritoshraval100?source=rss-4c5fcfd4c5c4------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 24 Apr 2026 05:39:51 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@paritoshraval100/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Boost Your Coding Efficiency with Continue.dev]]></title>
            <link>https://blog.devgenius.io/boost-your-coding-efficiency-with-continue-dev-159e079c0f09?source=rss-4c5fcfd4c5c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/159e079c0f09</guid>
            <category><![CDATA[generative-ai-tools]]></category>
            <category><![CDATA[generative-ai-solution]]></category>
            <category><![CDATA[code]]></category>
            <category><![CDATA[genai]]></category>
            <dc:creator><![CDATA[paritosh raval]]></dc:creator>
            <pubDate>Mon, 15 Jul 2024 07:48:52 GMT</pubDate>
            <atom:updated>2024-07-15T16:42:11.481Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eSxk8ErG7Xw8rJNfaeQM0Q.png" /><figcaption>Image referenced from Continue.dev blogs</figcaption></figure><p>In the dynamic world of software development, efficient coding practices can significantly enhance productivity and project outcomes. Whether you’re an individual developer or part of a collaborative team, optimizing your coding workflow is essential. Enter <a href="https://continue.dev">continue.dev</a>, a robust platform designed to streamline and enhance your development process.</p><h3>What is Continue.dev?</h3><p>Continue.dev is a cutting-edge tool that integrates seamlessly into your development environment, providing features that help you maintain focus, manage coding tasks, and enhance your overall productivity. It’s designed to support developers across various programming languages and frameworks, making it a versatile addition to your toolkit.</p><h3>Getting Started</h3><p>Getting started with continue.dev is a breeze. Follow these steps to integrate it into your coding workflow:</p><ol><li><strong>Sign Up</strong>: Visit the <a href="https://continue.dev">continue.dev website</a> and create an account.</li><li><strong>Install the Extension</strong>: Download and install the continue.dev extension for your preferred IDE. Here I have used VSCode.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wz6S-Wx4Q8_WsHCtR0Q8NA.png" /></figure><p>Once installed, you can either sign in with your GitHub account to access the free limited-time offer, or you can install the model locally as shown below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QvwJT8fnf_QUyTPH-aeG3Q.png" /></figure><blockquote><strong>1. Download and start Ollama</strong><br>Checking for connection to Ollama… — <a href="https://ollama.com/download">https://ollama.com/download</a></blockquote><blockquote><strong>2. 
Download a model for chat<br></strong>We recommend using llama3, the latest open-source model trained by Meta.</blockquote><blockquote><strong>Command: </strong>ollama run llama3</blockquote><blockquote><strong>3. Download a model for tab autocomplete<br></strong>We recommend using starcoder2:3b, a state-of-the-art 3B parameter autocomplete model trained by Hugging Face.</blockquote><blockquote><strong>Command: </strong>ollama run starcoder2:3b</blockquote><blockquote><strong>4. Download a model for embeddings<br></strong>We recommend using nomic-embed-text, an 8192-context-length model that outperforms OpenAI ada-002 and text-embedding-3-small on both short and long context tasks.</blockquote><blockquote><strong>Command: </strong>ollama pull nomic-embed-text</blockquote><p>Once it’s done, you can use the continue.dev extension in your IDE as shown below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tCtUEVhLc6osg2j0yCdO3Q.png" /></figure><p>You can ask it to generate new code and chat about it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/632/1*OpavzVYY9pK43RFTrviXDQ.png" /></figure><p>You can try commands like CMD+L and CMD+I to interact with it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bEsPpYjWaj0oVa2rrNkTbw.png" /></figure><p>Please refer to <a href="https://www.continue.dev/">https://www.continue.dev/</a> for more information.</p><h3>Conclusion</h3><p>By integrating continue.dev with VS Code and GitHub, or using local models, you can significantly enhance your coding efficiency. This setup provides a unified solution for managing your development workflow, making it easier to stay productive and deliver high-quality code.</p><p>Reference: <a href="https://docs.continue.dev/setup/select-provider">https://docs.continue.dev/setup/select-provider</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=159e079c0f09" width="1" height="1" alt=""><hr><p><a 
href="https://blog.devgenius.io/boost-your-coding-efficiency-with-continue-dev-159e079c0f09">Boost Your Coding Efficiency with Continue.dev</a> was originally published in <a href="https://blog.devgenius.io">Dev Genius</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[GPT-4o: The New Frontier in AI]]></title>
            <link>https://medium.com/@paritoshraval100/gpt-4o-the-new-frontier-in-ai-79b91e4fc5ec?source=rss-4c5fcfd4c5c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/79b91e4fc5ec</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[chatgpt]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[paritosh raval]]></dc:creator>
            <pubDate>Sat, 08 Jun 2024 18:33:30 GMT</pubDate>
            <atom:updated>2024-06-08T18:33:30.531Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*q4fxt_cNZtbtX0FV9JIEBA.png" /></figure><p><strong>Image Credit: </strong>OpenAI</p><p>OpenAI has introduced GPT-4o, its most advanced AI model yet. Short for “GPT-4 Omni,” this model can handle text, images, and audio all at once, making it a huge step forward in AI technology.</p><p>It solves many problems of older models and is more reliable and versatile. For businesses, it means more efficient operations and better customer service. For individuals, it means easier and more helpful interactions with AI, making technology a seamless part of daily life.</p><h3>What’s New?</h3><h4>Handling Different Types of Input</h4><p>GPT-4o can process and understand text, audio, images, and video, and it can also create outputs in these formats. For example:</p><ul><li>It can understand a picture, listen to audio, and read text at the same time.</li><li>It can generate responses that include speaking with emotions, singing, or even laughing.</li></ul><h4>One Model for Everything</h4><p>Before GPT-4o, different models were used for tasks like transcribing audio and generating text. Now, GPT-4o uses one unified model, which means:</p><ul><li>It can understand and produce more natural audio outputs.</li><li>It can recognize different voices and background sounds in one go.</li></ul><h3>Better User Experience</h3><h4>Fast and Realistic Responses</h4><p>GPT-4o responds to audio inputs in about 320 milliseconds, making conversations feel more natural and immediate. 
This is a big improvement over the previous response time of 5.4 seconds.</p><h4>Free Access for Everyone</h4><p>OpenAI has made sure that even free users of ChatGPT can access some features of GPT-4o, such as:</p><ul><li>Intelligent conversations.</li><li>Creating and analyzing data charts.</li><li>Interacting with photos.</li><li>Summarizing and analyzing uploaded files.</li><li>Exploring and using GPTs from the GPT Store.</li><li>Personalizing the experience with memory functions.</li></ul><h3>Technical Improvements</h3><h4>Better Language Processing</h4><p>GPT-4o uses an improved system for processing text, making it faster and more efficient. This means:</p><ul><li>Quicker responses.</li><li>Less computing power needed.</li><li>More accurate text generation.</li></ul><h3>Real-World Uses</h3><h4>Customer Service</h4><p>GPT-4o can improve customer service by giving fast, accurate, and personalized responses, which reduces wait times and boosts satisfaction.</p><h4>Content Creation</h4><p>It helps create high-quality content for blogs, articles, and marketing, making it easier for writers.</p><h4>Healthcare</h4><p>GPT-4o can help doctors by understanding medical texts and patient data, making diagnoses and treatments more accurate.</p><h4>Education</h4><p>It can act as a smart tutor, providing personalized learning and answering students’ questions in real time.</p><h3>Ethical and Responsible AI</h3><h4>Reducing Bias</h4><p>GPT-4o has advanced methods to detect and reduce biases, ensuring fair and unbiased outputs.</p><h4>Transparency</h4><p>OpenAI is committed to being transparent about how GPT-4o works, helping users understand its decision-making process and building trust.</p><h3>Conclusion</h3><p>GPT-4o is set to change the world of AI with its advanced features and practical applications. 
Whether you want to improve customer service, create content, or innovate in your field, GPT-4o provides the tools you need to succeed.</p><h3>Reference</h3><p><a href="https://openai.com/index/hello-gpt-4o/"><strong>https://openai.com/index/hello-gpt-4o/</strong></a></p><p><strong><em>“With GPT-4o, OpenAI proves that even AI can multitask better than most humans, effortlessly handling text, images, and audio — all while keeping us wondering if it’s truly intelligent or just really good at faking it.”</em></strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=79b91e4fc5ec" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Basics of Generative AI]]></title>
            <link>https://medium.com/kinomoto-mag/basics-of-generative-ai-273139b10b2b?source=rss-4c5fcfd4c5c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/273139b10b2b</guid>
            <category><![CDATA[genai]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[chatgpt]]></category>
            <category><![CDATA[generative-ai-tools]]></category>
            <category><![CDATA[generative-ai-solution]]></category>
            <dc:creator><![CDATA[paritosh raval]]></dc:creator>
            <pubDate>Fri, 12 Apr 2024 19:17:43 GMT</pubDate>
            <atom:updated>2024-04-29T05:23:06.220Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*x9f9OyY1EzR4e7Uod19IWQ.jpeg" /></figure><h3>Introduction</h3><p>Generative AI, such as ChatGPT, is gaining popularity among people who aren’t necessarily tech experts. It uses math techniques from statistics, data science, and machine learning that researchers have been refining for a long time.</p><h3>What is generative AI?</h3><p>Generative AI is a type of AI that can generate new content. Many people use generative AI in chat applications. One popular example is ChatGPT, a chatbot made by OpenAI.</p><p>Generative AI apps understand human language and can respond with text, images, or even computer code.</p><h3>Large language models (LLMs)</h3><p>Generative AI applications use large language models (LLMs), specialized machine learning models for understanding and processing natural language.</p><p>These applications can do tasks like:</p><ol><li>Text Generation: Creating new sentences or paragraphs based on given prompts or contexts.</li><li>Language Translation: Translating text from one language to another.</li><li>Sentiment Analysis: Analyzing the sentiment or emotion expressed in a piece of text.</li><li>Text Summarization: Generating concise summaries of longer texts.</li><li>Question Answering: Providing answers to questions based on textual input.</li></ol><h4>Transformer models</h4><p>Transformer models are a type of advanced technology used for understanding and working with text. They are really good at figuring out relationships between words in sentences. This helps them understand the meaning of text more effectively, especially over longer distances in a sentence.</p><p>The main idea behind transformer models is their ability to pay attention to different parts of a sentence at once, which helps them capture important details and connections between words. 
This makes them useful for tasks like translating languages, summarizing text, and understanding sentiments in written content.</p><p>These models are popular because they can handle large amounts of text data efficiently and can learn complex patterns from this data. They are made up of multiple layers that work together to process and understand text.</p><p>Some well-known transformer models you might have heard of include BERT, GPT, and T5, which have been trained on huge amounts of text to become really good at tasks like answering questions or generating text based on given prompts. They have significantly improved our ability to work with and understand language using computers.</p><p><strong><em>Tokenization </em></strong>refers to the process of breaking down a piece of text into smaller units called tokens. These tokens can be words, subwords, or characters, depending on the specific tokenization technique used. Tokenization is a fundamental step in natural language processing (NLP) and is used to prepare text data for further analysis or processing by algorithms. The goal of tokenization is to segment the text into meaningful units that can be easily handled and manipulated by computational systems.</p><blockquote><strong>Sentence</strong>: “The cat jumped over the fence.”</blockquote><blockquote><strong>Tokenization</strong>:</blockquote><blockquote><strong>Tokens (words):</strong> [“The”, “cat”, “jumped”, “over”, “the”, “fence”, “.”]</blockquote><blockquote>In this example, the sentence is tokenized into individual words, and each word becomes a token. 
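This word-level tokenization can be sketched in a few lines of Python (a naive regex tokenizer for illustration only; production LLM tokenizers use subword schemes such as byte-pair encoding):

```python
import re

def tokenize(text):
    # Each word becomes one token; each punctuation mark becomes its own token.
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("The cat jumped over the fence."))
# ['The', 'cat', 'jumped', 'over', 'the', 'fence', '.']
```
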
This tokenized representation allows a computer or AI system to process and understand the sentence more effectively, as it breaks down the text into manageable units for analysis and further processing.</blockquote><p><strong>Embeddings</strong> refer to the process of representing words or tokens as numerical vectors (arrays of numbers) in a way that captures their semantic meanings and relationships within a dataset. In natural language processing (NLP), word embeddings are used extensively to convert words from a textual format into a format that can be understood and processed by machine learning algorithms.</p><blockquote>Let’s use a simple example to illustrate word embeddings:</blockquote><blockquote>Suppose we have a small corpus (collection of text) with the following sentences:</blockquote><blockquote>“I love apples.”</blockquote><blockquote>“I enjoy bananas.”</blockquote><blockquote>“I like oranges.”</blockquote><h4><strong>Step 1: Tokenization</strong></h4><blockquote>We tokenize each sentence into individual words:</blockquote><blockquote><strong>Sentence 1:</strong> [“I”, “love”, “apples”, “.”]</blockquote><blockquote><strong>Sentence 2: </strong>[“I”, “enjoy”, “bananas”, “.”]</blockquote><blockquote><strong>Sentence 3:</strong> [“I”, “like”, “oranges”, “.”]</blockquote><h4><strong>Step 2: Vocabulary Creation</strong></h4><blockquote>We create a vocabulary containing unique words from the corpus:</blockquote><blockquote><strong>Vocabulary:</strong> [“I”, “love”, “apples”, “enjoy”, “bananas”, “like”, “oranges”, “.”]</blockquote><h4><strong>Step 3: Assigning Embeddings</strong></h4><blockquote>We assign numerical vectors (embeddings) to each word in the vocabulary. 
Let’s represent each word with a 3-dimensional vector for simplicity:</blockquote><blockquote><strong>“I”:</strong> [0.2, 0.5, -0.1]</blockquote><blockquote><strong>“love”:</strong> [0.8, 0.3, -0.2]</blockquote><blockquote><strong>“apples”:</strong> [0.6, 0.1, 0.4]</blockquote><blockquote><strong>“enjoy”:</strong> [0.7, 0.4, 0.2]</blockquote><blockquote><strong>“bananas”</strong>: [0.5, 0.6, -0.3]</blockquote><blockquote><strong>“like”</strong>: [0.3, 0.2, 0.9]</blockquote><blockquote><strong>“oranges”</strong>: [0.4, 0.7, 0.5]</blockquote><blockquote><strong>“.”</strong>: [0.0, 0.0, 0.0] (representing punctuation)</blockquote><h4><strong>Using Embeddings</strong></h4><blockquote>Now, each word in our sentences can be represented by its embedding vector. For example:</blockquote><blockquote>“I love apples.” can be represented as:</blockquote><blockquote>[“I”: [0.2, 0.5, -0.1], “love”: [0.8, 0.3, -0.2], “apples”: [0.6, 0.1, 0.4], “.”: [0.0, 0.0, 0.0]]</blockquote><h4><strong>Word Similarity</strong></h4><blockquote>We can observe that words with similar meanings or contexts (e.g., “love” and “like”) have embeddings that are closer in value, reflecting their semantic relationships.</blockquote><blockquote>These embeddings allow machine learning models to understand and process textual data more effectively by capturing word meanings and relationships based on patterns in the input text. The learned embeddings can be further refined and used as input features for various NLP tasks like sentiment analysis, language translation, and more.</blockquote><p><strong>Attention</strong> is a critical mechanism enabling AI systems to selectively focus on important information and understand contextual relationships within data. 
This enhances performance across tasks like language understanding, image analysis, and decision-making.</p><blockquote><strong>Example</strong>: English to French Translation</blockquote><blockquote><strong>Input</strong>: “The cat is sitting on the mat.”</blockquote><blockquote><strong>Tokenization</strong>:</blockquote><blockquote>Break down the input sentence into tokens:</blockquote><blockquote><strong>Tokens</strong>: [“The”, “cat”, “is”, “sitting”, “on”, “the”, “mat”, “.”]</blockquote><blockquote><strong>Word Embeddings:</strong></blockquote><blockquote>Convert each token into a numerical representation (embedding).</blockquote><blockquote><strong>Transformer Encoder:</strong></blockquote><blockquote>The transformer’s encoder processes the embedded tokens and computes self-attention scores.</blockquote><blockquote>The attention mechanism helps the model focus on relevant words (e.g., “cat”, “sitting”, “mat”) while considering their relationships within the sequence.</blockquote><blockquote><strong>Transformer Decoder:</strong></blockquote><blockquote>The transformer’s decoder uses the encoder’s output and attends to relevant parts of the input to generate the translated output.</blockquote><blockquote>The decoder’s attention mechanism helps align the translated words with the corresponding words in the input, ensuring accurate translation.</blockquote><blockquote><strong>Output</strong>: “Le chat est assis sur le tapis.”</blockquote><blockquote>In this example, the attention mechanism allows the transformer model to focus on key words (like “cat”, “sitting”, “mat”) during translation, understanding their roles and relationships within the input sentence. 
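The self-attention scoring described here can be illustrated with a minimal scaled dot-product sketch in pure Python (toy vectors for illustration; real transformers use learned query/key/value projection matrices and multiple attention heads):

```python
import math

def softmax(scores):
    # Numerically stable softmax: the weights are positive and sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys):
    # Scaled dot-product score of one query vector against each key vector.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    return softmax(scores)

# The query attends most strongly to the key it is most similar to.
keys = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.5, 0.5, 0.0]]
weights = attention_weights([1.0, 0.0, 1.0], keys)
```
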
This selective focus contributes to accurate and contextually relevant translations, demonstrating the importance of attention in generative AI for understanding and processing complex data.</blockquote><p><strong>“As we continue to explore the capabilities and potential of AI, it’s clear that this technology is reshaping our world and opening up new horizons for innovation and discovery.”</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=273139b10b2b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/kinomoto-mag/basics-of-generative-ai-273139b10b2b">Basics of Generative AI</a> was originally published in <a href="https://medium.com/kinomoto-mag">Kinomoto AI</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Create Diagrams with ChatGPT and draw.io]]></title>
            <link>https://blog.devgenius.io/how-to-create-diagrams-with-chatgpt-and-draw-io-851efb626f08?source=rss-4c5fcfd4c5c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/851efb626f08</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[diagrams]]></category>
            <category><![CDATA[chatgpt]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[paritosh raval]]></dc:creator>
            <pubDate>Fri, 05 Apr 2024 06:59:18 GMT</pubDate>
            <atom:updated>2024-04-09T03:46:26.191Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Qj_vYcqVjkfPG5_7NHHEAQ.jpeg" /></figure><p>Diagrams help us understand complex ideas by showing them visually. But making diagrams can be hard. Luckily, with ChatGPT and draw.io, it&#39;s much easier!</p><p>Let’s create a sequence diagram illustrating the user authentication flow using ChatGPT and draw.io.</p><h4>Steps:</h4><p>Open <strong>ChatGPT</strong> and provide the following prompt:</p><blockquote><strong>“Create PlantUML template for generating a sequence diagram for user authentication”</strong></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ekY2PFI1ReeTG6RHCZOBAQ.png" /></figure><p>It will generate something similar to the following. <strong>Copy</strong> it.</p><pre>@startuml<br>actor User<br>participant &quot;Login Page&quot; as LP<br>participant &quot;Authentication Service&quot; as AS<br>database Database<br><br>User -&gt; LP: Enter Credentials<br>activate LP<br>LP -&gt; AS: Send Credentials<br>activate AS<br>AS -&gt; Database: Verify Credentials<br>activate Database<br>Database --&gt; AS: Verification Result<br>deactivate Database<br>AS --&gt; LP: Authentication Result<br>deactivate AS<br>LP --&gt; User: Authentication Status<br>deactivate LP<br>@enduml</pre><blockquote><strong>Now, open</strong><a href="https://app.diagrams.net/"><strong> Draw.io</strong></a><strong> and navigate to Arrange → Insert → Advanced → PlantUML…</strong></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gRksj9mXv2GF_fYJfAmrIg.png" /></figure><blockquote><strong>Paste the ChatGPT-generated PlantUML code in the box.</strong></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*pqPv4j8WUgYFecv60go1HQ.png" /></figure><blockquote><strong>Click on “Insert” and that’s it. It will generate a sequence diagram for you. 
You can export it (File → Export as → …) and use it in your documentation.</strong></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2szYmxmAsGnX9v9Qi4wj5A.png" /></figure><h4>Exercise:</h4><p>I’ve already provided the prompt to create a class diagram for “inventory management” using ChatGPT and draw.io. Let’s proceed with the task.</p><blockquote><strong>“Create PlantUML template for generating a class diagram for inventory management”</strong></blockquote><p>It should generate a diagram like the one below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tdQYefZuDVM1gDegWuq7vQ.png" /></figure><h4>Conclusion:</h4><p>The combination of ChatGPT and draw.io offers a seamless way to create diagrams quickly and efficiently. By leveraging ChatGPT’s ability to generate PlantUML code based on simple prompts, users can easily obtain the necessary diagram structure. Then, with draw.io’s integration of PlantUML support, users can effortlessly convert the generated code into a visual representation. This streamlined process simplifies diagram creation, making it accessible to a wide range of users. Whether it’s for illustrating web service communication, login authentication flows, or any other requirement like inventory management, this approach provides a convenient solution for visualizing complex concepts and enhancing documentation efforts.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=851efb626f08" width="1" height="1" alt=""><hr><p><a href="https://blog.devgenius.io/how-to-create-diagrams-with-chatgpt-and-draw-io-851efb626f08">How to Create Diagrams with ChatGPT and draw.io</a> was originally published in <a href="https://blog.devgenius.io">Dev Genius</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Mastering the Art of Prompt Engineering: Crafting Engaging and Effective Prompts]]></title>
            <link>https://medium.com/kinomoto-mag/mastering-the-art-of-prompt-engineering-crafting-engaging-and-effective-prompts-0779d5fa25a3?source=rss-4c5fcfd4c5c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/0779d5fa25a3</guid>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[prompt]]></category>
            <category><![CDATA[prompt-engineering]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[paritosh raval]]></dc:creator>
            <pubDate>Tue, 02 Apr 2024 06:39:11 GMT</pubDate>
            <atom:updated>2024-04-29T14:56:25.017Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6HY_XsankA_codDkIDGFqA.jpeg" /></figure><h4>Introduction:</h4><p>In the world of artificial intelligence (AI), the way we ask questions, known as input queries or prompts, is really important. It affects how accurate and relevant the answers from AI systems are. Whether we’re training language models, making chatbots, or doing tasks like understanding natural language, writing good prompts is key to getting the results we want. In this blog, we’ll talk about different ways to write better prompts for AI, and we’ll give examples to show how each technique works.</p><p><strong>Clear and Specific Input Queries:</strong> It’s really important to be clear and specific when asking questions to AI systems. If the questions are unclear or vague, the AI might get confused and give the wrong answers. Let’s look at an example to understand this better:</p><blockquote><strong>Poor Prompt: </strong>“Predict.”</blockquote><blockquote><strong>Improved Prompt:</strong> “Using historical sales data, forecast next month’s revenue for Product X.”</blockquote><p><strong>Contextual Relevance in AI Input Queries: </strong>Giving extra details in the questions helps AI systems understand what they need to do better. It guides them to give better answers that fit the situation.</p><blockquote><strong>Poor Prompt:</strong> “Discuss.”</blockquote><blockquote><strong>Improved Prompt: </strong>“Based on recent customer feedback, analyze sentiment trends towards our new product.”</blockquote><p><strong>Open-Ended Input Queries for AI: </strong>Instead of asking questions that have only yes or no answers, open-ended questions give AI models more room to provide detailed and thoughtful responses. 
This helps them understand things better and gives more interesting answers.</p><blockquote><strong>Poor Prompt:</strong> “Is this article informative?”</blockquote><blockquote><strong>Improved Prompt:</strong> “Provide an analysis of the key points discussed in this article and evaluate its overall informativeness.”</blockquote><p><strong>Varied Input Query Formats:</strong> Experimenting with different query formats helps AI models handle varied tasks and data better. Mixing questions with commands or adding background information makes the AI more flexible and reliable:</p><blockquote><strong>Poor Prompt: </strong>“Summarize.”</blockquote><blockquote><strong>Improved Prompt:</strong> “Generate a concise summary of the main themes and findings from this research paper.”</blockquote><p><strong>Encouragement and Positivity in AI Input Queries:</strong> Framing queries in a positive, constructive tone tends to elicit clearer and more useful responses than negatively framed questions.</p><blockquote><strong>Poor Prompt:</strong> “Explain why this solution won’t work.”</blockquote><blockquote><strong>Improved Prompt:</strong> “Propose potential improvements to enhance the effectiveness of this solution.”</blockquote><p>The following considerations round out the discussion of prompt engineering. 
Prompt engineering is a nuanced practice that shapes how people use AI, raises ethical questions, and has a broader impact on society.</p><ol><li><strong>Consistency in Input Queries: </strong>Emphasizing the significance of keeping input queries consistent to guarantee that AI-generated responses are uniform and reliable across various interactions and situations.</li><li><strong>Feedback Mechanisms: </strong>Talking about how feedback mechanisms help continuously improve input queries and enhance AI performance by incorporating user input and validation in an iterative process.</li><li><strong>Ethical Considerations:</strong> Dealing with ethical issues in prompt engineering, including avoiding biased language and unintentionally reinforcing harmful stereotypes in input queries.</li><li><strong>Accessibility Features:</strong> Exploring the integration of accessibility features in input queries to accommodate users with disabilities and ensure equitable access to AI technologies.</li><li><strong>Continuous Learning: </strong>Highlighting the importance of continuous learning and adaptation in prompt engineering, wherein AI systems evolve over time based on user feedback and evolving requirements.</li><li><strong>Collaboration with Domain Experts: </strong>Advocating for collaboration between AI developers and domain experts in crafting input queries, leveraging subject matter expertise to enhance the relevance and effectiveness of prompts.</li></ol><h4>Conclusion:</h4><p>Asking clear, relevant, and open-ended questions with a positive tone is crucial for optimal AI performance. 
Effective questioning improves AI models like chatbots and language systems, enhancing their performance across different tasks and benefiting users.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0779d5fa25a3" width="1" height="1" alt=""><hr><p><a href="https://medium.com/kinomoto-mag/mastering-the-art-of-prompt-engineering-crafting-engaging-and-effective-prompts-0779d5fa25a3">Mastering the Art of Prompt Engineering: Crafting Engaging and Effective Prompts</a> was originally published in <a href="https://medium.com/kinomoto-mag">Kinomoto AI</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Fine-Tuning AI: Transforming Life Sciences with Precision and Expertise]]></title>
            <link>https://medium.com/@paritoshraval100/fine-tuning-ai-transforming-life-sciences-with-precision-and-expertise-3308de1d1acf?source=rss-4c5fcfd4c5c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/3308de1d1acf</guid>
            <category><![CDATA[life-sciences]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[healthcare]]></category>
            <category><![CDATA[fine-tuning]]></category>
            <dc:creator><![CDATA[paritosh raval]]></dc:creator>
            <pubDate>Mon, 01 Apr 2024 14:06:03 GMT</pubDate>
            <atom:updated>2024-04-01T14:06:03.595Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*f4SZ0NWd-Lucpa4N8wj9LA.jpeg" /></figure><h4>Introduction:</h4><p>Fine-tuning AI models is like tweaking a powerful tool to make it work better for specific jobs in fields like medicine and biology. This technique helps scientists understand complicated things in our bodies, find new medicines faster, and tailor treatments to each person. In this blog, we’ll talk about how fine-tuning AI models is used in life sciences, explaining it in simple terms and giving examples to show how it helps researchers make big discoveries and improve people’s health.</p><h4>Understanding Fine-Tuning:</h4><p>Fine-tuning AI models is like customizing a tool to fit a particular job. Instead of starting from scratch, which takes a lot of time and effort, we use models that have already learned a lot from previous tasks. These pre-trained models already know a lot about patterns and features in data. Fine-tuning is about making small tweaks to these models so they can better handle new tasks or datasets.</p><p>Imagine you have a toolbox with various tools for different jobs. Instead of making a new tool every time you need something done, you might take one of the tools you already have and adjust it slightly to fit the new task better. That’s what fine-tuning is like for AI models.</p><p>By fine-tuning, we’re essentially teaching the model to specialize in a new area while still benefiting from what it’s learned before. It’s like giving your tool a new job without starting from scratch, which saves time and effort.</p><p>Here are some examples of fine-tuning AI models for the life science domain:</p><ol><li><strong>Drug Side Effect Prediction:</strong> When scientists make new medicines, they need to check if they have any bad side effects. They can use AI to help with this too. 
By teaching the AI about different drugs and the side effects they can cause, scientists can make it predict which new drugs might have problems. By teaching the AI well and giving it lots of data, they can make sure it’s good at spotting potential risks, which helps keep people safe when they take new medicines.</li><li><strong>Clinical Trial Optimization:</strong> Imagine scientists want to test a new treatment on people to see if it works. This is called a clinical trial. But finding the right people for the trial can be tricky. With AI, scientists can analyze lots of information about patients to find the best candidates for the trial. By using AI to help choose the right patients and plan the trial better, scientists can make sure it runs smoothly and quickly, which helps get new treatments to patients faster.</li><li><strong>Infectious Disease Forecasting: </strong>Health officials want to predict and prevent the spread of diseases like flu or COVID-19. AI can help them do this by analyzing data about how diseases spread. By teaching AI about past outbreaks and how diseases move between people, officials can use it to forecast where outbreaks might happen next. This helps them plan ahead and take actions to stop the spread, like vaccinations or travel restrictions, which protects people from getting sick.</li><li><strong>Clinical Decision Support:</strong> Healthcare providers use clinical decision support systems to assist in diagnosing diseases and planning treatments. AI models can be fine-tuned with patient data, medical guidelines, and clinical research findings to provide personalized recommendations for patient care. 
By optimizing the AI’s algorithms and training it with real-world patient cases, clinicians can improve diagnostic accuracy, treatment effectiveness, and patient outcomes.</li><li><strong>Personalized Nutrition Recommendations:</strong> Nutritionists can leverage AI models fine-tuned with dietary intake data and individual health metrics to provide personalized nutrition recommendations. By optimizing the AI’s algorithms and training it with dietary guidelines and nutritional research, nutritionists can tailor dietary plans to meet the specific needs and goals of each individual, promoting better health and wellness outcomes.</li><li><strong>Predictive Modeling for Public Health: </strong>Public health agencies can employ AI models fine-tuned with demographic data, health indicators, and environmental factors to develop predictive models for disease outbreaks and public health emergencies. By optimizing the AI’s algorithms and training it with data from past outbreaks and epidemiological studies, policymakers can anticipate disease trends, allocate resources effectively, and implement targeted interventions to mitigate the impact of public health threats.</li></ol><h4>Conclusion:</h4><p>Fine-tuning AI models has changed the way we do science in fields like healthcare, drug discovery, biology, and environmental conservation. By making small tweaks and paying close attention to details, scientists can now use AI to solve big problems with more accuracy and flexibility than ever before. This has led to exciting discoveries and improvements in human health and protecting the environment. As we keep learning and using AI in smarter ways, there’s so much more we can do to make life sciences even better.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3308de1d1acf" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Exploring Ngrok: Exposing Local Servers Securely]]></title>
            <link>https://blog.devgenius.io/exploring-ngrok-exposing-local-servers-securely-50fef2ee500a?source=rss-4c5fcfd4c5c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/50fef2ee500a</guid>
            <category><![CDATA[web]]></category>
            <category><![CDATA[servers]]></category>
            <category><![CDATA[webservices-testing]]></category>
            <category><![CDATA[local-server]]></category>
            <category><![CDATA[ngrok]]></category>
            <dc:creator><![CDATA[paritosh raval]]></dc:creator>
            <pubDate>Fri, 29 Mar 2024 14:07:58 GMT</pubDate>
            <atom:updated>2024-03-29T16:56:04.192Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*I8_dJ_9LAhOmFnUe5nuRuQ.jpeg" /></figure><h4>Introduction:</h4><p>In today’s digital age, web developers and engineers often find themselves needing to test or share their locally hosted web applications, APIs, or services with others over the internet. However, exposing a local server to the public internet securely can be a daunting task due to security concerns and network configurations. This is where Ngrok comes to the rescue. Ngrok is a powerful tool that simplifies the process of exposing local servers securely, enabling developers to share their work with collaborators or test webhooks and APIs in real-time. In this blog post, we’ll explore the features and capabilities of Ngrok and how it can benefit developers in their day-to-day workflow.</p><h4>What is Ngrok?</h4><p>Ngrok is a software tool that creates a secure tunnel between a public endpoint and a locally hosted web server or service. It allows developers to expose their local development environment to the internet temporarily, enabling external access to their applications or APIs for testing, collaboration, or demonstration purposes. Ngrok handles the complexities of network configuration and security, making it easy for developers to share their work with others without exposing their local machines to security risks.</p><h4>Key Features of Ngrok:</h4><ol><li><strong>Expose Local Servers:</strong> Ngrok enables developers to expose servers running on their local machines to the internet with a single command. This allows them to share their local development environment with collaborators or test web applications and APIs in real-time.</li><li><strong>Secure Tunneling:</strong> Ngrok establishes a secure tunnel between the local machine and the Ngrok servers using TLS encryption. 
This ensures that all data transmitted between the local server and the Ngrok servers is encrypted, protecting it from interception or tampering by unauthorized parties.</li><li><strong>Dynamic URLs</strong>: Each time Ngrok is run, it generates a unique URL that serves as a public endpoint for accessing the locally hosted server. This URL can be easily shared with collaborators or clients, allowing them to access the local server over the internet without any additional setup.</li><li><strong>HTTPS Support: </strong>Ngrok supports HTTPS by default, allowing developers to test secure web applications or APIs locally without the need for SSL certificates. This ensures that all communication between the client and the server is encrypted and secure.</li><li><strong>Traffic Inspection:</strong> Ngrok provides a web interface that allows developers to inspect incoming requests and responses, monitor traffic in real-time, and debug issues with their locally hosted services. This makes it easy to identify and troubleshoot any problems that may arise during testing or development.</li></ol><p><strong>Installation</strong>: <a href="https://dashboard.ngrok.com/get-started/setup"><em>https://dashboard.ngrok.com/get-started/setup</em></a></p><p>To get started with Ngrok, follow these setup points from the official Ngrok documentation:</p><ol><li><strong>Sign Up for Ngrok Account:</strong> Visit the Ngrok website and sign up for an account. 
You’ll need an account to access the Ngrok dashboard and generate authentication tokens.</li><li><strong>Download Ngrok:</strong> After signing up, download the Ngrok client for your operating system (Windows, macOS, Linux) from the Ngrok website.</li><li><strong>Installation: </strong>Install Ngrok on your local machine by following the installation instructions provided for your operating system.</li></ol><pre>brew install ngrok/ngrok/ngrok<br>ngrok config add-authtoken 2eMRCRO5YB636T44rk14pSnV29u_5kvN3wg3T8JPBpk3v3uW3</pre><blockquote><strong>NOTE</strong>: The authtoken mentioned here will not work for you; you will need to use your own unique token.</blockquote><p>4. Next, deploy your application online by exposing it at an ephemeral domain forwarding to your upstream service. For instance, if your application is running locally on port 5000, run the command:</p><pre>ngrok http 5000</pre><blockquote>Follow the steps at <a href="https://dashboard.ngrok.com/get-started/setup">https://dashboard.ngrok.com/get-started/setup</a> for your system</blockquote><p>5. It will display output like the following:</p><pre>ngrok                                                                         (Ctrl+C to quit)<br><br>Take our ngrok in production survey! 
https://forms.gle/aXdBFdzEd36duddn6<br><br>Session Status                online<br>Account                       Your Name (Plan: Free)<br>Version                       3.8.0<br>Region                        India (in)<br>Latency                       63ms<br>Web Interface                 http://127.0.0.1:4040<br>Forwarding                    https://1111-123-123-323-10.ngrok-free.app -&gt; http://localhost:5000<br><br>Connections                   ttl     opn     rt1     rt5     p50     p90<br>                              2       0       0.00    0.00    121.64  137.59<br><br>HTTP Requests<br>-------------</pre><p>The URL <a href="https://1111-123-123-323-10.ngrok-free.app">https://1111-123-123-323-10.ngrok-free.app</a> is the public URL generated by Ngrok for your local server. When you run Ngrok, it creates a secure tunnel to expose your locally hosted application to the internet. This generated Ngrok URL serves as the public endpoint through which external users can access your local server.</p><p>In this specific case, accessing the Ngrok URL <a href="https://1111-123-123-323-10.ngrok-free.app">https://1111-123-123-323-10.ngrok-free.app</a> will forward incoming requests to the local server running on <a href="http://localhost:5000/">http://localhost:5000</a>. 
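As a concrete example of the upstream service Ngrok forwards to, here is a minimal sketch using only Python's standard library; the handler and response body are invented for illustration, and in practice you would run your real application on port 5000 to match the `ngrok http 5000` command above:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET with a small plain-text body
        body = b"hello from the local server"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request console logging

# Port 0 lets the OS pick a free port for this demo; use 5000 to match
# the ngrok command in the article.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch a page from the local server, as Ngrok would on behalf of a visitor
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}") as resp:
    resp_body = resp.read().decode()
server.shutdown()

print(resp_body)  # hello from the local server
```

Running `ngrok http <port>` against a server like this makes it reachable at the generated public URL. 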
This allows anyone, anywhere to access your locally hosted application via the Ngrok URL without being physically connected to your local network.</p><h4>Conclusion:</h4><p>Ngrok is a valuable tool for developers who need to expose their local development environment to the internet securely. Its ease of use, powerful features, and robust security make it an essential tool in the toolbox of any web developer or engineer. By simplifying the process of exposing local servers securely, Ngrok empowers developers to collaborate more effectively, test their applications with confidence, and bring their ideas to life faster than ever before.</p><p>In summary, Ngrok is a game-changer for developers looking to share their work with others or test web applications and APIs in real-time. With Ngrok, exposing local servers securely has never been easier.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=50fef2ee500a" width="1" height="1" alt=""><hr><p><a href="https://blog.devgenius.io/exploring-ngrok-exposing-local-servers-securely-50fef2ee500a">Exploring Ngrok: Exposing Local Servers Securely</a> was originally published in <a href="https://blog.devgenius.io">Dev Genius</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Extracting User Story Details from Azure DevOps Python]]></title>
            <link>https://blog.devgenius.io/extracting-user-story-details-from-azure-devops-python-a24fed55f522?source=rss-4c5fcfd4c5c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/a24fed55f522</guid>
            <category><![CDATA[jira]]></category>
            <category><![CDATA[azure]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[user-stories]]></category>
            <category><![CDATA[azure-devops]]></category>
            <dc:creator><![CDATA[paritosh raval]]></dc:creator>
            <pubDate>Fri, 29 Mar 2024 08:12:35 GMT</pubDate>
            <atom:updated>2024-03-29T16:59:02.269Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*G0nlq3uvCo8muP5u1Lk_6Q.jpeg" /></figure><h4>Introduction:</h4><p>In the realm of software development, Azure DevOps serves as a cornerstone for managing project workflows, including user stories, tasks, and sprints. However, manually retrieving story details from Azure DevOps can be time-consuming and prone to errors. In this blog, we’ll explore how to automate the extraction of story details from Azure DevOps URLs using Python.</p><h4>Understanding the Challenge:</h4><p>In agile software development, user stories encapsulate requirements and serve as building blocks for delivering functionality. Often, teams need to extract story details such as title and description from Azure DevOps for various purposes, including documentation, planning, and analysis. Automating this process can enhance productivity and accuracy.</p><h4>Solution Overview:</h4><p>Our solution involves leveraging Python along with Azure DevOps REST API to automate the extraction of story details from Azure DevOps URLs. We’ll parse the URL to extract essential components such as the server URL, project name, and story ID. Then, we’ll make an authenticated API request to Azure DevOps to fetch the story details.</p><h4>Step 1: Parsing the URL:</h4><p>We start by parsing the provided Azure DevOps URL to extract crucial components using Python’s urllib.parse module. 
The extracted components include the server URL, organization name, project name, and story ID.</p><p>The URL “<a href="https://dev.azure.com/praval100/test_project/_workitems/edit/4/">https://dev.azure.com/your_org/your_project/_workitems/edit/123/</a>” is dissected to extract the server URL (“dev.azure.com”), organization name (“<a href="https://dev.azure.com/praval100/test_project/_workitems/edit/4/">your_org</a>”), project name (“your_project”), and story ID (“<a href="https://dev.azure.com/praval100/test_project/_workitems/edit/4/">123</a>”).</p><pre>import requests<br>import urllib.parse<br><br>def extract_info_from_url(url):<br>    # Parse the Azure DevOps URL<br>    parsed_url = urllib.parse.urlparse(url)<br>    <br>    # Extract server name<br>    server_name = parsed_url.netloc<br>    <br>    # Extract path segments<br>    path_segments = parsed_url.path.strip(&#39;/&#39;).split(&#39;/&#39;)<br>    <br>    # Extract organization name<br>    organization_name = path_segments[0]<br>    <br>    # Extract project name<br>    project_name = path_segments[1]<br>    <br>    # Extract story ID<br>    story_id = path_segments[-1]<br>    <br>    return server_name, organization_name, project_name, story_id</pre><h4>Step 2: Making the API Request:</h4><p>Once we have the necessary information extracted from the URL, we construct the API endpoint URL using the server URL, organization name, project name, and story ID. We then make a GET request to the Azure DevOps REST API endpoint to fetch the story details. 
The API request is authenticated with a Basic authorization header carrying a base64-encoded personal access token (PAT).</p><pre>def get_story_from_url(story_url):<br>    server_url, organization_name, project_name, story_id = extract_info_from_url(story_url)<br><br>    token = &quot;your_token&quot;<br>    # API endpoint URL<br>    api_url = f&quot;https://{server_url}/{organization_name}/{project_name}/_apis/wit/workitems/{story_id}?api-version=7.1-preview.3&quot;<br>    # Request headers with authentication<br><br>    headers = {<br>        &quot;Authorization&quot;: f&quot;Basic {token}&quot;,<br>        &quot;Content-Type&quot;: &quot;application/json&quot;<br>    }<br><br>    try:<br>        # Make GET request to fetch story details<br>        response = requests.get(api_url, headers=headers)<br><br>        # Check if request was successful (status code 200)<br>        if response.status_code == 200:<br>            # Parse JSON response<br>            story_details = response.json()<br><br>            # Extract specific details (for example, title and description)<br>            title = story_details[&quot;fields&quot;][&quot;System.Title&quot;]<br>            description = story_details[&quot;fields&quot;][&quot;System.Description&quot;]<br><br>            output_json = {<br>                &quot;story_title&quot;: title,<br>                &quot;story_desc&quot;: description,<br>                &quot;story_id&quot;: story_id<br>            }<br><br>            return output_json<br>        else:<br>            print(&quot;Failed to fetch story details. Status code:&quot;, response.status_code)<br>            return None<br><br>    except Exception as e:<br>        print(&quot;An error occurred:&quot;, e)<br>        return None</pre><p>Notice that the code above uses a token for authentication. 
You can generate a token by following the steps below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/568/1*3lOOwNLBPAJsijaw8upuTg.png" /></figure><ol><li>Sign in to Azure DevOps: Go to the Azure DevOps portal (<a href="https://dev.azure.com/">https://dev.azure.com/</a>) and sign in with your credentials.</li><li>Navigate to User Settings: Once logged in, click on your profile icon located in the top right corner of the page. From the dropdown menu, select “Security”.</li><li>Generate PAT: In the Security page, locate the “Personal access tokens” section. Click on the “New Token” button to create a new PAT.</li><li>Select Scopes: Under “Token scopes”, choose the appropriate scopes based on the level of access your application or script requires. For example, if you’re using the token for read-only access to work items, select the appropriate scope accordingly.</li><li>Copy Token: Once generated, the token will be displayed on the screen. Make sure to copy the token and store it securely. You won’t be able to retrieve it later.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Utj2uVAkdiBH5__wl05JOA.png" /></figure><ol><li>Go to your terminal and run <strong>echo -n :PAT | base64</strong></li></ol><p><strong>Example</strong>:</p><pre>echo -n :y4daplka6d7tnhewsuph2d4tn2qfqdtrveph27ymcrgaekvaj6dq | base64</pre><p>which produces output like</p><pre>Ond0dGFwfGfddTYfdG5vfHdzdWRwagQ0Y3RgcWZxZHRgdmVwaDIgeW1jcmdhZWt2YWo2ZHE=</pre><p>Use this base64-encoded value as the token in the code.</p><blockquote><strong>Note</strong>: Please note that the token provided in this example is just for demonstration purposes. It won’t work for you. You’ll need to generate your own token following the steps outlined above. Additionally, ensure that you encode your Personal Access Token (PAT) with base64 before using it in your application or script for authentication. 
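The same encoding can be done directly in Python with the standard library; the PAT value here is a placeholder, not a working token:

```python
import base64

pat = "your_personal_access_token"  # placeholder; substitute your own PAT

# Azure DevOps Basic auth uses base64(":" + PAT), i.e. an empty username
token = base64.b64encode(f":{pat}".encode("ascii")).decode("ascii")
print(token)
```

The resulting string is what goes after `Basic ` in the Authorization header. 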
This step satisfies the HTTP Basic authentication format and ensures proper authorization when accessing Azure DevOps resources (note that base64 is an encoding, not encryption, so the token must still be kept secret).</blockquote><h4>Conclusion:</h4><p>By automating the extraction of story details from Azure DevOps URLs using Python, teams can enhance their productivity and accuracy in managing project work items. This approach not only saves time but also ensures consistency and reliability in retrieving essential information from Azure DevOps, empowering teams to focus on delivering high-quality software solutions.</p><h4>Code:</h4><pre>import requests<br>import urllib.parse<br><br>def extract_info_from_url(url):<br>    # Parse the Azure DevOps URL<br>    parsed_url = urllib.parse.urlparse(url)<br>    <br>    # Extract server name<br>    server_name = parsed_url.netloc<br>    <br>    # Extract path segments<br>    path_segments = parsed_url.path.strip(&#39;/&#39;).split(&#39;/&#39;)<br>    <br>    # Extract organization name<br>    organization_name = path_segments[0]<br>    <br>    # Extract project name<br>    project_name = path_segments[1]<br>    <br>    # Extract story ID<br>    story_id = path_segments[-1]<br>    <br>    return server_name, organization_name, project_name, story_id<br><br>def get_story_from_url(story_url):<br>    server_url, organization_name, project_name, story_id = extract_info_from_url(story_url)<br><br>    token = &quot;your_token&quot;<br>    # API endpoint URL<br>    api_url = f&quot;https://{server_url}/{organization_name}/{project_name}/_apis/wit/workitems/{story_id}?api-version=7.1-preview.3&quot;<br>    # Request headers with authentication<br><br>    headers = {<br>        &quot;Authorization&quot;: f&quot;Basic {token}&quot;,<br>        &quot;Content-Type&quot;: &quot;application/json&quot;<br>    }<br><br>    try:<br>        # Make GET request to fetch story details<br>        response = requests.get(api_url, headers=headers)<br><br>        # Check if request was successful (status code 200)<br>        if response.status_code == 
200:<br>            # Parse JSON response<br>            story_details = response.json()<br><br>            # Extract specific details (for example, title and description)<br>            title = story_details[&quot;fields&quot;][&quot;System.Title&quot;]<br>            description = story_details[&quot;fields&quot;][&quot;System.Description&quot;]<br><br>            output_json = {<br>                &quot;story_title&quot;: title,<br>                &quot;story_desc&quot;: description,<br>                &quot;story_id&quot;: story_id<br>            }<br><br>            return output_json<br>        else:<br>            print(&quot;Failed to fetch story details. Status code:&quot;, response.status_code)<br>            return None<br><br>    except Exception as e:<br>        print(&quot;An error occurred:&quot;, e)<br>        return None<br>    <br>print(get_story_from_url(&quot;https://dev.azure.com/your_org/your_project/_workitems/edit/your_story_id/&quot;))</pre><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a24fed55f522" width="1" height="1" alt=""><hr><p><a href="https://blog.devgenius.io/extracting-user-story-details-from-azure-devops-python-a24fed55f522">Extracting User Story Details from Azure DevOps Python</a> was originally published in <a href="https://blog.devgenius.io">Dev Genius</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A Beginner’s Guide to Web Scraping in Python]]></title>
            <link>https://blog.devgenius.io/a-beginners-guide-to-web-scraping-in-python-8ed3d884ac9e?source=rss-4c5fcfd4c5c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/8ed3d884ac9e</guid>
            <category><![CDATA[web-scraping]]></category>
            <category><![CDATA[python]]></category>
            <dc:creator><![CDATA[paritosh raval]]></dc:creator>
            <pubDate>Fri, 29 Mar 2024 07:28:54 GMT</pubDate>
            <atom:updated>2024-03-29T16:58:54.958Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*APmDU6nsheJH1baBh-5dcg.jpeg" /></figure><h4>Introduction:</h4><p>In today’s digital age, the internet is a vast repository of information waiting to be explored. However, manually extracting data from websites can be a tedious and time-consuming task. This is where web scraping comes into play. In this guide, we’ll explore the basics of web scraping using Python, a powerful programming language known for its simplicity and versatility.</p><h4>What is Web Scraping?</h4><p>Web scraping is the process of extracting data from websites automatically. It involves fetching the HTML content of a web page, parsing it, and extracting the desired information. This information can then be used for various purposes such as data analysis, content aggregation, price monitoring, and more.</p><h4>Getting Started with Python Web Scraping:</h4><p>To get started with web scraping in Python, we’ll need to install a few libraries. The two main libraries we’ll be using are Requests and BeautifulSoup.</p><pre>import requests<br>from bs4 import BeautifulSoup</pre><h4>Fetching HTML Content from a URL:</h4><p>The first step in web scraping is fetching the HTML content of a web page. We can use the Requests library to make HTTP requests to the URL and retrieve the HTML content.</p><pre>def read_data_from_url(url):<br>    try:<br>        response = requests.get(url)<br>        if response.status_code == 200:<br>            return response.text<br>        else:<br>            print(f&quot;Failed to retrieve data from {url}. Status code: {response.status_code}&quot;)<br>            return None<br>    except Exception as e:<br>        print(f&quot;An error occurred while retrieving data from {url}: {e}&quot;)<br>        return None</pre><h4>Parsing HTML with BeautifulSoup:</h4><p>Once we have fetched the HTML content, the next step is to parse it and extract the desired information. 
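To give a feel for what parsing involves under the hood, here is a sketch that collects every link target from an HTML snippet using only the standard library's html.parser (the sample HTML is invented for illustration; BeautifulSoup offers the same capability with much less code via find_all('a')):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href value of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Invented sample HTML for demonstration
html = ('<p>See <a href="https://wiki.python.org/moin/BeginnersGuide">the guide</a>'
        ' and <a href="https://docs.python.org">the docs</a>.</p>')

collector = LinkCollector()
collector.feed(html)
print(collector.links)
```
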
BeautifulSoup is a Python library that makes it easy to navigate and search through HTML documents.</p><pre># Example usage:<br>url = &quot;https://wiki.python.org/moin/BeginnersGuide&quot;<br>data = read_data_from_url(url)<br>if data:<br>    soup = BeautifulSoup(data, &#39;html.parser&#39;)<br>    # Extract text from the HTML<br>    text = soup.get_text()<br>    print(text)<br>else:<br>    print(&quot;Failed to retrieve data from the URL.&quot;)</pre><h4>Conclusion:</h4><p>Web scraping is a powerful technique for extracting data from websites automatically. With Python and libraries like Requests and BeautifulSoup, you can quickly and easily build web scraping applications to gather information from the web. However, it’s essential to use web scraping responsibly and ethically, respecting the terms of service of the websites you scrape. Happy scraping!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8ed3d884ac9e" width="1" height="1" alt=""><hr><p><a href="https://blog.devgenius.io/a-beginners-guide-to-web-scraping-in-python-8ed3d884ac9e">A Beginner’s Guide to Web Scraping in Python</a> was originally published in <a href="https://blog.devgenius.io">Dev Genius</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bridging Conversations and Data: Chat Interface with Real-Time Database Interaction]]></title>
            <link>https://blog.devgenius.io/bridging-conversations-and-data-chat-interface-with-real-time-database-interaction-c85c73b8a0d7?source=rss-4c5fcfd4c5c4------2</link>
            <guid isPermaLink="false">https://medium.com/p/c85c73b8a0d7</guid>
            <category><![CDATA[sql]]></category>
            <category><![CDATA[openai-chatgpt]]></category>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[chatbots]]></category>
            <category><![CDATA[langchain]]></category>
            <dc:creator><![CDATA[paritosh raval]]></dc:creator>
            <pubDate>Wed, 13 Mar 2024 06:27:33 GMT</pubDate>
            <atom:updated>2024-03-16T17:47:53.709Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9QV-KuFkQ0r_qTVAZjXjtw.jpeg" /></figure><p>In this blog post, we’ll explore how to create a dynamic chat interface that not only responds intelligently but also fetches real-time data from a SQL database. By combining the prowess of OpenAI’s language model and the versatility of SQL databases, we’ll empower our chat interface to provide contextually relevant information on the fly.</p><p>Picture this: you’re chatting with a virtual assistant, seeking answers to your queries, and suddenly, it responds not just with pre-programmed responses, but with real-time data fetched from a SQL database. Intrigued? Let’s delve deeper into the fascinating realm of creating a dynamic chat interface that seamlessly integrates with SQL databases.</p><h3>Preparing the Environment</h3><p>Before diving into the implementation details, let’s ensure we have all the necessary components set up:</p><ol><li><strong>OpenAI API Key:</strong> Obtain your OpenAI API key and set it as an environment variable using os.environ[&#39;OPENAI_API_KEY&#39;].</li><li><strong>Database Connection</strong>: Establish a connection to your SQL Server database using PyODBC. Construct the connection string with your database credentials, server details, and ODBC driver information.</li></ol><pre>db_connection_string = &#39;mssql+pyodbc://DBUsername:DBPassword@ServerName:Port/DBName?driver=ODBC+Driver+17+for+SQL+Server&#39;</pre><h3>Constructing the Bridge: OpenAI and SQL Database Integration</h3><p>Now, let’s bring our chat interface to life by integrating OpenAI’s language model with our SQL database. This synergy allows us to not only respond intelligently but also fetch relevant data on-the-fly. 
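</p><p>One practical note on the connection string above: if the password contains characters that are special in URLs (such as @, /, or !), the URI will parse incorrectly unless the password is percent-encoded first. Here is a minimal standard-library sketch, with purely illustrative placeholder credentials:</p><pre>from urllib.parse import quote_plus<br><br># Hypothetical password containing URL-special characters<br>password = quote_plus(&#39;p@ss/word!&#39;)<br>print(password)  # p%40ss%2Fword%21<br><br># Splice the encoded password into the SQLAlchemy URI<br>db_connection_string = f&#39;mssql+pyodbc://DBUsername:{password}@ServerName:Port/DBName?driver=ODBC+Driver+17+for+SQL+Server&#39;</pre><p>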
We’ll initialize the OpenAI language model, establish a connection to our SQL database, and create a chain that seamlessly combines the two.</p><pre>import os<br><br>from langchain_community.utilities import SQLDatabase<br>from langchain_community.llms import OpenAI<br>from langchain_experimental.sql import SQLDatabaseChain<br><br># Set OpenAI API key<br>os.environ[&#39;OPENAI_API_KEY&#39;] = &#39;your_openai_api_key&#39;<br><br># Initialize OpenAI language model<br>llm = OpenAI(temperature=0.2, verbose=True)<br><br># Initialize SQL Database<br>db = SQLDatabase.from_uri(db_connection_string)<br><br># Create a chain combining the SQL database and OpenAI model<br>db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)</pre><h3>Engaging with the Chat Interface</h3><p>With our bridge constructed, it’s time to engage in conversation! Imagine typing a query into the chat interface, hitting enter, and watching the virtual assistant respond with contextually relevant information fetched directly from the SQL database. We’ll provide a simple yet effective interface for users to interact with, enhancing their experience and providing valuable insights in real time.</p><pre>def chat_with_db():<br>    print(&quot;Type &#39;Q&#39; to quit&quot;)<br><br>    while True:<br>        prompt = input(&quot;Enter your prompt: &quot;)<br><br>        # Compare in lowercase so both &#39;q&#39; and &#39;Q&#39; quit<br>        if prompt.lower() == &#39;q&#39;:<br>            print(&#39;Quitting...&#39;)<br>            break<br>        else:<br>            try:<br>                # Retrieve response from the database chain<br>                response = db_chain.run(prompt)<br>                print(response)<br>            except Exception as e:<br>                print(e)<br><br># Let&#39;s chat!<br>chat_with_db()</pre><p>As we conclude our journey, we reflect on the transformative impact of bridging conversations and data. 
By integrating conversational AI with real-time database interaction, we empower our applications to deliver richer user experiences. Whether it’s providing instant product information, retrieving personalized recommendations, or answering complex queries, the possibilities are endless. So, armed with this newfound knowledge, go forth and revolutionize your applications with the power of conversational data retrieval!</p><p>Below is the code consolidated into a single file:</p><pre>import os<br><br>import pyodbc  # Must be installed for the mssql+pyodbc dialect<br>from langchain_community.utilities import SQLDatabase<br>from langchain_community.llms import OpenAI<br>from langchain_experimental.sql import SQLDatabaseChain<br><br># Set OpenAI API key<br>os.environ[&#39;OPENAI_API_KEY&#39;] = &#39;your_openai_api_key&#39;<br><br># Define SQL database connection string<br>db_connection_string = &#39;mssql+pyodbc://DBUsername:DBPassword@ServerName:Port/DBName?driver=ODBC+Driver+17+for+SQL+Server&#39;<br><br># Initialize SQL database<br>db = SQLDatabase.from_uri(db_connection_string)<br><br># Initialize OpenAI language model<br>llm = OpenAI(temperature=0.2, verbose=True)<br><br># Create SQL database chain<br>db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)<br><br># Function to chat with SQL database<br>def chat_with_db():<br>    print(&quot;Type &#39;Q&#39; to quit&quot;)<br><br>    while True:<br>        prompt = input(&quot;Enter your prompt: &quot;)<br><br>        # Compare in lowercase so both &#39;q&#39; and &#39;Q&#39; quit<br>        if prompt.lower() == &#39;q&#39;:<br>            print(&#39;Quitting...&#39;)<br>            break<br>        else:<br>            try:<br>                print(db_chain.run(prompt))<br>            except Exception as e:<br>                print(e)<br><br># Main function<br>if __name__ == &quot;__main__&quot;:<br>    chat_with_db()</pre><p>So why wait? 
Dive into the world of bridging conversations and data, and unlock the potential of your applications today!</p><hr><p><a href="https://blog.devgenius.io/bridging-conversations-and-data-chat-interface-with-real-time-database-interaction-c85c73b8a0d7">Bridging Conversations and Data: Chat Interface with Real-Time Database Interaction</a> was originally published in <a href="https://blog.devgenius.io">Dev Genius</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>