<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by CodeCraft on Medium]]></title>
        <description><![CDATA[Stories by CodeCraft on Medium]]></description>
        <link>https://medium.com/@codecraft?source=rss-4079a22d4d20------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/2*E_Znc5hBy-MFRDMBF9Jp6A.png</url>
            <title>Stories by CodeCraft on Medium</title>
            <link>https://medium.com/@codecraft?source=rss-4079a22d4d20------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 05:25:35 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@codecraft/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Remembering Bram Moolenaar: A Tribute to the Creator of Vim Text Editor]]></title>
            <link>https://codecraft.medium.com/remembering-bram-moolenaar-a-tribute-to-the-creator-of-vim-text-editor-a1926510268?source=rss-4079a22d4d20------2</link>
            <guid isPermaLink="false">https://medium.com/p/a1926510268</guid>
            <category><![CDATA[vim]]></category>
            <category><![CDATA[text-editor]]></category>
            <category><![CDATA[bram-moolenar]]></category>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[CodeCraft]]></dc:creator>
            <pubDate>Thu, 10 Aug 2023 04:49:15 GMT</pubDate>
            <atom:updated>2023-08-10T04:49:48.579Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*YPhGyHHpF4OKtgRyEhnHKQ.png" /></figure><p>In the realm of open source software development, certain figures shine brightly, leaving an indelible mark on the tech landscape. Bram Moolenaar, the visionary mind behind the Vim text editor, was one such luminary. It is with heavy hearts that we pay tribute to his life and achievements as we bid farewell to a true legend who left an everlasting imprint on the world of coding.</p><p>Bram Moolenaar’s passing on August 3, 2023, marked the end of an era in the open source arena. As the creator of Vim, a text editor that transcended its functional purpose, Bram wielded an influence that extended far beyond lines of code. Vim, a masterpiece of efficient text manipulation, stands as a testament to his dedication, innovation, and unwavering commitment to the developer community.</p><h3>Creating Vim: A Tool Beyond Measure</h3><p>Vim, short for “Vi IMproved,” emerged from Bram’s ingenious mind as a highly configurable text editor that redefined the way developers interact with text. Its inclusion as “vi” in most UNIX systems and Apple’s OS X exemplified its fundamental significance. However, Vim was no ordinary editor; it was a tool meticulously crafted to elevate the art of creating and modifying text to unparalleled levels of efficiency.</p><h3>A Heartfelt Farewell to Bram Moolenaar</h3><p>The news of Bram Moolenaar’s passing left the open source community in mourning. In a poignant message from his family, it was revealed that Bram had been battling a medical condition that rapidly progressed in the weeks leading up to his passing. His dedication to Vim and the community it fostered was evident, and his family expressed pride in the legacy he had built.</p><p>As the tech world mourns this loss, Bram’s family is making arrangements for his funeral service in the Netherlands. While the details are yet to be finalized, the service will be held in Dutch. The family has opened the funeral to the public, and in the email announcing his death they invited those who wish to pay their respects to be a part of this solemn occasion.</p><h3>A Legacy Continues</h3><p>Bram Moolenaar’s legacy lives on through Vim and the community he nurtured. His contributions have indelibly shaped the way developers interact with code, leaving behind a legacy that will inspire countless generations to come. As we celebrate his life, let us also celebrate the enduring impact he had on the world of technology.</p><p>In closing, we at CodeCraft will always remember Bram not only as a brilliant developer but also as a beacon of innovation, dedication, and the open source spirit that continues to drive the world forward.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a1926510268" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Economic Potential of Generative AI]]></title>
            <link>https://codecraft.medium.com/economic-potential-of-generative-ai-f6e8b01b37ad?source=rss-4079a22d4d20------2</link>
            <guid isPermaLink="false">https://medium.com/p/f6e8b01b37ad</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[mckinsey]]></category>
            <category><![CDATA[generative-ai-tools]]></category>
            <dc:creator><![CDATA[CodeCraft]]></dc:creator>
            <pubDate>Wed, 02 Aug 2023 10:44:52 GMT</pubDate>
            <atom:updated>2023-08-02T10:44:52.430Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sm2NgRBcFfd3L2nje1-smg.jpeg" /></figure><p>The rise of Generative AI is driving breakthroughs in how businesses and industries have traditionally operated. It is the mastermind behind the surging success of large language models (LLMs) like ChatGPT, Google Bard, GitHub Copilot, and others in the making.</p><p>These generative AI models can change the face of businesses, organizations, and the world. <a href="https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction">According to a recent report by McKinsey</a>, the introduction of Generative AI in business operations has the potential to add trillions to the current world economy.</p><p>The advent of Generative AI could add around $2.6–4.4 trillion annually across different industries; for context, the entire GDP of the United Kingdom was $3.1 trillion in 2021. Let’s delve into the discussion and learn more about Generative AI’s impact on the world’s economy in the coming years.</p><h3>How Can Generative AI Change Operations Across Different Industries?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*cOn2ykV60u7BgGHr.jpg" /></figure><p>Generative AI, the next step in the evolution of AI, has made industries and businesses rush to adapt and transform their operations worldwide. To analyze how Generative AI can change productivity across industries, let us look at several scenarios:</p><h3>Impact of Generative AI on business operations</h3><p>Generative AI can automate business operations and processes in 63 identified use cases. A use case refers to applying AI to overcome a specific business hurdle and produce one or more outcomes. 
For example, Generative AI can be used by marketing companies to generate creative content for personalized emails, which can significantly cut expenses and increase revenue. As per the report, 63 use cases across 16 different business functions from several industries can generate $2.6–4.4 trillion in economic benefits annually.</p><h3>Impact of Generative AI on business processes</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*pnXQbKr6MDxMsbnI.jpg" /></figure><p>Considering the impact of Generative AI on the work activities that business occupations require, the report revealed that it could perform 2,100 detailed work activities spread across 850 occupations, including communicating operational plans to the team. Keeping this in mind, the economic benefits of Generative AI, when applied across knowledge workers’ activities, will amount to $6.1–7.2 trillion.</p><p>Customer operations, marketing and sales, software engineering, and R&amp;D are the four major business areas that will generate 75% of generative AI’s use-case value when rolled out worldwide. Besides, it can significantly impact the banking, retail, consumer packaged goods, and pharmaceutical industries. 
Generative AI deployment can increase annual revenues by 1.2–2.0% ($400–660 billion) for retail and consumer packaged goods companies, 2.8–4.7% ($200–340 billion) for the banking industry, and 2.4–4.5% ($60–100 billion) across the medical-products and pharmaceutical industries.</p><p>Here is a glimpse of how Generative AI will transform the current landscape of different industries:</p><ul><li><strong>Transformation in Customer Operations</strong></li></ul><ol><li>Human-like chatbots will resolve complex customer queries.</li><li>Agents will use AI-developed call scripts and real-time assistance to provide accurate information to customers.</li><li>AI will provide a quick conversation summary to create records of managing customer queries.</li></ol><p><strong>Result </strong>— Productivity will increase by 30–45% at current business costs.</p><ul><li><strong>Transformation in Marketing and Sales</strong></li></ul><ol><li>Gather information from unstructured data resources to create effective strategies.</li><li>Create customized campaigns tailored to each customer’s demographic, sentiment, and location.</li><li>Offer comprehensive information, comparisons, and dynamic recommendations to customers.</li></ol><p><strong>Result </strong>— Productivity will increase by 5–15% at current market spending.</p><ul><li><strong>Transformation in Software Engineering</strong></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*7SONYFINU1Fx7IrM.jpg" /></figure><ol><li>Analyze, clean, and label large volumes of data.</li><li>Develop multiple IT architecture designs and iterate the potential designs.</li><li>Reduce development time by providing code drafts and rapidly finding prompts.</li><li>Deploy algorithms to perform functional and performance testing to ensure quality.</li><li>Provide insights on system logs and performance issues, and suggest fixes.</li></ol><p><strong>Result </strong>— Productivity will increase by 20–45% at current global 
expenditures.</p><ul><li><strong>Transformation in R&amp;D</strong></li></ul><ol><li>Improve market reporting, creativity, and solution drafting.</li><li>Generate prompt-based drafts and designs.</li><li>Enhance and create rich and immersive virtual simulations.</li><li>Implement optimized test cases for efficient testing.</li></ol><p><strong>Result </strong>— Productivity will increase by 10–15% at current R&amp;D costs.</p><h3>Impact of AI on the Growth of the Economy And Society</h3><p>Generative AI will accelerate the technical automation of individual processes and enhance economic growth. MGI (the McKinsey Global Institute) estimated that technology performance for natural language understanding will match median human performance by 2027, but with Generative AI, 2023 is the year of transformation. Here is how Generative AI will change the economy and society:</p><ul><li>The share of work hours that could be automated by integrating Generative AI into existing technologies has risen to 60–70%, up from a previous estimate of 50%.</li><li>Developed countries will adopt Generative AI earlier than lower-wage developing countries such as China, India, and Mexico, because higher wages make automation economically feasible sooner.</li><li>As AI’s ability to excel in natural language processing accelerates, it is estimated that 50% of today’s work activities will be automated between 2030 and 2060.</li><li>The midpoint scenario, in which 50% of work is automated, has been accelerated by almost a decade to 2045, compared with 2053 in the 2016 estimates.</li><li>With advancing Generative AI, the potential to automate activities involving the management and development of talent has increased to 43%, from 16% in 2017.</li><li>Advances in technical capabilities put at risk the work of higher-wage knowledge workers, who were earlier thought to be immune from automation.</li></ul><p>Global economic growth averaged 2.9% in the past decade, slower than in the previous two decades. 
However, automation can help reverse this slowdown and boost economic growth through annual productivity gains. Depending on the rate of AI adoption, it can add 0.2 to 3.3 percentage points to annual productivity growth between 2030 and 2040. Moreover, the best part is that Generative AI can boost labor productivity; to capture these benefits, however, workers will need to transition to new activities and change how they work.</p><h3>The Bottom Line</h3><p>With Generative AI, we are entering a new world that was previously possible only in sci-fi movies and books. It can transform our lives and fuel economic growth. Amidst the new and growing possibilities with Generative AI, it is essential to mitigate the risks. Business leaders, policymakers, and individuals play a fundamental role in devising and implementing policies to maximize benefits.</p><p>Generative AI has bright potential to revolutionize the world. However, it is essential to work together to utilize AI ethically, creating significant value while limiting its potential to disrupt lives and livelihoods.</p><p>If you are interested in exploring Generative AI and have any relevant projects or collaborations in mind, we would be pleased to hear from you. Please feel free to <a href="https://www.codecrafttech.com/"><strong>contact us</strong></a><strong> </strong>to discuss any ideas, questions, or potential opportunities. Once again, thank you for your readership, and we look forward to connecting with you!</p><p>To read my article on <a href="https://codecraft.medium.com/bloom-ai-model-the-stepping-stone-for-next-level-intelligence-911d51676bf7">BLOOM AI, click here</a>!</p><h3>About the Author:</h3><p><a href="https://www.linkedin.com/in/drkirankumarc/"><strong>Dr. Kiran Kumar</strong></a><strong> </strong>is an accomplished AI researcher, innovator, and senior data scientist. With a Ph.D. in Supply Chain Analytics, he possesses a profound understanding of data analysis and machine-learning techniques. 
His extensive research contributions are showcased through numerous publications in esteemed international journals. Driven by a passion for pioneering advancements, he holds patents for groundbreaking innovations in the field. Currently, he is focused on developing cutting-edge products by leveraging his expertise in Prompt engineering and Generative AI.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f6e8b01b37ad" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Unlocking User Engagement: Immersive Metaverse Design Experiences Demystified]]></title>
            <link>https://codecraft.medium.com/unlocking-user-engagement-immersive-metaverse-design-experiences-demystified-3c59ebdbcda7?source=rss-4079a22d4d20------2</link>
            <guid isPermaLink="false">https://medium.com/p/3c59ebdbcda7</guid>
            <category><![CDATA[ui-design]]></category>
            <category><![CDATA[metaverse]]></category>
            <category><![CDATA[user-experience]]></category>
            <category><![CDATA[immersive]]></category>
            <category><![CDATA[ux]]></category>
            <dc:creator><![CDATA[CodeCraft]]></dc:creator>
            <pubDate>Thu, 27 Jul 2023 12:22:06 GMT</pubDate>
            <atom:updated>2023-07-27T12:22:06.149Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mXJ5kdTFv-bMpTMk7Sh-Ag.png" /><figcaption>Designing Immersive Experiences for the Metaverse</figcaption></figure><h3>Introduction</h3><blockquote><strong><em>Picture this: a virtual realm where reality and imagination intertwine, a space that blends augmented reality, virtual reality, and cutting-edge technologies.</em></strong></blockquote><p>Welcome to the metaverse, a digital playground that has captured the imagination of tech enthusiasts, storytellers and industry leaders alike. It’s no wonder that designing immersive experiences for the metaverse has become the hottest topic in town!</p><p>In this article, we’ll dive deep into the metaverse and explore how crafting captivating experiences within this virtual realm can revolutionize industries and captivate audiences. So fasten your virtual seatbelt and get ready to unlock the secrets of designing immersive experiences for the metaverse!</p><h3>Understanding the Metaverse: What is the Metaverse?</h3><p>The metaverse is not just another buzzword thrown around by tech geeks at a hipster café. It’s a mind-bending concept that takes us beyond the confines of our physical reality and plunges us into a digital universe where the possibilities are endless. Think of it as a cosmic mashup of augmented reality, virtual reality, and other emerging technologies.</p><p>Within the metaverse, you can transcend the mundane and immerse yourself in richly detailed virtual worlds, connect with people from across the globe, and even defy the laws of physics (take that, gravity!). It’s like stepping into the pages of a science fiction novel, only to realize that the adventure is real.</p><blockquote><strong><em>But hold on, let’s not get too carried away. The metaverse is not just an ethereal concept. 
It’s being built as we speak.</em></strong></blockquote><p>The term was coined by Neal Stephenson in his novel Snow Crash, but the metaverse no longer exists only in computer fantasy; it is becoming a fast-moving reality composed of stacked-up immersive experiences. Tech wizards are concocting mind-boggling platforms and applications that form the foundation of this digital realm. From virtual reality wonderlands to social VR spaces, the metaverse is taking shape, pixel by pixel.</p><h3>The Importance of User Experience (UX) Design in the Metaverse</h3><p>In the metaverse, design is the magic potion that transforms a mere collection of pixels into an enchanting experience. It’s what separates the virtual wheat from the chaff, the mediocre from the mind-blowing. In other words, design is the secret sauce that makes the metaverse sizzle.</p><p>User experience (UX) and user interface (UI) design are the dynamic duo that ensures your journey through the metaverse is smooth, intuitive, and devoid of any virtual potholes. Navigating virtual environments should be as easy as a Sunday morning stroll, and that’s where thoughtful design steps in. It’s all about creating interfaces that beckon users, guiding them effortlessly through the metaverse maze.</p><h3>How To Design Immersive Experiences for the Metaverse?</h3><p>Designing <a href="https://metropolismag.com/viewpoints/metaverse-design-guide/">immersive experiences for the metaverse</a> is like being a master illusionist in a digital circus. You have the power to transport users to fantastical realms, mesmerize their senses, and make them forget they’re even wearing virtual goggles (well, almost).</p><p>First up, we have the dynamic duo of user experience (UX) and user interface (UI) design. These superheroes of the metaverse ensure that users can navigate virtual worlds with ease. 
Intuitive interfaces, seamless interactions,<a href="https://medium.com/geekculture/voice-user-interface-bots-ca0ce1f40e95"> voice user interfaces (VUI)</a> and a touch of magic make immersive metaverse meetings user-friendly.</p><p>But that’s just the tip of the iceberg &amp; let’s not stop there! What are the multidisciplinary perspectives for an immersive metaverse experience?</p><ol><li><strong>Spatial Design:</strong></li></ol><p>Spatial design is where the metaverse truly comes to life. It’s like being an architect of the digital realm, shaping virtual environments that leave users breathless. Scale, proportion, lighting, and sound design are your tools of the trade when designing immersive experiences for the metaverse. With a few clicks, you can turn a dull gray room into a vibrant tropical paradise or transport users to a futuristic cityscape where neon lights flicker and hoverboards zoom past. It’s all about creating immersive metaverse experiences that make users question the very nature of reality.</p><p><strong>2. Storytelling and narratives:</strong></p><p>Ah, storytelling! The beating heart of the metaverse design. Narratives add depth and purpose to the metaverse. Engaging storylines, captivating characters, and interactive experiences transform the metaverse into a realm where every user becomes the star of their own blockbuster adventure.</p><p><strong>3. Intuitive interaction design:</strong></p><p>Last but not least, we have interaction mechanics and <a href="https://codecraft.medium.com/6-point-design-guide-for-generation-z-fc5e4fe48b82">intuitive design</a> to consider when designing immersive experiences for the metaverse. The metaverse is not just a one-way street where users passively consume content. It’s a bustling marketplace of ideas and experiences. Interaction mechanics, from gesture-based controls to haptic feedback, invite users to engage and shape their digital surroundings. 
Scale, proportion, lighting, and sound design — these ingredients work together to create immersive metaverse environments that transport you to realms beyond your wildest dreams.</p><p>Intuitive design ensures that even the most technologically-challenged individuals can navigate the metaverse like a digital guru.</p><h3>Potential Challenges of Designing Immersive Experiences for the Metaverse:</h3><p>Designing for the metaverse isn’t all rainbows and unicorns. Like any digital frontier, it comes with its fair share of challenges.</p><ul><li><strong>Technical and hardware limitations:</strong></li></ul><p>Hardware limitations can make your head spin faster than a rollercoaster ride when designing user experiences for the metaverse. From clunky headsets to limited processing power, you’ll need to work your design magic while keeping in mind the technical constraints. But fear not: as technology evolves, these challenges of designing experiences for the metaverse will soon be a thing of the past.</p><ul><li><strong>Ethical Constraints:</strong></li></ul><p>We can’t forget about the ethical considerations of designing for the metaverse. As users dive deeper into digital realms, questions of privacy, data security, and digital identity arise. It’s crucial to design with integrity and ensure that users’ rights and well-being are protected. The metaverse should be an inclusive space that fosters diversity and respects individual boundaries.</p><p>Designing for the metaverse is an exhilarating rollercoaster ride with loops, twists, and breathtaking views. 
I hope this article has given you tips and tricks for designing for the metaverse like a master, crafting immersive experiences that leave users awestruck!</p><p>I’m <a href="https://www.linkedin.com/in/siri-kaliparambil-6423b1160/">Siri Kaliparambil</a>, Senior Content Designer at <a href="https://www.codecrafttech.com/">CodeCraft Technologies</a>, and an established author who has penned numerous articles on technology and user experience design over the course of my career. Through my writing, I skillfully weave the fascinating threads of these diverse subjects into captivating narratives that resonate with readers. I am passionate about making complex concepts accessible by demystifying jargon and presenting information with clarity &amp; creativity. My narratives are shaped not only by my professional experience but also by my unique personal anecdotes. In my spare time, you can find me reading a book, watching war documentaries or headbanging to some heavy metal!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3c59ebdbcda7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Agile Practice: Scrum Project Management In Software Development]]></title>
            <link>https://codecraft.medium.com/the-agile-practice-scrum-project-management-in-software-development-bb62908b8d99?source=rss-4079a22d4d20------2</link>
            <guid isPermaLink="false">https://medium.com/p/bb62908b8d99</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[scrum]]></category>
            <category><![CDATA[agile-methodology]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[agile]]></category>
            <dc:creator><![CDATA[CodeCraft]]></dc:creator>
            <pubDate>Fri, 21 Jul 2023 10:33:22 GMT</pubDate>
            <atom:updated>2023-07-21T10:33:22.042Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3uBs8mEKSZvahRy5pKUyxQ.jpeg" /><figcaption>Agile methodology in the software development lifecycle</figcaption></figure><p>Born in the 1990s, Agile and Scrum are popular methodologies for engagement and software development that have become quite the buzzwords in today’s IT world. What makes these methodologies so different from traditional engagement models is their iterative nature, which allows teams to collaborate better for faster and more efficient product delivery. Not only did their introduction cut down tons of unnecessary documentation, but it was also pivotal in introducing flexibility into the software development process, allowing developers to incorporate the latest updates into the product they were building.</p><h3>An Introduction to Agile &amp; Scrum</h3><p>Now that you have an understanding of how and why these methodologies have become cardinal in today’s software development arena, let’s get down to understanding what they really are and how we can harness their advantages to build robust software.</p><p><strong>Let’s get started!</strong></p><h3>What is Agile?</h3><p>As established above, Agile is an iterative process that allows us to break down a vast project into smaller sub-tasks. Simply put, these sub-tasks are treated as smaller projects of their own: developers address a single sprint and introspect on the outcome before moving on to the next. These smaller but consumable increments allow developers to break down requirements and plans in such a way that there is a natural mechanism to respond to change, as results are evaluated continuously.</p><h3>What is Scrum?</h3><p>Put simply, Scrum is a framework for putting the Agile process into practice. 
A team that is harnessing the benefits of Agile uses the Scrum framework to empower its members to learn from each other and from experience while continuously self-organizing to work on a solution. Using the Scrum framework to streamline software development allows teams to reflect on their wins and losses while consciously looking for areas of improvement.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/982/0*u-aer3OZ0a9V641W" /></figure><h3>Why use Agile &amp; Scrum for software development?</h3><p>While the breakdown of the Agile and Scrum approaches above must have given you a peek into the many benefits of this process, let us delve deeper into their principles and how they address business requirements to understand why you need them for software development.</p><h3>Principles of Scrum</h3><ul><li><strong>Welcomes Change-</strong> Scrum allows you to adapt to change even late in the development process, as it is an iterative process.</li><li><strong>Efficient Development-</strong> Using the Scrum methodology, teams can put out results constantly with defined milestones/sprints.</li><li><strong>Inspect &amp; Adapt-</strong> Scrum allows teams to constantly reflect on their progress while introspecting with all hands on deck to pave the best way forward.</li><li><strong>Self-organizing Teams-</strong> Scrum teams are also self-organizing, as the process requires constant coordination and collaboration amongst all stakeholders.</li><li><strong>Effective Communication-</strong> This is the foundation on which Scrum stands, as regularly planned scrum calls are a part of the process.</li><li><strong>Flat team structure</strong>- There is no superior-subordinate hierarchy in a Scrum framework, which allows everyone involved to communicate without inhibitions while being more disciplined and accountable.</li><li><strong>The Mantra of Unity- </strong>There is no man of the match when it comes to following the Agile framework, and the team 
works united towards the goal, as victories are shared</li></ul><h3>The Doctrine For Building An Effective Scrum Team</h3><p>In the last segment of this introduction to Agile and Scrum in software development, let us look at how to build a Scrum team that functions like a well-oiled machine: who makes up a Scrum team, and the philosophies underlying how it works.</p><h3>Agile Scrum Roles &amp; Their Responsibilities Explained</h3><ol><li><strong>Product Owner-</strong> This is usually the product manager or the product sponsor, who decides the features and functionalities of the product, their release dates, and the prioritization of tasks in the workflow.</li><li><strong>Scrum Master-</strong> The Scrum Master is responsible for facilitating the Agile workflow and for instilling Scrum principles and values into the process. He/she is also accountable for keeping all stakeholders involved and for removing any impediments or internal politics that may arise.</li><li><strong>The Core Project Team-</strong> The core project team consists of 5–10 individuals who are responsible for executing the Agile workflow. They form a cross-functional, self-organizing team consisting of programmers and developers, QA analysts, UI/UX designers, etc.</li></ol><h3>The Tenets &amp; Philosophies Of An Effective Scrum Team</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/723/0*LU9lVRAmvRkneV-5" /></figure><p><strong>A typical Scrum workflow (Source: Google Images)</strong></p><p>A well-functioning and effective Scrum team is built on certain philosophies that set it apart from traditional software development teams. Let us have a look at them here:</p><ol><li><strong>Understand your customer’s pain points</strong></li></ol><p>The chief objective of an effective Scrum team is to understand the customer’s pain points. People don’t want a product without any utility! 
They want something that can add value to their lives. A good Scrum team focuses on building user-centric solutions by understanding its customers’ requirements and pain points and tailoring the product around them.</p><p><strong>2. Building a mindset of team-unity</strong></p><p>The Agile workflow follows the thought process of sharing both wins and losses. This encourages a mindset of standing united with the team instead of taking an individualistic approach to the workflow. Moreover, the team is empowered to find the best solution to work with by having a collaborative approach through regular scrum meetings.</p><p><strong>3. Fall down seven times, get up eight!</strong></p><p>The Agile methodology encourages an environment where it is okay for an individual to fail and where solutions are found through team effort. Sometimes, this is vital for building a great product, as it allows the team to follow an iterative approach where they can circle back to square one and start over with the learnings from the failure.</p><p><strong>4. Eliminating the unnecessary</strong></p><p>The “all hands on deck” approach, which is a marker of a successful Agile workflow, is crucial in helping teams focus their efforts on the right track. Developers set short sprints so that they have focused goals, which prevents them from deviating.</p><p><strong>5. A Short Product Backlog</strong></p><p>By setting shorter sprints and well-defined goals, Agile and Scrum teams focus on deploying features regularly and efficiently. They know what matters most, as regular scrum meetings are held to discuss progress and the roadmap ahead, helping a great deal in streamlining the workflow.</p><h3>Conclusion</h3><p>The Agile methodology and the Scrum framework have caught on in today’s competitive development environment for a good reason: they allow freedom from the rigid practices of non-agile models. 
By harnessing the many benefits of agile and scrum, teams are able to build better user-centric digital experiences while spending less time on upfront planning. The concept of “being agile” empowers scrum teams to deliver products that are on par with the latest industry standards, which explains its ever-growing popularity.</p><p>We hope this exploration of agile and scrum practices has given you an insight into how embracing flexibility can bring about great results in software development. Tune into our blog for more about the latest in the tech world!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bb62908b8d99" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[BLOOM AI Model — The Stepping Stone For Next-Level Intelligence]]></title>
            <link>https://codecraft.medium.com/bloom-ai-model-the-stepping-stone-for-next-level-intelligence-911d51676bf7?source=rss-4079a22d4d20------2</link>
            <guid isPermaLink="false">https://medium.com/p/911d51676bf7</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[bloom]]></category>
            <dc:creator><![CDATA[CodeCraft]]></dc:creator>
            <pubDate>Tue, 18 Jul 2023 13:05:39 GMT</pubDate>
            <atom:updated>2023-07-19T08:18:26.741Z</atom:updated>
            <content:encoded><![CDATA[<h3><strong>BLOOM AI Model — The Stepping Stone For Next-Level Intelligence</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JMfyMVGpOIAaEXdv9LxKnQ.jpeg" /><figcaption>BLOOM AI: The largest, open-source multilingual language model</figcaption></figure><p>The emergence of artificial intelligence has created a breakthrough in the world. The BLOOM model is a versatile framework at the forefront of the technology, with advanced capabilities in natural language understanding, machine learning, and problem-solving.</p><p>The BLOOM model, short for “BigScience Large Open-science Open-access Multilingual Language Model,” is a large language model, <a href="https://medium.com/@kiran.phd.0102/generative-ai-is-it-mere-hype-or-a-portal-to-a-new-future-39701b24f5ae">breaking the frontiers in generative AI</a>, that blends the power of deep learning algorithms with notions inspired by the human brain.</p><p>Developed by more than 1000 AI researchers, BLOOM is one of the largest open-access AI models. It creates an opportunity for small businesses, start-ups, and individuals to leverage the potential of the model to create innovative applications.</p><blockquote><strong><em>Without further ado, let’s delve deep into the BLOOM AI model and see how it is a stepping stone for the next level of intelligence!</em></strong></blockquote><h3><strong><em>Everything you should know about BLOOM AI</em></strong></h3><p>BLOOM is an open-access multilingual language model with a staggering 176 billion parameters, trained on over 366 billion tokens. 
The initiatives of Hugging Face’s BigScience team, the Microsoft DeepSpeed team, the NVIDIA Megatron-LM team, the IDRIS/GENCI team, the PyTorch team, and BigScience’s engineering team were all involved in developing this landmark language model.</p><p>The project was founded by Hugging Face and the French NLP community and soon went on to attract participants from over 70 countries and experts from 250 institutions. Two eminent French agencies, CNRS and GENCI, provided a computing grant of a whopping three million euros for the research and training of the BLOOM model. The BLOOM model was trained on the Jean Zay supercomputer at IDRIS/CNRS, south of Paris, for over 117 days (11 March — 6 July 2022).</p><p>It is built on the Transformer architecture, which here comprises an input-embedding layer, 70 transformer blocks, and an output language-modeling layer. The architecture of the BLOOM model is very similar to GPT-3; however, BLOOM is trained on 46 natural languages and 13 programming languages.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*AHfYY-Nk9U2_bYp0" /></figure><h3><strong>What languages is BLOOM AI trained on?</strong></h3><p>BLOOM is a causal language model. It is trained as a next-token predictor: it predicts the succeeding token in a sentence based on the preceding tokens. This attribute enables BLOOM to connect different concepts in a sentence and to tackle arithmetic, translation, and programming problems. BLOOM’s architecture comprises 70 transformer blocks, each with a self-attention layer and a multilayer-perceptron (MLP) layer, with input and post-attention layer norms.</p><p>Text generation, summarization, translation, question answering, and code generation are a few of the capabilities that BLOOM possesses. 
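<p>To make the next-token-predictor idea concrete, here is a deliberately tiny sketch: a bigram model that picks the most frequent continuation it has seen. This is an illustration only, orders of magnitude simpler than what BLOOM learns with 176 billion parameters.</p>

```python
from collections import Counter, defaultdict

class BigramModel:
    """Toy next-token predictor: counts which token follows which."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, tokens):
        # Record every adjacent (previous token, next token) pair.
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, prev):
        # Return the most frequently observed continuation, if any.
        following = self.counts.get(prev)
        return following.most_common(1)[0][0] if following else None
```

<p>A real causal language model replaces these raw counts with a deep network conditioned on the whole preceding context, but the training objective is the same: predict the next token.</p>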
One of the major advantages of BLOOM is that its smaller released variants can run on a machine with 16 GB of RAM, without the necessity of a GPU.</p><h3>What are the differentiators between BLOOM AI and ChatGPT?</h3><p>Here are some differentiators that set BLOOM AI apart from other language models:</p><ul><li>Employed 384 graphics cards of 80 gigabytes each on the Jean Zay 28 PFLOPS supercomputer for training.</li><li>Utilizes 176 billion parameters.</li><li>Seventy layers with 112 attention heads for each layer.</li><li>Implements ALiBi positional embeddings and the GeLU activation function.</li><li>Open-source: anyone can use and access it.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*BfIIRqXBBDjooQJ5" /></figure><h3><strong>Understanding BLOOM AI’s Architecture</strong></h3><h3><strong><em>How does the BLOOM model Work?</em></strong></h3><p>The architecture of BLOOM is based on the causal-decoder transformer model, the standard architecture for developing LLMs above 100B parameters. However, researchers and developers introduced key variations in the standard model, aiming for BLOOM to outperform comparable language models.</p><p><strong>Here are some innovations that make BLOOM different:</strong></p><ul><li><strong>ALiBi Positional Embedding</strong></li></ul><p>In the standard architecture, positional information is added to the embedding layer. While building BLOOM, however, the developers implemented ALiBi (Attention with Linear Biases), which takes a different approach: it attenuates the attention scores based on the distance between the keys and queries. The main motive was to leverage ALiBi’s ability to extrapolate to longer sequences. However, to the researchers’ surprise, ALiBi also enhanced downstream performance and led to a smoother training process. 
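<p>The core of ALiBi is small enough to sketch directly. The snippet below is an illustration, not BLOOM’s actual implementation: it computes the head-specific slopes and the distance-proportional penalty added to attention scores, and it assumes a power-of-two number of heads for simplicity.</p>

```python
def alibi_slopes(n_heads: int) -> list[float]:
    # Head-specific slopes form a geometric sequence
    # 2^(-8/n), 2^(-16/n), ..., 2^(-8); this sketch assumes n is a power of two.
    assert n_heads & (n_heads - 1) == 0, "sketch assumes power-of-two heads"
    start = 2 ** (-8.0 / n_heads)
    return [start ** (i + 1) for i in range(n_heads)]

def alibi_bias(slope: float, seq_len: int) -> list[list[float]]:
    # Penalty added to attention scores: -slope * distance(query, key).
    # More distant keys are attenuated more; no position embeddings are needed.
    return [[-slope * (q - k) if k <= q else 0.0 for k in range(seq_len)]
            for q in range(seq_len)]
```

<p>Because the penalty depends only on relative distance, the same rule applies unchanged to sequences longer than any seen in training, which is what gives ALiBi its extrapolation ability.</p>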
It even outperformed both learned and rotary position embeddings.</p><ul><li><strong>Embedding LayerNorm</strong></li></ul><p>During preliminary experiments on a 104-billion-parameter model, the developing team experimented with an additional layer normalization right after the embedding layer, which significantly improved training stability. The BigScience team therefore decided to train BLOOM with this additional layer normalization to avoid training instabilities. Notably, the preliminary experiments were conducted in float16, while the final training was performed in bfloat16. This led to the conclusion that float16 was the cause of the training instabilities, and that in bfloat16 an embedding LayerNorm may not be needed.</p><ul><li><strong><em>BLOOM Training Process</em></strong></li></ul><p>The BLOOM model is trained on the ROOTS corpus, and the training process comprises different stages like data sourcing and processing. The ROOTS corpus consisted of 498 Hugging Face datasets that cover 46 natural languages and 13 programming languages.</p><p>The BLOOM model was trained with Megatron-DeepSpeed, a state-of-the-art framework for large-scale distributed training. This framework comprises two parts:</p><ol><li><strong>Megatron-LM — </strong>It provides the Transformer execution, tensor parallelism, and data-loading primitives.</li><li><strong>DeepSpeed — </strong>It provides the ZeRO optimizer, model pipelining, and general distributed-training components.</li></ol><p>This framework, formed by the fusion of Megatron-LM and DeepSpeed, offers efficient and effective training with 3D parallelism. It enables four essential and complementary approaches to distributed deep learning:</p><ol><li><strong>Data Parallelism</strong></li></ol><p>Data parallelism creates multiple replicas of the model and places each replica on a different device. Each device is fed a different slice of the data. 
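<p>A toy sketch of one data-parallel step, in plain Python with a one-parameter model y = weight * x. Real frameworks do the same averaging with an all-reduce across GPUs; the model, batch, and learning rate below are purely illustrative.</p>

```python
def local_gradient(weight: float, shard) -> float:
    # Gradient of mean squared error for the model y = weight * x
    # computed on one device's shard of the batch.
    return sum(2 * (weight * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(weight: float, batch, n_devices: int, lr: float = 0.01) -> float:
    shards = [batch[i::n_devices] for i in range(n_devices)]  # split the batch
    grads = [local_gradient(weight, s) for s in shards]       # one gradient per replica
    avg_grad = sum(grads) / n_devices                         # "all-reduce": average them
    return weight - lr * avg_grad                             # identical update on every replica
```

<p>Because every replica applies the same averaged gradient, all copies of the model stay in lockstep after each step.</p>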
The parallel processing ensures the synchronization of all the model replicas at the end of every training step.</p><p><strong>2. Tensor Parallelism</strong></p><p>Tensor parallelism focuses on partitioning individual layers of the model across multiple devices. Instead of having a whole activation or gradient stored on a single GPU, fragments of the tensor are stored on multiple GPUs, which enables horizontal, intra-layer model parallelism.</p><p><strong>3. Pipeline Parallelism</strong></p><p>The pipeline parallelism approach splits the model’s layers across different GPU systems so that each GPU system handles a fraction of the model, enabling vertical parallelism.</p><p><strong>4. ZeRO Optimizer -</strong></p><p>ZeRO, or Zero Redundancy Optimizer, ensures that each process holds only a fraction of the data (parameters, gradients, and optimizer states) necessary for its training steps. The developers used ZeRO stage 1, where only the optimizer states are sharded.</p><p>The BLOOM model was trained for 117 days and achieved a training throughput of about 150 TFLOPS per GPU, among the highest throughputs a language model has achieved on A100 80GB GPUs.</p><h3><strong><em>Advantages of the BLOOM AI model:</em></strong></h3><p>BLOOM offers many benefits, making it one of the most powerful tools for diverse industry domains. 
Here are some of its benefits:</p><ul><li>The BLOOM model’s ability to swiftly adapt to new tasks, even with minimal training data, is one of its most striking aspects.</li><li>The BLOOM model prioritizes ethical and fair decision-making to minimize biases and promote transparency and trustworthiness.</li><li>As new tasks emerge, more modules may be easily added without interfering with the performance of current modules.</li><li>The BLOOM model constantly adjusts its model parameters depending on the most recent data, ensuring it stays in sync with changing data distributions.</li><li>The capacity of the BLOOM model to learn from sparse data and its complex neural network design contribute to its high accuracy.</li></ul><h3><strong><em>Limitations of the BLOOM AI Model:</em></strong></h3><p>One thing that limits its potential to be harnessed by every organization is its high running costs. The BLOOM model was trained on 384 NVIDIA Tesla A100 GPUs, which cost around $32,000 each. LLM research is focused on training ever-larger models, leading to rising training and running costs.</p><p>Moreover, the compressed version of BLOOM is 227 GB, and specialized hardware with hundreds of gigabytes of VRAM is required to run the model. As with ChatGPT, it requires a large computing cluster equivalent to an NVIDIA DGX-2, which costs around $400,000. However, Hugging Face plans to launch an API platform for researchers at $40/month, which may not be cost-effective.</p><p>Besides, the BLOOM model is trained on real-world datasets, because of which it may generate biased content. 
This can lead to over-representing some figures, under-representing some facts, and encouraging stereotypes, which can result in factually incorrect content and repetitive text.</p><h3><strong><em>Applications of BLOOM</em></strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*gbH4QKjc0A6oMYbF" /></figure><blockquote><strong>BLOOM learning capabilities help in natural language processing</strong></blockquote><p>The BLOOM AI model presents many applications throughout various industries and businesses. Its potential can be leveraged to improve operational efficiency and open new doorways for innovation. One of the potential applications of the BLOOM AI model is in natural language processing tasks, which include, but are not limited to, sentiment analysis, text summarization, and language translation.</p><p>Having been trained on 46 natural languages and 13 programming languages, it is helpful for generating coherent text and content for different purposes, like marketing and content creation. Researchers and developers can use it for research and development purposes, to build advanced language models and artificial intelligence tools.</p><p>The researchers have warned about the authenticity of the content generated by the model: factual content for subjects like math and history should not be trusted directly, thereby limiting its usage for biomedical, political, and legal purposes.</p><h3><strong><em>Wrapping up,</em></strong></h3><p>The BLOOM AI model opens the portal to next-level intelligence with its exceptional accuracy, scalability, flexibility, rapid learning, and natural language processing. All these abilities make it an excellent tool to implement in various industries to make operations easier.</p><p>The model’s capacity to handle and analyze complex data, generate human-like responses, and take decisions based on ethical approaches makes it different from other language models. 
Organizations can leverage the potential of BLOOM to improve their operational efficiency and productivity. The progress in AI technology opens up new doors and unlocks opportunities to revolutionize the world, and BLOOM is one of the important stepping stones in the transformational journey.</p><p>Thanks for sticking on till the end. We appreciate your interest and commitment in exploring this fascinating field. We hope that you found the information valuable and insightful.</p><p>If you are interested in exploring Generative AI and have any relevant projects or collaborations in mind, we would be pleased to hear from you. Please feel free to <a href="https://www.codecrafttech.com/"><strong>contact us</strong></a><strong> </strong>to discuss any ideas, questions, or potential opportunities. Once again, thank you for your readership, and we look forward to connecting with you!</p><h3><em>About the Author:</em></h3><p><a href="https://www.linkedin.com/in/drkirankumarc/"><strong>Dr. Kiran Kumar</strong></a><strong> </strong>is an accomplished AI researcher, innovator, and senior data scientist. With a Ph.D. in Supply Chain Analytics, he possesses a profound understanding of data analysis and machine-learning techniques. His extensive research contributions are showcased through numerous publications in esteemed international journals. Driven by a passion for pioneering advancements, he holds patents for groundbreaking innovations in the field. Currently, he is focused on developing cutting-edge products by leveraging his expertise in Prompt engineering and Generative AI.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=911d51676bf7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Kubernetes Architecture: The Components Explained]]></title>
            <link>https://codecraft.medium.com/the-kubernetes-architecture-the-components-explained-41a5e84234b1?source=rss-4079a22d4d20------2</link>
            <guid isPermaLink="false">https://medium.com/p/41a5e84234b1</guid>
            <category><![CDATA[computer-science]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[components]]></category>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[containers]]></category>
            <dc:creator><![CDATA[CodeCraft]]></dc:creator>
            <pubDate>Fri, 14 Jul 2023 12:43:51 GMT</pubDate>
            <atom:updated>2023-07-14T12:43:51.927Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*C10lkH5tmK3t-9ck6aOVLA.jpeg" /><figcaption>The components of Kubernetes architecture</figcaption></figure><p>In the first chapter of this roundup about Kubernetes and its significance in the DevOps space, we discussed an overview of the tech as well as its correlation to Docker and containers. We also surveyed how Kubernetes works in tandem with containers to deploy apps faster and with greater efficacy. To know more about the subject, refer to this link.</p><p>As we go further into the topic, let us take a look at the architecture that this technology is built on. This article is a run-through of the Kubernetes architecture and its top components, explained.</p><h3>What are the top components of the Kubernetes architecture?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Qh0vxkkDLKtcnWoi" /></figure><p>Kubernetes is primarily used to manage and scale the deployment of applications, and it has a huge impact on accelerating an app’s continuous integration and continuous delivery (CI/CD) process. To facilitate this, Kubernetes creates clusters that run containerized applications via nodes. Broadly speaking, the Kubernetes architecture consists of three major branches. 
These include:</p><ol><li><strong>Pods-</strong> The base unit which manages containerised apps</li><li><strong>Kubernetes Data Plane-</strong> Units that manage containerized workloads</li><li><strong>Kubernetes Control Plane-</strong> This hosts the Kubernetes API server and is responsible for managing clusters and workloads.</li></ol><p><strong>NOTE:</strong> Since local storage is lost when a pod shuts down, Kubernetes offers a persistent storage mechanism which allows the user to store data beyond the lifecycle of the pod.</p><h3>Top 6 Elements of the Kubernetes Architecture Explained!</h3><p>Now that you have a high-level understanding of the components of the Kubernetes architecture and how they work in synchronization to streamline the CI/CD process, let us take a look at some of the elements that make this procedure happen. Let us have a look at them in the upcoming segments of this article:</p><h3><strong>Pod</strong></h3><p>As mentioned before in the article, the Pod is the base unit of the Kubernetes architecture and plays a major role in managing containerised applications. It is the smallest deployable unit, and it can support multiple containers on a single pod. Pods generally follow a one-container-per-pod approach for ease of integration in most use cases, and a pod receives a new IP address when it is recreated.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/247/0*j7ZWXx6yxciukJYi" /></figure><h3>Deployment</h3><p>The Deployment feature in Kubernetes is used to integrate and manage the deployment process dynamically. Its primary function is to create or modify pod instances to scale their replicas in a staged and easy-to-control manner. 
It is essentially a resource object that rolls out declarative updates using `kubectl`.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/451/0*gYAH-MwhIRdfc7Hs" /></figure><h3>Services</h3><p>The Services element of Kubernetes assists in providing all the internal and external communication that is necessary for a deployment. It spans nodes to provide a stable IP address for each individual set of pods functioning within the Kubernetes architecture. Services are an abstraction which works to simplify container management by defining an access policy.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/518/0*ihmfDxVxLEebgAZQ" /></figure><h3>Ingress</h3><p>The primary functionality of Ingress is to route traffic into clusters by providing HTTP/HTTPS accessibility. Put simply, it is a set of routing rules which acts as the intermediary between clients’ requests and the Kubernetes services. Ingress acts as a traffic controller which simplifies the complexity of traffic flowing from outside the platform to the internal pods.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/421/0*zPDXJQTvzo_lGxPt" /></figure><h3>ConfigMap</h3><p>ConfigMap is an API object that is used to store non-confidential app configuration as key-value pairs of strings, establishing runtime parameters on the Kubernetes platform. This helps you build a twelve-factor app, letting you change the app’s environment dynamically.</p><h3>Secret</h3><p>In the last leg of this article, we’ll have a glance at what Secrets are in Kubernetes. Secrets are essentially sensitive data, such as passwords or tokens, stored inside an object. 
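<p>Since Secret values are stored base64-encoded, preparing one can be sketched in a couple of lines (an illustration in Python; the helper names and sample value are our own):</p>

```python
import base64

def encode_secret_value(raw: str) -> str:
    # Kubernetes expects each Secret value as a base64-encoded string.
    return base64.b64encode(raw.encode("utf-8")).decode("ascii")

def decode_secret_value(encoded: str) -> str:
    # base64 is an encoding, not encryption: anyone can reverse it,
    # so access to Secrets still has to be controlled by the cluster.
    return base64.b64decode(encoded).decode("utf-8")
```

<p>The round trip is lossless, which is exactly why Secrets should be protected by the cluster’s access controls rather than treated as encrypted data.</p>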
They use a base64-encoded format to store data on the API server.</p><h3>Conclusion</h3><p>Our experts at CodeCraft predict that Kubernetes may very well be on the way to becoming the future of cloud computing and CI/CD, and we hope that our insights have provided you with a comprehensive understanding of the tech stack and how it works. If there is anything more you’d like to know about the subject, please feel free to let us know through our social handles and we’ll see how best we can answer your questions.</p><h3>Resources- Kubernetes Official Documentation</h3><p>If you’d like to know more about Kubernetes, please refer to the <a href="https://kubernetes.io/docs/home/"><strong>official documentation linked here.</strong></a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=41a5e84234b1" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Maximizing Potential: Codecraft’s Strategy For Training]]></title>
            <link>https://codecraft.medium.com/codecrafts-strategy-tech-training-mentoring-11897c77d67d?source=rss-4079a22d4d20------2</link>
            <guid isPermaLink="false">https://medium.com/p/11897c77d67d</guid>
            <category><![CDATA[mentorship]]></category>
            <category><![CDATA[thought-leadership]]></category>
            <category><![CDATA[strategy]]></category>
            <category><![CDATA[training-courses]]></category>
            <category><![CDATA[training-and-development]]></category>
            <dc:creator><![CDATA[CodeCraft]]></dc:creator>
            <pubDate>Wed, 12 Jul 2023 06:34:36 GMT</pubDate>
            <atom:updated>2023-07-12T07:03:03.253Z</atom:updated>
            <content:encoded><![CDATA[<h3>Maximizing Potential: CodeCraft’s Strategy For Training</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MZ_FLca2fUbXZTFiFfYVpA.jpeg" /><figcaption>Tech &amp; IT training for software engineers</figcaption></figure><p>We all want to be productive, and we can usually come up with the right goals and objectives. The problem is that we don’t always know how to make sure that those goals are being met, or who to turn to if they are not. That’s what makes a good training program effective: it helps you figure out what your goals are, and then helps you achieve them by providing the training you need, whether it be in terms of managing your time effectively or leveling up your creative abilities.</p><p>At CodeCraft, we have a long-standing commitment to investing in the training and development of our employees. We’ve designed our training to be highly practical, with a focus on hands-on learning and enabling exposure to real-world scenarios.</p><p>We have experts in the industry who design and deliver updated training on an as-needed basis. They also curate and make available proven sources of learning for engineers of various levels of experience. In the post Covid scenario, it is critical that we ensure that the fresh engineers joining us get hands-on training on the foundations of programming and on the tools and technologies they need to use to code and collaborate effectively in a team. Additionally, our training helps software engineers to build professional connections and networks that are extremely valuable as they progress in their careers.</p><h3>Investing In Our Employees: CodeCraft’s Commitment To Effective Training</h3><p>There are several reasons why we invest in training processes for fresher software engineers. 
Some of the potential benefits of this approach include:</p><ol><li><strong>To improve skills and knowledge</strong>: Offering training can help software engineers improve their skills and knowledge, which can help them to hit the ground running when they start on a project.</li><li><strong>To stay current with industry standards:</strong> The field of software engineering is constantly evolving, and offering training can help ensure that software engineers are up-to-date on the latest tools, technologies, and best practices.</li><li><strong>To ensure that the best join us:</strong> By providing training opportunities, we want to show our commitment to our employees’ professional growth and development and offer them the best possible arena to hone their skills. Fresh engineers get an excellent opportunity to learn and prove their ability to contribute to projects in a given time frame.</li><li><strong>To improve team collaboration and communication</strong>: Training can help improve communication and collaboration within a team by ensuring that all team members have a common understanding of the tools and processes being used.</li><li><strong>To increase efficiency and productivity</strong>: By providing training, we can help software engineers work more efficiently and effectively, which can lead to increased productivity.</li></ol><h3>From Novice to Expert: CodeCraft’s Training Philosophy</h3><p>Companies that invest in training their employees demonstrate that they value their employees’ growth and development. 
This can create a positive and supportive work environment that can lead to higher levels of employee engagement and satisfaction.</p><p><strong><em>During the training, our employees are exposed to all sorts of things that are essential for a programmer.</em></strong></p><p>For fresh engineers, we start with an introduction to the basics of programming and then pass them through hands-on sessions covering all the essentials of programming, such as OOP, functional programming, and using notations to express algorithms and software designs, so that they are able to pick up any new technology, language or framework needed in their projects.</p><p>As a fresher, it is very important that you first gain mastery in one programming language. In programming, it all boils down to how well we can decompose a given problem into subproblems, solve the subproblems using functions, and ultimately compose those functions together to form the final solution. With our training, engineers who are fresh out of college will be taught how to solve interesting programming problems as a composition of functions and learn how to test them. The philosophy we use is asserts-first: we begin with simple `assert` statements, and gradually a full-fledged unit testing framework is introduced to master the test-driven development methodology.</p><p><strong><em>We have a rigorous 3-month training program for fresh engineers.</em></strong></p><p>Our trainers have decades of experience in the field of software construction, and when their experience flows into the initial training, there’s no doubt that the engineers will get to learn the art and science of programming in the best way possible.</p><p>Our well-defined training programs and regular code reviews are meant to encourage developers to become proficient in a programming language so that they can quickly adapt themselves to any environment that they come across. 
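<p>The decompose-and-compose approach with asserts-first testing can be sketched in a few lines (a toy illustration in Python; the problem and function names are our own):</p>

```python
# Subproblem 1: break the text into words.
def words(text: str) -> list[str]:
    return text.split()

# Subproblem 2: pick the longest item from a list.
def longest(items: list[str]) -> str:
    return max(items, key=len)

# Final solution: a composition of the two tested subproblems.
def longest_word(text: str) -> str:
    return longest(words(text))

# Asserts-first: each piece is specified by assertions before composing.
assert words("to be or not") == ["to", "be", "or", "not"]
assert longest(["to", "not"]) == "not"
assert longest_word("premature optimization is evil") == "optimization"
```

<p>The same habit scales up: once each function’s behavior is pinned down by assertions, graduating to a full unit testing framework is a natural next step.</p>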
They also acquire the skills required for effective documentation, debugging and collaborating with other members of their team. Lastly, we ensure that we build on our employees’ experience with each other while building their portfolios, to give them a launching pad for their careers.</p><h3>What is our strategy to unleash the full potential of a fresher?</h3><p>We understand that it can be intimidating to join a new organization at the beginning of your career, so we make sure that every new employee gets a comprehensive training program designed to help them hit the ground running.</p><p>CodeCraft has built a team of experts who are able to impart their knowledge and experience to newer employees, creating a positive and supportive learning environment for all. Here are some of the aspects that are covered in the initial training:</p><ul><li><strong>Building a strong programming foundation:</strong></li></ul><p>The ability to write clean functions and test them effectively is a much-required skill, and we treat it as such. Essentially, all programming languages and frameworks share a lot of common aspects. A reasonable amount of exposure to basic data structures and algorithms will help solve programming problems in a methodical way.</p><p>As our freshers strengthen their foundation and dive deeper into the core features of a language like JavaScript or TypeScript, they will become equipped to learn any new language as needed in future. We ensure that this training involves exercises to master the programming concepts learnt. By the end of two months of initial training, they will have been exposed to programming fundamentals like elementary data structures, OOP concepts and the functional programming paradigm.</p><p>In the third month, they are exposed to app development, where they get to learn concepts like how to use libraries and how the front end and back end work in tandem with the database system. 
We also have weekly review sessions which help in gauging their performance, so that they have an understanding of their prowess and of what corrective measures to take up.</p><ul><li><strong>Working with collaboration tools:</strong></li></ul><p>Engineers need to communicate effectively and frequently, be it sharing code and getting it reviewed or exchanging ideas with peers. To aid this, we introduce the Git version control system right at the beginning so that they get to know the effective way to collaborate with the team.</p><p>Slack training is also brought in at this point, as the tool will be extensively used for day-to-day topic-based discussions. An internal wiki system is also used to document the knowledge gained during training, along with other information that engineers need to refer to in order to work effectively.</p><ul><li><strong>Effective task management:</strong></li></ul><p>Task management is an important skill for software engineers because it helps them to stay organized and focused, and to make the most effective use of their time. Proper task management can help software engineers to break down complex projects into smaller, more manageable tasks, and to prioritize these tasks based on their importance and dependencies. This can help to keep a project on track and ensure that it is completed in a timely and efficient manner. Task management training can also help software engineers to develop the skills they need to handle changing priorities and to adapt to new challenges as they arise.</p><ul><li><strong>Organizational benchmarking:</strong></li></ul><p>In order to ensure that the new developers are able to grasp the concepts and put them into practice, we have designed a training programme conducted by professionals with extensive experience in software development, who explain the concepts and techniques in simple terms. 
The modules also follow a standardized evaluation system in which mentors assess each new developer after every module and give them feedback on their performance. New developers are also given a test at the end of each module, which helps us assess their technical proficiency.</p><p>This standardization, combined with the mentors’ evaluations, allows us to assess people according to their technical prowess. The systematic nature of the training programme also gives us a consistent way to benchmark technical skill across the organization.</p><h3>In closing…</h3><p>CodeCraft is a fast-growing digital agency with a strong focus on IT and design, and we’re dedicated to fostering an environment that’s fun and rewarding to work in. Ultimately, we want to provide a training experience that will help you grow and develop your skills. We are committed to developing the talent of each and every individual that comes through our doors.</p><p>We hope this article has helped you understand how we leverage effective training at CodeCraft to achieve superior productivity and to hire the most competent, hard-working and motivated employees.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=11897c77d67d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[An Introduction To Kubernetes, Containers & Docker]]></title>
            <link>https://codecraft.medium.com/an-introduction-to-kubernetes-containers-dockers-691ff8615284?source=rss-4079a22d4d20------2</link>
            <guid isPermaLink="false">https://medium.com/p/691ff8615284</guid>
            <category><![CDATA[containers]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[kubernetes]]></category>
            <dc:creator><![CDATA[CodeCraft]]></dc:creator>
            <pubDate>Fri, 07 Jul 2023 08:19:10 GMT</pubDate>
            <atom:updated>2023-07-18T07:20:19.683Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QYPegWsoYppq1BycIgoJfQ.jpeg" /></figure><p>Docker, Kubernetes, and containers are already a big deal in the tech space. If you’re unfamiliar with any of these terms, I’ll give you a brief rundown. Containers allow developers to package applications into isolated instances and run them on a server or in the cloud without changing the source code. Docker provides tools that make it easy to create images of these containers so teams can build upon them quickly. Finally, Kubernetes is an open-source project designed to provide scheduling and deployment functionality in application environments.</p><p>Now that you know the basic pieces of the puzzle and have a better idea of what containers, Docker, and Kubernetes are all about, you can dive deeper into how they work together to help you build modern web apps through this introductory article.</p><p><strong><em>Let’s get started!</em></strong></p><h3>What are containers?</h3><p>A container is essentially an application packaged together with its dependencies in a standard format, so that it can run in isolation from other containers. Containers are fast because they don’t boot a full guest operating system; they share the host kernel and run as ordinary processes, which makes them extremely efficient compared with traditional virtual machines (VMs).</p><h3>Why are containers popular?</h3><p>Containers are used quite extensively by DevOps engineers in today’s software development arena. Here are some reasons why they are important:</p><ul><li><strong>Portability:</strong></li></ul><p>Running an app in a container makes it far easier for the developer to deploy the app on multiple platforms and operating systems. 
This helps them cut down on time while streamlining the DevOps process.</p><ul><li><strong>Faster delivery:</strong></li></ul><p>Containers speed up application deployment: an image built once can be shipped and started quickly in any environment, shortening the path from code to production.</p><ul><li><strong>Efficiency:</strong></li></ul><p>Containers make more efficient use of hardware, which makes them a cost-effective alternative to regular virtual machines.</p><h3>What is Docker?</h3><p>Docker is a container technology that allows you to package code and dependencies together in a standardized way. You can then run these containers on any Docker Engine host without needing to worry about the host machine or its configuration. Docker containers are lightweight, and they can be kept running for long periods of time.</p><p>When Docker was first released in 2013, container technology was far from where it is today: running containers required a lot of work and tooling outside the standard OS. Docker came about to change all that and make building container images easier than ever before.</p><h3>Top Benefits of Docker:</h3><p>Here are some benefits of using Docker:</p><ul><li><strong>Light-weight:</strong></li></ul><p>Docker does not require a separate operating system for each application, which helps cut down costs. This makes Docker’s architecture lightweight compared to traditional virtual machines.</p><ul><li><strong>Faster:</strong></li></ul><p>One of the primary advantages of using Docker is that it makes the development process faster, as the architecture ships only what is needed.</p><ul><li><strong>Great community support:</strong></li></ul><p>Docker enjoys strong community support, with many active forums and discussions built around the technology. 
The project also has a thriving open-source community in which developers from all over the world collaborate.</p><ul><li><strong>Easy Configuration:</strong></li></ul><p>Docker is quite simple to configure. Refer to the documentation linked in the note below to learn more about it.</p><p><strong>NOTE:</strong> You can <a href="https://docs.docker.com/get-started/overview/#:~:text=Docker%20Desktop%20is%20an%20easy,share%20containerized%20applications%20and%20microservices."><strong>install Docker</strong></a> by referring to its docs.</p><p><strong>Popular Alternatives For Docker:</strong></p><ol><li>Podman</li><li>Buildah</li></ol><h3>Limitations of Docker:</h3><p>Here are some of the limitations that may present themselves when using Docker:</p><ol><li>Communication between containers is hindered.</li><li>Scaling and load balancing may become an issue as the number of containers grows.</li><li>Separating sensitive information becomes a hurdle when deploying an app using Docker.</li><li>Docker does not roll out updates very often.</li><li>It becomes significantly harder to mount and unmount files on Docker.</li></ol><p>Now that you are familiar with the advantages and disadvantages of both Docker and containers, let us cut to the chase and answer the question at hand.</p><p>Why Kubernetes? And why is it the buzzword it is in the DevOps space?</p><p><strong><em>Let’s get started on an in-depth exploration of Kubernetes and its significance!</em></strong></p><h3>What is Kubernetes and Why use it?</h3><p>Kubernetes, also known as K8s, is a distributed system for automating the deployment, scaling and management of containerized applications. It was originally developed at Google and open-sourced in 2014, and it has since been adopted by many other large organizations.</p><p>It’s a powerful and highly scalable container orchestration platform that makes it easy for developers to get their applications up in the cloud. 
Kubernetes gives users access to a cloud-native approach to building distributed services using containers.</p><p><strong>NOTE:</strong> Kubernetes can be deployed on every major cloud provider, including AWS, Google Cloud (via Google Kubernetes Engine), Azure (via Azure Kubernetes Service) and Red Hat OpenShift.</p><h3>What is Kubernetes (K8s) used for?</h3><p>In this segment, let us explore some potential use cases and benefits of Kubernetes in development environments:</p><ul><li><strong>Increased usage of containers</strong></li></ul><p>Kubernetes is a container orchestrator that allows users to run and manage distributed services on clusters. It is a cloud-native approach to building distributed applications, which means you can use it to build microservices and other service-oriented architectures (SOAs).</p><ul><li><strong>Supports scalability across machines:</strong></li></ul><p>Kubernetes supports horizontal scalability, with multiple nodes running in parallel on host machines. It provides tools for automating the management of these applications across multiple clouds or data centers.</p><ul><li><strong>Enhanced management of containers:</strong></li></ul><p>You can also use it to manage clusters running different types of containerized workloads — for example, long-running services versus batch jobs — and automate the observability and control of those applications across environments.</p><ul><li><strong>Has self-healing capabilities</strong></li></ul><p>The Kubernetes architecture also allows for self-healing capabilities. 
This allows Kubernetes to detect issues on its own and resolve them independently — for example, by restarting or replacing containers when a component goes down.</p><ul><li><strong>Features high availability</strong></li></ul><p>Kubernetes also supports high-availability clusters, which have more than one control-plane node so that API services keep running even if a node fails.</p><h3>Kubernetes Architecture Diagram</h3><p>Refer to the image given below to understand the Kubernetes architecture through a visual diagram:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*5gXr9YYWUdRfU_1Q" /></figure><p><strong>Alternatives for Kubernetes (K8s):</strong></p><ol><li>Docker Swarm</li><li>OpenShift by Red Hat</li><li>AWS Fargate</li><li>Nomad by HashiCorp</li><li>AWS ECS</li></ol><p>Containers and Kubernetes are two sides of the same coin. Together they allow us to use containers at scale, and to do so in a more organized, efficient way. What were once pipe dreams that existed only on the backs of giants like Google and Amazon are now easily within reach for a variety of different use cases. When it comes to modern serverless architecture, there’s simply no beating the cost savings that go along with deploying and scaling applications in this way.</p><p>We hope you liked our article, which lays the foundation for understanding Kubernetes. We’ll have more coming your way about this stack in the near future; the next piece will talk about the <strong>components in Kubernetes</strong>.</p><p><strong>If you’d like to know more about </strong><a href="https://codecraft.medium.com/the-kubernetes-architecture-the-components-explained-41a5e84234b1"><strong>Kubernetes and the components of its architecture</strong></a><strong>, please refer to the next article in this series.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=691ff8615284" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Soft Skills At CodeCraft: Paving New Ways To Learn]]></title>
            <link>https://codecraft.medium.com/soft-skills-at-codecraft-paving-new-ways-to-learn-e6145916a5d6?source=rss-4079a22d4d20------2</link>
            <guid isPermaLink="false">https://medium.com/p/e6145916a5d6</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[information-technology]]></category>
            <category><![CDATA[soft-skill-training]]></category>
            <category><![CDATA[soft-skills-development]]></category>
            <dc:creator><![CDATA[CodeCraft]]></dc:creator>
            <pubDate>Tue, 04 Jul 2023 09:22:49 GMT</pubDate>
            <atom:updated>2023-07-04T09:22:49.045Z</atom:updated>
            <content:encoded><![CDATA[<h3>Soft Skills At CodeCraft: Paving New Ways To Learn</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RiC5HraUdwyqqLvARbzspg.jpeg" /></figure><p>A lot of people think that soft skills only apply to work environments, but this couldn’t be further from the truth; soft skills are essential in every arena of life! The right kind of soft skills training can give you a competitive edge as a professional.</p><p>Soft skills also have a long-term payoff: they help build relationships with clients and colleagues, setting up opportunities for future collaboration down the line. Learning soft skills is a process that never ends. However, there are some things that will help you get started on the right track and make sure that you’re developing all of the right skills for your role in software development.</p><p>You’ll need soft skills if you want to stay competitive in today’s market. In fact, studies show companies are seeking people with strong communication and interpersonal skills more than ever before!</p><h3>Why do you need soft skills as a young software professional?</h3><p>Strong soft skills may well determine the success of your software career. Communication, collaboration, and interaction with others are the most important aspects of sound soft skills, and they can help you progress in your journey as a software professional. These three areas of soft skills are pillars of your career that don’t require a lot of technical knowledge, but they can make or break your success as a developer.</p><p>Communication is about being able to listen well and understand others when they speak. Collaboration requires that you work well with other people in teams and on projects — and it helps if you take responsibility for making sure everyone is doing their part! 
Interaction comes down to how well you communicate with customers (or clients), which helps them feel understood and fosters an environment where they trust that you know what is expected of the project at hand.</p><h3>What is CodeCraft’s Strategy for soft skills development?</h3><p>The demand for skilled professionals who can communicate and collaborate effectively has never been higher than it is right now — and that’s why we at our company are here to help you learn how to harness your own personal talents so they can be put to use as you pave your way as a professional. So let’s dive into our 12-pronged approach for building these valuable traits:</p><ol><li>Effectively managing your tasks while keeping a healthy work-life balance</li><li>Maintaining clean and professional email etiquette</li><li>Balancing your energy levels to prevent an early burnout</li><li>Developing effective networking skills for building impactful connections</li><li>Building a growth philosophy of finding your inspiration daily</li><li>Honing communication skills for a better collaborative approach</li><li>A how-to on handling emotions at work for a balanced mindset</li><li>Structuring an environment of constructive criticism to allow introspection</li><li>Learning how to set the stage for a project by improving client interviewing skills</li><li>The significance of assuming responsibility for tasks to ensure smooth workflow</li><li>Developing critical thinking skills for analytical problem solving</li><li>How to pitch ideas better through illustrative and action-oriented presentations</li></ol><h3>CodeCraft’s Doctrine For Effective IT Soft Skills Training</h3><p>We, at CodeCraft, know that soft skills are the real drivers of your career and productivity. 
We provide training in communication, critical thinking, and creativity through our proprietary curriculum, delivered on a 1:1 basis and focused on creating an impact through actionable behavior.</p><p>Our trainers will ensure that you are equipped with the right tools and techniques to help you succeed in today’s competitive environment. By using Bloom’s taxonomy in our soft skills module, we also work on understanding the different stages of information processing, which helps us analyze our training delivery methods. This helps you learn to solve problems by analyzing the process and assessing the situation from the ground up.</p><p>The training is designed to build skills and knowledge so that you develop your creativity and critical thinking ability and improve your self-confidence in facing change and challenges in the work environment.</p><p>At the end of the day, we need to do whatever it takes to make sure the training sticks, so that you take something away to implement as you start your venture as a young IT professional. With this in mind, we’ve created a model that helps us assess how well our method of teaching is working and how good a job we’re doing in helping you achieve your goals. By using an evaluation model based on the Kirkpatrick paradigm, we also determine whether you are learning and whether your takeaway from the experience is something you’re able to translate into actionable behavior.</p><h3>Signing off…</h3><p>Which brings us back to our original question: what do you need to know in order to establish yourself as a determined young professional, IT or otherwise? We can’t give you all the answers, but we can tell you one thing for sure. No matter how you define success, if you want to break into this industry today, you have to have great interpersonal and soft skills. 
With the right foundation, you will be able to build upon those skills, and you’ll find that they open doors for you throughout your career!</p><p>We’re dedicated to creating an environment that fosters synergic learning and personal growth for everyone who is building CodeCraft! We hope this article has given you insight into our approach to building a collaborative and harmonious workplace.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e6145916a5d6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What can we learn from Automation Testing failures?]]></title>
            <link>https://codecraft.medium.com/what-can-we-learn-from-automation-testing-failures-c4654933bfe9?source=rss-4079a22d4d20------2</link>
            <guid isPermaLink="false">https://medium.com/p/c4654933bfe9</guid>
            <category><![CDATA[software-testing]]></category>
            <category><![CDATA[testing]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[quality-assurance]]></category>
            <category><![CDATA[web-application-testing]]></category>
            <dc:creator><![CDATA[CodeCraft]]></dc:creator>
            <pubDate>Tue, 14 Sep 2021 05:22:36 GMT</pubDate>
            <atom:updated>2023-05-04T09:43:23.074Z</atom:updated>
            <content:encoded><![CDATA[<h3>CodeCraft’s POV: Learning From <strong>Automation Testing Failures</strong></h3><h4>In the world of <strong>software testing</strong>, tests sometimes fail! Here’s CodeCraft’s POV on getting it right to avoid flaky testing</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*t8FbDUtY6P5mSwtPcq7cdA.png" /></figure><p><a href="https://www.codecrafttech.com/quality-assurance-usa.html"><strong>Quality Assurance</strong></a> is a very important part of a successful <a href="https://www.codecrafttech.com/services.html">software development</a> methodology. With the trend turning towards Agile, quick testing feedback, quick deployment and quick deliverables have become a must. You need to ensure that you research and plan to suit these SDLC methodologies to facilitate your <a href="https://codecraft.medium.com/a-quick-guide-to-ci-cd-b02259d52d2"><strong>CI/CD</strong></a> pipeline. CI/CD ensures feedback after every check-in, which in general means a faster development process. <a href="https://www.codecrafttech.com">Faster deployment enables developers</a> to ship any number of bug fixes in a minimal amount of time. The code is only deployed on the customer’s end after ensuring that it does not break. However, with sites and apps being equipped with increasingly sophisticated features, <strong>manual testing</strong> becomes a complicated and long-winded task — and with <strong>automation testing</strong>, this can be made a lot easier.</p><p><strong>Table of Contents:</strong></p><blockquote><strong><em>1. How to analyse automated software testing failures?</em></strong></blockquote><blockquote><strong><em>2. How can we avoid flaky tests?</em></strong></blockquote><p>Above is the process we followed to ensure the code was error-free and deployable; it also helped us ship better code faster. 
Now let’s focus on the test phase of the pipeline, which provides the most important feedback about the build: all the automated tests are executed whenever a developer checks in new code, in the form of a new feature or a bug fix. As a tester, I was responsible for this phase of the pipeline and made sure everything was going smoothly. But this is not always the case: when the pipeline breaks at the test phase and the build turns red, there is mass panic, and everyone wants to know the reason behind the <strong>automation testing</strong> failures. For this, I would carry out a test failure analysis to figure out what went wrong.</p><h3><strong>How to analyse <em>automated software testing</em> failures?</strong></h3><p><strong>Automation testing</strong> failures are not bad in themselves; in fact, we write tests precisely so that they fail when something is wrong with the system. The problem arises when the time spent investigating test results exceeds the time saved by running automated tests: then automation does not improve output quality, and it’s not worth the cost. To reap the benefits of <strong>automation testing</strong>, it is essential to know how to properly handle the growing amount of test results. A better understanding of results usually creates more transparency within and outside of the testing team.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mjc8tjtMA1BT1QbC8eho7A.png" /></figure><p>In <a href="https://www.codecrafttech.com/quality-assurance-usa.html"><strong>software testing</strong></a>, in order to achieve effective failure analysis, below are some of the questions every tester needs to ask themselves.</p><ol><li>The first question you need to ask yourself is “<strong><em>Did the test fail because of a problem with the software that you were testing,</em></strong> or <strong><em>because of a problem with your test?</em></strong>”. 
After all, before you go telling the developers that their code has a bug, you should make certain that the problem was not caused by a test that you wrote. To understand whether the problem lies with the software or with your test, you need to know the root cause of the issue.</li><li>The second question you need to ask yourself is “<strong><em>If the failure was caused by a problem with your application’s code, how many builds or configurations are affected by the failure?</em></strong>”. If you have multiple test environments, make sure to reproduce the failure in them, try it on multiple devices to see whether it is device-specific, and finally search the automation history for the last time this test passed.</li><li>A final question you need to ask yourself is <strong><em>“how significant is the failure?”</em></strong>. Understand how critical an impact this has on the application. <strong><em>Is it significant enough that you need to delay deployment until it is fixed? Or is it a relatively minor issue that doesn’t warrant cancelling a whole deployment?</em></strong></li></ol><p>In general, <strong>automation testing </strong>failures can be caused by an error-prone commit from a developer, a change in the application under test <strong>(AUT)</strong>, or simply tests that are flaky by design. Let’s focus more on the latter, as it is the more serious issue that we generally see in our <strong>automated software testing<em> </em></strong>reports.</p><h3><strong>How can we avoid flaky tests?</strong></h3><p>Too many <a href="https://www.codecrafttech.com/quality-assurance-usa.html"><strong>software testing</strong></a><strong> </strong>projects fail due to flaky automation tests. To help you avoid these pitfalls, here are some of the best practices that are helpful in avoiding flakiness.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8tSdmSU41AQ3LpeOKu4CGQ.png" /></figure><blockquote><strong>1. 
Avoid UI testing whenever possible</strong></blockquote><blockquote><strong>2. Focus on automating test scenarios instead of test cases</strong></blockquote><blockquote><strong>3. Stop testing multiple things in one script</strong></blockquote><blockquote><strong>4. Prerequisites should never be set up using the UI driver approach</strong></blockquote><blockquote><strong>5. Stop designing test scripts that are dependent on each other</strong></blockquote><blockquote><strong>6. Testing scripts thoroughly before committing</strong></blockquote><blockquote><strong>7. Avoid excessive use of XPath as a locator</strong></blockquote><blockquote><strong>8. Control the controllable</strong></blockquote><blockquote><strong>9. Designing a good test automation framework</strong></blockquote><h4><strong>1. Avoid UI testing whenever possible</strong></h4><p>Everyone is guilty of this: we look at a user story and automate its acceptance criteria on the UI layer itself. This is not wrong, but when test scenarios are complex there is a chance that tests will turn out to be flaky. Instead, we have to target assertions at the correct layers. Modern web applications are clearly divided between backend and frontend; the backend is mostly composed of REST web services, or APIs, with easily accessible endpoints. The application’s logic can also be tested at the API layer instead of always resorting to validating functionality at the UI layer, which is cumbersome at best.</p><h4><strong>2. Focus more on <em>automating test</em> scenarios instead of test cases</strong></h4><p>When testers write test cases as part of <strong>manual testing</strong>, they normally break scenarios into multiple steps, or sometimes into separate test cases. When designing a test script, always target one scenario at a time whenever possible. There need not always be a 1:1 mapping with test cases. This doesn’t mean combining 100 test cases into a single test script. 
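To make this concrete, here is a minimal sketch in Python’s built-in unittest — the Cart class is a hypothetical stand-in for real application code — of one script targeting one scenario, with several related validations:

```python
import unittest

class Cart:
    """A tiny stand-in for application code; this class and its methods
    are hypothetical, purely to give the test something to exercise."""

    def __init__(self):
        self.items = {}  # name -> (unit_price, quantity)

    def add(self, name, price, qty=1):
        _, existing = self.items.get(name, (price, 0))
        self.items[name] = (price, existing + qty)

    def total(self):
        return sum(price * qty for price, qty in self.items.values())

class TestCheckoutScenario(unittest.TestCase):
    def test_add_items_then_verify_total(self):
        # One scenario, end to end: several related assertions, but all
        # belonging to the same flow -- not unrelated cases glued together.
        cart = Cart()
        cart.add("pen", 2.50)
        cart.add("book", 10.00, qty=2)
        self.assertEqual(len(cart.items), 2)    # both items recorded
        self.assertEqual(cart.total(), 22.50)   # 2.50 + 2 * 10.00

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestCheckoutScenario))
```

The script exercises one flow and asserts on the outcomes that belong to it, nothing more.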
Combining validations like this applies only when you’re testing a simple flow and making multiple checks at the same time. It also helps with maintaining test scripts as test cases keep growing. Do bear in mind that by automating a test, you are not really testing; you are merely checking that the feature in question satisfies some acceptance criteria. You cannot <strong>automate testing</strong>, but you can automate the checking of known facts.</p><h4><strong>3. Stop testing multiple things in one script</strong></h4><p>This may seem to conflict with a point I made earlier, but what I’m trying to get at is that we should not put multiple assertions on single UI elements. Instead, we have to keep our test scripts as simple as possible. We should not assert on something that might change tomorrow and cause the script to break. Always remember that most flaky tests are due to bad assertions in our code.</p><h4><strong>4. Prerequisites should never be set up using the UI driver approach</strong></h4><p>Test cases might have a certain dependency or precondition that has to be met before executing the test case. When we automate these kinds of test cases, we are likely to use a UI-driven approach to satisfy the precondition and then proceed with the test case. We fail to note that the test is never even executed if the script fails at the precondition stage. To avoid these failures, use APIs to meet these requirements whenever possible.</p><h4><strong>5. Stop designing test scripts that are dependent on each other</strong></h4><p>Attempting to execute hundreds of test cases in an exact, predefined order is not a good idea. If your suite of hundreds of tests must be run in a certain order and one of the test cases fails, then you must run the entire suite again when re-testing. And again, identifying the error would require manual inspection. This is obviously very inefficient. 
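The independence principle can be sketched like this (again with hypothetical names): each test builds its own state instead of consuming another test’s result, so the order of execution does not matter.

```python
import unittest

class TestOrderLifecycle(unittest.TestCase):
    """Each test creates the state it needs, so the tests can run in any
    order -- or individually -- and still pass."""

    def _new_order(self):
        # A shared setup helper, used instead of depending on the side
        # effects of some other test having run first.
        return {"id": 1, "status": "created", "items": ["pen"]}

    def test_order_can_be_paid(self):
        order = self._new_order()     # own fixture, not another test's output
        order["status"] = "paid"
        self.assertEqual(order["status"], "paid")

    def test_order_can_be_cancelled(self):
        order = self._new_order()     # again: fully independent state
        order["status"] = "cancelled"
        self.assertEqual(order["status"], "cancelled")

# Run the two tests; because they are independent, any order works.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestOrderLifecycle))
```

Either test can fail, be re-run, or be run alone without touching the other.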
Such an approach — ordered, interdependent execution — works against the benefits that come with test automation: flexibility, agility, and so on. It defeats the principle that each single case should be able to run on its own, without depending on other cases, and that the order in which cases are run should not matter.</p><h4><strong>6. Testing scripts thoroughly before committing</strong></h4><p>Most of the time, after designing a new test case, we run the test a couple of times and see if it’s passing. If we see a green check, we move on to automating other test cases. There is a fundamental flaw in this approach: one test case may fail in several different ways, and we have not yet tested those failure scenarios. For a different set of data, the system’s behaviour may also be slightly different, which the script does not handle. Therefore, we must test with multiple combinations of test data before signing off on the current test case.</p><h4><strong>7. Avoid excessive use of XPath as a locator</strong></h4><p>Most of the time, developers fail to assign IDs to all the web elements, even though stable IDs on web elements make testing far more reliable. So we as testers opt for XPath instead, knowing that XPath locators are slow. If the automated test script is not able to find these web elements within a prescribed time limit, the test fails, resulting in flaky tests. Therefore it is better to ask the development team to add IDs wherever possible.</p><h4><strong>8. Control the controllable</strong></h4><p>In the world of <strong>software testing</strong>, consistency is the mother of quality. When using <strong>manual testing</strong> to verify automation failures, we might not be able to replicate failures all the time. In some cases a test case passes on our local setup, but as soon as we push it to the <a href="https://codecraft.medium.com/a-quick-guide-to-ci-cd-b02259d52d2">CI/CD</a> setup, the same test case fails. 
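As a tool-agnostic illustration of the waiting strategy behind point 8 — a hand-rolled sketch in the spirit of Selenium’s explicit waits, not Selenium’s actual API — the idea is to poll for the state you need, up to a deadline, instead of assuming the environment is as fast as your local machine:

```python
import time

def wait_until(condition, timeout=5.0, poll_interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Mirrors the idea behind explicit waits in UI automation: never assume
    a fixed speed -- wait for the state you need, up to a sensible deadline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = condition()
        if value:
            return value
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")

# Example: a stand-in 'element' that only becomes ready after a short delay,
# the way a page element appears after a slow network round trip.
ready_at = time.monotonic() + 0.3

def element_is_visible():
    return time.monotonic() >= ready_at

assert wait_until(element_is_visible, timeout=2.0) is True
```

A bounded wait like this passes on a fast machine and a slow one alike, which is exactly the consistency that flaky tests lack.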
Most of us have faced this local-pass, CI-fail situation at one time or another. It can happen if we have not taken machine speed or network speed into account: the machine we execute on in CI may be slower, and its network may be slower, compared to the local machine we first ran on. We have to anticipate this behaviour and handle it while developing the scripts, using the waits provided by the <strong>Selenium testing</strong> tool.</p><h4><strong>9. Designing a good test automation framework</strong></h4><p><strong>Automation testing</strong> requires the right tools, test automation frameworks, and technical knowledge to yield results. Before building an automation framework, you first need to select the right tool for the project. For that, you need to know whether the application being tested is web-based or mobile-based. To test the former, use <strong>Selenium</strong> to automate your tests. For the latter, <strong>Appium</strong> is one of the best possible tools for automation. When creating a test automation framework, we should consider the following main points:</p><blockquote>The ability to create automated tests quickly by using appropriate abstraction layers</blockquote><blockquote>A meaningful logging and reporting structure</blockquote><blockquote>Easy maintainability and extensibility</blockquote><blockquote>An error handling and retry mechanism to rerun failed tests</blockquote><h3><strong>Conclusion</strong></h3><p>Your tests will fail, at least sometimes. Test failure analysis is a key pillar of continuous testing. Continuous testing creates a lot of test results data, which in turn includes failed test cases. The way you react to the failures plays a pivotal role in shaping the effectiveness of your overall testing strategy. <strong>Automation testing</strong>, if done wrong or with no thought process, is a waste of time and provides no value to anyone. 
The right test failure analysis solution allows you to focus on actual failures that may be a risk to the business, not on false alarms. And as you mature your DevOps process and expand test automation, smart test reporting will become critical. The above points are not geared towards any specific kind of testing tool; they can be considered general best practices across any framework, whether it is used for <strong>Selenium testing</strong>, <strong>Appium testing</strong> or any other tool for that matter. If this is done well, you will have no problem maintaining automation scripts as you scale.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c4654933bfe9" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>