<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by V. Thulisile Sibanda on Medium]]></title>
        <description><![CDATA[Stories by V. Thulisile Sibanda on Medium]]></description>
        <link>https://medium.com/@thulieblack?source=rss-3060862dc839------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*pHRC_8V_nAF4AdwL</url>
            <title>Stories by V. Thulisile Sibanda on Medium</title>
            <link>https://medium.com/@thulieblack?source=rss-3060862dc839------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Tue, 07 Apr 2026 14:41:01 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@thulieblack/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Fueling Passion with Curiosity in Open Source: Beyond Contributions]]></title>
            <link>https://thulieblack.medium.com/fueling-passion-with-curiosity-in-open-source-beyond-contributions-47004d12085a?source=rss-3060862dc839------2</link>
            <guid isPermaLink="false">https://medium.com/p/47004d12085a</guid>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[V. Thulisile Sibanda]]></dc:creator>
            <pubDate>Thu, 25 Sep 2025 00:00:41 GMT</pubDate>
            <atom:updated>2025-11-18T11:09:58.915Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/761/0*ssCB_FTSperjQCF8.jpg" /></figure><p>Many of us began our open source journey by fixing a typo, fixing a bug, improving documentation, or adding a feature. That one merged pull request marked the start of our contributions to open source.</p><p>But what happens after you become a regular contributor to that project? If you are someone who strives to grow in every situation, you’ll soon realize that being a regular contributor is just the beginning and often not enough. There is much more to open source; it is an entire world of learning, ownership, and leadership.</p><blockquote><strong><em>Disclaimer</em></strong><em>: If you’re one of those who are tired of just being a regular contributor and looking to go beyond just contributions, then this article is for you. It’s not a guide, but it highlights what lies beyond contributions, how to embrace maintainership, learning in public, and personal growth.</em></blockquote><h3>Curiosity as a Compass</h3><p>Curiosity is often an underrated attribute to have, and yet it can be a catalyst for growth. It’s easy to fall into the habit of waiting for specific tasks to be assigned to you or for “good first issues” that have a clear scope and are easy to tackle.</p><p>True growth begins when you cultivate a mindset that seeks to understand the “why” behind your contributions. Consider asking yourself questions that challenge your knowledge, such as:</p><ul><li>What problem is this pull request (PR) trying to solve, and what challenges does this project aim to address?</li><li>How do maintainers determine which features are needed for this project?</li><li>How can I improve the user or onboarding experience?</li><li>What are the current best practices for maintenance and project management?</li><li>Where are the unnoticed gaps that need attention?</li></ul><p>Each question sparks a new opportunity. 
Allowing yourself to be curious about how things are and how they can be improved can inspire you to take the initiative beyond your assigned tasks and begin making broader, innovative contributions.</p><h3>Learning in Public</h3><p>While contributions and mentorship programs are important aspects of open source, they are not the only paths to involvement. Often, after completing a program, participants may feel stuck, unsure of what to do next, and might end up moving on to another program. However, many successful maintainers did not rely solely on structured programs for their growth; they thrived by learning in public.</p><p>Here are some pathways I took to elevate my presence in the project and further my growth and learning. These are not the only options available, but they are practices I still use in my learning journey that you might consider exploring:</p><ol><li><strong>Participate in Community Meetings and Discussions:</strong> Many projects have various communication forums. Utilize these spaces to increase your visibility. Engage in discussions, share your insights, and learn from others. Such involvement can offer unique perspectives and foster relationships that create additional learning opportunities.</li><li><strong>Volunteer:</strong> Volunteering without expectations often leads to unexpected results. Even small contributions can provide valuable experiences you wouldn’t have gained otherwise. For example, my journey in community building began after I hosted a few sessions at an AsyncAPI Online Conference. This experience was new for me and encouraged me to get more involved, helping me establish my presence not just through writing documentation but also through active participation in other community activities.</li><li><strong>Drive Project Initiatives:</strong> Similar to volunteering, identify areas where the project could improve or expand. Look for opportunities to take the lead, such as becoming a release coordinator or champion. 
Taking on such initiatives positions you as an essential team member and helps cultivate your leadership skills.</li><li><strong>Learn in Public:</strong> Sharing your learning journey with others not only fosters your growth but also inspires and educates those around you. Being transparent about your challenges and discoveries can create a ripple effect, attracting more contributors and solidifying your role as a knowledgeable resource in the community. You can do this through writing blog posts, creating tutorials, or giving talks at meetups or conferences.</li><li><strong>Seek, Don’t Just Wait for Mentorship:</strong> While mentorship programs can provide valuable guidance, don’t rely solely on formal arrangements. Approach community members whose work you admire and ask for advice. Forming connections based on shared interests can lead to organic mentorship and foster collaborations that accelerate your growth.</li></ol><p>By taking these steps, you position yourself as a resource, not just a participant. You’ll be surprised at how much faster you grow when your learning is visible.</p><h3>The Leap to Maintainership</h3><p>The greatest shift starts when you decide to move beyond being a regular contributor and aim to become a maintainer. That’s when curiosity meets responsibility. Instead of waiting to be assigned tasks, you start creating issues, reviewing your peers’ work, and learning how to properly triage tasks within a project.</p><p>You get to contribute to shaping the direction of the project, onboard other new contributors, collaborate, and actually gain experience on how to work in a distributed environment, and also foster innovation.</p><p>Making such a decision and leap will also challenge your thinking and help you view things from a different perspective. 
It cultivates a mindset focused on sustainability and long-term success.</p><p>Ultimately, growth in open source is not linear; it is a cycle of curiosity, experimentation, ownership, and responsibility. Every interaction and effort counts. Your unique perspective is valuable, and a whole world awaits you to explore beyond regular contributions.</p><p>Fuel your passion with curiosity, and let it take you further than you ever imagined. Get out there, explore, and let your journey inspire others, too!</p><p><em>Originally published at </em><a href="https://thulieblack.github.io/blog/passion-and-curiosity"><em>https://thulieblack.github.io</em></a><em> on September 25, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=47004d12085a" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What is functional programming and why it is important to learn it?]]></title>
            <link>https://thulieblack.medium.com/what-is-functional-programming-and-why-it-is-important-to-learn-it-1821d8ff4d67?source=rss-3060862dc839------2</link>
            <guid isPermaLink="false">https://medium.com/p/1821d8ff4d67</guid>
            <dc:creator><![CDATA[V. Thulisile Sibanda]]></dc:creator>
            <pubDate>Tue, 24 Jan 2023 07:25:40 GMT</pubDate>
            <atom:updated>2025-01-06T14:59:28.368Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*6btrlStXw5jyfl1u" /></figure><p>In today’s remote work setup, employers have raised their standards for hiring programmers. They look for individuals who can use multiple programming paradigms to solve business problems. Functional programming has gained popularity due to its adaptability and efficiency in solving real-world problems. This article will cover the core concepts and the advantages of functional programming.</p><h3>What is functional programming?</h3><p>Functional programming is a declarative programming paradigm in which pure functions are composed to solve complex problems. Functions take input values and produce output values without being affected by the rest of the program’s state. Functional programming focuses on <em>what</em> to solve rather than <em>how</em>, and uses expressions instead of statements. It excels at mathematical computation and avoids concepts like shared state and mutable data that are common in object-oriented programming.</p><h3>Functional Programming Concepts</h3><p>Functional programming is built on several core concepts, which we will explore below:</p><h4>● First-class functions</h4><p>In functional programming, functions are first-class: they are treated as values and can be used like any other variable. They can be passed to other functions as arguments, returned as results, or stored in data structures.</p><h4>● Recursion</h4><p>Unlike imperative object-oriented code, functional programs avoid “while” and “for” loops and other constructs that can produce different outputs on every execution. 
Instead, recursive functions call themselves repeatedly until they reach a terminating condition known as the base case.</p><h4>● Immutability</h4><p>In functional programming, a variable cannot be modified after it has been created. This preserves the program’s state throughout its runtime. It is best practice to write each function so that it produces the same result irrespective of the program’s state. This means that once we create a variable and assign it a value, we can run the program knowing that the variable’s value will remain constant and never change.</p><h4>● Pure functions</h4><p>Pure functions form the foundation of functional programming and have two major properties:</p><ul><li>They always produce the same output for the same input</li><li>They have no side effects</li></ul><p>Pure functions work well with immutable values, as they describe how inputs relate to outputs in declarative programs. Because pure functions are independent, they are reusable and easy to organize and debug, making programs flexible and adaptable to change. Another advantage of pure functions is memoization: caching a function’s results so they can be reused instead of recomputed for the same inputs.</p><h4>● Higher-order functions</h4><p>A function that accepts other functions as parameters or returns a function as its output is called a higher-order function. A related technique, currying, applies a function to one argument at a time, with each step returning a new function that accepts the next argument.</p><h3>Advantages of functional programming</h3><h4>● Easy to debug</h4><p>Since pure functions always produce the same output for the same input, there are no hidden changes or outputs to track down. 
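As an illustrative sketch (in Python, one of the multi-paradigm languages discussed later; this code is not from the original article), the ideas above — pure functions, recursion with a base case, memoization, and higher-order functions — look like this:

```python
from functools import lru_cache, reduce

# A pure function: same input always yields the same output, no side effects.
def square(x):
    return x * x

# A higher-order function: 'map' accepts another function as a parameter.
squares = list(map(square, [1, 2, 3, 4]))  # [1, 4, 9, 16]

# Recursion with a base case instead of a loop.
@lru_cache(maxsize=None)   # memoization: cache results of pure calls
def factorial(n):
    if n == 0:             # base case terminates the recursion
        return 1
    return n * factorial(n - 1)

# 'reduce' folds a sequence with a function — another higher-order function.
total = reduce(lambda acc, x: acc + x, squares, 0)

print(squares)        # [1, 4, 9, 16]
print(factorial(10))  # 3628800
print(total)          # 30
```

Memoization via `lru_cache` is safe here precisely because `factorial` is pure: a cached result can never go stale.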
Because values are immutable, it is also easier to locate errors in the code.</p><h4>● Lazy evaluation</h4><p>Functional programming adopts the lazy evaluation concept, whereby computations are evaluated only the moment they are needed. This gives programs the ability to reuse results produced from previous computations.</p><h4>● Supports parallel programming</h4><p>Because functional programming uses immutable variables, creating parallel programs is easier, as immutability reduces the amount of change within the program. Each function only has to deal with its input values and has the guarantee that the program state will remain constant.</p><h4>● Easy to read</h4><p>Functions in functional programming are easy to read and understand. Since functions are treated as values, are immutable, and can be passed as parameters, it is easier to understand the codebase and its purpose.</p><h4>● Efficient</h4><p>Since pure functions don’t rely on any external sources or variables, they are easily reusable across the program. This makes them more efficient, as no extra computation is needed to track or reconstruct state at runtime.</p><h3>Drawbacks of Functional Programming</h3><h4>● Terminology Problems</h4><p>Because of its mathematical roots, functional programming has a lot of terminology that may be difficult to explain to a layperson. Terms like “pure functions” can easily scare off people looking to learn more about functional programming.</p><h4>● Recursion</h4><p>Although recursion is one of the best features of functional programming, it can be expensive to use: deeply recursive functions consume more stack memory, which can be costly.</p><h3>Functional programming languages</h3><p>Now, as you can imagine, not all programming languages support functional programming. 
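The lazy evaluation idea described above can be sketched in Python (a hedged illustration, not code from the original article): generators compute values only at the moment they are requested, even over an infinite sequence.

```python
from itertools import islice

def naturals():
    """Lazily yield 0, 1, 2, ... — no value is computed until it is requested."""
    n = 0
    while True:
        yield n
        n += 1

# A generator expression is also lazy: squares are produced on demand.
lazy_squares = (n * n for n in naturals())

# Only five squares are ever computed, even though the source is infinite.
first_five = list(islice(lazy_squares, 5))
print(first_five)  # [0, 1, 4, 9, 16]
```

Languages like Haskell make this behavior the default; in Python it is opt-in via generators.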
Some languages, however, were designed specifically for functional programming, while others support both functional and object-oriented styles. Below is a list of some of these programming languages:</p><p>● <strong>Haskell</strong> — Made specifically for functional programming, Haskell is a statically typed programming language. Its compiled code is fast, memory safe, and efficient, and the language is easy to read.</p><p>● <strong>Python</strong> — Python supports functional programming, but it was designed with object-oriented programming as its primary paradigm.</p><p>● <strong>Erlang</strong> — Although not as widely used as Haskell, Erlang is best suited for concurrent systems. Messaging apps like WhatsApp and Discord make use of Erlang because of its scalability.</p><p>● <strong>JavaScript</strong> — Similar to Python, JavaScript isn’t specifically designed for functional programming. However, functional features like lambda expressions and first-class functions are supported, making JavaScript one of the most widely used multi-paradigm languages.</p><p>● <strong>Clojure</strong> — Clojure is a functional programming language that provides tools to avoid mutable state. It supports both mutable and immutable data types, making it less strict than some other functional languages.</p><p>● <strong>Scala</strong> — Supporting both functional and object-oriented programming, Scala was designed to address the shortcomings of Java. It also comes with a static type system similar to Haskell’s.</p><h3>Conclusion</h3><p>We have covered some of the core concepts of functional programming, but there is much more to explore. Learning more about functional paradigms can give you the leverage to use their tools and techniques to solve business problems.</p><p>I hope you found some value in this article, and if you have any questions or suggestions, please drop them in the comment section. 
See you in the next article, till then take care.✌🏿✌🏿</p><p><em>Originally published at </em><a href="https://thulieblack.hashnode.dev/an-introduction-to-functional-programming"><em>https://thulieblack.hashnode.dev</em></a><em> on January 24, 2023.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1821d8ff4d67" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[New Age Technologies: The 4TH Industrial Revolution Technologies]]></title>
            <link>https://thulieblack.medium.com/new-age-technologies-the-4th-industrial-revolution-technologies-db03de20d112?source=rss-3060862dc839------2</link>
            <guid isPermaLink="false">https://medium.com/p/db03de20d112</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[iot]]></category>
            <category><![CDATA[robotics]]></category>
            <category><![CDATA[blockchain]]></category>
            <dc:creator><![CDATA[V. Thulisile Sibanda]]></dc:creator>
            <pubDate>Tue, 30 Aug 2022 08:51:43 GMT</pubDate>
            <atom:updated>2022-08-30T08:51:43.343Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Wm-nBQE4NKvp9D21dShGMQ.jpeg" /><figcaption>The 4th industrial revolution technologies</figcaption></figure><h3>Introduction</h3><p>Mankind is constantly seeking ways to make life better. Many have taken different paths in search of knowledge that transforms the way we do things. Such quests have led to the rise of the fourth industrial revolution. Because of it, we have seen lives improve, and industries worldwide are adopting its technologies into their businesses. With this innovative, cutting-edge technology, economies across the globe have also improved exponentially. This article will focus on how the fourth industrial revolution has impacted livelihoods in the 21st century.</p><h3>What is the fourth industrial revolution?</h3><p>The fourth industrial revolution is the integration of technologies that are smart, fast, scalable, and reliable enough to execute many different tasks. Many companies have embraced this new revolution and the inventions that come with it.</p><h3>Benefits of the fourth industrial revolution</h3><p>Every phase of industrial evolution brings benefits with it, and the fourth industrial revolution has come with many advantages. These advantages are as follows:</p><ul><li>Increased productivity — with innovations like programmable machines, production is done faster.</li><li>Data-driven decisions — with sensors and data-analytics tools, companies can make decisions wisely and accurately.</li><li>Quality products — cutting-edge technologies enable continuous improvement, producing higher-quality products that ultimately improve consumers’ lifestyles.</li><li>Safety — the fourth industrial revolution has reduced operational risks in hazardous environments. 
With machines handling the dangerous work, workers no longer need to worry about their safety.</li></ul><h3>Fourth industrial technologies</h3><p>We cannot turn a blind eye to the most important force behind the rise of the fourth industrial revolution. Innovation has been at the center of it, and we therefore need to give credit to some of the technologies that play a massive role in this period. These technologies are outlined below:</p><h4>1. Internet of Things (IoT)</h4><p>Information is the most crucial asset for anyone seeking to understand something. The Internet of Things has been the hub for developing devices connected to the internet, and the number of devices built on this technology is increasing exponentially. Information is widely available and is changing how people perceive and comprehend things. In general, the Internet of Things has revolutionized various fields of work.</p><h4>2. Artificial intelligence (AI)</h4><p>Artificial intelligence is one of the most transformative technologies ever invented. Applications such as speech recognition, Google Assistant, and Socratic have impacted many people’s lives. Artificial intelligence inventions have also helped improve the productivity of many remote workers and supported work-life balance.</p><h4>3. Virtual Reality and Augmented Reality</h4><p>Virtual reality and augmented reality are technologies widely associated with gaming. However, many have used them for other purposes that benefit society. For example, one can easily explore a place and know its exact features without needing to be physically there, saving both time and money. These technologies have even gone as far as training student pilots to fly a plane before they ever step into a real one.</p><h4>4. Blockchain</h4><p>In this age of technology, protecting your information is essential. Blockchain technology helps ensure that your information is safe and secure. 
Many industries have been drawn to the technology’s unique features, such as its distributed ledger and cryptography. Blockchain is still young and has growth potential that is not limited to cryptocurrencies. In a few years, industries such as insurance, banking, and healthcare, among others, will change the way they secure data with the help of blockchain.</p><h4>5. Robotics</h4><p>The creation of machines has been a focus of humankind since the first industrial revolution. Robotics is another advanced step, creating intelligent machines that assist humans in various industries. Robotics enables automation and mass production with minimal labor. Although robots remain a subject of debate regarding human job security, robotics creates many opportunities within the fourth industrial revolution.</p><h3>Conclusion</h3><p>The fourth industrial revolution marks the beginning of even more remarkable achievements in human history. Everything, including humans, is getting smarter by the day. However, we must ensure that every invention of this era benefits mankind and improves its welfare.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=db03de20d112" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Seven Groundbreaking Inventions Made by Women in Tech]]></title>
            <link>https://thulieblack.medium.com/seven-groundbreaking-inventions-made-by-women-in-tech-1697b8aa00b8?source=rss-3060862dc839------2</link>
            <guid isPermaLink="false">https://medium.com/p/1697b8aa00b8</guid>
            <category><![CDATA[women-in-tech]]></category>
            <category><![CDATA[black-women]]></category>
            <category><![CDATA[women-in-stem]]></category>
            <dc:creator><![CDATA[V. Thulisile Sibanda]]></dc:creator>
            <pubDate>Mon, 29 Aug 2022 11:45:20 GMT</pubDate>
            <atom:updated>2022-08-29T11:45:20.678Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YX4kohweWs2zupIe-iwGUA.jpeg" /><figcaption>Seven groundbreaking inventions made by women in tech</figcaption></figure><h3>Introduction</h3><p>Inventions have contributed to the world we see now more than ever before. From Albert Einstein and Isaac Newton to Mark Zuckerberg, these are a few of the individuals who blazed the trail with inventions that have shaped the world we see today. However, there are many incredible inventions made by women of which many of us are unaware. In this article, we are going to outline groundbreaking inventions made by women in tech who turned their ideas into reality.</p><h4>1. Hedy Lamarr — WiFi</h4><p>WiFi is a powerful tool of the modern age that has made once-impossible situations possible. This invention lets people connect globally without the limitations of distance. Virtual meetings, working remotely, and bringing the world into the comfort of your own home are now possible thanks to WiFi. Most of us now enjoy it; however, it is also important to appreciate and understand the efforts of the individuals who made this luxury possible.</p><p>Hedy Lamarr, a Hollywood actress of the 1930s and ’40s, brought about the idea that led to what we call WiFi technology today. It came from her desire to change the outcome of the war. With her co-inventor George Antheil, Lamarr helped the navy communicate across changing frequencies without leaking information to the enemy.</p><p>The frequency-hopping technology was patented in the 1940s, giving rise to secure military communication technology. It later evolved into what we now benefit from as WiFi. Not only did Hedy exhibit her beauty, she proved to be beauty with brains.</p><h4>2. 
Shirley Ann Jackson — Caller ID</h4><p>Shirley Ann Jackson was an American physicist whose work contributed to caller ID technology. She worked at a telecommunications company from 1976, where her research fed into the development of caller ID. Due to her extensive knowledge coupled with her research abilities, she was able to help devise such a helpful technology.</p><p>Everyone can agree that caller ID has been very useful in our daily lives. We can now save numbers under people’s names on our devices and recognize who is calling. This mechanism has proven useful over the years and has saved a lot of time that would otherwise be spent memorizing numbers. With confidence, we can conclude that calling people is much easier and simpler, thanks to Shirley Ann Jackson.</p><h4>3. Ann Tsukamoto — Stem Cell Isolation</h4><p>When conducting medical research, it is crucial to eliminate all the unwanted factors and concentrate on the subject being tested. Clinical properties within the human body tend to interfere when this elimination is not done, resulting in vague conclusions that make it difficult to advance a project.</p><p>Ann Tsukamoto developed an interest in this matter while working at a company specializing in stem cell research and development. Tsukamoto co-invented a cutting-edge technique for isolating stem cells in order to conduct clinical experiments. With her technology, she discovered a hematopoietic stem cell in the blood, which led to stem cell transplant technologies focused on cell-related diseases.</p><h4>4. Roberta Williams — Graphic Computer Game Designer</h4><p>After a long day’s work, we need time to relax and gather energy for the following day. People relax in different ways: some do yoga, some soak in a hot bath, and some find gaming relaxing, too.</p><p>Roberta Williams, known as a gaming genius, designed and created some of the best-known adventure games in history. 
With her detailed storylines and intricate puzzles, Roberta created memorable gaming experiences for people around the globe.</p><h4>5. Radia Perlman — Spanning Tree Protocol</h4><p>Radia Perlman played a role in what we call the Internet today. As a network engineer, her invention of the Spanning Tree Protocol contributed widely to the development of Ethernet networks. She ultimately shaped the way data moves and is organized.</p><h4>6. Marie Van Brittan Brown — CCTV</h4><p>Imagine a patient goes missing in a hospital, and the hospital personnel cannot find any trace of the patient’s disappearance. This can be very traumatizing for the patient’s relatives and can easily jeopardize the hospital’s reputation.</p><p>Thanks to Marie, a nurse, this challenge was overcome: she invented the CCTV-based home security system. With it, we are now better positioned to keep our environments monitored. Her work was unique; almost every company now uses this technology for better security and accountability.</p><h4>7. Ada Lovelace — Computer Algorithms</h4><p>Born into a noble family, Ada worked with the inventor Charles Babbage and translated a paper describing his Analytical Engine, the design for the first general-purpose computer. She worked closely with Babbage to interpret how the machine would work. Babbage’s machine could produce algebraic patterns, and because of Ada’s fascination with mathematics, she wrote what is regarded as the very first computer algorithm.</p><h3>Conclusion</h3><p>All these women in tech contributed their part to what we call modern-day technology. As technology progresses, we look forward to the future, hoping for more extraordinary inventions. These women, in all their diversity, were not limited by their careers or status. They strove to better the world despite all odds. Their inventions were the result of pure hard work, consistency, and resilience. 
The tech ecosystem can only get better moving forward.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1697b8aa00b8" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Getting Started With Deep Learning III]]></title>
            <link>https://thulieblack.medium.com/getting-started-with-deep-learning-iii-b0418bd85a28?source=rss-3060862dc839------2</link>
            <guid isPermaLink="false">https://medium.com/p/b0418bd85a28</guid>
            <dc:creator><![CDATA[V. Thulisile Sibanda]]></dc:creator>
            <pubDate>Tue, 30 Nov 2021 06:43:53 GMT</pubDate>
            <atom:updated>2025-01-06T14:58:45.672Z</atom:updated>
<content:encoded><![CDATA[<p>One of the most important properties of neural networks is their ability to improve performance by learning from past experience. Learning is an iterative process in which the network adjusts its applied parameters, becoming more knowledgeable with every iteration. Before we go ahead and see in detail how the learning actually happens, let’s take a look at two important parameters that play a major role in learning.</p><h3>Weights and Biases</h3><p>Weights and biases are learnable parameters that transform input data within the network’s hidden layers. Both of these parameters are adjusted during training. A weight determines how much influence a change in the input will have upon the output, while a bias offsets the weighted sum, shifting the neuron’s output away from zero.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*QHeCwFNr-SVSE2ml.png" /></figure><p><em>source DeepAi</em></p><h3>Learning Process</h3><p>Neural networks are composed of layers of neurons (computational units) connected to each other. Each neuron transforms input data by multiplying the incoming values by weights. Each neuron also has an activation function that defines its output; the activation function introduces non-linearity into the model and ensures that the values passed through the network lie within the expected range. This process is repeated until the final output layer can provide predictions related to the task. Neural networks build knowledge from datasets where the right answer is provided in advance, then learn by tuning themselves to find the answers on their own in an iterative process of going ‘back and forth’ that increases the accuracy of predictions. 
The process of going ‘forth’ is called forward propagation, while going ‘back’ is called back propagation.</p><h3>Activation Function</h3><p>An activation function is a function used in neural networks that defines how the weighted sum of the input values is transformed into an output. Activation functions are useful because they add non-linearity to the network. Non-linearity means that the output cannot be reproduced from a linear combination of the inputs, and non-linear functions help networks learn complex information in order to provide accurate predictions. We use non-linear activation functions because, if we used a linear activation function, the neural network would just produce a linear function of its inputs: no matter how many layers the network had, it would behave like a single layer.</p><p>Let’s look at one of the most commonly used non-linear activation functions.</p><h4>Sigmoid</h4><p>The sigmoid function, σ(x) = 1 / (1 + e<sup>−x</sup>), squashes any real-valued input into the range (0, 1), which makes it convenient for representing probabilities.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/0*7N8ZcIaDImyq95e6.png" /></figure><h3>Forward Propagation</h3><p>Forward propagation is the process whereby the input data is passed into the network and all neurons apply their transformation (weighted sum plus activation function) to the data received from the previous layer, passing the result on to the next layer. The data flows in a forward direction until the final layer is reached, which produces the output data (predictions). The input data is fed only in a forward direction; data does not flow in the reverse direction during the generation of the output. Such configurations are known as feed-forward networks. 
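To make the forward pass concrete, here is a minimal plain-Python sketch (the layer sizes and weight values are made up for illustration, not taken from the article): each neuron computes a weighted sum of its inputs plus a bias, then passes the result through the sigmoid activation, layer by layer.

```python
import math

def sigmoid(x):
    # Sigmoid activation: squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron: weighted sum of inputs plus its bias, then the activation.
    return [
        sigmoid(sum(w * x for w, x in zip(neuron_weights, inputs)) + b)
        for neuron_weights, b in zip(weights, biases)
    ]

# Hypothetical parameters for a 2-input, 2-hidden-neuron, 1-output network.
hidden_w = [[0.5, -0.6], [0.1, 0.8]]
hidden_b = [0.0, 0.1]
output_w = [[1.2, -0.3]]
output_b = [0.05]

x = [1.0, 0.5]                                  # input features
hidden = layer(x, hidden_w, hidden_b)           # forward pass: input -> hidden
prediction = layer(hidden, output_w, output_b)  # hidden -> output
print(prediction)  # a single value in (0, 1)
```

Note the data only ever moves from one layer to the next — exactly the feed-forward flow described above.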
The data needs to flow only forward because a reverse connection would form a cycle and prevent the network from generating output (hence the name feed-forward networks).</p><h3>Loss Functions</h3><p>The next phase is to use a loss function to estimate the error, measuring the model’s predictions against the actual values. A loss function is used to optimize a model by reducing the gap between predicted and actual values. Ideally we want as little loss as possible, which is why the parameters are adjusted gradually until we achieve the desired results.</p><h3>Back Propagation</h3><p>Once the loss has been calculated, the information is propagated backward, hence the name backpropagation. It involves computing the gradient backward through the feed-forward network, from the last layer to the first. This gives us the gradient of the loss function with respect to each of the model’s weights, and the weights are then updated individually to gradually reduce the loss over the training iterations. <em>see image below</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/0*_5OmVHCBSBEWKirw.png" /></figure><p><em>source </em><a href="http://toress.ai"><em>toress.ai</em></a></p><p>What this process represents is that we drive the loss as low as possible with each pass back through the network. This technique is called gradient descent.</p><h3>Gradient Descent</h3><p>Gradient descent is an optimization algorithm that trains a model by minimizing the loss function as the parameters are updated. The model keeps adjusting its parameters until the loss is at, or close to, its minimum. For the gradient to reach that minimum, the learning rate should be set at an appropriate value. 
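A minimal sketch of gradient descent on a toy one-parameter problem, assuming the simple quadratic loss L(w) = (w − 3)², whose gradient is 2(w − 3):

```python
# Gradient descent on the toy loss L(w) = (w - 3)**2.
# The minimum is at w = 3; the learning rate controls the step size.

def gradient(w):
    return 2 * (w - 3)

w = 0.0
learning_rate = 0.1   # too high overshoots, too low converges slowly
for _ in range(100):
    w -= learning_rate * gradient(w)   # step against the gradient

print(round(w, 4))  # 3.0, the minimum of the loss
```

Try `learning_rate = 1.1` and the updates diverge instead of converging, which is exactly the “too high” failure mode described above.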
If the learning rate is too high, the minimum will be overshot and never reached; if it is too low, the gradient will take a long time to get there. That is why the learning rate shouldn’t be too high or too low.</p><p>Let’s look at the types of gradient descent.</p><ul><li><strong>Batch Gradient Descent</strong> — batch gradient descent calculates the error for each example in the training dataset but only updates the parameters after all the training examples have been evaluated. Its advantages are that it is computationally efficient and produces a stable error gradient and stable convergence, though that convergence point isn’t always the best the model can achieve.</li><li><strong>Stochastic Gradient Descent</strong> — stochastic gradient descent (SGD) updates the parameters one training example at a time within each epoch. The advantage of SGD is that its frequent updates give a detailed view of the rate of improvement, but those same frequent updates make it more computationally expensive than batch gradient descent.</li><li><strong>Mini-batch Gradient Descent</strong> — mini-batch gradient descent combines the batch and stochastic concepts: it splits the training dataset into small batches and performs an update on each of them. This strikes a balance between the computational efficiency of batch gradient descent and the speed of stochastic gradient descent, which is why it is the go-to algorithm when training a neural network and the most common type of gradient descent in deep learning.</li></ul><h3>Overfitting</h3><p>Overfitting is a concept in machine learning where a model fits the training data so closely that it performs poorly on unseen data. Generally we want models to perform well on both the training data and the test data (unseen data). 
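The three gradient descent variants above differ mainly in how many samples feed each parameter update; a small counting sketch (the dataset size and batch size are made up):

```python
# The three gradient descent variants differ in how many samples
# feed each parameter update. Dataset size here is illustrative.

n_samples = 1000

def updates_per_epoch(batch_size):
    # one parameter update per batch; ceiling division covers a ragged last batch
    return -(-n_samples // batch_size)

print(updates_per_epoch(n_samples))  # batch GD: 1 update per epoch
print(updates_per_epoch(1))          # stochastic GD: 1000 updates per epoch
print(updates_per_epoch(32))         # mini-batch GD: ceil(1000/32) = 32 updates
```
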
Overfitting happens when the model trains for too long and starts to memorize the data. A very low error rate and high accuracy on the training data, combined with poor performance on unseen data, are good indicators of overfitting. <em>see example below</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/0*IWdThba7nTD2UDh4.png" /></figure><p><em>example of an overfitting model; source: Wikipedia</em></p><p>There are a few methods one can try to avoid overfitting:</p><ol><li>Less training — reducing the training epochs or training time helps by preventing the model from fitting noise and memorizing the data.</li><li>Reducing neural layers — a complex model is more likely to overfit, so removing layers makes the model less complex and in turn reduces overfitting.</li><li>Dropout — dropout is a form of regularization, a family of techniques used to constrain models from learning overly complex patterns. Applying dropout randomly ignores a fraction of the neurons in a given layer during training, which reduces overfitting.</li></ol><h4>Conclusion</h4><p>Deep learning plays a crucial role in our modern civilization. From demographic statistics to healthcare, it has been a tool for devising viable solutions to real-life problems. It is very important for those who want to dive into this pool of unlimited knowledge to build a full comprehension of it, starting from the basics provided here. 
With these few basic insights, I believe one is able to navigate a way forward and help create the brighter future we all hope for, with a magic touch of AI.</p><p>If you missed the previous sections of this article you can find</p><h4>Resources</h4><p><em>Originally published at </em><a href="https://thulieblack.hashnode.dev/getting-started-with-deep-learning-iii"><em>https://thulieblack.hashnode.dev</em></a><em> on November 30, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b0418bd85a28" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Getting Started With Deep Learning II]]></title>
            <link>https://thulieblack.medium.com/getting-started-with-deep-learning-ii-b0301aa830cf?source=rss-3060862dc839------2</link>
            <guid isPermaLink="false">https://medium.com/p/b0301aa830cf</guid>
            <dc:creator><![CDATA[V. Thulisile Sibanda]]></dc:creator>
            <pubDate>Tue, 16 Nov 2021 07:06:46 GMT</pubDate>
            <atom:updated>2025-01-06T14:57:17.555Z</atom:updated>
            <content:encoded><![CDATA[<p>Data comes in different forms, such as images or text, but one thing is certain in deep learning: models don’t understand words or sentences, they only process tensors. In this section we are going to delve into how text data is encoded into tensors, exploring different methods of achieving this, and hopefully by the end we will have a basic understanding of how things work.</p><h4>One hot encoding</h4><p>One-hot encoding is a method where each word is encoded as a vector whose elements are all 0 except for a single 1. Each one-hot vector is unique, meaning every word in a sentence is assigned its own vector; there are no instances of two words sharing the same vector. <em>see image below</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/443/0*-Iq2OWHxGHrJaBC6.png" /></figure><p>In the previous <a href="https://thulieblack.hashnode.dev/getting-started-with-deep-learning">article</a> we touched on why floating point numbers are the preferred data type for tensors. We are now going to see this in practice. The image above shows a representation of a one-dimensional tensor, but in order to feed this information to the model, we need to convert the vectors to a (floating point) tensor.</p><p><em>But why go through the trouble of converting data types instead of just feeding the vectors to the model?</em></p><p>When we look at the dimensions of the vector, we see that all the 0 elements carry no information. This is a perfect example of wasted memory; remember, each value has storage assigned to it, yet those elements do nothing to describe the data, when the goal is the more information the better. 
This is why we convert the vector to a tensor: to save memory and pass meaningful data to the model.</p><h4>Embeddings</h4><p>We have established a sense of how one-hot encoding works; however, the technique is only effective for small datasets. <em>How then do we apply it to large datasets without running out of memory while providing more dimensionality?</em> To solve this issue, we can use word embeddings. A word embedding is a dense vector that represents a word in a multidimensional space, where words with similar meanings have similar representations. For instance, given the two sentences <em>‘i love my dog’</em> and <em>‘i love my cat’</em>, cat and dog will share a similar representation (encoding). <em>see example below</em>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/472/0*C6h5Q5z6iFbbJrhZ.PNG" /></figure><p>Let’s explore two commonly used approaches to word embeddings.</p><ol><li><strong>Embedding Layer</strong> — an embedding layer is a word embedding learned jointly with a neural network model. It can be understood as a look-up table that maps one-hot vectors’ indices to dense vectors (embeddings). The dimensionality of the embedding layer can be chosen to suit the task; the weights are randomly initialized and gradually adjusted during training. Once trained, the learned word embeddings roughly encode similarities between words.</li><li><strong>Word2vec</strong> — word2vec is a method that turns words in text into vectors, based on the idea of representing words in their context. There are two learning models that can be used within word2vec:</li></ol><ul><li>Continuous Bag of Words — this model learns the embedding by predicting the current word based on its context. 
The intuition behind this model is quite simple: given a sentence like <em>‘I like my dog’</em>, we choose our target word to be <em>‘like’</em>, and our context words will be <em>‘i my dog’</em>. The model takes the distributed representations of the context words and tries to predict the target word.</li><li>Continuous Skip-Gram — this model learns by predicting the surrounding words given the current word. It takes the current word as input and tries to accurately predict the words before and after it. Given the sentence <em>‘let’s go play outside’</em>, we can choose our input word to be <em>‘go’</em> and the model will try to predict <em>‘let’s play outside’</em>.</li></ul><p>Both models learn about words from their usage context, where the context is defined by the neighboring words. The main benefit of these algorithms is that they can take huge numbers of words from a large dataset and efficiently produce high-quality embeddings, using little storage space, providing more dimensions, and taking less computation time.</p><h3>Tokenizers</h3><p>Tokenization is a technique that splits text, breaking it down into smaller pieces called tokens. For example, ‘The cat is sleeping’ can be split into four tokens: ‘The’, ‘cat’, ‘is’, ‘sleeping’. <em>see example below</em>.</p><p>Tokens don’t just split text into pieces; they also give machines the ability to read it. 
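A naive whitespace tokenizer illustrates the splitting described above; real tokenizers such as spaCy apply many more language-specific rules:

```python
# Naive tokenization: split text on whitespace.
# Production tokenizers handle punctuation, contractions, etc.

text = "The cat is sleeping"
tokens = text.split()
print(tokens)  # ['The', 'cat', 'is', 'sleeping']
```
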
This plays a pivotal role when it comes to model building and text pre-processing.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/850/0*LbnC8v5ppujfCv1q.png" /></figure><p><em>text representation of tokenization</em></p><p>Let’s look at three of the most commonly used tokenization approaches and see how they work.</p><h4>Spacy</h4><p>spaCy is a natural language processing library whose tokenizer takes text as input and outputs a sequence of token objects stored in a Doc object. The Doc object is a container holding the token sequence, and constructing it requires a vocabulary (Vocab) that contains the word strings. The Vocab is a storage class used to hold data during tokenization in order to save memory. When tokenizing, spaCy applies language-specific rules: for example, punctuation found at the end of a word is usually split off, but a word like <em>‘N.Y.’</em> remains a single token.</p><h4>Subword</h4><p>As the name entails, subword tokenization splits text into subwords. For example, ‘unfriendly’ can be split as ‘un-friend-ly’. One of the common subword methods is Byte Pair Encoding (BPE), which is popular when building transformer-based models. Subword tokenization can start from individual characters, taking one character of the text at a time before grouping them. BPE effectively tackles the problem of out-of-vocabulary words, and also strikes a good balance between performance and tokenization quality.</p><h4>What is BPE</h4><p>BPE is a data compression algorithm in which the most frequent pair of bytes in the data is replaced by a byte that does not appear in the data. Suppose we have the data <em>aaabdaaabac</em>, which needs to be encoded. The byte pair <em>aa</em> occurs most frequently, so we replace it with <em>Z</em>, which does not occur in our data. 
So now we will have <em>ZabdZabac</em>, where <em>Z = aa</em>. The next most common byte pair is <em>ab</em>, so we replace it with <em>Y</em>. We now have <em>ZYdZYac</em>, where <em>Z = aa</em> and <em>Y = ab</em>. The only pair left is <em>ac</em>, which occurs only once, so we don’t encode it. We can apply byte pair encoding recursively to encode <em>ZY</em> as <em>X</em>. Our data has now been transformed into <em>XdXac</em>, where <em>X = ZY</em>, <em>Y = ab</em>, and <em>Z = aa</em>. There’s no need to compress further, as no byte pair appears more than once. To decompress the data, we perform the replacements in reverse order. <em>see example below</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/728/0*eQUAsHAx1WI4hHPT.gif" /></figure><p><em>source: jaketae</em></p><p>We have seen how BPE works in general, but for text tokenization things work a little differently, since we don’t need any compression. BPE ensures that the most frequent words are represented in the vocabulary as single tokens, while less frequent words are broken down into two or more tokens. Suppose we have a set of preprocessed words; these words are split into characters, and the BPE algorithm then looks for the most frequent pair, merges it, and repeats the same iteration until a set limit (such as a target vocabulary size) is reached. This is why BPE is one of the most recommended tokenizers for tasks like machine translation.</p><h4>Sentence Piece</h4><p>SentencePiece is a language-independent subword tokenizer and detokenizer designed for neural text processing, where the size of the vocabulary is predefined before the text is fed to the neural model. It implements subword segmentation algorithms as a parameter of the trainer. To achieve this, SentencePiece uses four main components: a normalizer, a trainer, an encoder, and a decoder.</p><p>The Normalizer converts logically equivalent characters in the text into canonical, simpler forms. 
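As a brief aside, the byte-pair replacement walked through earlier can be sketched in a few lines. This is a toy illustration, not a production BPE implementation; when two pairs are equally frequent it may merge them in a different order than the walkthrough, but the final compressed string is the same:

```python
from collections import Counter

def bpe_compress(data, symbols="ZYXWV"):
    """Repeatedly replace the most frequent adjacent pair with a new symbol."""
    rules = {}
    for symbol in symbols:
        pairs = Counter(data[i:i + 2] for i in range(len(data) - 1))
        pair, count = pairs.most_common(1)[0]
        if count < 2:          # stop once no pair appears more than once
            break
        data = data.replace(pair, symbol)
        rules[symbol] = pair
    return data, rules

def bpe_decompress(data, rules):
    # undo the replacements in reverse order
    for symbol, pair in reversed(list(rules.items())):
        data = data.replace(symbol, pair)
    return data

compressed, rules = bpe_compress("aaabdaaabac")
print(compressed)                         # XdXac
print(bpe_decompress(compressed, rules))  # aaabdaaabac
```
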
The Trainer then trains the subword segmentation model from the normalized corpus. The Encoder internally runs the Normalizer and converts the input text into a subword sequence using the model trained by the Trainer, and the Decoder performs the corresponding postprocessing, converting a subword sequence back into normalized text. As with BPE, SentencePiece achieves high-quality tokenization and reduces errors when working with longer strings of characters. It also handles languages written without spaces, like Chinese or Japanese, with the same ease as English and French.</p><h4>Conclusion</h4><p>Now that we have seen how text data can be converted to tensors and established an understanding of how different tokenizers work, in the last section of the <a href="https://thulieblack.hashnode.dev/getting-started-with-deep-learning-iii">article</a> we’ll finally look at how models learn and see how different parameters influence a model’s performance.</p><p>If you missed the first part of this article, you can find it <a href="https://thulieblack.hashnode.dev/getting-started-with-deep-learning">here</a>.</p><h4>Resources</h4><p><em>Originally published at </em><a href="https://thulieblack.hashnode.dev/getting-started-with-deep-learning-ii"><em>https://thulieblack.hashnode.dev</em></a><em> on November 16, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b0301aa830cf" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Getting Started With Deep Learning]]></title>
            <link>https://thulieblack.medium.com/getting-started-with-deep-learning-a3a828f63990?source=rss-3060862dc839------2</link>
            <guid isPermaLink="false">https://medium.com/p/a3a828f63990</guid>
            <dc:creator><![CDATA[V. Thulisile Sibanda]]></dc:creator>
            <pubDate>Mon, 08 Nov 2021 06:38:59 GMT</pubDate>
            <atom:updated>2025-01-06T14:56:41.056Z</atom:updated>
            <content:encoded><![CDATA[<p>In deep learning, data is the most pivotal element, and it comes in different formats such as images, text, etc. Whatever the format, simple or complex, the data needs to be converted to numeric form (tensors) for easy interpretation and storage, and ultimately for feeding into a model that can be useful in our day-to-day activities. This article aims to give a basic understanding of some of these building blocks: what they do and how they work. It is divided into three segments; be sure to follow all of them to build a full comprehension piece by piece.</p><h3>What are tensors</h3><p>Tensors are data structures that generalize scalars, vectors, and matrices to higher dimensions. A one-dimensional tensor can be represented as a vector, while a two-dimensional tensor can be represented as a matrix (see the image below).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/541/0*T4CjYw4WSnQJUAbm.PNG" /></figure><p><em>source: Deep Learning with PyTorch</em></p><p>Even though we can think of a tensor as a multi-dimensional matrix, it is important to note that tensors are dynamic: they can change shape as mathematical operations are performed on them.</p><h3>Why Tensors Not Numpy Arrays</h3><p>NumPy is a Python library that performs advanced mathematical computations and is very popular for providing multidimensional arrays. Although tensors operate much the same way as NumPy arrays, tensors can perform faster computations on graphics processing units (GPUs), which has made them the most commonly used data structure in deep learning. Tensors in some frameworks are also immutable, meaning their values cannot be changed once created, so they behave consistently however they are used. 
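The scalar/vector/matrix hierarchy described earlier can be sketched with plain nested lists; in practice these would be framework tensors (e.g. a `torch.Tensor`):

```python
# Scalars, vectors, and matrices as increasingly nested structures.
# (In practice these would be framework tensors, e.g. torch.Tensor.)

scalar = 3.14                      # 0-dimensional
vector = [1.0, 2.0, 3.0]           # 1-dimensional
matrix = [[1.0, 2.0], [3.0, 4.0]]  # 2-dimensional

def ndim(x):
    # count nesting depth: a rough stand-in for a tensor's dimensionality
    d = 0
    while isinstance(x, list):
        d += 1
        x = x[0]
    return d

print(ndim(scalar), ndim(vector), ndim(matrix))  # 0 1 2
```
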
Tensors also provide low-level implementations of numerical data structures, which boosts performance and efficiency beyond what NumPy arrays offer.</p><h3>Dimensions</h3><p>Before we talk about how tensors are stored in memory: we have mentioned that tensors are multidimensional, which raises the question of what a dimension is. Basically, a dimension is an aspect or detail of something. Dimensions give in-depth detail about our data, and more detail is usually better; but just as Mark Twain said, <em>too much of anything is bad</em>, and that applies here as well. Take for instance a greyscale image, which is represented by a collection of scalars arranged in a grid of height and width (pixels); this automatically gives two dimensions, <em>H × W</em>. If we add an RGB (Red, Green, and Blue) color channel, making the dimensions <em>H × W × C</em>, this adds more detail to the image and captures the necessary data; but if we were to add yet another dimension, the very essence of the image would become vague. So even though more dimensions capture more data, it is imperative to keep only the details that are actually necessary, to avoid wasting resources. In a nutshell, dimensions play a pivotal role for tensors and for data as a whole.</p><h3>Storage</h3><p>Each tensor has a storage property assigned to it that holds its data. Let’s look at what storage is and how tensors are stored. Storage is a one-dimensional array that contains values of a given data type. For values to be indexed in storage, they have to be allocated in contiguous chunks, meaning that neighboring values are next to each other in memory. Since we mostly work with high-dimensional data, how then do we store multi-dimensional matrices in memory when storage has only one dimension? 
There are various ways to tackle this puzzle, but one simple approach is the stride. A stride is an indexing device that indicates how many elements must be skipped over in storage to get to the next element along a given dimension. That’s a lot to take in, so let’s take the example of a 3 by 3 tensor stored row by row: all the elements of the first row come first in storage, then the second row, and finally the third. The stride is then (3, 1): skip 3 elements in storage to move down one row, and 1 element to move along one column. In simple terms, the stride tells us how the rows and columns are laid out in flat storage. see the example below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/700/0*NR6hV2vJ0Ha4aVqJ.jpeg" /></figure><p><em>source: Deep Learning with PyTorch</em></p><h3>Why 32-Bit floating point numbers are the preferred data type for tensors</h3><p>Tensors can hold different data types, mainly integers and floating point numbers. Now that we have seen how tensors are stored, let’s talk about why floating point numbers are used more than integers. Deep learning involves processing data with hundreds or thousands of dimensions; imagine the computational time and memory needed to store all of that information. This is why PyTorch tensors default to the 32-bit float data type, which takes less computing time and less memory than higher-precision alternatives while retaining enough precision for training. Using these floating point numbers strikes a good balance between model accuracy, performance, and efficiency. Whenever you hear about deep learning, think of floating point tensors.</p><h3>Batch Size</h3><p>Batch size is a hyperparameter in machine learning: the number of samples (tensors) passed through the network in one iteration. Imagine we have a plate of rice; do we eat it in one go? That’s almost impossible unless you want an upset stomach. The same applies to our model: we need to feed the data in portions in order to obtain good results. 
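Splitting a dataset into batches can be sketched as follows (the sample data is synthetic):

```python
# Splitting a dataset into batches before feeding it to a model.
# 10,000 samples with batch size 100 gives 100 batches of 100.

samples = list(range(10_000))   # stand-in for real training samples
batch_size = 100

batches = [samples[i:i + batch_size]
           for i in range(0, len(samples), batch_size)]

print(len(batches), len(batches[0]))  # 100 100
```
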
The batch size affects how the model performs, and it is worth trying different sizes depending on the size of the data. For instance, if we have 10,000 samples of data, we can set the batch size to 100; the data will then be grouped and passed to the model in sets of 100. We will see the impact of batch size later in the series.</p><h4>Conclusion</h4><p>In the next <a href="https://thulieblack.hashnode.dev/getting-started-with-deep-learning-ii">article</a>, we will delve into ways of converting text data to tensors and look at some of the most popular tokenization algorithms used in deep learning.</p><h4>Resources</h4><p><em>Originally published at </em><a href="https://thulieblack.hashnode.dev/getting-started-with-deep-learning"><em>https://thulieblack.hashnode.dev</em></a><em> on November 8, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a3a828f63990" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Recommendation System]]></title>
            <link>https://thulieblack.medium.com/recommendation-system-8aa2caa3a129?source=rss-3060862dc839------2</link>
            <guid isPermaLink="false">https://medium.com/p/8aa2caa3a129</guid>
            <dc:creator><![CDATA[V. Thulisile Sibanda]]></dc:creator>
            <pubDate>Fri, 22 Oct 2021 06:57:22 GMT</pubDate>
            <atom:updated>2025-01-06T14:54:52.013Z</atom:updated>
            <content:encoded><![CDATA[<p>In the past weeks, I had the opportunity to explore different concepts in deep learning and stumbled upon a very interesting topic: recommender systems, and the pivotal role machine learning plays in creating them. In this article, we are going to delve into the basics of recommender systems.</p><p>A recommender system is a class of information filtering system that anticipates the rating or preference a user would give to an item. In simple terms, recommender systems are software tools used to provide suggestions to a user according to their requirements.</p><p>Recommendation systems use different techniques, but we are going to focus on the two most popular filtering approaches.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*PgdREoWEOmCwxpMZ" /></figure><h4>Collaborative Filtering</h4><p>Collaborative filtering (CF) is the most commonly used technique in designing recommender systems; popular websites like Netflix, IMDb, and Amazon make use of it. Collaborative filtering works by gathering ratings from many users and predicting what a given user will like by looking at similarities with other users.</p><p>Different types of collaborative filtering:</p><ul><li>Memory-Based Filtering — this technique uses the user rating information directly to estimate the similarities between users and items and make predictions/recommendations.</li><li>Model-Based Filtering — in this technique we create a model by extracting information from the rating dataset (using data mining) and use that model to make recommendations/predictions.</li></ul><h4>Content-Based Filtering</h4><p>Content-based filtering (CBF) algorithms recommend items to a user based on item similarity. The best-matching items are found by comparing candidate items with the items the user has previously rated. 
For example, if I go to an online store and like a particular Android phone, a content-based filtering algorithm will recommend pages related to Android devices.</p><h4>Getting Started</h4><p>Now that we have laid the foundation for understanding recommender systems, let’s delve into what is needed to start building a collaborative filtering recommendation model.</p><h4>Data</h4><p>Data is the most important must-have when building deep learning applications. In this particular case, we’ll need data that contains a set of items and users who have already rated or reacted to those items. The ratings can range from 1 to 5, from most disliked to most liked. Data of this type usually forms a matrix in which each row contains the ratings given by one user and each column contains the ratings received by one item.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/477/0*Zz8Ao5SP6Cyxaurk" /></figure><p><em>(ratings matrix)</em></p><p>In the example shown above, we have five users who have rated five items. Some cells in the matrix don’t contain any information, as it is unlikely that users rate or react to every item they come across. A matrix dominated by such empty cells is called <em>sparse</em>, while a mostly filled matrix is called <em>dense</em>.</p><h4>Steps Involved</h4><p>To build a system that can automatically recommend items to users based on their preferences, the first step is to find similar items and users. 
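One common way to score that similarity from the ratings matrix is cosine similarity; a minimal sketch with made-up rating vectors, where 0 marks an unrated item:

```python
import math

# Cosine similarity between two users' rating vectors (0 = unrated).
# The ratings below are made up for illustration.

user_a = [5, 3, 0, 4, 0]
user_b = [4, 0, 0, 5, 1]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(round(cosine_similarity(user_a, user_b), 3))  # ≈ 0.873
```

A value near 1 means the two users rate items similarly, so one user's ratings can help predict the other's.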
The next step is to predict the ratings for the items that a particular user has not yet rated.</p><p>To get started, we need to solve the following problem statements:</p><p><strong>Approach</strong></p><ol><li>Determine which users or items are similar to one another.</li><li>Use those similarities to estimate the rating a user would give to an item, based on the ratings of similar users.</li><li>Measure the accuracy of the ratings that were calculated.</li></ol><ul><li>Because collaborative filtering covers a family of algorithms, there are multiple ways to find similarities between users and items, and multiple ways to calculate ratings based on similar users. Whichever approach is chosen, the end result is still a collaborative filtering method.</li><li>One crucial thing to bear in mind: in the collaborative filtering approach, similarities are not calculated from factors such as the age of users or the genre of movies, but strictly from the ratings (either implicit or explicit) users give to items.</li><li>To measure the accuracy of your predictions, there are multiple techniques available, such as error calculation.</li></ul><h4>Error Calculating Techniques</h4><p>One of the metrics used to measure the accuracy of your results is the Root Mean Squared Error (RMSE). Here, you predict ratings for a test dataset of user-item pairs whose rating values are known. The difference between a known value and a predicted value is the error. We square all the error values from the test set, find their average (or mean), and take the square root of that average to get the RMSE. Another metric used to measure accuracy is the Mean Absolute Error (MAE). 
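The RMSE and MAE calculations described here can be sketched as follows (the rating values are made up for illustration):

```python
import math

# RMSE and MAE over a test set of known vs. predicted ratings.
# The rating values are illustrative.

actual    = [4, 3, 5, 2]
predicted = [3.5, 3, 4, 3]

errors = [a - p for a, p in zip(actual, predicted)]
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))  # root of mean squared error
mae = sum(abs(e) for e in errors) / len(errors)             # mean of absolute errors

print(rmse, mae)  # 0.75 0.625
```
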
For MAE, we take the absolute value of each error to find its magnitude and then average all the error values.</p><h4>Conclusion</h4><p>Although there are several other kinds of recommender systems, such as demographic and hybrid filtering, it is very important to know that they all come with pros and cons. With that in mind, their purpose and value come from the fact that these systems improve our way of living by removing the burden of wasted time. Users are in a better position to focus on other necessities, as a recommender system lets them find what they need with just the click of a button.</p><h4>Resources</h4><p>1. <a href="https://developers.google.com/machine-learning/recommendation/content-based/basics">developers.google.com/machine-learning/reco..</a></p><p>2. <a href="https://builtin.com/data-science/collaborative-filtering-recommender-system">builtin.com/data-science/collaborative-filt..</a></p><p>3. <a href="https://towardsdatascience.com/essentials-of-recommendation-engines-content-based-and-collaborative-filtering-31521c964922">towardsdatascience.com/essentials-of-recomm..</a></p><p>4. <a href="https://medium.com/analytics-vidhya/movie-recommender-system-using-content-based-and-collaborative-filtering-84a98b9bd98e">medium.com/analytics-vidhya/movie-recommend..</a></p><p><em>Originally published at </em><a href="https://thulieblack.hashnode.dev/recommendation-systems"><em>https://thulieblack.hashnode.dev</em></a><em> on October 22, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8aa2caa3a129" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Fundamentals Of Writing]]></title>
            <link>https://thulieblack.medium.com/the-fundamentals-of-writing-b1252aa49328?source=rss-3060862dc839------2</link>
            <guid isPermaLink="false">https://medium.com/p/b1252aa49328</guid>
            <dc:creator><![CDATA[V. Thulisile Sibanda]]></dc:creator>
            <pubDate>Thu, 15 Jul 2021 08:10:17 GMT</pubDate>
            <atom:updated>2025-01-06T14:55:36.712Z</atom:updated>
<content:encoded><![CDATA[<p>Have you ever thought of an idea or concept so brilliant, yet had no clue how to put it across? Imagination is the most important part of writing, whether it’s a narrative, a description, or a presentation. However, putting pen to paper can sometimes be difficult. That alone can lead to frustration, ultimately quenching the passion one has for writing. This article gives simple steps on how to structure an article and how to effectively hone your writing skills.</p><p>An article comprises four aspects, which I personally call <em>“The 4Ps of the writing process”</em><br>These are:</p><ol><li>The planning process</li><li>The pen-to-paper process</li><li>The pruning process</li><li>The polishing process</li></ol><h3>Planning process</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*2Io43-A2X1lIX6aC" /></figure><p>As a writer, blogger, or even businessperson, it’s very important to note that information is the hub of any article you intend to tackle. Planning makes up as much as 60% of any article, which means most of your energy goes into this phase, as it involves a lot of research and strategizing about how best to achieve what you want to say. You need to be clear on what you want to write to avoid wasting time when drafting your article. Questions such as who the target audience is and what the timeline will be are also settled in this phase.</p><h3>Pen-to-paper process</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/0*lCvNodddc72SIzZg" /></figure><p>Writing is a process, and at this stage the information you have gathered is put into your desired, articulate writing style. Your message has to be clear, precise, and consistent. It’s like construction: the correct mixture and ratio of materials has to be put in place for the structure to stand. 
You are essentially framing the route and rhythm of the article, taking into consideration the audience who will read it. You still have room to add more information and to decide which images to include. The process of progressive proofreading also begins here; I call it progressive because errors and omissions are expected at this stage.</p><h3>Pruning process</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/0*Lezoc3uS4KqlMykV" /></figure><p>The importance of proofreading can never be overemphasized, as it is one of the vital tools in writing. In large establishments, people are employed specifically for this role; they are called editors. At this stage, you will be looking at your article with a critical, constructive eye. Put yourself in the shoes of your readers, and always remember that you are your own first critic. You can never lie to yourself, though most people choose to ignore that. Grammar and punctuation are among the things to look at. Take your time, and if you can get a second opinion, let that person have a look and provide feedback. This will not only help your writing but also improve your confidence and ultimately build your network. We can never live in isolation; we need each other one way or another.</p><h3>Polishing Process</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/0*3UzqzQq2XuPji94P" /></figure><p>This is the final stage of your writing process, and the most important one. At this stage, you leave no stone unturned. Ensuring that the article is in sync with its heading and overall composition is very important at this point. Final adjustments, including the right images and quotes, are also a priority, as they determine how the article will be framed. 
Be proud of what you have achieved at this stage and take it as a win; let your creative mind be enjoyed by your desired audience.</p><h3>Conclusion</h3><p>Writing is not only a skill; it is also an engine that drives the way people think in almost every aspect of our lives. Being able to translate your imagination onto paper, and to let your audience enter your secret place through writing, is a wonderful opportunity that only open-minded writers can appreciate. Remember to stay positive: your first article might not be perfect, but the more you write, the more polished your skills will become.</p><p><em>GIF credits: Bing</em></p><p><em>Originally published at </em><a href="https://thulieblack.hashnode.dev/the-fundamentals-of-writing"><em>https://thulieblack.hashnode.dev</em></a><em> on July 15, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b1252aa49328" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>