<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Samuel Ajala on Medium]]></title>
        <description><![CDATA[Stories by Samuel Ajala on Medium]]></description>
        <link>https://medium.com/@samuelajala01?source=rss-ecea3ce2fcaf------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*c65qLgG0Jr_vEZhHOckB9w.jpeg</url>
            <title>Stories by Samuel Ajala on Medium</title>
            <link>https://medium.com/@samuelajala01?source=rss-ecea3ce2fcaf------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 04:26:15 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@samuelajala01/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[How do Robots learn? A Comprehensive Introduction to Robot Learning]]></title>
            <link>https://medium.com/@samuelajala01/how-do-robots-learn-a-comprehensive-introduction-to-robot-learning-7f46adb4ff7b?source=rss-ecea3ce2fcaf------2</link>
            <guid isPermaLink="false">https://medium.com/p/7f46adb4ff7b</guid>
            <category><![CDATA[robotics]]></category>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[neuroscience]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[reinforcement-learning]]></category>
            <dc:creator><![CDATA[Samuel Ajala]]></dc:creator>
            <pubDate>Wed, 31 Dec 2025 20:18:02 GMT</pubDate>
            <atom:updated>2026-01-02T10:18:57.955Z</atom:updated>
            <content:encoded><![CDATA[<h3>How do Robots learn: A Comprehensive Introduction to Robot Learning</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/746/1*Qm80Ps8zB2h3hpc7ZthvKQ.jpeg" /></figure><p>So…you might have come across a humanoid folding laundry or doing dishes and thought to yourself: what’s with the hype? If a robot can do parkour, like Atlas, folding a towel should be easy, right? Naa…far from it.<br>So, shall we begin?</p><h4>Pre-</h4><p>Before we dive in, let’s do a quick recap and background check. You must have heard the term “Machine Learning”; it’s something of a buzzword these days. To recap: Machine Learning is a branch of Artificial Intelligence in which you teach a model to learn patterns from data and then use it to make predictions or analyses. At a high level, that’s simply it.</p><p>You get a bunch of relevant data, pass it to an ML model, the model figures out the pattern, and then inference can be performed with it.</p><blockquote>Shocker: many robots that we call autonomous do not learn or improve from experience; they are usually explicitly programmed. A popular example is path planning on a drone, which just makes use of path-planning algorithms.</blockquote><h3>What Robots used to be like</h3><p>It’s not that robots have changed; it’s that there are now new and better ways to improve robotics, largely on the intelligence side. Previously, robots were known to follow fixed, pre-programmed rules.</p><h3><strong>Introduction</strong></h3><p>Robot learning is a field that rests at the intersection of Machine Learning and Robotics. 
It is concerned with techniques and methods that allow a robot to learn and improve its behaviour through experience and adapt to its environment.<br>That is about the simplest definition you can get.</p><p>Instead of being told exactly what to do in every situation, a learning robot:<br>- Observes the world through sensors<br>- Takes actions<br>- Evaluates the results<br>- Adjusts future behaviour<br>This is similar to how humans learn from trial and error.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/445/1*UlbkBoWPj9FYpIKZmEQA5w.gif" /><figcaption>Figure folding a towel autonomously</figcaption></figure><h3>The Anatomy of Robot Learning</h3><p>Before we explore why robots need to learn, we must define the terms. We have an agent that observes its environment and performs actions to change its state, with the goal of maximising cumulative reward over time.</p><p>An agent is the learner, the entity that perceives the world, processes information, and makes decisions to achieve a specific goal.</p><p>The Environment is everything external to the agent’s decision-making process. It is the world the robot lives in.</p><h3>The framework: The Markov Decision Process (MDP)</h3><p>In Robot Learning and Reinforcement Learning, we treat the robot as an agent that interacts with an environment.</p><p>This interaction is formally described as a Markov Decision Process (MDP). An MDP provides a framework for modeling decision-making in situations where outcomes are partly random and partly under the control of the robot.</p><h4>The 4 Pillars of the MDP</h4><p>To turn a physical robot into a mathematical model, we break its world down into four variables:</p><ul><li>State (S): The robot’s current &quot;snapshot&quot; of the world. 
This includes its own joint positions and sensor data (like LIDAR or camera frames).</li><li>Action (A): The set of all possible moves the robot can make, such as &quot;rotate motor 15 degrees&quot; or &quot;close gripper.&quot;</li><li>Transition Function (P): The probability that taking action A in state S will lead to a new state S&#39;. In robotics, this is often called the Dynamics Model.</li><li>Reward (R): A numerical signal (+1 for success, -1 for a crash) that tells the robot how well it is doing.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/908/1*tPzLF94IPfd6pbwUs0vRNw.png" /></figure><h3>How the Robot &quot;Decides&quot;: Policies and Value</h3><p>Policy (π)</p><p>A policy is the robot’s strategy: a rule that tells the robot which action to perform based on its current state.</p><p>π(s) → a</p><p>policy: state → action</p><p>For example:</p><ul><li>A robot arm sees the gripper is misaligned → rotate slightly.</li><li>A mobile robot detects an obstacle → turn left.</li><li>A walking robot feels imbalance → shift weight.</li></ul><p>Now, understand that policies come in two forms: deterministic or stochastic.</p><p>A deterministic policy tells the robot to always choose the same action in a given state.</p><p>A stochastic one chooses actions with probabilities, which is useful under uncertainty.</p><p>Now that we understand what policies are, let’s move on. If a policy tells the robot what to do, we still need a way to evaluate:</p><ul><li>how good or bad the action was</li><li>whether the state is desirable or risky</li><li>if the decision will lead to success in the long run.</li></ul><p>That’s where value functions come in. A value function is a mathematical tool that estimates the expected cumulative reward an agent can achieve from a given state or state-action pair. 
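</p><p>To make V(s) and Q(s,a) concrete, here is a minimal, self-contained sketch on a made-up 5-cell corridor (the environment is invented for illustration): tabular value iteration computes V(s) for each cell, and the greedy policy is then read off the action-values.</p>

```python
# Minimal sketch: tabular value iteration on a made-up 5-cell corridor.
# States 0..4; state 4 is the goal (terminal). Actions: -1 (left), +1 (right).
# Reward: +1 for stepping onto the goal, 0 otherwise. Discount gamma = 0.9.
GOAL, GAMMA = 4, 0.9

def step(s, a):
    s2 = min(max(s + a, 0), GOAL)          # walls clamp the move
    return s2, (1.0 if s2 == GOAL else 0.0)

V = [0.0] * 5                              # V(s): expected return from s
for _ in range(100):                       # sweep until the values converge
    for s in range(GOAL):                  # goal is terminal, V stays 0 there
        V[s] = max(r + GAMMA * V[s2]
                   for s2, r in (step(s, a) for a in (-1, +1)))

def q(s, a):                               # Q(s, a) derived from the learned V
    s2, r = step(s, a)
    return r + GAMMA * V[s2]

# Greedy policy: in every cell, pick the action with the higher Q-value.
policy = [max((-1, +1), key=lambda a: q(s, a)) for s in range(GOAL)]
```

<p>As expected, cells nearer the goal get higher values, and the greedy policy always moves right.</p><p>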
There are primarily two types of value functions: the state-value function and the action-value function.</p><ul><li>State-Value function (V(s)): This function returns a value that tells the agent how good it is to be in a particular state.</li><li>Action-Value function (Q(s,a)): This function estimates the return from taking a specific action in a state.</li></ul><p>For example, in a grid-world game where an agent navigates to a goal, V(s) might assign higher values to states closer to the goal, while Q(s,a) would rank moving “up” or “right” as better actions in specific cells.</p><h4>How Policies and Values work together</h4><p>There are two major ways robots learn to make decisions.</p><ol><li>Value-based learning: In this method, the agent learns a value function first; the policy is then derived indirectly from it.</li><li>Policy-based learning: Here, the agent learns the optimal policy directly, without relying on value functions. Here’s how it works: the robot finishes an entire task or episode, looks at the total reward at the end, and then could say — “everything I did in this episode was good, let’s do more of that” or “the whole run was bad, let’s change everything.” The problem here is that because the robot only learns whether its overall actions were good or bad at the end, it’s hard to tell which specific action was the mistake.</li></ol><p>Modern methods have found a way to combine these two; it’s called Actor-Critic. It splits the brain into two neural networks.</p><ul><li>Actor (policy): decides what actions to take.</li><li>Critic (value function): does not move the robot. It instead watches the actor and predicts how much reward the actor will get from the current state.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/670/1*ZXMJYb4khLsrJy9LmkKAHw.png" /></figure><h3>Why Do Robots Need to Learn?</h3><p>Traditional robots work well in structured environments (like factories). 
But the real world is unpredictable, noisy, and full of variation. Learning allows robots to adapt to new environments, handle uncertainty, improve over time, and perform complex tasks (walking, grasping, driving).</p><p>Examples:</p><ul><li>A robot learning to walk without falling.</li><li>A robot arm learning to pick up objects of different shapes.</li><li>A self-driving car learning safer driving strategies.</li></ul><p>Now, there are many ways you can teach a robot to adapt; we’ll be covering the most used ones:<br>1. <strong>Learning from Demonstration</strong>: a.k.a. Imitation Learning. This technique allows an agent to learn how to perform tasks or acquire new skills by observing and mimicking demonstrations. The demonstration is data provided by an expert or another robot that knows how to perform the task.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*4HEhgt1kFjajKNMrdAkaEA.gif" /><figcaption>Learning Acrobatics by watching YouTube</figcaption></figure><p>There are actually two main methods used to achieve Imitation Learning. The first is Behavioural cloning, and the second is Inverse Reinforcement Learning.</p><ul><li>Behavioral cloning reduces robot learning to supervised learning: the robot does not discover how to act—it copies how an expert acted. No rewards. No trial-and-error. Just imitation.</li><li>DAgger (Dataset Aggregation): To understand why we need it, you first have to see where Behavioral Cloning (BC) fails. BC is like a student memorizing a video of a pro driver. The student knows exactly what to do as long as they stay on the perfect line. But the moment they hit a tiny bump and drift two inches off-track, they panic because the video never showed them &quot;how to get back to the centre.&quot; They’ve never seen that state before, so they make another mistake, and another, until they crash. 
DAgger fixes this by iteratively querying the expert on the states the learner actually visits.</li><li>Inverse Reinforcement Learning: classical RL assumes a reward function is known. Have you ever thought — given a robot arm trying to move a cup of water, how do you reward ensuring the cup is always upright? How do you mathematically define that? Inverse RL flips the problem: instead of learning a policy from a reward, IRL learns the reward function from expert behaviour.</li></ul><p>2. <strong>Reinforcement Learning (Trial and Error)</strong>: This method enables training a robot through direct interaction with its environment, where behavior is improved based on success or failure rather than explicit instruction. This method assumes that the agent is not told what actions to take.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*feJAZC-T4UKwVp-xLJtEOQ.png" /></figure><p>3. <strong>Model learning</strong>: also called world modelling, this is a robot learning technique in which the robot learns an internal model of how the world behaves and uses this model to reason, plan, and improve its actions.<br>Instead of learning solely from trial and error in the real environment, the robot learns how its actions change the state of the world, which enables it to think before acting.</p><h3>Learning Through Interactions: Exploration and Feedback</h3><p>Up until this point, we’ve discussed a lot of things, including the foundations and building blocks: policies (what the robot should do in any situation), value functions (which tell how good or bad an action is), and the core learning techniques.</p><p>But real-world Robot Learning doesn’t happen in isolation. 
Robots must interact with their environment, which involves two new elements: exploration (trying new things to discover what works) and feedback (signals that tell the robot whether an action was good or bad).</p><p>In Reinforcement Learning, the robot acts as an agent that takes actions based on its current state, through trial and error over many interactions, with the goal of maximising its cumulative reward over time.</p><p>Exploration is crucial because a robot that repeats the same pattern will never be able to improve or find better strategies.</p><p>Feedback can come from the environment itself (sparse rewards, like +1 for completing a task) or from humans. When humans are involved, they can guide exploration more efficiently and safely, preventing dangerous trials.</p><p>This loop is what makes Robot Learning powerful, but also expensive and risky, which is why simulation is heavily relied on.</p><h3>Why Simulation is used, and why it breaks</h3><p>Simulation allows a robot to be trained orders of magnitude faster and more safely than in reality. Training in a virtual environment can generate vast amounts of data easily, which allows the robot to learn complex skills like manipulation and locomotion that would typically take months or years to achieve on a physical robot.</p><p>There are tools out there for this, like MuJoCo and Isaac Sim, which leverage GPU parallelization to run many simulations at once, further accelerating learning. Policies learned in simulation can then be transferred to real robots (sim-to-real).</p><p>Sim-to-real doesn’t always work, though. The reason is that simulations often oversimplify things such as deformations, signal behaviour, and sensor noise — a policy that works well in simulation can fail when transferred to the real world. This gap largely stems from imperfect modelling.</p><p>But in practice, pure simulation or pure real-world training is rare. 
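</p><p>One common bridge between the two is domain randomization: instead of trusting one imperfect simulator, you randomize its physics every episode so the policy cannot overfit a single model of the world. A toy sketch follows; the parameter names and ranges are invented for illustration and tied to no particular simulator.</p>

```python
import random

# Toy sketch of domain randomization (assumed parameter ranges, not from
# any real simulator): each training episode draws fresh physics parameters,
# so a policy trained across many episodes cannot overfit one imperfect
# model of the world.
def randomized_episode_params(rng):
    return {
        "mass_kg":      rng.uniform(0.8, 1.2),    # +/-20% around nominal mass
        "friction":     rng.uniform(0.5, 1.5),
        "sensor_noise": rng.gauss(0.0, 0.01),     # per-episode sensor bias
        "latency_ms":   rng.choice([0, 10, 20]),  # actuation delay
    }

rng = random.Random(0)  # seeded so the experiment is reproducible
episodes = [randomized_episode_params(rng) for _ in range(1000)]
```

<p>A policy that survives a thousand such perturbed worlds has a far better chance of surviving the one real world.</p><p>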
Most successes come from hybrid approaches.</p><h3>Current Trends in Robot Learning</h3><p>As of December 2025, Robot Learning is exploding thanks to foundation models — large pre-trained models, like LLMs, but for robotics.</p><p>Other key trends include:</p><ul><li><strong>Multimodal Foundation models</strong> like π0 (Pi Zero), a Vision-Language-Action model designed to serve as a general-purpose robot brain — developed by Physical Intelligence.</li><li><strong>Humanoids and Fast-Learning Robots</strong>: Companies like Tesla (Optimus), Figure (the towel-folding robot), and Agility push general-purpose humanoids trained on vast data.</li></ul><p>Thank you for reading to the end of this article; I hope I was able to lay the foundations well, and hopefully it becomes a field you might look into.</p><p>Give the article a like if you enjoyed it, and feel free to follow.</p><h3>Further readings</h3><p>If you’re still curious, these are some amazing sources I found:</p><p><a href="https://vedder.io/misc/state_of_robot_learning_dec_2025.html">State of Robot Learning, December 2025</a></p><iframe src="https://drive.google.com/viewerng/viewer?url=https%3A//hal.science/hal-04060804v1/document&amp;embedded=true" width="600" height="780" frameborder="0" scrolling="no"><a href="https://medium.com/media/fc4792f5d7a56f27231fab914e6f89f3/href">https://medium.com/media/fc4792f5d7a56f27231fab914e6f89f3/href</a></iframe><p><a href="https://www.nvidia.com/en-us/glossary/robot-learning/">What is Robot Learning?</a></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What is LoRa? A Beginner’s Guide to Long-Range IoT Communication]]></title>
            <link>https://medium.com/@samuelajala01/what-is-lora-a-beginners-guide-to-long-range-iot-communication-503bab08f17c?source=rss-ecea3ce2fcaf------2</link>
            <guid isPermaLink="false">https://medium.com/p/503bab08f17c</guid>
            <category><![CDATA[iot]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[communication]]></category>
            <category><![CDATA[lora]]></category>
            <dc:creator><![CDATA[Samuel Ajala]]></dc:creator>
            <pubDate>Fri, 18 Apr 2025 22:59:01 GMT</pubDate>
            <atom:updated>2025-04-18T23:10:00.326Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xA8ThiqQjHhMKf4gLqKp9Q.png" /><figcaption>Can you hear me 10km away? Meet LoRa</figcaption></figure><p>Funny how I heard about LoRa for the first time two days ago; it felt like I had just discovered fire. And I’m in my fourth year in uni😂 LoRa, in short, means Long Range; from that alone, you’d already have an idea of what it does. It is a low-power radio communication technique, just like WiFi and Bluetooth. This means it was created for devices and components that use very little power but need to transmit data over long distances. It uses Chirp Spread Spectrum modulation and was commercialized by Semtech in the mid-2010s, which makes it relatively new. LoRa is like the internet, but for IoT communication.</p><p>You may be asking, can’t WiFi already do that? Well, it can’t, not like LoRa. WiFi is great for short bursts of high-speed data, like streaming, but poor over long distances: as a device moves further away, the signal drops off fast, and WiFi consumes a lot of power.</p><h4>How LoRa Actually Works</h4><p>At the heart of LoRa’s approach is a clever modulation technique known as <strong>chirp spread spectrum (CSS)</strong>. Instead of sending data in quick bursts on a narrow frequency, LoRa uses <strong>chirps</strong> — signals that gradually increase or decrease in frequency over time, like a sweep.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/269/1*mfHL605Z-qwgppcM8ojU1w.jpeg" /><figcaption>the spread spectrum</figcaption></figure><p>The range and reliability of the signal are controlled using a parameter called the Spreading Factor (SF). It is a crucial parameter defining features like range, time-on-air, and battery life. The spreading factor is like a knob that controls <strong>how far your data travels</strong> vs <strong>how fast it can travel</strong>. 
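</p><p>You can put rough numbers on that knob. Ignoring coding rate and packet overhead (so these are idealized figures, not a full LoRaWAN airtime calculation), each LoRa symbol encodes SF bits and lasts 2<sup>SF</sup>/BW seconds:</p>

```python
# Idealized sketch of the SF trade-off (ignores coding rate and packet
# overhead): one LoRa symbol carries SF bits and lasts 2**SF / BW seconds,
# so each step up in SF doubles time-on-air and roughly halves the bit rate.
BW = 125_000  # bandwidth in Hz, a common LoRa setting

def symbol_time_ms(sf):
    return 2**sf / BW * 1000   # duration of one chirp, in milliseconds

def raw_bitrate_bps(sf):
    return sf * BW / 2**sf     # SF bits per symbol / symbol duration

for sf in range(7, 13):
    print(f"SF{sf}: {symbol_time_ms(sf):6.3f} ms/symbol, "
          f"{raw_bitrate_bps(sf):7.0f} bit/s")
```

<p>Going from SF7 to SF12 stretches each symbol 32-fold (about 1 ms to about 33 ms at 125 kHz), while the raw rate falls from roughly 6.8 kbit/s to under 400 bit/s.</p><p>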
A higher SF means the chirps take longer to transmit, making them easier for the receiver to detect, even in noisy environments or over long distances. But then, a higher SF leads to more <strong>time-on-air</strong> and <strong>greater power consumption</strong>.</p><p>Depending on regional regulations, LoRa operates within a fixed frequency band, typically with bandwidths like 125 kHz or 250 kHz. The signal is not spread across multiple bands; instead, it sweeps within that single channel. The receiver is tuned to this same band and can distinguish between different chirps based on timing and frequency shift.</p><p>Note that while LoRa radios can support transmit powers up to +22 dBm, national regulations often impose stricter limits. For example, in many regions, the maximum legal transmit power is around +14 dBm. These constraints ensure that LoRa devices coexist peacefully with other users of the radio spectrum.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/225/1*_idpSUoua2-ez_k6WPptDg.jpeg" /><figcaption>a LoRa module</figcaption></figure><p>In essence, LoRa’s strength lies in its ability to trade off data rate for greater sensitivity and range, using an elegant chirp-based signal design — all while keeping power consumption impressively low.</p><h4>If LoRa is like the internet, what about its protocol?</h4><p>Yes, it has one. Recall that a protocol is defined as the set of rules that guide how two or more devices communicate. The most widely used protocol built on LoRa is LoRaWAN: Long Range Wide Area Network. Individuals can also create their own protocols.</p><p>But let’s talk about LoRaWAN: it isn’t just a random add-on; it’s the brain behind the LoRa operation. It takes LoRa’s raw ability to scream across long distances and adds order to the chaos. 
Think of LoRa as the walkie-talkie, and LoRaWAN as the person holding the instruction manual.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Cl0iGjbr8twO-vZsqYgCaA.png" /></figure><h4>The Things Network (TTN)</h4><p>TTN is a free, open, crowd-sourced LoRaWAN network. It’s like the public Wi-Fi of the LoRa world, but for IoT devices. Built by a global community of nerds, hackers, hobbyists, cities, and companies.</p><p>You can connect your LoRaWAN devices to it for free. All you need is a LoRaWAN gateway connected to the internet, and boom — your device is on the cloud. Think of it like a massive relay system: your sensor sends a message → a local TTN gateway hears it → forwards it to TTN’s cloud → you get the data on your dashboard or app.</p><h4>Putting it All Together</h4><p>Say I have a sensor node in a remote location, and I want to receive data from it. My sensor node needs a LoRa module, just like a WiFi or Bluetooth module, and uses a LoRa protocol; the most widely used is LoRaWAN, but custom ones can also be created. That LoRa module is what gives it the long-range superpower — it’s the antenna and the brain that knows how to whisper across kilometers.</p><p>When my sensor transmits its data, the message is picked up by a LoRa gateway. This gateway behaves like an access point, just like WiFi, listening for any LoRa signal in its coverage area. It doesn’t process the data, though — it just picks it up and forwards it. Depending on the distance, the message may be heard by several gateways at once, each of which forwards it on. Think of the gateway as a lily pad your data lands on before it reaches the cloud.</p><p>Once it gets to the network server (like The Things Network or your private one), that’s where the real backend magic happens. It checks the message, decrypts it if needed, and forwards it to wherever it needs to go — maybe your mobile app, a dashboard, or a database storing all your sensor info. You didn’t even have to touch a SIM card or run Wi-Fi. 
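</p><p>To get a feel for how little actually needs to be sent, here is a hypothetical uplink payload for a weather node; the 8-byte field layout below is invented for illustration and is not any LoRaWAN standard.</p>

```python
import struct

# Hypothetical 8-byte sensor payload: temperature and humidity as scaled
# 16-bit integers plus a 32-bit message counter, packed big-endian.
# Keeping payloads this small keeps time-on-air (and power use) low.
def encode(temp_c, humidity_pct, counter):
    return struct.pack(">hHI",
                       round(temp_c * 100),        # e.g. 23.57 C -> 2357
                       round(humidity_pct * 100),  # e.g. 61.2 % -> 6120
                       counter)

def decode(payload):
    t, h, n = struct.unpack(">hHI", payload)
    return t / 100, h / 100, n

payload = encode(23.57, 61.2, 42)  # 8 bytes on the air, not a JSON blob
```

<p>Eight bytes carry everything the dashboard needs; the network server (or your own app) runs <code>decode</code> on the other end.</p><p>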
Just vibes and long-range packets.</p><h4>LoRa in the Wild: Where You’ll Find It IRL</h4><p>Let’s talk about where you’ll find it being used in the real world, perhaps without even knowing.</p><ul><li><strong>Supply Chain and Delivery Tracking: </strong>LoRa is used to track the location and condition of goods during transportation. Whether it’s a package, a pallet, or even an entire container, LoRa helps track its status in real time. Since LoRa is low-powered and very long-range, devices with these capabilities can report GPS data from kilometers away throughout a journey.</li><li><strong>Internet of Things:</strong> Since LoRa is like the internet for IoT devices, it is great for collecting data from remote sensors to send to a central hub for analysis.</li><li><strong>Smart Agriculture:</strong> Ever wondered how farmers know exactly when to water crops or track soil quality across massive farms? Yup — LoRa. It powers moisture sensors, weather stations, and even cow trackers. Farmers save resources and get real-time insight without needing to visit each corner of the field.</li><li><strong>Utilities and Energy Monitoring:</strong> From water meters that automatically report usage to gas pipelines that alert for pressure drops, LoRa plays a big role. Utility companies love it because it means fewer manual checks, faster response times, and lower operational costs.</li></ul><h4>What LoRa <em>Can’t</em> Do, Its Limitations (No Cap)</h4><ol><li><strong>Data rate Limitations:</strong> LoRa’s extended range comes at a significant bandwidth cost. While traditional wireless technologies can transfer megabytes per second, LoRa typically operates at just 0.3–50 kbps, depending on configuration. Its slow data transfer rate makes it unsuitable for real-time communication. 
This means LoRa is fundamentally unsuitable for:</li></ol><ul><li>Streaming video or audio</li><li>Transferring large files or firmware updates</li><li>Real-time high-frequency sensor monitoring</li><li>Applications requiring immediate bi-directional communication</li></ul><p>2. <strong>Doesn’t guarantee Delivery:</strong> Like what the **** is even this, why not?😂 Well, this limitation comes from its design; remember, it was originally designed for low power, and it’s more of a “send and hope” than “send and confirm.” Keep in mind that although it doesn’t guarantee delivery, it does use a Cyclic Redundancy Check (CRC) for error detection.</p><p>3. <strong>Limited Payload size:</strong> There is a limit to how much data it can send per message, and that limit largely depends on the spreading factor and chosen bandwidth. LoRa isn’t built for sending large chunks of data — its strength lies in sending small, meaningful messages efficiently. Using a higher Spreading Factor means the data rate drops, and so the payload size has to shrink to avoid long transmission times. Typically, payloads are somewhere between 50 and 255 bytes, depending on the regional regulations and network settings. This makes LoRa ideal for sending sensor readings, alerts, or small control commands — but not for streaming video or sending big files.</p><h4>Wanna Build with LoRa? Here’s Where to Start</h4><p>Alright, at this point, you’re hyped up about this tech and want to get your hands dirty, but where do you begin? Don’t worry, it’s not as complicated as you’d think. First things first, you’ll need a LoRa module: the piece of hardware that gives your device the ability to talk LoRa. Think of it like the Bluetooth chip in your earbuds, but for long-distance, low-power messages.</p><p>Next, you need a microcontroller: an Arduino, ESP32, or Raspberry Pi to control the LoRa module and read data from your sensors (temperature, humidity, GPS, anything really). 
This microcontroller is the brain; it also runs the software layer, which implements the protocol.</p><p>Now, for your device to send data somewhere useful, you’ll need a LoRa gateway. This acts like a bridge between your LoRa device and the internet. You can either buy one, build your own (using something like a Raspberry Pi and a LoRa HAT), or connect to an existing public gateway nearby using platforms like <strong>The Things Network</strong>.</p><p>Once your data is talking to a gateway, you’ll want to receive and visualize the data; there are platforms for this too, or you can build a custom web app.</p><p>And there we have it — we’ve reached the end of the article.<br> I hope you enjoyed the ride and picked up something new along the way.</p><p>If you’re still curious (which I hope you are), there’s so much more to explore in the world of wireless tech and IoT, and I’ll be writing more deep-dives like this soon. So be on the lookout, hit that follow, and stay tuned for more tech made simple, fun, and a little bit chaotic.</p><p>Till then, happy building 🚀</p><h3>Next Steps</h3><p>You might want to check out the resources below to learn more about this tech, because let’s be honest, we’ve only scratched the surface.</p><p><a href="https://en.wikipedia.org/wiki/LoRa">LoRa on Wikipedia</a></p><p><a href="https://how2electronics.com/iot-projects/lora-lorawan-projects">50+ LoRa-based Arduino and ESP32 projects</a></p><p><a href="https://youtube.com/playlist?list=PLyx3JZ-p9QKNesFzsST7LadX6u6R3eLRO&amp;si=h8dr9f0zJf8SrSiU">LoRa Projects Playlist</a></p><p><a href="https://www.hackster.io/">Hackster.io</a></p><p>You can also reach out on my socials:</p><p><a href="https://x.com/cy63rx_">Twitter</a> …. 
<a href="https://linkedin.com/in/samuelajala01">LinkedIn</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/651/1*NRwtCb5QpP6L_LaFyXrBGw.jpeg" /></figure>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Lessons learnt from my 100 days ML learning streak.]]></title>
            <link>https://medium.com/@samuelajala01/lessons-learnt-from-my-100-days-ml-learning-streak-594b9b9198b6?source=rss-ecea3ce2fcaf------2</link>
            <guid isPermaLink="false">https://medium.com/p/594b9b9198b6</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[self-improvement]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[psychology]]></category>
            <dc:creator><![CDATA[Samuel Ajala]]></dc:creator>
            <pubDate>Sun, 30 Jun 2024 21:40:12 GMT</pubDate>
            <atom:updated>2024-06-30T21:40:12.116Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*JC5KO-NnAhwAWD6q" /><figcaption>Photo by Behnam Norouzi on Unsplash</figcaption></figure><h3>Lessons learnt from my 100-day learning challenge</h3><h3>Introduction</h3><p>Hi there, I completed a learning streak in April. I had challenged myself to learn at least one new concept or topic in Artificial Intelligence every day, and it was a success: I never broke the streak.</p><p>I had two sources of inspiration for taking on this challenge: one was the “no gree for anybody” theme for 2024, and the other was to teach myself to be consistent. I said to myself that 2024 had to be the year for me.</p><p>I started on January 1 and felt confident that I could do it again because I had done one the previous year and succeeded. This time I decided to look for an accountability partner and found people who were interested, over a hundred of them; in the end, only two of us completed the challenge😅</p><p>In February I got a certificate from Coursera for “Supervised Machine Learning: Regression and Classification” and felt pretty motivated to continue.</p><p>Exams were to start April 2, but my first paper was on the 4th. My anxiety grew, yet I still didn’t stop; I told myself one hour of ML wouldn’t make me fail, since we all waste more than that doing unimportant stuff.</p><h3>Some challenges I faced</h3><ul><li>I basically use a laptop with 3 GB of RAM whose battery only lasts 10 minutes when the power is off. I used it for a while when I started this challenge, but it felt uncomfortable because there were power issues on my campus, so I had to get used to my phone (thanks to Google Colab).</li><li>I didn’t strictly follow my roadmap. Early on, I was ticking off topics on the roadmap very quickly, so I finished it early and was then stuck on what to do next.</li></ul><p>Despite all that, I was still able to complete the challenge without losing the streak. 
One thing I realised was that you can always stretch your limits to achieve what you want. That’s the challenge: “with all these excuses, what can I do? What is possible?” Anything is possible.</p><p>Surpass your limits!!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Things I wish I knew when I started writing code]]></title>
            <link>https://medium.com/@samuelajala01/things-i-wish-i-knew-when-i-started-coding-87fd90f02619?source=rss-ecea3ce2fcaf------2</link>
            <guid isPermaLink="false">https://medium.com/p/87fd90f02619</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[experience]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[newbie]]></category>
            <dc:creator><![CDATA[Samuel Ajala]]></dc:creator>
            <pubDate>Thu, 16 Feb 2023 03:34:54 GMT</pubDate>
            <atom:updated>2025-12-31T20:19:57.967Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*KzFLH0Yal7ITVDRB" /></figure><p>When you start learning how to code, you might think you know everything you need to, but that’s not always the case. When I started, I thought I wouldn’t need to write about this, but I was wrong. Here are some things you probably weren’t told.</p><h3>Be Patient</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/275/0*mgjW130FkM9JKeFw" /><figcaption>Be patient</figcaption></figure><p>There’s no rush. When learning how to code, don’t focus on when or by what time you should have mastered it; just take your time. If you’re impatient and rush through things, you’ll probably end up knowing little and have to come back and learn it all over again. Not cool, right?</p><h3>Don’t Compare Yourself with Others</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/283/0*xm5xy15RTG6l7mQj" /></figure><p>When you start comparing yourself to people you think are better, you become anxious and lose patience; you want to learn things fast and start measuring your progress against someone else’s. This happened to me early in my coding journey, and a lot of people still experience it.</p><h3>Try Another Learning Option</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/265/0*CqbsPH6aVhyh_nuK" /></figure><p>If you’re struggling to understand a concept and start thinking you’re not smart enough for coding, you’re wrong. The problem may not be you; it could be your learning resource or teacher. There are thousands of resources and teachers out there, so try another one, and you may find the concept finally clicks.</p><h3>Start Building with What You Know</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/221/0*gNMN1zMt0J-vV3IT" /></figure><p>This is something I learnt late in my coding journey: once you have a basic knowledge of a concept, build with it immediately; then, as you learn, you can keep adding to the project. Never wait until you feel you know enough — you can’t, because everything changes with time. So start building.</p><h3>Nothing Is Too Difficult to Learn</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/259/0*vpvT4jmWNe-1QAgW" /></figure><p>If others can learn it, so can you. We’re humans, and one great thing about the human brain is that it can learn anything with consistency. You may have a hard time grasping some concepts, and it may take a while, but it’s not impossible.</p><h3>Practice Will Get You Out of Tutorial Hell</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/299/0*zkJtS-YfnwQoFFhp" /></figure><p>You can watch videos and think you already know the material, but when it comes to execution, you’re stuck: you can’t code a simple app without copying or following a video. If so, you’re probably in tutorial hell. How do you get out? Practice. When you practice what you learn consistently, concepts become clearer, and you don’t just know something — you can actually do it.</p><h3>Always Have Fun</h3><p>Remember to always have fun, and take breaks when necessary to avoid burnout. Taking breaks from coding isn’t a sign of weakness.</p><h3>Conclusion</h3><p>That’s all for now. I hope you enjoyed reading this article; if you did, please leave some emojis and follow. If you have any questions, feel free to comment or reach out. Let’s connect:</p><p><a href="https://twitter.com/samuelajala01">Twitter</a> <a href="https://github.com/samuelajala01">Github</a> <a href="https://linkedin.com/in/samuelajala01">LinkedIn</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=87fd90f02619" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>