<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Anil Pai on Medium]]></title>
        <description><![CDATA[Stories by Anil Pai on Medium]]></description>
        <link>https://medium.com/@anilpai?source=rss-45df127af154------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*SK0jeML1XFJ4DI8Fc2QtBw.png</url>
            <title>Stories by Anil Pai on Medium</title>
            <link>https://medium.com/@anilpai?source=rss-45df127af154------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Thu, 07 May 2026 06:29:55 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@anilpai/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[From Azure to Antler: Building Loopdesk AI in India’s Startup Elite]]></title>
            <link>https://anilpai.medium.com/from-azure-to-antler-building-loopdesk-ai-in-indias-startup-elite-7f540a3d3ea0?source=rss-45df127af154------2</link>
            <guid isPermaLink="false">https://medium.com/p/7f540a3d3ea0</guid>
            <category><![CDATA[ai-startup-ideas]]></category>
            <category><![CDATA[india-startup-ecosystem]]></category>
            <category><![CDATA[entrepreneurship]]></category>
            <category><![CDATA[venture-capital]]></category>
            <category><![CDATA[video-creation-tool]]></category>
            <dc:creator><![CDATA[Anil Pai]]></dc:creator>
            <pubDate>Sat, 10 May 2025 18:13:25 GMT</pubDate>
            <atom:updated>2025-05-10T19:29:26.396Z</atom:updated>
            <content:encoded><![CDATA[<p>When I first applied to Antler’s residency program in India, I wasn’t entirely sure what to expect. As a tech entrepreneur and full-stack developer with over 15 years of experience in software engineering and AI innovation, I had driven impactful projects at Microsoft Azure and was now building Loopdesk AI, a platform revolutionizing video editing with cutting-edge AI automation. I was looking for Antler’s global network and co-founder matching to scale Loopdesk AI and connect with like-minded innovators in India’s startup ecosystem. Now, having completed the program as part of the 6th cohort in India, I can confidently say that the experience transformed not just my business trajectory but also my approach to entrepreneurship itself.</p><h3><strong>What is Antler?</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/563/1*24yV453mRJQXM3uTkrtHvQ@2x.jpeg" /></figure><p>For those unfamiliar, Antler is a global early-stage venture capital firm that identifies, invests in, and challenges exceptional people to build the defining companies of tomorrow. Unlike traditional accelerators, Antler focuses on the pre-idea, pre-team stage, helping individuals find the right co-founders and validate business ideas before making investments. What makes Antler unique is its dual emphasis on founder matching and idea validation. The program is structured as a 12-week residency that takes participants through a carefully designed journey from introduction to investment readiness.</p><h3><strong>The Application Process</strong></h3><p>The journey began with a multi-stage application process that tested both my vision and resilience. It started with an online application requiring a detailed resume, a video pitch outlining my entrepreneurial aspirations, and a written submission on my motivation for joining Antler. This was followed by two rounds of virtual interviews with the Antler India team. 
The interviews focused on my ability to translate complex AI solutions into scalable products, which resonated with Antler’s mission to back high-impact founders. What stood out was their emphasis on adaptability and founder potential over a fully formed idea — questions probed how I’d pivot under market shifts and collaborate with diverse co-founders, setting the stage for the program’s collaborative ethos.</p><h3>The Cohort Experience - a 12 Week Journey</h3><p>The program lasted for 12 intensive weeks, each with a specific focus and purpose. Here’s how my journey unfolded:</p><p><strong><em>Week 1: Introduction and Building Connections</em></strong><br>The cohort kicked off with introductions and in-person social events designed to help us get to know each other. Highlights included:</p><ul><li>A FounderSpeak session with Zepto, offering insights from one of India’s fastest-growing startups.</li><li>Workshops focused on finding compatible co-founders.</li><li>Sessions on understanding venture scalability, which helped frame our thinking from day one.</li><li>A two-day Breakout event that fostered deeper connections among participants.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OIszWtuGtTNCcqaN0495fg@2x.jpeg" /></figure><p>The first week set the tone — intense, collaborative, and focused on relationships as much as ideas. 
For Loopdesk AI, these connections sparked early discussions with potential co-founders who shared my passion for AI-driven creative tools.</p><p><strong><em>Week 2: Idea Validation &amp; Customer Discovery</em></strong><br>By the second week, we were diving deep into validation methodologies:</p><ul><li>A session with Boris Wertz from Version One Ventures gave us a global perspective.</li><li>The Idea Validation Workshop provided a structured approach to testing assumptions.</li><li>The “Billion Touchpoints” exercise pushed us to think at scale.</li><li>Sector-specific jam sessions in Fintech, Mobility, and Digital Public Infrastructure.</li><li>An inspiring FounderSpeak session with Sumit from Instawork.</li><li>AMA with Antler India Residency (AIR) alumni from Kubo Care.</li><li>A Pickleball evening for informal networking.</li></ul><p>I began testing Loopdesk AI’s core hypothesis — streamlining video editing workflows — through customer discovery, connecting with creators who validated the need for AI automation.</p><p><strong><em>Week 3: Sector Deep Dives</em></strong><br>Week three focused on industry-specific knowledge:</p><ul><li>Deep dives into SaaS + AI, revealing trends in enterprise AI solutions.</li><li>Antler &amp; AU Bank’s session on founder support resources.</li><li>Fintech sector exploration, valuable for understanding adjacent markets.</li><li>A workshop on decision-making for early-stage founders.</li><li>Sector deep dives into ConsumerTech and HealthTech.</li></ul><p>The ConsumerTech dive was particularly relevant, as I refined Loopdesk AI’s positioning as a creator-focused platform addressing global content creation demands.</p><p><strong><em>Week 4: Market Validation, Legal &amp; Finance, and MVP Development</em></strong><br>This week was content-rich:</p><ul><li>Speak sessions on market sizing and competition analysis.</li><li>Presentations applying market sizing techniques.</li><li>User research training based on “The Mom Test” 
principles.</li><li>Finance 101 and Legal 101 sessions with Kant.</li><li>Practical MVP development insights from Dodo Payments and Cricinshots founders.</li><li>A FounderSpeak session featuring Jar.</li></ul><p>I applied “The Mom Test” to Loopdesk AI, interviewing video editors to uncover pain points in pacing and transitions, shaping our MVP’s feature set.</p><p><strong><em>Week 5: Product Development and Storytelling</em></strong><br>By now, teams were forming, and concepts were taking shape:</p><ul><li>An impactful session on storytelling transformed how I communicated Loopdesk AI’s vision. I learned to frame our AI-first platform as a tool that empowers creators to focus on storytelling, not technicalities. This shift made my pitches to potential co-founders and early adopters more compelling, emphasizing how Loopdesk AI removes editing barriers to unlock creative freedom.</li><li>Focused work sprints with selected co-founders, where Loopdesk AI’s prototype began integrating AI-driven transitions and formatting.</li></ul><p><strong><em>Week 6: Dedicated Build Time</em></strong><br>The sixth week was a valuable pause for building. My team made significant progress on Loopdesk AI’s MVP, incorporating creator feedback to enhance automation features, condensing months of development into days of focused effort.</p><p><strong><em>Week 7: Pitch Days with the Antler Team</em></strong><br>This was our first major milestone, presenting Loopdesk AI’s progress to the Antler team. Feedback on our pitch — particularly on clarifying our enterprise value proposition — helped refine our narrative for investor audiences.</p><p><strong><em>Weeks 8–10: Development and Refinement</em></strong><br>These weeks focused on iterative development and investor prep. We ran pilot tests with 15 video creators, iterating Loopdesk AI’s MVP based on their feedback. 
Mentor sessions with Antler’s ConsumerTech experts helped us sharpen our go-to-market strategy, targeting both creators and enterprises.</p><p><strong><em>Week 11: Go-to-Market Strategy</em></strong><br>As we approached the end:</p><ul><li>A session on B2C Go-to-Market strategy with Ankur Saxena provided insights for Loopdesk AI’s creator-focused rollout.</li></ul><p>We developed a dual strategy: <em>direct-to-creator subscriptions</em> and <em>enterprise partnerships</em> for scalable adoption.</p><p><strong><em>Week 12: Entering the Cohort Finale</em></strong></p><p>As we enter the final week of the program, we’re gearing up for the culminating presentations. Loopdesk AI’s pitch is set to showcase our AI platform’s potential to transform video editing, aiming to impress the Antler team and lay a strong foundation for upcoming investor pitches.</p><h3>The Antler Network and Resources</h3><p>Antler’s global network was a game-changer:</p><ul><li>Industry experts guided us during sector deep dives, particularly in ConsumerTech and AI.</li><li>Former founders from Zepto, Instawork, Jar, Dodo Payments, and Cricinshots shared battle-tested advice.</li><li>A global community spanning 30+ cities.</li><li>Technical resources and mentor connections in AI and ConsumerTech.</li><li>Strategic partnerships with AU Bank.</li></ul><p>The FounderSpeak session with Zepto was memorable, as their hypergrowth story inspired Loopdesk AI’s ambition to scale globally from India.</p><h4>The People Who Made It Special</h4><p>The cohort’s diversity was a highlight:</p><ul><li>Experienced corporate professionals.</li><li>Serial entrepreneurs.</li><li>Technical specialists.</li><li>Young innovators.</li></ul><h3>Life After the Program</h3><p>Post-Antler, Loopdesk AI is preparing to pitch to the Antler Investment Committee for pre-seed funding to accelerate our MVP development and market entry. 
We’ve attracted significant interest, with 17 US investors and 5 Indian investors reaching out to explore opportunities. The stronger US interest aligns with the global demand for enterprise AI solutions, and we’re tailoring our pitch to highlight Loopdesk AI’s scalability and creator empowerment. The Antler network continues to support us, providing introductions to ConsumerTech mentors and investor prep sessions. As we refine our pitch, the program’s lessons in validation and storytelling are proving invaluable in navigating the fundraising landscape.</p><h3>Key Learnings and Takeaways</h3><p>My biggest takeaways from the 12-week Antler experience:</p><ul><li><em>Founder-Market Fit Matters</em>: The ConsumerTech deep dives validated the demand for AI-driven creative tools, reinforcing Loopdesk AI’s mission to empower creators.</li><li><em>Speed of Execution</em>: The build weeks showed how focus can condense months of progress into days.</li><li><em>The Power of a Curated Network</em>: From pickleball evenings to jam sessions, informal connections were as valuable as structured programming.</li><li><em>Validation Before Investment</em>: The “Mom Test” session in Week 4 changed how I approach customer interviews. 
For Loopdesk AI, shifting from “Do you like this tool?” to “What’s the biggest challenge in your editing workflow?” uncovered critical insights about pacing and transitions, shaping our AI features to address real pain points.</li><li><em>Storytelling is Everything</em>: The storytelling session transformed Loopdesk AI’s pitch, framing it as a platform that frees creators from technical hurdles, making it resonate with investors and users.</li></ul><h3>Advice for Future Applicants</h3><p>My advice for Antler India applicants:</p><ul><li>Be clear about your motivation but flexible on ideas.</li><li>Engage fully in sector deep dives, even outside your focus.</li><li>Actively participate in founder matching early on.</li><li>Come with an open mind about co-founders.</li><li>Embrace the intensity — the 12 weeks are designed to push you.</li><li>Don’t skip social events; your co-founder connection may happen over pickleball.</li><li>Use Pitch days for real feedback.</li></ul><h3>Conclusion</h3><p>Twelve weeks ago, I entered Antler with a vision for Loopdesk AI but uncertain about scaling it. Today, I’m a founder with a refined MVP, a co-founder, and a global network propelling us toward investor pitches. For those serious about entrepreneurship, Antler’s residency offers a unique launchpad. The structured programming — from FounderSpeak to deep dives to pitch practice — builds a strong foundation. But the curated cohort and Antler’s network, which continues post-program, are what make it transformative. The 6th cohort wasn’t just about building Loopdesk AI — it was about becoming a founder ready to shape the future of AI-driven creativity.</p><blockquote>Anil Pai is the CTO and Co-founder of Loopdesk AI, and was part of Antler India’s 6th cohort. 
Connect with him on <a href="https://www.linkedin.com/in/anilpai/">LinkedIn</a>.</blockquote>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Freshwater lakes on an island]]></title>
            <link>https://medium.com/mind-boggling-algorithms/freshwater-lakes-on-an-island-98ae2f7d30de?source=rss-45df127af154------2</link>
            <guid isPermaLink="false">https://medium.com/p/98ae2f7d30de</guid>
            <category><![CDATA[depth-first-search]]></category>
            <category><![CDATA[breadth-first-search]]></category>
            <category><![CDATA[algorithms]]></category>
            <category><![CDATA[matrix]]></category>
            <category><![CDATA[graph-theory]]></category>
            <dc:creator><![CDATA[Anil Pai]]></dc:creator>
            <pubDate>Tue, 18 Jun 2024 11:19:43 GMT</pubDate>
            <atom:updated>2024-06-18T11:19:43.243Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Nh-Lgc5i2O4TOWB000lQnw.png" /><figcaption>Pacific ocean with many islands (AI generated image)</figcaption></figure><p>The Pacific Ocean, known for its expansive reach, is dotted with numerous islands, each with its unique geography. While some islands boast freshwater lakes, others are devoid of such water bodies. Imagine representing this vast marine tapestry as a 2D matrix, where ‘0’ signifies water and ‘1’ indicates land. Now consider an algorithmic challenge: given the coordinates of a land piece within this matrix, can we devise a function that accurately counts the number of freshwater lakes present on that particular island?</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/dfd9f966b919c70823ebf31c5265842c/href">https://medium.com/media/dfd9f966b919c70823ebf31c5265842c/href</a></iframe><p>To unravel the mystery of lake counts on an island, our approach unfolds in two strategic phases. Initially, we employ Breadth-First Search (BFS) to chart out the island’s contours, expanding our search in all eight possible directions. Subsequently, we harness Depth-First Search (DFS) to delve into the island’s terrain, probing in four cardinal directions to enumerate the lakes. Here, a ‘lake’ is identified as a cluster of ‘0’s fully encapsulated within the island’s bounds, ensuring that no part of the lake touches the matrix’s periphery.</p><h4>Time &amp; Space complexity</h4><p>The time complexity of the countNumberOfLakes function is O(m*n), where m is the number of rows and n is the number of columns in the island matrix. This is because in the worst-case scenario, the function will need to visit every cell in the island matrix once.</p><p>The space complexity of the countNumberOfLakes function is also O(m*n), where m is the number of rows and n is the number of columns in the island matrix. 
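The full implementation is embedded above as a gist, which may not render in RSS readers, so here is a minimal self-contained sketch of the lake-counting flood fill (function and variable names are mine, not from the original gist; for brevity it skips the 8-directional BFS island-charting phase and simply applies the post's rule that any water region touching the matrix's periphery is ocean, not lake):

```python
def count_lakes(grid):
    """Count 'lakes': 4-connected regions of 0s that never touch the border."""
    rows, cols = len(grid), len(grid[0])
    visited = set()

    def flood(r, c):
        # Iterative DFS over one water region; report whether it is enclosed.
        stack, enclosed = [(r, c)], True
        visited.add((r, c))
        while stack:
            cr, cc = stack.pop()
            if cr in (0, rows - 1) or cc in (0, cols - 1):
                enclosed = False  # this water region reaches the ocean
            for nr, nc in ((cr - 1, cc), (cr + 1, cc), (cr, cc - 1), (cr, cc + 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and (nr, nc) not in visited:
                    visited.add((nr, nc))
                    stack.append((nr, nc))
        return enclosed

    # Flood every unvisited water cell once; count only the enclosed regions.
    return sum(
        flood(r, c)
        for r in range(rows) for c in range(cols)
        if grid[r][c] == 0 and (r, c) not in visited
    )
```

The visited set is the dominant extra storage here.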
This is because in the worst-case scenario, every cell in the island matrix could be part of a lake, and thus every cell could be added to the visited set.</p><h3>Conclusion</h3><p>Our journey to discern the number of lakes nestled within an island’s embrace concludes with the methodology delineated above. Yet, this beckons us to reflect on several intriguing considerations: Could there be a more streamlined method, perhaps a singular function that negates the need for constructing an island matrix? Is it feasible to pinpoint lakes amidst an island that’s ensconced by the ocean on all sides? How do we distinguish between the ocean’s saline embrace and the freshwater kiss of lakes when both are denoted by ‘0’s? And what if our starting coordinate is itself a watery enclave — will our function still perform reliably? Might there exist an alternative solution, maybe through leveraging an innovative data structure like a disjoint set? I invite you to ponder these questions and share your insights in the comments below.</p><hr><p><a href="https://medium.com/mind-boggling-algorithms/freshwater-lakes-on-an-island-98ae2f7d30de">Freshwater lakes on an island</a> was originally published in <a href="https://medium.com/mind-boggling-algorithms">Mind-boggling Algorithms</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Degrees of separation]]></title>
            <link>https://medium.com/mind-boggling-algorithms/degrees-of-separation-302136d94836?source=rss-45df127af154------2</link>
            <guid isPermaLink="false">https://medium.com/p/302136d94836</guid>
            <category><![CDATA[degrees-of-separation]]></category>
            <category><![CDATA[breadth-first-search]]></category>
            <category><![CDATA[data-structures]]></category>
            <category><![CDATA[graph]]></category>
            <dc:creator><![CDATA[Anil Pai]]></dc:creator>
            <pubDate>Wed, 12 Jun 2024 13:14:52 GMT</pubDate>
            <atom:updated>2024-06-12T13:14:52.909Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*zwWVnmMz0o5UEUku" /><figcaption>Photo by <a href="https://unsplash.com/@abrajamescalante?utm_source=medium&amp;utm_medium=referral">Abrajam Escalante</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>In the realm of social networks, it’s crucial to grasp the interconnections among nodes. Nodes with a direct link—termed edges in graph theory—are akin to friends, or zero-degree connections. Conversely, nodes linked through an intermediary node are known as mutual acquaintances or first-degree connections, colloquially referred to as friends of friends. From a programming perspective, we aim to identify both friends and mutual acquaintances within a social network.</p><h3>Separated by one degree</h3><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/37dfdc3912863869c965c93e45a1a2fd/href">https://medium.com/media/37dfdc3912863869c965c93e45a1a2fd/href</a></iframe><p>Wonderful! Having identified a node’s immediate friends and their mutual acquaintances, we can now explore how to extend this approach to ascertain the connections at the “N”th degree for a given node.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/0072b8031b6b50908ebc3ccc1f4b330e/href">https://medium.com/media/0072b8031b6b50908ebc3ccc1f4b330e/href</a></iframe><p>The time complexity of the Breadth-First Search (BFS) algorithm is O(V + E), where V is the number of vertices (nodes) and E is the number of edges in the graph. This is because every vertex and every edge will be explored in the worst case.</p><p>The space complexity of BFS is O (V). In the worst-case scenario, the queue will store all the vertices of the graph. 
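The gists embedded above may not render in RSS readers, so here is a minimal sketch of the Nth-degree lookup (the names and the dict-of-lists graph representation are mine, not from the original gists), following the post's convention that direct friends are zero-degree connections and friends of friends are first-degree:

```python
from collections import deque

def nth_degree(graph, start, n):
    """Nodes exactly n degrees from start: degree 0 = direct friends
    (BFS distance 1), degree 1 = friends of friends (distance 2), and so on."""
    target = n + 1                    # convert degree to BFS distance
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if dist[node] >= target:      # no need to search deeper than target
            continue
        for neighbor in graph.get(node, ()):
            if neighbor not in dist:  # first visit gives the shortest distance
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return {v, } if False else {v for v, d in dist.items() if d == target}
```

For example, on the path graph A-B-C-D, nth_degree(graph, "A", 0) yields {"B"} and nth_degree(graph, "A", 1) yields {"C"}. The dist map and the queue are what drive the O(V) space bound.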
This happens when the input graph is a tree (or a forest), where there is no cycle and the last level of the tree (or forest) contains V/2 nodes.</p><p>So, for the Nth-degree node function given above:</p><ul><li>Time Complexity: <em>O(V + E)</em></li><li>Space Complexity: <em>O(V)</em></li></ul><p>&quot;One degree apart&quot; typically refers to the relationship between two people created by a third party, commonly a mutual friend. This idea is frequently mentioned when discussing social networks and the ‘six degrees of separation’ theory. This theory proposes that everyone on the planet can be connected through a maximum of six steps, with each step involving an acquaintance.</p><hr><p><a href="https://medium.com/mind-boggling-algorithms/degrees-of-separation-302136d94836">Degrees of separation</a> was originally published in <a href="https://medium.com/mind-boggling-algorithms">Mind-boggling Algorithms</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Generating Time Series]]></title>
            <link>https://medium.com/mind-boggling-algorithms/generating-time-series-bacb700a3e2c?source=rss-45df127af154------2</link>
            <guid isPermaLink="false">https://medium.com/p/bacb700a3e2c</guid>
            <category><![CDATA[time-series-data]]></category>
            <category><![CDATA[algorithms]]></category>
            <category><![CDATA[heapq]]></category>
            <category><![CDATA[data-structures]]></category>
            <dc:creator><![CDATA[Anil Pai]]></dc:creator>
            <pubDate>Wed, 28 Jun 2023 13:48:46 GMT</pubDate>
            <atom:updated>2023-06-28T13:48:46.187Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*N5a8g0iP0W3pkqR7" /><figcaption>Photo by <a href="https://unsplash.com/@isaacmsmith?utm_source=medium&amp;utm_medium=referral">Isaac Smith</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>A list of time intervals is given representing the start/end times of various car trips. All times are positive 64-bit integer timestamps, and the ranges are inclusive on the left and exclusive on the right.</p><p>The goal is to generate a time series (list of time and count, sorted by time) of the number of active trips over time.</p><p>Input :(0,5), (2,3), (4,6), (7,10)</p><p>Output:</p><p>(0,2) =&gt; 1 <br>(2,3) =&gt; 2 <br>(3,4) =&gt; 1 <br>(4,5) =&gt; 2 <br>(5,6) =&gt; 1 <br>(6,7) =&gt; 0 <br>(7,10) =&gt; 1</p><p>As shown above, the output should provide the time slots followed by the number of trips active during each period.</p><p>This is a problem of generating time series data plots for the graph with time intervals on the x-axis and active trips on the y-axis.</p><p>The heap (priority queue) data structure can be used here: a min-heap of end times lets us compare each new trip’s start time against the earliest end time of the trips that are still active.</p><pre>import heapq as hq<br><br><br>def generate_time_series(trips):<br>  trips.sort(key=lambda x: (x[0], x[1]))<br>  series = {}<br>  endTimes = []  # min-heap of end times of active trips<br><br>  for trip in trips:<br>    # close out every active trip that ends on or before this trip&#39;s start<br>    while endTimes and endTimes[0] &lt;= trip[0]:<br>      end = hq.heappop(endTimes)<br>      series[end] = len(endTimes)<br>    # push current trip end time to the heap<br>    hq.heappush(endTimes, trip[1])<br>    series[trip[0]] = len(endTimes)<br><br>  # Ending all the active trips<br>  while endTimes:<br>    end = hq.heappop(endTimes)<br>    series[end] = len(endTimes)<br>  <br>  # Print the active trips as a time series<br>  sorted_series = dict(sorted(series.items()))<br>  prev_item = None<br>  for item in sorted_series.items():<br>    if prev_item is None:<br>      prev_item = item<br>    else:<br>      print(f&quot;({prev_item[0]},{item[0]}) ====&gt; {prev_item[1]} active trip(s)&quot;)<br>      prev_item = item<br>  return &#39;&#39;<br><br>trips = [[2, 3], [4, 6], [7, 10], [0, 5]]<br>print(generate_time_series(trips))<br><br>trips2 = [[1, 9], [3, 10], [2, 6], [7, 8]]<br>print(generate_time_series(trips2))</pre><p>Once the map (dictionary in Python) is ready, sort it by key and print the intervals to represent the plots on a time series.</p><hr><p><a href="https://medium.com/mind-boggling-algorithms/generating-time-series-bacb700a3e2c">Generating Time Series</a> was originally published in <a href="https://medium.com/mind-boggling-algorithms">Mind-boggling Algorithms</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Streaming Algorithms : Computing current rank of a value in a stream]]></title>
            <link>https://medium.com/mind-boggling-algorithms/streaming-algorithms-computing-current-rank-of-a-value-in-a-stream-a30ce27e6ac7?source=rss-45df127af154------2</link>
            <guid isPermaLink="false">https://medium.com/p/a30ce27e6ac7</guid>
            <category><![CDATA[binary-search-tree]]></category>
            <category><![CDATA[algorithms]]></category>
            <category><![CDATA[data-structures]]></category>
            <dc:creator><![CDATA[Anil Pai]]></dc:creator>
            <pubDate>Sat, 01 May 2021 02:18:30 GMT</pubDate>
            <atom:updated>2021-05-01T02:19:54.604Z</atom:updated>
<content:encoded><![CDATA[<h3>Streaming Algorithms — Computing current rank of a value in a stream</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*J30NmGEF4IJA6xAN" /><figcaption>Photo by <a href="https://unsplash.com/@__matthoffman__?utm_source=medium&amp;utm_medium=referral">Matt Hoffman</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>A stream of numbers is flowing in, and we need to find the rank of each incoming number. The first number is ranked 1, and each subsequent number is ranked relative to the numbers we have already seen.</p><p>A = <strong>[4, 6, 2, 3, 5, 1, 10]</strong></p><p><strong><em>4 (Rank →1)</em></strong></p><p><strong><em>6 (Rank →2)</em></strong></p><p><strong><em>2 (Rank →1)</em></strong></p><p><strong><em>3 (Rank →2)</em></strong></p><p><strong><em>5 (Rank →4)</em></strong></p><p><strong><em>1 (Rank →1)</em></strong></p><p><strong><em>10 (Rank →7)</em></strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/596/1*bC5tftShUDENgWc9LtyFAA.png" /><figcaption>Binary Search Tree</figcaption></figure><p>One way to solve the problem is to build a Binary Search Tree (BST) using the stream of numbers. All values less than a node go to its left, and all values greater than it go to its right. 
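The full code is embedded below as a gist, which may not render in RSS readers; here is a minimal sketch of the insert-and-rank idea (class and function names are mine, and unique values are assumed):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None
        self.left_size = 0  # number of nodes in this node's left subtree

def insert_and_rank(root, val):
    """Insert val into the BST and return (root, rank of val among values so far)."""
    if root is None:
        return Node(val), 1
    rank = 1
    node = root
    while True:
        if val < node.val:
            node.left_size += 1          # val lands in this node's left subtree
            if node.left is None:
                node.left = Node(val)
                break
            node = node.left
        else:
            rank += node.left_size + 1   # node and its left subtree precede val
            if node.right is None:
                node.right = Node(val)
                break
            node = node.right
    return root, rank
```

Feeding A = [4, 6, 2, 3, 5, 1, 10] through this reproduces the ranks 1, 2, 1, 2, 4, 1, 7 listed above.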
Let&#39;s also assume <em>all the numbers in the stream are unique</em>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/596/1*bN7k6Sk50UkWN5Eizc1TIg.png" /><figcaption>Every Node should store an additional value to represent left_size</figcaption></figure><p>Note that every node should store an additional variable left_size, which increments every time an insertion passes to the left of that node.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b56e996251299106a863ec482a743c63/href">https://medium.com/media/b56e996251299106a863ec482a743c63/href</a></iframe><p>A variation of this problem is to consume the entire stream of numbers and provide ranks. This involves a two-step process:</p><ol><li>Insert every value and build the BST.</li><li>Traverse the entire tree and return the ranks.</li></ol><p>Reference: <a href="https://www.geeksforgeeks.org/rank-element-stream/">https://www.geeksforgeeks.org/rank-element-stream</a></p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/23cb5df801396178ec55b31f19b3bfeb/href">https://medium.com/media/23cb5df801396178ec55b31f19b3bfeb/href</a></iframe><hr><p><a href="https://medium.com/mind-boggling-algorithms/streaming-algorithms-computing-current-rank-of-a-value-in-a-stream-a30ce27e6ac7">Streaming Algorithms : Computing current rank of a value in a stream</a> was originally published in <a href="https://medium.com/mind-boggling-algorithms">Mind-boggling Algorithms</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Streaming Algorithms — running median of an array using two heaps]]></title>
            <link>https://medium.com/mind-boggling-algorithms/streaming-algorithms-running-median-of-an-array-using-two-heaps-cd1b61b3c034?source=rss-45df127af154------2</link>
            <guid isPermaLink="false">https://medium.com/p/cd1b61b3c034</guid>
            <category><![CDATA[heap]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[median]]></category>
            <category><![CDATA[algorithms]]></category>
            <category><![CDATA[data-structures]]></category>
            <dc:creator><![CDATA[Anil Pai]]></dc:creator>
            <pubDate>Thu, 10 Sep 2020 18:17:25 GMT</pubDate>
            <atom:updated>2020-09-10T18:17:25.177Z</atom:updated>
<content:encoded><![CDATA[<h3>Streaming Algorithms — running median of an array using two heaps</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*cNrk9Vwp4V9eAy2-" /><figcaption>Photo by <a href="https://unsplash.com/@markusspiske?utm_source=medium&amp;utm_medium=referral">Markus Spiske</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>This is my first post under my new publication titled “Mind-boggling algorithms”. In this post I will be discussing a way to calculate the running median of an array.</p><p>To understand this problem, you need to imagine an array which grows over time.</p><p>The Python Standard Library provides a min-heap implementation in the <strong><em>heapq</em></strong> module. A min-heap always has the lowest element as the root element. In some cases, we need a max-heap. Though not documented, heapq does have an internal function <em>_heapify_max</em>, but there is no matching public push for max-heaps, so the simpler route is to store negated values in a min-heap: <em>heappush</em> then inserts each element while preserving the heap invariant.</p><p>The approach involves maintaining a max-heap &amp; min-heap at all times. If the new streaming value is less than the current median, add it to the max-heap; otherwise, add it to the min-heap.</p><p>Before computing the median, make sure the sizes of the max-heap and the min-heap differ by no more than 1.</p><p>The median calculation is based on the sizes of the heaps. If both heaps are the same size, the median is the average of the topmost elements of both heaps. 
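The gist embedded below may not render in RSS readers, so here is a minimal sketch using the negation trick for the max-heap (the function name is mine, not from the original gist):

```python
import heapq

def running_medians(stream):
    """Yield the median after each element: lower half in a max-heap
    (stored negated), upper half in a min-heap."""
    lower, upper = [], []   # max-heap of lower half, min-heap of upper half
    medians = []
    for x in stream:
        if not lower or x <= -lower[0]:
            heapq.heappush(lower, -x)   # negate to simulate a max-heap
        else:
            heapq.heappush(upper, x)
        # Rebalance so the heap sizes differ by at most one.
        if len(lower) > len(upper) + 1:
            heapq.heappush(upper, -heapq.heappop(lower))
        elif len(upper) > len(lower) + 1:
            heapq.heappush(lower, -heapq.heappop(upper))
        if len(lower) == len(upper):
            medians.append((-lower[0] + upper[0]) / 2)
        else:
            medians.append(-lower[0] if len(lower) > len(upper) else upper[0])
    return medians
```

For the stream [2, 1, 5, 7, 2, 0, 5] this yields 2, 1.5, 2, 3.5, 2, 2, 2. When the two heaps are equal in size, the two roots straddle the median.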
Otherwise, the topmost element of the larger heap becomes the median.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/65dff959b405cb17dd938292d38f6997/href">https://medium.com/media/65dff959b405cb17dd938292d38f6997/href</a></iframe><p>Reference:</p><p><a href="https://docs.python.org/3.8/library/heapq.html">heapq - Heap queue algorithm - Python 3.8.5 documentation</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cd1b61b3c034" width="1" height="1" alt=""><hr><p><a href="https://medium.com/mind-boggling-algorithms/streaming-algorithms-running-median-of-an-array-using-two-heaps-cd1b61b3c034">Streaming Algorithms — running median of an array using two heaps</a> was originally published in <a href="https://medium.com/mind-boggling-algorithms">Mind-boggling Algorithms</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Currency Arbitrage using Bellman Ford Algorithm]]></title>
            <link>https://anilpai.medium.com/currency-arbitrage-using-bellman-ford-algorithm-8938dcea56ea?source=rss-45df127af154------2</link>
            <guid isPermaLink="false">https://medium.com/p/8938dcea56ea</guid>
            <category><![CDATA[arbitrage]]></category>
            <category><![CDATA[graph]]></category>
            <category><![CDATA[currency]]></category>
            <category><![CDATA[forex]]></category>
            <category><![CDATA[data-struc]]></category>
            <dc:creator><![CDATA[Anil Pai]]></dc:creator>
            <pubDate>Wed, 18 Sep 2019 07:50:53 GMT</pubDate>
            <atom:updated>2024-06-05T21:50:14.611Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*bPvDOkNbMBoP3WYu" /><figcaption>Photo by <a href="https://unsplash.com/@alexandermils?utm_source=medium&amp;utm_medium=referral">Alexander Mils</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Graph problems are always interesting, and currency <a href="https://en.wikipedia.org/wiki/Arbitrage">arbitrage</a> is one of the standard graph problems from the CLRS book (Introduction to Algorithms).</p><p>A few weeks ago, I stumbled upon this problem when I was reading about forex trading &amp; transaction costs.</p><blockquote>Arbitrage is defined as near simultaneous purchase and sale of securities or foreign exchange in different markets in order to profit from price discrepancies.</blockquote><p>Forex arbitrage is a risk-free trading strategy that allows forex traders to make a profit with no open currency exposure. The strategy involves acting on opportunities presented by <strong><em>pricing inefficiencies in the short window</em></strong> they exist. This type of arbitrage trading involves buying and selling different currency pairs to exploit any pricing inefficiencies. Exploiting a pricing inefficiency corrects it, so traders must be ready to act quickly with arbitrage strategies. For this reason, these opportunities are often around for a very short time.</p><p>Suppose we are given a table of currency exchange rates, represented as a 2D array. Determine whether a possible arbitrage exists, i.e., whether there is a sequence of trades you can make, starting with some amount A of any currency, such that you can end up with some amount greater than A of the same currency.</p><p>Let’s say 1 U.S. dollar bought 0.82 Euro, 1 Euro bought 129.7 Japanese Yen, 1 Japanese Yen bought 12 Turkish Lira, and 1 Turkish Lira bought 0.0008 U.S. dollars.
Then, by converting currencies, a trader can start with 1 U.S. dollar and trade through Euros, Yen, and Lira back into U.S. dollars, ending up with 0.82 * 129.7 * 12 * 0.0008 ≈ 1.02 U.S. dollars, thus making a 2% profit.</p><p>For the sake of simplicity, let’s assume there are no transaction costs and you can trade any currency amount in fractional quantities.</p><blockquote>How do we solve this?</blockquote><h4>Graph data structure</h4><p>Weighted directed graphs can be represented as an <strong>adjacency matrix</strong>. For a graph with <em>|V|</em> vertices, an <strong>adjacency matrix</strong> is a |V| × |V| matrix of values, where the entry in row <em>i</em> &amp; column <em>j</em> is non-zero if and only if the edge <em>(i, j)</em> is in the graph. If you want to indicate an edge weight, put it in the row <em>i</em>, column <em>j</em> entry, and reserve a special value (perhaps None) to indicate an absent edge.</p><h4>Finding arbitrage</h4><p>Arbitrage opportunities arise when a cycle is found whose edge weights satisfy the following expression</p><blockquote><strong><em>w1 * w2 * w3 * … * wn &gt; 1</em></strong></blockquote><p>Finding cycles under this multiplicative constraint is hard with standard graph algorithms. Instead, we transform the edge weights of the graph so that standard graph algorithms can be applied.</p><p>Let’s take the logarithm on both sides, such that</p><blockquote><strong><em>log(w1) + log(w2) + log(w3) + … + log(wn) &gt; 0</em></strong></blockquote><p>Taking the negative log, this becomes</p><blockquote><strong><em>(-log(w1)) + (-log(w2)) + (-log(w3)) + … + (-log(wn)) &lt; 0</em></strong></blockquote><p>Therefore, if we can find a cycle of vertices such that the sum of their edge weights is negative, we can conclude there exists an opportunity for currency arbitrage.
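</p><p>The example rates above make this concrete. A quick Python check (the rates are the hypothetical ones from the example, not live data):</p>

```python
import math

# Exchange rates along the example cycle: USD->EUR, EUR->JPY, JPY->TRY, TRY->USD
cycle = [0.82, 129.7, 12, 0.0008]

product = 1.0
for rate in cycle:
    product *= rate  # ~1.021: we end with more USD than we started with

# Sum of negative-log edge weights for the same cycle
neg_log_sum = sum(-math.log(rate) for rate in cycle)

# A cycle product greater than 1 corresponds exactly to a negative -log weight sum
assert product > 1 and neg_log_sum < 0
```

<p>The cycle multiplies out to roughly 1.021, so its negative-log weight sum is about -0.021: a negative cycle.</p><p>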
Luckily, the Bellman-Ford algorithm is a standard graph algorithm that can be used to detect negative-weight cycles in O(|V|·|E|) time.</p><h4>Bellman-Ford Algorithm</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*I7srRSdQ6rksHRIq" /><figcaption>Photo by <a href="https://unsplash.com/@herfrenchness?utm_source=medium&amp;utm_medium=referral">Clarisse Croset</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Let <em>G(V, E)</em> be a graph with vertices, <em>V</em>, and edges, <em>E</em>.</p><p>Let <em>w(x)</em> denote the weight of vertex <em>x</em>.</p><p>Let <em>w(i, j)</em> denote the weight of the edge from source vertex <em>i</em> to destination vertex <em>j</em>.</p><p>Let <em>p(j)</em> denote the predecessor of vertex <em>j</em>.</p><p>The Bellman-Ford algorithm solves the single-source shortest path problem: a source vertex is selected and the shortest paths to every other vertex in the graph are determined. After applying the Bellman-Ford algorithm to a graph, each vertex maintains the weight of the shortest path from the source vertex to itself, along with the vertex which precedes it on that path. In each iteration, every edge is relaxed: if <strong><em>w(i) + w(i, j) &lt; w(j)</em></strong>, the weight of vertex <em>j</em> is updated accordingly. After the i-th iteration, the algorithm has found all shortest paths consisting of at most <em>i</em> edges.</p><p>Once all shortest paths have been identified, the algorithm loops through all of the edges and looks for edges that can further decrease the value of the shortest path.
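</p><p>Here is a minimal Python sketch of the whole check. It is illustrative, not the gist embedded below; the rate matrix follows the 2D-array convention from earlier, with <code>rates[i][j]</code> being how much of currency <code>j</code> one unit of currency <code>i</code> buys:</p>

```python
import math

def find_arbitrage(rates):
    """Detect a currency arbitrage cycle via Bellman-Ford.

    rates[i][j] is how much of currency j one unit of currency i buys.
    Returns a closed cycle of currency indices if an arbitrage exists,
    otherwise None.
    """
    n = len(rates)
    # Negative-log transform: a rate product > 1 becomes a negative cycle.
    g = [[-math.log(rates[i][j]) for j in range(n)] for i in range(n)]

    dist = [0.0] * n  # all distances start at 0, so any vertex can seed a cycle
    pred = [None] * n

    # Relax every edge |V| - 1 times.
    for _ in range(n - 1):
        for i in range(n):
            for j in range(n):
                if dist[i] + g[i][j] < dist[j]:
                    dist[j] = dist[i] + g[i][j]
                    pred[j] = i

    # One extra pass: any edge that still relaxes leads to a negative cycle.
    for i in range(n):
        for j in range(n):
            if dist[i] + g[i][j] < dist[j] - 1e-12:
                pred[j] = i
                # Walk back |V| steps to make sure we land on the cycle itself.
                v = j
                for _ in range(n):
                    v = pred[v]
                # Collect the cycle by following predecessors until v repeats.
                cycle = [v]
                u = pred[v]
                while u != v:
                    cycle.append(u)
                    u = pred[u]
                cycle.append(v)
                cycle.reverse()  # predecessor order -> trade order
                return cycle
    return None
```

<p>For a hypothetical 3-currency table where <code>rates[0][1] * rates[1][0] = 0.8 * 1.3 &gt; 1</code>, the function returns a closed cycle such as <code>[0, 1, 0]</code>; for a consistent table with no mispricing it returns <code>None</code>.</p><p>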
If we can still relax the edges, then <strong><em>a negative weight cycle has been found</em></strong>, since a shortest path can have at most |V| - 1 edges.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*UaafKp2kzPnSueIv" /><figcaption>Photo by <a href="https://unsplash.com/@sharonmccutcheon?utm_source=medium&amp;utm_medium=referral">Sharon McCutcheon</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Printing a negative weight cycle shows the arbitrage opportunity, and we use the predecessor chain to print it. The edge which can still be relaxed gives us a source &amp; destination vertex on or near the cycle. Let’s <strong><em>start from the source vertex of that edge and walk backwards along the predecessor chain until we see the source vertex again, or any other vertex the predecessor chain has already shown us while printing the negative-weight cycle.</em></strong></p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a127207f5d550fb862b9d517ac739756/href">https://medium.com/media/a127207f5d550fb862b9d517ac739756/href</a></iframe><p>In a real-world scenario, it may be hard to find an arbitrage opportunity. It is advisable to round each exchange rate to two decimal places before taking its negative logarithm. This is beneficial because it filters out arbitrage opportunities of less than about 1%.</p><h3>Conclusion</h3><p>The Bellman-Ford algorithm can be used to find arbitrage opportunities among a given set of currencies represented as a graph. Normally these opportunities exist for a very short period of time, so someone interested in profiting from such a risk-free transaction must act quickly.</p><h3>Next Steps?</h3><ol><li>Visualization: We can make use of visualization libraries to plot the graph and visualize the vertices and edges.
This will make understanding the concept easier, as we can see the negative-weight cycles of vertices and edges in a different color.</li><li>Real-World Currency Arbitrage: Using dummy values or historical data might be great for learning purposes, but how about using real-time currency data and finding arbitrage in real time?</li></ol><p>Here is a way to check currency arbitrage in real time: <a href="https://gist.github.com/anilpai/08f3cedb60b7a3c9a3b4e27c0c022096">https://gist.github.com/anilpai/08f3cedb60b7a3c9a3b4e27c0c022096</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8938dcea56ea" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Career development tips for Software Engineers]]></title>
            <link>https://anilpai.medium.com/career-development-tips-for-software-engineers-5b1c5fb4014d?source=rss-45df127af154------2</link>
            <guid isPermaLink="false">https://medium.com/p/5b1c5fb4014d</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[books-recommendation]]></category>
            <category><![CDATA[data-structures]]></category>
            <category><![CDATA[software-design]]></category>
            <category><![CDATA[algorithms]]></category>
            <dc:creator><![CDATA[Anil Pai]]></dc:creator>
            <pubDate>Mon, 13 Aug 2018 21:44:33 GMT</pubDate>
            <atom:updated>2018-08-13T21:44:33.758Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/978/1*Vh8XHn5X1YlNy4YPeVWQsw.jpeg" /></figure><p>This is my first Medium post. I am going to write about a question I am asked all the time.</p><blockquote>“How do you stay updated with the industry? What books or websites does one need to read to enhance their career and knowledge?”</blockquote><p>To begin with, let me speak about myself. I have a Bachelor’s degree (India) and a Master’s degree in Computer Science Engineering (USA). I have a great penchant for attending hackathons and have won a couple of them. Hackathons are not always about winning; they’re more about networking, time management and getting shit done.</p><blockquote>“How was the Hack Reactor JavaScript Bootcamp experience?”</blockquote><p>I have also attended coding bootcamps and aced them. In 2016, I was working at Autodesk, and they sponsored 20 candidates for the Hack Reactor remote bootcamp at their Michigan office. The program begins with 6 basic online courses on JavaScript fundamentals, with GitHub submissions for every module, followed by a final interview via Google Hangouts. The majority cleared the first hurdle, as most candidates had a Master’s degree in Computer Science; those who didn’t had to pay an additional fee to retake the interview. The next step is 6 weeks of rigorous pair programming mentored by a Hack Reactor instructor. Each week, 3 full-size projects are assigned to every pair, along with a couple of surprise tests, so around 18 projects in total. The bootcamp ends with a 48-hour hackathon project.
This is a great way to learn NodeJS and advanced JavaScript if you want to be a full-stack JavaScript developer in the future.</p><h3>Interview Preparation Books</h3><p>I would like to suggest a few books which might help you build a great career in the field of software development.</p><h4>Algorithms &amp; Data Structures</h4><p><strong>#1 Cracking the Coding Interview by Gayle McDowell</strong></p><p>If you are a student and want to prepare for interviews, then this is the first book to start with. If you are not a Java/C++ programmer, you can happily skip the solution code. As the author says, the purpose of the book is to make the reader start thinking in terms of data structures and algorithms for solving the problem. Don’t memorize code!</p><p><strong>#2 Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein (CLRS)</strong></p><p>This was the book I read in my second year of my Bachelor’s (it is famously known as <strong>CLRS</strong>). It’s vast and extensive, with lots of code and mathematical equations. I still read this book sometimes to revise some of the concepts. It’s a good reference book to have on your bookshelf.</p><p><strong>#3 Algorithms Unlocked by Cormen</strong></p><p>Same author as before. The content is condensed, and some readers prefer this version. Frankly, I haven’t read this book yet.</p><p><strong>#4 Algorithms — 4th Edition by Robert Sedgewick &amp; Kevin Wayne</strong></p><p>One of my favorites. This book helped me a lot when I was studying Advanced Data Structures at Syracuse University (Masters). The programming language used is Java. I would recommend this book to anyone who wants to be strong at Data Structures and Algorithms.</p><p><strong>#5 Algorithm Design by Jon Kleinberg and Eva Tardos</strong></p><p>This book helps you develop the thought process behind designing your algorithms. I am yet to finish this book.</p><p><strong>#6 The Algorithm Design Manual by Steven S Skiena</strong></p><p>Another great book on algorithm design.
I haven’t read this book yet.</p><p><strong>#7 Coding Practice</strong></p><p><a href="https://leetcode.com">Leetcode</a> — a nice site to practice interview questions. The majority of companies use similar questions, or sometimes copy-paste from this site for phone screens. I highly recommend solving as many problems from this site as possible.</p><p><a href="https://www.geeksforgeeks.org">Geeks for Geeks</a> — you will find a variety of problems with solutions in multiple languages. There are a lot of interesting hard problems with great explanations that are easy to understand.</p><p><a href="http://codewars.com">Codewars</a> — if you want to casually practice coding problems, this is my favorite. All problems are crowdsourced, and you can choose problems based on how others have rated them. Your solutions are also voted on by others.</p><p>I would also recommend a YouTube video series by Tushar Roy.</p><p><a href="https://www.youtube.com/channel/UCZLJf_R2sWyUtXSKiKlyvAw">Tushar Roy - Coding Made Simple</a></p><h3>System Design</h3><p>System design knowledge is crucial when you are planning to grow within an organization. If you have interviewed for a senior role, you already know how important it is to come up with a great architecture design within minutes to win a job offer.</p><p><strong>#1 Grokking Algorithms by Aditya Bhargava</strong></p><p>This is a good book to get started with design interviews. I haven’t read this book entirely myself, but I liked the initial few chapters. I recommend this book for a beginner, even if you are not planning to interview for a senior position.</p><p><strong>#2 </strong><a href="http://highscalability.com"><strong>highscalability.com</strong></a></p><p>This is the only site I have been referring to regularly for the last 8 years to read about system design, architecture and everything to do with scalability. I would suggest reading it once a month to catch up on all things scalability.
Since most companies have their own engineering blogs today, try subscribing to them.</p><h4>#3 GitHub links:</h4><p>Most of the system design interview prep material is available on GitHub, and here are 3 important GitHub repos which are more than sufficient to prepare.</p><ul><li><a href="https://github.com/donnemartin/system-design-primer">GitHub - donnemartin/system-design-primer: Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.</a></li><li><a href="https://github.com/checkcheckzz/system-design-interview">GitHub - checkcheckzz/system-design-interview: System design interview for IT companies</a></li><li><a href="https://github.com/theanalyst/awesome-distributed-systems">GitHub - theanalyst/awesome-distributed-systems: A curated list to learn about distributed systems</a></li></ul><p><strong>#4 The Architecture of Open Source Applications</strong></p><p><em>Link : </em><a href="http://aosabook.org/en/index.html"><em>http://aosabook.org/en/index.html</em></a></p><p>This is a decent list of topics to revise, relating to a few open-source projects. I did not like the way it’s been structured, and you can find most of the material online. I wouldn’t recommend buying this book, but you can definitely skim through the free chapters.</p><p><strong>#5 Designing Data-Intensive Applications by Martin Kleppmann</strong></p><p>I have heard very nice reviews about this book.
This book is a must-read if you are pursuing software development and data engineering.</p><h3>More books worth reading…</h3><p>These books do not fall under the algorithms, data structures or system design categories, but they are worth reading.</p><p><strong>Advanced Architecture for Big Data Applications</strong></p><p><em>Link : </em><a href="https://www.safaribooksonline.com/learning-paths/learning-path-architect/9781491987063/"><em>https://www.safaribooksonline.com/learning-paths/learning-path-architect/9781491987063/</em></a></p><p><strong>Google SRE Book</strong></p><p>Link : <a href="https://landing.google.com/sre/book/index.html">https://landing.google.com/sre/book/index.html</a></p><p>I have heard a lot of nice reviews about this book. It’s a free book by Google and will help you pick up some knowledge relating to operations. I have read a few chapters at random, and I liked it.</p><p><strong>The Effective Engineer by Edmond Lau</strong></p><p><strong>Coders at Work by Peter Seibel</strong></p><p><strong>Joel on Software by Joel Spolsky</strong></p><p>I will continue updating this list as I find interesting books or websites. Until then, do post your comments and suggestions below.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5b1c5fb4014d" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>