<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Kevin Cisse on Medium]]></title>
        <description><![CDATA[Stories by Kevin Cisse on Medium]]></description>
        <link>https://medium.com/@kevcisse?source=rss-cae979e91ae1------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*OjepSHEHqIa9ttDFEPdNlg.png</url>
            <title>Stories by Kevin Cisse on Medium</title>
            <link>https://medium.com/@kevcisse?source=rss-cae979e91ae1------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 20:09:39 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@kevcisse/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Where AI Systems Fail]]></title>
            <link>https://medium.com/@kevcisse/where-ai-systems-fail-ab31321c47cc?source=rss-cae979e91ae1------2</link>
            <guid isPermaLink="false">https://medium.com/p/ab31321c47cc</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[systems-integration]]></category>
            <category><![CDATA[small-business]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[enterprise-architecture]]></category>
            <dc:creator><![CDATA[Kevin Cisse]]></dc:creator>
            <pubDate>Wed, 15 Apr 2026 16:55:00 GMT</pubDate>
            <atom:updated>2026-04-16T21:59:27.176Z</atom:updated>
<content:encoded><![CDATA[<p>(Why Most Teams Aren’t Ready)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*a4zAypSdH9sgnpeQWXgsEw.png" /></figure><p>AI is working overtime.</p><p>Teams are moving faster.<br>Output is increasing.<br>Workflows are getting compressed.</p><p>Observing from the outside, one might say this looks like progress.</p><p>But once AI moves beyond individual usage and into real workflows, funny things start to happen — or not so funny, depending on how you shake it (the magic AI 8-ball):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*491mTIE-lXxm9Qo34Rt-bw.png" /></figure><p><em>The system begins to break in ways most teams didn’t expect</em> because <em>we’re deploying probabilistic systems into environments that expect deterministic outcomes.</em></p><p><em>Let that sink in.</em></p><p>AI systems don’t fail because they’re probabilistic.<br>They fail when deployed without systems designed to manage uncertainty.</p><h3>AI Works at Scale</h3><p>It’s easy to assume that AI systems are inherently unreliable.</p><p>But that’s not what we’re seeing.</p><p>Companies like <a href="https://factory.graymatter-robotics.com/">Graymatter Robotics</a>, <a href="https://waymo.com/">Waymo</a>, <a href="https://zoox.com/">Zoox</a>, <a href="https://www.overland.ai/">Overland AI</a>, and <a href="https://www.tesla.com/">Tesla</a> are already deploying AI in environments where failure isn’t an option.</p><p>These companies are deploying autonomous cars and autonomous mobile robots in cities and factories across America. These are some of the most complex, high-stakes applications of AI in existence.</p><p>And it works. 
Not because the AI is perfect.</p><p>But because:</p><blockquote><strong><em>The system around the AI is designed for imperfection.</em></strong></blockquote><ul><li>Redundancy</li><li>Validation</li><li>Continuous feedback</li><li>Defined constraints</li></ul><p>These systems don’t assume correctness.</p><p>They are built to <strong>manage uncertainty at every layer.</strong></p><h3>Most Companies Aren’t There Yet</h3><p>In contrast, most organizations are:</p><ul><li>Adopting AI at the tool level</li><li>Not the system level</li><li>Increasing usage without increasing structure</li></ul><p>Which leads to:</p><blockquote><strong><em>Acceleration without reliability.</em></strong></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*o_i5pqgQc3I8e3BW1hhq3Q.png" /></figure><h3>The Illusion of “It Works”</h3><p>At the individual level, AI feels incredibly reliable.</p><ul><li>You prompt it, then you get an answer</li><li>You refine it, then the output improves</li><li>You use it daily, then it becomes part of your workflow</li></ul><p>So the natural assumption is:</p><blockquote><em>“Now this we can scale!”</em></blockquote><p>But scaling AI isn’t just increasing usage.</p><p>It’s introducing it into:</p><ul><li>Production systems</li><li>Customer-facing workflows</li><li>Decision-making processes</li></ul><p>This is where things change, because there are likely scenarios you haven’t considered.</p><h3>Where Things Actually Fall Apart</h3><p>Not theoretically.</p><p>More like practically.</p><p>And repeatedly.</p><h4>Hallucinations in Production Environments</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*inWcgUPNvMcXVzpPVSV6XQ.png" /></figure><p>In isolation, a slightly incorrect answer is manageable.</p><p>In production?</p><p>It becomes a huge liability.</p><ul><li>AI support agents giving wrong answers to customers</li><li>Internal tools generating inaccurate insights</li><li>Documentation being created with 
subtle but critical errors</li></ul><p>The issue isn’t that AI makes mistakes.</p><blockquote><em>It’s that most systems don’t have a </em><strong><em>structured way to detect or handle them.</em></strong></blockquote><h4>Lack of Evaluation Frameworks</h4><p>Most teams don’t actually know how to measure AI performance.</p><p>They rely on:</p><ul><li>“Looks good”</li><li>“Seems right”</li><li>“Worked last time”</li></ul><p>But when AI becomes part of a workflow, that’s not enough.</p><p>You need:</p><ul><li>Defined success criteria</li><li>Repeatable evaluation methods</li><li>Feedback loops</li></ul><p>Without that:</p><blockquote><em>You’re scaling something you can’t reliably measure.</em></blockquote><h4>Workflow Fragmentation</h4><p>AI adoption often happens tool-by-tool:</p><ul><li>One team uses one model</li><li>Another uses a different workflow</li><li>A third builds something custom</li></ul><p>Individually, it works.</p><p>Organizationally?</p><p>There’s no shared system and instead we have parallel acceleration. 
This in turn <em>creates inconsistency, duplication, and hidden complexity.</em></p><h4>Lack of Clear Ownership</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wJEJt1F33cU6HIOGYbfJ5Q.png" /></figure><p>Who owns AI in your organization?</p><ul><li>Engineering?</li><li>Product?</li><li>Ops?</li><li>Security?</li></ul><p>In most companies, the answer is:</p><blockquote><em>“Everyone… and no one.”</em></blockquote><p>Which means:</p><ul><li>No one defines standards</li><li>No one enforces best practices</li><li>No one is accountable for failures</li></ul><p>And when something goes wrong:</p><blockquote><em>It’s unclear where the responsibility actually sits.</em></blockquote><h4>Over-Reliance Without Safeguards</h4><p>As teams get comfortable with AI, something subtle happens:</p><p>They trust it more.</p><p>They check it less.</p><p>They integrate it deeper into workflows.</p><p>That’s when risk compounds.</p><p>Not at the beginning.</p><blockquote><em>But after adoption feels “normal.”</em></blockquote><h3>The Core Problem</h3><p>All of these issues point to the same underlying gap:</p><blockquote><strong><em>AI capability is advancing faster than AI system design.</em></strong></blockquote><p>We’re great at:</p><ul><li>Accessing models, because the technical barrier to entry consists of navigating to chatgpt.com from your browser and typing something as simple as “Hello World”</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*RvyccaYV-7caO6d3mTdhuw.png" /></figure><ul><li>Using tools to increase productivity — as it’s been since the dawn of time</li><li>Increasing output, because process improvement, or in other words “getting better,” is inherently human. 
It’s what we do.</li></ul><p>We’re not yet great at:</p><ul><li>Structuring AI systems, because we’re dealing with a product that is currently experiencing controlled adoption by the general public (at least in the US — unless you’re in tech) until trust is built over time.</li><li>Designing for failure, because that’s dependent on more input data. We haven’t yet seen or defined all of the edge cases where AI systems begin to fail. Though that could soon begin to change as we continue to adopt these systems into our everyday lives.</li><li>Building reliable operational layers, because there aren’t many organizations taking the systems approach to AI adoption at an operational level. It’s often teams operating in silos with their own agendas, unconcerned with the next team’s objectives.</li></ul><h3>What Mature AI Systems Actually Require</h3><p>If AI is going to move from isolated experiments to infrastructure, it needs more than usage.</p><p>It needs systems design. It needs reliable architecture built at scale.</p><h4>Failure-Aware Architecture</h4><p>Systems need to assume that <em>AI will sometimes be wrong.</em></p><p>That means:</p><ul><li>Implementing fallback mechanisms</li><li>Creating, implementing, and measuring confidence thresholds</li><li>Human checkpoints &amp; final approval</li></ul><h4>Evaluation as a First-Class System</h4><p>Not an afterthought.</p><p>Teams need:</p><ul><li>Defined metrics</li><li>Continuous testing</li><li>Output validation pipelines</li></ul><h4>Standardized Workflows</h4><p>Not just tools.</p><p>But:</p><ul><li>Shared patterns</li><li>Documented processes</li><li>Repeatable systems</li></ul><h4>Clear Ownership</h4><p>Someone needs to own:</p><ul><li>AI usage standards</li><li>Risk thresholds</li><li>System performance</li></ul><p>Without that:</p><blockquote><em>You don’t have a system. You have activity.</em></blockquote><h3>What This Means Going Forward</h3><p>AI adoption is no longer optional. 
The baseline is rising quickly, and the gap between companies that use AI and those that don’t is already closing.</p><p>The next divide will be more subtle — and more important.</p><blockquote><strong><em>Between companies that use AI and companies that can operationalize it.</em></strong></blockquote><p>Those who figure this out will compound their advantage. Those who don’t won’t fall behind immediately — but over time, the difference in reliability, efficiency, and trust will become hard to ignore.</p><h3>Final Thoughts</h3><p>AI is no longer experimental. At the highest levels, it’s already working. But AI doesn’t fail all at once. It fails quietly.</p><ul><li>In edge cases</li><li>In overlooked outputs</li><li>In unmonitored workflows</li></ul><p>Until one day, those small failures compound.</p><p>The question isn’t whether AI is ready.</p><blockquote><strong>It’s whether your systems are ready for AI.</strong></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Zi2fdA9DCzul0l_hI9n89g.png" /></figure><h3>What Comes Next</h3><p>In the next piece, we’ll break down:</p><blockquote><strong><em>What an actual “AI operational layer” looks like — and how organizations can start building one without slowing down their teams.</em></strong></blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ab31321c47cc" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AI’s Already Accelerating Your Team. Now You Need to Control It.]]></title>
            <link>https://medium.com/@kevcisse/ais-already-accelerating-your-team-now-you-need-to-control-it-9418ae809284?source=rss-cae979e91ae1------2</link>
            <guid isPermaLink="false">https://medium.com/p/9418ae809284</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[manufacturing]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[marketing]]></category>
            <dc:creator><![CDATA[Kevin Cisse]]></dc:creator>
            <pubDate>Fri, 03 Apr 2026 20:55:07 GMT</pubDate>
            <atom:updated>2026-04-03T20:55:07.040Z</atom:updated>
            <content:encoded><![CDATA[<p>AI adoption inside companies didn’t start with strategy.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OPjG_a-N86Biu8qhpc0SIw.png" /></figure><p>It started with individuals.</p><p>Engineers using agentic solutions to move faster.<br>Marketers drafting email campaigns in a matter of minutes.<br>Power Users automating repetitive work.</p><p>And in many cases — it’s working.</p><blockquote><em>Agile Teams are shipping faster.<br>Output is increasing.<br>The upside couldn’t be more real.</em></blockquote><p>The question now isn’t whether AI should be used. It’s how to <strong>turn individual gains into a system you can trust at scale.</strong></p><h3>From Individual Advantage to Organizational Capability</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qb-qLrJLHS_79pOg2BbyKA.png" /></figure><p>Right now, most AI usage lives at the <strong>individual level</strong>:</p><ul><li>Personal workflows</li><li>Ad hoc prompting</li><li>Tool-by-tool experimentation</li></ul><p>That’s where speed comes from.</p><p>But speed alone doesn’t scale.</p><p>To make AI a true organizational advantage, it needs to evolve into:</p><ul><li>Repeatable workflows</li><li>Shared standards</li><li>Observable systems</li></ul><blockquote><em>Otherwise, you don’t have an AI capability — you have scattered acceleration.</em></blockquote><h3>The Gap: Acceleration Without Structure</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WBkAIX5Oy0lj6KuX2LNunw.png" /></figure><p>Even in companies with policies and approved tools, a gap is emerging:</p><ul><li>AI is being used across teams</li><li>But usage isn’t consistently visible</li><li>Outputs aren’t always validated the same way</li><li>Workflows aren’t standardized</li></ul><p>This isn’t a failure. 
It’s what happens when:</p><blockquote><strong><em>Technology adoption moves faster than operational design.</em></strong></blockquote><p>We’ve seen this before:</p><ul><li>Cloud before cloud governance</li><li>Data before data infrastructure</li><li>SaaS before IT standardization</li></ul><p>AI is following the same path — only way faster.</p><h3>What Tech Leaders Actually Need</h3><p>This isn’t about restricting AI.</p><p>It’s about <strong>making it reliable at scale</strong>.</p><p>That means taking a systems approach to consider:</p><h3>1. Visibility</h3><ul><li>Where is AI being used?</li><li>For what types of tasks?</li><li>With what level of risk?</li></ul><h3>2. Standards</h3><ul><li>What’s acceptable input/output?</li><li>When does human validation kick in?</li><li>What workflows should be shared amongst teams?</li></ul><h3>3. Control Without Friction</h3><ul><li>Enabling teams to move fast</li><li>Without introducing unnecessary bottlenecks</li><li>While still protecting critical systems and data</li></ul><h3>The Shift</h3><p>What’s happening now:</p><blockquote><em>AI is a personal productivity layer.</em></blockquote><p>What it needs to become:</p><blockquote><strong><em>An operational layer inside your organization.</em></strong></blockquote><p>Winning organizations will identify that they need to shift from:</p><ul><li>Individual experimentation to organizational capability</li><li>Individual speed to reliability at scale</li><li>Individual wins to organizational leverage</li></ul><h3>Final Thoughts</h3><p>The companies that win won’t be the ones that adopt AI the fastest. They’ll be the ones that <strong>turn it into something they can trust, measure, and scale.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9418ae809284" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Future of AR/VR: Which Reality Will You Be Working In?]]></title>
            <link>https://medium.com/@kevcisse/the-future-of-ar-vr-which-reality-will-you-be-working-in-4b13a9b046d9?source=rss-cae979e91ae1------2</link>
            <guid isPermaLink="false">https://medium.com/p/4b13a9b046d9</guid>
            <category><![CDATA[vr]]></category>
            <category><![CDATA[virtual-reality]]></category>
            <category><![CDATA[augmented-reality]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Kevin Cisse]]></dc:creator>
            <pubDate>Mon, 05 Jan 2026 05:03:37 GMT</pubDate>
            <atom:updated>2026-01-11T01:30:20.752Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9HuoD5qwd6tMCYqWREUSYg.png" /><figcaption>From operating rooms to factory floors to living rooms, AR and VR are reshaping how we work, create, and consume.</figcaption></figure><p>In an age where it’s no longer uncommon to hear someone say, <em>“We’re living in a simulation,”</em> it’s worth asking a more practical question:</p><p><strong>Which reality do you see yourself living — and working — in 10 years from now? What about 20?</strong></p><p>To some, this line of thinking might sound a bit off the rails. But if you live in a major U.S. city, chances are you’ve already seen hundreds of Waymo vehicles driving around autonomously — no driver in sight. Some of you might even be brave (or crazy) enough to sit shotgun in one of them. Who am I to judge?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*a_k3U0XK4LZT8MDNtRW-LQ.png" /><figcaption>Autonomous rides, sunset skies, and a city in motion — welcome to the future of urban mobility.</figcaption></figure><p>If I told you 20 years ago that this would be your reality today, would you have believed me?</p><p>That same question applies to <strong>Augmented Reality (AR)</strong> and <strong>Virtual Reality (VR)</strong>.</p><h3>Augmented Reality vs. Virtual Reality: What’s the Difference?</h3><p><strong>Augmented Reality (AR)</strong> overlays digital content — images, text, audio, or contextual data — onto the real world in real time. It enhances your perception of your environment rather than replacing it. AR blends the digital and physical worlds.</p><p><strong>Virtual Reality (VR)</strong>, on the other hand, is fully immersive. It transports users into a completely digital 3D environment, effectively shutting out the physical world. 
If you haven’t read the book <a href="https://amzn.to/4sB7Ott"><em>Ready Player One</em></a>, maybe you’ve heard of the movie <a href="https://amzn.to/3Nnuq0p"><em>Ready Player One</em></a>. Users interact with their virtual environment through headsets, controllers, gloves, or motion tracking systems.</p><p>Both are powerful — but they solve very different problems.</p><h3>So… What’s the Point?</h3><p>We already see AR and VR delivering value across industries:</p><ul><li><strong>Retail:</strong> Virtual try-ons for clothing and makeup (Sephora, <a href="https://makeuseoftech.com/try-ikea-products-without-purchasing-virtually/">IKEA Place</a>)</li><li><strong>Gaming:</strong> <a href="https://pokemongo.com/en">Pokémon GO</a>, <a href="https://minecraftspe.com/minecraft-earth/">Minecraft Earth</a></li><li><strong>Education:</strong> <a href="https://www.medicinevirtual.com/">AR anatomy and visualization tools</a> for medical students</li><li><strong>Construction:</strong> On-site visualization of building plans and structural overlays with <a href="https://www.autodesk.com/blogs/construction/extended-reality-construction-ar-vr-mr/">Autodesk Construction Cloud</a></li><li><strong>Navigation:</strong> Google Maps Live View with AR street directions</li><li><strong>Maintenance &amp; Manufacturing:</strong> <a href="https://www.ptc.com/en/products/vuforia">AR-guided repair</a>, <a href="https://www.3ds.com/products/delmia/augmented-experience">inspection</a>, and <a href="https://www.siemens.com/global/en/company/stories/industry/digitaltwin-virtualreality-augmentedreality-service-porsche.html">training overlays</a></li></ul><p>The question isn’t <em>if</em> these technologies work.<br> It’s <strong>how — and how fast — they become normal.</strong></p><h3>Adoption Is the Real Challenge</h3><p>Having seen AR/VR deployments firsthand in manufacturing environments, one thing is clear: <strong>adoption is hard</strong>.</p><p>Sometimes the technology simply isn’t mature 
enough for certain high-value use cases. Other times, skepticism gets in the way — especially when end users believe AR/VR is being introduced to automate them out of a job or to complicate workflows with yet another tool.</p><p>This tension isn’t new.</p><h3>A Brief History of “Too Early”</h3><p>Google turned heads in 2012 with <a href="https://en.wikipedia.org/wiki/Google_Glass">Google Glass</a> — but consumers weren’t ready. The product lacked clear use cases, social acceptance, and distribution.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YROJAV3cxKil_5xJqyEmpg.png" /><figcaption>Google Glass ahead of its time, ready for its moment.</figcaption></figure><p><a href="https://en.wikipedia.org/wiki/Spectacles_(product)">Snap Inc. took a different approach.</a> After acquiring Vergence Labs in 2014, Snap released Spectacles 2 years later via Snapbot vending machines in select markets like Los Angeles. It felt playful, experimental, and exciting — and it worked.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0gGBUdAc-_LaS6paP7QyGw.jpeg" /><figcaption>Photo courtesy of CNET</figcaption></figure><p>Fast forward to today: Meta has physical storefronts selling <a href="https://www.meta.com/ai-glasses/">Ray-Ban Meta and Oakley Meta AI glasses</a>.</p><p>Meta and Snap didn’t just build hardware — they built <strong>ecosystems first</strong>. Both companies already had millions of loyal users creating content on their platforms. Iterating on AR hardware was a natural extension: better tools enable better content, which drives engagement, traffic, and ultimately ad revenue.</p><p>Google tested the idea.<br> Snap and Meta operationalized it.</p><h3>Where VR Fits — and Where It Struggles</h3><p>Some people still shake their heads at Meta’s rebrand from Facebook to Meta. The Metaverse was supposed to redefine how we collaborate and work. 
While the Oculus and later Meta Quest headsets were well received, common complaints followed:</p><ul><li>Head and neck discomfort during extended use</li><li>Battery life limitations</li><li>Hardware weight and ergonomics</li></ul><p>VR excels in <strong>gaming, workforce training, and technical demos</strong>, but at its current price point and form factor, it’s hard to imagine mass adoption in everyday work life within the next five years — unless the hardware becomes significantly lighter, more comfortable, and longer-lasting.</p><h3>Why AR Will Shape the Near Future</h3><p>AR, however, is a different story.</p><p>In the next 5–10 years, we’ll have to seriously address how AR integrates into classrooms — much like the conversations already happening around AI in education.</p><p>Surgeons like <a href="https://www.instagram.com/p/DR4pog1kZS_/?utm_source=ig_web_copy_link&amp;igsh=MzRlODBiNWFlZA==">Dr. Bestman</a> are already using Ray-Ban Meta glasses to record procedures (without patient identifiers) for training and collaboration. The Meta Ray-Ban Display glasses also support <strong>real-time language translation</strong> — imagine global teams collaborating seamlessly, each speaking their native language.</p><p>The biggest blocker? <strong>Data privacy.<br></strong> But that’s an article for another day.</p><h3>Who’s Leading the Race?</h3><p>Today, Meta appears to be leading the AR hardware race with its Ray-Ban Meta smart glasses, pairing consumer-ready hardware with a mature content and developer ecosystem. Snap Inc. follows closely, driven by strong R&amp;D and multiple successful iterations of Spectacles, with further advancements expected in the coming years. 
Google — arguably the first to spark widespread interest in AR (smart) glasses — is positioning itself for a <a href="https://www.businessinsider.com/google-augmented-reality-reset-xr-glasses-android-qualcomm-2024-7">return with a revised strategy</a>: focusing on building an open platform for developers while <a href="https://www.cnet.com/tech/computing/googles-putting-it-all-on-glasses-next-year-my-demos-with-project-aura-and-more/">partnering with established hardware manufacturers</a> such as Samsung, Xreal, Warby Parker, and Qualcomm, rather than producing consumer devices in-house. Apple is an obvious contender as well, though it has remained relatively quiet publicly. Given its history, it’s reasonable to assume significant investment in AR-related R&amp;D is already underway.</p><p>The more interesting question is this:</p><p><strong>Will we see an open, customizable AR wearable in the next 5–10 years?</strong></p><ul><li>Open-source development for builders</li><li>The ability to opt out of data collection</li><li>Customizable hardware, with CAD files available to 3D print replacement parts</li></ul><p>Sounds unrealistic?</p><p>That’s already happening at <a href="https://www.slate.auto/en"><strong>Slate</strong></a>. More on that <a href="https://youtu.be/Vv0wC_ffAHU?si=3zStI5XazgeBVA33">here</a> — courtesy of my good friend Rich.</p><h3>The Future Is Already Here</h3><p>It just hasn’t fully settled into our daily workflows yet.</p><p>The same way autonomous vehicles quietly became normal, AR will too — not all at once, but gradually, then suddenly.</p><p>The question isn’t whether AR and VR will shape the future of work.</p><p>It’s <strong>which reality you’re preparing for.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4b13a9b046d9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Climb]]></title>
            <link>https://medium.com/@kevcisse/the-climb-74d431f32605?source=rss-cae979e91ae1------2</link>
            <guid isPermaLink="false">https://medium.com/p/74d431f32605</guid>
            <category><![CDATA[gamechangers]]></category>
            <category><![CDATA[covid-diaries]]></category>
            <category><![CDATA[covid19]]></category>
            <category><![CDATA[inspiration]]></category>
            <category><![CDATA[game-changing-leadership]]></category>
            <dc:creator><![CDATA[Kevin Cisse]]></dc:creator>
            <pubDate>Thu, 25 Jun 2020 04:22:52 GMT</pubDate>
            <atom:updated>2020-07-07T17:13:57.328Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/450/1*144tjztvsGrTb1gmUF7hAQ.jpeg" /></figure><p><strong>It’s easy to get lost in the weeds</strong>. If you’re like me, you might have been blaming the Coronavirus for every one of your 2020 problems. But why? For most, shifting blame onto someone or something else is reflexive. One could say instinctive, but we are not animals. Challenges allow us to grow. <strong>So, let’s not get stuck in the weeds.</strong> The Washington Post reported, “2.4 million Americans filed jobless claims last week, bringing nine-week total to 38.6 million”<a href="https://www.washingtonpost.com/business/2020/05/21/unemployment-claims-coronavirus/">1.</a> The most we’ve seen in decades. The New York Times also reported: “The cause of this recession — a global pandemic — means that our economic future will be determined in large part by the path of the virus” <a href="https://www.nytimes.com/2020/05/21/business/stock-market-today-coronavirus.html">2.</a></p><p>Well, there goes our year. Right? Wrong. The Coronavirus might have caused you to lose your job, it might have had a negative impact on your personal relationships, or even cost the life of a loved one. If history has taught us anything, it’s that our reaction to difficult situations will have as much of an impact on the result as the cause. Essentially — <strong>cause + reaction = result</strong>. Don’t get stuck in the weeds worrying about who to blame for your problems. <strong>Instead, design and implement solutions</strong>.</p><h3>The path less taken. The hard road.</h3><p>Let’s grow and work towards a common goal. While individually our goals may be different, collectively the goal should be to emerge from this better people. I recently sat down with Kevin L. Nichols, the founder and CEO of the Social Engineering Project. 
The Social Engineering Project is an Oakland-based, Google- and Microsoft-funded social impact venture with Stanford University that is designed to address the lack of diversity in the tech industry through pipeline programs for underrepresented students of color. Kevin’s status as a <em>Gamechanger</em> is attributed to the fact that he is at the forefront of effecting much-needed positive change. His professional career began at the Lawrence Livermore National Laboratory as a mechanical engineering intern. After realizing engineering was not the path in line with his values, he went on to work as a legal assistant for Morrison &amp; Foerster, where he founded their diversity program. Nowadays, he’s aiming to solve the tech industry’s “diversity problem” with the help of his co-founder, Brian A. Brown. Their purpose is only just beginning to resonate with the masses.</p><h3>Abyss.</h3><p>In <em>Principles</em>, Ray Dalio coins the term “abyss” in reference to the low points in one’s life. Ray explains how one should look forward to these moments, expecting them, because they are bound to happen. The low points, the hard times, are where growth happens. This is where we need to spend time <strong>learning in order to rebuild</strong>. Let’s consider one of my favorite examples from <em>The Dark Knight Rises</em>, a scene called <a href="https://youtu.be/KXxw-zXRqOs">“The Climb”</a>. During our conversation, Kevin noted, “The climb is synonymous with the ladder…and you never really get to the top until you die”. You have to be present, focus on the step in front of you. When you think you’ve reached the top, you realize there’s always another climb. Nothing is forever — good or bad. But while there’s life, there’s hope.</p><p>If you’re like me, you had big plans for 2020 and it’s been really shitty so far. Really shitty. But if you can’t stand the smell of shit, then don’t dwell in it. Let’s make this the greatest comeback story ever told. 
<strong>The Bounce Back.</strong> Kevin Nichols and I talked about his decline in funding during COVID. I have a feeling he won’t let that stop him. He and Brian A. Brown will continue on their journey despite whatever obstacles lie waiting. Their annual Science in the City event was due to be cancelled because of the Coronavirus. That would’ve meant a little over a hundred kids missing out on an amazing learning opportunity. The solution? Science in the City is going virtual. We don’t know what tomorrow will look like. Truth is, we never did. This shouldn’t scare us into paralysis. Do what is necessary today to put yourself and those you love in a better place tomorrow. We have a golden opportunity to re-create our future.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=74d431f32605" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>