<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Microsoft Design - Medium]]></title>
        <description><![CDATA[Stories from the thinkers and tinkerers at Microsoft. Sharing sketches, designs, and everything in between. - Medium]]></description>
        <link>https://medium.com/microsoft-design?source=rss----71c99841f1ad---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Microsoft Design - Medium</title>
            <link>https://medium.com/microsoft-design?source=rss----71c99841f1ad---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 03:37:46 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/microsoft-design" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The age of the design hacker]]></title>
            <link>https://medium.com/microsoft-design/the-age-of-the-design-hacker-7023878d8f24?source=rss----71c99841f1ad---4</link>
            <guid isPermaLink="false">https://medium.com/p/7023878d8f24</guid>
            <category><![CDATA[ui]]></category>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[ux]]></category>
            <dc:creator><![CDATA[Microsoft Design]]></dc:creator>
            <pubDate>Thu, 16 Oct 2025 16:39:14 GMT</pubDate>
            <atom:updated>2025-10-16T16:39:09.598Z</atom:updated>
            <content:encoded><![CDATA[<p>Six practical mindsets for building secure and resilient UX</p><p>By Venita Subramanian</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*2fqJseZ5nd7b3QaE.jpg" /></figure><p><em>Venita Subramanian is a Design lead within Microsoft Security UX where she spearheads Secure by Design, a company-wide initiative focused on blending creativity, craft, and frameworks to make security a natural part of every experience from the start.</em></p><p>The idea of an “ethical design hacker” may sound outlandish, like some kind of digital-era Robin Hood that exploits a system to protect the people. But to thrive in today’s UX landscape, where product makers aren’t just crafting interfaces but shaping entire systems in real time, ethical hacking is a mindset shift that can empower us to flip the role of design from reactive to anticipatory. By applying the same creativity, curiosity, and persistence attackers use, we can spot vulnerabilities before they do, strengthening and protecting the products we create.</p><p>This is critical as UX enters an era of unprecedented speed and scale. The way we design, build, and deliver products is accelerating, collapsing timelines that once defined our craft. At a recent talk I gave at Design Week, Microsoft’s largest design conference, the theme was unmistakable: AI isn’t just another tool. It is reshaping how we work, what customers expect, and how quickly we are expected to deliver.</p><p>Meanwhile, the same technology fueling innovation is creating new vulnerabilities at an equally unprecedented pace. Microsoft now tracks over <a href="https://www.microsoft.com/en-us/security/security-insider/threat-landscape/microsoft-digital-defense-report-2024?">1,500 unique threat groups</a>, from nation-state actors to cybercrime syndicates — many already using the very tools we rely on to create. The ground beneath us is shifting fast, and UX design’s role isn’t shrinking in response. It’s expanding, and we are actively shaping the systems, behaviors, and safeguards that determine whether products are trusted or exploited.</p><h3>Secure by Design: a cultural shift</h3><p>When we launched <a href="https://microsoft.design/articles/secure-by-design-a-ux-toolkit/">Secure by Design: UX</a> last November, we grounded it in guidelines, frameworks, and tools that helped teams anticipate vulnerabilities before code was written. From the start, the ambition was cultural transformation: making security a shared part of design practice. That goal remains unchanged. What has shifted, and keeps shifting, is the UX and threat landscape; the tension we face today looks different from even just twelve months ago.</p><p>Craft and human-centered design remain our foundation, but speed now dominates. Those shaping experiences have always been taught to slow down, to test, to refine with intention. Today’s timelines often treat those practices as obstacles — even though security, another cornerstone, cannot be overlooked. UX design is no longer separate from making. We are writing code, building flows, and shaping systems that go straight into production. That means our influence on security outcomes is immediate and undeniable. The way forward is not more gates or process. It is a mindset shift: we are no longer only UX designers, researchers, and content specialists responding to risks after they appear. 
We are ethical design hackers, anticipating risks before they surface.</p><h3>What does it mean to think like an ethical design hacker?</h3><p>To think like an ethical design hacker is to look at a product the way an adversary might, but through the lens of UX design. It means scanning flows and interactions for places where small gaps could cascade into bigger risks and experimenting with safeguards that make resilience part of the experience. Where ethical hackers stress-test code, teams shaping experiences can stress-test the user journey, anticipating how vulnerabilities emerge not only from technical flaws but from the ways people interact with systems. In this mindset, we are not only creating products; we are protecting the people who use them. And because technology, threats, and user expectations move too quickly for static rules alone to keep pace, what endures are the mindsets we bring to our craft. They help us approach problems from new angles, anticipate risks earlier, and design with both innovation and responsibility in mind.</p><p>A great example of this is EchoLeaks, the name of a flaw in Microsoft 365 Copilot that security researchers uncovered earlier this year. On the surface, it looked like an ordinary email, but hidden instructions in the background formatting silently turned Copilot into an attack tool. With no clicks and no warnings, sensitive data was quietly sent out, invisible to the user. The Copilot team moved quickly to address the issue, and no customers were impacted. Still, EchoLeaks highlights a broader challenge we see across many AI systems: hidden inputs driving visible actions without cues or controls for the user.</p><p>Attacks like prompt injection, data leakage, and feature abuse are emerging in many forms across AI-powered products. In each instance, the weak point is the same: invisible automations, ambiguous interactions, and users left without visibility or control. These are not engineering problems alone — they are UX problems. And they show why adopting the mindset of an ethical design hacker is no longer optional. That shift comes to life through six design mindsets: practical ways to reframe our craft so we design not only for usability, but for resilience.</p><h3>6 mindsets that shape ethical design hacking</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*INR2rtK6mZ9C9OA_.png" /></figure><h3>Mindset 1: Always anticipate misuse</h3><p>UX practitioners are trained to think about the ideal path, how something should work. Attackers think the opposite. They look for the cracks: vague prompts, gray areas, edge cases where the system behaves in ways no one expected. In AI systems, that ambiguity is everywhere. A single open-ended question can pull in more information than a user intended or expose data that was never meant to be surfaced.</p><p>Anticipating misuse flips the script. It asks us to pause and ask: How could this feature be twisted, stretched, or chained with something else? By designing for the worst-case alongside the best-case, we build systems that hold up under pressure. When users can trust that our products will not fail them in messy, unpredictable moments, teams earn the freedom to innovate and move faster.</p><h3>Mindset 2: Don’t let the details tell the story</h3><p>Attackers rarely need full access to break a system. They often stitch together fragments — one confirmation here, a count of results there, a subtle change in how content loads — and use them to infer something bigger. 
What feels like harmless detail to us can become a breadcrumb trail that gives away the whole picture.</p><p>Think about what happens when a system refuses a request. If the response explains why, hinting that the information exists but is restricted, it has already revealed something sensitive. Even a small confirmation can help attackers map what is behind the wall. Multiply those hints across multiple interactions, and suddenly the system is telling a story it was never meant to.</p><p>For UX, this means asking not just what we are showing, but what story could someone tell if they put these pieces together. Designing securely is not about hiding everything; it is about being intentional with the signals we expose. When details are managed with care, we preserve utility for users while denying attackers the narrative they are trying to construct.</p><h3>Mindset 3: Guard against feature abuse</h3><p>Features designed to help can just as easily be turned against us. Autocomplete, previews, or sharing options seem harmless, but in the wrong hands they can be manipulated to extract sensitive data, mislead users, or bypass intended safeguards. What delights in one context can become an attack vector in another.</p><p>Guarding against this does not mean shutting down functionality. It means stress-testing features with the mindset of an adversary: asking how they might string together outputs, manipulate defaults, or exploit convenience. Sometimes the fix is as simple as limiting exposure, adding a confirmation step, or tightening defaults so that power is not handed over too easily.</p><p>When we design with feature abuse in mind, we are not only protecting systems; we are protecting trust.</p><h3>Mindset 4: Know the why behind the AI</h3><p>Designing experiences without understanding how AI makes decisions is like working in the dark. If product makers do not know what data the system is drawing from, what logic shapes its answers, or what conditions trigger a response, they cannot anticipate how users will experience it. That gap leaves teams unprepared when the system behaves in ways that feel random, inconsistent, or even unsafe.</p><p>The fix is not for UX teams to master every technical detail. It is to ask the right questions. What data is the model using? What hidden rules shape its behavior? Are we surfacing outputs we cannot fully explain? Working side by side with engineering, security, and data science partners turns the system from a black box into something we can design for with intention. Transparency makes our decisions sharper, and it makes users’ experiences more trustworthy.</p><h3>Mindset 5: Anonymize by default</h3><p>Names, IDs, and personal markers sneak into more designs than we realize. A status dashboard might show who triggered an alert. A collaborative tool might reveal which teammate last opened a file. A system log might record far more than is necessary to troubleshoot an issue. Each of these details can seem harmless, but together they create risk by exposing personal or sensitive data that attackers can use to target individuals or map relationships.</p><p>Anonymizing by default flips the starting point. Instead of asking “what information should we hide?” the question becomes “what information do we truly need to show?” Sometimes the answer is none. Other times, anonymized or aggregated data serves the same purpose for the user without exposing individuals.</p><p>This mindset does not mean stripping away context or accountability. 
It means designing with care so that people remain protected while systems remain usable. By minimizing exposure up front, we reduce the chance that our designs leak more than they should.</p><h3>Mindset 6: Build security together</h3><p>Security is a team sport, built through shared responsibility across disciplines. Designers, researchers, content writers, engineers, product managers and security experts all have a part to play in spotting risks and shaping safeguards. The strongest defenses emerge when these perspectives come together, not when they operate in silos.</p><p>The most effective way to do this is through reuse. Instead of every team inventing its own fixes, we can rely on established security patterns, frameworks, and guidance. Reusing and evolving these shared solutions makes products more consistent, reduces duplication, and helps secure practices scale across an organization.</p><p>For UX, this collaboration is especially powerful. When patterns are reused, they do not just make systems safer; they make them easier to use. Consistent safeguards become invisible helpers instead of frustrating obstacles. By building security together, we create cohesive, resilient experiences that protect users without slowing them down.</p><p>Great UX has always meant usable and delightful. Now it must also mean trustworthy products people can rely on even when adversaries are testing their limits. The true mark of ethical design hacking is not in avoiding failure, but in ensuring users never encounter it. The Secure Future Initiative is advancing this mindset across Microsoft, bringing together product makers to build experiences that earn and sustain trust. To learn more about this approach, explore the <a href="https://www.microsoft.com/en-us/trust-center/security/secure-future-initiative?msockid=2989ee0bd86e69bc0c3ff834d94368c9">Secure Future Initiative</a>.</p><p><em>Grateful to Sabine Roehl (CVP, Microsoft Security UX) for her pioneering leadership in Secure UX, to Charlie Bell (EVP, Security) and the Secure Future Initiative leadership for driving this vision across Microsoft, and to Joel Williams (Design Director, Identity Design) for his partnership and commitment to this mission. Deep appreciation to the extended teams across Microsoft for bringing Secure by Design UX to life through their dedication and collaboration.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7023878d8f24" width="1" height="1" alt=""><hr><p><a href="https://medium.com/microsoft-design/the-age-of-the-design-hacker-7023878d8f24">The age of the design hacker</a> was originally published in <a href="https://medium.com/microsoft-design">Microsoft Design</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Fluid forms, vibrant colors: how a subtle refresh of our Microsoft 365 icons signals deeper change]]></title>
            <link>https://medium.com/microsoft-design/fluid-forms-vibrant-colors-how-a-subtle-refresh-of-our-microsoft-365-icons-signals-deeper-change-646ad772739b?source=rss----71c99841f1ad---4</link>
            <guid isPermaLink="false">https://medium.com/p/646ad772739b</guid>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[ui]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[ux]]></category>
            <dc:creator><![CDATA[Microsoft Design]]></dc:creator>
            <pubDate>Wed, 01 Oct 2025 16:02:30 GMT</pubDate>
            <atom:updated>2025-10-01T17:20:52.327Z</atom:updated>
            <content:encoded><![CDATA[<p>By Jon Friedman, CVP of Design and Research for Microsoft 365</p><p>Read the full article on our new website <a href="https://microsoft.design/articles/fluid-forms-vibrant-colors/">here</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*rAy2xSi6BqHCWGBK.png" /></figure><p>When it comes to outsized impact, it’s hard to debate the almighty icon. No bigger than a postage stamp, these tiny symbols are gateways to entire experiences, distilling complex ideas, product abilities, and brand identities into a single, memorable image. By evoking emotion, sparking curiosity, and giving intuitive guidance, they make technology more accessible and approachable.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*iQJYqE-nh_f39DKv.png" /></figure><p>Today, as we roll out refreshed icons for Microsoft 365 apps, small but significant design changes are a reflection and a signal. As a reflection, they encapsulate how AI is shifting the discipline of design and the nature of product development. As a symbol, they embody an ethos rooted in connection, coherence, and fluid collaboration. While these principles guided previous redesigns, their meaning has shifted — connection today isn’t about visual consistency so much as the seamless flow of human intent across every M365 canvas.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*k234aWEremQHIg4E.png" /></figure><h3>A design journey from UI to UX</h3><p>The core 10 Office apps were last updated in 2018 and the way we described what the designs represented is almost identical to language used today: connection, coherence, seamless collaboration, fluid transitions. At the time, that referenced interface design. We were signaling a connected look and feel across platforms and devices with fluid visual transitions between apps and animations. It was the early days of apps that composed together and truly collaborative experiences.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*xPxUWoLl3AR5ShqZ.png" /></figure><p>Today, effortless collaboration means both human-to-human and human-to-AI, with contextually intelligent experiences that understand and anticipate the nuances of your work, the data you’re referencing, and the goals you’re pursuing. Connection and coherence are about Copilot’s ability to understand your intent so you can seamlessly traverse the entirety of the M365 ecosystem to achieve your goal. That’s the paradigm shift; M365 has always empowered productivity but the driving force of the UX was often app features or the tools themselves. Today, the driving force is the outcome you desire.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*r8yrwfSr9dpwgKSd.png" /></figure><p>With that paradigm shift come significant changes to the UX discipline itself and how we approach product making. Longer cycles of heads-down development used to be followed by a big reveal of big changes. Today, with model capabilities rapidly emerging and our learning as UX practitioners rapidly advancing — including becoming more technical as a discipline — product evolution is happening in continuous waves. Research shows changes to iconography are almost always received as a signal for product changes and in an era of ongoing, smaller shifts, the icons should reflect that. 
As such, we embraced the idea of “evolution, not revolution” throughout our design process.</p><h3>New shapes, colors, and metaphors</h3><p>The new icons emanate a sense of fluidity and play, while also being simpler, more intuitive, and highly accessible. Their metaphor, shape, color, and letter have been redefined to create a cohesive, discoverable, and navigable system, crafted with gradients and gestures woven into Microsoft’s AI expression and experiences.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Uv4zswiaN9ezdDN4.png" /></figure><p>Delightfully simple: To maintain familiarity while streamlining the visual experience, we graphically simplified the icons for clarity and reduced visual noise. Whereas Word’s icon previously used four horizontal bars, the new version uses just three, improving legibility at small sizes and creating more visual concision.</p><p>Fluid shapes: We’ve moved away from bold, static solidity to embrace softer, more fluid forms. Sharp edges and crisp lines are replaced by smooth folds and curves, giving the icons a sense of playful motion and approachability.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*LHAFWaSrQkliX_b3.png" /></figure><p>Rich and colorful: The color palette has been dramatically refined. Where gradients were once subtle, they’re now richer and more vibrant, featuring exaggerated analogous transitions that improve contrast and accessibility. This shift makes the icons feel brighter, punchier, and more dynamic.</p><p>Instantly recognizable: Letter plates were much debated because they’re valuable real estate, and the icons that followed the core 10 Office ones no longer use them. Their brand equity is so strong, however, that we decided to keep them — maintaining our heritage while also modernizing them through a more cohesive visual integration with the overall design.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*_aWRXS9QoIsswuzn.png" /></figure><h3>Art imitates truth; truth imitates art</h3><p>Iconography often balances accuracy and aspiration. No digital product is ever fully baked, so your metaphors must embody both present-day truth and the future you’re actively building. When we redesigned our icons in 2018, the artistry mimicked the idea of M365 beginning to meld together. That product truth had been getting ever stronger, and then Copilot turbocharged and transformed our ability to create a truly connected ecosystem for customers.</p><p>To create Copilot’s icon, we drew from a broad lineage of design influences — traditional Office icons, newer apps like Designer and Viva, Business and Industry icons like Copilot for Sales, and many more — while embracing new metaphors to signify a fluid exchange between you and AI, where you are always in control. The icon’s vibrant color palette represents all Microsoft products, rather than just the traditional blue, and it visually expresses collaboration and creativity in simple, playful, and accessible ways.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ReLEvYqsIgrJ8mzA.png" /></figure><p>With Copilot now a much more complete and integrated system within M365, it’s fitting that when refreshing the core 10 Office icons, the primary source of inspiration was the Copilot icon itself. A reflection and a result of Copilot’s transformative impact, the new designs visually complete a cycle where art and truth continuously shape each other.</p><p><em>A project of this size takes a village! 
There are too many folks to shout everyone out, but a special thanks goes to Aaron Martinez, Ada Hurd, Alexis Copeland, Anna Gray, Anthony Dart, Arman Keyvanskhou, Braz De Pina, Claudia Nafarrate, Cole Rise, Colin Day, Danny Pak, Heath Hinegardner, Jana Huskey, Jason Custer, Ju Hyun Lee, Kris Bennett, Laura Clark, Mathieu James, Michelle Barrueto, Mike LaJoie, Phil Evans, Shelby Hutchison, Sven Seger, and Tati Astua.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=646ad772739b" width="1" height="1" alt=""><hr><p><a href="https://medium.com/microsoft-design/fluid-forms-vibrant-colors-how-a-subtle-refresh-of-our-microsoft-365-icons-signals-deeper-change-646ad772739b">Fluid forms, vibrant colors: how a subtle refresh of our Microsoft 365 icons signals deeper change</a> was originally published in <a href="https://medium.com/microsoft-design">Microsoft Design</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Designing loops, not paths]]></title>
            <link>https://medium.com/microsoft-design/designing-loops-not-paths-591b7689227f?source=rss----71c99841f1ad---4</link>
            <guid isPermaLink="false">https://medium.com/p/591b7689227f</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[design-thinking]]></category>
            <category><![CDATA[design]]></category>
            <dc:creator><![CDATA[Microsoft Design]]></dc:creator>
            <pubDate>Thu, 18 Sep 2025 16:25:52 GMT</pubDate>
            <atom:updated>2025-09-18T16:34:06.340Z</atom:updated>
            <content:encoded><![CDATA[<p>How cybernetic loops are helping us turn “human in the loop” from a catchphrase into a design practice.</p><p><em>by Matt Fick, Senior UX Architect, Business &amp; Industry Copilot and Max Peterschmidt, Principal User Researcher, Business &amp; Industry Copilot.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*rI3JKGaCpA9TrvDO.png" /></figure><p>Imagine driving across town during rush hour. You might have a planned route, but traffic jams, roadwork, or weather force you to adapt — taking detours, checking traffic apps, looping back, or even stopping for coffee until things clear up. That’s because our interactions with reality rarely follow a script; they’re a series of adjustments and feedback loops.</p><p>As the technology landscape continues to evolve, established practices are giving way to innovative solutions. Software design has historically relied on workflow-based methodologies, breaking down objectives — often described as jobs-to-be-done — into individual tasks needed to accomplish those goals. This was largely due to the constraints of deterministic software, which limited flexibility. However, recent advancements in AI are expanding the range of design possibilities.</p><p>In this article, we’ll share some principles &amp; mental models we’re using to guide the design of AI agents for business applications while keeping humans in the loop.</p><h3>The tyranny of the workflow</h3><p>Let’s start by drawing a comparison to a workflow. In traditional application design, a designer begins with a desired outcome and breaks that process down into steps. Depending on the complexity of the task, those steps may sit within a single product or span multiple products and user types (as shown below).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*GYjiyn8wn7FTIFCx.png" /></figure><p>This approach worked for decades — at least since Alan Cooper introduced goal-directed design in 1995 — because it mapped well to the limits of deterministic design and graphical user interfaces. However, in the world of generative AI, this approach is dated and inadequate — real human behavior is much messier than a linear workflow would imply. The industry has clung to workflow-based products because there haven’t been any better options… until recently.</p><p>In our new era of adaptive, intelligent agents, workflows are a <em>cage</em>. As design tools, they are rigid, overcomplicated, and limiting; when we design with workflows, we create interfaces that are also rigid, overcomplicated, and limiting.</p><h3>Designing loops</h3><p>But what if we break the mold? Instead of deconstructing goals into linear stepped sequences, what if we shift our focus to true outcomes that can be achieved through a variety of means? One promising answer is found in the theory of cybernetics.</p><p>Cybernetics, documented as early as the 1940s, describes how an adaptive system engages with its environment through a continuous feedback mechanism. Cybernetic loops can describe biology, ecology, control systems, and intelligent systems such as artificial intelligence. 
Much of the thinking in our work is inspired by <a href="https://www.dubberly.com/articles/cybernetics-and-service-craft.html">“Cybernetics and Service-Craft — Language for Behavior-Focused Design,”</a> a 2007 paper by Hugh Dubberly and Paul Pangaro that proposed cybernetics as a systems design framework.</p><p>This diagram describes a cybernetic loop, the atomic element of the theory of cybernetics.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*erVZ3wDo83yteuYy.png" /></figure><p>Instead of following a rigid, step-by-step workflow, the process unfolds as a continuous cycle — each component playing a distinct role in driving toward a goal. Let’s take a closer look at this loop-based system, using a common thermostat as an example:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*NM8wu4X_jZyxD_NT.png" /></figure><p><strong>Setting a goal</strong></p><p>First, a goal is defined — what ultimately is the system trying to achieve? This desired outcome is the foundation on which the system will ground all its actions and adjustments.</p><p><strong>Making decisions</strong></p><p>Next, the “brain” of the system — called the controller — evaluates information gathered from its sensors and compares it against the goal. For example, if the desired outcome is to maintain a room at 72°F and the sensors detect it’s only 68°F, the controller recognizes the gap and determines that something needs to change. In modern AI-powered systems, this controller can be either a person or an AI agent tailored to user preferences, ensuring that decision-making remains adaptable and user-driven.</p><p><strong>Sensing what’s going on</strong></p><p>To make informed decisions, the system must understand its current state. Sensors continuously collect inputs about the environment and provide critical information to the system’s controller to enable real-time adaptation.</p><p><strong>The environment and disturbances</strong></p><p>The environment represents the broader “world” the system seeks to influence — like the temperature of a room. However, the real world is inherently unpredictable: unexpected changes, like someone opening a window or door, can cause disruptions. The iterative nature of the system enables it to react to these unplanned disruptions.</p><p><strong>Taking action</strong></p><p>Once the controller determines an adjustment is necessary, the system takes action — such as turning on a heater to raise the temperature — directly manipulating the environment closer to the desired outcome.</p><p><strong>Checking again (and again, and again…)</strong></p><p>After taking action, the system’s sensors reassess the environment to determine if the action was effective. If the room reaches 72°F, the goal is achieved. If not, the cycle repeats — constantly sensing, deciding, and acting — until the goal is met.</p><p><strong>The human in the loop</strong></p><p>When AI makers talk about having a human in the loop,<strong> this is that loop.</strong> By viewing it as a set of components, we can purposefully design AI systems that include people as essential elements, rather than excluding them.</p><p>Here’s another example of a loop, illustrating the process for vibe coding:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*MdAsDc4Y4UQtywyc.png" /></figure><h3>Nested loops</h3><p>Building on this approach, we can unlock even greater flexibility and intelligence by introducing the concept of nested loops. 
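</p><p>Before nesting loops, it helps to see how compact a single loop is when reduced to code. The sketch below is a minimal TypeScript illustration of the thermostat loop described above; the interface names and the heat-only behavior are simplifications for this article, not code from any product.</p><pre>// A minimal sketch of one cybernetic loop (the thermostat example above).
// Illustrative only: these interfaces and names are simplifications, not a product API.

type Reading = { temperatureF: number };

interface Sensor { sense(): Reading; }                  // sensing what's going on
interface Actuator { setHeating(on: boolean): void; }   // taking action on the environment

// The controller compares what the sensor reports against the goal and decides.
// In an AI-powered system, this role can be played by a person, an agent, or both.
function runLoop(goalTempF: number, sensor: Sensor, actuator: Actuator, maxTicks: number): void {
  for (let tick = 0; tick !== maxTicks; tick++) {
    const { temperatureF } = sensor.sense();             // sense the current state
    const gap = goalTempF - temperatureF;                // compare against the goal
    actuator.setHeating(gap > 0);                        // act: heat only while below the goal
    // ...then the next pass checks again (and again, and again).
  }
}</pre><p>In this sketch, keeping a human in the loop is a question of where a person sits among these components, for example as the controller itself, or as an approval step before the actuator runs.</p><p>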
Rather than limiting control and feedback to a single layer, we allow for multiple, interconnected layers, each with its own goals, controllers, and feedback mechanisms. For example, a software engineer guides a team of AI agents, ensuring the outputs align with project goals. This is a nested structure of oversight and action between humans and AI agents.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*KJGLKBmy8CGjDgRm.png" /></figure><p>This approach represents a higher-level controller (e.g., software engineer) guiding the work of lower-level loops (e.g., the AI), setting broader objectives that cascade down. In theory, this layered structure can be scaled to represent highly sophisticated systems with multiple tiers of intelligence and autonomy.</p><h3>Adapting how we design</h3><p>So, what if the role of design shifted from directing users step-by-step to helping them track and shape their own progress toward a goal? How do we actually put this into practice — what would it look like to reimagine processes as dynamic, continuously evolving loops rather than rigid, linear sequences?</p><h3>The loop in action</h3><p>To illustrate this shift in thinking, let’s look at retail store operations, which have historically relied on deterministic linear workflows: forecast demand, place orders, stock shelves, repeat. But the weather forecast, product trends, local events, and other disturbances are out of the retailer’s control because our interactions with reality rarely follow a script (sound familiar?).</p><p>However, the loop model fundamentally reframes this challenge. Instead of framing inventory management as a sequence of tasks, it’s treated as a continuous cycle — one that senses, responds, and learns from the environment. Let’s take a look…</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*DdNQy8NgYTU5bNfI.png" /></figure><p><strong>Goal</strong></p><p>Ensure optimal inventory and reprioritization of store associate tasks.</p><p><strong>Controller (making decisions)</strong></p><p>When the system detects an upcoming local event, it compares the current inventory and historical sales patterns to the goal.</p><p><strong>System output (sensing what is going on)</strong></p><p>The agent generates a recommendation for the store manager based on expected impact, including suggested actions (e.g., restock bottled water, increase snack orders, adjust staffing). And, for changes that require higher-level approvals, the system uses a nested loop to notify appropriate users and provide context.</p><p><strong>Environment (and disturbances)</strong></p><p>The store’s “world”: location, weather, upcoming events, and customer flow. Disturbances such as sudden weather changes or competing local events are treated as critical signals, not just noise.</p><p><strong>System input (taking action)</strong></p><p>The AI agent continuously monitors local news, weather, and event feeds, sensing disturbances that could impact demand. The store manager reviews recommendations, approves or adjusts the plan, and provides feedback. And, for nested loops, higher-level managers can review change requests and adjust automation thresholds.</p><p><strong>Controller (checking again)</strong></p><p>The agent incorporates feedback, learns from user adjustments, and refines future recommendations. 
Over time, the loop becomes more attuned to the store’s unique rhythms and the manager’s preferences, increasing automation and accuracy.</p><p><strong>Why this matters</strong></p><p>This adaptive loop transforms the manager’s role from reactive problem-solver to proactive, well-informed decision-maker. Compared to traditional rigid workflows, this loop model is inherently more flexible and forgiving, promoting a dynamic partnership between manager and AI agent.</p><p><em>The system doesn’t just automate tasks — it augments human judgment, surfaces relevant context, and, importantly, continuously improves through feedback, vital to building user trust in AI.</em></p><p>And, as users’ trust in AI grows, so does their willingness to automate workflows and delegate tasks. This shift unlocks real business value: increased automation leads to greater efficiency, smarter decision-making, and ultimately, a measurable return on investment.</p><h3>Key takeaways</h3><p>As we transition from traditional, step-by-step workflows to adaptive, loop-based systems, we’ve identified five key principles to ensure success:</p><ul><li><strong>Resilient Design:</strong> Feedback loops make systems responsive to change, enabling continuous improvement and resilience in real-world conditions.</li><li><strong>Flexible Solutions:</strong> Adaptive systems thrive on continuous feedback, improving efficiency and trust.</li><li><strong>Human-in-the-loop (HITL):</strong> Users — like store managers — become active partners with AI agents that learn and adapt, while systems are designed to keep humans in control at every step. HITL design ensures automation remains aligned with user intent.</li><li><strong>Oversight-ready:</strong> Loop-centric design elevates human judgment. Users shape outcomes through approvals, escalations, exceptions, and suggestions — making them dynamic participants, not passive operators.</li><li><strong>Future-proof: </strong>This approach drives measurable gains in productivity, smarter decisions, and stronger synergy between people and technology.</li></ul><h3>Closing thoughts</h3><p>So, the next time you’re rerouting your daily commute or improvising when plans go sideways, remember: our interactions with the world rarely follow a script. Most journeys require adaptation and provide us ample opportunity to learn and grow with each twist and turn.</p><p>Designing for loops means embracing uncertainty, learning from feedback, and evolving alongside users. As designers, technologists, and everyday problem-solvers, we have a choice: continue paving rigid roads or begin creating loops that empower people to better navigate the real world with all its beautiful unpredictability.</p><p>What loops are you designing — and where might they take you next?</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=591b7689227f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/microsoft-design/designing-loops-not-paths-591b7689227f">Designing loops, not paths</a> was originally published in <a href="https://medium.com/microsoft-design">Microsoft Design</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Vibe coding makes prototyping close to code, closer to users]]></title>
            <link>https://medium.com/microsoft-design/vibe-coding-makes-prototyping-close-to-code-closer-to-users-1129c0bf556c?source=rss----71c99841f1ad---4</link>
            <guid isPermaLink="false">https://medium.com/p/1129c0bf556c</guid>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[design-thinking]]></category>
            <category><![CDATA[vibe-coding]]></category>
            <dc:creator><![CDATA[Microsoft Design]]></dc:creator>
            <pubDate>Thu, 21 Aug 2025 16:25:19 GMT</pubDate>
            <atom:updated>2025-08-21T16:25:19.422Z</atom:updated>
            <content:encoded><![CDATA[<p>Meet Quirine, a computational design manager exploring how AI reshapes the way her team builds and tests ideas</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*H0whuJBn_W1c5fxX.png" /></figure><p><em>Quirine van Walt Meijer is a computational design manager passionate about transforming raw ideas into valuable product experiences. She has gone from designing voice agents for cars to building tools for developers in Core AI. Her background in industrial design engineering helps her think about systems, people, and the messy middle where ideas take shape.</em></p><p>Today, she and her team design experiences for developers who are building agentic systems and applications, striking a balance between functional and tasteful design. Her goal is to build delightful and intuitive experiences for developers through iterative prototyping with her team. Lately, her team has been vibe coding, bringing their ideas to life in more meaningful ways. We sat down with Quirine to learn more.</p><p><strong>What is vibe coding?</strong></p><p>At its essence, vibe coding refers to creating digital experiences where the creator expresses interaction goals in natural language, and an AI system translates that into working code and prototypes. The user acts like a creative director, guiding, testing, and refining AI-coded experiences instead of writing the code line by line.</p><p><strong>What inspired your team to start exploring vibe coding in your design process?</strong></p><p>Design is evolving, and so are the ways we work. We are seeing a shift where designers prototype closer to code, test ideas in context, and collaborate more deeply, a trend playing out across disciplines.</p><p>Vibe coding allows designers to explore ideas with immediacy and intent. In our team, vibe coding is a way to sketch out the future of our next product experience, letting us innovate rapidly and see our ideas come to life at speed. It gives designers a way to move beyond Figma mockups and into interactive, testable experiences that reflect the complexity of real-world use. If we have an idea for uplifting an experience or spinning off a new feature, vibe coding lets us prompt, context engineer, and prototype that future much faster. It allows us to better communicate our ideas to our stakeholders by presenting them with something they can click through and understand.</p><p>Vibe coding has now become another essential tool in our designer belt. We are learning as we go, figuring out how to ideate securely, make confident decisions, and stay open to what the future of design might look like.</p><p><strong>How has vibe coding changed the way you collaborate and design?</strong></p><p>Vibe coding has fundamentally transformed how we prototype and design. Before, our process was more focused on how things looked, relying on UI and mockups that present a visual user journey. Now, with vibe coding, we can prototype in environments that are much closer to the real thing, shaping not just the appearance, but the actual behavior of our experiences directly in the browser. That shift is powerful because it opens real-time feedback loops. While taste and visual polish matter, vibe coding lets us bring motion, constraints, and accessibility considerations into the process much earlier. 
By collaborating with AI and engaging in prompt engineering, we can quickly surface those messy, real-world edge cases and “unhappy paths” that traditional workflow handoffs often miss until much later. This immediacy is incredibly rewarding, because we see our ideas come to life in code, closer to how our engineering partners would build them.</p><p>We have also found ourselves ideating with developers earlier in our processes, deepening the mutual respect for each discipline. Our UX engineers in the team have empowered us in our learning journey around vibe coding through context engineering. They have set up collaborative environments, provided contextual information and tools in a secure format so that AI can meet our needs, and helped designers, PMs, and others get hands-on with our internal vibe coding tool. In the spirit of prototyping through vibe coding, Will Eastler on my team built an internal computational prototyping tool that helps us share ideas in standard HTML, CSS and JavaScript. He recently wrote an article about this on the <a href="https://techcommunity.microsoft.com/blog/aiplatformblog/the-future-of-ai-wigit-for-computational-design-and-prototyping/4438320">Microsoft Tech Community blog</a> if folks would like to learn more.</p><p>Through this, we are learning how to think structurally, closer to coders. We are coming to understand GitHub’s folder organization and the logic behind application design. It has created an engaging middle ground where design, product management, and engineering come together, making the process more transparent, collaborative, and impactful for everyone involved.</p><p><strong>What are some common fears designers have about vibe coding, and how did you build an inclusive culture to help overcome these fears?</strong></p><p>One of the most common fears I have seen among designers is the feeling of not being “technical enough” or wondering if they should just stick to what they know, since coding can feel out of reach. This is not unique to design. It is something that crosses many disciplines. In our team, we have worked hard to create a safe space for experimentation, where everyone can play, learn, and grow together. As mentioned, our UX engineers have been instrumental in this, guiding designers through the process and reminding everyone that we are all learning as we go. We celebrate every small win, whether it is a first commit or a comprehensive new prototype, and we have built a secure environment where people can try things out without fear of failure.</p><p>What is truly rewarding is seeing designers realize that, through vibe coding, they can directly address accessibility and create more inclusive experiences, prompt engineering their way through ARIA requirements, tabbing, and reflow for different screen sizes right from the start. These are aspects that often felt abstract or delayed in traditional tools. It is a mindset shift, and it takes time. By sharing our “coded sketchbooks” and supporting each other, we are making vibe coding more approachable and collaborative. 
No matter which tool you use, there will always be differences, but at its core, vibe coding is about opening the process, learning together, and bringing ideas to life in a way that is more real and inclusive.</p><p><strong>How do you see vibe coding evolving in the next year or two?</strong></p><p>Looking ahead, I see vibe coding and context engineering becoming even more of a team sport, bringing together people from different disciplines to create in a truly collaborative way. As a Dutch designer, I am reminded of Johan Cruyff’s wisdom: “You play football with your head, because the ball is quicker than your legs.” In the same way, our work with vibe coding is about using our judgment, context, and human oversight to guide the process, as our tools become faster and more capable than ever. The pace of change in AI is relentless. There is always a new model, a new drop, and a new tweak that promises to make things better. But it is not about chasing every update; it is about applying our expertise, making sure the tools serve our context, our industry, and most importantly, our end users.</p><p>As we continue to use and develop vibe coding tools, especially internally, it is crucial to keep our data secure and within our own domain. We have built internal environments where designers can experiment freely, knowing their work is protected and reviewed. But as more tools and providers emerge, we all need to be mindful of where our data goes and how it is used. The evolution of collaborative vibe coding will depend on our ability to stay human. This involves thinking critically, embracing nuance, and remembering that, no matter how fast technology moves, we are steering the process. Play with your head, stay alert, and let your creativity and human judgment lead the way.</p><p><strong>What advice would you give to other design teams who are curious about trying vibe coding but do not know where to start?</strong></p><p>Worry less about achieving perfection and focus more on understanding and shaping behavior. The real value of vibe coding comes from seeing how your ideas work in context. These tools will continue to evolve and eventually catch up with your product’s styling and polish, but the most important thing is to just start — pick a secure vibe coding tool, begin small, and center your efforts on your users and their context. As you experiment, you will quickly discover how prompt and context engineering can bring your ideas to life, and you will learn whether your initial concept holds up or needs refinement.</p><p>Do not keep your discoveries to yourself. Share what you have learned, invite others to collaborate, and, if possible, pair up with engineers. There is so much to gain from working side by side, seeing how developers approach problems, and letting them challenge or validate your solutions. GitHub Copilot is incredibly helpful in VS Code, and getting comfortable in these environments will make you a stronger, more well-rounded designer. 
Vibe coding is about embracing the process, learning together, and letting curiosity drive you forward.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1129c0bf556c" width="1" height="1" alt=""><hr><p><a href="https://medium.com/microsoft-design/vibe-coding-makes-prototyping-close-to-code-closer-to-users-1129c0bf556c">Vibe coding makes prototyping close to code, closer to users</a> was originally published in <a href="https://medium.com/microsoft-design">Microsoft Design</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Reimagining our front door]]></title>
            <link>https://medium.com/microsoft-design/reimagining-our-front-door-e4f96e664d50?source=rss----71c99841f1ad---4</link>
            <guid isPermaLink="false">https://medium.com/p/e4f96e664d50</guid>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[ui]]></category>
            <category><![CDATA[windows]]></category>
            <category><![CDATA[design]]></category>
            <dc:creator><![CDATA[Microsoft Design]]></dc:creator>
            <pubDate>Tue, 19 Aug 2025 19:37:20 GMT</pubDate>
            <atom:updated>2025-08-19T19:37:20.477Z</atom:updated>
            <content:encoded><![CDATA[<p>Using positive psychology to craft delightful sign-in &amp; sign-up experiences</p><p>By Eva McSweeney</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*5ZAwZl9Xlmyn-RBs.jpg" /></figure><p>When was the last time you thought deeply about a sign-in screen? Probably never. And that’s kind of the point. The best authentication experiences should feel frictionless, intuitive, and almost forgettable. But crafting that invisible clarity is anything but simple. Behind every seamless sign-in or sign-up (SISU) experience lies enormous complexity, especially when you’re designing for over a billion Microsoft account users each month.</p><p>What might seem like “just” a visual SISU refresh is, in fact, a fundamental redesign of how people begin their Microsoft journey. This isn’t just about a more polished user interface (UI); it’s about trust, identity, and the quiet power of human-centered design. We’ve transformed one of our most critical and invisible interactions into something that feels seamless, safe, and above all, human.</p><p>Understanding how the human brain works is essential. Research shows that people are largely driven by subconscious impulses and emotions. The first few seconds of an interaction shape their expectations, trust, and perception long before a product’s full value is revealed. A confusing or frustrating sign-in or sign-up experience can lose users before they even begin.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*djhP0RS1slgxWC440dJpnQ.jpeg" /></figure><h3>Designing for humans: simplifying complexity</h3><p>SISU flows play a pivotal role in user retention and satisfaction. Far from being a mere formality, they are gateways that shape how users perceive our ecosystem, and they are foundational, horizontal experiences that span all Microsoft products and services. When designed well, they foster trust and showcase the value of a Microsoft account.</p><p>Deep delight in design emerges when functionality, reliability, usability, and emotional appeal are all holistically addressed. It’s not just about beautiful visuals but about minimizing friction at every step of the user’s journey. That means reducing steps to completion, lowering the information cost, and decreasing unnecessary noise.</p><p>In rethinking our experiences, we deeply analyzed the pain points our users faced. Inconsistent visuals, cluttered interfaces, context switching, and outdated authentication methods were some aspects that created frustration and confusion. For us to create an experience that felt clear and cohesive, we had to earn our users’ trust from the very beginning, a process shaped in collaboration with our research design partners.</p><p>Embracing <a href="https://fluent2.microsoft.design/">Fluent 2</a>, Microsoft’s modern design language, brings a cohesive visual and functional experience across devices while improving accessibility for users of all abilities. By removing product logos and custom backgrounds, the interface feels cleaner and less distracting. Consistent navigation simplifies user movement, and collecting essential information during sign-up helps prevent issues down the line. Dead ends and confusing loopbacks have been eliminated, replaced by a streamlined, focused journey.</p><p>A major area of improvement has been simplifying long, complex flows, such as child account setup. What was once confusing and time-consuming is now intuitive and family friendly. 
Our human-centered design principles have been applied throughout SISU, ensuring a cohesive, approachable experience across Microsoft services.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lVf38dguDwM6BeQzi_YAwQ.png" /></figure><h3>Creating a calm, focused experience</h3><p>Small details make a big impact. Subtle touches, like guiding animations and smooth transitions, along with friendly microcopy crafted with our content design partners, help users feel supported rather than instructed. At its best, onboarding should create a sense of flow, where users move forward effortlessly, without friction or second-guessing. This means reducing decision fatigue and offering just the right guidance at just the right moment. A thoughtful experience doesn’t call attention to itself; it simply works, leaving users feeling confident and ready to engage.</p><p>Our redesigned flow is simplified, emotionally engaging, and centered on helping users feel understood and in control. People naturally gravitate toward familiar patterns. With this in mind, we’ve incorporated known conventions to ease cognitive load and foster trust. Features like animated illustrations for new concepts, generously sized input fields, and clear calls to action support muscle memory and create a more intuitive, human experience.</p><p>Co-branding also plays a key role in this evolution. The new sign-in flow harmonizes Microsoft’s Fluent design system with the unique visual identities of products like Xbox. This integration of product-specific colors and branding not only reinforces recognition but also brings a sense of personalization and familiarity, whether users are signing into a productivity app or a gaming platform.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LaiEBlTq26RnJFWY2EyfQw.jpeg" /></figure><h3>Embracing passwordless authentication for a secure future</h3><p>As part of Microsoft’s <a href="https://www.microsoft.com/en-us/trust-center/security/secure-future-initiative">Secure Future Initiative</a> (SFI), the SISU experience has been redesigned with passwordless authentication at its core. New sign-ups begin with simple code verification; no passwords are required. Modern methods like passkeys deliver stronger security with less friction.</p><p>We’ve streamlined both sign-in and sign-up flows to default to these faster, more secure options. Users can now remove passwords entirely and rely on safer alternatives. Designed to be passwordless by default, the experience makes upgrading to stronger protection easy and intuitive.</p><p>We’ve also reimagined account recovery. What once required multiple screens and steps can now be completed with a single click, making the process not just safer, but also simpler and more user-friendly.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PH5KTl5wPSFoAq0-OCSbGA.jpeg" /></figure><h3>The power of meaningful design at Microsoft</h3><p>Our reimagined sign-in and sign-up flow reflects a core belief: achieving true simplicity requires thoughtful effort and meticulous design. Through a focus on delight, modern interaction patterns, and a human-centered approach, the experience now feels simple, secure, and emotionally engaging.</p><p>Understanding human behavior and designing with empathy not only creates better experiences but also builds stronger relationships. 
Our new SISU journey lays the foundation for long-term trust, deeper engagement, and a more meaningful connection within our Microsoft ecosystem.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e4f96e664d50" width="1" height="1" alt=""><hr><p><a href="https://medium.com/microsoft-design/reimagining-our-front-door-e4f96e664d50">Reimagining our front door</a> was originally published in <a href="https://medium.com/microsoft-design">Microsoft Design</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A mobile-first approach for Microsoft 365 Copilot]]></title>
            <link>https://medium.com/microsoft-design/a-mobile-first-approach-for-microsoft-365-copilot-7849a1f8b56c?source=rss----71c99841f1ad---4</link>
            <guid isPermaLink="false">https://medium.com/p/7849a1f8b56c</guid>
            <category><![CDATA[india]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[ui]]></category>
            <dc:creator><![CDATA[Microsoft Design]]></dc:creator>
            <pubDate>Thu, 12 Jun 2025 16:53:08 GMT</pubDate>
            <atom:updated>2025-06-12T17:50:39.599Z</atom:updated>
            <content:encoded><![CDATA[<p>How Microsoft is reimagining productivity from a mobile-first lens</p><p>By Deepak Menon</p><p>Read the full article on our <a href="https://microsoft.design/articles/a-mobile-first-approach-for-microsoft-365-copilot/">new website here</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hTpplOviaSPwxjJ5Nt8upg.png" /></figure><p>More and more, the world is working on the go. As we physically move through the day, the ability to be productive should flow alongside us, regardless of platform or device. With this in mind, we’ve been redesigning the Microsoft 365 Copilot mobile app to be a one-stop destination for AI-driven productivity. Increasingly minimal in approach, the new designs are simpler, cleaner, more intentional, and nearly invisible, letting AI run seamlessly in the background and reducing the need for constant manual interaction.</p><p>As a mobile-first country rich in linguistic diversity, India is known as a litmus test for what will happen in other places. It’s often said if you solve for India, you solve for the world.</p><p>For most Indians, mobile isn’t supplemental to desktop productivity experiences; it is <em>the </em>experience. Over one billion people here solely use (and sometimes share) mobile devices, necessitating a mobile-first mindset among product makers. That mindset leads to many inclusive innovations, such as access to Data Public Goods, a shared digital infrastructure focused on narrowing the great digital divide. Here, that little glass square, barely the size of a postcard, has become a layer of life, a way of connecting with family, mediating work, monitoring our health, and expressing who we are.</p><p>As intelligence on tap rewrites the rules of business and transforms knowledge work — we’re officially in the age of the <a href="https://blogs.microsoft.com/blog/2025/04/23/the-2025-annual-work-trend-index-the-frontier-firm-is-born/">Frontier Firm</a> — this is how we’re thinking about it from a mobile-first lens.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*r5VDAieDwqEdw1R3gyee5Q.png" /><figcaption>Step into the future of designing mobile experiences for work on the go.</figcaption></figure><h3>Designing for the smallest stage</h3><p>Because mobile devices are used in motion, productivity unfolds in fragments of time: on overcrowded buses, in loud marketplaces, and in quiet rural regions where connectivity barely penetrates. A mélange of personal reflection, meetings, and reading documents on the move happens amid the chaos of life and a waterfall of conversation and interconnectedness. That context — combined with considerations like screen size, portability, input methods, and precision — creates a distinct set of design constraints.</p><p>With touch as the primary mode, there’s no precision cursor, just the tap of a thumb, the swipe of a finger, or a spoken phrase. In response, the redesigned M365 Copilot for mobile adopts a simple, streamlined, purpose-driven interface with thumb-friendly layouts, prioritizing clarity and ease of access. It integrates AI-first capabilities that reimagine productivity — facilitating rapid content generation and contextual assistance.</p><p>The design emphasizes modularity and adaptability, ensuring that key features remain accessible even in a compact format. The experience leverages native gestures and tactile feedback to create a more intuitive, responsive feel.
This approach also necessitates a robust architecture that supports asynchronous interactions and intermittent connectivity, guaranteeing reliable performance regardless of a person’s location.</p><p>Just as the phone is thought of as an extension of your body, we thought of Microsoft 365 Copilot as an extension of your thinking and workflow. The challenge was figuring out how to design an interactive AI paradigm that’s conversational.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*Ld6RgG6hevtRs6_r.jpg" /><figcaption>A UI that empowers people to create assets on the fly.</figcaption></figure><h3>Creating a conversational paradigm with Copilot</h3><p>In India, communication apps are a public utility like electricity. They use very little data, and they work especially well on low-cost smartphones. Aside from texting and calling, messaging platforms are the post office, a storefront, or a neighborhood notice board. Through multiple languages, people exchange ideas, memories, voice notes, and information to nurture a mutual understanding and shared perspective.</p><p>As a cultural and economic cornerstone of Indian society, communication is both intuitively understood and critically considered. When designing, you must break down the dynamics of a conversation. Starting a chat involves asking questions, referring to mutual context, breaking down complex ideas, and knowing how to smoothly switch context.</p><p>When we think about what makes a discussion productive, it’s usually related to a shared purpose or a mutual context. This is especially crucial to creating a unified and consistent experience across devices and platforms. Grounding AI in your location, social settings, daily schedule, and pending projects makes Copilot a collaborative partner who responds accordingly, transitioning the conversation to the appropriate app for next steps or switching context to further your intention.</p><p>Traditionally, productivity tools have followed a content-first design — centered around documents, spreadsheets, and notes. But with the rise of AI and conversational interfaces, the focus is shifting from static artifacts to intent-driven workflows. In this new paradigm, interfaces prioritize people’s goals over formats, enabling dynamic, non-linear productivity flows. Design now centers on action, context, and the orchestration of tasks — framed as an ongoing conversation between the person and the system, rather than rigid interactions within app silos.</p><p>In addition, we know voice on mobile is a vital part of initiating intention, and the app’s voice capabilities will be released soon. We’re also actively exploring ways of leveraging small language models to enable multilingual functionalities.</p><h3>Breaking linguistic barriers: taking advantage of small language models</h3><p>In India, the language of government, business, and general communication is English, but only 11% of the country’s 1.4 billion people speak it. Outside of the country’s 22 constitutionally recognized languages, there are almost 120 others. Some people speak a different language at home, at work, and at school. And in some parts of the country, low literacy excludes people from public services.</p><p>That’s why we’re iterating on how to empower people to naturally communicate in their own language.
Small language models could enable a farmer to translate and access government services, an information worker to file a report, or a student to apply for college. Breaking linguistic barriers means providing clarity and comfort without judgment or fear of asking questions — and knowing that all forms of engagement and transactions are secure and private.</p><h3>Data as a public good, where security and sovereignty intersect</h3><p>Data privacy is at the core of Microsoft 365 Copilot. Robust security protocols and a <a href="https://microsoft.design/articles/secure-by-design-a-ux-toolkit/">privacy-by-design</a> approach are embedded throughout its architecture, ensuring that all user data is protected. Working closely with Azure, we leverage local models and storage capabilities that keep data within regional boundaries and compliant with local regulations.</p><p>In India’s mobile-first economy, we are fully aware of Data Public Goods (DPG), which treats data as a shared resource like water or highways. It promotes digital sovereignty to protect against monopolization and advocates for fair access to all. There’s no advertising or commercial presence. Without scarcity, DPG scales as needed, and your access is not impacted or limited regardless of who else uses it. Run by public-private partnerships, these data exchanges reduce infrastructure costs and are driven by public consent, with privacy safeguards deeply embedded.</p><p>DPG ultimately supports innovation, secure access to health records, and voting, and it accelerates the delivery of public services — functional and democratic principles that align with Microsoft’s company values. As we develop Microsoft 365 Copilot with security at its core, we recognize how DPG initiatives parallel our commitment to empowering economic mobility.</p><p>Today, I see Microsoft 365 Copilot as a newly emerging force for human agency on an individual and collective level. It’s a virtual office that will hopefully inspire people and companies to take a risk and bet on their dreams. With the right tools, people’s abilities go a lot further when striving for what they want. Developing the M365 Copilot app for mobile is inclusive design at scale: solve for one country, extend to many.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7849a1f8b56c" width="1" height="1" alt=""><hr><p><a href="https://medium.com/microsoft-design/a-mobile-first-approach-for-microsoft-365-copilot-7849a1f8b56c">A mobile-first approach for Microsoft 365 Copilot</a> was originally published in <a href="https://medium.com/microsoft-design">Microsoft Design</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Designs for the frontier future]]></title>
            <link>https://medium.com/microsoft-design/designs-for-the-frontier-future-183bf3ea3951?source=rss----71c99841f1ad---4</link>
            <guid isPermaLink="false">https://medium.com/p/183bf3ea3951</guid>
            <category><![CDATA[ux]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[microsoft]]></category>
            <dc:creator><![CDATA[Microsoft Design]]></dc:creator>
            <pubDate>Fri, 30 May 2025 15:52:51 GMT</pubDate>
            <atom:updated>2025-05-30T15:52:51.062Z</atom:updated>
            <content:encoded><![CDATA[<p>How we rebuilt the Microsoft 365 Copilot app for a new kind of productivity</p><p>By Jon Friedman<br>Read the full article on our new website <a href="https://microsoft.design/articles/designs-for-the-frontier-future/">here</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*azu-tHTiWYN7WRHt.png" /></figure><p>The new <a href="https://www.microsoft.com/en-us/microsoft-365/copilot/download-copilot-app?msockid=39c64f8b036b687602765a5102466958">Microsoft 365 Copilot app</a> isn’t your typical redesign. From the introduction of net new experiences like Agents to a fresh look and feel rooted in Chat-based interactions, we reinvented our hero experience from the ground up.</p><p>At its core, this reimagination sets the stage for a new era of intent-driven computing where systems begin mapping to how people think, feel, and act, rather than the other way around. It also both reflects and propels the dawn of human-AI collaboration at scale, where individuals become “agent bosses,” empowered to direct intelligent systems toward their goals with clarity and purpose.</p><p>Beyond empowering customers in new ways, it also marks a meaningful shift in how Design helps build AI-first experiences. As UX professionals move from creating interfaces to orchestrating coherence across complex and nondeterministic systems, we play a heightened role in turning clutter into clarity, fragmentation into flow, and intention into impact.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*7j1mIjlktRkAdT-x.png" /></figure><h3>Shaping a digital workforce: research insights behind the experience</h3><p>Before diving into the M365 Copilot experience itself, let’s start from where our design process always begins: a 30,000-foot view that considers the world we’re designing for. What do people need today? What will they need tomorrow? How can we create not just features, but futures?</p><p>The 2025 Microsoft Annual <a href="https://blogs.microsoft.com/blog/2025/04/23/the-2025-annual-work-trend-index-the-frontier-firm-is-born/">Work Trend Index</a> (WTI) recently came out, which explores how AI is reshaping the workplace. Based on Microsoft 365 telemetry, LinkedIn hiring and labor trends, and insights from AI-native startups, economists, scientists and academics, the concept of “Frontier Firms” headlined this year’s report. As organizations leading the charge by blending human judgment with AI capabilities, they’re not just optimizing workflows — they’re reinventing them.</p><p>Apparent in our new M365 Copilot app is the next iteration of Fluent as a design system to support this professional shift. The future of work is less about static roles and more about dynamic goals and how you get things done, fueling a demand for unified, fluid experiences that foster collaboration between people and AI agents. As I wrote earlier in <a href="https://microsoft.design/articles/design-systems-for-the-ai-era/">Design systems for the AI era</a>, our systems must begin focusing more on verbs and action above static icons and move from passive UIs to adaptive systems. As controls become more intelligent, inputs more flexible, and outputs more personalized, we’re not just designing products but shaping a digital workforce. 
One where you are in the driver’s seat, conducting an orchestra of Copilot agents to make your ideas a reality.</p><h3>Shaping a new UX: experiences that lead with human intent</h3><p>For decades, design helped people grasp technology by translating computing complexity into something approachable and human. Metaphors like desktops, folders, and the cloud helped bridge the gap between abstract systems and everyday understanding. Today, technology is beginning to adapt to <em>us</em>, learning our behaviors, understanding our goals, and responding to our intent. The burden of translation is shifting, and systems will begin meeting us on our own terms — human terms.</p><p>The new Microsoft 365 Copilot app is built on a simple, powerful truth: conversation is core to how we understand and relate to the world. Long before we speak, we begin to absorb the cadence and emotion of language, even in the womb. That early exposure shapes how we connect, learn, and collaborate. Copilot Chat builds on this deep human foundation, anchoring the experience in natural, intuitive dialogue between people and AI. With Chat as the primary interface for work, people can initiate, navigate, and complete work within a single, language-driven environment, instead of jumping between siloed tools. It’s a paradigm shift from app-centric workflows to intention-centric ones and a turning point in how we interact with productivity tools.</p><p>Copilot’s dynamic windowing model further enhances the experience, paving the way for more adaptive interfaces. Users can move fluidly between their task and AI collaboration, whether it’s a side-by-side chat beside your Pages document or editing tools served up in the moment to refine an AI-generated image.</p><p>This isn’t about baking AI into interactions for the sake of it but considering where intelligence belongs and <em>why</em>. It’s about choosing the right agents for the right jobs and empowering people to drive the outcomes that matter to them, pushing productivity to new heights. By designing a UI that flexes with user intent, we’re setting the stage for a future where AI doesn’t just assist but collaborates, anticipates, and adapts in real time. With added memory and a growing agent platform supporting first- and third-party integrations, Microsoft 365 Copilot is evolving into a system that brings the right tools, context, and insights when and where they’re needed.</p><p>Visually and structurally, we also made deliberate decisions to support clarity and reduce cognitive load, something WTI showed to be a top concern. Microsoft 365 Copilot’s interface is built on an accordion-style architecture, with a collapsible navigation pane and a clean, minimal palette. In a world where constant interruptions define the modern workday, it brings calm and restores focus, creating space for what matters most.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*7QJObSt7I2fpWZKQ.png" /></figure><h3>Shaping our discipline’s future: the editorial evolution of design</h3><p>As generative AI reduces the cost of creation, the role of Design is shifting from making things to making <em>sense </em>of things. We’re no longer just builders. 
We are guides, bringing editorial clarity to increasingly complex workflows, especially in an AI-driven world where intelligence is no longer a limited asset but an on-demand resource.</p><p>Design has become a connective tissue across the Microsoft ecosystem, coordinating how AI agents operate across different contexts to deliver insight, continuity, and coherence. At the same time, because we are rooted deeply in lived experiences and user research, we bring an empathetic lens to our processes and decisions, championing important principles around accessibility, security and responsible AI.</p><p>These aren’t abstract ideals. They’re reflected in how Copilot is designed and shows up in the world. Through regular security audits, UX guidelines, documentation and resources, we ensured that all new libraries, prototypes and experiments were secure by default while we built the new app experience. We also tightly integrated our Responsible AI guidance across our UX themes for all platforms. Copilot also builds on Microsoft’s longstanding investments in inclusive design; features like speech-to-text, image captioning, and natural language input make it easier for people of all abilities to engage.</p><p>These choices aren’t just about functionality — they’re about trust, inclusion, and doing right by the people we design for.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*DCcJmPaukGmYLkpv.png" /></figure><h3>The future of computing is human</h3><p>Bill Hill, a late Microsoft researcher and brilliant humanist, once said: “The most important operating system you’ll write applications for isn’t Windows or Mac or Linux; it’s Homo Sapiens 1.0. It shipped about 100,000 years ago. There’s no upgrade in sight, and it’s the one that runs everything.”</p><p>That insight has never mattered more. Today, we’re finally designing for that OS in earnest. Not by asking people to adapt to machines, but by creating technology that adapts to human rhythms in ways that are fluid, intuitive, and personal. As technology evolves in capabilities and presence, we’re responsible for shaping it around people’s needs, values, and dreams. It’s about rethinking the relationship between people and technology not in service of doing more for the sake of it, but in the spirit of reconnecting with and rediscovering the meaning behind our work.</p><p>While still an early step, this launch feels like the release of more than just another productivity tool, but a meaningful step toward a new paradigm for design and the people we design for. <a href="https://www.microsoft.com/en-us/microsoft-365/copilot/download-copilot-app?msockid=39c64f8b036b687602765a5102466958">Check out the experience for yourself here</a> — we’d love to know your thoughts.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=183bf3ea3951" width="1" height="1" alt=""><hr><p><a href="https://medium.com/microsoft-design/designs-for-the-frontier-future-183bf3ea3951">Designs for the frontier future</a> was originally published in <a href="https://medium.com/microsoft-design">Microsoft Design</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From paper to pixels: The evolution of ClearType and onscreen reading]]></title>
            <link>https://medium.com/microsoft-design/from-paper-to-pixels-the-evolution-of-cleartype-and-onscreen-reading-0d5392914bb7?source=rss----71c99841f1ad---4</link>
            <guid isPermaLink="false">https://medium.com/p/0d5392914bb7</guid>
            <category><![CDATA[typography]]></category>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[windows]]></category>
            <dc:creator><![CDATA[Microsoft Design]]></dc:creator>
            <pubDate>Thu, 29 May 2025 16:00:17 GMT</pubDate>
            <atom:updated>2025-05-29T16:00:41.452Z</atom:updated>
            <content:encoded><![CDATA[<p>How we made digital text like printed words on paper</p><p>By Tracy Jones</p><p>Read the full article on our new website <a href="https://microsoft.design/articles/from-paper-to-pixels-the-evolution-of-cleartype-and-onscreen-reading/">here</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*29aNYFxq-xYSOuGo4rtSag.jpeg" /></figure><p>In the mid 1990’s, the rise of the internet tidal wave was growing high enough to kiss the sky. Cyberspace appeared like a slowly expanding phenomenon, but printing emails and web pages remained routine because of the strenuous experience of reading on low resolution screens. It caused eye strain, headaches, and fatigue, especially after prolonged reading.</p><p>Traditional desktop displays based on bulky CRT (Cathode Ray Tube) technology left much to be desired, but in 2000, Microsoft started to include ClearType with Windows. This technology improved the readability of text on the increasingly popular LCD panel screens by making it sharper. As ClearType made onscreen reading more comfortable and enjoyable, hundreds of millions transitioned from reading on paper to the screen. By the time Microsoft’s ClearType Collection font set was released in 2007, including the beloved Calibri, web use went from 416 million daily users to 1.38 billion. Future bloggers and new media now typed and published on digital paper. The software normalized the idea that digital text could be book-like, injecting life into the newly emerging eBook industry. Office communications were going paperless. Why print when you can read on screen?</p><p>The software’s invention was led by the late Microsoft Researcher Bill Hill. With a ponytail and a bushy beard, he wore combat boots, a kilt, and a six-inch hunting knife on his hip to the office. He was not a hunter, but a vegetarian who loved tracking animals. Once he told a story about tracking a cougar through a mountain range and finding its den before the cougar got there. Realizing that he was at the door of an apex predator, Hill left. Later he discovered the cougar had followed him home, miles outside of the animal’s range. Why would a cougar behave like that, and what does animal tracking have to do with onscreen reading? More on that later. First, we go back to 1987, the start of Microsoft’s typographic design journey, and the hiring of a talented engineer with an encyclopedic memory named Greg Hitchcock.</p><h3>Windows 3.1 and the integration of Apple’s TrueType</h3><p>By the late 1980’s, desktop computers were eclipsing typewriters and Microsoft was looking ahead to the release of Windows 3.1. Hitchcock — who would go on to shape Microsoft’s typographic evolution for the next four decades — was part of an effort to improve fonts for the new operating system. “We had bitmap fonts at the time and the Microsoft fonts were System, Tms Rmn, and Helv. We knew we needed something called an outline font technology,” he explained.</p><p>Bitmap fonts made each character a tiny image — a holdover from Windows 1 and 2. When scaled up, they became pixelated and jagged. In contrast, outline fonts are like mathematically defined shapes or vector graphics that can be scaled up or down without losing shape or quality.</p><p>The improved graphical user interface (GUI) of Windows 3.0 made the operating system more useful and engaging, but our fonts were limited. 
At a time when people were designing their own layouts and printing professional documents, a selection beyond System, Tms Rmn, and Helv was much needed. As anticipation for Windows 3.1 bubbled, Microsoft founded a font team including Hitchcock, the late typographer Robert Norton, and former Chief Technology Officer Nathan Myhrvold.</p><p>To upgrade from bitmap fonts, they ultimately turned to TrueType, a technology developed by Apple. The software employed hinting, a process that allowed for better alignment of characters within pixel boundaries, thereby enhancing the clarity and consistency of text display. Microsoft licensed the software from Apple, and Hitchcock was involved in integrating TrueType into the Windows operating system. Now, what would Windows 3.1’s core fonts be?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ZfDSAcjqA-8i4ZBw.jpg" /><figcaption>Type. True. Timeless. In the early 90’s, at your local electronics store, you may have seen the TrueType Font Pack for Windows at the checkout counter.</figcaption></figure><h3>People heart fonts</h3><p>People needed a familiar typeface that linked the past to the future. From the heritage font foundry Monotype, the team <a href="https://www.linkedin.com/pulse/thirty-years-monotypes-times-new-roman-arial-windows-greg-hitchcock/">licensed</a> Times New Roman, Arial, and Courier New. Simulating the contours and shapes of typewriter fonts and newspaper print, those fonts gleamed on paper, but the team wanted to push font design further.</p><p>If office communications are a company’s nervous system, type is the brain, transmitting messages. People share their hopes and dreams in text, and that means different walks of life need a variety of fonts to express who they are. As a companion to Windows 3.1, Microsoft released <a href="https://www.linkedin.com/pulse/thirty-years-ago-microsoft-released-windows-31-font-pack-hitchcock/?trackingId=Ts4aOMGqT5SaNTDJfdl4ww%3D%3D"><em>TrueType Font Pack for Windows</em></a>, a super family of fonts comprised of 22 Monotype fonts and 22 Lucida typefaces created by legendary type designers Charles Bigelow and Kris Holmes. Three of those Lucida fonts were Bright, Sans, and Typewriter.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*K-xJYyZ-Cz_r0F3u.jpg" /></figure><p>While Typewriter’s letters have the same set width, complementary fonts Bright and Sans are based on the writing styles of the Italian Renaissance, with Bright simulating the printing type of the French Enlightenment period. Both fonts were inspired by an ancient time when owning eyeglasses was as rare as owning a private quantum computer. In 1992, Windows 3.1 and <em>TrueType Font Pack </em>hit the market like a flame to gasoline, but screens still hadn’t improved.</p><h3>Why better fonts needed better screens</h3><p>“Computer displays were not following Moore’s law,” said Hitchcock. Moore’s Law states that the processing power of computers doubles every two years, making them cheaper. “We were getting massive improvements in computation, but in the display world, change was minuscule.”</p><p>One of the tenets for improving Microsoft’s fonts was to achieve WYSIWYG (What You See Is What You Get), so what you see on screen is what you see in print. But as Hitchcock described, “the screen resolution was so low that you couldn’t really have a good representation of how it would look like on paper.
No one would read on the screen because the reading experience was terrible.”</p><p>As LCD (Liquid Crystal Display) screens started creeping into the periphery, the future of media would be digital, but hanging in the tech industry air was a key question: how do you transition the world from reading on paper to reading on screen?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/900/0*2w-VkR3IRc8-C5Qp.png" /><figcaption>Dressed in his finest kilt, Bill Hill demonstrates how ClearType transforms onscreen readability to former Vice President Al Gore.</figcaption></figure><h3>The ontology of reading: Bill Hill’s on-screen quest</h3><p>In 1995, Microsoft Researcher Bill Hill was hired to head Microsoft Typography, a group that included Hitchcock, former Typographic Lead Mike Duggan, and former Corporate Vice President Dick Brass. “Bill was the vision guy, and when we hired him, he didn’t have a tech background,” said Hitchcock. The son of a steelworker, Hill grew up in the late 1940’s in the tenement slums of Glasgow, Scotland. He learned to read at age three and was said to have read 17 books a week. Hill viewed books as a waterslide for the mind. For 18 years, he was a journalist, but in the mid 80’s, he saw where traditional media was headed and joined Aldus Corp, which was pioneering desktop publishing software.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*lTg2hA6D_9-xRo6z.png" /><figcaption>The Microsoft Typography group from right to left: Greg Hitchcock, Bert Keely, Bill Hill, Mike Duggan, and Dick Brass</figcaption></figure><p>Once Hill was hired at Microsoft, he began a personal quest to improve onscreen reading and bolster the eBook industry. He saw some early success with the commissioning and creation of web fonts like Comic Sans MS, Verdana, and Georgia, which became classic typefaces, but people were still more likely to print a document than read it on screen if it had three or more paragraphs. In the shadow of the rising tidal wave, humanity had spent the last 5,500 years reading, writing, and typing on paper, and no amount of well-designed type was going to lead people from paper to pixels. So, Hill approached the problem from a historical view.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*EQfqG5eEDKYT7Afn.jpeg" /></figure><h3>Animal tracking, like reading</h3><p>“What’s the most important operating system you’ll write applications for?” Hill would famously ask developers. “It ain’t Windows, or the Macintosh, or Linux; it’s homo sapiens version 1.0. It shipped about 100,000 years ago. There’s no upgrade in sight, but it’s the one that runs everything.”</p><p>That insight stemmed from a realization that onscreen reading was analogous to reading animal tracks since both use symbols to tell a story. Hitchcock, all too familiar with Hill’s theory, said, “You can tell when the animal is running. You can tell when the animal stops. You can actually tell when an animal turns its head and looks one way. Just because of the different weights of the track impressions, you can tell when an animal is lying in wait.” However, to read those symbols, you need contrast and uniformity. When the dirt was too light or the animal’s footprint too shallow, Hill had to get on his hands and knees to figure out if the indentation was a paw or hoof print.
His unconventional perspective was changing the company’s debate on digital text display.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*KEbqHpq1NdmWAUuk.png" /><figcaption>From left to right, the date corresponds with the first square, a bitmap of the first time the team ever saw ClearType working (notice the blue and yellow colors where Hitchcock mistakenly turned off the wrong subpixels). The second and third squares are zoomed-in images of bitmaps on an LCD screen. The second square is a regular color bitmap and a failed attempt at turning off the right subpixels. The third square is a working demonstration of ClearType, showing a simulated magnified bitmap as if on an RGB (red, green, blue) display. Those color strips would be seen as white at normal viewing distances.</figcaption></figure><h3>The subpixel breakthrough</h3><p>By the turn of the millennium, high-resolution, flat, light, and energy-efficient LCD screens were eclipsing the bulky, heavy, expensive, low-resolution CRT monitors. The challenge of creating an immersive reading experience onscreen was related to the way the screen’s light projected into the reader’s eyes versus the way light reflects off paper. How do you take that difference into consideration? Enter Bert Keely. He was an expert in LCD screen technology, brought in to help the group with eBooks.</p><p>In <em>Inside Out: Microsoft in Our Own Words</em>, a book that celebrates the company’s 25th anniversary, Hill recalls a revelatory moment in his office: Keely picking up a magnifying loupe from Hill’s desk and sticking it over an area of white on the screen. He asked Hill, “What do you see?” Peering into the screen, Hill recalled, “…that’s when the light went on for me.”</p><p>At the meeting, Keely drew a pixel on a whiteboard and explained that a pixel is composed of three subpixels: red, green, and blue (RGB). Furthermore, these subpixels can be controlled or calibrated. “The question that we all asked was, what would happen if we could treat each of those [subpixels] as if they were pixels?” said Hitchcock. That meant they could turn off subpixels in every pixel of an LCD screen, sharpen the details in text display, and make the type much easier to read over long durations.</p><p>That was the birth of ClearType and, as Hitchcock recalls, “It just blew us away — it looked awesome!” Hunched over a laptop, they saw for the first time onscreen text that rivaled ink on paper. From there, the team started thinking about designing fonts, setting the stage for our future default font Calibri, which recently gave way to <a href="https://microsoft.design/articles/a-change-of-typeface-microsoft-s-new-default-font-has-arrived/">Aptos</a>.</p><p>Although many other tech companies adopted subpixel rendering, higher-resolution displays, faster processors, and touchscreens eventually made ClearType obsolete. Yet, at the advent of the Internet age, ClearType was a wild leap for onscreen reading.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*1BwpEycC_xDKI_Bc.jpeg" /><figcaption>Sitka was produced as a general-purpose serif typeface, designed primarily for onscreen reading. The design principle that drove its creation was the understanding that humans recognize individual letters to form words. Principal Researcher Kevin Larson tested each letter for legibility. Those test results would inform the creation of Sitka.</figcaption></figure><p>The visionary and iconoclast Bill Hill passed away on October 16, 2012.
He was survived by his wife Tanya and their two children, Yssa and Eldon. In 2015, the Microsoft Typography team dedicated their new Sitka font to Hill’s memory. Back in his home country, Hill remains <a href="https://www.scotsman.com/news/obituaries/obituary-bill-hill-former-scotsman-journalist-who-became-the-father-of-e-reading-2478457?utm_source=chatgpt.com">hailed</a> as the “Scotsman journalist who became the father of e-reading.”</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0d5392914bb7" width="1" height="1" alt=""><hr><p><a href="https://medium.com/microsoft-design/from-paper-to-pixels-the-evolution-of-cleartype-and-onscreen-reading-0d5392914bb7">From paper to pixels: The evolution of ClearType and onscreen reading</a> was originally published in <a href="https://medium.com/microsoft-design">Microsoft Design</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Crafting stories through motion: A conversation with Ronan McMeel]]></title>
            <link>https://medium.com/microsoft-design/crafting-stories-through-motion-a-conversation-with-ronan-mcmeel-c66e66abcdf2?source=rss----71c99841f1ad---4</link>
            <guid isPermaLink="false">https://medium.com/p/c66e66abcdf2</guid>
            <category><![CDATA[storytelling]]></category>
            <category><![CDATA[profile]]></category>
            <category><![CDATA[motion-design]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[microsoft]]></category>
            <dc:creator><![CDATA[Microsoft Design]]></dc:creator>
            <pubDate>Thu, 22 May 2025 17:13:21 GMT</pubDate>
            <atom:updated>2025-05-22T17:13:36.826Z</atom:updated>
            <content:encoded><![CDATA[<p>Read the full story on our new website <a href="https://microsoft.design/articles/crafting-stories-through-motion-a-conversation-with-ronan-mcmeel/">here</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*24SvMjuLxRkumlMl.png" /></figure><p>In the dynamic world where design meets storytelling, few disciplines are as captivating as motion design. It’s an art form that breathes life into ideas, blending creativity, technical finesse, and a touch of magic to captivate audiences. We sat down for a chat with Ronan McMeel, a motion designer at Microsoft, to delve into the intricacies of his role, his journey from sketching in Dublin to collaborating with global teams, and the lessons he’s learned along the way. Whether you’re a design enthusiast or simply curious about the impact of motion in storytelling, Ronan’s experiences offer a fascinating glimpse into the vibrant intersection of design, technology, and creativity.</p><p><strong>Q: You’re a motion designer at Microsoft. What does your role entail, and what kinds of projects do you work on?</strong></p><p>A: I work on a storytelling team within a product design studio. We’re a group of multidisciplinary designers focused on helping partners articulate what they’re building and why it matters. This involves creating videos, design work, and thinking strategically about advancing the studio’s goals. My day-to-day involves tasks like creating animated titles, making Figma frames come alive, and exploring motion directly in products. Storytelling and motion are surprisingly impactful here — they help cut through the noise, align people, and drive decisions.</p><p><strong>Q: How did you get into motion design? Tell us about your journey.</strong></p><p>A: Dragging a huge portfolio case across Dublin, I nervously awaited an animation interview after being turned down by art courses. The tutors noticed my constant sketching, love for mixed media, and life-drawing experience — it felt like a good fit. Animation revealed itself as a craft blending technical skill, experimentation, and storytelling, which could also lead to a career. I studied at IADT in Dublin, flipping pages on old animation desks salvaged from Sullivan Bluth’s old studio and learning traditional and digital techniques.</p><p>I landed work at a small studio, meeting great people and realizing how much I still had to learn. I became interested in blending animation, design, live action, and VFX. A freelance gig for the BBC felt like a breakthrough, but Ireland’s motion design scene was limited, so I studied VFX in Bournemouth, England. Returning to Dublin, I pestered directors with emails until freelance gigs turned into a job with creative freedom. But when the economy crashed and budgets vanished, I went freelance, taking on whatever paid the bills and learning along the way.</p><p>Sharing a studio with friends on Capel Street in Dublin was a highlight. Two friends made films funded by Screen Ireland, and I joined them as an art director, shaping visuals and painting backgrounds. The films won festival prizes, and I even got to travel. These projects, driven by storytelling and craft, stood out from the usual grind of client work.</p><p>Despite the creative growth, years of freelancing eventually took their toll, leaving me burnt out. I craved stability and a sense of direction. 
This led me to a role in UX design education, where I could combine my passion for design and storytelling with the structure I was seeking. That experience became the bridge to my current role at Microsoft, where I’ve truly found a space to thrive. Twenty years after lightboxes and pencil tests, the throughline remains the same: storytelling through motion, with evolving tools but unchanged foundations.</p><p><strong>Q: What is your process like when starting a new project?</strong></p><p>My process draws from my animation background and the scrappiness of self-employment, where small budgets and tight deadlines shaped my adaptive and flexible approach. Projects now vary widely, often influenced by dependencies that demand iterative problem-solving and thoughtful motion exploration when possible.</p><blockquote>When starting a project, I focus on clarifying its purpose and constraints to understand what we’re communicating and its impact. Some work requires quick solutions and incremental improvements, while other projects allow deeper creative storytelling to influence user experience design.</blockquote><p>I aim to bring unique perspectives, simplify complexity, and enhance outcomes beyond the brief if I can.</p><p>As a team, we’re developing reusable visual and animation systems to ensure quality and coherence, even under pressure. My approach balances creativity with practicality through thoughtful design and storytelling while staying grounded in real-world constraints.</p><p><strong>Q: You’re based in Ireland, but most of your team is scattered across the UK and the US. What are the challenges and joys of working remotely with an international team?</strong></p><p>A: Yes, we’re quite the global bunch — Norway, Ireland, UK, Canada, Mexico, and the US. Sometimes it feels like we’ve got as many time zones as team members! All things considered, I think we manage pretty well. They’re a talented group and genuinely good people. Collaborating with teammates from different parts of the world brings fresh perspectives and diverse approaches that I wouldn’t encounter otherwise, and that variety really enriches our work.</p><p>Of course, those same factors bring challenges. Coordinating multiple workstreams across the globe can get tricky, especially when partner teams are involved. As wonderful as it would be to have everyone under one roof — to foster easier communication and benefit from the unspoken shorthand that naturally comes with proximity — the reality is that our team wouldn’t exist if we were tied to a single office or location. So, while it’s not without its hurdles, the flexibility and breadth of collaboration make it all worthwhile.</p><p><strong>Q: What advice do you wish you had received when starting out as a designer, and what would you share with someone entering the field today?</strong></p><p>A: When I was starting out, I wish someone had told me that facing struggles like financial pressures, pricing uncertainty, or unhealthy work habits wasn’t a personal failure. These challenges are common, and having a supportive community to share them would’ve been invaluable.</p><p>My advice now is twofold: First, build a community — friends, online peers, or local connections — to share honest conversations about the ups and downs. <em>Community over competition</em> really does make a difference. Second, make time for work you’re passionate about, even alongside other jobs. Developing your creative voice will set you apart and keep you motivated.
Also, use today’s abundance of resources to learn and grow, and don’t hesitate to reach out for help. It’s much harder (and lonelier) to go it alone.</p><p><strong>Q: How is AI changing motion design, and what excites or concerns you about it?</strong></p><p>A: AI tools are advancing rapidly, currently assisting with tasks like upscaling footage, enhancing audio, and aiding pre-production, though they’re not fully integrated into high-end workflows yet. What excites me is AI’s potential to enhance creativity by supporting structured workflows and augmenting human decisions, as seen in workflows resembling traditional pre-production processes. While AI-generated visuals can be unpredictable, the best outcomes always come from creators who pair storytelling skills with AI as a tool.</p><blockquote>Potential uses in motion design include modifying video elements, extending scenes, reframing shots, motion capture, and style transfers. There’s plenty of potential, and our team is keeping an eye on developments.</blockquote><p>Concerns include job security, data ethics, environmental costs, and the rapid pace of AI development. Transparency and long-term focus are crucial. Personally, I use AI for brainstorming and refining ideas, but until it offers consistent creative control, it won’t replace traditional workflows. Ask me again in six months, though — things are changing fast.</p><p><strong>Q: How do you use Copilot to boost productivity or work more efficiently at Microsoft?</strong></p><p>A: I use Copilot every day. It’s integrated into all our communication tools, helping me stay on top of acronyms, projects, and how everything connects. I rely on it to look up engineering terms or unfamiliar language, sift through recorded meetings to pinpoint key moments, and summarize lengthy emails that might not directly impact me but are still worth tracking. This way, I get the gist quickly and can dive deeper when necessary.</p><p>I’ve also been experimenting with agents. These are like custom extensions for Copilot, tailored to specific tasks or skills based on what’s most useful in individual roles. You can create them easily through natural language prompts — it’s almost like building a no-code mini-app. I think there’s real potential in this approach.</p><p>The results keep getting better, and as that progress continues, I foresee even more dynamic use cases emerging over time.</p><p><strong>Q: Are you working on any personal design projects or have an online portfolio to share?</strong></p><p>There’s a well-worn cliché about creative people avoiding an online presence because they’re never satisfied with their past work or have already moved on from it. Hi, that’s me. One day, I’ll dust off my online profiles and emerge from hibernation — when I have a clearer sense of what’s next. Until then, no groundbreaking portfolio updates here!</p><p>Right now, my priorities are a bit less design related. I’m working on improving my Brazilian Portuguese (so I can graduate from gringo status) and staring down a banjo across the room — a silent reminder to learn some Irish traditional tunes. My seven-year-old niece already has a wider repertoire than I do, so it’s time to catch up. If you’re my neighbor, you might hear me butchering a few jigs and reels or catching up on Duolingo lessons.</p><p>These days, I’m trying to be better at staying connected. 
I actually check LinkedIn now, so if anyone wants to say hello or reach out, feel free to give me a holler.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c66e66abcdf2" width="1" height="1" alt=""><hr><p><a href="https://medium.com/microsoft-design/crafting-stories-through-motion-a-conversation-with-ronan-mcmeel-c66e66abcdf2">Crafting stories through motion: A conversation with Ronan McMeel</a> was originally published in <a href="https://medium.com/microsoft-design">Microsoft Design</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The em dash conspiracy: How pop culture declared war on literature’s favorite punctuation]]></title>
            <link>https://medium.com/microsoft-design/the-em-dash-conspiracy-how-pop-culture-declared-war-on-literatures-favorite-punctuation-f834e22cf00d?source=rss----71c99841f1ad---4</link>
            <guid isPermaLink="false">https://medium.com/p/f834e22cf00d</guid>
            <category><![CDATA[design]]></category>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[opinion]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Microsoft Design]]></dc:creator>
            <pubDate>Wed, 21 May 2025 15:57:46 GMT</pubDate>
            <atom:updated>2025-05-21T15:57:46.107Z</atom:updated>
            <content:encoded><![CDATA[<p>AI, Emily Dickinson, and the war on the em dash</p><p>By Celeste Moure</p><p>Read the full article on our new website <a href="https://microsoft.design/articles/the-em-dash-conspiracy-how-pop-culture-declared-war-on-literatures-favorite-punctuation/">here</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*EK9sS0IMPuMX8F4A.png" /></figure><p><strong><em>Disclaimer: </em></strong><em>The views and opinions expressed in this essay are those of the author. They do not reflect the official policy or position of Microsoft, nor is this a factual analysis of AI detection.</em></p><p>There’s a new panic sweeping through social media’s bustling marketplace of hot takes, and this time, it’s not about quiet quitting or blockchain resumes. Lurking in the shadows of our emails, our tweets, even our texts, is a punctuation mark so dangerous, so algorithmically suspicious, that its mere presence now triggers corporate witch hunts. That’s right: the em dash — humanity’s oldest dramatic pause — has been outed as genAI’s smoking gun. According to a brigade of self-appointed grammar detectives, if you dare deploy this elegant little line — gasp — you might as well just confess: I am a robot, and my favorite book is the Terms of Service agreement.</p><p>The theory goes something like this: If you spot an em dash — especially one nestled between clauses like a grammatical sandwich — you’ve caught a robot red-handed. Real humans, the theory goes, would never deploy such a flamboyant stroke. They prefer the staid comma, the predictable period, the safely corporate semicolon. The em dash? That’s the calling card of AI tools, or some other silicon-based interloper trying to pass itself off as a person who has, at any point, read a book.</p><p>This is, of course, absurd — but not for the reasons you might think. Yes, AI does use em dashes (along with every other punctuation mark known to Unicode). But so did Emily Dickinson — she basically treated the em dash as her personal Morse code — and built entire poems out of em dashes: “I’m Nobody! Who are you? — Are you — Nobody — too?” And then there’s Kurt Vonnegut, who once wrote, “Here is a lesson in creative writing. First rule: Do not use semicolons… All they do is show you’ve been to college.” (He was pro-dash.) Vladimir Nabokov used dashes to craft some of the most breathtaking prose of the 20th century. James Joyce, were he alive today, would have been banned from LinkedIn immediately for writing sentences like, “Yes I said yes I will Yes — ”</p><p>And yet, somewhere between the rise of generative AI and the decline of recreational reading, it’s been decided that the em dash was the linguistic equivalent of a CAPTCHA test. “No human would write like this,” the so-called experts say, as if Herman Melville didn’t once spend 60 pages describing a whale.</p><p>The irony, of course, is that many of the people most convinced of the em dash’s inhumanity are least equipped to spot actual AI writing. It can be easy to breeze past a paragraph of flawless corporate jargon — “synergizing scalable paradigms to leverage disruptive innovation” — without suspicion, but the moment a hyphen with delusions of grandeur is spotted, the alarm sounds. It wouldn’t surprise me if em dash critics also believed <em>Moby-Dick</em> could be improved with a few more infographics.</p><p>But let’s be clear: the em dash is not some AI-generated abomination — it’s the punctuation equivalent of a jazz solo. 
It’s for emphasis — for interruption — for the sudden, glorious derailment of a thought. It’s the difference between ‘I’m fine’ and ‘I’m fine — said the woman who absolutely did not just reorganize her spice drawer at 3am while listening to Radiohead’s <em>OK Computer</em> on repeat.’</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*8oOzD6nhTMERMokR.png" /></figure><p>Writers have relied on it for centuries because, unlike the stiff, formal alternatives (looking at you, semicolon), the em dash is alive. It breathes. It <em>gestures wildly</em>.</p><p>And yet, the em dash detectives remain undeterred. They’ve moved on from analyzing writing samples to diagnosing entire professions. <em>Journalists</em>? Probably AI — they use too many em dashes. <em>Novelists</em>? Definitely AI — have you seen how long their sentences are? <em>Poets</em>? Don’t even get them started. At this rate, we’ll soon see think pieces arguing that Shakespeare’s use of iambic pentameter was a clear sign he was a primitive chatbot. “To be, or not to be — that is the question.” Yep, classic genAI.</p><p>The funniest part of all this is that AI doesn’t even like em dashes. Left to its own devices, genAI prefers clean, frictionless prose — the kind of writing that slides into your brain without leaving a mark. The em dash is messy. It’s human. It’s the punctuation mark you use when you’re thinking out loud — when you’re <em>alive</em> on the page.</p><p>So, to the LinkedIn linguists, the AI alarmists, the self-appointed guardians of “authentic” writing: maybe — just maybe — the reason it’s easy to think the em dash is a robot giveaway is that people aren’t reading enough books written by humans.</p><p>And that’s the real tragedy here — not the overuse of em dashes, but the underuse of libraries.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f834e22cf00d" width="1" height="1" alt=""><hr><p><a href="https://medium.com/microsoft-design/the-em-dash-conspiracy-how-pop-culture-declared-war-on-literatures-favorite-punctuation-f834e22cf00d">The em dash conspiracy: How pop culture declared war on literature’s favorite punctuation</a> was originally published in <a href="https://medium.com/microsoft-design">Microsoft Design</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>