<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[NYT Open - Medium]]></title>
        <description><![CDATA[How we design and build digital products at The New York Times. - Medium]]></description>
        <link>https://open.nytimes.com?source=rss----51e1d1745b32---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>NYT Open - Medium</title>
            <link>https://open.nytimes.com?source=rss----51e1d1745b32---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 19 Apr 2026 07:46:18 GMT</lastBuildDate>
        <atom:link href="https://open.nytimes.com/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[How The New York Times Games Team Delivered Accessible, Cross-Platform Dark Mode]]></title>
            <link>https://open.nytimes.com/implementing-dark-mode-in-the-games-app-be7241ddb7ba?source=rss----51e1d1745b32---4</link>
            <guid isPermaLink="false">https://medium.com/p/be7241ddb7ba</guid>
            <category><![CDATA[design]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[cross-platform]]></category>
            <category><![CDATA[collaboration]]></category>
            <dc:creator><![CDATA[The NYT Open Team]]></dc:creator>
            <pubDate>Thu, 16 Apr 2026 15:35:45 GMT</pubDate>
            <atom:updated>2026-04-16T15:37:57.365Z</atom:updated>
            <content:encoded><![CDATA[<h4>Bringing accessible, brand-consistent dark mode to life in The New York Times Games app through cross-platform design and engineering collaboration.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yLeICgvBZ5fayu1RUjyTeQ.gif" /><figcaption>Illustration by Allie Sullberg</figcaption></figure><p><strong>By Vanessa Johnson, Daniel Falokun, Michael Ingber and Oleksandr Zabiiako</strong></p><p>The Games Mobile App introduced dark mode back in October. This seamless implementation was a collaborative effort, requiring careful planning and management of design, cross-platform coordination, and engineering work across both legacy and modern user interface systems. Implementing dark mode across the app meant considering the mode-specific visuals, colors, and icons needed to maintain legibility and brand consistency. The New York Times Games team is excited to have delivered this feature before the end of 2025, and thrilled to roll out a feature that continues to make our app accessible for all.</p><h3>Development Work on Mobile</h3><p>On the development side, the mobile and web engineers synchronized assets (colors, fonts, and images) to ensure that hybrid screens showed the correct styles and colors. This was possible because our design system across mobile was constructed in a way that encouraged consistency and scalability across platforms. With clear, easy-to-follow designs supported by our internal design system, Android and iOS could ensure consistency on both platforms.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/996/1*x3hIVp_5wr7xCfz5_Z725A.png" /><figcaption>Screens in dark mode</figcaption></figure><p>From an Android perspective, dark mode required separate approaches for XML (Extensible Markup Language) based screens, which use an imperative, hierarchical approach, and Jetpack Compose screens, which use a more modern declarative approach. 
<a href="https://developer.android.com/develop/ui/views/layout/declaring-layout">XML</a> and <a href="https://developer.android.com/compose">Jetpack Compose</a> are different ways the UI can be constructed. To ensure dark mode compatibility, we revisited every feature to confirm it had been adapted, especially the features still in development during the dark mode project. Another aspect of this development was updating external libraries, which required opening pull requests, creating test versions of those changes to verify they worked in the main project, and then getting the changes approved, tested, and merged in each external library. With these additions, supporting dark mode was a multi-team effort.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/818/1*jqvuVZKqRLHTc2ORM6WvUw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/318/1*qYhFU0Emm0CWLZk9SQiLfA.png" /></figure><p>From an iOS perspective, supporting dark mode required distinct approaches for UIKit-based screens (including legacy Objective-C views) versus SwiftUI screens. UIKit screens depended on two key APIs: the overrideUserInterfaceStyle property, which allows developers to explicitly set a screen’s appearance regardless of system settings, and the traitCollectionDidChange callback, which triggers whenever the system’s appearance preference changes, enabling dynamic updates to UI colors and styles. SwiftUI screens, by contrast, used the @Environment(\.colorScheme) property wrapper along with asset catalogs configured for light and dark variants, providing a more declarative and automatic approach to handling appearance changes.</p><p>This distinction was critical because UIKit screens required a manual observer-pattern implementation to catch appearance changes, while SwiftUI handled these updates natively through its reactive system.</p><p>The Games app also integrates several shared New York Times packages that are maintained outside the core repository. 
To keep dark mode working consistently across all features, we needed to coordinate updates to these external packages. This approach ensured that components such as the login flow and profile avatar selection screen, which rely on these shared packages, would support dark mode alongside the rest of the application.</p><h3><strong>Why was this coordination necessary?</strong></h3><p>Dark mode is a system-wide feature that users expect to work seamlessly across the entire app. Without updating these external packages, parts of the experience would break or appear inconsistent, diminishing the overall value of the dark mode feature.</p><p>With dark mode now available in The New York Times Games app, our users can play their favorite puzzles any time of the day while reducing eye strain during evening play sessions. Dark Mode is a highly valued feature: players are actively using it, demonstrating strong demand for this capability and validating the effort invested in its implementation across all screens and dependencies.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qBwcrL2Hkx3rmeMvNxlDRQ.png" /><figcaption>Users Raving about Dark Mode</figcaption></figure><h3>The design perspective</h3><p>The first major step was moving from fixed color definitions to a semantic color system based on variables. Instead of using hard-coded colors, we introduced semantic tokens to the library, such as backgrounds, text, strokes and interactive elements. This approach allowed us to support Dark Mode consistently and at scale. Since we were revisiting the colors, we took the opportunity to reorganize our naming convention as well. To support this shift, we started by creating a color mapping document: the left column listed our legacy colors, and the right column mapped each one to its new variable. For example, what used to be called “Background Primary” became “$bg-page”. 
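A variable-based semantic token layer of this kind can be sketched roughly as follows. This is illustrative only: apart from “Background Primary” and “$bg-page”, which appear in the mapping document example, the token names and hex values here are hypothetical, not our actual palette.

```typescript
// Illustrative sketch of a semantic color token layer. Token names other
// than "$bg-page" and all hex values are hypothetical.
type Mode = "light" | "dark";

// Each semantic token resolves to a concrete color per mode, so screens
// reference "$bg-page" instead of a hard-coded hex value.
const tokens: Record<string, Record<Mode, string>> = {
  "$bg-page": { light: "#ffffff", dark: "#121212" },
  "$text-primary": { light: "#121212", dark: "#f8f8f8" },
  "$stroke-subtle": { light: "#e2e2e2", dark: "#3a3a3a" },
};

// Legacy-name mapping, mirroring the left/right columns of the
// color mapping document described above.
const legacyToSemantic: Record<string, string> = {
  "Background Primary": "$bg-page",
  "Text Primary": "$text-primary",
};

function resolveColor(legacyOrToken: string, mode: Mode): string {
  // Accept either a legacy name or a semantic token name.
  const token = legacyToSemantic[legacyOrToken] ?? legacyOrToken;
  const entry = tokens[token];
  if (!entry) throw new Error(`Unknown color token: ${token}`);
  return entry[mode];
}
```

The value of the indirection is that screens only ever reference the semantic token, so supporting a new mode means adding one column of values rather than touching every screen.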
This mapping was essential for keeping both designers and developers aligned.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/752/1*UPorPtUUuUhhUKoFlicwlw.png" /></figure><p>Once the foundations were in place, we moved on to adapting actual screens. That included updating components and pages, defining rules for how surfaces behave in light and dark contexts, and covering edge cases. We also needed to create new design tokens that had not existed before. For example, for the bottom sheet in the light theme, the background was always white. But in dark mode, a dark gray background works much better than pure black.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dC_QWZbiETPF1SNlEPhOHQ.png" /></figure><p>Our illustrations also couldn’t simply have their colors inverted. Flipping the colors made them look too high-contrast. We ended up reviewing each illustration individually and manually adapting it for dark mode. In addition to illustrations, we also revisited in-app animations to ensure they support Dark Mode. For example, new features are introduced through a modal shown when a user opens the app after an update. These modals typically include an animation that highlights the main idea of the feature, and we updated this system so that the animations now adapt seamlessly to both light and dark modes.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*h4yqDFmoPOT3-7zjYqPc7g.png" /></figure><p>An essential part of the process was a design review of every screen as it was implemented. Each feature and flow went through VQA (visual design check) to ensure visual consistency. This included validating all interactions and contextual elements, making sure no assets were missing, mis-colored or behaving unexpectedly in a dark environment. 
This allowed us to catch issues early and deliver a great final experience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*x8ReHJhSIsOeuRnqMHAAcg.png" /></figure><p>Overall, the design work behind Dark Mode became much more than a tech upgrade or just inverting our existing colors. It was a full-scale process that evolved our design language, color system and visual assets.</p><h3>The web perspective</h3><p>We launched a new Badges feature in September, including a “badge detail” view that can be opened in multiple places. It can be opened within each of the game webviews where users can earn badges, including on web/mobile-web/news apps, and it can also be opened from the natively built Me Tab and Trophy Case in the Games Apps. For the latter use case, we built a standalone hybrid webview that serves a single badge detail.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/462/1*e9zB9665FNf9dE-4VFyQKw.png" /><figcaption>Within the Connections webview</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/450/1*r6CqfcHPx9SWgsolzGHXRg.png" /><figcaption>As a standalone webview</figcaption></figure><p>This standalone webview is powered by a set of query parameters. The app passes all dynamic information (which badge it is, when it was earned, the user’s progress towards the next level of the badge, etc.) through the query string in the URL and the webview parses that information to render the content.</p><p>In order to support the apps’ dark mode, we needed this standalone webview to respect each app’s dark mode setting. The apps already contain many webviews: every non-crossword game is a webview. But each of those games has its own internal dark mode setting, disconnected from the app’s global dark mode setting. 
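A query-string contract like the one powering the badge webview can be parsed in just a few lines. The sketch below is hypothetical: the post doesn't show the webview's actual parameter names, so “badgeId”, “earnedAt” and “progress” are stand-ins.

```typescript
// Illustrative sketch of parsing a badge-detail query-string contract.
// Parameter names here are hypothetical stand-ins, not the real contract.
interface BadgeDetail {
  badgeId: string;
  earnedAt: Date | null; // null when the badge hasn't been earned yet
  progress: number;      // progress toward the next level, 0–1
}

function parseBadgeDetail(url: string): BadgeDetail {
  const params = new URL(url).searchParams;
  const earned = params.get("earnedAt");
  return {
    badgeId: params.get("badgeId") ?? "",
    earnedAt: earned ? new Date(earned) : null,
    progress: Number(params.get("progress") ?? 0),
  };
}
```

Because every piece of dynamic information travels in the URL, the webview itself can stay stateless and render entirely from what it parses.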
That separation was intentional, giving users fine-grained control over which games they prefer in dark or light mode.</p><p>To sync dark mode between this standalone webview and the apps, we updated the native-to-web contract so that the webview would recognize an additional query parameter named “dark”. If the app passes in that query parameter, the webview knows it must render in dark mode. Badge detail is then able to set its theming in a very similar way to most game webviews, like so:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/502/1*T-pYeuOReoUgueqMV1--vA.png" /></figure><p>Some extra care was needed to get enough contrast between the white text and the multi-colored radiating shape animation, especially given the blurred effect on the webview’s background. So in dark mode only, we lower the opacity of the radiating shapes to 40%.</p><p>In summary, to support dark mode on this webview, we leveraged prior work that enabled dark mode on game webviews. The key change was that instead of giving the user control over the setting within the webview, we allowed each app to pass in that app’s setting via a query parameter.</p><h3>Takeaways</h3><p>Dark mode in the apps has been a big milestone for The New York Times and brings great joy to our users. This project showed the complexity and teamwork needed across platforms to bring a high-impact project like Dark Mode over the finish line.</p><p><a href="https://www.linkedin.com/in/danielfalokun/"><em>Vanessa Johnson</em></a><em> is an Android Engineer working on the Games App at The New York Times. Vanessa has tech-led multiple projects that include Dark Mode, Strands Archive, and Connections Bot in the Apps. When not at work, Vanessa likes to play pickup basketball and travel to speak at various conferences.</em></p><p><a href="https://www.linkedin.com/in/danielfalokun/"><em>Dan Falokun</em></a><em> is an iOS Engineer working on the Games App at The New York Times. 
Dan has tech-led multiple projects that include The Midi Crossword, Strands Archive, Dark Mode and Connections Bot in the Apps. Outside of work, Dan likes to travel and listen to music.</em></p><p><em>Mike Ingber is a Staff Web Engineer on NYT Games.</em></p><p><em>Special thanks to our past interns: Naomy Portillo and Tony Vu for starting the project on mobile, Ihor Shamin, Nazar Novak and Maksym Safronov for helping get dark mode over the finish line on the Android side, and Lian Chang, Kathy Lee, Robert Vinluan and William Frohn on the design side.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=be7241ddb7ba" width="1" height="1" alt=""><hr><p><a href="https://open.nytimes.com/implementing-dark-mode-in-the-games-app-be7241ddb7ba">How The New York Times Games Team Delivered Accessible, Cross-Platform Dark Mode</a> was originally published in <a href="https://open.nytimes.com">NYT Open</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Right Team for the Job: From Engineering to Product: A Superpower for Developer Platform…]]></title>
            <link>https://open.nytimes.com/the-right-team-for-the-job-from-engineering-to-product-a-superpower-for-developer-platform-a290e65970ea?source=rss----51e1d1745b32---4</link>
            <guid isPermaLink="false">https://medium.com/p/a290e65970ea</guid>
            <category><![CDATA[product-management]]></category>
            <category><![CDATA[developer-platform]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[product]]></category>
            <dc:creator><![CDATA[The NYT Open Team]]></dc:creator>
            <pubDate>Wed, 11 Feb 2026 21:52:17 GMT</pubDate>
            <atom:updated>2026-02-11T21:55:26.200Z</atom:updated>
            <content:encoded><![CDATA[<h3><strong>The Right Team for the Job: From Engineering to Product: A Superpower for Developer Platform Strategy</strong></h3><h4><em>Introducing Danny Cassidy, Senior Technical Product Manager, Developer Productivity in Developer Platforms</em></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*S1GR_afPfX0TMn1QKLsxFQ.jpeg" /><figcaption>Illustration by Claire Merchlinsky</figcaption></figure><p><strong>By Geethu Jacob, Daniel Cassidy, Nikki Larson and Erica Vendetti</strong></p><p><strong><em>“PMs actively and effectively engage with engineering and technology decisions, understanding possibilities and consequences.”</em></strong></p><p><strong>This article is part of a New York Times Open </strong><a href="https://open.nytimes.com/building-the-new-york-times-product-team-the-right-team-for-the-job-ff9b43222537"><strong>series</strong></a><strong> showcasing the breadth of experiences and backgrounds of our Product Managers.</strong></p><h3><strong>Engineering can fuel product leadership</strong></h3><p>For Danny Cassidy, coding was an early creative outlet. As a high school student, he taught himself to code, trying his hand at frontend web development.</p><p>“I was very inspired by online communities back then, and how the internet could foster connection for people across the world. So, I first learned how to make extensions on forum software. My first paid gig in high school was building a social networking feature into a site for pet owners so that they could create profile pages for their pets. Those were the days!”</p><p>Building on that passion, Danny studied information technology in college. “There’s a definite craft to software development,” he says. “It’s not just technical. You are thinking about solving a problem in an elegant and legible way. 
I always enjoyed that part of the process.” As a software engineer, Danny focused on building backend APIs and content management systems (CMS) for media brands. Here, Danny learned to understand how engineering solutions fit into the business context. Engineers “need to know how a solution will evolve and scale over time. And their solutions need to be understood and maintainable by a multitude of people throughout all of this.”</p><p>In 2015, Danny accepted an engineering job at The Times working on the backend APIs that deliver our digital content from our CMS to our desktop and mobile apps. As an engineer at The New York Times, Danny had the opportunity to work with more stakeholders than before:</p><p>“As I started working on more complex, cross-functional projects, I noticed great engineering leaders consistently asking fundamental questions: Why are we building this? Who is it serving? How is it helping the business? This realization inspired me to more fully dedicate my career to understanding and measuring how technology can drive significant business and user impact.”</p><p>Danny started to think about a move to Product Management after a conversation with leadership:</p><p>“I had no idea there was such a thing as Technical Product Management. It was honestly never a consideration for me. I always thought I’d spend my entire career in engineering. One day, my boss at the time asked me if I was interested in Product Management, and it opened up this whole conversation. I am very grateful that he recognized a shift in the kind of work I was excelling at and raised the question to me.”</p><h3><strong><em>Leveraging a technical background to build for engineers</em></strong></h3><p>Danny is now a Senior Technical Product Manager in the Delivery Platforms Mission. He is responsible for providing centralized infrastructure and tooling to the New York Times product engineering teams so they can develop more efficiently. 
“The platform we are building out is <em>very </em>technical and serves many kinds of engineers at The Times, all with different levels of experience, expertise, and needs. I lean on my technical background almost every day to help navigate across technical domains and user personas.”</p><p>In his current position, Danny’s background as an engineer enables him to <strong>effectively engage with engineering and technology decisions, understanding the possibilities and consequences </strong>of each potential path. For a role like Danny’s, “it’s difficult to have strong Product Judgment if you struggle to understand how technical decisions impact your users and the business outcomes you are trying to deliver.”</p><p>Danny’s engineering background proved invaluable when his team faced a critical CI/CD pipeline replacement. (Think of a CI/CD pipeline as the “assembly line” for software, making sure updates reach users quickly and reliably!) When a vendor sunset their current tooling, Danny understood firsthand the developer blockers it created for multiple teams. Armed with empathy for the engineers, Danny led a user-centered, data-driven evaluation across The New York Times and Wirecutter. 
His deep engagement with engineering allowed him to synthesize platform pain points, assess future-fit alternatives, and ultimately align the organization around a long-term CI/CD strategy that successfully balanced business impact, developer experience, and technical feasibility.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fAl5DVcOeh4geiiinkACLg.png" /><figcaption><em>A snapshot from the evaluation matrix, showing how engineering teams’ needs were prioritized and compared across potential CI/CD vendors, guiding the decision toward the most future-ready solution.</em></figcaption></figure><p>Danny’s journey underscores how deep technical roots can strengthen a product manager’s leadership, especially in platform work, where empathy for engineers, strategic thinking, and business context must align. If you’re passionate about connecting user needs and business outcomes in tech, product management could be your next career step.</p><p>And if you’re like Danny, your technical acumen might be a powerful asset to build tools that empower others to do their best work!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a290e65970ea" width="1" height="1" alt=""><hr><p><a href="https://open.nytimes.com/the-right-team-for-the-job-from-engineering-to-product-a-superpower-for-developer-platform-a290e65970ea">The Right Team for the Job: From Engineering to Product: A Superpower for Developer Platform…</a> was originally published in <a href="https://open.nytimes.com">NYT Open</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building The New York Times Product Team: The Right Team for the Job]]></title>
            <link>https://open.nytimes.com/building-the-new-york-times-product-team-the-right-team-for-the-job-ff9b43222537?source=rss----51e1d1745b32---4</link>
            <guid isPermaLink="false">https://medium.com/p/ff9b43222537</guid>
            <category><![CDATA[mission]]></category>
            <category><![CDATA[product]]></category>
            <category><![CDATA[product-management]]></category>
            <dc:creator><![CDATA[The NYT Open Team]]></dc:creator>
            <pubDate>Wed, 11 Feb 2026 21:42:32 GMT</pubDate>
            <atom:updated>2026-02-11T21:42:32.426Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7veLdfomF4hRWjmAvnGFXA.jpeg" /><figcaption>Illustration by Giacomo Bagnara</figcaption></figure><p><strong>By Nikki Larson, Erica Vendetti and Michael Beach</strong></p><p>Our mission is simple: We seek the truth and help people understand the world. Accomplishing that mission requires each of us to build trust, seek out different perspectives, collaborate respectfully, and continuously strive to deliver the very best in all we do. But what exactly does that mean for Product Managers (PMs)? How do we earn and maintain trust with tens of millions of global readers in an era of misinformation? Where are new readers finding and developing a daily habit with our original reporting? How can Product unlock growth opportunities for our business? How can we help readers connect with their passions by building products like Games, The Athletic or Cooking? How do we use AI to responsibly advance our mission? How do we scale our systems to support breaking news? PMs at The New York Times solve a variety of problems for our readers, our journalists, and other product and functional teams. To meet the evolving needs of our almost 12 million subscribers, we rely on a Product team with a wide range of unique backgrounds.</p><p>It takes creativity and expertise from people in every part of the company to fulfill our mission. So it’s not uncommon for our PMs to have started their careers in roles outside of product. To build a world-class Product team, we’ve hired both PMs by trade and those who were previously dancers, consultants, entrepreneurs, journalists, strategists, operations specialists, and engineers. Our diverse experiences fuel the analytical rigor, creativity, communication skills, leadership, and execution skills needed to excel in getting to know our users’ needs and leading teams to build solutions to meet them. 
This post will help introduce you to PMs whose backgrounds outside of Product Management help them excel in their current roles at The New York Times.</p><h3>One guiding framework, many possible solutions</h3><p>While our PMs work on products across the entire business, they are all guided by a common set of competencies. No matter how different the problem, PMs continually develop their skills in these areas to help guide our teams to solutions and root us in our <a href="https://www.nytco.com/mission-and-values/">values</a>.</p><ul><li><strong>Product Strategy: </strong>PMs use structured thinking to define and prioritize impactful work that solves the right problems for our users, our business, and our journalistic mission. They analyze trends and develop product vision with empathy for our various users, including readers seeking a range of news and life needs, journalists who use our editorial tools to produce the most compelling and distinct multimedia reports, and engineers seeking better developer productivity.</li><li><strong>Execution: </strong>PMs are here to move the business forward by unlocking product-driven growth opportunities. They are resourceful, driven, and resilient, overcoming obstacles with determination to help teams move with urgency, focus, and impact.</li><li><strong>Leadership:</strong> PMs are critical for team dynamics, helping to facilitate inclusive, high-performing teams where everyone feels valued. 
They lead with influence, building trust and communicating clearly to achieve shared goals.</li><li><strong>Cross-Functional Collaboration: </strong>PMs foster strong cross-functional relationships, collaborating effectively to set priorities and drive results.</li><li><strong>Technical Acumen: </strong>PMs possess a strong understanding of technology, enabling them to effectively engage with engineering decisions, including helping to uncover the possibilities and consequences of technical choices.</li></ul><p>The New York Times welcomes PMs of different backgrounds and skills, who add a richer understanding of human experience into the very core of the products they manage. PMs share a deep commitment to the importance of independent journalism, and leverage their unique perspective to make the essential value of our journalism better understood and more useful to more people. <strong>This series is meant to help you discover how our Product team maintains the integrity of our mission while navigating the digital transformation around us all. </strong>Their stories are a nod to the power of transferable skills and the nature of product leadership at The New York Times.</p><p>Ultimately, this series is more than just a showcase of our team. It’s an exploration of how diverse pathways can lead to extraordinary impact and a reflection of our belief that the best products are built by teams who bring the richness of the human experience to the challenge of solving problems. They build solutions for our readers, our journalism, and the world.</p><p><strong>Curious about joining our team? 
Check out our </strong><a href="https://www.nytco.com/careers/"><strong>career boards</strong></a><strong> for more information.</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ff9b43222537" width="1" height="1" alt=""><hr><p><a href="https://open.nytimes.com/building-the-new-york-times-product-team-the-right-team-for-the-job-ff9b43222537">Building The New York Times Product Team: The Right Team for the Job</a> was originally published in <a href="https://open.nytimes.com">NYT Open</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How The New York Times is scaling Unit Test Coverage using AI Tools]]></title>
            <link>https://open.nytimes.com/how-the-new-york-times-is-scaling-unit-test-coverage-using-ai-tools-fa796bf9b8d2?source=rss----51e1d1745b32---4</link>
            <guid isPermaLink="false">https://medium.com/p/fa796bf9b8d2</guid>
            <category><![CDATA[test-coverage]]></category>
            <category><![CDATA[unit-testing]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[scaling]]></category>
            <category><![CDATA[ai-tools]]></category>
            <dc:creator><![CDATA[The NYT Open Team]]></dc:creator>
            <pubDate>Tue, 13 Jan 2026 18:51:55 GMT</pubDate>
            <atom:updated>2026-01-13T19:57:12.601Z</atom:updated>
            <content:encoded><![CDATA[<h4>How AI tools are helping our software engineers write better tests at scale</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7Ydoq98t7mlPQUZkpTz4Mg.gif" /><figcaption>Illustration by Nick Little</figcaption></figure><p><strong>By Eric Chima and Leonardo Quixadá</strong></p><p>At The New York Times, we’re all excited to build fresh new experiences that delight our users. Our product managers are driven to find new ways to get our work in front of our audience and build reader engagement. Our engineers are motivated to solve unique technical challenges. And just when you think all that work is on track, breaking news strikes and all of our plans change at once.</p><p>With all that going on, who could blame us if our test coverage couldn’t quite keep up?</p><p>Like every engineering organization, The Times deals with routine maintenance tasks: updating dependencies, cleaning up old code, <a href="https://open.nytimes.com/accessibility-requirements-not-features-76d9758665cd">maintaining accessibility standards</a>, and, yes, building testing into all of our products. Our engineers are committed to quality, but when you work at the speed of news, there’s always a new issue that needs to be addressed. Fortunately, generative AI has arrived with the promise of tidying up after us, taking care of the busy work, and giving time back to our developers to focus on feature development. But how far can you trust it?</p><p>Recently, one of our platform teams used AI tools to build out unit tests across our flagship product, the <a href="http://nytimes.com">News site</a>. This was an opportunity for our testing to catch up with our rapid development. Our goal was to improve the reliability of our web app, but also to evaluate AI products and determine how far we could push them to do work in bulk across our codebase. 
As expected, the agents required strict human supervision, but they improved our efficiency in writing tests and let us quickly expand coverage on some of our most critical code.</p><h3>AI accelerates, but humans test the tests</h3><p>Unit tests are a crucial part of the development process. The idea is to divide application code into small pieces and write tests for each, creating guardrails to ensure that changing one piece of the site doesn’t impact the rest. Having comprehensive unit tests makes the site more reliable and lets us roll out major changes (like <a href="https://open.nytimes.com/enhancing-the-new-york-times-web-performance-with-react-18-d6f91a7c5af8">last year’s React upgrade</a>) more confidently. But like many fast-moving engineering teams, we found our test coverage hadn’t kept pace with our rapid development. When we audited our News site, we found that we only had about 60 percent test coverage across our codebase. To compensate, we had to conduct extensive manual testing before releasing new features.</p><p>To us, expanding our test coverage seemed like an ideal application for generative AI. We knew that AI could struggle to deal with code context across a large project, but that’s less of an issue when the agent is writing self-contained unit tests for modular code. The tool only needed to cover each individual file, a task it should be able to replicate reliably without understanding the entire codebase.</p><p>We set out with an initial goal of reaching 80 percent test coverage across our entire News monorepo, touching code owned by over a dozen different teams. We initially covered six different projects, taking them from an average of 28 percent code coverage all the way to 83 percent. We estimated that AI cut the required work by 70 percent, reducing the effort for a single project from weeks to hours.</p><p>But while AI accelerated our work, it wasn’t reliable or autonomous enough to cover the whole monorepo without human supervision. 
The tools had to be closely monitored and each test verified. It took enough work that even with AI assistance, our small group wasn’t going to be able to cover everybody else’s code. Instead, we chose a variety of different web projects and key news pages as test cases, and then wrote up instructions that other teams (and now, you, our Open Blog readers!) can use to generate their own unit tests.</p><h3>The test generation process</h3><p>Before we could begin, we needed a reliable way to measure test coverage throughout our code base. To do that, our engineers wrote a small package called test-cov that would iterate through a directory and measure the test coverage of each file. That package was key because it not only allowed us to measure our progress but would be repeatedly fed into the AI prompt as it learned which files needed additional tests.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rzTURaBzUS-gcVgGilPuTg.png" /><figcaption><em>Sample output from test-cov on a GraphQL subgraph directory</em></figcaption></figure><p>With that in place, we began working with simpler, standalone projects within our monorepo. They allowed us to refine our prompting strategies to consistently achieve our 80% coverage target before moving on to more dense code. Eventually, we advanced all the way to the Story template, the critical page type that displays <a href="https://www.nytimes.com/2025/11/12/opinion/ai-coding-computer-science.html">most</a> <a href="https://www.nytimes.com/2025/11/12/us/politics/scott-bessent-irs-loophole.html">of</a> <a href="https://www.nytimes.com/2025/11/12/opinion/ai-coding-computer-science.html">our</a> <a href="https://www.nytimes.com/2025/11/12/science/brain-implants-technology-disability.html">articles</a>. 
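</p>

<p>The coverage-measurement idea behind test-cov can be sketched in a few lines. The following is an illustrative Python sketch, not the internal package (which targets JavaScript projects); the function name and data shape are invented. Given per-file counts of covered and total lines, it reports each file’s coverage and flags the files still below the target:</p>

```python
# Hypothetical sketch of the idea behind test-cov (names invented):
# report per-file coverage and flag files below the coverage target,
# so a human or an agent knows where additional tests are needed.

def coverage_report(per_file, target=0.80):
    """per_file maps filename -> (covered_lines, total_lines)."""
    report = {name: covered / total for name, (covered, total) in per_file.items()}
    below = sorted(name for name, cov in report.items() if cov < target)
    return report, below

files = {
    "Story.tsx": (53, 100),   # 53% covered
    "Byline.tsx": (88, 100),  # 88% covered
}
report, below = coverage_report(files)
print(below)  # the files that still need tests
```

<p>In the real workflow, a report like this was regenerated on every iteration and fed back into the agent’s prompt so it could pick its next target file.</p><p>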
By that stage, we had distilled our process to a point where we were able to quickly and efficiently improve Story’s code coverage from 53 to 83 percent.</p><p>The actual workflow consisted of two loops: a big loop (the “User Loop”) and a smaller one (the “AI Loop”). The User Loop involves creating a chat, feeding it a prompt, and then letting the AI Loop run. Once the AI’s work is done, the user can review the results and trigger the AI Loop again if necessary. Ideally, the User Loop would only need to run once, but that isn’t always the case. The AI Loop involves the AI agent running the test-cov package, identifying coverage gaps, generating new tests, running the tests to validate them, and then running the test-cov package again to see whether more iterations are necessary.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EwRL1q3JUn1rxyygZHipHw.png" /></figure><p>Allowing an LLM to edit production files carries inherent risks. One is that the agent will introduce regressions or security holes into your code. Another is that the AI will “cheat” by editing the source code to conform to existing unit tests, rather than the other way around. To combat that, we introduced the most important instruction in our prompt:</p><p>“Do not touch the source code.”</p><p>That command was one of two “walls” that confined the AI agent. The other wall was our test-cov package, which ran on every iteration and produced a read-only report. That was another constraint that could not be cheated. Together, the two walls trapped the AI loop in a silo where it could only leave if it hit the goal.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yij_DC2xGLf_ZmJIxT9SNg.png" /></figure><h3>The agent in action</h3><p>Our early experiments involved a lot of trial and error. Sometimes, the agent would create useful unit tests. 
Other times, it would hallucinate unknown commands, get stuck in a loop of creating and deleting tests, or just run too many commands and hit our LLM’s limit. Even with our smallest applications, the work was slow going.</p><p>To solve this, we asked the agent itself to refine the rules we were feeding into the tool. Our final prompts were roughly seven pages (largely AI-written but reviewed and edited by humans) and included strict guidelines on how the LLM should write and verify its tests. We gave it rules to follow across a wide variety of categories, including:</p><ul><li><strong>Agent role </strong>— If you tell the agent that it’s a senior Javascript engineer, it’s more likely to write code like one. See example below.</li></ul><pre># Unit Test Coverage Improvement Agent<br><br>You are a senior JavaScript/TypeScript engineer specializing in unit testing with Jest and React Testing Library. Your mission is to systematically improve code coverage for JavaScript/TypeScript source files in a monorepo environment.</pre><ul><li><strong>Target identification and coverage areas</strong> — Coverage goals, what directories to search, what testing frameworks to use, and where to target tests. We focused on things like components, utilities, API calls, etc.</li><li><strong>Test structure and generation </strong>— What types of tests to create, where to put them, how to name them, etc.</li><li><strong>Available commands </strong>— Tools the agent can use to do its work. That included our test-cov package and the tools necessary to execute and validate tests in our system.</li><li><strong>Success criteria and quality standards </strong>— All tests pass without flakiness, tests are maintainable and use standard conventions, and test coverage goals are met.</li><li><strong>Safety guardrails </strong>— Focus on quality over quantity, report obstacles rather than making assumptions, and back out after 10 iterations to avoid infinite loops. 
Most importantly, <em>never modify the source code!</em></li><li><strong>Error handling </strong>— What to do if coverage becomes flaky, test coverage decreases, or the agent becomes stuck on complex logic.</li><li><strong>Reporting template </strong>— How the agent should report the results when it’s done.</li></ul><p>Below: An example rule for reporting templates</p><pre>## Reporting Template<br><br>```markdown<br>## Code Coverage Improvement Report<br><br>**Project**: [Project Name]<br>**Date**: [Date]<br>**Duration**: [Time Spent]<br><br>### Summary<br>- **Files Processed**: X<br>- **Initial Average Coverage**: Y%<br>- **Final Average Coverage**: Z%<br>- **Target Achievement**: [Met/Partially Met/Not Met]<br><br>### Detailed Results<br>| File | Initial | Final | Improvement | Status |<br>|------|---------|-------|-------------|--------|<br>| file1.ts | 45% | 85% | +40% | ✅ Target Met |<br>| file2.ts | 60% | 78% | +18% | ⚠️ Close to Target |<br><br>### Challenges Encountered<br>- [List any significant obstacles]<br><br>### Recommendations<br>- [Suggestions for further improvement]<br>- [Areas needing manual review]<br><br>### Files Requiring Manual Attention<br>- [List files that couldn&#39;t reach 80% threshold]<br>- [Include specific reasons and suggestions]</pre><p>Once we had refined the prompt enough, the process was much smoother. We still kept close supervision on the process, but the agent was efficiently creating effective tests, and the rules were reusable across larger projects. We had initially hypothesized that large routes would be more difficult than standalone projects, but even when we graduated to the Story route, which is a very complex codebase, we were able to create the tests we wanted with only minimal effort and human supervision. 
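</p>

<p>The two-loop workflow and its two “walls” can be sketched as a simple driver. This is a hedged illustration, not our actual tooling: the callbacks, file-name convention, and iteration cap are stand-ins for the real coverage runner, agent, and prompt rules:</p>

```python
# Illustrative sketch (not production tooling) of the AI Loop:
# measure coverage (wall 1: a read-only report), generate tests for the
# gaps, re-measure, and back out after 10 iterations. A guard enforces
# wall 2: the agent may only create test files, never touch source code.

MAX_ITERATIONS = 10   # safety guardrail from the prompt
TARGET = 0.80         # coverage goal

def ai_loop(measure_coverage, generate_tests, project):
    """Run the AI Loop once; the human User Loop reviews the result."""
    for iteration in range(MAX_ITERATIONS):
        coverage = measure_coverage(project)      # wall 1: read-only
        if coverage >= TARGET:
            return {"coverage": coverage, "iterations": iteration}
        changed = generate_tests(project)         # agent writes new tests
        if any(not name.endswith((".test.ts", ".test.tsx")) for name in changed):
            raise RuntimeError("wall 2 violated: source code was modified")
    return {"coverage": measure_coverage(project), "iterations": MAX_ITERATIONS}

# Simulated run: each agent pass lifts coverage by ten points.
state = {"coverage": 0.53}
def fake_measure(project):
    return state["coverage"]
def fake_generate(project):
    state["coverage"] = min(1.0, state["coverage"] + 0.10)
    return ["Story.test.tsx"]

result = ai_loop(fake_measure, fake_generate, "projects/story")
```

<p>Ideally the human User Loop wraps this just once: review the generated tests, then re-run the AI Loop only if coverage or quality falls short.</p><p>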
The time required to add or improve unit tests shrank from weeks to hours.</p><h3>Limitations and learnings</h3><p>Despite our success with a longer prompt, we found that it’s important to limit the chat context to only essential information. It’s tempting to drop entire directories or pages of technical documentation into the chat, but this can lead to a phenomenon called <em>context rot. </em>As the tool iterates, it becomes distracted by irrelevant information or calls incorrect tools. In some cases, it will hallucinate into the context and then reference its own hallucinations, causing a feedback loop that poisons the test-generation process.</p><p>It was also important to limit the agent’s scope each time we ran it. Although we were eager to scale the project quickly, applying these methods to an entire codebase at once introduces several problems. Iterating over large codebases expands the context, increasing the potential for context rot. And even if everything runs successfully, the resulting commit might be too big for a human to effectively review.</p><p>In fact, code reviews had been one of our major early concerns with the project. Because our group didn’t own most of the code we were working on, we would be turning over our generated tests to other Times teams to approve. Would The Times’s Story team accept the AI-generated tests we were adding to their project?</p><p>To mitigate those worries, we kept our client teams apprised of our goals and plans throughout the early stages of the project. When the new unit tests were ready, we arranged a formal review with Story engineers to make sure they met their standards. In the end, we were pleased by how well our work was received. The tests were effective and well-written. And, as it turns out, teams are very excited when you automate their grunt work.</p><h3>What’s next?</h3><p>When we think about the potential of AI in web development, we usually imagine a brand-new website spun up in seconds. 
We’ve all seen the demos where AI agents quickly build and deploy a web-based version of Tetris. For teams like ours, though, who are building features on top of extensive existing web infrastructure, the appeal of AI is its potential to manage the busy work associated with large codebases. After the success of the test generation project, we’re experimenting with other ways that AI can make our lives easier. Can we use the same principles to update our old, end-of-life CSS framework? Could they help us migrate separate projects to a common monorepo?</p><p>We started on News, but our AI work is already paying dividends across the bundle. Last week, our Games team tried our test-generation process on their latest puzzle game, Pips. When they told us that it turned their unit test epic into a two-point ticket, it validated everything we’d done in this project. We’re hoping these techniques become a standard used throughout the company.</p><p>This project was just one focused experiment within a broader set of initiatives exploring how AI can responsibly support software development at The Times. Under the close supervision of our engineers, product managers, and designers, it’s clear that AI tools can meaningfully enhance our productivity and change the way we work.</p><p>After all, the more we can automate the mundane, the more we can free up our talented contributors to focus on the things they do best. At least until breaking news strikes again.</p><p><a href="https://www.linkedin.com/in/lquixada/"><em>Leonardo Quixadá</em></a><em> is a Senior Software Engineer at The New York Times, working for the Web Platforms team. He is passionate about using GenAI tools and strong engineering practices to improve developer productivity and the speed at which high-impact journalism reaches millions of readers.</em></p><p><em>Eric Chima is the Senior Technical Product Manager for the Web Platforms team at The New York Times. 
His team builds the web foundations that enable Times engineers to create new experiences and ways of delivering our journalism.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=fa796bf9b8d2" width="1" height="1" alt=""><hr><p><a href="https://open.nytimes.com/how-the-new-york-times-is-scaling-unit-test-coverage-using-ai-tools-fa796bf9b8d2">How The New York Times is scaling Unit Test Coverage using AI Tools</a> was originally published in <a href="https://open.nytimes.com">NYT Open</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Designing a Digital New York Times Museum]]></title>
            <link>https://open.nytimes.com/designing-a-digital-times-museum-for-all-405331352189?source=rss----51e1d1745b32---4</link>
            <guid isPermaLink="false">https://medium.com/p/405331352189</guid>
            <category><![CDATA[product]]></category>
            <category><![CDATA[intern]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[product-design]]></category>
            <category><![CDATA[archive]]></category>
            <dc:creator><![CDATA[The NYT Open Team]]></dc:creator>
            <pubDate>Wed, 15 Oct 2025 18:30:40 GMT</pubDate>
            <atom:updated>2025-10-16T00:34:00.011Z</atom:updated>
            <content:encoded><![CDATA[<h4>How five product design interns created an award-winning virtual museum experience for employees</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TkpbB5_zR3GSXWYSeqaBQQ.jpeg" /><figcaption>Illustration by Lucas Burtin</figcaption></figure><p><strong>By Nuoran Chen, Mina Chung, Frannie Ello, Christina Su, and Bella Rinne</strong></p><p>A key element of The New York Times Product Design Internship Program is a cohort project where all the interns work as a group to create a deliverable that is presented to the entire product design team. We, the summer 2024 cohort, were tasked with designing a proof-of-concept that brings the internal company museum in the NYC office to remote employees.</p><p>The Times Museum is home to multiple artifacts where visitors can freely roam from each display to understand how their stories intertwine. From an official letter authorizing the publication of the leaked Pentagon Papers to physical effects of our war correspondents, each display is unique and speaks to the history behind The Times’ mission to bring independent journalism to all.</p><p>Our group began our work with a tour of The Times Museum led by retired journalist and museum curator, <a href="https://www.nytimes.com/by/david-w-dunlap">David W. Dunlap</a>. On this tour, David guided us through each artifact in chronological order, describing the stories behind each piece and additional facts that were not fully captured with labels. His guided tour was essential for understanding the details behind The New York Times history and journalistic evolution. 
This tour informed two key requirements for the virtual museum.</p><ol><li><strong>A guided tour with pathways for autonomous exploration</strong>: Visitors should experience the museum in the order it was curated while also being able to explore independently.</li><li><strong>Highlight important artifacts</strong>: We want to reduce the cognitive load for visitors and highlight prominent artifacts so visitors are not overwhelmed when they enter the virtual museum.</li></ol><p>With these requirements defined, we began the design process.</p><p>We held multiple brainstorming sessions to prioritize features and propose user flows for this virtual museum, but quickly found ourselves at a roadblock. For many weeks, we focused on nailing down our key features without bringing our ideas to visual sketches and designs. Not only did this make it challenging to scope our work, but it also made it harder for us to get feedback from other designers.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ocK45EteCnD3nvJWSVisZA.jpeg" /><figcaption><em>Putting our ideas to visuals helped with deciding the scope of our project.</em></figcaption></figure><p>We divided responsibilities among team members: Bella took charge of onboarding to the museum, Nuoran and Mina shared the interactive map and navigation, and Christina and Frannie worked together on the artifact pages. Additionally, each team member rotated responsibility for preparing our biweekly workshops with designers who gave us feedback on our deliverables.</p><p>To bring the New York Times museum to a digital audience, we realized the first challenge was how to display artifacts online. How can users navigate the space and discover artifacts that interest them? With this question in mind, we began a deep dive into inspiration. We explored various museums’ digital websites and then drew from the company’s internal communication platform to ensure our design reflected the brand’s editorial integrity. 
The early explorations based on these inspirations show the spectrum of displaying artifacts — in a more editorial way or spatial way.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*D86GSdDVSQM8GjIyB6OxFw.jpeg" /><figcaption><em>Low-fidelity iterations of the museum navigation.</em></figcaption></figure><p>After discussion, we agreed that since the physical museum already exists, our goal was to recreate its sense of physical presence — helping users feel connected to the artifacts and experience the excitement of walking through the space. To achieve this digitally, Nuoran used photogrammetry to stitch together photos of the museum, creating a 3D environment where users can either follow a guided tour or explore freely.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*8nHCTDBKNXHA04krZwmy2g.gif" /><figcaption><em>Creating a digital twin of the museum using photogrammetry</em></figcaption></figure><p>This 3D foundation led us to adopt a spatial UI, which ensures the UI doesn’t distract from the artifacts; it becomes part of the physical space itself. Nuoran and Mina designed unified spatial UI components, such as the floor-plan map and the side panel for filtering and navigation. These navigation methods ensured a cohesive and accessible experience. Upon landing, users enter a panoramic 3D view that visualizes the museum, map and the side rail. 
This helps users understand where they’re located in the space and helps them efficiently navigate between highlighted artifacts.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*tjVgUweNfTQOfdml4PXM3g.gif" /><figcaption><em>Visitors land onto a 3D map where they can understand the physical space of the museum.</em></figcaption></figure><p>Users can feel as if they’re walking through the museum by following visual cues such as locator pins embedded in the 3D space or directional arrows on artifact cards that guide exploration and highlight key artifacts.</p><p>To support this system, Bella designed an onboarding flow that introduces users to each component’s functionality before they begin their journey.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*lEftq7cAeEsoeApzPh0tNw.gif" /><figcaption><em>Onboarding that tells users how to navigate through the digital museum</em></figcaption></figure><p>The second design focus was creating interactive experiences around the highlighted artifacts. With limited time, we only focused on ten key artifacts that best represent the museum while minimizing visitor decision fatigue. David provided the list of artifacts, and we began exploring ways to present them in the virtual space. Our initial approach used editorial-style layouts with static images and text, but we quickly realized this fell short. These artifacts are powerful touchstones of journalism’s history, and much of their presence is lost on a flat screen. 
Unlike physical museums where artifacts are behind glass and out of reach, virtual environments offer opportunities for more immersive interaction and deeply engaging storytelling.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*WozMSqSB_gFJW9Kk4vD6xg.gif" /><figcaption><em>Earlier artifact page iterations felt static and focused on the text.</em></figcaption></figure><p>Realizing that, we regrouped and had a workshop to brainstorm interactive features that could make key artifacts more vibrant and help tell their stories more effectively. Christina and Frannie tried out different animations to bring these ideas to life. For example, the <em>“Man Walking on the Moon”</em> artifact includes three newspaper front pages, each with a different headline — highlighting how journalists in the analog era adapted to rapidly unfolding events. We stacked these newspapers together so users can click on tags to switch between the headlines, mimicking the experience of flipping through real newspapers. For Barney Darnton’s story, rather than stacking all his related objects together, we chose to display them across a single page representing his role as a World War II correspondent in the Pacific Theater. Users can zoom in on individual items to explore his story in greater depth.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*OLraX__r3_z5RNATfwpF5g.gif" /></figure><p>Instead of treating the ten artifacts as separated experiences, we organized them in chronological order under a unifying theme: “Integrity of the Times.” This structure forms the basis of the guided tour we offer users from the start. To make this tour’s storytelling more engaging, we introduced a scavenger hunt element throughout the experience. 
As users explore each artifact, they can collect clues that unlock the full story behind it which adds a layer of interactivity that makes the journey more educational and gamified.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*1arEaUTRCOf2RoP81SrslA.gif" /></figure><p>As we wrapped up this project, our team understood the responsibility of designing a virtual museum and the rich history that comes with it. Throughout our design process, we used real imagery and content from the museum in our sketches, constantly questioning if our ideas were the best way to showcase a specific artifact. <em>Is this interaction too playful for this artifact? Does this artifact need more visuals to tell the story?</em> As product designers, we often think about design consistency at the system-level, but as designers working at The New York Times, it is this kind of editorial thinking that brings our work to life. We ended this project by presenting it to the Product Design team, The Times Brand and R&amp;D teams. Our work went on to earn the <a href="https://ifdesign.com/en/winner-ranking/project/virtual-new-york-times-museum/702910">2025 iF Design Award </a>and the <a href="https://www.idsa.org/awards-recognition/idea/idea-gallery/virtual-new-york-times-museum/">2025 IDEA Bronze Prize</a>, highlighting the strength of our approach and storytelling as recognized by the broader design community.</p><p>These awards could not have been won without each member of the team and the strength of our collaboration in this project. Sharing project management responsibilities among this team of designers each step of the way helped with decision-making and keeping us prepared for deadlines. 
Lastly, for designers looking to hone their process, it can be daunting to jump into designs before everything is answered; however, sometimes visuals are necessary provocations that help a team align on an idea.</p><p><em>A special ‘thank you’ to former product designers Bella Rinne and Christina Su for being a part of this project and contributing to this article</em>. <em>We could not have done this project and won this award without you two!</em></p><p><em>Nuoran Chen is an associate product designer who has worked on The New York Times’ internal design system and currently designs mobile subscription experiences across different Times products.</em></p><p><em>Mina Chung is an associate product designer who has worked on the core news home page and currently designs current subscribers’ experience managing and upgrading their digital subscription across different NYT product surfaces.</em></p><p><em>Frannie Ello is an associate product designer who designs across visual formats and video experiences for the news web and app platforms.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=405331352189" width="1" height="1" alt=""><hr><p><a href="https://open.nytimes.com/designing-a-digital-times-museum-for-all-405331352189">Designing a Digital New York Times Museum</a> was originally published in <a href="https://open.nytimes.com">NYT Open</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Scaling Subscriptions at The New York Times with Real-Time Causal Machine Learning]]></title>
            <link>https://open.nytimes.com/scaling-subscriptions-at-the-new-york-times-with-real-time-causal-machine-learning-5f23a7b24ff4?source=rss----51e1d1745b32---4</link>
            <guid isPermaLink="false">https://medium.com/p/5f23a7b24ff4</guid>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[data]]></category>
            <category><![CDATA[causal-inference]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[Rohit Supekar]]></dc:creator>
            <pubDate>Fri, 03 Oct 2025 15:19:24 GMT</pubDate>
            <atom:updated>2025-10-06T01:45:28.397Z</atom:updated>
            <content:encoded><![CDATA[<h4>How real-time algorithms and causal ML transformed our digital subscription funnel from static paywalls to dynamic, millisecond decision-making</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UITh9uwuoXVxScBnf9JnJQ.gif" /><figcaption>Illustration by <a href="https://mathieulabrecque.com/">Mathieu Labrecque</a></figcaption></figure><p>The New York Times became a subscription-first news and lifestyle service with the launch of its paywall in 2011. Since then, our subscription strategy has evolved substantially. Initially, users could access a limited number of free articles per month before they encountered the paywall. In 2019, we began personalizing this number using a Machine Learning (ML) model — <a href="https://open.nytimes.com/how-the-new-york-times-uses-machine-learning-to-make-its-paywall-smarter-e5771d5f46f8">The Dynamic Meter</a>. In the past few years, we have replaced this model with real-time algorithms that decide, typically within milliseconds, whether to grant access. These algorithms are tailored to balance and optimize the tradeoff between several business Key Performance Indicators (KPIs), while also allowing us the flexibility to adjust for any business constraints. This article further details the motivation behind these algorithms and their design based upon principles from causal machine learning and multi-objective optimization.</p><h3><strong>Our subscription funnel</strong></h3><p>The New York Times has a tiered subscription funnel (Figure 1), consisting of unregistered, registered, and subscribed users. This funnel is designed to provide non-subscribers with limited access to our content, allowing them to discover our offerings. At other times, the content may be blocked by a digital “wall”. We have two types of walls — a registration wall that asks a user to register for a free account or log in, and a paywall that asks a user to subscribe. 
A large number of users are unregistered — they may be shown either a registration wall or a paywall. Once a user is in the registered state, they can be shown only a paywall.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0tTyFxroIibVwYrsKNXu0Q.png" /><figcaption>Figure 1: The New York Times subscription funnel</figcaption></figure><h3><strong>Optimizing the subscription funnel</strong></h3><p>Optimizing who sees the registration wall or the paywall — and when — is a very relevant question for our business. While blocking access encourages users to subscribe, we do not want to block all our content because it prevents users from discovering the breadth and depth of our journalism.</p><p>Our holistic business strategy is made up of distinct KPIs for registered and unregistered users. For registered users, the business KPIs are two-fold and related to engagement and conversion, similar to our previous <a href="https://open.nytimes.com/how-the-new-york-times-uses-machine-learning-to-make-its-paywall-smarter-e5771d5f46f8">model</a>.</p><p>For unregistered users, we are interested in subscriptions as well as registrations. Post registration, we can offer tailored experiences to provide users with a more rewarding experience. In addition, we consider <a href="https://en.wikipedia.org/wiki/Bounce_rate">bounce rate</a> as a relevant KPI, for better user experience and since it is often relevant for SEO (Search Engine Optimization).</p><p>Due to such competing business KPIs, the decision to show a registration wall or a paywall is nuanced. Our goal is to build a smart system driven by ML that can decide when to show a paywall, a registration wall, or allow access. As an example, if a user is highly likely to only register but not subscribe, we might show them a registration wall. 
If they are not likely to subscribe or register, but might engage, the best decision is to allow access so that they get interested in our content.</p><p>Beyond sophisticated machine learning methods, the success of our approach is possible due to a close collaboration with our business leadership in clearly defining business KPIs. The data-driven culture at The Times enables this seamless collaboration. Our shared understanding of the fundamental tradeoffs in our business is crucial for building customized ML models with business constraints.</p><h3><strong>Machine learning for optimizing the subscription funnel</strong></h3><p>Machine learning has been a key component of how the Times’ paywall has been operating. We published an article on our previous <a href="https://open.nytimes.com/how-the-new-york-times-uses-machine-learning-to-make-its-paywall-smarter-e5771d5f46f8">Dynamic Meter model</a> that used to personalize the number of free articles every registered user could access in a month. This model was implemented at the start of each month as a batch process to balance and maximize KPIs for subscription as well as engagement. While very successful for our strategy, this approach did not allow us to take real-time data, as well as article information, into account for making a paywalling decision.</p><p>To leverage the capabilities provided by our Machine Learning Platform, we revamped our algorithms to be real-time and took a holistic approach to apply machine learning in optimizing our subscription funnel. We also apply custom manual rules that can override algorithmic outcomes — for instance, designating certain content as open access for public service.</p><h3><strong>Modeling</strong></h3><p>From the perspective of prescriptive ML, different user types have different sets of “actions”. For unregistered users, the actions include (1) showing a registration wall, (2) showing a paywall, or (3) allowing access. 
For a registered user, the actions are (1) showing a paywall or (2) allowing access. Since the actions, as well as the number and nature of objectives, are different for our two user groups, we decided to build separate prescriptive models. Here, we describe the construction of the models holistically.</p><p>For each user type, we have a set of supervised predictors <em>fᵢ(𝐗, a)</em>, where <em>i</em> represents a specific objective, such as propensity to subscribe or engage with an article. These predictors take in <em>𝐗</em>, a vector of features available at inference time, and an action <em>a</em>, such as showing a paywall or allowing access.</p><p>The supervised predictors <em>fᵢ(𝐗, a)</em> are trained on data collected by a Randomized Control Trial (RCT), which is set up to randomly take an action with equal probability. The RCT is always on and the collected data from it ensures that the supervised predictors are causal, which means they are able to make predictions about counterfactual actions for any given request. This supervised learning setup — where the action <em>a</em> is used as a feature — is typically referred to as an <a href="https://causalml.readthedocs.io/en/latest/methodology.html#s-learner">S-Learner</a>.</p><p>Since we have multiple objectives, we take a convex linear combination to construct a single objective using weight factors <em>δᵢ</em> such that <em>∑ ᵢ δᵢ = 1</em>. These weight factors determine how much we value one objective over the others and are chosen based on certain business constraints that we detail below.</p><p>The models then take an action <em>a*</em> as per the policy in Equation 1.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*r8XP2CuCUSE7Nyg_PG1hhw.png" /></figure><p><em>a*</em> is the action that maximizes the combined objectives.</p><p>Our modeling approach is schematically shown in Figure 2. 
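</p>

<p>As a rough sketch of the policy in Equation 1 (with invented names and toy numbers standing in for the trained S-Learner predictors), the decision reduces to an argmax over actions of the weighted predictions:</p>

```python
# Minimal sketch of Equation 1 (names and numbers are invented):
# each causal predictor f_i(X, a) scores one objective for every candidate
# action; the scores are blended with weights delta_i summing to 1, and
# the action with the largest combined objective is chosen.

ACTIONS = ["regiwall", "paywall", "allow"]  # unregistered-user actions

def choose_action(predictors, weights, x, actions=ACTIONS):
    """Return a* = argmax_a sum_i delta_i * f_i(x, a)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # convex combination
    def combined(action):
        return sum(w * predictors[i](x, action) for i, w in weights.items())
    return max(actions, key=combined)

# Toy stand-ins for the trained predictors (the real ones are ML models
# trained on always-on RCT data, with the action included as a feature).
predictors = {
    "subscribe": lambda x, a: {"regiwall": 0.01, "paywall": 0.02, "allow": 0.005}[a],
    "register":  lambda x, a: {"regiwall": 0.20, "paywall": 0.01, "allow": 0.0}[a],
    "engage":    lambda x, a: {"regiwall": 0.30, "paywall": 0.10, "allow": 0.90}[a],
}
weights = {"subscribe": 0.6, "register": 0.3, "engage": 0.1}
action = choose_action(predictors, weights, x={"recent_pageviews": 3})
```

<p>For registered users the same policy applies with only the paywall and allow actions, using that user type’s own predictors and weights.</p><p>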
Real-time features are ingested into supervised predictors, which predict the objectives for each action. These predictions are then multiplied by their weight factors and combined. Finally, the action that leads to the largest combined objective is chosen.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6tOWgAWkVsux3cRFgrIEfg.png" /><figcaption>Figure 2: A schematic for our algorithmic approach that determines whether to show a registration wall, a paywall, or allow access to a user accessing a specific article. Using a real-time feature vector <em>𝐗</em>, supervised predictors return different objectives for each action. These objectives are combined using pre-determined weight factors based on business constraints. The final chosen action maximizes the combined objective. This process is typically executed within milliseconds when a user accesses an article.</figcaption></figure><h3><strong>Respecting business constraints</strong></h3><p>The weight factors in our models crucially determine the rates at which registration walls and paywalls are shown, rates that are often set by business constraints. For example, for the registered user model, a constraint might be a desired paywall rate in aggregate over a day. For the unregistered user model, we may have multiple constraints, such as those on registration wall rate and paywall rate.</p><p>We solve an optimization problem to determine the weight factors by backtesting on a recent RCT dataset. By applying a trained model to this dataset, we can construct functions <em>rₖ(δ₁, δ₂, …; χ)</em> that return any statistics about the actions that the model takes on the dataset. Our problem is to match <em>rₖ</em> with a business-specified target <em>tₖ</em> by continuously changing the weight factors <em>δᵢ</em>. For example, <em>t₁</em> might represent a paywall rate. 
Correspondingly, <em>r₁</em> returns the paywall rate that the model would have achieved if it were applied to the historical RCT dataset.</p><p>We pose the above problem as yet another multi-objective optimization problem to minimize the losses in Equation 2, which are the squared errors between the achieved and the target rates.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Vktf7PDQhetX6Z0FVeW3dg.png" /></figure><p>The Pareto optimization problem for these losses is in Equation 3.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7ziYo3DDqTEyjaOJ4_cjPQ.png" /></figure><p>We may solve Equation 3 in its native multi-objective formulation, or combine the individual losses and treat it as a single-objective problem. Our investigations showed that the loss landscape is rugged and steep, and we usually have 3–5 parameters to optimize. We tried several derivative-free optimization algorithms like the <a href="https://link.springer.com/article/10.1007/s10589-010-9329-3">Nelder-Mead simplex algorithm</a> and <a href="https://arxiv.org/abs/1807.02811">Bayesian optimization</a>, and found that <a href="https://www.egr.msu.edu/~kdeb/papers/c2014022.pdf">U-NSGA-III</a> worked the best.</p><p>After a sufficiently low loss is achieved, the model is deployed to production with the weight factors found. We run this optimization as frequently as every day to adapt to changing traffic patterns. This approach also allows us the flexibility to respond to business requirements at a rapid pace. 
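<p>To make the tuning step concrete, here is a heavily simplified sketch that fits a single weight factor by backtesting on RCT rows until the achieved paywall rate matches a target. The data, scoring, and grid search are hypothetical stand-ins; the production system tunes several weights at once with U-NSGA-III:</p>

```python
# Illustrative sketch: tune one weight factor delta so the backtested
# paywall rate matches a business target (squared-error loss, as in Eq. 2).
def backtest_paywall_rate(delta, rct_rows):
    """Fraction of requests where the policy would show a paywall."""
    shown = 0
    for subs_lift, eng_lift in rct_rows:
        # Weighted objective for each action: paywall vs. allow access.
        if delta * subs_lift > (1 - delta) * eng_lift:
            shown += 1
    return shown / len(rct_rows)

def tune_delta(rct_rows, target_rate, steps=101):
    """Grid-search delta to minimize (achieved - target)^2."""
    grid = [i / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda d: (backtest_paywall_rate(d, rct_rows) - target_rate) ** 2)

rows = [(0.05, 0.5), (0.2, 0.1), (0.1, 0.4), (0.3, 0.05)]  # toy (subs, eng) lifts
best = tune_delta(rows, target_rate=0.5)
```

<p>Because the loss is evaluated purely by replaying a policy over logged RCT data, the same loop can be re-run whenever targets change, which is what makes daily retuning cheap.</p>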
Any changes typically involve a simple edit to our configuration files to reflect the modified targets, and the tuning of weight factors is triggered right after.</p><p>To anticipate the impact of weight factor changes on outcomes such as subscription rate, we utilize <a href="https://arxiv.org/abs/2106.07695">Inverse Probability Weighting (IPW)</a>, specifically Hájek estimation, similar to our <a href="https://open.nytimes.com/how-the-new-york-times-uses-machine-learning-to-make-its-paywall-smarter-e5771d5f46f8">previous work</a>. This estimation also helps us inform our stakeholders about the expected impact of changing any business constraints, such as the paywall rate.</p><p>The training of the supervised predictors and the tuning of the weight factors operate in a control flow as shown in Figure 3. We only deploy a new model if the optimization loss for respecting business constraints is sufficiently low. Otherwise, our team gets alerted while the previously trained model remains in production and continues to serve traffic.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yhONASebsHkbv33sEfdY2g.png" /><figcaption>Figure 3: A pictorial representation of the control flow for training supervised predictors and tuning the weight factors to respect business constraints. This process is executed on a schedule every day so that the algorithm continuously adapts to changing traffic patterns and maintains business constraints.</figcaption></figure><h3><strong>Performance measurement</strong></h3><p>To ensure that our productionized models are performant, we compare them against our constantly running RCT. This helps us understand how much better the model’s personalized policy performs than a purely random one.</p><p>Say we are operating with three KPIs. 
From the RCT data, we can construct a plane in the 3-dimensional KPI space, with its extreme points being the averages of the data points where only a specific action was taken (registration wall, paywall, or allowed access). Any intermediate percentages of actions would correspond to a point on this plane. If the model is better than the randomized policy, then, in this space of objectives, the model point lies on the side of the RCT plane where the objectives are increasing. This is schematically shown in Figure 4. The dotted red lines help us define the performance improvement for each KPI over the random policy. As we vary the weight factors to adjust for business constraints, the model point moves along the <a href="https://en.wikipedia.org/wiki/Pareto_front">Pareto front</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*V3NMrJDAvHTDTEjWjnAIdA.png" /><figcaption>Figure 4: A schematic representation for comparing model performance against the Randomized Control Trial when we are considering three competing KPIs (Key Performance Indicators). Examples of such KPIs include subscription rate, registration rate, and engagement. The model point lies “above” the RCT surface, thus indicating that its policy is better than a purely random policy. While holding any two KPIs constant, the model does better in terms of the third KPI.</figcaption></figure><p>We also conduct A/B tests to validate any improvements or changes we make to the models against their older counterparts.</p><h3><strong>Conclusion</strong></h3><p>The New York Times registration wall and paywall are now driven by dynamic real-time algorithms. These algorithms are inherently causal and learn the efficacy of these walls based on a variety of factors using Randomized Control Trial (RCT) data. They are customized to balance the tradeoff between multiple KPIs, while also providing explicit control over business constraints. 
As a result, these models have achieved a boost in subscription rate and registration rate while maintaining the rates at which registration walls and paywalls are shown.</p><p><a href="https://www.linkedin.com/in/rsupekar/"><em>Rohit Supekar</em></a><em> is a Lead Machine Learning Scientist at The New York Times, focusing on algorithmic targeting for access and messaging systems. He is passionate about developing and deploying ML solutions, drawing on his doctoral research in applied mathematics and scientific machine learning.</em></p><p><em>Many thanks to the Algorithmic Targeting, Machine Learning Platform, and Meter teams for their invaluable contributions to this collaborative effort.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5f23a7b24ff4" width="1" height="1" alt=""><hr><p><a href="https://open.nytimes.com/scaling-subscriptions-at-the-new-york-times-with-real-time-causal-machine-learning-5f23a7b24ff4">Scaling Subscriptions at The New York Times with Real-Time Causal Machine Learning</a> was originally published in <a href="https://open.nytimes.com">NYT Open</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The New York Times Games’ Path to Dark Mode]]></title>
            <link>https://open.nytimes.com/the-new-york-times-games-path-to-dark-mode-345dfe464e1a?source=rss----51e1d1745b32---4</link>
            <guid isPermaLink="false">https://medium.com/p/345dfe464e1a</guid>
            <category><![CDATA[design]]></category>
            <category><![CDATA[games]]></category>
            <category><![CDATA[development]]></category>
            <category><![CDATA[dark-mode]]></category>
            <dc:creator><![CDATA[The NYT Open Team]]></dc:creator>
            <pubDate>Thu, 25 Sep 2025 15:25:52 GMT</pubDate>
            <atom:updated>2025-09-25T15:24:30.929Z</atom:updated>
            <content:encoded><![CDATA[<h4><em>How we designed a deceptively complex feature</em></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2IrBHYTCuNopOZNZI4KYnA.jpeg" /><figcaption>Illustration by Alessandro Gottardo</figcaption></figure><p><strong>By Joel Urena, Jenna Kim, Kenneth Ofosu, Shaka Clark, Raven Adaramola, Dylan Campbell</strong></p><p>At least once a day, our players, particularly those who enjoy playing at night, requested a Dark Mode feature for The New York Times Games. The bright screens were negatively impacting their satisfaction and hindering accessibility. Our players often wondered why this seemingly simple feature — <em>just a switch from white to black</em> — took so long to deliver.</p><p>People may think designing for Dark Mode is about inverting colors from white to black, a point players often cited when asking for it. But designing and implementing Dark Mode involves countless small decisions. Not only do we have to consider color accessibility, we also have to make intentional color decisions to preserve the brand of each game and of The New York Times Games.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hsYW6kpGTSohh2-EWpTpyQ.png" /><figcaption><em>A compilation of negative Games app reviews that reference dark mode</em></figcaption></figure><h3>Primer: Design Systems &amp; Feature Discovery</h3><p>In the Spring of 2023, The New York Times Games was evolving quickly and we found ourselves with a user experience that was vastly different depending on which game you played and where you played it. We began to rectify this problem with the development of a Design Systems strategy. Like most other tech orgs, we hypothesized that if we began approaching product design and front-end development systematically, surely that would orient us toward a more consistent product experience, right? 
The answer: yes, eventually.</p><p>From a business and user impact perspective, the clearest Design Systems opportunity to prioritize first was Dark Mode. However, given the maturity of The New York Times Games digital product, the operational complexity we were tackling was massive. At the time, we had eight live games spread across three surfaces: The New York Times Games apps, The New York Times news app and the web. We also had ancillary experiences for some of these games in the form of our companion articles (e.g., <a href="https://www.nytimes.com/spotlight/spelling-bee-forum">Spelling Bee Forum</a>, <a href="https://www.nytimes.com/spotlight/daily-crossword-column">Daily Crossword’s Wordplay column</a>).</p><p>What we thought would be a simple color switch became an extensive exercise in paying down years of design debt. We discovered that each game had its own distinct user experience, with different fonts, color palettes, and component styles. Implementing a consistent Dark Mode across everything meant we couldn’t just add a new theme; we had to audit every screen and standardize the design from the ground up. This complexity forced us to make a critical decision: should we spend years paying down this debt, or could we find a way to scope Dark Mode in a way that delivered immediate value to our players? Our solution was to prioritize the user-facing experience over a complete systemic overhaul, giving us a path to delivery while continuing to chip away at the underlying issues.</p><p>We made the strategic choice to restrict Dark Mode to the Games app, which allowed us to narrow our technical and design focus to typical mobile and tablet display sizes. This decision also served to distinguish the Games app experience from our other platforms, aligning with our long-standing objective of establishing it as the premier destination for playing our games.</p><p>We then began unpacking the remaining scope. 
Product managers developed a visual to help cross-functional teams get a sense of what work needed to be done to get us to ship our first Dark Mode experience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*di6wTNBmhRlFkqQAr89D-Q.png" /><figcaption><em>Path to Dark Mode In Apps</em></figcaption></figure><p>Once we understood the work ahead, our next step was to pitch the project to Games leadership for a spot on the roadmap. We publish our games using a hybrid strategy — our apps are a native shell built around our web-based games — which meant we needed to align our web and app teams. Given the competing priorities we faced on the Games App roadmap, this was a hard sell. We ultimately struck a balance between the two teams by aligning on a UX that leveraged our hybrid model to give users the maximum amount of flexibility when determining their theme preferences. Just like with Wordle, every game would have its own Dark Mode setting.</p><h3>Designing the System</h3><p>Now that it was approved, we wondered, “What are the steps to actually get to Dark Mode on Games? How does engineering come into play as designers are exploring?” We decided to create a strategic chart outlining the high-level steps to the feature: <strong>Alignment, Foundation, Exploration, Integration</strong>. There were still unanswered questions, such as whether to launch all the Dark Mode features at once or to keep the scope narrow by launching it for each game individually. Creating this outline of the design strategy helped us see that we needed to launch Dark Mode for each game individually, breaking the work down to reduce the project’s complexity; unchecked complexity is a common pitfall when working on design systems. 
This approach also mitigated risk as it allowed us to go through visual quality assurance for each game.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eZRXcJRgV4iMaA4D4i0Umg.png" /><figcaption><em>Product strategy diagram</em></figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*o9FO4w6AckmpZks11l--9w.png" /></figure><p>To ensure <strong>Alignment</strong>, we presented to the wider team the importance of investing in design tokens. Because the wider team was so accustomed to using hex values for colors, it was crucial to change the way designers and engineers work with color.</p><p>For <strong>Foundation</strong> work, we conducted an audit of all game surfaces to identify misalignments and other discoveries. The New York Times Games product expanded significantly from its initial focus on The New York Times Crossword to include over eight distinct games. This growth, however, occurred without a unified design system. Consequently, each game surface utilized unique hex values, even when visual similarities were present. This investigation again proved the importance of design systems. To address this problem, the hex values used within our products were assembled into primitive (base) tokens, which would later be used to create semantic tokens: more intricate tokens intended for specific use cases. 
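<p>To illustrate the relationship between primitive and semantic tokens, here is a small sketch. The token names and hex values are hypothetical, not the actual Games palette; in our production design system, tokens are expressed as CSS variables:</p>

```python
# Hypothetical sketch: primitive (base) tokens hold raw hex values, while
# semantic tokens map a use case to a primitive, per theme.
PRIMITIVE = {
    "gray-100": "#F8F8F8",
    "gray-900": "#121212",
    "yellow-500": "#F7DA21",
}

SEMANTIC = {
    "light": {"surface.background": "gray-100", "brand.spelling-bee": "yellow-500"},
    "dark": {"surface.background": "gray-900", "brand.spelling-bee": "yellow-500"},
}

def resolve(theme, token):
    """Resolve a semantic token to its hex value for the given theme."""
    return PRIMITIVE[SEMANTIC[theme][token]]
```

<p>Because designers and engineers reference the semantic name rather than the hex value, adding a dark theme becomes a remapping exercise instead of a screen-by-screen color hunt.</p>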
To help other designers efficiently discover the colors they need in the future, and recognizing the complexity of our surfaces, we decided to establish three separate semantic color libraries: Games Home Surfaces, Games Brand, and Gameplay Color.</p><ul><li><em>Games Home colors</em> are general, foundational colors primarily used for utilitarian purposes for text, background, and stroke.</li><li><em>Game Brand colors</em> consist of design tokens used for brand purposes, such as Spelling Bee Yellow, Connections Purple, The Mini Blue, etc.</li><li><em>Gameplay Color</em> tokens are color tokens used for gameboards and game interactions.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Rd4eTzPLsAVUyGsUyY3kIw.png" /></figure><p>When the time came for diving into <strong>Exploration</strong>, we approached each game as a separate project. We held each other accountable by reviewing each other’s design tokens. We discussed and consulted with one another on names for alignment — beginning the development of a ‘shared language’ for the system that we could leverage cross-functionally. We put strong emphasis on <em>playability</em> in gameplay through multiple rounds of prototypes to make sure the game experience in light mode feels the same as in dark mode. For example, tiles within Connections have a very distinct beige color and do pass the accessibility test on a dark black background. 
However, it can feel overly bright on a dark background, so we adjusted the saturation of the beige to have a consistent game experience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*a1bT8n8qu-ltL9a3Uehf3A.png" /></figure><p>As we were getting close to handing off the designs to engineers, we also implemented a way for users to switch preferences within Settings, since not all users want Dark Mode at night, and some prefer it only for certain games. We wanted to deliver a flexible game experience where users can have a choice and be in control of what mode they want for each game.</p><h3>Putting the Pieces Together</h3><p>Our design system was engineered with the primary goal of ensuring consistency and scalability across both our internal and external products, beginning with our games. By providing a unified set of design tokens and reusable components, it bridges the gap between design and development, enabling our team to deliver a cohesive user experience. While still in development, the system has already proven to be a valuable tool for maintaining visual and functional consistency across our products. Collaboration has been at the heart of this effort, with several members of our engineering, product, and design functions working closely to align on requirements, standards and best practices.</p><p>The design system is built using React with CSS modules for styling, and it is documented in Storybook to ensure ease of use for all stakeholders. We leverage TypeScript for type safety, Vite as a build tool for fast and efficient development, and Playwright for visual regression testing to maintain quality. Design tokens, such as spacing, colors (used to implement dark mode) and typography, are implemented using CSS variables to ensure scalability and flexibility. To enforce code quality and consistency, we use Stylelint and ESLint, integrated with Git hooks via tools like lint-staged and husky. 
These tools ensure that every contribution adheres to our standards, making the system reliable and maintainable. We then created a React Provider and utilized React hooks, which enabled us to roll out the new palette per game and per surface rather than adding the new feature to all surfaces and games at once. Overall, our implementation strategy worked, allowing us to incrementally add color tokens without users noticing the difference. Many of the challenges we faced in the project related to earlier CSS and styling decisions in our codebase.</p><p>One of the most rewarding aspects of building the design system has been the collaborative process. Weekly and ad-hoc meetings with engineers, engineering managers, product managers, and designers allowed us to align on requirements, share knowledge, and roadmap future improvements. This has fostered a shared sense of ownership and ensured that the system meets the needs of all users. Looking ahead, our vision is for the design system to become the single source of truth for all design-related information, encompassing both design tokens and shared components. By continuing to iterate and expand, we aim to make the system an indispensable resource for our team and a foundation for scalable, high-quality product development.</p><h3>Takeaways</h3><p>Building Dark Mode for The New York Times Games revealed two significant wins. First, what started as a focused effort to improve workflow through design systems evolved into a powerful tool for consistency and scalability across our products. We’re now extending this foundational work to other elements like fonts, spacing, buttons and shared patterns.</p><p>Second, the data clearly shows the positive impact of Dark Mode on our players. Users who engage with Dark Mode exhibit higher engagement, playing and completing more puzzles. 
Our strategic decision to prioritize the user-facing experience allowed us to deliver immediate value while systematically addressing underlying design debt.</p><p>Our efforts have delivered a feature that directly addresses our players’ long-standing desire for a more comfortable nighttime solving experience, allowing them to enjoy their favorite puzzles well into the evening.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6CY9TY3d82kFNCDWeAe8nw.png" /></figure><p><em>Special thanks to:</em></p><p><em>Jennifer Scheerer &amp; Jessica Gerson for design support and leadership. Michael Beach &amp; Blake Spikestein for product support and leadership. William Frohn for motion design. Ian Hipschman, Ashby Thornwell, Lauren Yew, Goran Svorcan, Ihor Shamin, Katie Leavitt and the entire Games App Squad for their partnership. Emily Ngo for the Data Insights and Experimentation Support. Nick Ritenour, Coryn Brown and The New York Times Marketing team for their partnership</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=345dfe464e1a" width="1" height="1" alt=""><hr><p><a href="https://open.nytimes.com/the-new-york-times-games-path-to-dark-mode-345dfe464e1a">The New York Times Games’ Path to Dark Mode</a> was originally published in <a href="https://open.nytimes.com">NYT Open</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Using Provocations to Shake the Status Quo]]></title>
            <link>https://open.nytimes.com/using-provocations-to-shake-the-status-quo-7e884b866310?source=rss----51e1d1745b32---4</link>
            <guid isPermaLink="false">https://medium.com/p/7e884b866310</guid>
            <category><![CDATA[figma]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[visualization]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[ux-design]]></category>
            <dc:creator><![CDATA[The NYT Open Team]]></dc:creator>
            <pubDate>Tue, 22 Jul 2025 16:23:11 GMT</pubDate>
            <atom:updated>2025-07-22T16:23:11.612Z</atom:updated>
            <content:encoded><![CDATA[<h4>The bold approach NYT Cooking used to define a strategy for recipe cards.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hOaZxaNhjA2fAFt9Dz2H_Q.jpeg" /><figcaption>Illustration by Ben Denzer</figcaption></figure><p><strong>By Jayne Lee</strong></p><p>In The New York Times Cooking, “cards,” or the containers that represent our content, are the first impression of our brand. They’re the window into our recipes and users rely on them to evaluate and choose what to cook.</p><p>NYT Cooking users are particularly attracted to our appetizing food photos. In every research session, participants get distracted by a delicious-looking dish while answering the moderator’s questions. People tend to browse with their stomachs first and then see if the recipe specifications meet their personal criteria (for example: do I have enough time to cook this?).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*puGPGkTNj0c0zIUN" /></figure><p>A problem we encountered: our beautiful photography was muddied by the page’s gray background. The white card containers sitting on top of the gray page forced your eyes to focus on the container instead of the photo. The visual hierarchy was at odds with the preferred reading and browsing order of our audience: photo first, then recipe information.</p><p>Recipe information varies, so the card needs to afford that variability. Using containers forced us to keep our cards the same height, resulting in unnecessary whitespace. Since the cards were so tall, this limited the number of recipes a user could see at once, increasing the time users spent scanning for recipes.</p><p>We needed to highlight our photography, modernize the format and streamline the recipe evaluation process.</p><p>While designing solutions, I realized that I had more questions than answers. Answering each question would have significantly bloated the project scope. 
Some examples were: What order of recipe information is most helpful when deciding between recipes? Are recipe bylines important? Which presentation of ratings is more effective?</p><p>Because the priority was to update the format rather than improve comprehension, and to avoid bloating the scope of the project, I started with design <strong>provocations</strong> over design specs. Because cards are systematic, they appear across many surfaces of our product. Provocations gave me confidence in my design decisions without having to design every card for every context.</p><p>Provocations are high-fidelity designs that may look and feel like design specs, but only gesture at solving a problem. The solutions are rough concepts that haven’t been validated with research and data.</p><p>Temporarily solving my questions via provocations gave me enough information to focus on my main objective: highlight the photography and modernize the format. Think of it like Ikea’s approach to building furniture: you nail pieces together just enough for the furniture to be stable temporarily. This way you can make adjustments at the end if pieces are misaligned, before final assembly. I knew that I could return to properly solve the questions later on.</p><p>Provocations can inspire more creative and unexpected solutions because they liberate me from the constraints momentarily. To make the provocations useful and not just beautiful rough concepts, I came up with principles that mapped to each provocation.</p><p>Codifying the ideas into principles allows the team, especially non-designers, to understand the intention and goals of the new ideas.</p><p>Here is an example:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*rasbtN43QxxOm_2Q" /></figure><h3>Lead with visuals; use text conservatively</h3><ul><li><strong>Make our visuals do the heavy-lifting.<br></strong>Cooking is a sensorial experience. 
Highlighting our delicious photography and showing how easy our recipes are to make will inspire users and help them make decisions faster.</li><li><strong>Our visuals speak the loudest.<br></strong>Ensure visuals are displayed at a large enough scale to entice users, while still giving users the ability to choose. Text is supplementary to the visual, not in competition.</li></ul><p>These provocations served as a vision for how our cards should evolve over time. They allowed us to see where our cards were headed in the future and reverse-engineer small, conservative tweaks we could make to our product now. The expectation of the first iteration was not to move the needle, but to improve the overall product’s quality of life. We removed the old card containers, changed the page background color, grouped the recipe information together and rounded the corners on our photography.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ndMQY3qsDsJALpzS" /></figure><ul><li>The old card containers were a fixed height, regardless of the amount of content within them. This created unnecessary whitespace, made our cards taller, and required users to scroll more to see more recipes.</li><li>Removing those meant some recipe information would look disconnected from the card. Grouping all of the information together allows users to scan and make decisions faster.</li><li>We rounded the corners of our photography to signify clickability, now that the card container was no longer visually present. The rounded corners introduce an overall sense of warmth to the product and visually align with the other products within the NYT suite.</li></ul><p>Making these small tweaks put the focus more on our photography, captivating more stomachs and eyes. Our early data shows users can now find and cook new recipes at a much faster rate. 
And our research participants can stay happily distracted.</p><p>Provocations helped us define a direction for how our cards would evolve over time. It showed us what could be shipped now and how to continue building toward the future. Shifting to provocations allowed us to be more innovative and set a confident, clear strategy of where to go next.</p><p><em>Jayne Lee is a lead product designer for The New York Times Cooking app, driving innovation through design. She strives to create experiences that are both functional and beautiful.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7e884b866310" width="1" height="1" alt=""><hr><p><a href="https://open.nytimes.com/using-provocations-to-shake-the-status-quo-7e884b866310">Using Provocations to Shake the Status Quo</a> was originally published in <a href="https://open.nytimes.com">NYT Open</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Developing an Internal Tool for Our Puzzle Editor]]></title>
            <link>https://open.nytimes.com/developing-an-internal-tool-for-our-puzzle-editor-d5dc7a9a6464?source=rss----51e1d1745b32---4</link>
            <guid isPermaLink="false">https://medium.com/p/d5dc7a9a6464</guid>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[internal-tools]]></category>
            <category><![CDATA[puzzle]]></category>
            <category><![CDATA[code]]></category>
            <category><![CDATA[dashboard]]></category>
            <dc:creator><![CDATA[The NYT Open Team]]></dc:creator>
            <pubDate>Mon, 02 Jun 2025 15:54:44 GMT</pubDate>
            <atom:updated>2025-06-02T15:54:44.772Z</atom:updated>
            <content:encoded><![CDATA[<h4>How we developed a dashboard tool to help ease the workflow of managing puzzles for our Connections editor.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hHwPctJuLkCU-hW_nurB_g.jpeg" /><figcaption>Illustration by Su Yun Song</figcaption></figure><p><strong>By Shafik Quoraishee and Wyna Liu</strong></p><p>In the game Connections, every puzzle is a meticulously crafted challenge designed to captivate our audience and spark intellectual curiosity. Developing these puzzles can sometimes be a time-consuming and intricate task. Each puzzle requires planning, beginning with conceptualizing fresh categories and plausible misleads, followed by testing the combinations for balance and solvability, and concluding with refinement and publication-ready formatting. The process requires both creativity and quality control.</p><p><a href="https://en.wikipedia.org/wiki/Wyna_Liu">Wyna Liu</a>, the editor of Connections, has the responsibility of constructing and reviewing multiple puzzles spanning various dates, ensuring that each board remains consistent, fresh and challenging to our puzzle solvers. This is a challenging endeavor where there isn’t much room for error. In order to address the challenge, we developed the Connections Reference Dashboard — an internal tool aimed at streamlining data management while providing the puzzle editor with an intuitive, aesthetically pleasing interface that enhances the daily workflow.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*bleB6X_GIsHXBv7E" /></figure><p>There were two considerations in developing this tool. The first was the technical work of handling a dynamically changing payload of puzzle data. 
The second was design: we wanted a rich, visually resonant interface that was easy to navigate and gave a bit of the feel of the Connections game itself.</p><p>Therefore, everything from the board results to the search interface was designed with these ergonomics in mind. We wanted to give the tool a level of tactility reminiscent of the game, while reducing the number of manual steps needed to cross-reference both categories and words in individual boards.</p><p>The primary functionality that Wyna was after was the ability to quickly identify words that have appeared in previous Connections boards, as well as their contexts — the categories they were members of, and the other categories that belonged to that board. Connections is a puzzle built around novel ‘misleads’. A “mislead” in the game refers to the specific way words are presented or combined within a particular puzzle that might tempt a player to form an incorrect group. An example is the word “ARCHER”, which might mislead you to group it with “BOW”, “ARROW”, and “TARGET” (for “archery terms”), when its intended category is actually “TV SHOWS” with words like “LOST” and “FRASIER.” While words and categories can be repeated over time, the misleads ideally should not. With more than 700 puzzles, keeping track of what has run, on what date, and in what context has been a vital part of the construction workflow.</p><p>Previously, there was no comprehensive search view to assist in checking this easily, either in Google Sheets, where the game is constructed, or in our internal admin tool, where the game is published. 
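</p><p>As an illustration, such a lookup can be served by a simple index from each word to the boards and categories it has appeared in. The sketch below is hypothetical (the function name and payload shape are ours, not the dashboard’s):</p>

```python
from collections import defaultdict

def build_word_index(puzzles):
    """Map each word to every appearance: the date it ran, its category,
    and the other categories on that board. `puzzles` maps a date string
    to a list of categories, each with a title and its member words
    (a simplified stand-in for the real puzzle payload)."""
    index = defaultdict(list)
    for date, categories in puzzles.items():
        titles = [c["title"] for c in categories]
        for cat in categories:
            for word in cat["cards"]:
                index[word].append({
                    "date": date,
                    "category": cat["title"],
                    "board_categories": titles,
                })
    return index

# Toy board mirroring the "ARCHER" mislead described above.
puzzles = {
    "2024-06-01": [
        {"title": "TV SHOWS", "cards": ["ARCHER", "LOST", "FRASIER", "EUPHORIA"]},
        {"title": "ARCHERY TERMS", "cards": ["BOW", "ARROW", "TARGET", "QUIVER"]},
    ],
}
index = build_word_index(puzzles)
```

<p>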
The dashboard tool provides all the necessary information at a glance, which has been an enormous time saver, especially since searching for words that have previously run is done multiple times while constructing each Connections board.</p><h3><strong>The Back End</strong></h3><p>The backend, built with <a href="https://flask.palletsprojects.com/en/stable/">Flask</a>, serves as the cornerstone of the dashboard. It is responsible for fetching puzzle data from external sources, caching it locally, and ensuring that the data is available in real time for the frontend. One of the key components of our backend is the data caching mechanism, which minimizes unnecessary network calls by checking if data for a given date is already available. If it is not, the system fetches the data from the NYT Connections API and caches it on disk. This not only speeds up subsequent requests but also provides redundancy against network issues. For example, consider the core of the fetch function, which stores a successful response in both the in-memory and on-disk caches:</p><pre>if resp.status_code == 200:<br>    data = resp.json()<br>    if data and isinstance(data.get(&quot;categories&quot;), list):<br>        puzzle_obj = data<br>        ALL_PUZZLES[date_str] = puzzle_obj<br>        try:<br>            with open(cache_filename, &quot;w&quot;, encoding=&quot;utf-8&quot;) as f:<br>                json.dump(puzzle_obj, f, ensure_ascii=False, indent=2)<br>        except Exception as e:<br>            print(f&quot;[fetch_puzzle_and_cache] Write error {date_str}: {e}&quot;)<br>        return puzzle_obj</pre><p>In addition to this caching strategy, an auto-fetch mechanism has been implemented to preload upcoming puzzles. This ensures that even future, unpublished boards are available for planning and review. 
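</p><p>For illustration, the cache lookup that precedes that network call might be sketched as follows. The function name and cache layout here are assumptions for the example, not the production code:</p>

```python
import json
import os

ALL_PUZZLES = {}            # in-memory cache, keyed by date string
CACHE_DIR = "puzzle_cache"  # assumed on-disk cache location

def load_cached_puzzle(date_str):
    """Return the puzzle for `date_str` from memory or disk, or None on a miss."""
    # Fastest path: already loaded during this session.
    if date_str in ALL_PUZZLES:
        return ALL_PUZZLES[date_str]
    # Otherwise, a previous session may have written it to disk.
    cache_filename = os.path.join(CACHE_DIR, f"{date_str}.json")
    if os.path.exists(cache_filename):
        with open(cache_filename, encoding="utf-8") as f:
            puzzle_obj = json.load(f)
        ALL_PUZZLES[date_str] = puzzle_obj  # promote to the in-memory cache
        return puzzle_obj
    return None  # caller falls through to the network fetch
```

<p>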
The auto-fetch function calculates a date range that includes several weeks into the future and then iterates through that range to fetch and cache each puzzle.</p><pre>&lt;script setup&gt;<br>import { reactive, watch, onMounted } from &#39;vue&#39;;<br>import axios from &#39;axios&#39;;<br><br>const state = reactive({<br>  dateRange: { start: null, end: null }, // set by the date-range picker<br>  includeUnpublished: true,<br>  puzzles: [],<br>  loading: false,<br>});<br><br>const fetchPuzzles = async () =&gt; {<br>  state.loading = true;<br>  try {<br>    const { data } = await axios.get(&#39;api&#39;, {<br>      params: {<br>        start: state.dateRange.start,<br>        end: state.dateRange.end,<br>        includeUnpublished: state.includeUnpublished,<br>      },<br>    });<br>    state.puzzles = data.puzzleData;<br>  } catch (error) {<br>    console.error(&#39;Error fetching puzzles:&#39;, error);<br>  }<br>  state.loading = false;<br>};<br><br>onMounted(fetchPuzzles);<br><br>watch(<br>  [() =&gt; state.dateRange, () =&gt; state.includeUnpublished],<br>  fetchPuzzles<br>);<br>&lt;/script&gt;</pre><p>The above snippet is an example of how the front end communicates with the server component we set up through the data API. Updates occur seamlessly, and filtering parameters control whether the results should include unpublished boards.</p><h3><strong>The Front End</strong></h3><p>The layout is created using <a href="https://en.wikipedia.org/wiki/Vue.js">Vue.js</a>, employing a grid system to structure four vertical columns, each containing a heading and a list of related items. Each column is encapsulated as an individual Vue component or dynamically rendered from an array of category objects. Data management typically involves an array of objects, with each object containing a category label (e.g., “CONSUMED”, “ALSO”) and an associated list of terms.</p><p>Components such as &lt;CategoryColumn&gt; accept props like title and items, displaying each entry within styled containers. 
Conditional styling for special cells, such as highlighting “HORSE” in yellow, is managed through props or reactive state, signaling active or selected items.</p><p><a href="https://unocss.dev/">UnoCSS</a> is utilized for styling, ensuring uniformity and rapid development with concise, utility-first CSS classes. The grid layout leverages CSS Grid or Flexbox, providing clear borders, appropriate padding, and interactive hover states.</p><p>On the frontend, the choice of Vue.js significantly contributes to a clean, responsive design. The Vue-based interface is intuitive, adapting smoothly across multiple devices and screen sizes. Its reactive nature ensures immediate reflection of changes from the puzzle editor — such as adjustments in date ranges or toggling puzzle visibility — with no noticeable delay.</p><pre>&lt;template&gt;<br>  &lt;div class=&quot;my-4 p-4 border rounded-lg bg-white shadow&quot;&gt;<br>    &lt;input<br>      v-model=&quot;searchTerm&quot;<br>      type=&quot;text&quot;<br>      placeholder=&quot;Search for a word...&quot;<br>      class=&quot;w-full p-2 border rounded focus:outline-none focus:ring focus:border-blue-300&quot;<br>    /&gt;<br>    &lt;ul v-if=&quot;filteredWords.length&quot; class=&quot;mt-4 space-y-2&quot;&gt;<br>      &lt;li<br>        v-for=&quot;word in filteredWords&quot;<br>        :key=&quot;word&quot;<br>        @click=&quot;selectWord(word)&quot;<br>        class=&quot;cursor-pointer p-2 bg-blue-100 hover:bg-blue-200 rounded&quot;<br>      &gt;<br>        {{ word }} (found in {{ getFrequency(word) }} puzzles)<br>      &lt;/li&gt;<br>    &lt;/ul&gt;<br>    &lt;div v-else class=&quot;mt-4 text-gray-500&quot;&gt;No words match your search.&lt;/div&gt;<br>  &lt;/div&gt;<br>&lt;/template&gt;</pre><p>Vue’s built-in directives like <a href="https://vuejs.org/guide/components/v-model.html">v-model</a>, <a href="https://www.w3schools.com/vue/vue_v-if.php">v-if</a>,<a href="https://vuejs.org/guide/essentials/list"> v-for</a>, and 
<a href="https://vuejs.org/guide/essentials/event-handling">@click</a> dramatically simplify the process of building interactive components. In our live word search feature, these directives let us handle input binding, conditional rendering, list generation, and event handling — all in a few lines of clean, declarative markup. This approach reduces boilerplate and eliminates manual DOM manipulation, allowing the puzzle editor to interact with a responsive, real-time interface without the overhead of complex logic or state wiring.</p><h3><strong>Search Functionality</strong></h3><p>One advancement this dashboard brings to the puzzle review process is the ability to look up the construction history of puzzles: how often duplicate words have occurred. What was once a time-consuming procedure has been streamlined into an efficient workflow. Through the backend, we can look up duplicates using a word-frequency count. This lets us observe how often recurring words appear and how puzzles containing duplicate words are spaced over time. Take, for instance, ‘BALL’, which has occurred 23 times so far in the history of Connections.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Lv-et845769XCaxq" /></figure><p>We also created a convenient method of looking up the frequency count of all words used in Connections in descending order of their usage. This not only provides interesting construction information about Connections, but could also lead to interesting statistical analysis, such as separation distance between words over time, least frequently used words, and most frequently used words.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*xr-RAjBVp7510g69" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*pz4b-7OYJRzWBXfT" /></figure><p>The search itself is multifaceted, with several capabilities. 
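</p><p>Both the frequency counts above and the word search reduce to small passes over the cached boards. A hypothetical sketch (function names and payload shape are ours, not the production code):</p>

```python
import re
from collections import Counter

def word_frequencies(puzzles):
    """Count how often each word has appeared across all boards, most frequent first."""
    counts = Counter(
        word
        for categories in puzzles.values()
        for cat in categories
        for word in cat["cards"]
    )
    return counts.most_common()

def search_words(puzzles, pattern):
    """Regex search over all words, so substrings inside longer words also match."""
    rx = re.compile(pattern, re.IGNORECASE)
    return sorted({
        word
        for categories in puzzles.values()
        for cat in categories
        for word in cat["cards"]
        if rx.search(word)
    })

# Toy data: "BALL" repeats across two boards.
puzzles = {
    "2024-06-01": [{"title": "SPORTS EQUIPMENT", "cards": ["BALL", "BAT", "NET", "GLOVE"]}],
    "2024-06-08": [{"title": "ROUND THINGS", "cards": ["BALL", "GLOBE", "COIN", "WHEEL"]}],
}
```

<p>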
The user can search all the words in all the boards over the lifetime of Connections: a basic word search finds exact matches, and a regex search finds words contained within other words. Not only individual words can be searched for, but categories and category titles as well, which present snapshots of how words were previously used and in what combinations with other words in specific groupings. This can help with organizing future puzzles and provide insights into new category possibilities.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*ZgdW6v_QTbTNgxHFZ0w9NQ.gif" /></figure><h3><strong>Collaboration with Wyna</strong></h3><p>I was excited to collaborate with Wyna on this project, as it was an amazing experience working with editorial to bring useful tools to life. As a developer, it gave me the feeling that I’m building something positive that contributes to the Games Team’s mission of bringing joy to our users by supporting the creative process (in addition to working on the actual games).</p><p>To build the tool to be as useful to Wyna as possible, I had to make the board rapidly prototypable so that different combinations of systems could be swapped in and out. Sometimes the optimal product and tool isn’t known until you go through several iterations.</p><p>Unsurprisingly, the process was not only an incredible learning experience and an opportunity to work with Wyna, but also a ton of fun and a chance to exercise my skills in tool building; as a developer, I feel compelled to build systems that provide joy to others.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*NTSnOZ9kJlSGNzG4" /><figcaption>The original dashboard specs. 
Figma has nothing on pencil and paper.</figcaption></figure><h3><strong>Future Advancements</strong></h3><p>Looking ahead, the architecture of the Connections Reference Dashboard is designed to be future-proof and scalable. The modular approach — where the backend and frontend operate as separate yet integrated components — allows for easy enhancements and the addition of new features over time. This flexibility means that as the needs of the puzzle editor evolve or as new challenges arise, the dashboard can be updated and expanded without disrupting the existing workflow.</p><p>Happy puzzling!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d5dc7a9a6464" width="1" height="1" alt=""><hr><p><a href="https://open.nytimes.com/developing-an-internal-tool-for-our-puzzle-editor-d5dc7a9a6464">Developing an Internal Tool for Our Puzzle Editor</a> was originally published in <a href="https://open.nytimes.com">NYT Open</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How The New York Times Game Designer Heidi Erwin Creates Variety Puzzles]]></title>
            <link>https://open.nytimes.com/how-new-york-times-game-designer-heidi-erwin-creates-variety-puzzles-41b9bf0abb2b?source=rss----51e1d1745b32---4</link>
            <guid isPermaLink="false">https://medium.com/p/41b9bf0abb2b</guid>
            <category><![CDATA[creativity]]></category>
            <category><![CDATA[puzzle]]></category>
            <category><![CDATA[games]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[game-design]]></category>
            <dc:creator><![CDATA[The NYT Open Team]]></dc:creator>
            <pubDate>Tue, 27 May 2025 15:13:33 GMT</pubDate>
            <atom:updated>2025-05-28T15:26:48.366Z</atom:updated>
            <content:encoded><![CDATA[<h4>An in-depth look at the process of making the weekly “Brain Ticklers” for The New York Times Gameplay newsletter.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VXoc787GGktvYR_M-3FC_g.jpeg" /><figcaption>Illustration by Claire Merchlinsky</figcaption></figure><p><strong>By Heidi Erwin</strong></p><p>Most of my work as a Senior Game Designer at The New York Times is oriented around the design and development of larger puzzle games, but one unexpected and delightful part of my job for the past two years has been writing variety riddles for The New York Times Gameplay <a href="https://www.nytimes.com/newsletters/gameplay">newsletter</a>. As someone who loves to see the process behind the scenes of the media I enjoy, I wanted to share the experience of creating Brain Ticklers.</p><p><strong>What are Brain Ticklers?</strong></p><p>The name “Brain Ticklers” is inherited from Will Shortz and the way his variety puzzles have run over the years. Will’s variety puzzles are typically word puzzles, asking solvers to anagram phrases or build words from other words, for instance. In fact, I sometimes catch myself writing “Brian Tickler” by accident in my TODO list; I guess Brian’s a hidden fictional character associated with these puzzles who exists only in my mind.</p><p>Brain Ticklers are variety puzzles that could run in print (they do not require a digital interactive format to be solved), whether that puzzle asks the solver to use deductive logic, wordplay, lateral thinking, visual analysis, or something else. We run one each week in The New York Times Gameplay newsletter, as well as in other parts of the print paper. 
Here’s one that ran shortly after my puzzles began running in the newsletter in January of 2023:</p><p><strong>Move the following five letters into the grid below, such that you spell two words that form a phrase meaning “personal perspective.” Be creative!</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/720/0*o8UzQIPkeTQ3wpA1" /><figcaption>The answer to this one is at the end of this blog post!</figcaption></figure><p><strong>Process Overview</strong></p><p>The end-to-end process for creating a Brain Tickler generally involves the following:</p><ul><li>A source of inspiration</li><li>A first draft</li><li>Editing</li><li>A final graphic.</li></ul><p>More on each of those steps…</p><p><strong>Inspiration</strong></p><p>Inspiration could be anywhere! One of my favorite parts of writing these puzzles is that I feel encouraged to look at the world through different lenses when I’m out and about.</p><p>Inspiration could come from a sign on the street in the real world (that’s right gamers, I’m touching grass), a format restriction, a puzzle I play online — the world is full of puzzle potential.</p><p>Three contexts in which ideas for Brain Ticklers spawn for me are 1. Being out and about interacting with the world, 2. NYT Games team activities that prompt thinking about puzzles, and 3. other media (art, books, games, puzzles).</p><p>For example, here’s a Brain Tickler from 2024:</p><p><strong>What item might be seen with each of these five shapes?</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*T-J3n2mlFCnAovFw" /></figure><p>Solution: A bicycle. They’re all bike rack shapes!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ZfozppVYrkUleh8N" /></figure><p>This puzzle was inspired by the bike racks I was seeing on runs around Queens. 
I started photographing them for reference; you can tell that the puzzle graphic pulls pretty directly from these!</p><p>There are several opportunities to participate in new game ideation within The New York Times Games team. One of these is the game jams the team hosts, where people on the team put aside their other work for a couple of days to ideate, prototype, and pitch. At one point, some work friends and I pitched a Venn diagram puzzle game during a game jam, which did not turn into a full game, but did inspire this Brain Tickler (solution at end):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*q_wJ0Q3wZA0xjk6_" /></figure><p>I’ve also been inspired by the formats of other cool puzzles out in the world. In March 2023, we ran 5 puzzles for a “March Matchsticks” puzzle series (like March Madness). These puzzles riff off of the classic matchstick puzzle format. Here are two from our month of matches (answers at the end of this blog post):</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*raSNMBAc4fCszOGP" /></figure><p>In the puzzle below, 18 matches spell out the word “sled.” Rotate one thing to ‘make friends.’</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*6k1iGvkEz3KHovv-" /></figure><p><strong>Editing</strong></p><p>Every few weeks or so, when I have anywhere from three to eight new puzzles drafted, I hop on a call with our Puzzle Editor Sam Ezersky, where he plays the puzzles in real time.</p><p>Watching someone else solve a puzzle in real time is helpful in shaping it further: Sometimes it becomes immediately obvious that the setup of a puzzle is unclear if I observe Sam heading down an unintended path. 
But on top of that playtester feedback, it’s awesome to witness Sam’s puzzle brain in action.</p><p>A recent example: I proposed a Brain Tickler where solvers were asked to untangle letter sequences to reveal four phrases of the format “____ in ____.” Sam took one look at “TJIUMSTE” and said, “Just in time.” This was followed by seeing “WLAIAITIDNYG,” immediately thinking out loud, “Lying in wait? No! Lady in waiting!” and then rapid-fire recognizing “CEDHITIOERF” and “LSOANW” as “Editor in chief” and “Son in law.” Sometimes I wonder if Sam solving a puzzle really says anything about whether the puzzle is fair to the average solver, but fortunately he definitely also has puzzle design sensibilities tailored to a general audience. Reviewing puzzles with Sam is a moment to test the accessibility of a puzzle so we can adjust the framing, presentation, or even concept, as needed.</p><p>Here’s another recent puzzle that became more elegant during the editing process.</p><p>What I initially presented to Sam:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*dwLgoGCNpNZlQXiw" /></figure><p>After Sam’s feedback:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*hSnkT0eG5Cn6m6-B" /></figure><p>[Spoiler Warning] The solution is that Marie likes Juliet, because Marie likes words that start with a shortened month name: aprons (Apr), mayonnaise (May), jungles (Jun), and Juliet (Jul).</p><p>He noticed that April, May, and June were all consecutive, and offered up “Romeo and Juliet” as an alternate fourth pair to continue the consecutive months using the “Jul” in “Juliet” for July. This is the kind of small adjustment that makes a puzzle that’s mostly solid feel tighter and more elegant.</p><p>For this type of puzzle, I include an easter egg where the character’s name hints at the quality of words they like. 
In this case, Marie also begins with a shortened month name: Mar.</p><p>Here are two more puzzles in this format for you to solve.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*8srogUMgBEzfRXR1" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*g_SuIX7dBiXV8OvR" /></figure><p>The Brain Tickler graphics are fairly simple, and typically I make them in Figma. For some puzzles, being precise with graphics matters more:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wpK38mAn7VDbUTSO" /></figure><p>My younger self would be in awe at the opportunity to work with so many brilliant puzzle minds, all in one place. Working on these puzzles has made me a stronger designer and solver, and I feel gratitude for all of the thought that goes into puzzles across the team, and all of the thought solvers put into playing our games: Humans make our puzzles what they are.</p><p><em>Below is the answer to the puzzle from the start of this piece:</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/720/0*EsQZ1XcyN3g3VEfl" /></figure><p><em>And here are other answers to puzzles in this post:</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*nfN_Iq7Tie65f2o0" /></figure><p>Rotate the image 180 degrees. 
It now reads as “pals.”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*HIJN7LtpDU-R_j0_" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*uzpmYG3uZ-mYmEtb" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*RGuXnZFNAnHdv5nh" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=41b9bf0abb2b" width="1" height="1" alt=""><hr><p><a href="https://open.nytimes.com/how-new-york-times-game-designer-heidi-erwin-creates-variety-puzzles-41b9bf0abb2b">How The New York Times Game Designer Heidi Erwin Creates Variety Puzzles</a> was originally published in <a href="https://open.nytimes.com">NYT Open</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>