<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[84.51° - Medium]]></title>
        <description><![CDATA[A deep dive into our Data Science, Product &amp; Design, and Engineering (among other things) told by 84.51°ers themselves. - Medium]]></description>
        <link>https://medium.com/8451?source=rss----2ec5e2df7046---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>84.51° - Medium</title>
            <link>https://medium.com/8451?source=rss----2ec5e2df7046---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Tue, 07 Apr 2026 01:38:49 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/8451" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[600% Faster Stencil Components in Angular]]></title>
            <link>https://medium.com/8451/600-faster-stencil-components-in-angular-d442fb87babd?source=rss----2ec5e2df7046---4</link>
            <guid isPermaLink="false">https://medium.com/p/d442fb87babd</guid>
            <category><![CDATA[angular]]></category>
            <category><![CDATA[performance]]></category>
            <category><![CDATA[stenciljs]]></category>
            <category><![CDATA[web-components]]></category>
            <category><![CDATA[stencil]]></category>
            <dc:creator><![CDATA[Dan Bellinski]]></dc:creator>
            <pubDate>Sat, 10 Jul 2021 12:31:30 GMT</pubDate>
            <atom:updated>2021-07-10T12:43:28.484Z</atom:updated>
            <content:encoded><![CDATA[<p><em>By Dan Bellinski, Director, Software Engineering, 84.51°</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*pWIxbwz-SWxH_ZJj" /><figcaption>Photo by <a href="https://unsplash.com/@o5ky?utm_source=medium&amp;utm_medium=referral">Oscar Sutton</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Have you noticed your Stencil components taking a long time to render in Angular? We did: as our design system components built with Stencil were adopted more and more by our internal customers in their Angular applications, we increasingly heard that their pages felt slower. As mentioned in my previous article <a href="https://medium.com/8451/make-your-stenciljs-web-components-faster-by-using-shadow-dom-d010a9f0cdda">Make Your StencilJS Web Components Faster by Using Shadow DOM</a>, we switched our Stencil components to use the Shadow DOM and saw load times improve by roughly 400%. While this was a huge step forward, it wasn’t enough. Even after this change, our components in Angular still felt slow.</p><h3>Finding the Source</h3><p>With further research using the Chrome DevTools Performance tab, we noticed that change detection was getting triggered many, many times for a single Stencil component to render in Angular. Analyzing our code and doing some testing with the same component in React, we didn’t see multiple change cycles occurring there. So, why was change detection occurring so many times for the component in Angular but only once (as expected) in a React app?</p><p>Zone.js, the library behind Angular’s change detection, monitors an Angular app for changes. When it detects a change, it runs through a process to determine if the page needs to be re-rendered and will force a re-render if needed. 
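</p><p>As a simplified illustration of the idea (hypothetical code, not the actual Zone.js implementation): async tasks get wrapped so that every completed task can trigger a change-detection pass, which is why chatty components multiply change cycles quickly.</p>

```typescript
// Simplified, hypothetical sketch of Zone.js-style task wrapping.
// Real Zone.js monkey-patches browser APIs (timers, events, promises);
// here we just count how often "change detection" would run.
type Task = () => void;

class TinyZone {
  changeDetectionRuns = 0;

  // Wrap a task so that finishing it triggers a change-detection pass,
  // the way Angular re-checks views after each async task completes.
  wrap(task: Task): Task {
    return () => {
      task();
      this.changeDetectionRuns++;
    };
  }
}

const zone = new TinyZone();
const tasks: Task[] = [1, 2, 3].map((n) => zone.wrap(() => void n));
tasks.forEach((t) => t());
// Three wrapped tasks produce three change-detection passes; a web
// component firing many internal events multiplies this quickly.
```

<p>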
Something was triggering Zone.js to think the component kept changing as it was rendering, forcing many change cycles to occur, which significantly slowed down the time to render the component.</p><h3>Fixing the Slowness</h3><p>We discovered a very simple change that fixed this issue and made the render times of our Stencil components in Angular apps about 600% faster! We couldn’t believe it. After making a small update to our consumers’ Angular apps, their apps felt zippy again and everyone was happy. So… what did we change?</p><p>Ionic 4, which is also built on Stencil components (both come from the same company), had a note for Angular users who experienced slow load times: stop Zone.js from messing with your web components. There’s a simple flag you can set in your Angular app’s polyfills.ts file before Zone.js is loaded to tell Zone.js that it doesn’t need to run its own change detection on web components; after all, the Stencil components already have their own change detection built in.</p><pre>(window as any).__Zone_disable_customElements = true;</pre><p>Without this flag, both your web components <strong>and</strong> Zone.js run change detection, and together they trigger each other, causing many unnecessary change cycles and slowing down the time it takes to render the components.</p><p><a href="https://ionicframework.com/docs/troubleshooting/runtime#angular-change-detection">Here are the full instructions provided by Ionic</a>. Follow these same steps in the Angular apps that consume your Stencil components and you’re good to go!</p><h3>Moving Forward</h3><p>We were really skeptical about making this change because Zone.js is the backbone of Angular’s change detection. We assumed something would break by messing with Zone.js. After lots of testing and over six months in production, we’ve yet to find any issues due to making this change. 
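</p><p>For reference, a sketch of where that flag lives in practice. Ionic’s guide has the flag set in a separate file that is imported before <code>zone.js</code> itself, since in ES modules a bare statement placed above an import would still run after it; exact file names vary by Angular version, so treat this layout as an assumption:</p>

```typescript
// src/polyfills.ts (sketch). The flag lives in its own zone-flags.ts file,
// imported first, so it runs before zone.js despite ES-module import hoisting.
import './zone-flags';       // contains: (window as any).__Zone_disable_customElements = true;
import 'zone.js/dist/zone';  // included with Angular CLI apps by default
```

<p>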
Hurray!</p><p>We now provide instructions for Angular users of our design system to make these changes in their apps when using our components. If you’re building Stencil components and using them in Angular, make this change and enjoy the speed!</p><hr><p><a href="https://medium.com/8451/600-faster-stencil-components-in-angular-d442fb87babd">600% Faster Stencil Components in Angular</a> was originally published in <a href="https://medium.com/8451">84.51°</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The State of VR Heading Into 2021]]></title>
            <link>https://medium.com/8451/the-state-of-vr-heading-into-2021-73550bd59a64?source=rss----2ec5e2df7046---4</link>
            <guid isPermaLink="false">https://medium.com/p/73550bd59a64</guid>
            <category><![CDATA[voice-interfaces]]></category>
            <category><![CDATA[grocery-shopping]]></category>
            <category><![CDATA[oculus-quest]]></category>
            <category><![CDATA[ecommerce]]></category>
            <category><![CDATA[virtual-reality]]></category>
            <dc:creator><![CDATA[Mark Schauer]]></dc:creator>
            <pubDate>Wed, 16 Dec 2020 20:50:37 GMT</pubDate>
            <atom:updated>2020-12-16T20:50:37.233Z</atom:updated>
            <content:encoded><![CDATA[<p><em>By Mark Schauer, Director, R&amp;D Studio, 84.51°</em></p><p>With the winter holidays upon us, and the gift-buying season in full swing, many shoppers are looking to technology for gift ideas to help check off their holiday lists. And while PlayStation and Xbox garner most of the attention, VR, or virtual reality, continues to build a solid following. VR has been around in some form for decades. However, recent advancements have made VR much more attainable for the average consumer, bringing this technology into living rooms in increasing numbers.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*M7Cq-S7OPc25P_yYHpbNKA.png" /></figure><p><a href="https://medium.com/@8451marketing/virtual-reality-retail-fd9ab2d039cc">In my previous article</a>, I covered the release of the Oculus Quest. This product was introduced to consumers at a far lower price point than its predecessors and eliminated the need for powerful PCs and separate tracking stations. Truly a leap toward bringing VR to the masses, the release of the Quest, Quest 2, and similar second-generation systems left retailers asking themselves what opportunities might be available to offer VR shopping experiences to the growing number of users.</p><p>Modern technology evolves in exponential leaps. But 2020 brought with it its own set of unique challenges. So how has VR progressed over the past year? And, with it, how far along is the VR shopping experience?</p><p><strong>Virtual reality trends continue to develop</strong></p><p>The Quest and Quest 2 remain the standard bearers for consumer-grade VR, as expected. Both systems offer good hardware at a decent price. The only downside is Facebook’s ownership of the Quest and the required Oculus platform, which has left many developers and users with a bad taste in their mouths in light of recent privacy invasion allegations. 
Unfortunately, HTC (the makers of the Vive VR systems) failed to deliver a compelling standalone device. Users are left with one good option that has some tradeoffs.</p><p>The VR software ecosystem has continued to develop and become more robust. Popular VR games like Beat Saber have secured big name partnerships with well-known musical acts including Imagine Dragons and BTS. And while it’s not available for the Quest, the game Half-Life: Alyx introduced the successful Half-Life franchise to the VR ecosystem. And apps like Big Screen create virtual social environments where you can hang out with your friends and watch movies, TV, sports or other live events.</p><p><strong>The opportunities for retailers are still there</strong></p><p>In my last article, I wrote about the experiences that brands and retailers can create for their customers using VR. I still believe these opportunities exist. But for success, they’ll have to be better and faster than shopping in-store or using traditional e-commerce. So, what does this look like? Let me show you our treehouse:</p><figure><img alt="Relaxing cabin environment." 
src="https://cdn-images-1.medium.com/max/480/1*b1HoPCHb9__lS-9nY5GmBw.gif" /></figure><p>The treehouse provides a calm, clean, and relaxing environment for customers to do their shopping.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/1*bxZR0JkQli7A06SdOx6kPg.gif" /></figure><p>Shoppers can use voice commands such as “pizza” or “usuals” to search for specific products or show a selection of products often purchased in the past.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/1*oa0I9IK20ZgbIjFw4QmzDQ.gif" /></figure><p>Detailed product info allows shoppers to compare products (in 2D for now but in 3D as the library gets built out) and easily build their basket, finally completing their order through the Kroger mobile app.</p><p>While the VR industry gawks at hand tracking, what I find far more interesting is the combination of VR with natural language processing and voice input. We can be deliberate about when we use visuals, touch, sound, and voice, creating experiences that are streamlined as well as easier, faster, and better than any single channel alone or the real-life equivalent.</p><p><strong>Summary</strong></p><p>Despite all the leaps and bounds it’s made over the years, VR is still primarily a gaming platform. And right now, VR has to compete with the launch of new PlayStation and Xbox platforms. This is no easy task.</p><p>Should brands and retailers get into VR? My answer is yes. 3D assets required for VR have uses today in other e-commerce channels and will put you in a good position to accelerate future channels. Interactions explored in VR also translate well to other AR/XR solutions. The foundation is laid, the hardware has matured, and the software is compelling.</p><p>So, will you as a brand or retailer see immediate success integrating VR into the experience you offer your customers? 
There’s only one way to find out.</p><hr><p><a href="https://medium.com/8451/the-state-of-vr-heading-into-2021-73550bd59a64">The State of VR Heading Into 2021</a> was originally published in <a href="https://medium.com/8451">84.51°</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Boosting Data Value with Neural Embeddings]]></title>
            <link>https://medium.com/8451/boosting-data-value-with-neural-embeddings-e7c3e1af25de?source=rss----2ec5e2df7046---4</link>
            <guid isPermaLink="false">https://medium.com/p/e7c3e1af25de</guid>
            <category><![CDATA[neural-networks]]></category>
            <category><![CDATA[business]]></category>
            <category><![CDATA[embedding]]></category>
            <category><![CDATA[feature-engineering]]></category>
            <category><![CDATA[data-science]]></category>
            <dc:creator><![CDATA[Bernard Abayowa]]></dc:creator>
            <pubDate>Tue, 10 Nov 2020 18:34:05 GMT</pubDate>
            <atom:updated>2020-11-21T17:29:12.458Z</atom:updated>
            <content:encoded><![CDATA[<p><em>By Bernard Abayowa, 84.51° Director of Data Science</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*M0vlqAYocjCqq5IW" /><figcaption>Photo by <a href="https://unsplash.com/@urielsc26">Uriel SC</a> on <a href="https://unsplash.com/photos/11KDtiUWRq4">Unsplash</a></figcaption></figure><p><strong>Introduction</strong></p><p>The data available to businesses has grown significantly over the years. Large amounts of data are being generated daily from business operations, customer engagements, and external sources. This data needs to be analyzed to support strategic and operational decisions. However, there are some barriers that make turning data into insights challenging. These include data cleaning, curation from various sources, validation, and ultimately feature generation.</p><p>In this blog post, we will discuss challenges associated with feature generation from data and how neural embeddings can alleviate them and enable businesses to get more out of data for far less cost and effort.</p><p><strong>Feature Generation Barriers</strong></p><p>Data often contain hidden insights that are valuable to businesses but difficult and expensive to acquire. Machine learning algorithms can extract these insights and solve complex data-rich business problems. However, they require data to be transformed into features suitable for making predictions. The common approach businesses use for generating features is the manipulation of data with domain knowledge to create new variables. This approach is often referred to as feature engineering or handcrafting. Typically, feature engineering relies on the skills of the domain expert to understand which features to create and how. This step is manual and labor-intensive.</p><p>The sources and structure of data required to make contextually relevant and efficient models have grown significantly over the years. 
Handcrafting features from this data is time-consuming and demands the generation of very high-dimensional features, which can quickly become difficult to maintain. This problem is more pronounced in real-time scenarios where there may be limitations in computational and memory requirements to meet service-level agreements.</p><p>Moreover, there is quite a bit of domain knowledge that we cannot fully explain or put into hand-coded formulas or rules. This includes complex associations beyond our explicit understanding that are acquired through our senses or experience over time, as well as those beyond our awareness. Some of this knowledge can significantly boost the discriminatory and generative power of machine learning models. However, it is often ignored in feature engineering.</p><p>Furthermore, handcrafted features usually contain sensitive information. Privacy and security are more important than ever in business. This suggests the need for better ways of representing data to prevent the reconstruction of sensitive information, without a negative impact on the performance of machine learning models.</p><p>Many of the above challenges associated with feature engineering can be alleviated with Neural Embeddings.</p><p><strong>What are Neural Embeddings?</strong></p><p>Neural embedding is the transformation of high-dimensional data into a low-dimensional vector space that reflects the semantic or functional similarities of concepts in the data. It converts large texts of human-readable data and numbers into matrices, which are meaningless to humans but represent the original data in a form that is readily usable by machine algorithms. This embedding approach can also encode implicit information in the data that is difficult to explain. 
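</p><p>As a toy illustration of “similar concepts end up close together”: the vectors below are made up for the example (real embeddings are learned and typically have hundreds of dimensions), and closeness in the embedding space can be scored with cosine similarity:</p>

```typescript
// Toy, hand-written "embeddings": in a learned space, items that appear in
// similar contexts (milk, cheese) land closer together than unrelated ones.
const embeddings: Record<string, number[]> = {
  milk:   [0.9, 0.1, 0.0],
  cheese: [0.8, 0.2, 0.1],
  soap:   [0.0, 0.1, 0.9],
};

// Cosine similarity: dot product of the vectors divided by their magnitudes.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const milkCheese = cosineSimilarity(embeddings.milk, embeddings.cheese);
const milkSoap = cosineSimilarity(embeddings.milk, embeddings.soap);
// milkCheese is close to 1 (similar), milkSoap is close to 0 (dissimilar).
```

<p>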
They are more privacy-preserving and have better security properties compared to feature engineering.</p><p>The machinery for generating neural embeddings is deep or shallow neural networks: a general-purpose framework for learning representations directly from raw input with little to no feature engineering.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qv9ef_BbfYaBERQounWGug.png" /><figcaption><em>Neural embedding transforms data into a lower-dimensional vector space that reflects the semantic or functional similarities of concepts in the data</em></figcaption></figure><p><strong>Types of Neural Embeddings</strong></p><p>Neural embeddings learn features from data such that similar input will result in similar vectors in the embedding space. However, the semantic or functional similarity patterns learned by models vary based on the input data structure and the training process.</p><p>The input data structures include <em>unstructured data</em>, such as the sequence of words in a sentence, group of products in a transaction, clickstream, images, and sensor data; <em>interaction data</em> such as customers and their purchased products; <em>hierarchical data</em> such as taxonomies of products; <em>graphical data</em> such as social networks, and customer or product knowledge graphs. The most common type of data in business, <em>tabular data</em>, can also be transformed into lower-dimensional embeddings.</p><p>Neural embeddings are usually extracted from intermediate or final activations of neural network models trained in a supervised or self-supervised fashion.</p><p>To generate supervised embeddings, we train a neural network model to map input data to target labels. This is done in a similar manner to a traditional supervised learning approach such as gradient boosting. 
We start with a set of features relevant to our prediction task, and then train a neural network end-to-end for classification, ranking, or regression tasks.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vybmFhZJZ7hCdZ4c2nJDXQ.png" /><figcaption><em>A supervised model with combined customer and product input features and embeddings</em></figcaption></figure><p>Another way to generate supervised embeddings is the metric learning approach. Here, we start with a set of input entities we want to associate, such as customers and relevant products, and then train a neural network to generate separate embedding vectors for the entities such that their level of association can be computed with a similarity metric such as a dot product of the two vectors.</p><p>The metric learning approach is especially useful in real-time services where there may be limitations on memory and computational resources, and in few-shot classification scenarios where we have many classes, but few labels are available for each class.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5O7lctS1Oxrc3zA80erEzQ.png" /><figcaption><em>Metric learning approach with separate customer and product input features and output embeddings</em></figcaption></figure><p>For self-supervised embeddings, we use the data to predict itself. This could involve predicting the present, past, or future context of the data. A part of the data is used as input while other parts of the data are used as supervisory signal for the prediction. The self-supervision training could also involve encoding and decoding of the data input in its entirety as done in autoencoders.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wOVUdY4SkAl1mUbkjbseJA.png" /><figcaption><em>In self-supervised embeddings, we use the data to predict itself. 
For example, products or transaction embeddings can be generated by training a neural network to predict randomly masked products in a transaction</em></figcaption></figure><p><strong>Business Use Cases</strong></p><p>There are many applications of neural embeddings in business. We can group these applications into four categories: Implicit insights discovery, Segmentation and grouping, Search and retrieval, and Transfer learning.</p><p>Let’s look at each one of these use case categories.</p><p><strong>Implicit Insights Discovery</strong></p><p>Earlier we discussed the ability of neural embeddings to extract complex associations in data. These include implicit insights that are difficult or impossible to obtain through queries of structured or unstructured data. With neural embeddings, we can discover and visualize entity relationships and use that insight to solve a variety of business problems.</p><p>Some of the business solutions that can be developed from these insights include similarity or complements data products, which businesses can use to help customers discover offerings such as new or existing products, brands, or services that are relevant to those they like. The insights can also be used to generate personalized rankings of entities that are relevant to a customer in various business contexts.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1_P4EUVI1KEwHFeYvN7lOw.png" /><figcaption><em>Insights from product-to-product and customer-to-customer embedding similarities can be used to generate personalized product recommendations to customers</em></figcaption></figure><p>Furthermore, neural embeddings can be used to discover associations in complex networks. Businesses usually have many data points about customers and products. However, chances are that this data will contain missing links. 
Techniques such as knowledge graph embeddings can be used to find these links and generate 360 views of customers or products.</p><p><em>Business Use Cases: Product and Service Recommendations for Customers, Complementary Products, Customer Marketing, Product and Customer Knowledge Graphs, Similar Entities e.g. Products, Customers, Content, Brands or Services</em></p><p><strong>Segmentation and Grouping</strong></p><p>Segmentation and grouping techniques are used in business to gain insights into the market landscape and to improve customer experiences. It involves the association of business entities such as customers or products based on shared qualities or characteristics.</p><p>Similar to clustering and traditional supervised techniques with engineered features, neural embeddings can be used to build segmentations of business entities. However, the unique properties of embeddings enable the development of segmentations more easily when there are few labels available for supervised learning, or when data is multi-modal or unstructured.</p><p>As in the segmentation use case, neural embeddings can also be used to solve problems involving the analysis of groups. The density of entities in the embedding space can be analyzed over time to identify trending topics such as product types and customer preferences. Moreover, we can use techniques such as hierarchical embeddings to refine taxonomies or discover a better organization of taxonomies that reflect customer needs. 
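</p><p>A minimal sketch of how a downstream segmentation might consume embeddings. The centroids and vectors below are invented for illustration; in practice they would come from a clustering algorithm such as k-means run over real embeddings:</p>

```typescript
// Hypothetical sketch: assign an entity to the segment whose centroid is
// nearest in embedding space (Euclidean distance).
type Vec = number[];

const distance = (a: Vec, b: Vec): number =>
  Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));

function assignSegment(embedding: Vec, centroids: Record<string, Vec>): string {
  let best = "";
  let bestDist = Infinity;
  for (const [name, centroid] of Object.entries(centroids)) {
    const d = distance(embedding, centroid);
    if (d < bestDist) {
      bestDist = d;
      best = name;
    }
  }
  return best;
}

// Invented 2-D centroids standing in for learned cluster centers.
const centroids: Record<string, Vec> = {
  "budget-shoppers": [0.1, 0.9],
  "premium-shoppers": [0.9, 0.1],
};
const segment = assignSegment([0.2, 0.8], centroids);
// [0.2, 0.8] sits nearest the "budget-shoppers" centroid.
```

<p>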
Furthermore, embeddings can also be used to identify outliers in a group of entities based on their locations in the embedding space.</p><p><em>Business Use Cases: Product Segmentation, Customer Preferences and Profiles, Product Taxonomy Building and Refinement, Business and Customer Trends Analysis</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BQw7YiaTLYy7Fg27IB5E3g.png" /><figcaption><em>Embeddings form multi-dimensional clusters that can be manipulated to generate various segmentations such as customer and product segmentations from data</em></figcaption></figure><p><strong>Search and Retrieval</strong></p><p>Search interfaces are one of the most popular digital touchpoints customers use to find products and other business offerings. Businesses also use search and retrieval algorithms internally to drive information extraction systems for use in operations. However, most search and retrieval tools used in business rely on keyword-based linking of queries and data spaces or hand-crafted features for non-text search scenarios. Neural embeddings enable search and retrieval of entities with similar semantic meaning but different keywords or features by incorporating contextual information that can be difficult to handcraft with traditional techniques.</p><p>Furthermore, neural embeddings enable multimodal retrieval tasks like image-based product search or text-based image retrieval. Suppose you have an image of a product you would like to buy, but not the name. Multi-modal embedding search systems can be used to find products relevant to the query image. 
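</p><p>A minimal sketch of the retrieval step, assuming query and catalog embeddings already live in a shared space (all names and vectors below are invented for illustration):</p>

```typescript
// Hypothetical sketch of embedding-based retrieval: rank catalog items by
// dot product against a query embedding (from an image, text, or voice encoder).
type Vec = number[];

const dot = (a: Vec, b: Vec): number => a.reduce((s, v, i) => s + v * b[i], 0);

function topK(query: Vec, index: Map<string, Vec>, k: number): string[] {
  return Array.from(index.entries())
    .sort((x, y) => dot(query, y[1]) - dot(query, x[1])) // highest score first
    .slice(0, k)
    .map(([id]) => id);
}

// Imagine these item vectors came from a shared image+text embedding model.
const index = new Map<string, Vec>([
  ["red-sneaker", [0.9, 0.1]],
  ["blue-sneaker", [0.7, 0.3]],
  ["toaster", [0.0, 1.0]],
]);

const results = topK([1.0, 0.0], index, 2); // query: photo of a red sneaker
// results ranks the two sneakers ahead of the toaster.
```

<p>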
They integrate multiple data sources such as images and text or speech signals into a common embedding space where related items can be easily matched regardless of their input data modality.<br> <br><em>Business Use Cases: Semantic Business Data Query, Text-Based Search, Image-Based Search, Voice-Based Search</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QPicOzIi6dhFSr8v1AOg4Q.png" /><figcaption><em>Product images and text can be mapped into a common multi-modal embedding space to enable image-based retrieval of products. This is useful in scenarios where a customer has an image of a product but does not remember the description, or simply for search convenience</em></figcaption></figure><p><strong>Transfer Learning</strong></p><p>The training or interpretation of machine learning models often requires labeled examples, which are laborious and difficult to obtain in many business scenarios. Transfer learning is one of the major breakthroughs of neural networks that alleviates this burden. It involves using a model or its neural embeddings as a starting point for training another model on a related task.</p><p>With transfer learning, businesses can pretrain models from structured or unstructured data. The resulting models or neural embeddings can then be used for downstream tasks such as classification, regression or ranking in the following ways.</p><p><em>Fixed feature extraction</em>: In this scenario, embeddings are extracted from pretrained models. The pretrained embedding is then used independently or in combination with engineered features for training another model, which could be a neural network or traditional models like gradient boosting. This is the most popular approach to transfer learning in business. 
It is helpful when the training data for the new task is similar to the data used to train the source model.</p><p><em>Finetuning: </em>This approach involves retraining the source model to extract new embeddings or to solve a new task directly. Finetuning is useful when a large amount of training data is available but it differs greatly from the data used to train the source model. For example, a model trained for natural language generation can be finetuned for a classification task involving product descriptions. The benefit of this approach is that it speeds up model training time and convergence for the new task, thereby improving operational efficiency. <br> <br> <em>Business use cases: Numerous Marketing and Operational Tasks Involving Supervised Learning including Tabular Data Analysis, Image Analysis, and Text Analysis</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_0kv1wVpHZf93aLR3hHIvQ.png" /><figcaption><em>An illustration of transfer learning: knowledge from a source model can be transferred to many target models to solve many tasks</em></figcaption></figure><p><strong>Conclusion</strong></p><p>Neural Embeddings can help businesses extract hidden insights from their data that would typically require significant manual effort or expensive acquisition. They efficiently generate informative features required for machine learning with minimal feature engineering, and drive solutions for many business analysis tasks. In addition, neural embeddings can be used directly to solve business tasks involving semantic or functional similarity. 
Therefore, there is significant value in using neural embeddings in data-driven businesses.</p><hr><p><a href="https://medium.com/8451/boosting-data-value-with-neural-embeddings-e7c3e1af25de">Boosting Data Value with Neural Embeddings</a> was originally published in <a href="https://medium.com/8451">84.51°</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Scaling our Largest Angular Platform with Nx]]></title>
            <link>https://medium.com/8451/scaling-our-largest-angular-platform-with-nx-8aa70ee3619f?source=rss----2ec5e2df7046---4</link>
            <guid isPermaLink="false">https://medium.com/p/8aa70ee3619f</guid>
            <category><![CDATA[monorepo]]></category>
            <category><![CDATA[angular]]></category>
            <category><![CDATA[nx]]></category>
            <category><![CDATA[nrwl-nx]]></category>
            <category><![CDATA[front-end-development]]></category>
            <dc:creator><![CDATA[Tytus Planck]]></dc:creator>
            <pubDate>Tue, 22 Sep 2020 15:17:15 GMT</pubDate>
            <atom:updated>2020-09-22T15:17:15.858Z</atom:updated>
            <content:encoded><![CDATA[<h4>The story of how we scaled our Angular platform from 10 developers to 40 developers using a monorepo.</h4><p><em>By Tytus Planck, 84.51° Senior Software Engineer</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/720/1*Zbvfc-I_-4JlT0cSMNi7Xg.png" /><figcaption>Photo by <a href="https://www.8451.com/">84.51°</a> and <a href="https://nx.dev/">Nrwl</a>.</figcaption></figure><p>In the age of ever-expanding engineering organizations, building large platforms that require the sharing of assets across teams is paramount. Our engineering organization has rapidly expanded over the course of a few years, and this has pushed us to find tools that improve the developer experience at 84.51°. One of the most powerful tools we’ve used in our Angular development has been <a href="http://nx.dev">Nrwl’s Nx</a>.</p><p>Our journey started with a single development team of about 10 engineers working on a brand new Angular UI for our advertising business called 84.51° Prism. The repository and app were built using the basic Angular CLI and created a project structure most Angular developers would recognize (a single app). This worked great as the small team of developers built out an initial MVP (minimum viable product). As the aspirations for 84.51° Prism grew, so did the team of engineers working on the product. We needed an effective way to spin up new UI teams that were seamlessly integrated with the platform yet independent enough to manage releases separately. Our solution? A monorepo backed by Nx.</p><p>Nx is a tool built by Nrwl meant to simplify the organizational challenge of maintaining and sharing code between multiple front-end applications. 
In short, we chose Nx to allow our developers to easily share code while also retaining the ability to independently deploy their team’s individual application.</p><h3>Transitioning to Nx</h3><p>We knew our platform had big aspirations to grow, so we wanted to start leveraging Nx as quickly as we could. Using Nx required us to move almost every file, so we took a phased approach to limit the number of changes.</p><p>As a first step, we created a new repo with an Nx workspace, then temporarily placed our only Angular app into the app directory. This gave us an Nx workspace without a big impact on work already in progress. There were some challenges converting our code over, but Nrwl has a <a href="https://nx.dev/angular/migration/migration-angular">great guide and tools</a> to handle the migration.</p><p>Next, we broke out a small portion of our original app, our “release notes” page, into another app. It was an easy slice of our code that could be moved with relatively low risk to our current work. This transition gave us the ability to independently deploy our release notes app from the rest of our app, providing a major convenience to our developers.</p><p>After the transition of our release notes app, we had the knowledge to create a documented process for adding new apps and libs to our monorepo. We continued the process of splitting out code into apps and libs little by little to better leverage what Nx can offer. Since then, we’ve separated our one large app into 9 apps, 21 shared libs, and 42 domain-specific libs, fully taking advantage of Nx.</p><h4>Do I really need to complicate my project?</h4><p>So why should you commit time to moving to a monorepo? Why is Nx the right tool? Will it really save you time and developer efficiency? To be honest, it depends on multiple factors. Every project, team, and organization is different, and monorepos fix a small set of problems. 
Let’s do a deep dive into the pros and cons of using Nx and why it might be the right tool for you and your team!</p><h4>Benefits of using Nx</h4><ol><li>Easily allows developers to share code.</li><li>Creates contracts between teams that can be handled at the time of code review (pull request).</li><li>Enables teams to spin up new UIs faster with the jump-start of shared code and patterns.</li><li>Provides independent deployability while also allowing teams to easily integrate with one another.</li><li>Offers the freedom to use different JavaScript frameworks in the same repo.</li></ol><p>With Nx, the main driver is consistency and the sharing of code. It’s really difficult to drive and share best practices across an organization. We’ve found that Nx has allowed us to not only speed up development time by sharing code, but it’s also made it much easier for teams to learn from one another. Every time a team decides to use a new pattern or attempt a unique approach in a shared library, every other team is exposed to it. Not only does this lead to the spread of ideas, but when a PR needs reviews from a wide range of people, it properly vets the code. You are no longer siloed in a team full of developers who slowly start to think the same over time.</p><p>Using a monorepo has allowed us to easily share talent around our organization. As our teams agree on best practices and ways of writing code, it becomes much easier to float between them. This enables further cross-pollination in our organization. 
I hope to address and share how we’ve been able to effectively define and manage our best practices across all our UI teams in the monorepo in a future article.</p><h4>Drawbacks of using Nx</h4><ol><li>Management of CI/CD pipelines is significantly more complex.</li><li>External teams are now required to review PRs that touch shared code.</li><li>Cross-team collaboration on agreed-upon patterns <em>can be</em> difficult.</li></ol><p>Our main issue with using a monorepo is how you build and deploy your apps. If your organization doesn’t have the right resources, this can easily become burdensome. We’ve gone through multiple iterations on our CI/CD pipeline in order to make it more and more efficient. Luckily, Nx offers great tools to enable this via their affected commands. However, there is a learning curve compared to standing up a simple pipeline for one Angular app. We have also struggled when shared code is touched, which ends up requiring approvals from all of our teams. This can be massively frustrating for teams who are both waiting on review and being asked to review code. We’ve done our best to utilize GitHub’s code ownership features to minimize the impact.</p><h3>“A Challenger Appears!”</h3><p>There are pretty clear patterns on the web for sharing code — mainly the use of a package management system like NPM. One major benefit of those tools is the ability to easily manage package versions as code changes. For example, if I’m on Team A and I make a commit to shared code that includes a breaking change, Team B doesn’t necessarily have to take that change immediately. 
The issue with this approach is when you have so many shared libs/packages (we have <strong>21</strong> globally shared libs in our monorepo) between teams that managing those separate repos becomes extremely difficult.</p><p>With an Nx monorepo in this scenario, you would have to immediately address the breaking change by either fixing it for that other team or by branching the code. On one hand, a small change may mean “extra work” because you’re now running tests for another app, but, on the other hand, the codebases are forced to stay in sync, further promoting code consistency.</p><p>The package-management approach does offer organizations more flexibility, though, and breaks down inter-team dependencies. If the amount of sharable code is small or is universal to <strong>all </strong>teams, it might make sense to go with that approach.</p><h3>Conclusion</h3><p>Over the course of a few short years, our platform has grown from one Angular app to <strong>nine </strong>independently deployable apps. Nx has been an easy-to-use tool that enabled our organization to grow rapidly. We’ve been able to drive consistency in our user experience through the sharing of components and code. Our monorepo has made our app creation and team on-boarding significantly more efficient. Our shared code drives our teams to work more closely together and promotes the use of similar patterns. 
We’ve been able to accelerate the growth of our engineering team as well as our platform through the use of a monorepo architecture powered by Nrwl’s Nx.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8aa70ee3619f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/8451/scaling-our-largest-angular-platform-with-nx-8aa70ee3619f">Scaling our Largest Angular Platform with Nx</a> was originally published in <a href="https://medium.com/8451">84.51°</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Make Your StencilJS Web Components Faster by Using Shadow DOM]]></title>
            <link>https://medium.com/8451/make-your-stenciljs-web-components-faster-by-using-shadow-dom-d010a9f0cdda?source=rss----2ec5e2df7046---4</link>
            <guid isPermaLink="false">https://medium.com/p/d010a9f0cdda</guid>
            <category><![CDATA[design-systems]]></category>
            <category><![CDATA[shadow-dom]]></category>
            <category><![CDATA[meridian]]></category>
            <category><![CDATA[stenciljs]]></category>
            <category><![CDATA[web-components]]></category>
            <dc:creator><![CDATA[Dan Bellinski]]></dc:creator>
            <pubDate>Wed, 26 Aug 2020 17:09:42 GMT</pubDate>
            <atom:updated>2020-08-26T19:44:40.092Z</atom:updated>
            <content:encoded><![CDATA[<p><em>By Dan Bellinski, 84.51° Lead Software Engineer</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*2CWav9rCyNkrc6kt" /><figcaption>Photo by <a href="https://unsplash.com/@the_gerbs1?utm_source=medium&amp;utm_medium=referral">Jean Gerber</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>An application team using a design system should feel secure that the system’s components have been tested for performance, browser support, and accessibility. So when one of our applications was reporting slower-than-usual page loads on a template that was using our dropdown menu, we knew something was amiss.<br> <br>The dropdown menu was used in a table row to group actions. A few dozen instances of the dropdown menu caused the page load to slow to a crawl — rendering most of the UI unresponsive. Perplexed, we investigated our StencilJS web components, spent hours debugging, and eventually found the root cause: not using <a href="https://developers.google.com/web/fundamentals/web-components/shadowdom">Shadow DOM</a> on our <a href="https://stenciljs.com/">StencilJS</a> components was a big mistake.<br> <br>Since then, we’ve drastically improved the performance of our component and the page. In this article, we’re going to share why Shadow DOM is important for StencilJS web component performance and other benefits we’ve learned along the way.</p><h3>Performance Improvement 1: Rendering Slots</h3><p>To understand the issue, we first need to understand how StencilJS renders components, specifically components with slots. StencilJS lets you choose to render your components in two ways: with <a href="https://stenciljs.com/docs/styling">Shadow DOM <em>disabled</em> or <strong>enabled</strong></a>, and by default it is <em>disabled</em>. 
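In Stencil, that choice is made per component with the shadow flag on the component decorator. A minimal sketch of such a component (the tag name and markup here are illustrative, not one of our actual components):

```tsx
import { Component, h } from '@stencil/core';

@Component({
  tag: 'mds-card',   // illustrative tag name
  shadow: true,      // opt this component in to Shadow DOM rendering (default is false)
})
export class MdsCard {
  render() {
    // <slot> projects the user's light-DOM content into the card body
    return (
      <div class="card">
        <slot />
      </div>
    );
  }
}
```
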
Some of the choice depends on <a href="https://caniuse.com/#feat=shadowdomv1">which browsers you need to support</a> and if StencilJS’s Shadow DOM polyfills are “good enough” for your use case — e.g. IE11 doesn’t support CSS Custom Properties (variables) but there is a polyfill that provides some functionality.</p><p>If browser support is a question for Shadow DOM, why would you want to use it? Shadow DOM offers both DOM and style encapsulation, preventing anything outside of your component from interfering with it. This is really helpful to preserve the intended functionality of your components if you don’t know where or how they’ll be used.</p><p>When we first implemented StencilJS we thought the choice to use Shadow DOM was just going to impact how styles were rendered on our components and made our decision based on that. However, we’ve since learned that StencilJS will actually render your component mark-up in a completely different way when using slots with Shadow DOM <em>disabled</em> versus slots with Shadow DOM <strong>enabled</strong> — and that difference can have large performance implications.</p><p>The <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/slot">&lt;slot&gt;</a> element is actually part of the Shadow DOM specification, but StencilJS will still allow you to use &lt;slot&gt; without Shadow DOM. 
Following are 2 examples of how StencilJS will render content passed into a slot with Shadow DOM <em>disabled</em> versus <strong>enabled</strong>.</p><p>Take a simple “mds-card” component implementation:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/bf0fcec9c9799950a00c886ef29bb780/href">https://medium.com/media/bf0fcec9c9799950a00c886ef29bb780/href</a></iframe><p>And the following user provided usage of the component:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/433d00a2996868d949b4467b352e69d6/href">https://medium.com/media/433d00a2996868d949b4467b352e69d6/href</a></iframe><h4>Here is how StencilJS renders the &lt;slot&gt; contents with Shadow DOM <em>disabled</em>:</h4><p>1) First, the user provided mark-up is added to the DOM:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/433d00a2996868d949b4467b352e69d6/href">https://medium.com/media/433d00a2996868d949b4467b352e69d6/href</a></iframe><p>2) Next, the component is expanded in the DOM and the slotted content is <strong>moved</strong> in the DOM to the proper place for the final result:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/ccf1f6494602edc01a6a8380db600e60/href">https://medium.com/media/ccf1f6494602edc01a6a8380db600e60/href</a></iframe><p>3) The browser can now render the full picture of the component using the above DOM representation.</p><p>Note here that the slotted content was actually moved in the DOM and replaced the &lt;slot&gt; element — the final DOM that the browser renders no longer has a &lt;slot&gt; element in it.</p><h4>Alternatively, here is how StencilJS renders the &lt;slot&gt; contents with Shadow DOM enabled:</h4><p>1) First, the user provided mark-up is added to the DOM:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a 
href="https://medium.com/media/433d00a2996868d949b4467b352e69d6/href">https://medium.com/media/433d00a2996868d949b4467b352e69d6/href</a></iframe><p>2) Next, the component is expanded in the DOM. Notice that the &lt;span&gt; is not moved and stays where it is. The &lt;span&gt; is in what is called the “light DOM” — it isn’t rendered in its current state.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/1a45092fb74553edca53af5aec770a59/href">https://medium.com/media/1a45092fb74553edca53af5aec770a59/href</a></iframe><p>3) Lastly, the browser has to do some work. During the browser’s render step, the browser actually flattens the DOM to get it into the proper structure — but this change is only made for rendering! The DOM itself is not changed and remains as seen above.</p><p>The key difference between the two ways of rendering is how the DOM itself is handled. Changing the DOM is a very expensive operation — which is why frameworks like React and Angular have implemented their own techniques to avoid updating the DOM as much as possible (Virtual DOM and Incremental DOM, respectively). When StencilJS renders a &lt;slot&gt; with Shadow DOM <em>disabled</em>, it actually moves slotted content around in the DOM — <strong>and this is costly</strong>.</p><h3>Evaluating Performance</h3><p>Our dropdown menu leverages a &lt;slot&gt; to allow the user to pass their own menu items. We took this poorly performing dropdown menu component and ran some performance tests against it. We rendered 200 instances of the dropdown menu with 3 elements slotted into each dropdown menu. We performed the render of the 200 dropdown menus on the click of a button to have control over the results. 
This is what we found:</p><p>With Shadow DOM <em>disabled</em>, rendering all 200 dropdown menus took:</p><blockquote><strong>5000ms</strong></blockquote><p>With Shadow DOM <strong>enabled</strong>, rendering all 200 dropdown menus took:</p><blockquote><strong>1600ms</strong></blockquote><p>The component renders 3x faster when using Shadow DOM! This discovery was a clear sign we needed to switch our StencilJS components to use Shadow DOM.</p><p>We are now wrapping up the conversion of all 30 of our design system components over to use Shadow DOM. Through this process, we’ve learned a few more reasons that this was the right switch to make: “conditional slots” and “user-driven slot changes”.</p><h3>Performance Improvement 2: Conditional Slotting</h3><p>With Shadow DOM <em>disabled</em>, if a user provides slotted content to a component that doesn’t have a slot to put it in, the slotted content still renders on the page (at the top of the component). This makes it difficult or nearly impossible to conditionally render the user’s provided content, which means we always have to render it and can only hide it with CSS.</p><p>For example, here is our Tag component, which doesn’t have a &lt;slot&gt;, with content slotted into it and Shadow DOM <em>disabled</em>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/444/1*JJknEp-ZNyf1MWYxp2TfKg.png" /><figcaption>Non-ideal scenario — Component with Shadow DOM disabled renders slotted content even without a slot</figcaption></figure><p>With Shadow DOM <strong>enabled</strong>, if a user provides slotted content to a component that doesn’t have a &lt;slot&gt; to put it in, the slotted content is not rendered. This is because the browser flattens the DOM at render time, sees no place to put the content, and leaves it in the “light DOM”, which isn’t rendered. 
This opens up the door for conditionally rendering the user’s provided content, which gives us further opportunities to tune the performance of our components.</p><p>For example, here is our Tag component (which doesn’t have a slot) with content slotted into it and Shadow DOM <strong>enabled</strong>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/439/1*_-nXmZVLKZiXQEU9lp4YQg.png" /><figcaption>Ideal scenario — Component with Shadow DOM enabled does not render slotted content without a slot</figcaption></figure><p>We used this concept to remove our dropdown-menu’s “content” &lt;slot&gt; from the DOM when the dropdown menu wasn’t open. After making this change, we re-ran it through our performance test described above.</p><p>With Shadow DOM <strong>enabled</strong> <strong>+</strong> using a conditional slot, rendering all 200 dropdown menus took:</p><blockquote><strong>1200ms</strong></blockquote><p>We went from 5000ms to 1200ms to render 200 dropdown menus, now a 4x improvement! Things are starting to look better for this component.</p><h3>Performance Improvement 3: User-Driven Slot Changes</h3><p>When we had Shadow DOM <em>disabled</em> on our components, we had a tough discovery — our Angular users were slotting content into our components using some Angular constructs like *ngIf and *ngFor and pointing them at asynchronous variables (pulled from NgRx state). The asynchronous behavior caused some of the slotted content to arrive not at the component’s first render but shortly after it. Unfortunately, this slotted content did not appear on the page at all until the next time a render was forced on that component.</p><p>What we learned is that StencilJS does not listen for slotted-content changes, so we had to use <a href="https://developer.mozilla.org/en-US/docs/Web/API/MutationObserver">MutationObserver</a> to do our own listening. 
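A sketch of that kind of workaround (browser-only and illustrative; the rerender callback is a placeholder for whatever forces your component to re-render, not a specific Stencil API):

```typescript
// Illustrative workaround: watch a component's light DOM and force a
// re-render whenever slotted content is added or removed.
function observeSlottedContent(
  host: HTMLElement,
  rerender: () => void,
): MutationObserver {
  const observer = new MutationObserver((mutations) => {
    // childList mutations mean nodes were added or removed under the host,
    // which may include slotted content.
    if (mutations.some((m) => m.type === 'childList')) {
      rerender();
    }
  });
  observer.observe(host, { childList: true, subtree: true });
  return observer;
}
```
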
In short, the MutationObserver would look for node changes in the component and, if they were inside a &lt;slot&gt;, force a re-render of the component. This was a bit ugly and caused some components to render twice on the initial load when they really didn’t need to, resulting in extra time to load.</p><p>With Shadow DOM <strong>enabled</strong>, we actually get this listening for free. The top-level slotted content is observed, and when it changes, the component re-renders without our intervention. We can even utilize the onSlotchange callback to perform any required updates to our component when the top-level slotted content changes, e.g.:</p><pre>&lt;slot<br>  name="popover-trigger"<br>  onSlotchange={() =&gt; this.triggerSlotChanged()}&gt;<br>&lt;/slot&gt;</pre><h3>Greener Grasses</h3><p>We covered three improvements to our component’s performance that were realized by <strong>enabling</strong> Shadow DOM on our StencilJS component:</p><ol><li>Performance of Rendering Slots</li><li>Conditional Slotting</li><li>User-Driven Slot Changes</li></ol><p>We’re excited about the benefits we’ve gained and know they will lead to more performant user experiences in our applications leveraging our design system components. Aside from these benefits, there are plenty of articles on the web about the pros and cons of using Shadow DOM, mainly around style encapsulation. 
If you’re using StencilJS with Shadow DOM <em>disabled</em> on your components, we encourage you to take a look at what benefits you may gain from enabling it!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d010a9f0cdda" width="1" height="1" alt=""><hr><p><a href="https://medium.com/8451/make-your-stenciljs-web-components-faster-by-using-shadow-dom-d010a9f0cdda">Make Your StencilJS Web Components Faster by Using Shadow DOM</a> was originally published in <a href="https://medium.com/8451">84.51°</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Design Principles in Practice]]></title>
            <link>https://medium.com/8451/design-principles-in-practice-cd61829bdcb8?source=rss----2ec5e2df7046---4</link>
            <guid isPermaLink="false">https://medium.com/p/cd61829bdcb8</guid>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[design-thinking]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[design-process]]></category>
            <dc:creator><![CDATA[Ryan Merrill]]></dc:creator>
            <pubDate>Tue, 21 Jul 2020 17:59:58 GMT</pubDate>
            <atom:updated>2020-07-21T17:59:58.602Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UHQdtDQqIP2RHj60vbo6sA.png" /><figcaption>Photo by <a href="https://unsplash.com/@jessbaileydesigns?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText">Jess Bailey</a> on <a href="https://unsplash.com/?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p><em>By Ryan Merrill, 84.51° Experience Designer</em></p><p>Establishing a set of design principles is paramount to the success of a design system. The myriad questions and decisions that could stifle a system’s potential become easier to answer when the team is equipped with a strong set of principles.</p><p>Principles shore up a team’s confidence when faced with difficult design decisions. A design principle of “Make it Clear” vs. “Make it Comprehensive” may result in a user interface that has more white space and brighter colors.</p><p>Andrew Couldwell writes in his book <a href="https://designsystemfoundations.com/">Laying the Foundations</a>:</p><blockquote>“Your brand, design, and engineering principles are the mantra that guide everything you do. They are the driving force and inspiration. With every foundation, component, pattern, template, web page, or banner you design, and with each header and paragraph you write, ask yourself: “Does this align to our brand principles?”</blockquote><p>Design principles can help smooth over contentious disagreements in design critiques. A strong set of principles can put an end to debates much better than any mood or opinion ever could.</p><p>And as teams tackle increasingly difficult problems from their users, clients, and stakeholders, these same principles can justify why a team prioritized certain features over others.</p><p>Design and product principles should be carefully considered, opinionated and nuanced.</p><p>Don’t rush creating your principles. 
If your products have survived this long without them, it’s worth taking a slow and considered approach when writing them.</p><p>A principle such as “simple” or “easy to understand” doesn’t say much. All products strive to be easy to understand. However, a principle such as <a href="https://polaris.shopify.com/foundations/experience-values">Shopify’s “Empowering” is unique and tangible</a>:</p><blockquote>“We want people to feel like they can accomplish whatever they’re trying to do. Our experiences should give people confidence that they’re capable of achieving their goals, no matter their level of experience.”</blockquote><p>Despite the name, the initial draft of design or product principles should include a cross-section of disciplines across an organization. Principles written in a vacuum and communicated from a design team’s ivory tower are likely to be met with disdain.</p><p>Cross-discipline collaboration creates an environment where participants share experiences that others may have been ignorant of. And by creating the principles together, everyone has a vested interest in their success.</p><h4>Creating the Principles</h4><p>The following is adapted from a <a href="https://uxmag.com/articles/creating-a-shared-vision-that-works">wonderful 2012 UX Magazine article by Alan Colville</a> about establishing a product vision, but can also be applied to creating a set of principles.</p><p>Invite a representative group of Engineers, Designers, Product Managers, and stakeholders to a workshop to produce a rough draft of principles.</p><p>Separate the participants into cross-disciplinary groups of 3–5 people. 
A <a href="https://smashingideas.com/five-warm-ups-ignite-design-thinking-workshop/">quick warmup exercise</a> helps get everyone’s brains engaged and focused on the task.</p><p><strong>Persona Creation</strong><br>The UX designers in an organization should have a solid baseline of a product’s user needs through their research work. They’ll likely have a set of personas, but as part of this exercise, it’s important that each team works together to create new personas. This helps by getting everyone as familiar as possible with their persona.</p><p>Once all teams have finished creating a persona, designate someone from each team to explain it to the others. Teams should aim for a familiarity with their personas akin to a close friend or family member.</p><p><strong>Value Definitions</strong><br>Once all teams have shared their personas, it’s time to define the values each persona receives from a product. Encourage participants to imagine themselves as a persona and to list the qualities and feelings that matter to them when using your product.</p><p>Weed out system-quality properties such as responsive, direct, quick, accessible, secure, reliable, and safe. Again, we all want our products to be quick and accessible. Aim for words such as robust, connected, tailored, community, or personal.</p><p>Group common values together and give each group a one-to-three-word label that summarizes it. Then have each member of the group vote on their three most important values.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IkznD3T-BKDVLOWP0VG8Yg.png" /><figcaption>An example of grouped value statements and sentences produced from a design principles workshop.</figcaption></figure><p><strong>Value Statements</strong><br>Once everyone has finished voting, select the top 3–4 values and turn these into sentences. This helps explain the meaning of each value, which is easily lost with a single word. 
A value such as “familiar” could turn into “<strong>Familiar</strong> enough to understand and use right away.”</p><p>Once everyone has shared their sentences, the system team should take the raw material and distill it into a set of design and product principles.</p><h4>Communicating Principles</h4><p>Having a set of principles is great, but failing to regularly communicate them results in little more than a set of fancy words.</p><p>In <a href="https://www.yeseniaperezcruz.com/expressive-design-systems">Expressive Design Systems</a>, Yesenia Perez-Cruz makes a case for using design principles as a tool:</p><blockquote>“Put your principles into practice by coming up with tools that make it easier to apply them. For example, you could create a design review template that has the principles baked in, or scorecards for teams to measure themselves against the principles in project retrospectives.”</blockquote><p>Those working on the design system team will be intimately familiar with these principles. But this familiarity shouldn’t make them complacent about the need to consistently communicate these principles.</p><p>The systems team should act as stewards and continue to monitor and adjust an organization’s principles as their products evolve.</p><p>A harmonious and adopted set of principles prevents a product experience from becoming bifurcated and inconsistent. These principles, applied consistently and with care, can shape a suite of products into a cohesive family.</p><p>In the spirit of transparency and communication, here is a first draft of the design and product principles at 84.51°.</p><p><strong>Scale into Complexity</strong><br>Our data is complex, but that doesn’t mean our users’ experience has to be. Users should be able to understand our products’ content at a glance while being empowered to dive deep into the data to extract insights. 
When designing an interaction, err on the side of simplicity and add complex interactions when necessary.</p><p><strong>Trustworthy</strong><br>Our products should engender a sense of trust with our users by providing direct and actionable feedback when performing actions, especially when there is an error. They should promote a sense of safety within our applications, give users clear instructions when performing important actions, and make it easy to recover if things go wrong.</p><p><strong>Consistent</strong><br>Users should feel powerful and comfortable using our products, whether they are novices or experts. Our products should be part of a cohesive system that is working to help users achieve their goals.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cd61829bdcb8" width="1" height="1" alt=""><hr><p><a href="https://medium.com/8451/design-principles-in-practice-cd61829bdcb8">Design Principles in Practice</a> was originally published in <a href="https://medium.com/8451">84.51°</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Design System Adoption in the Enterprise]]></title>
            <link>https://medium.com/8451/design-system-adoption-in-the-enterprise-6165dfe10325?source=rss----2ec5e2df7046---4</link>
            <guid isPermaLink="false">https://medium.com/p/6165dfe10325</guid>
            <category><![CDATA[design-systems]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[design-thinking]]></category>
            <dc:creator><![CDATA[Ryan Merrill]]></dc:creator>
            <pubDate>Wed, 03 Jun 2020 14:44:02 GMT</pubDate>
            <atom:updated>2020-06-03T14:44:02.060Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CEIZfvOnWIp5nGJldHf1eQ.png" /><figcaption>Photo by <a href="https://unsplash.com/@markusspiske?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Markus Spiske</a> on <a href="https://unsplash.com/s/photos/sprout?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p><em>By Ryan Merrill, 84.51° Experience Designer</em></p><p>Congratulations on the first release of your design system! The hours spent sweating the pixel-perfect details, choosing a front-end framework, and building a distribution pipeline sure weren’t easy. But now the real work begins: having the system adopted by product teams.</p><p>Without that adoption, all of the blood, sweat, and tears that went into creating bespoke components will have been for nought. But changing the ways of working for your audience of designers, engineers, and product managers isn’t going to be easy—especially in an enterprise where processes and ceremonies have been ingrained in the culture.</p><p>Enterprise adoption requires both guerrilla support from product designers and engineers and a strong mandate from leadership to use the system. Without both, you’ll quickly find yourself swimming against one current or another.</p><h4>Measuring Adoption</h4><p>Tracking adoption shows stakeholders their initial investment in the team was well-founded and can act as a catalyst for more resources. One way to track adoption is through differing maturity levels, <a href="https://medium.com/eightshapes-llc/adopting-design-systems-71e599ff660a">proposed here by Nathan Curtis</a>. 
Tracking teams as they move up the stack from non-adopter to master holds them accountable to each other, encouraging a friendly competition to stay up-to-date.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FInGmmd-i0XVXl9n0OMSsQ.png" /><figcaption>Adapted from Nathan Curtis’, <a href="https://medium.com/eightshapes-llc/adopting-design-systems-71e599ff660a">Adopting Design Systems</a></figcaption></figure><p>It’s important to keep in mind that not all products consuming a design system will reach the mastery stage—they may be constrained by old or incompatible technology. Mapping a suite of products across an organization allows a design system team to prioritize where to put their training and outreach efforts.</p><h4>Support from the bottom up</h4><p>The designers, engineers, and product managers have the final say in how quickly a design system is adopted. Without their support, increasing adoption will be a long-fought battle.</p><blockquote>Communicating design system decisions akin to Moses descending from Mt. Sinai with his stone tablets is a quick way to build resentment.</blockquote><p>It’s imperative that the decisions communicated by the design system team have buy-in and input from members of the product teams. Communicating design system decisions akin to Moses descending from Mt. Sinai with his stone tablets is a quick way to build resentment. Don’t do this.</p><p>Include a subset of product team members in each component kick-off so everyone has their voice heard. Edge cases that may prevent a team from adopting a component will arise and can only strengthen its foundations.</p><p>The pixels and code of design systems are easy; it’s the people and relationships that are the most challenging and rewarding. As with any relationship, communication is key. As design system builders, it’s easy to get lost in the weeds and assume everyone has the context of the granular decisions that were made when building a component. 
Communicate to a fault. Run the risk of annoying your users about new features, modified guidelines, and deprecated components.</p><p>We created a number of regular outlets to share updates of our design system, <a href="https://medium.com/8451/meet-meridian-the-84-51-design-system-2b8dcbd20bf4">Meridian</a>:</p><ul><li><strong>Weekly Office Hours</strong>: The team holds office hours twice a week for design system users. It’s a place where users can directly communicate with the builders of the system and ask questions, share ideas, and propose new components.</li><li><strong>Regular Demos</strong>: Every six weeks the design system team demonstrates what components and guidelines were published. These demos also act as a forum for users to probe the team on why certain decisions were made and can inform what is next on the roadmap.</li><li><strong>Communication Channels</strong>: We use a dedicated channel in Teams to field asynchronous requests. Additional feedback and questions are posted to a dedicated and monitored forum or sent via email.</li></ul><h4>Support from the top down</h4><p>Getting the initial investment in a design system from enterprise leadership is no small feat. But it’s equally important that leadership communicates the need for product teams to use the system.</p><p>It’s too easy for product teams to become mired in the day-to-day work and de-prioritize design system adoption. Leadership needs to make a persistent and vocal push for adopting the design system across an entire suite of products.</p><h4>An extension of the product team</h4><p>Even with a groundswell of enthusiasm from those pushing pixels and with leaders espousing their support, design system adoption can still be slow-going. 
It might take a bit of handholding from the design system team to help combat some of this systems-averse thinking.</p><blockquote>The pixels and code of design systems are easy; it’s the people and relationships that are the most challenging and rewarding.</blockquote><p>The builders of the design system know the components, guidelines, and best practices better than anyone—putting them in a perfect place to act as guides. The system team can generate goodwill with product teams by pairing with engineers when tackling thorny problems or participating in design critiques. This symbiotic relationship helps everyone by giving product teams an extra set of hands and educating the systems team in that much-needed product context.</p><h4>This is a long play</h4><p>Design system work is difficult. Migrating an enterprise from siloed to systems thinking won’t happen overnight.</p><p>Creating a way for teams to track adoption can help an organization measure their design system investments and give struggling product teams the resources they need. The bulk of the work falls on the design system team to communicate clearly, effectively, and inclusively. That communication, coupled with a top-down push from leadership, will help sow the seeds for a burgeoning adoption.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6165dfe10325" width="1" height="1" alt=""><hr><p><a href="https://medium.com/8451/design-system-adoption-in-the-enterprise-6165dfe10325">Design System Adoption in the Enterprise</a> was originally published in <a href="https://medium.com/8451">84.51°</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Watch Out! Challenges Using Web Components in Angular and React]]></title>
            <link>https://medium.com/8451/watch-out-challenges-using-web-components-in-angular-and-react-c7259661d766?source=rss----2ec5e2df7046---4</link>
            <guid isPermaLink="false">https://medium.com/p/c7259661d766</guid>
            <category><![CDATA[web-components]]></category>
            <category><![CDATA[angularjs]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[stenciljs]]></category>
            <category><![CDATA[design-systems]]></category>
            <dc:creator><![CDATA[Dan Bellinski]]></dc:creator>
            <pubDate>Tue, 31 Mar 2020 18:38:07 GMT</pubDate>
            <atom:updated>2020-03-31T19:16:34.397Z</atom:updated>
<content:encoded><![CDATA[<p><em>By Dan Bellinski, 84.51° Application Developer</em></p><figure><img alt="Challenge climbing mountain" src="https://cdn-images-1.medium.com/max/1024/0*XXgk9bS0eIyEw8mC" /><figcaption>Photo by <a href="https://unsplash.com/@joshuaearle?utm_source=medium&amp;utm_medium=referral">Joshua Earle</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>84.51°’s design system, Meridian, is built using StencilJS. We covered why we chose StencilJS to create web components <a href="https://medium.com/8451/how-we-chose-to-build-our-design-system-using-stenciljs-web-components-4878c36743c5">in a previous article</a>. The web components built by StencilJS are used by both Angular and React applications in production. For the most part, our experience with web components has been very positive, but there have been a few snags along the way, which we’ll cover in this article. I’ll also share how we got around them at the end!</p><h3>Web Components in Angular</h3><p>Angular’s support for web components is pretty good — but not perfect. Angular has its own web component library (<a href="https://angular.io/guide/elements">Angular Elements</a>), so support in Angular is essential. With a line of code, you can include web components in your Angular app by adding the CUSTOM_ELEMENTS_SCHEMA to your App Module.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e77b8a28cbfebf81d4334dea6a62b18b/href">https://medium.com/media/e77b8a28cbfebf81d4334dea6a62b18b/href</a></iframe><p>One line of code and you’re off and running — great, right?! 
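In case the code embed above doesn’t render in your feed reader, the change looks roughly like the following sketch (the module and component names here are illustrative, not taken from Meridian):

```typescript
// app.module.ts — an illustrative sketch; names are placeholders for your own.
import { NgModule, CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule],
  bootstrap: [AppComponent],
  // The one line: tells Angular's template compiler to allow
  // unknown element names (your web components) instead of
  // raising template parse errors for them.
  schemas: [CUSTOM_ELEMENTS_SCHEMA],
})
export class AppModule {}
```

With this schema in place, templates in the module may use any custom element tag without the compiler complaining.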
Unfortunately, including the CUSTOM_ELEMENTS_SCHEMA became a sore subject with our developers.</p><p><strong>What is wrong with CUSTOM_ELEMENTS_SCHEMA?</strong></p><p>When you provide this schema to Angular, it will ignore any elements it doesn’t recognize (your web components) on build and serve and let the browser try to render them. What we came to realize is that Angular will ignore <strong><em>any</em></strong> element it doesn’t recognize, not just your web components. This resulted in a scenario, occurring twice, that cost a developer an afternoon.</p><ol><li>A developer, Jill, created an Angular component (e.g. &lt;demo-ng-component&gt;) intending it to be used in another module but, by accident, didn’t export it from the feature module that declared it. In short, this meant the component couldn’t be used by another module.</li><li>Jill added the demo-ng-component to another component’s template in a different module in order to use it.</li><li>Jill started the app and went to the page where she expected to see demo-ng-component, but nothing rendered.</li><li>Checking the console for errors, there were none! Jill spent a few hours confused, asked for help, and eventually we discovered that the demo-ng-component wasn’t exported.</li></ol><p>What was different about this scenario from normal Angular development? Normally when you try to use a component that isn’t properly provided, a quick attempt to build or run the app results in the error below:</p><pre>Uncaught Error: Template parse errors: 'demo-ng-component' is not a known element:</pre><pre>1. If 'demo-ng-component' is an Angular component, then verify that it is part of this module.</pre><pre>2. To allow any element add 'NO_ERRORS_SCHEMA' to the '@NgModule.schemas' of this component.</pre><p>When seeing <em>that</em> error, the developer immediately realizes they missed an import or export somewhere. This time, however, Angular didn’t throw any build-time or run-time errors. 
Angular didn’t recognize demo-ng-component, but because we had the CUSTOM_ELEMENTS_SCHEMA set, it was happy to ignore it.</p><p>Ideally, Angular would let you configure the CUSTOM_ELEMENTS_SCHEMA to recognize what is intended to be a web component (don’t throw errors) versus what is a misconfigured Angular component (throw errors). One possible way for Angular to implement that is a component prefix — i.e. only allow unknown elements starting with “mds-”. Unfortunately, that functionality doesn’t exist.</p><h3>Web Components in React</h3><p>React’s support for custom elements is just okay. The most painful gap we experienced was React’s inability to bind to custom events from web components.</p><p><strong>How to Bind to Custom Events in React?</strong></p><p>Rather than bind to the web component’s custom event in the template, like you would with a native React component:</p><pre>&lt;my-web-component onMyCustomEvent={respondToEvent}&gt;&lt;/my-web-component&gt;</pre><p>You instead have to use JavaScript to add an event listener to the web component to handle the custom event:</p><pre>document.querySelector('my-web-component').addEventListener('myCustomEvent', () =&gt; { /* handle event */ })</pre><p>While passable, this leads to ugly code by separating the event handling from the use of the component and puts more work on developers using the web component.</p><p><strong>Web Component Properties</strong></p><p>The other challenge we had was React’s recognition of web component properties. Let’s say we created a property on our web component called ‘iconOnly’. In Angular, we can use that property like so:</p><pre>&lt;my-web-component iconOnly=&quot;true&quot;&gt;&lt;/my-web-component&gt;</pre><p>In React, that same code fails to set the property correctly. 
Instead, React will only recognize the property in all lowercase, like so:</p><pre>&lt;my-web-component icononly=&quot;true&quot;&gt;&lt;/my-web-component&gt;</pre><p>To reduce confusion for our users and simplify our Design System documentation site, we ended up naming all of our web component properties in lowercase so the code usage is the same no matter what framework you’re using.</p><p><strong>Other Gaps</strong></p><p>There are a few other issues with React’s web component support that we haven’t run into. A great resource for identifying gaps in web component support for React, or any other modern JS framework, is <a href="https://custom-elements-everywhere.com/">Custom Elements Everywhere</a>. They have an automated suite of tests that verifies the level of support for the Custom Elements specifications.</p><h3>Conclusion — How We Solved for These Challenges</h3><p>None of the challenges we faced in Angular and React were showstoppers for using web components. I’d still recommend using them, as framework support for web components will only improve over time.</p><p>That being said, we were fortunate that StencilJS open-sourced their output target plugins for Angular and React shortly after we released Meridian. The output targets simply take your StencilJS web components and add a native Angular and React wrapper component around them. 
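Wiring up those output targets happens in the Stencil config; a rough sketch follows (the package name and proxy file paths are placeholders, and the option names reflect the @stencil/angular-output-target and @stencil/react-output-target plugins as we understand them, not our exact setup):

```typescript
// stencil.config.ts — an illustrative sketch, not our production config.
import { Config } from '@stencil/core';
import { angularOutputTarget } from '@stencil/angular-output-target';
import { reactOutputTarget } from '@stencil/react-output-target';

export const config: Config = {
  namespace: 'my-components',
  outputTargets: [
    { type: 'dist' },
    // Generates native Angular wrapper components, so consumers no
    // longer need CUSTOM_ELEMENTS_SCHEMA at all.
    angularOutputTarget({
      componentCorePackage: 'my-components',
      directivesProxyFile: '../angular-workspace/src/generated/proxies.ts',
    }),
    // Generates native React wrappers, so custom events bind like
    // ordinary React props instead of manual addEventListener calls.
    reactOutputTarget({
      componentCorePackage: 'my-components',
      proxiesFile: '../react-library/src/generated/components.ts',
    }),
  ],
};
```

Each build then emits framework-native components into the listed proxy files, which you publish alongside the core web components.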
These allow your web components to be used just like any other native Angular or React component, removing the need to use CUSTOM_ELEMENTS_SCHEMA (Angular) or to add manual event listeners with JavaScript (React).</p><p>After adding these new output targets and switching our users to the native components, we were met with gratitude from our consuming developers!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c7259661d766" width="1" height="1" alt=""><hr><p><a href="https://medium.com/8451/watch-out-challenges-using-web-components-in-angular-and-react-c7259661d766">Watch Out! Challenges Using Web Components in Angular and React</a> was originally published in <a href="https://medium.com/8451">84.51°</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Meet Meridian, the 84.51° Design System]]></title>
            <link>https://medium.com/8451/meet-meridian-the-84-51-design-system-2b8dcbd20bf4?source=rss----2ec5e2df7046---4</link>
            <guid isPermaLink="false">https://medium.com/p/2b8dcbd20bf4</guid>
            <category><![CDATA[product-and-design]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[design-systems]]></category>
            <dc:creator><![CDATA[Ryan Merrill]]></dc:creator>
            <pubDate>Thu, 12 Mar 2020 19:16:32 GMT</pubDate>
            <atom:updated>2020-03-12T19:16:32.668Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gF_e7BYApJTFtRntIspdKQ.png" /></figure><p><em>By Ryan Merrill, 84.51° Experience Designer</em></p><p>84.51° has been creating a suite of business-to-business products that empower Kroger and its consumer-packaged-goods partners to leverage data to make smarter decisions.</p><p>The products share similar audiences and users need to feel they are using a cohesive collection of products. To address this, we initially attempted to impose a standardized design language but found it stifled creativity and innovation.</p><p>While the intention was admirable, the rules and guidelines we explored limited how fast the products could be tested, improved, and released. In their infancy, the products needed room to breathe and grow without a strict ruleset weighing them down. We took a step back and decided to let each product find its unique voice to independently mature.</p><blockquote>We initially attempted to impose a standardized design language but found it stifled creativity and innovation.</blockquote><h4>Growing Up</h4><p>Now some two years later, both products are released and are maturing in both functionality and appearance. But that early stage of freedom gave way to an unintended but anticipated side effect.</p><p>The products resembled one another, akin to how one looks in a funhouse mirror: colors, fonts, and spacing were recognizable, but the distortions left users with a sense of unfamiliarity when jumping between applications. The products needed to feel like close siblings, but instead felt like distant cousins.</p><p>It was time to take a step back and examine what sort of experiences we value and how those can come to life through our brand and products. 
How could we begin to bring the products closer together, but retain the unique features developed in their infancy?</p><h4>Creating a System</h4><p>We knew that systems thinking would allow us to break down each product into common and shareable parts. We could choose what was working well for one product and see how that might work for the other.</p><p>This thinking kicked off a design system project at 84.51° meant to identify common design styles and user interface components with the hope of bringing the product experience closer together.</p><p>A design system helps teams create products and experiences that share design and code consistency across an organization. Teams can create standardized user experiences through documented design guidelines and principles. Product owners can leverage a shared component library to rapidly ideate, test, and learn from interactive prototypes. And developers can feel confident using battle-tested UI components built with accessibility in mind when translating design mockups into working software.</p><blockquote>Product owners can leverage a shared component library to rapidly ideate, test, and learn from interactive prototypes.</blockquote><p>But we knew that bringing our products closer together wasn’t going to be enough to create a cohesive experience for customers and partners of 84.51°. It’s one thing if our products look similar, but if our slide decks and marketing materials look disjointed, we are back at square one.</p><p>We are exploring how our products can influence our corporate brand and how the brand can influence the products. We know that each will have its own unique challenges, goals, and users, and are baking in room for self-expression along the way.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oPa2d1iTdbb4yTq4sXmZLQ.png" /></figure><h4>Introducing Meridian</h4><p>At its core, a design system is a product serving products. 
And any great product should be branded in order to create lasting and memorable experiences for its users. We named our design system Meridian as a play on the navigational and data-driven ethos behind our 84.51° name.</p><p>We released two major versions of Meridian and have learned a tremendous amount about how design systems can bring design, engineering, and product teams closer together. Design systems are a mountain of work and need to be given the resources to mature, fail, and grow. We’ve learned that the pixels and code are the easy part of creating a design system; it’s the people and relationships that are the most difficult and interesting.</p><p>There is a deluge of design system writing published these days, but we hope to offer a unique angle on our experiences creating one that influences products and a corporate brand for a large enterprise. We’ll be exploring topics such as <a href="https://medium.com/8451/how-we-chose-to-build-our-design-system-using-stenciljs-web-components-4878c36743c5">how we chose a technology stack</a>, how we are encouraging and measuring adoption, and what we are doing to create room for individual expression within our products and brand.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2b8dcbd20bf4" width="1" height="1" alt=""><hr><p><a href="https://medium.com/8451/meet-meridian-the-84-51-design-system-2b8dcbd20bf4">Meet Meridian, the 84.51° Design System</a> was originally published in <a href="https://medium.com/8451">84.51°</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How we chose to build our Design System using StencilJS Web Components]]></title>
            <link>https://medium.com/8451/how-we-chose-to-build-our-design-system-using-stenciljs-web-components-4878c36743c5?source=rss----2ec5e2df7046---4</link>
            <guid isPermaLink="false">https://medium.com/p/4878c36743c5</guid>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[stenciljs]]></category>
            <category><![CDATA[design-systems]]></category>
            <category><![CDATA[web-components]]></category>
            <category><![CDATA[product-design]]></category>
            <dc:creator><![CDATA[Dan Bellinski]]></dc:creator>
            <pubDate>Tue, 03 Mar 2020 20:38:23 GMT</pubDate>
            <atom:updated>2020-03-11T13:05:41.780Z</atom:updated>
            <content:encoded><![CDATA[<p><em>By Dan Bellinski, 84.51° Application Developer</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*MmSRmSBdpAVN2Gyw" /><figcaption>Photo by <a href="https://unsplash.com/@mirkoblicke?utm_source=medium&amp;utm_medium=referral">Mirko Blicke</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Design Systems take many shapes and sizes. Here at 84.51°, we firmly believe that a Design System is not a Design System without reusable, ready-to-use code. Having pre-built components made of CSS, HTML and JS that developers can use and instantly be in compliance with our Design System is crucial, both for speed of delivery and consistency in user experience. The technology we chose to become the foundation of our Design System, Meridian, was critical to the success of our system.</p><p>As our Design System team formed, we took a three-step approach to choosing our technology:</p><ol><li>Audit our organization’s technology needs</li><li>Research our options</li><li>Evaluate frameworks with proof of concepts (POCs)</li></ol><h3>Auditing our organization’s technology needs</h3><p>As we began auditing our technology needs, we asked ourselves two questions:</p><p><strong>What are our current technology needs?</strong></p><p>Our major products are built using the latest versions of Angular. We also have some teams doing POCs with Vue.js, and a few static sites using React.</p><p><strong>What are our future technology needs?</strong></p><p>This was a little more difficult to answer, but Angular continues to be our technology of choice for new applications. While we don’t have any plans to switch, we didn’t want to be locked into Angular because of our Design System choices.</p><h3>Researching our options</h3><p>Knowing our technology needs allowed us to start evaluating our options. 
We landed on two major paths with a few different technologies we could use to fulfill them.</p><p><strong>Option 1: Build Native Angular Components and support our major applications in Angular</strong></p><p>Pros: Our company is invested in Angular and we have experience with the technology, so building native Angular components would have been the quickest way to get started. It would have also given us first-class support for our Angular apps, avoiding the hiccups of introducing other technologies.</p><p>Cons: This approach would alienate teams building POCs with Vue.js or the teams building static sites with React. It would also prevent teams that wanted to use a framework other than Angular from using our Design System without a large investment in rebuilding our components to support the new framework.</p><p><strong>Option 2: Build Web Components and support any framework built on JavaScript</strong></p><p>Pros: All of our applications, static sites and POCs could leverage our Design System. This would mean faster building of POCs by leveraging our Design System, and more consistency of user experience across all the products we build. This also allows us to shift our technology of choice for new applications to anything built on JS and still get all of the benefits of our Design System.</p><p>Cons: The idea of web components had been around for a few years, but an agreed-upon spec wasn’t finalized until 2019 (<a href="https://developers.google.com/web/fundamentals/web-components/customelements">Custom Elements v1</a>) — this meant it was a newer technology, and that introduced risk for us. How much support would our supported browsers and our chosen JS frameworks offer, and where would it break? 
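To give a sense of scale for what building a web component this way involves, a StencilJS component can be as small as the following sketch (the tag and property names are hypothetical, not from our actual library):

```typescript
// An illustrative StencilJS component (.tsx); tag and prop are made up.
import { Component, Prop, h } from '@stencil/core';

@Component({
  tag: 'mds-greeting', // custom element name, usable from any framework
  shadow: true,        // render into a Shadow DOM root
})
export class MdsGreeting {
  // Exposed as an attribute/property on the custom element.
  @Prop() name: string;

  render() {
    return <p>Hello, {this.name}!</p>;
  }
}
```

Once compiled, the element is consumed as plain HTML, e.g. &lt;mds-greeting name="Jill"&gt;&lt;/mds-greeting&gt;, regardless of the host framework.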
Building with web components also meant spending time learning a new technology, giving us a slower start.</p><p>To help make our choice, we consulted internal leaders and engineers and asked the <a href="https://design.systems/">Design Systems community</a> questions on Slack and Twitter. We built a few POCs to test web component support in different frameworks to determine what was feasible.</p><h3>Evaluating Frameworks with POCs</h3><p>We had already been building Angular components for years, so there was no need to prove out Option #1; we knew it would work. We looked at several options to build out web components including building <a href="https://www.webcomponents.org/introduction">web components from scratch</a>, <a href="https://angular.io/guide/elements">Angular Elements</a>, <a href="https://lit-element.polymer-project.org/">LitElement</a> and <a href="https://stenciljs.com/">StencilJS</a>. We asked questions like:</p><ol><li>How many other companies are using the framework?</li><li>What is the framework’s documentation like?</li><li>How long has the framework been around?</li><li>How active is development on the framework and what is the company supporting it?</li><li>How much custom development is needed to build a component?</li><li>How easy is it to build a web component with this framework?</li><li>How easy is it to use a web component built with this framework in Angular, React, and Vue.js apps?</li></ol><p>We built a few POCs, evaluated the answers to questions above, and ultimately decided to build our Design System’s foundation on StencilJS.</p><figure><img alt="StencilJS Logo" src="https://cdn-images-1.medium.com/max/500/1*eofjSVBjyttyd5eiYs-GsA.png" /></figure><p><strong>Why StencilJS?</strong></p><ol><li><a href="https://stenciljs.com/docs/introduction">StencilJS documentation</a> is robust</li><li>There are great code examples on the web, including many <a href="https://github.com/ionic-team/ionic/tree/master/core/src/components">Ionic 
components built on StencilJS</a></li><li>StencilJS is specifically built for building Design Systems</li><li>StencilJS has been around since 2017, giving it an established track record of success</li><li>StencilJS’s testing suite worked great for our way of working — they provide support for unit, E2E and snapshot testing</li><li>The web components we built in our POC were easily pulled into Angular, React and Vue.js apps</li></ol><p>It has been six months since we made the choice, and we’ve been happy with StencilJS as our Design System’s foundation; we now have StencilJS components deployed to production in our two major products. We’ve learned a lot along the way and will be sharing more in future posts!</p><h3>Appendix — Other Frameworks</h3><p><em>Why not web components from scratch?</em></p><p>There is a large amount of code required to make a web component interact with the browser, and we determined it wasn’t the best use of our time when other options existed requiring less time investment.</p><p><em>Why not LitElement?</em></p><p>LitElement is a popular option and seems very robust. As we compared it to StencilJS and Angular Elements, it provided less abstraction to build a component, which meant writing more code ourselves. We probably would have been happy with LitElement, but StencilJS felt easier to use.</p><p><em>Why not Angular Elements?</em></p><p>On paper, this was the most appealing option for us: it would have allowed us to build native Angular components and then convert them to web components. Unfortunately, the documentation was severely lacking and the technology was fairly new. 
It also required that the Angular framework be bundled with each exported component, and without Ivy that meant large JS bundles.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4878c36743c5" width="1" height="1" alt=""><hr><p><a href="https://medium.com/8451/how-we-chose-to-build-our-design-system-using-stenciljs-web-components-4878c36743c5">How we chose to build our Design System using StencilJS Web Components</a> was originally published in <a href="https://medium.com/8451">84.51°</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>