<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[intive Developers - Medium]]></title>
        <description><![CDATA[At intive we’re building great digital products for our customers. Day by day. We want to share with you our way of doing things, the challenges we face, the tricks and shortcuts we discover. A little peek behind the scenes — welcome to our intive_dev blog! - Medium]]></description>
        <link>https://medium.com/intive-developers?source=rss----f34f16bef773---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>intive Developers - Medium</title>
            <link>https://medium.com/intive-developers?source=rss----f34f16bef773---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Tue, 07 Apr 2026 21:35:57 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/intive-developers" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Testing in Transition: Navigating Android UI Testing During an XML to Compose Migration]]></title>
            <link>https://medium.com/intive-developers/testing-in-transition-navigating-android-ui-testing-during-an-xml-to-compose-migration-b1f70715ce0c?source=rss----f34f16bef773---4</link>
            <guid isPermaLink="false">https://medium.com/p/b1f70715ce0c</guid>
            <category><![CDATA[test-automation]]></category>
            <category><![CDATA[android-app-development]]></category>
            <category><![CDATA[quality-assurance]]></category>
            <category><![CDATA[software-testing]]></category>
            <category><![CDATA[jetpack-compose]]></category>
            <dc:creator><![CDATA[Murray]]></dc:creator>
            <pubDate>Tue, 23 Sep 2025 12:40:38 GMT</pubDate>
            <atom:updated>2025-09-23T12:39:40.749Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/900/1*9U9MZ7eDqpFQbqizXTjWWg.png" /></figure><p>I have never seen th<em>e unresolved reference </em>error as often as I have in the last few months. The IDs we had been using for our UI tests were gradually dismantled, leaving our scenarios empty. The conversion to Jetpack Compose was in full swing, and we test engineers were on our toes, building a second scaffold next to the building that was being hollowed out.</p><h3>Our Testing Foundation: XML Views and Espresso</h3><p>Until recently, we were creating testing scenarios for the classic XML Views with Espresso and UIAutomator, using IDs where possible. No Cucumber, no bells and whistles, just straightforward UI tests. After discussing this step several times, the development team finally decided to make the transition to Jetpack Compose. The app needed to be future-proof, and we wanted to benefit from the advantages Compose offers. Google itself recommends Compose for Android UI development, making this transition inevitable.</p><p>Parallel to this choice, many features of the app were also being updated with a new design.</p><p>My test engineer colleague and I faced the challenge of updating existing scenarios and creating new test scenarios for elements built with Compose.</p><h3>Key Challenges</h3><p>What exact challenges did we have to overcome during this period?</p><ul><li>Jetpack Compose does not use any IDs</li><li>Whole screens of the app existed in a kind of twilight zone, containing both XML and Compose elements</li><li>This was our first time introducing Compose testing alongside Espresso</li><li>Several complex methods for assertions and interactions were built specifically for XML views and needed to be cloned and adapted for Compose</li></ul><h3>Bridging Two Worlds</h3><p>After completing the <a href="https://developer.android.com/develop/ui/compose/testing#setup">setup</a> and adding our Espresso 
dependencies for Compose, we began examining the Composables.</p><p>We quickly realized that we could no longer rely on IDs and that Test Tags were the new way forward. With XML, IDs are used by developers to reference individual views. As test engineers, we could normally count on having an ID available to locate elements. Now, every Composable we wanted to find using a semantic selector needed to have a test tag. Fortunately, we already had experience setting <em>accessibilityIdentifier</em> in SwiftUI for iOS, so adding these tags ourselves in the Compose code wasn’t a major adjustment. Here’s an example:</p><pre>Button( <br> onClick = { /* action */ }, <br> modifier = Modifier.testTag("submitButton") <br>) { <br> Text("Submit") <br>}</pre><p>In our testing code, this Composable could easily be located by</p><pre>composeTestRule.onNodeWithTag("submitButton").performClick()</pre><p>– perfect!</p><p>But how could we verify that our Test Tags were set in the correct place? Initially, we turned to the Layout Inspector, only to discover that the component tree did not show any Test Tags at all.</p><p>It turns out you need to look at the <strong>Attributes</strong> panel of a selected node, not the Component Tree itself, to check whether your Test Tags are properly applied.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*nGi9Rcp3r7cgteja.png" /></figure><p>Another approach we found helpful was printing the Compose Semantics Tree to Logcat using this snippet:</p><pre>composeTestRule.onRoot(useUnmergedTree = true) <br> .printToLog("ComposeHierarchy")</pre><p>Pass <em>true</em> if you want to see the full semantics tree with all internal elements. More on that later. With this insight, we were ready to continue our journey.</p><p>But that was just the beginning. 
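One pattern worth considering here (a hypothetical sketch, not taken from the article's codebase; all names are illustrative) is to centralise Test Tags in a plain Kotlin object shared between the production Composables and the test code, so that a mistyped tag becomes a compile-time error rather than an empty node lookup at runtime:

```kotlin
// Hypothetical central registry of Test Tags, shared between production
// Composables and test code. A typo in a tag name then fails at compile
// time instead of producing a flaky or empty lookup in a running test.
object TestTags {
    const val SUBMIT_BUTTON = "submitButton"

    // Tags for repeated elements, e.g. the rows of a lazy list.
    fun listItem(index: Int) = "listItem_$index"
}
```

The Composable would then use Modifier.testTag(TestTags.SUBMIT_BUTTON) and the test onNodeWithTag(TestTags.SUBMIT_BUTTON), keeping both sides in sync by construction.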
With Espresso, we had developed numerous utility methods that helped us wait for elements, extract text from views, tap on the nth item in a RecyclerView, and so on. All of this infrastructure needed adaptation, as the Compose Testing framework follows a different approach.</p><p>We couldn’t simply modify our existing methods — we had to build new ones that could handle both paradigms, as the app had become a mixture of XML and Compose elements. Step by step, we had to learn where to set Test Tags and how to interact effectively with the Semantic Nodes that Compose offers.</p><h3>Things we learned along the way</h3><p>I guess Compose is one of those technologies where you don’t want to go back once you’ve started working with it. For instance, I really do prefer locating an element via a Test Tag over the sometimes sweeping search for an element via IDs. ID-based lookups can become quite complex and hard to fathom, whereas Test Tags offer a straightforward approach. You could even pass a Test Tag as a parameter into a reusable Composable, so that a whole section can be tagged from the call site. On the other hand, the process of setting the Test Tags either has to be internalised by the development team or done by yourself.</p><p>In addition, there were many situations where I was creating test steps with ease using Compose Nodes and then struggling with more complex scenarios with the old XML Views.</p><p>For example, I had a RecyclerView with mixed content — some items were categories, others were items within those categories. I needed to count all items in the categories by their IDs. However, the list was long enough that not all items were visible simultaneously, which created a problem: Espresso can only interact with elements currently in the view hierarchy, and only the visible items exist in that hierarchy at any given time. When the user scrolls, views are created and destroyed.</p><p>Not the case with a Compose Node Tree. 
The items there can have semantic properties that persist beyond visual rendering. What you get is a more complete representation of the UI structure. All you have to do is:</p><pre>composeTestRule.onAllNodesWithTag("[tagName]", useUnmergedTree = true).assertCountEquals([expectedNumber])</pre><p>Beyond just better element counting, the semantic tree structure is theoretically more efficient to traverse. Although we did not notice a significantly faster test execution, this is presumably because our app is still a mixed bag of XML and Compose.</p><p>The <em>useUnmergedTree</em> parameter can make all the difference, and it’s crucial to understand that some elements may be semantically merged with their parents in the default merged tree. Setting <em>useUnmergedTree = true</em> reveals these otherwise hidden semantic nodes, making them accessible for testing.</p><h3>Bonus Content</h3><p>One of our colleagues was using Appium with the UiAutomator driver for automated mobile testing and faced yet another set of issues. He wanted to use the Appium Inspector to have a look at the Test Tags but was not seeing them at all. As he could not use IDs anymore, he could only rely on displayed text or content descriptions. We <a href="https://developer.android.com/develop/ui/compose/testing/interoperability#uiautomator-interop">found out</a> that you have to enable the semantic property <em>testTagsAsResourceId</em> to make the test tags accessible for UiAutomator. After that, he was ready to go.</p><h3>Closing Thoughts</h3><p>Although the era of unresolved references is not over yet, we are confident that now we can not only adapt existing scenarios but also create a bunch of new ones in the process. 
On top of that, we learned a lot about Compose and the differences compared to XML Views and got the opportunity to build more robust testing practices using semantic properties.</p><p>The key lessons we took away:</p><ul><li>Get to know debugging strategies early when facing a new framework</li><li>Test Tags offer clearer element identification than complex ID hierarchies</li><li>Define a process early in the migration where either test engineers or developers add Test Tags to new Composables</li><li>Compose’s semantic tree provides a more complete view of the UI structure</li><li>Understanding the <em>useUnmergedTree</em> parameter is crucial for complex UI hierarchies</li><li>Use transformation phases to rethink and rebuild assertions and interactions on more stable ground</li></ul><p>Looking back, the temporary complexity and challenges paid off in more maintainable and reliable tests in the long run. We’re excited for what’s to come.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b1f70715ce0c" width="1" height="1" alt=""><hr><p><a href="https://medium.com/intive-developers/testing-in-transition-navigating-android-ui-testing-during-an-xml-to-compose-migration-b1f70715ce0c">Testing in Transition: Navigating Android UI Testing During an XML to Compose Migration</a> was originally published in <a href="https://medium.com/intive-developers">intive Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AWS Cloud Computing: Kubernetes vs. Serverless. A practical case analysis.]]></title>
            <link>https://medium.com/intive-developers/aws-cloud-computing-kubernetes-vs-serverless-a-practical-case-analysis-812840521aa0?source=rss----f34f16bef773---4</link>
            <guid isPermaLink="false">https://medium.com/p/812840521aa0</guid>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[intive]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[kubernetes]]></category>
            <category><![CDATA[serverless]]></category>
            <dc:creator><![CDATA[intive]]></dc:creator>
            <pubDate>Mon, 06 Nov 2023 13:56:21 GMT</pubDate>
            <atom:updated>2023-11-06T13:56:21.094Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/900/1*H4hnP85oAA0yHNKjbC98eA.png" /></figure><h4>By Emilio Gerbino, Software Architect at intive</h4><p>The purpose of this article is to present a real-life scenario and its corresponding analysis putting special focus on two different technologies:<br><strong>Kubernetes and Serverless</strong>.</p><p>It is based on a real-life project, which uses AWS (Amazon Web Services) as<br>the cloud provider.</p><p>For this article the whole scenario will be introduced, including business<br>requirements and the analysis for both technological approaches. Pros and<br>cons will be covered, as well as the corresponding cost analysis. As a final<br>conclusion, all information will be summarized and presented.</p><p>As a bonus track, a general analysis for both technologies, including a bit of<br>history and the technical recommendations for the suitable scenarios will be presented as well.</p><p>Hope you enjoy it.</p><h3>Scenario</h3><p>As part of a larger project, a component capable of processing invoices has to be developed. This involves connecting to a POS (Point Of Sale) and reusing its preexisting interface to generate invoices. The entire system is based on microservices, and AWS (Amazon Web Services) is the cloud provider.</p><p>A key aspect to mention is that, as a business requirement, the desired architecture has to support sales peaks coming from Hot-Sale/Cyber-week seasons marketing programs. 
In terms of numbers, this means supporting 1500 transactions per day on a regular basis, and peaks of 12000 transactions on a Hot Sale day.</p><p>Two different technical alternatives are proposed: Kubernetes and Serverless, both based on AWS.</p><p>The preexisting POS invoicing interface includes the following end-points:</p><ul><li><strong>Transaction data:</strong> This first message contains the general transaction data, such as customer data, date, total amount, total discounts, total number of items, etc.</li><li><strong>Items data:</strong> This message includes detailed information for each item, including EAN (barcode), quantity, selling price, etc.</li><li><strong>Payment data:</strong> This message presents the payment information such as payment type, masked card number, amount, authorization id, authorization timestamp, etc.</li><li><strong>Processed discounts:</strong> This message reports the discounts applied by the POS.</li><li><strong>Invoicing data:</strong> This final message contains the resulting invoice data, such as invoice number, POS id, cashier id, store id, date, invoiced amount, etc.</li></ul><p>To depict the complete scenario:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cLCsuz4KcHOBnqNADWDFjw.png" /></figure><p>In technical terms, this could be solved using Kubernetes as described in the following diagram:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5QthhihIuwxfqytJmKrV1Q.png" /></figure><p>Where each component handles:</p><ul><li><strong>Ingress:</strong><br>- Receiving the transaction data as a single message.<br>- Saving the message into a DynamoDB table.<br>- Acknowledging the transaction data reception.<br>- Notifying the invoicing result, either success or failure, with its<br>corresponding data.</li><li><strong>Handler:</strong><br>- Receiving the incoming messages from the POS.<br>- Retrieving the corresponding data from the DynamoDB table.<br>- Responding to the incoming messages 
with the right data.</li><li><strong>Kong:</strong><br>- The NGINX-based API gateway product used to expose the required end-points.</li></ul><p>The second alternative is using Serverless, as described in the following diagram:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IPHqXDOsVRSrgnYJRbZH3A.png" /></figure><p>Where each component handles:</p><ul><li><strong>Ingress:</strong><br>- Receiving the transaction data as a single message.<br>- Saving the message into a DynamoDB table.<br>- Acknowledging the transaction data reception.<br>- Notifying the invoicing result, either success or failure, with its<br>corresponding data.</li><li><strong>Lambdas:</strong><br>- One lambda to handle the necessary response for each endpoint<br>invoked by the POS.</li><li><strong>API-Gateway:</strong><br>- A single api-gateway component that exposes all end-points and, by<br>using resources, routes the messages to each corresponding lambda.</li></ul><p>Summarizing the key points for each approach:</p><ul><li>With the Serverless approach, the dev-team does not need to worry about configuring auto-scaling threshold parameters as in Kubernetes. This is particularly handy for the hot-sale event peaks.</li><li>The Kubernetes approach requires the dev-team to learn this technology. 
The learning curve for Serverless is less steep.</li><li>Going for Serverless minimizes the dependency on the DevOps team.</li><li>The scenario is suitable for an event-driven architecture.</li><li>At first glance, Serverless seems to be cheaper (this point will be covered in depth in the next section).</li><li>Serverless integrates seamlessly with the rest of the components for the required interfaces (SNS, SQS, API Gateway).</li><li>It is very unlikely that the project will move to another cloud provider than AWS in the future.</li><li>The Kubernetes approach has fewer components; however, the serverless approach is easier to deploy, especially when some logic in a single lambda needs to be modified.</li></ul><h3>Costs</h3><p>Now let’s talk about an important point, the <strong>infrastructure cost</strong>. To perform the estimations for each approach, the tool used was the <a href="https://calculator.aws/">AWS Pricing Calculator</a>.</p><p>To access the details of the Kubernetes-based estimation, please click <a href="https://calculator.aws/#/estimate?id=243c0a3fcd6a7449548714f53232040d942d3fc6">here</a>, and for the Serverless-based estimation, please click <a href="https://calculator.aws/#/estimate?id=366b8bf033e07d7aff139d0cc58b904a34101bf1">here</a>. Please notice that common components, such as the SNS topics, SQS queues and DynamoDB tables, were excluded from the estimations, since the key goal is to compare the costs of Kubernetes against Serverless.</p><p>The case study is distributed as follows:</p><p><strong>In a 30-day month, 25 days correspond to a normal workload of 1500 transactions per day and the remaining 5 days correspond to a “hot-sale” event of 12000 transactions per day.</strong></p><p>Let’s take a closer look at the Lambda-based scenario. 
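To make the stated workload distribution concrete, here is a quick back-of-the-envelope calculation (purely illustrative; it only restates the transaction volumes given above, not the calculator's pricing):

```kotlin
// Monthly transaction volume implied by the case study:
// 25 normal days at 1,500 transactions/day plus
// 5 hot-sale days at 12,000 transactions/day.
fun monthlyTransactions(
    normalDays: Int = 25,
    normalPerDay: Int = 1_500,
    peakDays: Int = 5,
    peakPerDay: Int = 12_000,
): Int = normalDays * normalPerDay + peakDays * peakPerDay

fun main() {
    // 37,500 normal + 60,000 hot-sale = 97,500 transactions per month
    println("Total transactions per month: ${monthlyTransactions()}")
}
```

Calling monthlyTransactions() yields 97,500 transactions per month, which is the volume both estimates below have to absorb; the interesting difference is that Serverless is billed per transaction while the Kubernetes nodes are billed for every hour they are up.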
If we add the costs (in US dollars) for both periods, normal workload and the hot-sale peak, we would get:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/520/1*Z-HDEN8sd13wf8bZCcExaQ.png" /></figure><p>Graphically:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rw4h2yBpMqMrDjfk6XB-kg.png" /></figure><p>Now, let’s take a closer look at the Kubernetes approach; the costs would be:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/517/1*T0HFaYQj-1FQw1UBz8BtKw.png" /></figure><p>Graphically:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uG1dt4h8pWJePckOggtxrA.png" /></figure><h3>Conclusions</h3><ul><li>The Serverless infrastructure costs would be less than half of those for EKS (Kubernetes). To put it in numbers, Serverless would cost 68.13 dollars per month while Kubernetes would cost 179.61 dollars per month.</li><li>One thing to note from the Serverless estimation is that the main costs would be related to logging (CloudWatch), which is unexpectedly more expensive than the processing costs themselves (even considering a modest amount of logging!).</li><li>For the dev-team, it would be much easier to develop on lambdas, focusing only on the business logic and not worrying about how it should scale.</li><li>The resulting number of components should remain stable: the POS interface will not change (as it is being used by several other external systems); therefore, no extensions to the API gateway interfaces are expected.</li><li>By using lambdas, it is easier for the dev-team to implement specific changes in the logic, reducing deployment times.</li><li>As AWS is the chosen cloud provider, it is easy to take advantage of seamless integration with all the rest of the AWS components (SNS, SQS and DynamoDB).</li></ul><p>With those points on the table, for this particular scenario and its requirements, the Serverless approach would be chosen.</p><h3>General technology 
analysis</h3><h4>Timeline</h4><p>Let’s talk a bit about history. The first release of AWS Lambda was in 2014. A year before (2013), Docker had its first official release, and it started getting ready for production scenarios the very next year. It was also in 2014 that Kubernetes was born. So, technically speaking, both technologies became available at more or less the same time. This means serverless didn’t come after Kubernetes, and therefore serverless wasn’t conceived as a replacement for containers.</p><p>On the other hand, Kubernetes adoption was much faster, as it was a friendlier technology for migrating existing systems; but as time went by, newer systems started adopting the serverless paradigm, gaining market share at a faster pace.</p><h4>Suitable scenarios</h4><p>So, when is it better to choose one technology over the other?</p><p>You should go with Kubernetes when:</p><ul><li>you need compatibility with the past. For example, when you are migrating an existing system.</li><li>you need to avoid vendor lock-in. This approach simplifies moving the infrastructure from one vendor to another (or at least it should be easier than with serverless).</li><li>your system should also support running on-prem. One big reason for this might be data privacy.</li><li>you need predictable costs. 
Costs can be predicted better than with the serverless approach, which can be an advantage when assigning budgets.</li><li>you have predictable, evenly distributed workloads.</li><li>you need to build stateful components.</li><li>you have long-running tasks.</li></ul><p>But there are also some other points you should keep in mind:</p><ul><li>You must pay as long as your infrastructure is up, even if there is little to no workload.</li><li>You need a DevOps team to manage the infrastructure.</li><li>You need to be prepared for workload peaks by ensuring the necessary nodes are available.</li><li>Your dev-team must be familiar with Kubernetes (there is a learning curve).</li></ul><p>On the other hand, you should go with Serverless when:</p><ul><li>you need a fast time-to-market.</li><li>you can’t predict your workloads.</li><li>you need (almost) infinite autoscaling.</li><li>you want to avoid taking care of the underlying infrastructure.</li><li>you need a pay-as-you-go model. You will get great savings when there is little to no workload because you are only paying for what you are using; in other words, zero use equals zero cost.</li><li>you don’t need/have a strong DevOps team.</li><li>you are building an event-driven architecture.</li><li>you want your dev-team to focus only on the business logic and nothing else.</li><li>you need high availability.</li></ul><p>But there are also some other points you should keep in mind:</p><ul><li>Not suitable for all kinds of tasks — for example, long-running tasks.</li><li>Vendor lock-in. Once you pick a cloud provider, it is hard to move to another.</li><li>Security/data privacy issues: serverless relies entirely on the cloud provider, which may be a no-go for banks or insurance companies.</li><li>Great for new systems but hard when migrating existing ones.</li></ul><h3>Final conclusion</h3><p>For the presented case study, Serverless proved to be the right answer. 
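The checklists above can be condensed into a toy decision helper (purely illustrative — a real architecture decision weighs far more factors than a handful of booleans, and the criteria names below are this sketch's own shorthand):

```kotlin
enum class Platform { KUBERNETES, SERVERLESS, EITHER }

// Toy scoring of the criteria listed above: each matching criterion
// adds one point to the corresponding platform, and the higher score wins.
fun recommend(
    migratingExistingSystem: Boolean,
    mustAvoidVendorLockIn: Boolean,
    needsOnPrem: Boolean,
    hasLongRunningTasks: Boolean,
    unpredictableWorkloads: Boolean,
    eventDriven: Boolean,
    smallDevOpsTeam: Boolean,
): Platform {
    val k8sScore = listOf(
        migratingExistingSystem, mustAvoidVendorLockIn,
        needsOnPrem, hasLongRunningTasks,
    ).count { it }
    val serverlessScore = listOf(
        unpredictableWorkloads, eventDriven, smallDevOpsTeam,
    ).count { it }
    return when {
        k8sScore > serverlessScore -> Platform.KUBERNETES
        serverlessScore > k8sScore -> Platform.SERVERLESS
        else -> Platform.EITHER
    }
}
```

For the invoicing scenario in this article — event-driven, bursty workloads, minimal DevOps involvement, no on-prem requirement — such a helper would lean towards SERVERLESS, matching the conclusion reached here.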
But there is no silver bullet, and this was simply the architecture that best fit this specific scenario’s requirements. There are many other scenarios where Kubernetes may be the right answer. The key is to carefully understand the pros and cons of each approach, including costs, in order to apply the technology that best suits the project’s needs. Whether you go with Kubernetes or Serverless, after reading this article, is up to you.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=812840521aa0" width="1" height="1" alt=""><hr><p><a href="https://medium.com/intive-developers/aws-cloud-computing-kubernetes-vs-serverless-a-practical-case-analysis-812840521aa0">AWS Cloud Computing: Kubernetes vs. Serverless. A practical case analysis.</a> was originally published in <a href="https://medium.com/intive-developers">intive Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to launch an e-commerce platform efficiently]]></title>
            <link>https://medium.com/intive-developers/how-to-launch-an-e-commerce-platform-efficiently-87a1a92f15fd?source=rss----f34f16bef773---4</link>
            <guid isPermaLink="false">https://medium.com/p/87a1a92f15fd</guid>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[ecommerce]]></category>
            <category><![CDATA[retail]]></category>
            <category><![CDATA[intive]]></category>
            <dc:creator><![CDATA[intive]]></dc:creator>
            <pubDate>Wed, 01 Nov 2023 19:00:54 GMT</pubDate>
            <atom:updated>2023-11-01T19:00:54.552Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UeudDppDEd4nWvTs8BgMUw.png" /></figure><h4>By <strong>Emilio Oscar Gerbino, </strong>Software Architect in intive</h4><p>There comes a time when a company needs to decide whether to launch a new e-commerce platform that could potentially generate higher sales. However, without knowing exactly how much such a platform could generate, it is difficult to allocate a budget for its necessary development and infrastructure.</p><p>On the one hand, the objective is to <strong>minimize development times</strong>, looking for a go-to-market strategy as short and as agile as possible; and, on the other, <strong>investing in the strictly necessary underlying infrastructure</strong>, without wasting resources on idle capacity. This second objective hides two implicit factors: <strong>costs and performance</strong>. Infrastructure costs must be reduced, while e-commerce performance must be satisfactory for both low sales levels and massive sales peaks.</p><p>The question is, can such objectives be achieved? <strong>Spoiler alert: Yes.</strong></p><h3>Choose a cloud-based, serverless solution</h3><p>Achieving such objectives can be accomplished by choosing the correct architecture for the e-commerce implementation.</p><p>The first step is to <strong>develop a cloud-based solution</strong>. The necessary infrastructure to set up the new e-commerce must not be on-premises — meaning, not within the company’s own servers. Instead, it <strong>must be deployed in a cloud computing provider</strong>, such as AWS (Amazon Web Services), GCP (Google Cloud Platform), Azure (platform provided by Microsoft), etc. just to mention a few.</p><p>The second step is to <strong>design a serverless solution</strong>. What is serverless? 
In simple terms, it can be defined as being able to execute only the logic of your business <strong>without worrying about the servers or resources necessary for it</strong>. This includes their administration and their scaling in the event of demand peaks; all these factors are handled by the cloud computing service provider.</p><p>The beauty of a serverless solution is that you only pay for what you use, or in other words, that <strong>zero usage equals zero cost</strong>. This model is known as pay-as-you-go, and it is particularly useful, for example, when launching a new e-commerce platform where the potential performance is not yet known. In this way, if the e-commerce platform does not achieve its expected success, no economic resources will be wasted on idle infrastructure, and conversely, if the success is much greater than expected, the performance of the system will remain at optimal levels. This means that your business can support growth in demand and avoid the possibility of having an undersized infrastructure. This is the second advantage of a serverless solution: <strong>scaling is automatic, requires no administration, and is virtually infinite</strong>.</p><h3>Go composable for greater flexibility</h3><p>The third step is going composable. 
Composable commerce allows companies to combine different, best-of-breed third-party and custom-developed Packaged Business Capabilities (PBCs) into a single solution built for specific business needs.</p><p>It’s an approach to building e-commerce applications by combining pre-built software components that represent a business capability of the application (such as the shopping cart, checkout, or search function) in the way that makes the most sense for how the business makes money.</p><p>E-commerce platforms therefore don’t need to develop each business function in-house and can instead leverage out-of-the-box tools from a SaaS vendor in a way that allows them to add to or change existing features in real time.</p><p>In the IT market, both regional and global, <strong>there are numerous organizations with vast experience developing serverless and composable solutions</strong> that have been successfully applied in different e-commerce implementations, achieving optimal levels of performance and low operating costs, with short delivery times.</p><h3>Apply a MACH mentality</h3><p>Since 2022 intive has been a member of the MACH (Microservices, API-first, Cloud-native SaaS and Headless) Alliance. The <a href="https://machalliance.org/">MACH Alliance</a> is a non-profit organization that advocates for open and best-of-breed enterprise technology ecosystems. MACH technologies support a composable approach in which every component is pluggable, scalable, replaceable, and can be continuously improved through agile development to meet evolving business requirements.</p><p>intive was an early supporter of MACH principles and has implemented MACH architecture for its customer Vorwerk in the award-winning Digital Recipe Ecosystem Cookidoo®, amongst others. 
intive has also built a strong, ever-growing partnership network with MACH Alliance founders such as Commercetools, Algolia, Amplience, and Contentful.</p><p>To sum up, many organizations face the dilemma of launching a promising new e-commerce platform but hesitate to take the step forward due to investment costs and time constraints. Yet, it is important to know that <strong>choosing the right architecture</strong> — in this case serverless and composable — can help <strong>reduce uncertainties</strong> and <strong>keep budgets under control</strong>, and, should the new e-commerce platform become very successful, <strong>achieve optimal performance during sales peaks</strong> with practically infinite scaling.</p><p>Taking advantage of serverless technologies can provide numerous business opportunities for companies that apply them, as well as keep businesses rooted in the modern tech landscape.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=87a1a92f15fd" width="1" height="1" alt=""><hr><p><a href="https://medium.com/intive-developers/how-to-launch-an-e-commerce-platform-efficiently-87a1a92f15fd">How to launch an e-commerce platform efficiently</a> was originally published in <a href="https://medium.com/intive-developers">intive Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Theming SwiftUI applications]]></title>
            <link>https://medium.com/intive-developers/theming-swiftui-applications-412b0221c8cf?source=rss----f34f16bef773---4</link>
            <guid isPermaLink="false">https://medium.com/p/412b0221c8cf</guid>
            <category><![CDATA[intive]]></category>
            <category><![CDATA[app-development]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[swiftui]]></category>
            <dc:creator><![CDATA[Michael Kao]]></dc:creator>
            <pubDate>Wed, 06 Sep 2023 12:00:31 GMT</pubDate>
            <atom:updated>2023-09-15T11:41:04.233Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/900/1*EHXsbY8kHD92gogskokSEA.png" /></figure><p>When developing apps for our clients at intive, we strive to a consistent and clear design language. Throughout the development process, our designers typically provide components and screen designs optimized for the “light” mode, which ensures an optimal viewing experience in normal light environments.<br>Another aspect we consider is supporting dark mode, which offers an alternative version of the app that visually complements the light mode. These are typically the primary variables that affect the user interfaces we build, although there may be additional factors to consider.</p><p>Especially when working with larger clients operating at a significant scale across multiple countries, we recognize the need for increased flexibility in theming mobile applications. For example, some apps may require different appearances based on the country they are used in. Country A might need a distinct <a href="https://developer.apple.com/documentation/swiftui/color/accentcolor#">accent color</a> for its interactive UI elements compared to Country B, while still maintaining a common design language. Another scenario could involve apps that need to adapt their appearance during specific time periods, such as around Black Friday or during the Christmas holidays.</p><p>In this article, we will explore how to achieve this kind of theming for modern SwiftUI applications. Specifically, we will focus on theming buttons, although the proposed solution can also be applied to other system UI elements or custom ones.</p><p>Before delving into the topic right away, let’s take a step back and review the existing theming capabilities provided by SwiftUI.</p><h3>Button Styles</h3><p>In SwiftUI we have the concept of “Styles“ for defining a views appearance (and interaction behavior). 
For buttons this would be the <a href="https://developer.apple.com/documentation/swiftui/primitivebuttonstyle#">PrimitiveButtonStyle</a>, a protocol that comes with a couple of system-provided implementations.<br>By default, SwiftUI selects a style based on the current context; most likely this will be the <a href="https://developer.apple.com/documentation/swiftui/borderlessbuttonstyle">BorderlessButtonStyle</a>¹.</p><pre>Button(&quot;Some Button&quot;) {}</pre><figure><img alt="Default button in SwiftUI" src="https://cdn-images-1.medium.com/max/362/1*JNLqObbhoFiJX_L0l-kFZQ.png" /></figure><p>To present a button with a solid rounded background, we can use the <a href="https://developer.apple.com/documentation/swiftui/borderedprominentbuttonstyle/">BorderedProminentButtonStyle</a>, which is also statically available as <a href="https://developer.apple.com/documentation/swiftui/primitivebuttonstyle/borderedprominent/">borderedProminent</a>.</p><pre>Button(&quot;Some Button&quot;) {}<br>  .buttonStyle(.borderedProminent)</pre><figure><img alt="Button with “bordered prominent” style" src="https://cdn-images-1.medium.com/max/362/1*PQUrpHTIImteALTAj3nGuQ.png" /></figure><h4>Custom Button Styles</h4><p>Tweaking the system styles is possible to some extent, but we often require more flexibility for customization. To do so, we can implement our own button styles by conforming a type to the <a href="https://developer.apple.com/documentation/swiftui/primitivebuttonstyle/">PrimitiveButtonStyle</a> or <a href="https://developer.apple.com/documentation/swiftui/buttonstyle">ButtonStyle</a> protocol. The latter is more about defining the appearance while keeping the default interaction behavior. 
PrimitiveButtonStyle is about specifying both¹.</p><p>Our own implementation of a custom button style could look something like the following PrimaryButtonStyle:</p><pre>struct PrimaryButtonStyle: ButtonStyle {<br>  func makeBody(configuration: Configuration) -&gt; some View {<br>    HStack {<br>      Spacer()<br>      configuration.label<br>      Spacer()<br>    }<br>    .font(.system(.title2, design: .monospaced).bold())<br>    .padding([.vertical], 24)<br>    .foregroundColor(Color.teal)<br>    .background {<br>      Capsule()<br>        .stroke(Color.teal, lineWidth: 3)<br>    }<br>  }<br>}</pre><p>The ButtonStyle <a href="https://developer.apple.com/documentation/swiftui/buttonstyleconfiguration">configuration</a> provides us with the label that is centered within an HStack. We also add a Capsule shape around the button, with a fixed teal color and some adjusted font design.</p><p>This button style can be applied with the buttonStyle view modifier.</p><pre>Button(&quot;Primary Button&quot;) {}<br>  .buttonStyle(PrimaryButtonStyle())</pre><figure><img alt="Button with custom “primary”  button style" src="https://cdn-images-1.medium.com/max/372/1*TUEnCZ0RqDotjhitvB-iwQ.png" /></figure><p>By extending the ButtonStyle protocol we can also shorten the expression, as is done for the system-defined button styles.</p><pre>extension ButtonStyle where Self == PrimaryButtonStyle {<br>  static var primary: Self { .init() }<br>}</pre><p>This allows the following expression:</p><pre>Button(&quot;Primary Button&quot;) {}<br>  .buttonStyle(.primary)</pre><h3>Theming</h3><p>With that in mind, let’s look at the previously described scenario, where we have to support visually different versions of our UI. For demonstration, let’s say we have an app that has a default design, but during Black Friday parts of the UI should have a darker look and feel.</p><p>Let’s first define a type that holds the moving parts of our UI, which can be themed. 
For that we can use a struct that holds the properties that can be adapted per component; in our case, let’s start with just the color for the primary button.</p><pre>struct Theme {<br>  var button: Button<br><br>  struct Button {<br>    var primary: Primary<br><br>    struct Primary {<br>      var color: Color<br>    }<br>  }<br>}</pre><p><em>Note that this is a more deeply nested structure than really needed. But it’s a structure we can easily build upon and extend.</em></p><p>Our default theme could look something like this:</p><pre>extension Theme {<br>  static let `default` = Self(<br>    button: .init(<br>      primary: .init(color: .teal)<br>    )<br>  )<br>}</pre><p>To use this theme inside our custom button style we can leverage SwiftUI’s “EnvironmentValues”. This has a couple of advantages, but we will come to that in a bit.<br>To make our Theme available to the SwiftUI environment, we need to extend SwiftUI’s EnvironmentValues by implementing an EnvironmentKey.</p><pre>private struct ThemeEnvironmentKey: EnvironmentKey {<br>  static var defaultValue = Theme.default<br>}<br><br>extension EnvironmentValues {<br>  var theme: Theme {<br>    get { self[ThemeEnvironmentKey.self] }<br>    set { self[ThemeEnvironmentKey.self] = newValue }<br>  }<br>}</pre><p>The default value required by the EnvironmentKey will be the previously defined static Theme.default. 
Further, we need to define a property on EnvironmentValues which internally accesses the underlying structure through subscript syntax.</p><p>To make use of the new environment value, the PrimaryButtonStyle only needs a few adjustments:<br>First, we pull out the environment value through the <a href="https://developer.apple.com/documentation/swiftui/environment">Environment</a> property wrapper:</p><pre>struct PrimaryButtonStyle: ButtonStyle {<br>  @Environment(\.theme) private var theme<br>  ...<br>}</pre><p>Accessing EnvironmentValues from within a ButtonStyle has been supported since iOS 13².</p><p>Then we use the theme in those places that need to change depending on the selected theme.</p><pre>  ...<br>  func makeBody(configuration: Configuration) -&gt; some View {<br>    ...<br>    .foregroundColor(self.theme.button.primary.color)<br>    .background {<br>      Capsule()<br>        .stroke(self.theme.button.primary.color, lineWidth: 3)<br>    }<br>  }<br>}</pre><p>We can also use Swift’s key paths to directly access the part of the theme we are actually interested in through the property wrapper.</p><pre>struct PrimaryButtonStyle: ButtonStyle {<br>  @Environment(\.theme.button.primary) private var theme<br><br>  func makeBody(configuration: Configuration) -&gt; some View {<br>    ...<br>    .foregroundColor(self.theme.color)<br>    .background {<br>      Capsule()<br>        .stroke(self.theme.color, lineWidth: 3)<br>    }<br>  }<br>}</pre><p>The only thing that is missing is the definition for our new theme. 
Let’s call it blackFriday.</p><pre>extension Theme {<br>  static let blackFriday = Self(<br>    button: .init(<br>      primary: .init(color: .black)<br>    )<br>  )<br>}</pre><h3>Switching themes</h3><p>With that we can use our Theme and try it out in a SwiftUI preview.</p><pre>struct ThemingPreviews: PreviewProvider {<br>  struct ContentView: View {<br>    var body: some View {<br>      VStack {<br>        Button(&quot;Primary Button&quot;) {}<br>          .buttonStyle(.primary)<br>      }<br>      .padding()<br>    }<br>  }<br><br>  static var previews: some View {<br>    ContentView()<br>  }<br>}</pre><p>Without any adjustments, the button style will use the default theme. To switch the theme we can use the <a href="https://developer.apple.com/documentation/swiftui/view/environment(_:_:)">environment</a> view modifier and set the theme.</p><pre>ContentView()<br>  .environment(\.theme, .blackFriday)</pre><figure><img alt="A “Primary Button” with the black Friday theme." src="https://cdn-images-1.medium.com/max/373/1*cHiusZbDjjP35y79G-uS8w.png" /></figure><p>One great benefit of leveraging the SwiftUI environment for this purpose is that it will propagate the theme down to any descendant of the ContentView. We only need to set the theme once, preferably high in the view tree, and any view showing a button with the “primary” button style will automatically adjust its appearance.</p><p>In a real-world application, we would set the theme environment value depending on the current date and time, or based on the country the app is built for or distributed in.</p><h3>Flexibility</h3><p>Another interesting aspect of using the SwiftUI environment is that we still have great flexibility to alter the theme in certain areas of our app where we need to. Imagine we have a “blackFriday” modal, which should show the “blackFriday” theme no matter what was set higher up in the hierarchy. 
This is possible by setting the theme environment value on the view presented in the sheet:</p><pre>struct ThemingPreviews: PreviewProvider {<br>  struct ContentView: View {<br>    @State private var isPresented = false<br><br>    var body: some View {<br>      VStack {<br>        Button(&quot;Primary Button&quot;) { isPresented = true }<br>          .buttonStyle(.primary)<br>      }<br>      .padding()<br>      .sheet(isPresented: $isPresented) {<br>        NavigationView {<br>          Button(&quot;Primary Button&quot;) { isPresented = false }<br>            .padding()<br>            .buttonStyle(.primary)<br>            .navigationBarTitleDisplayMode(.inline)<br>            .navigationTitle(&quot;Modal&quot;)<br>        }<br>        .environment(\.theme, .blackFriday)<br>      }<br>    }<br>  }<br><br>  static var previews: some View {<br>    ContentView()<br>      .environment(\.theme, .default)<br>  }<br>}</pre><figure><img alt="Modal with button with different theme" src="https://cdn-images-1.medium.com/max/1024/1*_jT1Jev3hte4DF4MD0uEsQ.png" /><figcaption>Using different themes in certain areas, for example within a specific modal</figcaption></figure><p>It would even be possible to make ad-hoc adjustments to the current theme when needed. This is possible because we use structs for defining our Theme.</p><pre>Button(&quot;Primary Button&quot;) {}<br>  .buttonStyle(.primary)<br>  .environment(\.theme.button.primary.color, .pink)</pre><h3>Theme structure</h3><p>The structure of the Theme type is completely up to us and the needs of our application. The “Primary” button can easily be extended by adding additional properties to the Theme.Button.Primary type. 
If we need to support additional “Secondary” or “Tertiary” buttons we can add these to the Theme.Button type.</p><pre>struct Theme {<br>  var button: Button<br><br>  struct Button {<br>    var primary: Primary<br>    var secondary: Secondary<br>    var tertiary: Tertiary<br>    <br>    struct Primary {<br>      var color: Color<br>      var borderWidth: CGFloat<br>    }<br><br>    struct Secondary {<br>      var color: Color<br>    }<br><br>    struct Tertiary {<br>      var color: Color<br>    }<br>  }<br>}</pre><p>Another possible extension to the Theme could be done for our custom views. Imagine we have a settings list in our app that should change its item appearances based on the used theme. To support this, we could add a Theme.Settings.Item type that holds the themed properties of a SettingsItem view.</p><pre>struct Theme {<br>  ...<br>  var settings: Settings<br>  <br>  struct Settings {<br>    var item: Item<br>     <br>    struct Item {<br>      var backgroundColor: Color<br>    }<br>  }<br>}</pre><p>If our settings item is built as a simple view, we can pull out the needed theme directly in our SettingsItem view with @Environment(\.theme.settings.item) private var theme.</p><h3>Summary</h3><p>By leveraging the SwiftUI environment we gain a simple way of communicating our theme down the view hierarchy. It is possible to change the theme dynamically and we are still able to specify the behaviour for certain areas of our applications. 
For further reading, I highly recommend the excellent posts by <a href="https://movingparts.io/articles">moving parts</a> on styling³ and composability⁴.</p><p>[1] Exploring SwiftUI’s Button styles — <a href="https://www.fivestars.blog/articles/button-styles/">https://www.fivestars.blog/articles/button-styles/</a></p><p>[2] Environment Objects and SwiftUI Styles — <a href="https://www.fivestars.blog/articles/environment-objects-and-swiftui-styles/">https://www.fivestars.blog/articles/environment-objects-and-swiftui-styles/</a></p><p>[3] Styling Components in SwiftUI — <a href="https://movingparts.io/styling-components-in-swiftui">https://movingparts.io/styling-components-in-swiftui</a></p><p>[4] Composable Styles in SwiftUI — <a href="https://movingparts.io/composable-styles-in-swiftui">https://movingparts.io/composable-styles-in-swiftui</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=412b0221c8cf" width="1" height="1" alt=""><hr><p><a href="https://medium.com/intive-developers/theming-swiftui-applications-412b0221c8cf">Theming SwiftUI applications</a> was originally published in <a href="https://medium.com/intive-developers">intive Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Smooth Transition to Flutter: Essential Guidelines for Android/iOS Developers]]></title>
            <link>https://medium.com/intive-developers/smooth-transition-to-flutter-essential-guidelines-for-android-ios-developers-c09168940ba7?source=rss----f34f16bef773---4</link>
            <guid isPermaLink="false">https://medium.com/p/c09168940ba7</guid>
            <category><![CDATA[intive]]></category>
            <category><![CDATA[mobile-app-development]]></category>
            <category><![CDATA[flutter]]></category>
            <category><![CDATA[android]]></category>
            <category><![CDATA[ios]]></category>
            <dc:creator><![CDATA[intive]]></dc:creator>
            <pubDate>Thu, 03 Aug 2023 18:11:16 GMT</pubDate>
            <atom:updated>2023-08-03T18:11:16.618Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/900/1*_JOUpHDf7PlueUBj3yDgRA.png" /></figure><h4><strong>By Francisco Adrián Llaryora, Software Engineer at intive</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*n7ou1ZqD1EJkuLvmZSpMjg.png" /></figure><p>In this article, I will explore the transition from native development on Android or iOS to the Flutter platform, equipping mobile developers with essential insights. My focus is to provide a few guidelines for a seamless shift, answering 8 key questions that will help you navigate the process effectively.</p><p>These questions will serve as a starting point to ensure that the mobile project addresses the stakeholders´ main concerns regarding the technology used.</p><h3><strong>1. What advantages does Flutter offer for developing cross-platform applications?</strong></h3><p>Flutter is an open-source framework developed by Google, which enables the creation of visually appealing, cross-platform applications for Android and iOS using a single codebase. Its main goal is to streamline mobile app development, allowing developers to build one app that functions consistently on both operating systems. The “write once, run anywhere” approach eliminates the need for writing separate code for each platform.</p><p>One of the significant advantages of Flutter is the utilization of Dart, a client-optimized language for developing fast apps across multiple platforms. Designed as a better alternative to JavaScript in 2011, Dart offers a smooth learning curve and helps developers overcome common errors encountered in JavaScript. Targeting multi-platform can help reduce development time and cost.</p><h3><strong>2. How can Flutter ensure a seamless user experience for a mobile app?</strong></h3><p>Let me explain the fundamental building block to construct the user interface in Flutter. 
A widget is a description of a part of the app’s user interface, such as buttons, text fields, images, and more. Widgets in Flutter are declarative, meaning that the user interface is described by composing widgets. Instead of making direct changes to the UI, the structure and appearance of the interface are defined by building a widget tree. Developers describe how their UI should look based on the current state of the app.</p><p>Developers can easily integrate animation and motion support into their apps, including motion effects, transitions, and gestures. Moreover, Flutter includes sets of predesigned widgets based on Material Design for Android and Cupertino aesthetics for iOS.</p><p>These components and widgets simplify the process of designing visually appealing and functional interfaces, reducing the need to rebuild them from scratch.</p><h3><strong>3. Is Flutter scalable to accommodate increasing user demand and potential growth?</strong></h3><p>Flutter is considered scalable because, as we have seen in previous points, it supports code reusability and cross-platform development. Furthermore, the framework was designed to enable the creation of smooth and responsive user interfaces.</p><p>In addition, to render the widgets performantly, it incorporates a rendering runtime engine: on some devices it uses Skia, while on others it employs Impeller. Views can be tested quickly on iOS and Android alike, ensuring their proper functioning across the different engines while reducing the reliance on emulators for testing purposes. By leveraging this, significant cost savings can be achieved on instrumentation testing in pipelines.</p><p>However, there are no silver bullets here. A solid foundation of scalability depends on the app’s architecture, backend infrastructure, and proper implementation of scalability strategies.</p><h3><strong>4. 
How does Flutter enable integration with other systems or services?</strong></h3><p>Integration with other services is possible given the support for accessing device-specific functionality, REST APIs, databases, and cloud platforms. This is possible thanks to an active community of developers and contributors.</p><h3><strong>5. What analytics capabilities does Flutter offer for tracking user behavior and gathering actionable insights?</strong></h3><p>Providing the user with a wonderful customer journey is no easy task. Similar to other mobile applications, a Flutter app can track custom events, user conversions, and crash reports. It can send user data to your backend infrastructure or share the workload with Firebase or Dynatrace. When you don’t know what the customer wants, you can perform an A/B test using your backend or third-party services.</p><h3><strong>6. How easy is it to deploy updates and perform maintenance on the app using Flutter?</strong></h3><p>To keep the application on track, changes should be planned. Inexperience can lead to the creation of technical debt that will require additional effort to cope with in the future.</p><p>In the first place, all apps must define the update and rating flow for the Android and iOS stores. To achieve this, it is necessary to review the store policies. In addition, the update flow should define when an app version is considered expired, meaning that the backend will no longer support it.</p><p>Secondly, both the Play Store and the App Store have a review flow that requires exploration of the app. When your application handles sensitive data or money transactions, you wouldn’t want to be in that predicament during the review flow. For this reason, creating an isolated user in production, with dummy data, for the store review flow does not seem so crazy. 
In addition to your front-end, the back end must deal with this eventual guest as well.</p><p>Working together, a team should create a solid pipeline for the finished tasks in the code repository. To clarify this, let me explain three concepts:</p><ul><li><strong>Continuous Integration</strong> (CI) is when the pipeline server runs the test and build commands every time someone pushes code to a feature branch.</li><li><strong>Continuous Delivery</strong> (CD) is when the pipeline server creates a new artifact/release after code is added to the release branch. Without Continuous Deployment, deployment remains a manual step.</li><li><strong>Continuous Deployment</strong> (CD) is when the pipeline server deploys the product automatically after code is added to the release branch.</li></ul><p>Moving on to the third point, stakeholders should support TDD (Test Driven Development) and BDD (Behavior Driven Development) practices on each feature. With that in place, continuous integration will play a crucial role in ensuring quality, saving time, and optimizing the budget.</p><p>Regarding Google’s Play Store and Apple’s App Store, a remarkable way to achieve CI and CD is to configure a GitHub Action. The tedious part lies in setting up code signing once and periodically configuring the API keys to make releases in CI and publish the app to the store. The first publish is always manual and not via API. Although this task can be long, it is well worth the effort.</p><h3><strong>7. Is Flutter cost-effective while still meeting the desired functionality?</strong></h3><p>Flutter is a cost-effective option for medium and small projects, whereas native development is more suitable for larger projects.</p><p>When your app experiences significant growth, it will be compelled to migrate to a native platform. From a budget point of view, migration is the worst scenario. 
In the case of a migration, compared to starting from scratch in a native environment, the initial investment is largely lost and potential additional expenses come on top.</p><h3><strong>8. When will Flutter not be the best choice?</strong></h3><p>All in all, the decision to choose Flutter or native development depends on the specific requirements, constraints, and goals of your application. There are a few instances when Flutter will not be the best choice:</p><ul><li>When your app cannot wait for platform-specific features.</li><li>When your app already has a native code base and rewriting the code is not a straightforward solution.</li><li>When your app has tight size restrictions on the artifacts.</li><li>When, looking at the medium term, we plan for the project to be large.</li></ul><p>And so, I bid you farewell with gratitude for your time and attention. I hope I have helped you decide whether or not Flutter is the best option for your project.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c09168940ba7" width="1" height="1" alt=""><hr><p><a href="https://medium.com/intive-developers/smooth-transition-to-flutter-essential-guidelines-for-android-ios-developers-c09168940ba7">Smooth Transition to Flutter: Essential Guidelines for Android/iOS Developers</a> was originally published in <a href="https://medium.com/intive-developers">intive Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Microservices: High quality code development — Going into practice.]]></title>
            <link>https://medium.com/intive-developers/microservices-high-quality-code-development-going-into-practice-b468ea5b1a18?source=rss----f34f16bef773---4</link>
            <guid isPermaLink="false">https://medium.com/p/b468ea5b1a18</guid>
            <category><![CDATA[intive]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[testing]]></category>
            <dc:creator><![CDATA[intive]]></dc:creator>
            <pubDate>Thu, 08 Jun 2023 13:39:47 GMT</pubDate>
            <atom:updated>2023-06-23T18:39:41.653Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/900/1*GPminE2-be_dptflcDNmcA.png" /></figure><h3>Microservices: High quality code development — Going into practice.</h3><h4>By Daniel Perazza, Emilio Gerbino and Nicolás Quintana</h4><h3>Introduction:</h3><p>The purpose of this paper is to present a practical approach of combining two different techniques: <strong>Semantic Testing</strong>, a concept that aims to focus on what to validate over how to implement the validations; and <strong>Mutation Testing</strong>, a technique intended to evaluate and improve the quality of unit tests, and in consequence, the overall quality of the code.</p><p>The theoretical aspects of these topics were covered in detail in a previous article: “Microservices: High Quality Code development — The theory”. This article will put the concepts into practice with a high level of detail.</p><h4>A practical case</h4><p>Let’s build a simple and small microservice. This will be the base to explain how to combine Unit Tests, mutation and semantic testing.</p><p>As described, “Calculator” is a simple microservice that allows performing math operations, and it will be used for practical purposes in the next sections.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xij2kJxQNnaxFnBv84rCAg.png" /></figure><p>It is implemented in javascript, express with node, using Jest, Striker and cucumber for testing. 
(A link to the code example repository can be found in the Annexes section.)</p><h4>Basic concepts</h4><p><strong>Unit Tests</strong></p><p>For example, functions considered as units:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8zRPLKlPtez56n0B_x4g8A.png" /></figure><p>Unit test example:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*REv01kmjQt2NXtlYRAaFhg.png" /></figure><p>Continuing with the examples, for the <strong>average</strong> function, the <strong>add</strong> function is mocked in order to focus only on the logic of the function under test:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2bOMw36OKVC6RbzCKLuz5g.png" /></figure><p><strong>Component Tests</strong></p><p>The <strong>average</strong> function above could be considered a component, since it is composed of other functions, such as <strong>add</strong> and <strong>divide</strong>. The component test is presented below. It tests the main piece of code and its dependency as a single piece, that is, as a component.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1013/1*ugzpFcwESjZXs0VWRgWIAg.png" /></figure><p>Another example. 
In this case, the REST endpoint /add is considered a component.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/834/1*AWYZl1K6POkLUMuaCMilaQ.png" /></figure><p>This uses other units of code, but it is tested as a single piece.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KGDzQRHMraZkeyzoie08Jg.png" /></figure><p><strong>Integration tests</strong></p><p>This test works on the very same REST endpoint /<strong>add</strong> from the previous example, but here the difference lies in validating how the code is integrated with a server application.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YlNKje_FGzY44nwhpDNOPw.png" /></figure><p><strong>E2E tests</strong></p><p>In this case, the add operation is tested against a pre-production environment. This test checks the functionality and also non-functional elements, for example an API gateway, a firewall, or a load balancer in front of the microservices, in this particular example hosted at <a href="https://mycalculator-preprod.com/">https://mycalculator-preprod.com/</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*u0Z9kpJSNHvLfTT4stoODw.png" /></figure><ul><li>The end-point value is just an arbitrary example, not a real one.</li></ul><h3>Mutation testing</h3><p>To better illustrate the benefits of mutation testing, a new functionality will be added to the calculator.</p><p>This functionality aims to solve quadratic equations using the following formula:</p><p>Given an equation of the form:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/457/1*3Ry6NIYALfbY_A67UeGrsw.png" /></figure><p>The solution can be calculated as:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/308/1*LS9LydxAZbL6cNFSLxbRdw.png" /></figure><p>where:</p><ul><li>a, b, and c are real numeric constants.</li><li>b² − 4ac is called the Discriminant of the equation.</li></ul><p>Bear in mind that:</p><ul><li>The solution to the equation presents two possible variations depending on the discriminant square root value.</li><li>The solution can only be found if a is not 0; otherwise, it would not be a quadratic equation, or, in other words, since the resolution formula divides by 2 times a, we cannot divide by zero.</li><li>If the discriminant is a negative value, then the solutions are imaginary numbers.</li></ul><p>With all this taken into account, the functionality can be expressed as the following function:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*397pVjA6ePyYHhRXk6QlCw.png" /></figure><p>As stated before, a set of unit tests needs to be provided in order to evaluate the function and ensure that it can solve all particular cases, and return an error whenever the parameters are not valid.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OoE-5YfkvS9WKGxadW3kow.png" /></figure><p>The code coverage that can be extracted from this suite of tests can be found in the following image:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Jsh8YAMqVaMwl9e4pA8BoQ.png" /></figure><p>As expected, the coverage shows that the function is 100% tested; however, it actually only shows that all possible paths inside the function were executed, which is why coverage cannot be used to measure quality.</p><p>In order to solve the question “how do we know that this function is fully and correctly tested?”, a mutation testing framework can be introduced into the stack.</p><p>In this instance, Stryker Mutator will be used, but others could serve as well, depending on factors like the languages they support, mutation operators, ease of use, etc.</p><p>Stryker Mutator is a well-established mutation framework that supports multiple languages like JavaScript, TypeScript, C# and Scala and testing frameworks like Jest, 
Cucumber, Mocha and Karma. It runs fast, offers over 30 possible mutations, produces very useful reports and is open source.</p><p>To add Stryker Mutator to the mix, follow these steps:</p><ol><li>Run the command <strong>npm install -g stryker-cli</strong> to install the Stryker CLI.</li><li>Run the command <strong>stryker init</strong> to start the initialization prompt.</li><li>Stryker will ask for information about the project, like which package manager is used and which library or framework CLI is used (angular-cli, create-react-app, none, etc.). Follow the prompt to finish the setup. At the end, Stryker will create a stryker.config.json file with your setup configuration.</li><li>Run the command <strong>stryker run</strong>. This may take several minutes depending on the project.</li></ol><p>This last command runs Stryker against your tests. Stryker reports each step of the execution and indicates any failures that occur.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*lzFPkqyFcS4c_BCpTbwegw.png" /></figure><p>For the quadraticEquation function created before, the console will show something like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FqGUyC5Y1ad3D2uIQCtSig.png" /></figure><p>Focusing only on the quadraticEquation.js file, Stryker reports that 37 mutations were made and 28 of them were killed by the test suite, giving a mutation score of 75.68%. 
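For the analysis that follows, it helps to have the function in front of you. The article shows the implementation only as a screenshot, so the following is merely a sketch reconstructed from the description above; the function name matches the screenshots, but the exact return shape is an assumption:

```javascript
// Sketch of the quadratic solver under test (return shape is an assumption;
// the article only shows the real implementation as an image).
function quadraticEquation(a, b, c) {
  if (a === 0) {
    // not a quadratic equation: the formula would divide by zero
    throw new Error('Invalid Quadratic Equation — Cannot divide by zero');
  }
  const discriminant = b * b - 4 * a * c;
  if (discriminant < 0) {
    // negative discriminant: the two solutions are imaginary numbers
    const real = -b / (2 * a);
    const imaginary = Math.sqrt(-discriminant) / (2 * a);
    return [`${real}+${imaginary}i`, `${real}-${imaginary}i`];
  }
  const sqrtD = Math.sqrt(discriminant);
  return [(-b + sqrtD) / (2 * a), (-b - sqrtD) / (2 * a)];
}

// x² − 3x + 2 = 0 factors as (x − 1)(x − 2), so the roots are 2 and 1
console.log(quadraticEquation(1, -3, 2)); // [ 2, 1 ]
```

All the mutants discussed below are injected into code of this shape.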
This means that 9 mutants, i.e. potential bugs, were introduced that the test suite was not able to catch.</p><p>Fortunately, Stryker provides a more developer-friendly report in the reports/mutation/mutation.html file, which when served looks like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZTrn1n-cIuG0vwnFO_iK6w.png" /></figure><p>Using this report, the developer can navigate to the function being analyzed and find which mutants survived (as shown in the following picture).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*du0x-J_Ni2ttZPUtx7v34g.png" /></figure><p>Clicking on each red dot will reveal the mutant.</p><p><strong>Mutation Analysis</strong></p><p>One way of interpreting the survival of a mutant is to understand it as the failure of the test suite to verify or assert the code where the mutant is present.</p><p>Taking the first mutant as an example: since Stryker introduced a mutation that essentially erased the text inside the Error, it can be assumed that the test suite didn’t verify that the error thrown carried the original “Invalid Quadratic Equation — Cannot divide by zero” string.</p><p>Then, the next course of action would be to modify the test to take that mutation into account:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*03I_bWOq1t-l3UdutvbooQ.png" /></figure><p>The following mutation is related to arithmetic operators. 
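Arithmetic mutants are easy to reason about by hand. As a hypothetical illustration (the helper names here are assumptions, not the article's code): replacing the denominator 2 * a with 2 / a goes unnoticed whenever the tests only ever use a = 1:

```javascript
// Why a "*" -> "/" mutation in the denominator can survive when tests use a = 1:
const original = (a) => 2 * a; // denominator in the real formula
const mutated = (a) => 2 / a;  // denominator after the mutation

console.log(original(1) === mutated(1)); // true  -> the mutant survives
console.log(original(2) === mutated(2)); // false -> a non-trivial "a" kills it
```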
Stryker changed a multiplication into a division and this mutation survived, which is alarming, since changing the formula in any way should produce a miscalculated value of the quadratic equation for the same parameters used in the tests.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gFPnxlUiAtLI-6A5Svut2w.png" /></figure><p>Looked at closely, this mutation indicates that the formula behaves the same way when the multiplication is replaced by a division and the parameters do not change. This suggests that the example parameters used in the tests do not reflect all possible cases.</p><p>Sure enough, some of the tests use the value 1 for the first parameter “a”, which causes the mutation to survive: a number multiplied by 1 and a number divided by 1 are the same value, so if the value 1 is used for the parameter a, the tests cannot differentiate between the formula with a multiplication and the one with a division.</p><p>To solve this, we could either change the value used in the test to something less trivial, or add more test cases with different parameters.</p><p>This solution will clear some of the other mutants, and the same can be applied to the tests that cover the inside of the if (discriminant &lt; 0) conditional, resulting in the following changes.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/699/1*1I0_FMr05nZiqK4wg4JgUg.png" /></figure><p>Finally, there is one more mutant that survives. This mutant changes a less-than operator into a less-or-equal operator. 
This indicates that our tests haven’t checked the equality case, i.e. what should happen when the discriminant is equal to 0.</p><p>Once more, Stryker is helping to identify possible flaws in the tests: in spite of having a code coverage of 100%, the suite has missed a crucial case.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JooRPTvq8r1UY8r07_pusw.png" /></figure><p>To solve this final mutant, it is enough to add a test case where the discriminant is 0.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LF4AVe7JSoxWQCEaZT2-PA.png" /></figure><p>Running Stryker once more reveals the following score:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*X20FxJdfDGlwI6z6aEWKGg.png" /></figure><p>The quality of the tests for the quadraticEquation function has been improved, and the function is now fully and correctly tested.</p><p>As an exercise, the reader is encouraged to check out the project source code, where more examples of mutation testing can be conducted over other functions, like the “lineIntestects” function included in the utils.js file. The source code link can be found in the “Annexes” section of this article.</p><h3>Semantic testing</h3><p>There are different tools that can be used for writing semantic tests. For this practical example, Cucumber is the chosen framework.</p><p><strong>Tools &amp; frameworks</strong></p><p><strong>Cucumber</strong> defines itself as an open source tool that tests business-readable specifications against any code on any modern development stack, and calls itself the world’s #1 tool for Behavior-Driven Development.</p><p><strong>Chai</strong> is the library used for assertions. 
It makes testing much easier by providing a large set of assertions that can run against your code.</p><p><strong>Setup</strong></p><p>For this practical case, these are the steps to follow:</p><ol><li>npm install --save-dev @cucumber/cucumber chai</li><li>In the package.json file, add the command to run the Cucumber tests</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/941/1*qVqsKif7o6NB37HMPdClOg.png" /></figure><p>3. Add the cucumber.js file to configure Cucumber. The highlighted lines tell Cucumber that the semantic tests are in the <strong>feature</strong> folder and end with <strong>.feature</strong>, and that the .feature files will require the code in <strong>step-definitions/**/*.js</strong> to implement the tests.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/912/1*X1gw_eJ4HE8tGlQyN19JKg.png" /></figure><p><strong>The semantic test</strong></p><p>So, finally, this is the desired result: a test that is easy to understand and that verifies functionality or behavior regardless of the test implementation:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3WcbQfdTSSZo-Q8NR800yA.png" /></figure><p>Behind the scenes, its corresponding implementation:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vEKHQuF5rfRZFEOWlswbNg.png" /></figure><p>After running the semantic test, the following report is produced:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tfFSA3OhppF47F4GWu51Og.png" /></figure><p>The image below shows a report with more features and scenarios. The green pie charts at 100% show that all features and scenarios ran successfully, which is great. 
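For reference, the cucumber.js configuration described in step 3 (shown above only as a screenshot) might look roughly like the sketch below; the exact keys depend on the Cucumber version being used, so treat the shape as an assumption:

```javascript
// cucumber.js — a sketch of the configuration described in step 3
// (folder names follow the article; exact keys depend on the Cucumber version)
module.exports = {
  default: {
    paths: ['feature/**/*.feature'],       // where the semantic tests live
    require: ['step-definitions/**/*.js'], // code that implements the steps
  },
};
```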
If the semantic concepts are well applied, a report like this gives a very accurate picture of the health of our product.</p><p>Besides, in the scenarios <strong>Divide two integers</strong> and <strong>Get the average of array of integers</strong>, it can be seen that some steps are shared between the two, meaning that steps and implementations can be reused, reducing effort and time. The fact that many steps are shared becomes more evident in semantic tests, since natural language and a business abstraction are used.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FuZydzipQMDvtfscN6ajjw.png" /></figure><p>The <strong>Annexes</strong> section contains a link to the repository with the examples used throughout this document; more examples can be found there to view or run.</p><h3>Pros and Cons</h3><p><strong>Mutation Testing</strong></p><p>As shown, the application of mutation testing enables developers to detect potential bugs and ensures that the tests are created with quality in mind.</p><p>It also allows developers to understand and find cases and ambiguities in the source code that were not checked before.</p><p>And finally, mutation testing also makes sure that by eliminating small mutants, bigger, more costly and risky bugs are eliminated as well.</p><p>As for downsides, the two major factors that can affect the application of this method are:</p><ul><li><strong>Cost of producing and evaluating mutants:</strong> Depending on the number of mutants generated, and the time it takes to run the test suites, the whole process can become extremely time consuming. Also, the number of mutants generated depends on the choice of mutation operators, so the more operators developers wish to use, the more mutants will be generated. A test suite that takes about 5 seconds to run, on a code base that generates 2500 mutants, can take approximately 4 hours. 
This issue can be addressed by putting in place optimization techniques that are beyond the scope of this article (and that most frameworks implement by default).</li><li><strong>Cost of equivalent mutants:</strong> Equivalent mutants are changes made to the original code that differ only in syntax but not in semantics, meaning the generated mutant is a change that does the same as the original code. These mutants cannot be detected; they create false surviving mutants that can influence the final mutation score and also represent a problem for developers when interpreting the results and trying to eliminate mutants.</li></ul><p><strong>Semantic Testing</strong></p><p>Let’s mention some pros of semantic testing: it abstracts testing to the business domain without bothering with the code for those tests, plus it enforces quality by leaving coding issues behind.</p><p>Another benefit is test understanding. Semantic testing, as a first step towards semantic monitoring, is a technique that focuses on finding a standard way of testing software. This means focusing on “what” the system should be doing in a clear, easy-to-understand way. This is the key concept to ensure the complete behavior of the whole digital capability.</p><p>The last advantage is improved reporting. Because of the way semantic testing works, it makes the reports simpler and richer.</p><p>However, a disadvantage of this approach is that extra complexity is introduced. In the worst-case scenario, without a deep understanding of the underlying tools, tests may become unmaintainable and unscalable.</p><h3>Final conclusion</h3><p>At this point, a clear idea of what mutation testing and semantic testing are should be on the table. 
The key point is to understand that combining both techniques secures the code’s quality, guaranteeing that the developed microservice not only complies with the intended behavior but has also been correctly built internally.</p><p>This is the key point of this article. Both techniques are powerful and can be used in a standalone fashion; however, combining them boosts the resulting quality. This produces true confidence and a sense of security in the delivered microservice. Mixing both approaches also brings other benefits, like enhanced monitoring with health-check alarms from semantic monitoring and meaningful code coverage that prevents subtle logic flaws or weaknesses from mutation testing, creating a well-built, solid software component.</p><p>As final tips, remember that:</p><ul><li>The <strong>code coverage</strong> metric is a MUST: it tells how the tests reach the code, but it is not sufficient to make sure quality is attained. Tests MUST be good and comprehensive; a way to achieve this is <strong>mutation testing</strong> <strong>plus</strong> the <strong>right assertions</strong>. This metric also tells how good the regression test suite is.</li><li>Implementing a <strong>good test strategy</strong> is essential for quality. The goal is to avoid test overlap. For instance, a good plan could be to cover most of the code with component tests, cover corner cases with unit tests, cover the acceptance criteria of the main functionalities with E2E tests, and check integration points either with E2E tests or with component ones.</li><li>Use <strong>semantic tests</strong> when it is more suitable to focus on quality over its implementation. They can be used for E2E, component or integration tests. 
The coverage would be the same, but the tests will be better.</li><li>Extend the <strong>semantic concepts to monitoring</strong> in order to join quality and observability in the same picture.</li><li>Implement a Continuous Integration (CI) pipeline to execute all these tools and strategies in an automatic, scalable and systematic way.</li></ul><p>For a concrete example of mixing these two concepts together, please refer to the source code link in the annexes.</p><p>We hope you have enjoyed reading this article.</p><h3>Annexes</h3><p>“<a href="https://medium.com/intive-developers/microservices-high-quality-code-development-the-theory-dea2a32dbac5">Microservices: High Quality Code development — The theory</a>” — by Daniel Perazza, Emilio Gerbino and Nicolas Quintana.</p><p>“Stryker Mutator” Framework:</p><p>Authors list: <a href="https://github.com/orgs/stryker-mutator/people">https://github.com/orgs/stryker-mutator/people</a> / Website: <a href="https://stryker-mutator.io/">https://stryker-mutator.io/</a></p><p>“Assessing Test Quality” — by David Schuler</p><p>“The Practical Test Pyramid” — <a href="https://martinfowler.com/articles/practical-test-pyramid.html">https://martinfowler.com/articles/practical-test-pyramid.html</a></p><p>Example repository: <a href="https://github.com/dperazza/paper">https://github.com/dperazza/paper</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b468ea5b1a18" width="1" height="1" alt=""><hr><p><a href="https://medium.com/intive-developers/microservices-high-quality-code-development-going-into-practice-b468ea5b1a18">Microservices: High quality code development — Going into practice.</a> was originally published in <a href="https://medium.com/intive-developers">intive Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Microservices: High quality code development — The Theory]]></title>
            <link>https://medium.com/intive-developers/microservices-high-quality-code-development-the-theory-dea2a32dbac5?source=rss----f34f16bef773---4</link>
            <guid isPermaLink="false">https://medium.com/p/dea2a32dbac5</guid>
            <category><![CDATA[testing]]></category>
            <category><![CDATA[intive]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[intive]]></dc:creator>
            <pubDate>Tue, 02 May 2023 19:02:11 GMT</pubDate>
            <atom:updated>2023-06-23T18:40:22.436Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/900/1*sHeB8YmuSsg38yE8VgJjwg.png" /></figure><h3>Microservices: High quality code development — The Theory</h3><h4>By Daniel Perazza, Emilio Gerbino and Nicolás Quintana</h4><h3>Introduction</h3><p>Only what is measured can be improved, but, are you sure you have the right measure? How can you tell your microservice is doing the right thing? Moreover, let’s say you have, for example, 87% unit test coverage… is it meaningful coverage, or is it just coverage? How can you tell the difference, and how can you prove your development has the appropriate level of quality?</p><p>The purpose of this article is to address the questions above by presenting a theoretical approach of combining two different techniques: Semantic Testing and Mutation Testing.</p><p>Before you go on reading this article, make sure you are familiar with the following concepts: Unit testing and microservices, as they are the basis from where we’ll start building everything else.</p><p>Hope you enjoy the journey.</p><h3>Addressing issues</h3><p>Scenario Facing a microservices architecture has clearly several advantages but also presents some challenges. One of them is to guarantee the quality of each microservice (or small set of microservices), with a clear purpose: to make the whole integration smoother. Otherwise, a lot of effort would be required to make all microservices work together seamlessly, and of course, this may take a lot of time.</p><p>So, a key concept is to assure any digital capability (defined as a small set of microservices with a single functional purpose) has the exactly expected behavior. 
This is particularly important, as it will simplify a lot of the troubleshooting required when putting the whole system together before going into production, or, when already in production, when quickly diagnosing and fixing any issues.</p><p>One traditional technique microservices developers use is to rely on a good set of unit tests in order to ensure the quality of the delivered component. Unit tests are great, but they are just the beginning: the building block that is necessary, but not enough, to build everything else.</p><p>The very first problem is how to differentiate mere coverage from “meaningful” coverage. In other words, this means not just covering lines of code, but making sure to write the appropriate set of unit tests that cover all real conditions. Failing on this topic may lead to a false sense of confidence in the code while leaving room for potential problems that will eventually arise. Only high-quality UT (unit tests) will help to solve this issue.</p><p>The second problem is that, although a digital capability may have a good, high-quality set of UT, it may still be hard to integrate it with the rest of the microservices of the whole system. How can that happen? UT will guarantee the quality of very small pieces of code, but we still need to guarantee the behavior of the entire capability before going into integration with all the others. In other words, UT will separately certify the pieces, but we need to make sure that, when putting them together, the resulting behavior is also correct. Failing on this topic will definitely lead to rework.</p><p>Let’s go deeper into these concepts, but first some basic theory.</p><h3>Some basics</h3><p>The goal of the software development process is to create software that meets the domain requirements and also contains no defects. 
However, achieving this goal is almost impossible, because most of the time not all requirements are known from the beginning and, moreover, software can become so complex that there is no way to be absolutely sure it contains no defects.</p><p>Quality in microservices is the combination of several factors, but the most important ones are: a team or organization willing to work on this topic, an understanding of testing time and effort, building testable applications, <strong>the theory and principles of testing</strong>, and tools such as static analysis, reports, CI/CD, mutation testing and test coverage.</p><h4>Theory and principles of testing</h4><p>There are many different types of tests. There is also an overlap between them that can be confusing when going into implementation. The following graph summarizes them in 4 categories or types, as described in the test pyramid below:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/893/1*t8t6C3PjR0FuTJs6waRt3Q.png" /></figure><ul><li><strong>Unit tests:</strong> UT ensure that a function, an object or a small piece of code does what is expected. The idea of a <strong>unit</strong> refers to a piece of code that has a unique purpose with clear boundaries. Unit tests operate at the most basic level and scrutinize the behavior of those small, well-defined pieces of code, exercising the software against specific inputs and comparing the results. Dependencies should be mocked. They provide early feedback, as they are fast and easy to implement.</li><li><strong>Component tests:</strong> They test several units of code working coordinately as a component. This kind of test ensures that the component behaves as expected. Dependencies outside the component should be mocked.</li><li><strong>Integration tests:</strong> They evaluate how all the parts work together. Some dependencies should be mocked, while others should remain in place to check integration. 
Basically, they are quite similar to component tests; the difference is that they focus on integration.</li></ul><p>Any given project may have both integration and component tests, OR just one of them, testing both the integration and the purpose of the component.</p><ul><li><strong>E2E tests: </strong>They test the whole system, similar to how a client would use the product. They validate functionalities and some non-functional requirements. All the pieces of software are exactly the same as the production ones.</li></ul><h4>Testing metrics</h4><p>Developers apply multiple techniques that allow them to ensure that the code behaves the way it is supposed to. Therefore, it becomes imperative that developers create good-quality unit tests to obtain these benefits, considering that the lack of well-designed, efficient test suites can be as harmful as not having tests at all.</p><p>In order to do this, developers need to take into account metrics that help them track and get a sense of how well the software is tested, and this is where code coverage comes into play.</p><p>Code coverage is a metric that indicates to what extent the source code is executed by the test suites, and it is composed of several coverage sub-metrics that ensure that all statements, branches or conditionals, and paths are exercised when running the test suites. In other words, code coverage states how much of our code is actually reached by our tests.</p><p>Satisfying this metric, in the sense of achieving a high percentage of code coverage, does not guarantee detecting defects, nor can it establish their absence. Furthermore, the coverage metrics do not assess how well the results of the program are being checked. 
This might result in tests that trigger a defect but do not detect it.</p><p>Having a code coverage report of 100% only means that the tests have executed all the code; it tells us nothing about whether the code is correctly tested or doing what is intended.</p><p>So, although some correlation can be shown, this kind of metric cannot provide an accurate assessment of the quality of the checks performed to detect defects. It is still useful, though, since it can help developers identify those sections of the source code that are not being evaluated by their test suites.</p><p>The next section presents solutions to overcome these problems.</p><h3>Potential solutions</h3><p>As described in the previous section, UT are the base: something necessary but not enough to address the issues. The lack of UT is simply catastrophic, as UT provide the following benefits:</p><p>● Reducing the number of bugs that reach production.</p><p>● Encouraging a better, more modular code design, and enforcing the SRP (single responsibility principle).</p><p>● Allowing new features to be added without breaking existing ones.</p><p>● Supporting refactors, as they can be faced without major regressions.</p><p>● Reducing development costs through early problem detection that otherwise would have a larger impact in later stages.</p><p>● Helping code self-documentation, since they explicitly describe how a certain section of the software has to work.</p><p>So, this is great as a first step. But one step further would be to introduce “Mutation Testing” and “Semantic Testing” to ensure both high quality at a fine grain and whole-system quality with regard to the digital capability being developed.</p><p>Let’s go deeper into these last two concepts.</p><h4>Mutation testing</h4><p>Mutation testing is a technique used to evaluate and improve the quality of unit tests and, in consequence, the quality of the code. 
This technique consists of mutating the source code by introducing artificial defects, called mutants. Then, this mutated software is evaluated against the existing tests in order to see if the alterations are detected.</p><p>A mutation is considered detected and killed when at least one test that passes the control run (against the original code) fails when run against the mutated code. On the other hand, if no test fails when run against the mutated code, then that mutant is considered undetected, or a surviving mutant, meaning that the test suite is unable to identify the defect and to differentiate the original code from the defective one.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/871/1*rzyQrQoGSxu9_k2CdLWMsA.png" /></figure><p>The defects created by mutation testing are provided by mutation operators. These are well-defined rules that describe how to change elements of a piece of software and aim at mimicking typical errors that programmers make. Usually, a mutation operator can be applied to the source code in multiple places, each leading to a new mutant.</p><p>The results of this testing process are summarized in what is called the mutation score, which provides a quantitative measure of a test suite’s quality. The mutation score is calculated as a percentage: the number of detected mutants divided by the total number of mutants.</p><p>It is important to note that mutation testing does not test the software directly; instead it tests the quality of the already existing tests and helps to improve them. The assumption is that tests that detect more mutations will also detect more potential defects. 
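The score definition above is straightforward to compute; as a small sketch (the numbers here reuse the 28-killed-out-of-37 example from the companion practice article):

```javascript
// Mutation score: detected (killed) mutants over total mutants, as a percentage
const mutationScore = (killed, total) => (killed / total) * 100;

// e.g. 28 mutants killed out of 37 generated
console.log(mutationScore(28, 37).toFixed(2) + '%'); // 75.68%
```

Keep in mind that the score rates the test suite, not the program itself: the assumption that suites detecting more mutants also detect more real defects is what links a high score to higher quality.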
This is based on two hypotheses, called “the competent programmer hypothesis” and “the coupling effect”: the first states that programmers know the correct software well enough that they tend to develop software close to it, deviating from it only through small defects; the second indicates that a test suite able to detect these small deviations from the correct version is sensitive enough to also detect more complex ones.</p><h4>Semantic tests</h4><p>When implementing tests, there are many suitable frameworks or tools to choose from. Some tools allow writing tests in natural language, while others require a programming language, or even both.</p><p>Let’s imagine just paying full attention to writing the tests, in natural language, without coding, without dealing with implementation details. Just focusing on the desired quality in a very easy-to-understand fashion. This is actually possible: by using <strong>semantic tests.</strong></p><p>Semantic tests enable developers, testers and other stakeholders to write and validate:</p><p>● Acceptance criteria cases.</p><p>● Any kind of requirement as a test in natural language.</p><p>● Any other tests defined in a test plan or test suite.</p><p>● BDD (Behavior-Driven Development) scenarios.</p><p>So, what is a semantic test exactly?</p><p>First of all, it is a concept that aims to focus on what to validate over how to implement that validation.</p><p>Secondly, it is an implementation technique that allows you to see a test as a view, plus its implementation. 
There are tools, frameworks and applications in the software world that allow this, and even your own home-grown version could be used.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/789/1*JjMPLv6wvgCcI5bGr7LIKQ.png" /></figure><p>To summarize the key idea of semantic tests, and to extend this concept to other cases, let’s say that there is a business-readable test, or set of steps, that validates and verifies some functionality or acceptance criteria in order to assure quality. Behind this natural, human-friendly test, there is the necessary machinery to implement the validation and verification.</p><p><strong>Semantic Monitoring</strong></p><p>Nowadays, with observability tools, this semantic concept can be extended to monitoring systems, adding one more feature to our dashboards, alerts, etc. These tools provide the mechanism to define a test, or set of steps, in a view with all associated metrics and actions, as well as the capability to implement these steps.</p><p>So anybody in charge of monitoring applications, microservices and other components, and already familiar with the observability tools, can take advantage of this and apply the semantic concept to further enrich the monitoring ecosystem.</p><h3>Adding concepts together</h3><p>Mix everything together and the magic is done! Surely not, but by combining them the results are much better.</p><p><strong>Mutation concepts</strong> make tests more accurate, sensitive and reliable; furthermore, they can be applied at any level of the test pyramid. However, for simplicity and time reasons it is enough to apply them just to unit tests. Combining mutation concepts and good assertions will produce good-quality tests. In other words, implementing mutation testing will transform plain, simple coverage into “meaningful” coverage, with all its associated advantages.</p><p><strong>Semantic tests</strong> allow us to focus on validating and verifying functionality regardless of its implementation. 
While mutation testing is great for certifying the quality of the UT, semantic testing is great for certifying the quality of the whole component or microservice, and it does so in a very easy-to-understand, human-friendly manner that everybody can follow.</p><p><strong>Semantic Monitoring</strong> is the next natural step. It multiplies the benefits of the existing monitoring ecosystem since not only is information received from the system, but feedback is also obtained by exercising the functionality of the monitored component through semantic tests.</p><p>When both approaches are combined in a single project, quality is significantly boosted. Semantic tests would be used to cover most of the functionality and its related underlying code, while mutation testing would ensure truly reliable UT coverage. On the contrary, having just one of the two approaches in place would only produce half of the picture, yet with both together quality gets heavily reinforced.</p><p>Ok, enough theory, let’s move to the practice :)</p><h3>Pros and Cons</h3><h4>Mutation Testing</h4><p>This type of test enables developers to detect potential bugs and ensures that the tests are created with quality in mind. It also allows developers to understand and find cases and ambiguities in the source code that were not checked before.</p><p>And finally, mutation testing also makes sure that by eliminating small mutants, bigger, more costly and risky bugs are eliminated as well.</p><p>On the other hand, the major factors that can affect the application of this method are:</p><ul><li><strong>Cost of producing and evaluating mutants:</strong> Depending on the number of mutants generated, and the time it takes to run the test suites, the whole process can become extremely time consuming. Also, the number of mutants generated depends on the choice of mutation operators, so the more operators developers wish to use, the more mutants will be generated. 
A test suite that takes about 5 seconds to run, against a code base that generates 2,500 mutants, can take approximately 4 hours (2,500 mutants × 5 s ≈ 3.5 hours of pure test time, plus tooling overhead).</li><li><strong>Cost of equivalent mutants:</strong> Equivalent mutants are changes made to the original code that differ only in syntax but not in semantics, meaning the generated mutant behaves exactly like the original code. These mutants cannot be automatically detected, so they show up as false surviving mutants that skew the final mutation score and make it harder for developers to interpret the results and eliminate genuine survivors.</li></ul><h4>Semantic Testing</h4><p>Let’s mention some pros of semantic testing: it abstracts testing to the business domain without bothering with the code for those tests, and it enforces quality by leaving coding concerns out of the tests themselves.</p><p>Another benefit is test understanding. Semantic testing, as a first step towards semantic monitoring, is a technique that focuses on finding a standard way of testing software. This means focusing on “what” the system should be doing in a clear, easy-to-understand way. This is the key concept for ensuring the complete behavior of the whole digital capability.</p><p>The last advantage is reporting improvements. Because of the way semantic testing works, it makes reports easier to read and richer.</p><p>However, a disadvantage of this approach is that extra complexity is introduced.
In the worst scenario, without a deep understanding of the underlying tools, tests may become unmaintainable and unscalable.</p><h3>Going into practice</h3><p>Glad you made it this far in the article! To find out how to move into practice, please read our next article: <strong>“Microservices: High quality code development — Going into practice”</strong>; we are sure you will enjoy it :)</p><h4>Annexes:</h4><ul><li><em>“Assessing Test Quality” </em>— by David Schuler</li><li><a href="https://martinfowler.com/articles/practical-test-pyramid.html"><em>“The Practical Test Pyramid”</em></a></li><li><em>“Microservices: High Quality Code development — Going into practice” </em>— by Daniel Perazza, Emilio Gerbino and Nicolás Quintana</li></ul><hr><p><a href="https://medium.com/intive-developers/microservices-high-quality-code-development-the-theory-dea2a32dbac5">Microservices: High quality code development — The Theory</a> was originally published in <a href="https://medium.com/intive-developers">intive Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Easy jump into TypeScript.]]></title>
            <link>https://medium.com/intive-developers/easy-jump-into-typescript-61d35ddadfea?source=rss----f34f16bef773---4</link>
            <guid isPermaLink="false">https://medium.com/p/61d35ddadfea</guid>
            <category><![CDATA[front-end-development]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[typescript]]></category>
            <category><![CDATA[web-development]]></category>
            <dc:creator><![CDATA[Aleksandra Martyna]]></dc:creator>
            <pubDate>Thu, 06 Apr 2023 13:39:43 GMT</pubDate>
            <atom:updated>2023-04-06T13:39:43.114Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/900/1*d2cfAQeiEiXNcxNnhomIQg.png" /></figure><h3>1. Introduction.</h3><p>TypeScript is a strongly-typed superset of JavaScript. Behind this vague term hides a vast amount of features added into JavaScript:</p><ul><li>Strong typing: TypeScript adds type annotations to JavaScript, making it easier to catch errors during development and improving the maintainability of code.</li><li>Enhanced IDE support: TypeScript offers better autocompletion, navigation, and refactoring tools in popular IDEs like Visual Studio Code.</li><li>Improved scalability: TypeScript supports modules, interfaces, and classes, making it easier to organize and scale large codebases.</li></ul><h3>2. Installation.</h3><p>We are going to install TypeScript and after that, we will jump right into code and explain piece by piece what is going on.</p><p>Make sure you have <a href="https://nodejs.org/">Node.js</a> installed. I’m using <a href="https://code.visualstudio.com/">VSCode</a> as my IDE. I will be using TypeScript and TS interchangeably.</p><p>How to run TypeScript from scratch:</p><ol><li>Create a folder of any name that will hold our project.</li><li>Navigate to your project directory and open terminal/cmd.</li><li>Run the following:</li></ol><pre>npm init -y<br>npm install typescript --save-dev<br>npx tsc --init</pre><p>In these three steps we have created the package.json file, installed TypeScript and created TypeScript configuration file tsconfig.json. 
Later on we will cover some of the options for that file, but for now let’s leave it with the default values.</p><p>Create an index.html file in your project root:</p><pre>&lt;!DOCTYPE html&gt;<br>&lt;html lang=&quot;en&quot;&gt;<br>  &lt;head&gt;<br>    &lt;meta charset=&quot;UTF-8&quot; /&gt;<br>    &lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;IE=edge&quot; /&gt;<br>    &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot; /&gt;<br>    &lt;title&gt;Learn typescript&lt;/title&gt;<br>  &lt;/head&gt;<br>  &lt;body&gt;<br>    &lt;div id=&quot;app&quot;&gt;&lt;/div&gt;<br>    &lt;script src=&quot;./index.js&quot;&gt;&lt;/script&gt;<br>  &lt;/body&gt;<br>&lt;/html&gt;</pre><p>As you can see we included the script file, but instead of a TypeScript file we included a JavaScript file. Browsers do not understand TypeScript; TypeScript files are compiled into JavaScript. This means that TS is backward compatible: it can be used alongside existing JavaScript code, so developers can gradually migrate their projects to TypeScript.</p><p>Create an index.ts file in your project root directory. We will build a small component that fetches user data. We will start from pure JS and gradually add TypeScript features into our code.</p><p>We will use the JSONPlaceholder API to fetch the data, and modern async/await syntax to consume the promise-based API. <a href="https://jsonplaceholder.typicode.com/users">https://jsonplaceholder.typicode.com/users</a></p><p>In your index.ts file:</p><pre>(async () =&gt; {<br>  let users;<br>  let url = &quot;https://jsonplaceholder.typicode.com/users&quot;;<br><br>  const data = await fetch(`${url}`);<br>  users = await data.json();<br>})();</pre><p>So, there is no TypeScript here, or is there?</p><p>Yes, it’s there. With all the juice.</p><h3>3.
Fundamentals.</h3><p>If you hover over the url variable you will see in the tooltip:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/946/1*vatXkXq_odR3XxTmw2dPOw.png" /></figure><p>The syntax in the tooltip is how you specify types:</p><pre>let url: string = &quot;https://jsonplaceholder.typicode.com/users/&quot;;         <br>//         ^ That&#39;s a type annotation.</pre><p>We didn’t do it and still TypeScript knows that the url is of type string. The same goes for the constant data: if hovered over, it will show that the type of data is Response, which is a built-in Fetch API interface representing the response to a request. This process is called <strong>type inference.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/748/1*AKZ4bwzi-aEnpjKv8Y42cA.png" /></figure><p>There are a few situations in which the TypeScript compiler will infer the type:</p><ul><li>Variables are initialized — once you hover over the users variable you will see that it holds the type <strong>any</strong>, because it was not initialized. Type <strong>any</strong> is like a wild card.
It literally means that the variable can be anything and you will not receive any errors.</li></ul><pre>// No error, TypeScript thinks it&#39;s fine:<br>let url: any = &quot;https://jsonplaceholder.typicode.com/users/&quot;;<br>url = 59023;<br><br>// Error: Type &#39;number&#39; is not assignable to type &#39;string&#39;.<br>let url: string = &quot;https://jsonplaceholder.typicode.com/users/&quot;;<br>url = 59023;<br></pre><ul><li>Default values are set for parameters.</li></ul><pre>function log(message = &quot;Hi from logger&quot; /* message is inferred as string */) {<br>  return message; <br>}</pre><ul><li>Function return types are determined.</li></ul><pre>function sayHello() {<br>  return &quot;Hi there&quot;;<br>}<br><br>const greet = sayHello(); // const greet: string</pre><h4>Back to the code.</h4><p>Let’s refactor our code and add a helper function that will return the correct url. We will also add <strong>type annotations</strong>. Personally, I think it is good practice to add <strong>type annotations</strong> anywhere you can, even if we could rely on type inference. It increases readability.</p><pre>// It&#39;s easier to read and understand the logic of the code, <br>// especially if someone else has written it.<br>// Instead of:<br>const ids = getIds();<br>// do:<br>const ids: Array&lt;number&gt; = getIds();<br>// With the code above I don&#39;t have to think much about whether ids is a random string,<br>// or whether there is more than one id, even if the naming suggests that there might <br>// be more than one id.<br>// At first glance I can see that this is an array<br>// that holds ids, which are numbers.<br><br>// There are two ways of describing an array type, e.g. 
<br>// Array&lt;number&gt; or number[]</pre><p>Our helper function should return the default url if we don’t pass an id.</p><pre>(async () =&gt; {<br>  let users: any;<br>  let url: string = getUrl(); <br><br>  const data: Response = await fetch(`${url}`);<br>  users = await data.json();<br>})();<br><br>function getUrl(<br>  id: string | null = null,<br>  url = &quot;https://jsonplaceholder.typicode.com/users/&quot;<br>): string {<br>  return id ? url + id : url;<br>}<br></pre><p>The getUrl function also accepts id as its first parameter. The syntax id: string | null = null means that id can be either a string or null, and if no argument is passed it defaults to null.</p><p>This way of writing <strong>type annotations</strong> is called a <strong>union type.</strong></p><p>A union type describes a value that can be one of several types. We use the pipe | to separate each type. It literally means “or”, so string | number | null is a value that is a string, a number or null.</p><pre>function getUrl(params): string {<br>  return ...;     //       ^ :string is how you define the return type<br>}</pre><p>But what is the type of users in our fetch function? At first users is of type <strong>any</strong>, but we can predict what the type of users will be. It will be an array of users, and a user consists of id, email, name, phone, username, website, company and address.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/752/1*wCIXIAzgeR5uyhY0tu2c9w.png" /></figure><p>The address also consists of its own properties, and so does the company.</p><p>There are two ways to define the shape or structure of a value, in our case the user object:</p><ul><li>interface</li><li>type</li></ul><p>An interface is a way to define a contract that a value must conform to.
It specifies a set of rules that an object must follow in order to be considered an instance of that interface.</p><p>An interface can define properties, methods, and index signatures, and can be extended or implemented by other interfaces or classes. It is defined using the interface keyword, followed by the name of the interface and the rules that the interface defines.</p><p>In our example we will write an interface for a user object.</p><pre>// We omit company and address for now<br>interface User {<br>  id: number;<br>  name: string;<br>  phone: string;<br>  username: string;<br>  website?: string;<br>}</pre><p>As you noticed, we put a ? after website. That means this field is optional, which resolves to:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/506/1*FH19HpYpEP_x7ysOfiUp1g.png" /></figure><p>whereas all the other properties must be defined. This somewhat resembles inheritance in object-oriented programming: interfaces can be extended, just as classes can.</p><p>Imagine that a user has some kind of permissions granted.
We will also use an interface to describe the data of permissions.</p><pre>interface Permissions {<br>  viewPage: boolean;<br>  createPage: boolean;<br>  deletePage: boolean;<br>  addUser: boolean;<br>}<br><br>interface User {<br>  id: number;<br>  name: string;<br>  phone: string;<br>  username: string;<br>  website?: string;<br>}</pre><p>I don’t have to type all those fields into the User interface; all I have to do is extend User:</p><pre>interface Permissions {<br>  viewPage: boolean;<br>  createPage: boolean;<br>  deletePage: boolean;<br>  addUser: boolean;<br>}<br><br>interface User extends Permissions {<br>  id: number;<br>  name: string;<br>  phone: string;<br>  username: string;<br>  website?: string;<br>}<br><br>let user: User;<br><br>// This way user has access to all the fields <br>// that are both in the Permissions and User interfaces.<br></pre><p>A type is a way to create an alias for a specific shape or structure of a value. It is defined using the type keyword, and can be used to create a new type from existing types, as well as to define new types from scratch.</p><pre><br>// Type can be very simple:<br>type Text = string;<br>const description: Text = &quot;It&#39;s a description&quot;;<br><br>// Type can be literal:<br>type City = &#39;Berlin&#39;;<br>const capitalOfGermany: City = &#39;Dubai&#39;;<br>// Would produce an error:<br>/* Type &#39;&quot;Dubai&quot;&#39; is not assignable to type &#39;&quot;Berlin&quot;&#39;. 
*/<br><br>// Or it can be more complex:<br>type Address = {<br>  city: &#39;Berlin&#39; | &#39;Dubai&#39; | &#39;London&#39;;<br>  geo: {<br>    lat: string;<br>    lng: string;<br>  };<br>  street: string;<br>  suite: string;<br>  zipcode: string;<br>}<br><br>type Company = {<br>  bs: string;<br>  catchPhrase: string;<br>  name?: string;<br>}<br><br>// We have types for company and address, let&#39;s put them into the User interface<br><br>interface Permissions {<br>  viewPage: boolean;<br>  createPage: boolean;<br>  deletePage: boolean;<br>  addUser: boolean;<br>}<br><br>interface User extends Permissions {<br>  id: number;<br>  name: string;<br>  phone: string;<br>  username: string;<br>  website?: string;<br>  company: Company;<br>  address: Address;<br>}</pre><p>Let’s add the correct types to the component:</p><pre>type Address = {<br>  city: &#39;Berlin&#39; | &#39;Dubai&#39; | &#39;London&#39;;<br>  geo: {<br>    lat: string;<br>    lng: string;<br>  };<br>  street: string;<br>  suite: string;<br>  zipcode: string;<br>}<br><br>type Company = {<br>  bs: string;<br>  catchPhrase: string;<br>  name?: string;<br>}<br><br>interface Permissions {<br>  viewPage: boolean;<br>  createPage: boolean;<br>  deletePage: boolean;<br>  addUser: boolean;<br>}<br><br>interface User extends Permissions {<br>  id: number;<br>  name: string;<br>  phone: string;<br>  username: string;<br>  website?: string;<br>  company: Company;<br>  address: Address;<br>}<br><br>(async () =&gt; {<br>  let users: User[];<br>  let url: string = getUrl(); <br><br>  const data: Response = await fetch(`${url}`);<br>  users = await data.json();<br>})();<br><br>function getUrl(<br>  id: string | null = null,<br>  url = &quot;https://jsonplaceholder.typicode.com/users/&quot;<br>): string {<br>  return id ? 
url + id : url;<br>}</pre><h4><strong>Modules</strong></h4><p>In the above example we put everything in one file.</p><blockquote><a href="https://www.tutorialsteacher.com/typescript/typescript-module">“TypeScript</a> provides modules and namespaces in order to prevent the default global scope of the code and also to organize and maintain a large code base.</blockquote><blockquote>Modules are a way to create a local scope in the file. So, all variables, classes, functions, etc. that are declared in a module are not accessible outside the module. A module can be created using the keyword export and a module can be used in another module using the keyword import.”</blockquote><p>Good practice here would be to split things up. Let’s do that. I will create a new file called <em>types.ts</em>, but you can name it any way you want. I will put all the types and interfaces that we have created into that file like this:</p><pre>// types.ts<br><br>interface Permissions {<br>  viewPage: boolean;<br>  createPage: boolean;<br>  deletePage: boolean;<br>  addUser: boolean;<br>}<br><br>export interface User extends Permissions {<br>  id: number;<br>  name: string;<br>  phone: string;<br>  username: string;<br>  website?: string;<br>}<br><br>type Address = {<br>  city: string;<br>  geo: {<br>    lat: string;<br>    lng: string;<br>  };<br>  street: string;<br>  suite: string;<br>  zipcode: string;<br>};</pre><p>The export keyword makes a declaration available for import in other files. I put an export only in front of User, making it available to import anywhere in the application. At the same time, having at least one export turns the file into a module, so the rest of the types and interfaces stay private to that file and are excluded from the global scope.</p><p>In order to use these types in index.ts I must import them like:</p><pre>import { User } from &quot;./types&quot;;<br><br>// As you can see there is no use for other types. 
<br>// They are included within the User interface, hence I didn&#39;t export them.<br>// The extension .ts is not required.<br><br>(async () =&gt; {<br>  let users: User[];<br>  let url = getUrl();<br><br>  const data: Response = await fetch(`${url}`);<br>  users = await data.json();<br>})();<br><br>function getUrl(<br>  id: string | null = null,<br>  url = &quot;https://jsonplaceholder.typicode.com/users/&quot;<br>): string {<br>  return id ? url + id : url;<br>}</pre><p>There are more ways to use module imports. You can read more about it <a href="https://www.typescriptlang.org/docs/handbook/module-resolution.html">here</a>.</p><h3>4. Compile Time.</h3><p>Remember to save all the files that we have created and run in your terminal:</p><pre>npx tsc</pre><p>After compilation is done we should see the index.js file generated. Let’s have a look at what’s inside.</p><pre>&quot;use strict&quot;;<br>var __awaiter = (this &amp;&amp; this.__awaiter) || function (thisArg, _arguments, P, generator) {<br>    function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }<br>    return new (P || (P = Promise))(function (resolve, reject) {<br>        function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }<br>        function rejected(value) { try { step(generator[&quot;throw&quot;](value)); } catch (e) { reject(e); } }<br>        function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }<br>        step((generator = generator.apply(thisArg, _arguments || [])).next());<br>    });<br>};<br>(() =&gt; __awaiter(void 0, void 0, void 0, function* () {<br>    let users;<br>    let url = &quot;https://jsonplaceholder.typicode.com/users/&quot;;<br>    const data = yield fetch(`${url}`);<br>    users = yield data.json();<br>}))();</pre><p>That looks very bizarre, doesn’t it? 
9 lines of our original code transformed into 13 lines of something very weird.</p><p>Instead of having a lean Promise we ended up with a generator; <em>“use strict”</em> was also added. This is where the tsconfig.json file comes into play.</p><pre>{<br>  &quot;compilerOptions&quot;: {<br>    &quot;target&quot;: &quot;es2016&quot;,<br>    &quot;module&quot;: &quot;commonjs&quot;,<br>    &quot;esModuleInterop&quot;: true,<br>    &quot;forceConsistentCasingInFileNames&quot;: true ,<br>    &quot;strict&quot;: true,<br>    &quot;skipLibCheck&quot;: true <br>  }<br>}</pre><p>The result of index.js is determined by the target option in the tsconfig.json file. By default our code will be compiled to ES2016, where async/await functions were not supported yet. TypeScript will downlevel them to a generator-based helper, and that’s how we end up with the generator. In order to emit async/await without transpilation, you need to set the target to ES2017 or later. Let’s change the target option to “es2017” and see what we end up with.</p><pre>{<br>  &quot;compilerOptions&quot;: {<br>    &quot;target&quot;: &quot;es2017&quot;,<br>    ...<br>  }<br>}</pre><p>Result:</p><pre>&quot;use strict&quot;;<br>(async () =&gt; {<br>    let users;<br>    let url = &quot;https://jsonplaceholder.typicode.com/users/&quot;;<br>    const data = await fetch(`${url}`);<br>    users = await data.json();<br>})();</pre><p>Strict mode is determined by the strict option set to true in the tsconfig.json file.</p><p>Open index.html in the browser and check in the network tab that the code we wrote works.</p><h3>5. TypeScript with frameworks and libraries.</h3><p>Frameworks have adopted TypeScript very well. When creating applications, whether with Angular, React, Vue or anything else, we can use the CLI and just start our project with TypeScript without any additional configuration.</p><h3>Angular</h3><p>In Angular, TypeScript is used extensively to define component classes, services, interfaces, and other types. 
For example, when defining a component, TypeScript can be used to define the component’s properties, methods, and the type of data that the component will receive or emit. TypeScript can also be used to define interfaces for HTTP responses, making it easier to work with data received from an API.</p><blockquote>The Angular docs say that “Knowledge of <a href="https://www.typescriptlang.org/">TypeScript</a> is helpful, but not required.”</blockquote><p>This is kind of true, but not really. There is no option for having an Angular application without TypeScript. Once installed, you are in TypeScript.</p><pre>export class UserComponent implements OnInit, OnDestroy {<br>  users: User[] = [];<br>  private subscription: Subscription;<br><br>  constructor(private usersService: UserService) { }<br><br>  ngOnInit(): void {<br>    this.getUsers();<br>  }<br><br>  ngOnDestroy(): void {<br>    this.subscription.unsubscribe();<br>  }<br><br>  private getUsers(): void {<br>    this.subscription = this.usersService<br>                            .getUsers()<br>                            .subscribe(users =&gt; this.users = users);<br>  }<br>}</pre><p>The code above is a simple Angular component that fetches user data using an Angular service. The first thing you notice are the type and return type annotations. The level of strictness with which your code must be maintained is set within TSLint, which is also a part of an Angular application. The second thing are the access modifiers: public, private, protected — this is TypeScript. Public is the default and can be omitted (users). A private field is still accessible outside the class at runtime, but the modifier helps ensure we do not access that field improperly. 
In other words, a private field is not really private once an app is running.</p><p>JavaScript has its own <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes/Private_class_fields">access modifiers</a> and they can definitely ensure that a field won’t be accessible anywhere outside the class.</p><pre>export class Users {<br>  #users = [] // this is a private field<br>}</pre><p>Truth be told, if you are starting with Angular, basic knowledge of TS is enough. After reading this article you should be ready and set to start working with TypeScript.</p><h3>React</h3><p>React is a JavaScript library that allows developers to build reusable UI components that can be rendered on the web. Developers can start a new React project with TypeScript by creating a new project using tools such as Create React App. This generates a new React project with TypeScript support.</p><p>Let’s have a look at how a component that renders user data can look using TypeScript.</p><pre>import { ComponentProps } from &quot;react&quot;;<br>import { UserCard } from &quot;./User&quot;;<br><br>interface UsersProps {<br>  title: string;<br>  users: ComponentProps&lt;typeof UserCard&gt;[];<br>}<br><br>export const Users = ({<br>  users,<br>  title = &quot;Users list&quot;,<br>}: UsersProps) =&gt; {<br><br>  return (<br>    &lt;&gt;<br>      &lt;h1&gt;{title}&lt;/h1&gt;<br>      &lt;ul&gt;<br>        {users.map((user, index) =&gt; (<br>          &lt;UserCard key={index} userData={user} /&gt;<br>        ))}<br>      &lt;/ul&gt;<br>    &lt;/&gt;<br>  );<br>};<br><br></pre><p>In the code above we created a component that renders a list of users. We <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment">destructured</a> the props object and gave it a type annotation of UsersProps.</p><p>The UsersProps interface consists of two properties: title, which is a string, and users, which is an array of users. 
As I mentioned before, to correctly type an array we use syntax like:</p><pre>Array&lt;UserCard&gt; <br>// or<br>UserCard[]</pre><p>So, what is going on here with this ComponentProps?</p><h3>6. Utility types and extras.</h3><pre>users: ComponentProps&lt;typeof UserCard&gt;[]</pre><p>ComponentProps is a type created by the React team. In the example above we are nesting components: Users renders a list of UserCards, and UserCard takes all the properties of a single user in the users array. A logical place to keep the types for UserCard is within UserCard itself. Since we already imported UserCard, we can also get all the types for the props that UserCard receives.</p><p>The ComponentProps&lt;typeof UserCard&gt;[] literally means: take the type of UserCard’s props and make it an array.</p><p>A type like this takes advantage of <a href="https://www.typescriptlang.org/docs/handbook/utility-types.html"><strong>utility types</strong></a>.</p><blockquote>TypeScript provides several utility types to facilitate common type transformations. These utilities are available globally.</blockquote><p>Utility types come in handy when we don’t want to duplicate the code that we write. Consider having a User interface when we also want a small avatar that would display only the username and website:</p><pre>interface User {<br>  id: number;<br>  name: string;<br>  phone: string;<br>  username: string;<br>  website?: string;<br>}<br><br>type Avatar = Pick&lt;User, &quot;username&quot; | &quot;website&quot;&gt;;</pre><p>Type Avatar literally resolves to: from the User interface take only username and website, with the keys passed as a union of string literals.</p><h3>7. Summary and next steps.</h3><p>There is more and more in TypeScript to make your code better and clearer. TypeScript reduces the time you spend reasoning about the code, although it forces you to write more code in the end. It’s a great warning system for JavaScript developers. 
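The Pick example above can be exercised directly; here is a small self-contained sketch (the sample values are invented for illustration):

```typescript
interface User {
  id: number;
  name: string;
  phone: string;
  username: string;
  website?: string;
}

// Avatar keeps only the username and website fields of User.
type Avatar = Pick<User, "username" | "website">;

// Compiles: exactly the picked fields are allowed.
const avatar: Avatar = { username: "Bret", website: "hildegard.org" };

// Would NOT compile: 'id' does not exist in type 'Avatar'.
// const broken: Avatar = { id: 1, username: "Bret" };

console.log(avatar.username, avatar.website);
```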
I strongly encourage you to use TypeScript if you haven’t already.</p><p>Next steps, learn about:</p><ul><li><a href="https://www.typescriptlang.org/docs/handbook/2/generics.html">Generics</a>,</li><li><a href="https://www.typescriptlang.org/docs/handbook/2/keyof-types.html">keyof and typeof operators</a>,</li><li><a href="https://www.typescriptlang.org/docs/handbook/2/indexed-access-types.html">Indexed Access Types</a>, mapped types and all the possibilities of type manipulation,</li><li><a href="https://devblogs.microsoft.com/typescript/announcing-typescript-5-0/">What’s new in TypeScript</a>.</li></ul><hr><p><a href="https://medium.com/intive-developers/easy-jump-into-typescript-61d35ddadfea">Easy jump into TypeScript.</a> was originally published in <a href="https://medium.com/intive-developers">intive Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A comprehensive guide to MACH architecture]]></title>
            <link>https://medium.com/intive-developers/a-comprehensive-guide-to-mach-architecture-b427336920a1?source=rss----f34f16bef773---4</link>
            <guid isPermaLink="false">https://medium.com/p/b427336920a1</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[intive]]></category>
            <category><![CDATA[software]]></category>
            <category><![CDATA[mach-architecture]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Sebastian Kubiak]]></dc:creator>
            <pubDate>Wed, 18 Jan 2023 15:31:28 GMT</pubDate>
            <atom:updated>2023-01-18T15:36:35.647Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/900/1*Xs4k-57w2TjHQ8zFMeQi-Q.png" /></figure><p>If you ever heard about MACH architecture, you might be interested in learning more about it and why it is important. All of those queries are addressed in this blog post, which also goes through the advantages of using a MACH architecture.</p><p>MACH is the acronym for Microservices, API-first, Cloud-native, and Headless. We will discuss each component separately, along with its importance.</p><h3>What is MACH architecture?</h3><p>The MACH architecture is based on the idea that companies must have a high degree of control and adaptability to meet their consumers’ needs now and in the future. The microservices-based, API-first, cloud-native, and headless technologies that make up the MACH aim to eliminate old programs and replace them with a modular design that enables businesses to be more adaptable and agile.</p><p>MACH has four main components:</p><ul><li>Microservices-based</li><li>API-first</li><li>Cloud-native</li><li>Headless</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/941/1*uC6EdRjmzUB-ixy4jiGO9Q.png" /></figure><p>Together, these four components enable a versatile, scalable, and agile system. When developing an e-commerce platform, online retailers can use MACH technology to implement the best-of-breed strategy.</p><p>The MACH design enables you to pick the best available technology for each specific area of functionality rather than being forced to rely on a single system to manage all of your needs. 
Because a component is no longer tied to the system as a whole, you are not forced to accept its inefficiency.</p><h3>Four main components of MACH architecture</h3><p>Let’s understand the four components of MACH architecture in detail.</p><p><strong>Microservices: </strong>A microservice architecture can be used to design a product as a collection of autonomous components (microservices) that operate independently and communicate with one another via APIs. You can thereby deploy, modify, and upgrade individual software components without causing a system-wide outage.</p><p>Because of these characteristics, microservices are advantageous for the creation of expansive and intricate applications. For instance, they allow engineers to expedite development while lowering the risks to the integrity of their systems.</p><p><strong>API-first: </strong>Apps communicate with one another using APIs, or Application Programming Interfaces. Through the use of a common language, numerous APIs combine to create a microservices-based architecture that facilitates data interchange across the services.</p><p>To put it another way, an API builds a framework for the user interface so that the logic may be handled without the user having to know more about the engine. An API hides the complexity and makes it simple to use in larger, loosely coupled systems, for example an e-commerce architecture.</p><p>An API-first strategy involves the development of APIs as the first step, as opposed to the code-first approach, where developers first create the core services and add APIs afterwards to enable communication.</p><p>To put it simply, APIs are created independently and then incorporated into an application to link many microservices and create a product. 
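The API-first idea can be sketched in a few lines of TypeScript (all names below are invented for illustration): the contract is agreed first, before any service is written, and each provider then implements it independently.

```typescript
// Agreed upon first, before any service exists: the shared contract.
interface FlightQuery {
  from: string;
  to: string;
}

interface Flight {
  airline: string;
  price: number;
}

interface FlightProvider {
  search(query: FlightQuery): Promise<Flight[]>;
}

// Each airline microservice implements the same contract independently...
class DemoAirline implements FlightProvider {
  async search(_query: FlightQuery): Promise<Flight[]> {
    return [{ airline: "DemoAir", price: 199 }];
  }
}

// ...so the booking frontend can aggregate providers without knowing
// anything about their internals.
async function searchAll(
  providers: FlightProvider[],
  query: FlightQuery
): Promise<Flight[]> {
  const results = await Promise.all(providers.map((p) => p.search(query)));
  return results.flat();
}

searchAll([new DemoAirline()], { from: "BER", to: "DXB" }).then((flights) =>
  console.log(flights)
);
```

Because only the contract is shared, a new airline can be added (or an inefficient one replaced) without touching the frontend or the other services.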
This makes it possible for several developers to collaborate on a bigger project.</p><p>For instance, flight-booking software connects to the databases of different airlines via APIs and presents all flight details on a single screen.</p><p><strong>Cloud-native: </strong>Organizations can use public, private, and hybrid clouds to build solutions in dynamic environments and swiftly scale the necessary computing resources as needed. This gives businesses more scalable software and removes the performance concerns associated with on-premise systems, enabling them to innovate and grow continuously.</p><p>The main benefit of being cloud-native is that it makes it possible to scale microservices horizontally.</p><p><strong>Headless: </strong>Going headless means separating the user experience (the frontend of delivery) from the underlying technology (the backend). A key advantage of such a configuration is freedom from the limitations imposed by a predetermined “head”.</p><p>The headless approach gives you complete design flexibility: you can build frontends for any device while keeping the backend consistent, which lets businesses engage with their customers at every touchpoint.</p><h3>How does MACH architecture work?</h3><p>MACH architecture divides a large block of components into smaller, interconnected parts that can function more efficiently on their own.</p><p>In a conventional e-commerce platform, a single instance of a single database powers every component of the storefront. With microservices, these services are independent and each has its own database. This means that the shopping cart, product management, and customer service can all be handled by separate components.
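</p><p>As a minimal illustration of this idea, here is a hypothetical Python sketch (the service names and API calls are invented for this example, not taken from any particular MACH product): each service owns a private data store, and the cart talks to the catalog only through its public API, much as real microservices would over HTTP.</p>

```python
# Two independent "microservices", each owning its private data store,
# interacting only through a small public API. In a real system the method
# calls would be HTTP requests and the dicts would be separate databases.

class CatalogService:
    """Owns product data; no other service touches its store directly."""
    def __init__(self):
        self._db = {}  # this service's private "database"

    def add_product(self, sku, name, price):
        self._db[sku] = {"name": name, "price": price}

    def get_product(self, sku):  # public API "endpoint"
        return self._db.get(sku)


class CartService:
    """Owns cart data; reaches the catalog only via its API."""
    def __init__(self, catalog_api):
        self._db = {}  # this service's private cart "database"
        self._catalog = catalog_api

    def add_to_cart(self, cart_id, sku):
        product = self._catalog.get_product(sku)  # cross-service API call
        if product is None:
            raise KeyError(f"unknown sku: {sku}")
        self._db.setdefault(cart_id, []).append(product)

    def cart_total(self, cart_id):  # public API "endpoint"
        return sum(item["price"] for item in self._db.get(cart_id, []))


catalog = CatalogService()
catalog.add_product("sku-1", "Keyboard", 49.0)
cart = CartService(catalog)
cart.add_to_cart("c1", "sku-1")
print(cart.cart_total("c1"))  # -> 49.0
```

<p>Because each service hides its database behind its own API, either one could be redeployed, rewritten, or scaled without touching the other; only the API contract is shared.</p><p>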
Each microservice has its own execution environment and load balancer, and collects data in its own database.</p><h3>What are the benefits of using a MACH architecture?</h3><p>By switching from a monolithic to a MACH architecture, you are free to select from the best tools available today, within a framework that makes it simple to add, change, or remove technologies in the future. Simply put, MACH architecture lets you end the cycle of re-platforming once and for all.</p><p>Some of the other advantages of using a MACH architecture are:</p><p>- <strong>Seamless customizations:</strong> You must be able to adapt quickly as your consumers’ demands change. An important part of the MACH design is the capacity to develop and adapt the consumer experience continuously. MACH makes it easy to redesign the entire architecture, or a single component, in very little time.</p><p>- <strong>Speedy development with fewer risks:</strong> MACH architecture gives your business a flexible, agile structure, enabling you to build and launch your products much faster than with conventional approaches. It can help you get an MVP out quickly.</p><p>This lets you put product prototypes in front of users before spending money on significant changes and improvements. You will be able to validate your concepts, refine your products, and put those improvements into practice more quickly ahead of the launch.</p><h3>Conclusion</h3><p>MACH is one of those cutting-edge strategies that elevate your company to new technical heights while letting you give your clients a better experience. It enables creative changes in the frontend without requiring changes to the backend, along with a straightforward way to scale the application on demand.
Following in the footsteps of the big companies that have already adopted this strategy might be advantageous for your company as well.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b427336920a1" width="1" height="1" alt=""><hr><p><a href="https://medium.com/intive-developers/a-comprehensive-guide-to-mach-architecture-b427336920a1">A comprehensive guide to MACH architecture</a> was originally published in <a href="https://medium.com/intive-developers">intive Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Software Engineering in the Agile Manifesto]]></title>
            <link>https://medium.com/intive-developers/software-engineering-in-the-agile-manifesto-ab603a6e5829?source=rss----f34f16bef773---4</link>
            <guid isPermaLink="false">https://medium.com/p/ab603a6e5829</guid>
            <category><![CDATA[agile]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[agile-methodology]]></category>
            <category><![CDATA[intive]]></category>
            <dc:creator><![CDATA[intive]]></dc:creator>
            <pubDate>Wed, 21 Dec 2022 19:11:26 GMT</pubDate>
            <atom:updated>2022-12-21T19:11:26.690Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Jwb2R1U9Fikchh2_Bgdbbg.jpeg" /></figure><p><strong>By Álvaro Ruiz de Mendarozqueta, Principal Project Manager.</strong></p><blockquote><em>“If you are not producing working, running, tested usable software in every single Sprint or iteration, you are not [yet] ‘doing’ Agile, you are not [yet] ‘doing’ Scrum.” — </em>Ron Jeffries</blockquote><h3>Introduction</h3><p>In his recent book, Clean Agile, Robert Martin states that the signatories of the Agile Manifesto gathered, because of the “deplorable state of software development”, with the aim of “creating a manifesto to introduce a more effective, lighter-weight approach for software development”.</p><p>Sometimes, because of the wide deployment and use of the Agile philosophy and of frameworks such as Scrum, the original focus on software is forgotten, or it is not given the weight it had back in 2001, when the Manifesto was written. Not surprisingly, the Manifesto explicitly mentions software.</p><p>We would like to highlight the software engineering implications of delivering working software.</p><h3>Agile Manifesto</h3><p><em>We are uncovering better ways of developing software by doing it and by helping others do it.
Through this work we have come to value:</em></p><ul><li><em>Individuals and interactions over processes and tools</em></li><li><em>Working software over comprehensive documentation</em></li><li><em>Customer collaboration over contract negotiation</em></li><li><em>Responding to change over following a plan</em></li></ul><p><em>That is, while there is value in the items on the right, we value the items on the left more.</em></p><p>Some of the principles behind the Agile Manifesto also emphasize the focus on software:</p><ol><li><em>Our highest priority is to satisfy the customer through early and continuous delivery of </em><strong><em>valuable software</em></strong><em>.</em></li><li><em>Deliver </em><strong><em>working software</em></strong><em> frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.</em></li><li><strong><em>Working software</em></strong><em> is the primary measure of progress.</em></li></ol><h3><strong>What is working software?</strong></h3><p>Working software is validated software that delivers value to the business, to the customers, and to the users. It is software that works well: it does what it must do without errors, uses computing resources efficiently, and keeps working under security threats. It is software that is easy to use and understand across all of its functionality, that works in different situations without failing, and that can be maintained.</p><p>In other words: working software makes your customers happy, has no bugs, is not slow, does not stop unexpectedly, and is easy to use and understand. If you have working software, the things you do with it are easy to find, it keeps hackers away, your information is secure, and your computer is used efficiently.
Finally, software builders can modify, test, adapt, change, and deploy it.</p><h3><strong>What is valuable software?</strong></h3><p>Gerald Weinberg, reviewing different definitions of “quality”, concluded that “quality is value for someone”.</p><p>As the stakeholders’ value is expressed in requirements, valuable software turns the customers’ needs into a software product that fulfills those needs and is characterized by quality attributes.</p><h3><strong>What do we need to do to build working and valuable software?</strong></h3><p>We need to perform all the activities and best practices of the software development <strong>value chain</strong>.</p><p>The <strong>value chain</strong> is the transformation of our customers’ needs into a software product that fulfills those needs and is characterized by the quality attributes.</p><p>The <strong>probability</strong> of building the <strong>right product</strong> increases with the application of the <strong>right construction</strong>. If you think that doing the right construction is expensive, try doing it with a bad construction.</p><p>We must establish an architecture for the solution and design it; develop the code and verify it with testing, peer review, and static analysis. We must also integrate the software parts, build and test the software product’s components, validate them with the users, and deploy the software product in all the environments the customers need.</p><p>Software components must be managed, and product integrity should be maintained through configuration management.</p><p>The project team and their activities should be managed, measured, reviewed, and improved continuously.</p><p><strong>References:</strong></p><ol><li><a href="https://agilemanifesto.org/">Agile Manifesto</a>.</li><li>Martin, Robert. Clean Agile (Robert C. Martin Series), p. 25. Pearson Education.
Kindle Edition.</li><li>Jeffries, Ron. <a href="https://ronjeffries.com/articles/019-01ff/no-software/"><em>No Software: No Agile, No Scrum</em></a>.</li><li>Weinberg, Gerald. Quality Software Management (Vol. 1: Systems Thinking). Dorset House.</li><li>Boehm, Barry. Improving Software Productivity. IEEE Software, 1987.</li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ab603a6e5829" width="1" height="1" alt=""><hr><p><a href="https://medium.com/intive-developers/software-engineering-in-the-agile-manifesto-ab603a6e5829">Software Engineering in the Agile Manifesto</a> was originally published in <a href="https://medium.com/intive-developers">intive Developers</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>