<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Applitools on Medium]]></title>
        <description><![CDATA[Stories by Applitools on Medium]]></description>
        <link>https://medium.com/@applitools?source=rss-803b5e1025b0------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*OspayUjf0dd5xtkhgAZZDQ.png</url>
            <title>Stories by Applitools on Medium</title>
            <link>https://medium.com/@applitools?source=rss-803b5e1025b0------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Thu, 07 May 2026 04:17:55 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@applitools/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Future of Testing: Mobile Recap — All About Mobile Test Automation]]></title>
            <link>https://applitools.medium.com/future-of-testing-mobile-recap-all-about-mobile-test-automation-f3e22d3c2150?source=rss-803b5e1025b0------2</link>
            <guid isPermaLink="false">https://medium.com/p/f3e22d3c2150</guid>
            <category><![CDATA[dev]]></category>
            <category><![CDATA[mobile]]></category>
            <category><![CDATA[mobile-app-development]]></category>
            <category><![CDATA[testing]]></category>
            <category><![CDATA[developer-tools]]></category>
            <dc:creator><![CDATA[Applitools]]></dc:creator>
            <pubDate>Fri, 30 Apr 2021 20:09:18 GMT</pubDate>
            <atom:updated>2021-05-07T18:37:52.301Z</atom:updated>
<content:encoded><![CDATA[<h3>Future of Testing: Mobile Recap — All About Mobile Test Automation</h3><p>A few weeks ago, we hosted a conference on the future of testing for mobile applications. Almost 4000 people registered for the event, creating a fun and exciting atmosphere in the chat for each session as well as for the live Q&amp;A that followed. There was a lot to learn and it was a great opportunity to engage with the testing community on such an important topic.</p><p>The videos are all available now in our on-demand library and can be watched for free. If you want to dive right in and watch them right now and skip this recap, go ahead, I won’t blame you 😉. You can check them all out <a href="https://info.applitools.com/udlth">here</a>.</p><h3>The Path to Autonomous Testing — Gil Sever</h3><p>The opening remarks were from Applitools CEO and co-founder Gil Sever. In this ten-minute presentation, Gil delivers a strong primer on what autonomous testing really is — and how machine learning can assist humans and make testing much, much more effective. Tune in for a glimpse at the autonomous future.</p><h3>On the Same Wavelength: Adding Radio to Your Testing Toolbox — Jason Huggins</h3><p>Jason Huggins was the opening keynote speaker at the event, and he gave a fascinating talk on where testing is headed. As Jason says, “testing is getting weird,” and is increasingly about things you can’t even see. He argues that it’s time to move past an understanding of testing as just simulating what can be seen and tapped. What mobile testers are ultimately interested in today is the triggering of radio activity. That’s the essence of how your app truly performs, isn’t it?</p><p>Jason is a founder of Selenium, Appium, and Tapster Robotics, so he knows quite a bit about where testing has been and where it’s going.
Check out his talk to hear what he has to say.</p><h3>Appium 2.0: What’s Next — Sai Krishna, Srinivasan Sekar</h3><p>Appium is a very popular test automation framework, and the upcoming release of <a href="https://info.applitools.com/uc6wH">Appium 2.0</a> is highly anticipated. Sai and Srinivasan are both contributors to the Appium project as well as lead consultants at ThoughtWorks, and in their presentation you’ll find a preview of what’s coming with Appium 2.0.</p><p>For example, today you need to install a large number of drivers when you <a href="https://info.applitools.com/uc79Z">install Appium</a> server — even ones you don’t need. With Appium 2.0, you can just install the ones you need. Another example has to do with bug fixes — a lot of fixes are added to betas but many people don’t install betas and miss out, so with Appium 2.0 the fixes will be attached to individual drivers and rolled out faster. There will be improved docs and it’ll be easier to build your own plugins… the list goes on.</p><p>Catch Sai and Srini’s presentation to learn all about it. And if you’re ready to try it out for yourself, read our blog post on <a href="https://info.applitools.com/uc6wH">Getting Started with Appium 2.0 Beta</a>.</p><h3>Coffee Break</h3><p>You might think there’s not much to recap during a coffee break, but during the first coffee break of the conference the brand-new <a href="https://info.applitools.com/udltj">Test Automation Cookbook</a> was introduced to the world. This is a collection of bite-sized recipes you can use to answer a number of specific and common questions you may have about test automation. This “commercial break” was very well received by the audience 😊.</p><h3>Mobile App Testing &amp; Release Strategy — Anand Bagmar</h3><p>Your mindset needs to be mobile-first. That’s how Anand, a Quality Evangelist and Solution Architect at Applitools, opened his talk. 
He followed that up with an overview of the differences between web and <a href="https://info.applitools.com/udltm">mobile testing</a>/releasing, including mobile test automation on a local/cloud device lab. Anand explains that even after all our hard work in continuous testing, sometimes visual tests can still come down to a game of manual “spot the difference.”</p><p>Visual AI is a difference-maker there, as Anand explains. He talks about the difference between Visual AI and pixel comparisons and how you can apply it yourself. Take a look at this talk for a great overview of mobile testing and releasing.</p><h3>Next Generation Mobile Testing with Visual AI — Adam Carmi</h3><p>Adam Carmi, a co-founder and CTO of Applitools, picked up where Anand left off with a deeper dive into Visual AI. Adam walks through a live demo of Applitools Eyes so you can see it for yourself. He talks about the huge code reduction when you use Eyes — up to 80% — which also gives you increased coverage and no validation logic to maintain. He backs this up with hard data from a hackathon, highlighting the fact that many testers were completely new to Applitools and were able to pick it up quickly and get some really strong results.</p><p>Adam’s talk was full of examples of how Eyes can work in the kinds of scenarios you may be wondering about, including how Eyes deals with different mobile form factors and how it batches together similar errors that can be approved/rejected together. Check it out.</p><h3>Expert Panel: State of the Mobile Frameworks</h3><p>This panel gathered together three mobile development experts for a robust discussion of what life is like for developers using different mobile frameworks. Eran Kinsbruner, DevOps Chief Evangelist and Sr.
Director, Product Marketing at Perforce Software, Eugene Berezin, iOS Developer at Nordstrom, and Moataz Nabil, Mobile Developer Advocate at Bitrise, shared a lot of great information about the frameworks they use, which included Flutter, Appium, KIF, EarlGrey, and of course XCUITest and Espresso.</p><p>The panel was moderated by Justin Ison, Sr. Software Engineer at Applitools. Justin led the panel through a conversation around framework limitations, how to make apps testable, and what could make mobile testing easier. You can check out the whole discussion below. And if you’re curious for a quick comparison, be sure to take a look at a recent writeup on our blog that tackles <a href="https://info.applitools.com/udca1">Appium vs Espresso vs XCUITest</a>.</p><h3>The Future of Multi-Platform Integration Testing — Bijoya Chatterjee, Rajnikant Ambalpady</h3><p>Bijoya and Rajnikant work on testing for the new SONY PlayStation 5, giving them a unique outlook on what it takes to deliver strong integration testing across platforms. In this talk, they describe the challenge of having many standalone apps that require automated testing, when there aren’t any off-the-shelf tools that are built to test a PlayStation! They ended up customizing Appium and making use of many other tools in their stack (this might be a good place to mention that Applitools is part of it, which I did not know until I heard Bijoya tell the audience 😊).</p><p>They cover the challenges of testing numerous standalone components within apps that must talk to each other, as well as testing across platforms from console to web to mobile. For a discussion of the pros and cons of end-to-end integration testing and much more, be sure to check out this talk.</p><h3>Let the Robots Test Your Flutter App — Paulina Grigonis, Jorge Coca</h3><p>It’s not easy to organize code so that it’s A) maintainable and customizable by development teams, and B) still easily understood and readable by business stakeholders.
In this presentation, Paulina and Jorge, who are respectively business and technical experts at Very Good Ventures, walk us through a methodology they call the Robot Pattern. This pattern separates the “What” from the “How” of testing and can result in some pretty spiffy code. The code is definitely easy to read, even for a non-technical user.</p><p>Want to learn how to implement this pattern in your own development? Check out their presentation below.</p><h3>Your Tests Lack Vision: Adding Eyes to Your Mobile Tests — Angie Jones</h3><p>As humans, we can only pay attention to so much at one time — and that means we miss things, even in plain sight. The closing keynote from Angie Jones makes this clear from the first moments with a great video clip. I won’t spoil it, but it reminded me a lot of another video when I saw it, so after you watch Angie’s talk go ahead and take a look at <a href="https://www.youtube.com/watch?v=Ahg6qcgoay4">this one too</a> and see how it goes if you want to laugh at yourself.</p><p>After helping us all understand our blind spots, Angie provides a lot of great examples of how visual bugs slip through traditional testing processes. She then walks us through a demo of a new app and shows us how Applitools Eyes can help us make sure it’s visually perfect. In the live Q&amp;A, Angie also answers a number of questions around handling multiple viewport sizes or when you have to scroll, and even testing variations like light and dark mode or dealing with pop-up notifications and alerts.</p><p>Angie also shared her inspiration behind launching the automation cookbook (hint: it’s making the lives of fellow testers easier). If you haven’t taken a look at it yet, be sure to <a href="https://info.applitools.com/udltj">check out the automation cookbook here</a>.</p><p>You can see Angie’s full talk below.</p><h3>Thank You!</h3><p>And with that (and a few closing remarks from host Joe Colantonio of TestGuild) the Future of Testing Mobile event ended.
We want to extend a huge thanks to everyone involved in making this event such a success, from Joe for his incredible hosting to the amazing speakers for sharing their insights to every attendee for adding your voices and presence. The event could not happen without all of you.</p><p>If you liked these videos, you might also like our videos from our <a href="https://info.applitools.com/udlth">previous Future of Testing events</a> too — all free to watch. Happy testing!</p><p><em>Originally published at </em><a href="https://info.applitools.com/udltr"><em>https://applitools.com</em></a><em> on April 30, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f3e22d3c2150" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Applitools Announces Future of Testing: Mobile Live Virtual Event]]></title>
            <link>https://applitools.medium.com/applitools-announces-future-of-testing-mobile-live-virtual-event-e36fe021d13e?source=rss-803b5e1025b0------2</link>
            <guid isPermaLink="false">https://medium.com/p/e36fe021d13e</guid>
            <category><![CDATA[events]]></category>
            <category><![CDATA[mobile]]></category>
            <category><![CDATA[dev]]></category>
            <category><![CDATA[test-automation]]></category>
            <category><![CDATA[testing]]></category>
            <dc:creator><![CDATA[Applitools]]></dc:creator>
            <pubDate>Fri, 02 Apr 2021 18:16:43 GMT</pubDate>
            <atom:updated>2021-04-02T18:17:51.212Z</atom:updated>
<content:encoded><![CDATA[<p><em>Top QA professionals and test engineers present interactive sessions and live panels to discuss innovations and cutting-edge practices in mobile test automation</em></p><p>We are excited to announce “Future of Testing: Mobile,” a free, live-streamed virtual event dedicated to the current trends and innovations shaping mobile test automation. 🥳 Throughout the last year, thousands of live participants have joined host Joe Colantonio and thought leaders from around the world to discuss and share technology, trends and success stories focused on software quality. This iteration of the popular event features presentations that aim to upskill developers, test engineers and QA professionals responsible for ensuring the delivery of quality mobile apps and experiences.</p><p>“Staying ahead of the curve is critical as technology is constantly changing. Quality leaders need to have a 360 degree understanding of the trends, technologies, and approaches available in order to be successful,” said Parasar Saha, Director of Quality Assurance at SOTI. “Applitools Future of Testing events provide the perfect balance, ensuring quality professionals are equipped with the knowledge they need to excel in today’s fast-paced app dev environment.”</p><p><strong>For more information and to register for the live stream, visit: </strong><a href="https://info.applitools.com/udfrW"><strong>https://applitools.com/future-of-testing-mobile-north-america/</strong></a></p><p>The one-day event features speakers from brands like Nordstrom, SONY PlayStation, ThoughtWorks, Very Good Ventures and more.
Domain experts and test automation practitioners will lead sessions that cover the latest practices and tools in mobile testing, automation, release strategies and multi-platform integrations.</p><p>Keynote speakers include Jason Huggins, founder of Selenium, Appium, and Tapster Robotics, as well as Angie Jones, Principal Automation Architect and Senior Director at Applitools and Test Automation University.</p><p>“The Future of Testing: Mobile virtual event is an amazing opportunity to upskill the community on the latest frameworks, tools and strategies shaping mobile test automation,” said Angie Jones, Principal Automation Architect and Senior Director at Applitools and Test Automation University. “With compatibility and environmental stability a priority in the mobile testing space, it is important that we continue to improve testing efficiency and enable engineers to improve upon their own mobile test automation initiatives.”</p><p>Some of the exclusive, live stream content includes:</p><ul><li><strong>Appium 2.0: What’s Next</strong> — Join Sai Krishna and Srinivasan Sekar, lead consultants at ThoughtWorks and contributors to the Appium project, to learn <a href="https://info.applitools.com/uc6wH">everything you need to know about Appium 2.0</a> — the first major release of Appium in 7 years.</li><li><strong>Expert Panel: State of Mobile Frameworks</strong> — Join a discussion on the latest trends with <a href="https://info.applitools.com/udca1">Appium, Espresso, and XCUITest</a>. 
This candid conversation is perfect for any team researching mobile test frameworks and trying to understand the pros and cons of each.</li><li><strong>Let the Robots Test Your Flutter App</strong> — Paulina Grigonis and Jorge Coca of Very Good Ventures discuss automation of mobile apps developed using Flutter, an open-source UI software development kit created by Google to help develop applications across various platforms with a single codebase.</li></ul><p><strong>See the full agenda and register for free at: </strong><a href="https://info.applitools.com/udfrW"><strong>https://applitools.com/future-of-testing-mobile-north-america/</strong></a></p><p>See you there! 👋</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e36fe021d13e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Unpopular Opinions: Software Testing Edition]]></title>
            <link>https://applitools.medium.com/unpopular-opinions-software-testing-edition-89049e59fd20?source=rss-803b5e1025b0------2</link>
            <guid isPermaLink="false">https://medium.com/p/89049e59fd20</guid>
            <category><![CDATA[testing]]></category>
            <category><![CDATA[developer]]></category>
            <category><![CDATA[dev]]></category>
            <category><![CDATA[software]]></category>
            <dc:creator><![CDATA[Applitools]]></dc:creator>
            <pubDate>Thu, 01 Apr 2021 17:45:59 GMT</pubDate>
            <atom:updated>2021-04-20T18:31:28.270Z</atom:updated>
<content:encoded><![CDATA[<p><strong>This article was written by Applitools senior developer advocate, </strong><a href="https://twitter.com/techgirl1908"><strong>Angie Jones</strong></a><strong>.</strong></p><p>========================================</p><p>I stirred up a bit of controversy on Twitter by asking for unpopular opinions on software testing, and unsurprisingly, the community had some pretty hot takes! While I asked this some time ago, it appears that these takes are just as relevant today.</p><h3>Angie Jones on Twitter: &quot;Unpopular Opinion: Software Testing Edition pic.twitter.com/NxwYbEwY6V / Twitter&quot;</h3><p>Unpopular Opinion: Software Testing Edition pic.twitter.com/NxwYbEwY6V</p><p>Trish Khoo says “You don’t have to argue with people all the time to be a good tester”.</p><h3>Trish Khoo on Twitter: &quot;You don&#39;t have to argue with people all the time to be a good tester. There are less combative methods of persuasion and negotiation that work more effectively and upset less people. https://t.co/Ib4I681v34 / Twitter&quot;</h3><p>You don&#39;t have to argue with people all the time to be a good tester. There are less combative methods of persuasion and negotiation that work more effectively and upset less people. https://t.co/Ib4I681v34</p><p>As testers, we are often thought of as the bearers of bad news and have accepted this as part of the job. We often find ourselves fighting for features and fixes in the name of customer advocacy. It doesn’t have to be this way. The best testers are the ones that people want to invite to the meetings. The best testers realize that their job is so much more than simply finding and logging bugs, as pointed out by Jason Phebus.</p><h3>Mr.
Wally Brown Complaint Department on Twitter: &quot;finding/logging bugs is the least important/interesting thing about testing https://t.co/cAbqejQppu / Twitter&quot;</h3><p>finding/logging bugs is the least important/interesting thing about testing https://t.co/cAbqejQppu</p><p>In fact, the best testers have not only found a way to become a part of the team, but have gotten their teams to realize that testing is a team sport.</p><h3>angela | Black lives matter on Twitter: &quot;Testing should involve the whole team. https://t.co/5UkYCbLYer / Twitter&quot;</h3><p>Testing should involve the whole team. https://t.co/5UkYCbLYer</p><p>Speaking of team sport, this one is from a developer who says “If you don’t <strong>start</strong> <strong>with tests</strong>, the design is probably garbage. I can’t think of any APIs I am responsible for and satisfied with that didn’t start with docs and tests”. This developer gets it! He understands his responsibility for quality even as a developer.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*djXpiquey3ALGkRk.png" /></figure><p>But I also often encourage testers not to leave the testing solely to developers during the design phase. As Jason Phebus said in the earlier tweet, logging bugs after the fact is the least interesting thing about testing. Get involved in your design meetings and help the team spot flaws in their approaches before a single line of code is written. 
Now, that’s powerful!</p><p>Let’s head on over to test automation…</p><p>Paul Grizzaffi says “A failing automation script does not necessarily mean that the script is wrong, horror of horrors, the script may be accurate”.</p><h3>Paul Grizzaffi on Twitter: &quot;A failing #automation script does not necessarily mean that the script is wrong, horror of horrors, the script may be accurate / Twitter&quot;</h3><p>A failing #automation script does not necessarily mean that the script is wrong, horror of horrors, the script may be accurate</p><p>If you immediately question your test script when it fails, it could be a sign that you are not confident in this test. A lot of times people are not confident in the tests because they barely understand what it is that they’ve coded. As the director of <a href="https://info.applitools.com/uwi7">Test Automation University</a>, I have a lot of people ask me about learning Selenium or Appium, or some other automation tool. I get it, you need to learn the tool to be able to do the job. But can I tell you that you’re shortchanging yourself? Automation development is so much more than a tool. Learn the craft, learn the strategies, <strong>really</strong> learn to code! That way, when things fail, you know how to properly analyze what’s going on.</p><h3>Juho Perälä on Twitter: &quot;Too often testers don&#39;t really understand why automation failed and use the environment or tools as scapegoat. / Twitter&quot;</h3><p>Too often testers don&#39;t really understand why automation failed and use the environment or tools as scapegoat.</p><p>Just think about how much more value you can add if you’re truly code literate, as pointed out by Amber Race.</p><h3>Amber Race on Twitter: &quot;Testers should be code literate. Reading code is a crucial skill for any tester and writing code has so many uses beyond just boilerplate automation. https://t.co/Tts0rzHI4Y / Twitter&quot;</h3><p>Testers should be code literate. 
Reading code is a crucial skill for any tester and writing code has so many uses beyond just boilerplate automation. https://t.co/Tts0rzHI4Y</p><p>Even if you’re only able to read it, you can now understand much more about how your application works and sniff out poor coding practices that can lead to bugs.</p><p>And if you are writing code, you don’t have to stop just at automating tests. You can contribute so much more to automating processes, increasing test automatability within the product itself, and even fixing bugs that are found.</p><p>You may be thinking “well Angie, if I’m doing all of that then I’m doing the job of a developer”. Yep, you absolutely are.</p><h3>Daniil Marchenko on Twitter: &quot;Writing code is a job of a developer. Therefore a person who writes automated tests is a developer. https://t.co/6apm5fcsat / Twitter&quot;</h3><p>Writing code is a job of a developer. Therefore a person who writes automated tests is a developer. https://t.co/6apm5fcsat</p><p>As Justin Ison so accurately says, quality managers expect you to test the entire site, write detailed bug reports, learn to program, automate all test cases, and set up CI/CD for half the salary of the developers creating the bugs!</p><h3>Justin Ison on Twitter: &quot;Manager: We&#39;ll need you to test the entire site &amp; write detailed bug reports. At the same time learn to program and automate all test cases. Additionally, setup a CI/CD process. To thank you for all of your hard work we&#39;ll pay you 1/2 the salary of the dev creating the bugs! ;) / Twitter&quot;</h3><p>Manager: We&#39;ll need you to test the entire site &amp; write detailed bug reports. At the same time learn to program and automate all test cases. Additionally, setup a CI/CD process.
To thank you for all of your hard work we&#39;ll pay you 1/2 the salary of the dev creating the bugs!</p><p>So, to all of the quality managers and executives reading this, listen to David Burns and pay your testers the same salary as your developers, if not more!</p><h3>David Burns on Twitter: &quot;Pay your testers the same salary as as your developers, and sometimes pay testers more. They have amazing empathy for developers and users at the same time! https://t.co/Y7oYjKPCss / Twitter&quot;</h3><p>Pay your testers the same salary as as your developers, and sometimes pay testers more. They have amazing empathy for developers and users at the same time! https://t.co/Y7oYjKPCss</p><p>Because “a tester’s mindset is the most powerful thing in computing”, says Orandi Harris. Know this, believe it, and act accordingly.</p><h3>Orandi on Twitter: &quot;A testers mindset is the most powerful thing in computing. https://t.co/QwGIsw8O51 / Twitter&quot;</h3><p>A testers mindset is the most powerful thing in computing. https://t.co/QwGIsw8O51</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2FcYeTawXmSKVOaHU5XB%2Ftwitter%2Fiframe&amp;display_name=Giphy&amp;url=https%3A%2F%2Fgiphy.com%2Fgifs%2Fsorry2botheryou-sorry-to-bother-you-boots-riley-cYeTawXmSKVOaHU5XB&amp;image=https%3A%2F%2Fmedia4.giphy.com%2Fmedia%2FcYeTawXmSKVOaHU5XB%2Fgiphy.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="244" frameborder="0" scrolling="no"><a href="https://medium.com/media/e94a90868af170febc08eebe9b0832a1/href">https://medium.com/media/e94a90868af170febc08eebe9b0832a1/href</a></iframe><p><em>Originally published at </em><a href="https://info.applitools.com/udik3"><em>https://applitools.com</em></a><em> on April 1, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=89049e59fd20" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Automate Your PDF Testing]]></title>
            <link>https://applitools.medium.com/automate-your-pdf-testing-79b685a83e16?source=rss-803b5e1025b0------2</link>
            <guid isPermaLink="false">https://medium.com/p/79b685a83e16</guid>
            <category><![CDATA[test-automation]]></category>
            <category><![CDATA[tech]]></category>
            <category><![CDATA[automation]]></category>
            <category><![CDATA[testing]]></category>
            <dc:creator><![CDATA[Applitools]]></dc:creator>
            <pubDate>Fri, 05 Mar 2021 03:57:51 GMT</pubDate>
            <atom:updated>2021-03-05T18:44:57.772Z</atom:updated>
<content:encoded><![CDATA[<h4>Introduction</h4><p>Automated testing of <a href="https://acrobat.adobe.com/us/en/acrobat/about-adobe-pdf.html">PDFs</a> has traditionally been a challenging task in test automation. <a href="https://en.wikipedia.org/wiki/Bitmap">Bitmap imaging</a>, the legacy state-of-the-art approach, has proven unreliable. Most automation teams have focused on their end-to-end user tests and left PDF tests out of scope. Instead, they have relegated PDF validation to manual testers. And, alas, manual testing is prone to error.</p><p>In this article, we will review the requirements of PDF testing. We will also cover approaches to <a href="https://info.applitools.com/ucSZn">automating PDF tests</a> using <a href="https://info.applitools.com/uw9e">Applitools</a>.</p><h4>Why PDF Testing?</h4><p>More and more organizations are transforming their business processes into online applications. The digital operating model requires that documents be electronically produced and sent to their customers. Assume a customer visits an insurance company or a bank to open an account. Increasingly, this work occurs exclusively with electronic records. After a successful setup, the institution sends a digital copy of the record to the customer. PDF offers the most sophisticated document layout and the necessary security to serve as an electronic record. Account statements, invoices, receipts, documentation, and disclaimers get distributed as PDFs.</p><p>Organizations cannot own all possible UI failures. When users generate their own PDFs via the local operating system, they may encounter errors. But, when organizations generate receipts, account statements, and other documents for customer download, the organizations clearly own problems with those. And, in regulated industries, PDF problems can incur regulator wrath in addition to customer frustration.
Thus, testing the generated outputs is mandatory from both quality and legal perspectives.</p><p>Organizations use Applitools to uncover problems in their transactional or customer-related PDF documents.</p><h4>What to Automate?</h4><p>In regulated sectors like insurance, medical, and banking, end-user documents need to be accurate. Regulators who uncover inaccuracy in published documents can impose fines and other penalties. Organizations need to ensure that the PDFs are fully tested before being published to recipients.</p><p>Consider an application producing customer letters using a PDF template. Various sections of the PDF are dynamically updated with the customer data. A test for a PDF needs to verify that both the content and layout of the output document are correct.</p><p>When testing for layout, the document should be fully formed with:</p><ul><li>the specific sections present</li><li>in the right location and</li><li>in the right order.</li></ul><p>When testing for content, testers need to ensure that the dynamic customer data on each page is accurate.</p><p>A layout failure can impact the processing of the documents by downstream systems. Testers need to be able to check content on a page or at locations on all or selected pages.</p><h4>How Did Organizations Traditionally Do PDF Testing?</h4><p>Let us consider the following two-page policy document, which is published by a large telco provider with critical summary information about their services:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*rxenFnYLZJb-7mJy" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*TfY1Ic_qCIpn4Y3I" /></figure><p>Usually, organizations take the approach of validating the data using <a href="https://applitools.com/blog/test-your-service-apis/">API testing</a> and finally using libraries such as Apache PDFBox to test the data on a page. However, most organizations rely on manual testing to validate the full document output.
As organizations generate increasing numbers of electronic documents, they see the scope of effort needed for comprehensive testing. These organizations manually test only a sample of their PDF documents.</p><h4>Application of Visual AI in Testing PDFs</h4><p>Applitools is an AI-powered visual testing platform. Using a range of algorithms, Applitools compares images with the context of the human eye and the speed of a computer. Applitools can test any user interface with 99.99% accuracy. Applitools reports only real differences visible to the human eye. These include any changes to color, contrast, position, size, or content.</p><p>In the case of a PDF, Applitools lets users inspect the entire document, or just selected pages. The scope of comparison matches the needs of the test. Users can target specific sections of a PDF for testing and ignore sections that are not relevant to a test. And, armed with the Applitools layout algorithm, users can validate a PDF layout even if the internal text has changed.</p><h4>PDF Testing Solution</h4><p><a href="https://info.applitools.com/udaAc">Applitools PDF Tester</a> is a codeless utility for automating the PDF testing of any document using Visual AI. You can validate the content on a page or in a region across selected pages or all pages of the PDF.</p><p>Look at how the previous example can be tested using Applitools PDF Tester. Configure the test job as an XML file. Add the content validations as test assertions. You can create the XML manually, or you can build it programmatically using a script. Once completed, execute the PDF test using the Applitools PDF Test application. You can run the test from the command line using any batch process.</p><p>Following are the results:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*8CygYNZIS9-1KJRu" /></figure><p>The utility reports all the assertions in the PDF document and reports the result as ‘Passed’ or ‘Failed’.
In addition, the utility tests the fully formatted output document against a baseline and reports any differences discovered.</p><p>Logging into the Applitools dashboard, we can review all the differences spotted by AI:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*BNPWVlfWqbfgvNUL" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*vLHsJ7ojBWiOCrUl" /></figure><p>Here, AI is highlighting all the content, size, color, positioning, and font differences. Engineers can validate the differences, ensuring document accuracy prior to publication. We can further instruct the AI to test or ignore specific sections of the page by using annotations.</p><h4>Conclusion</h4><p>While organizations have largely automated web and mobile application tests, they have struggled to automate testing of PDFs. Applying AI to test the completed document has proven an effective way to automate PDF testing. By adding both full-document and dynamic-data tests, teams can incorporate PDF testing in their end-to-end test automation.</p><p><em>Originally published at </em><a href="https://info.applitools.com/udaAi"><em>https://applitools.com</em></a><em> on March 5, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=79b685a83e16" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Get A Jump Into GitHub Actions]]></title>
            <link>https://applitools.medium.com/get-a-jump-into-github-actions-cf711d6794fa?source=rss-803b5e1025b0------2</link>
            <guid isPermaLink="false">https://medium.com/p/cf711d6794fa</guid>
            <category><![CDATA[test-automation]]></category>
            <category><![CDATA[testing]]></category>
            <category><![CDATA[github-actions]]></category>
            <category><![CDATA[github]]></category>
            <dc:creator><![CDATA[Applitools]]></dc:creator>
            <pubDate>Tue, 02 Mar 2021 16:24:17 GMT</pubDate>
            <atom:updated>2021-03-02T21:24:02.904Z</atom:updated>
            <content:encoded><![CDATA[<p>On January 27, 2021, <a href="https://www.linkedin.com/in/angiejones/">Angie Jones</a> of Applitools hosted <a href="https://www.linkedin.com/in/brianldouglas/">Brian Douglas</a>, aka “bdougie”, Staff Developer Advocate at GitHub, for a webinar to help you jump into <a href="https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions">GitHub Actions</a>. You can watch the <a href="https://youtu.be/fW6dUuNr0gg">entire webinar on YouTube</a>. This blog post goes through the highlights for you.</p><h4>Introductions</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*J431TzdOOHeK-xKG" /></figure><p>Angie Jones serves as Senior Director of Test Automation University and as Principal Developer Advocate at Applitools. She tweets at @techgirl1908, and her website is <a href="https://angiejones.tech">https://angiejones.tech</a>.</p><p>Brian Douglas serves as the Staff Developer Advocate at GitHub. Insiders know him as the “Beyoncé of GitHub.” He blogs at <a href="https://bdougie.live">https://bdougie.live</a>, and tweets as @bdougieYO.</p><p>They ran their webinar as a question-and-answer session. Here are some of the key ideas covered.</p><h4>What Are GitHub Actions?</h4><p>Angie’s first question asked Brian to jump into GitHub Actions.</p><p>Brian explained that GitHub Actions is a feature you can use to automate actions in GitHub. GitHub Actions let you code event-driven automation inside GitHub. You build monitors for events, and when those events occur, they trigger workflows.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*SOUhBIDppdoOW0Qo" /></figure><p>If you’re already storing your code in GitHub, you can use GitHub Actions to automate anything you can access via webhook from GitHub. 
As a result, you can build and manage all the processes that matter to your code without leaving GitHub.</p><h4>Build Test Deploy</h4><p>Next, Angie asked about “Build, Test, Deploy,” the phrase she hears most frequently in connection with GitHub Actions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*pkjgeYg3dvg1flRH" /></figure><p>Brian mentioned that the term GitOps describes the idea that a push to GitHub drives some kind of activity. A user adding a file should initiate other actions based on that file. External software vendors have built their own hooks to drive things like continuous integration with GitHub. GitHub Actions simplifies these integrations with native functionality now built into GitHub.com.</p><p>Brian explained how GitHub Actions can launch a workflow. He gave the example of a team with an existing JavaScript test suite in Jest, run with either npm test or jest. With GitHub Action workflows, the development team can automate actions based on a triggering event. In this case, pushing the JavaScript file can drive GitHub to execute the test.</p><h4>Get Back To What You Like To Do</h4><p>Angie pointed out that this catchphrase, “Get back to what you like to do,” caught her attention. She spends lots of time in meetings and doing other tasks when she’d really just like to be coding. So, she asked Brian, how does that work?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*e4Ar0JEQ2tovh5LJ" /></figure><p>Brian explained that, as teams grow, much more of the work becomes coordination and orchestration. Leaders have to answer questions like:</p><ul><li>What should happen during a pull request?</li><li>How do we automate testing?</li><li>How do we manage our build processes?</li></ul><p>When engineers have to answer these questions with external products and processes, they stop coding. With GitHub Actions, Brian said, you can code your own workflow controls. 
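</p><p>A workflow like the Jest example Brian described is just a YAML file checked into the repository under .github/workflows/. Here is a minimal sketch; the file name, Node version, and npm scripts are illustrative assumptions:</p>

```yaml
# .github/workflows/ci.yml -- run the Jest suite on every push or pull request
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2     # fetch the repository contents
      - uses: actions/setup-node@v2   # install Node.js
        with:
          node-version: '14'
      - run: npm ci                   # install dependencies from the lockfile
      - run: npm test                 # runs Jest
```

<p>With this file in place, every push and pull request triggers the Jest suite automatically, with no external CI product involved. 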
You can ensure consistency by coding the actions yourself. And, by using GitHub Actions, you make the processes transparent for everyone on the team.</p><p>Do you want a process to call Applitools? That’s easy to set up.</p><p>Brian explained that GitHub hosted a GitHub Actions Hackathon in late 2020. The team coded the controls for the submission process into the hackathon. You can still check it out at <a href="https://githubhackathon.com/">githubhackathon.com</a>.</p><p>The entire submission process was automated to check that all the proper files were included in a submission. The code automatically recognized completed submissions on the hackathon home page.</p><p>Brian then gave the example of work he did on the GitHub Hacktoberfest in October. For the team working on the code, Brian developed a custom workflow that allowed any authenticated individual to sign up to address issues exposed in the Hackathon. Brian’s code latched onto existing authentication code to validate that individuals could participate in the process and assigned their identity to the issue. As the developer, Brian built the workflow for these tasks using GitHub Actions.</p><p>What can you automate? Inform your team when a user opens a pull request. Send a tweet when the team releases a build. Anything reachable through a GitHub webhook can be automated with GitHub Actions. For example, you can even automate the nag emails that get sent out when a pull request review does not complete within a specified time.</p><h4>Common Actions</h4><p>Angie then asked about the most common actions that Brian sees users running.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Vi3xAleK1SU2MOWq" /></figure><p>Brian summarized by saying, basically, continuous integration (CI). The most common use is ensuring that test suites get run against code as it gets checked in. 
You can have tests run when you push code to a branch, push code to a release branch or do a release, or even when you open a pull request.</p><p>While test execution gets run most frequently, there are plenty of tasks that one can automate. Brian did something specific to assign gifts to team members who reviewed pull requests. He also used a cron job to automate a GitHub Action which opened up a global team issue each Sunday US time (which happens to be Monday in Australia) and assigned all the team members to this issue. Each member needed to explain what they were working on. This way, the globally-distributed team could stay on top of their work together without a meeting that would occur at an awkward time for at least one group of team members.</p><p>Brian talked about people coming up with truly creative use cases — like someone linking IoT devices to webhooks in existing APIs using GitHub Actions.</p><p>But the cool part of these actions is that most of them are open source and searchable. Anyone can inspect actions and, if they don’t like them, modify them. If a repo includes GitHub Actions, those workflows are searchable.</p><p>On github.com/bdougie, you can see existing workflows that Brian has already put together.</p><h4>Jump Into GitHub Actions — What Next?</h4><p>I shared some of the basic ideas in Brian’s conversation with Angie. If you want to jump into GitHub Actions in more detail, you can check out the full webinar and the slides in <a href="https://info.applitools.com/uc90i">Addie Ben Yehuda’s summary blog for the webinar</a>. That blog also includes a number of Brian’s links.</p><p>Enjoy jumping into GitHub Actions!</p><p><em>Originally published at </em><a href="https://info.applitools.com/uc90j"><em>https://applitools.com</em></a><em> on March 2, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cf711d6794fa" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Test Automation University is now 75,000 students strong]]></title>
            <link>https://applitools.medium.com/test-automation-university-is-now-75-000-students-strong-bc9764b26c82?source=rss-803b5e1025b0------2</link>
            <guid isPermaLink="false">https://medium.com/p/bc9764b26c82</guid>
            <category><![CDATA[testing]]></category>
            <category><![CDATA[api]]></category>
            <category><![CDATA[dev]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[test-automation]]></category>
            <dc:creator><![CDATA[Applitools]]></dc:creator>
            <pubDate>Tue, 23 Feb 2021 09:32:28 GMT</pubDate>
            <atom:updated>2021-02-23T20:55:06.085Z</atom:updated>
            <content:encoded><![CDATA[<p>What does it take to make a difference in the lives of 75,000 people?</p><p>Applitools has reached 75,000 students enrolled in <a href="https://info.applitools.com/uwi7">Test Automation University</a>, a global online platform led by <a href="https://info.applitools.com/uwi8">Angie Jones</a> that provides free courses on all things test automation. Today, more engineers understand how to create, manage, and maintain automated tests.</p><h4>What Engineers Have Learned on TAU</h4><p>Engineers have learned how to automate UI, mobile, and API tests. They have learned to write tests in specific languages, including Java, JavaScript, Python, Ruby, and C#. They have applied tests through a range of frameworks including Selenium, Cypress, WebdriverIO, TestCafe, Appium, and Jest.</p><p>A group of 75,000 engineers would exceed the population of some 19,000 cities and towns in the United States. They work at large, established companies and growing startups. They work on every continent with the possible exception of Antarctica.</p><p>What makes Test Automation University possible? Contributors, who create all the coursework.</p><h4>Thank You, Instructors</h4><p>As of this writing, Test Automation University consists of 54 courses taught by 39 different instructors. Each instructor has contributed knowledge and expertise. 
You can find the list of authors on the <a href="https://info.applitools.com/uwi7">Test Automation University home page</a>.</p><p>Here are the instructors of the most recently added courses to TAU.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/964/1*0K_-tAA637s66mPRsqf7IA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/956/1*kMZRt2tUcZbmVif9QLVh3A.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/954/1*FRCZSbATq6CZfOziRrUhcQ.png" /></figure><h4>Thank You, Students</h4><p>As engineers and thinkers, the students continue to expand their knowledge through TAU coursework.</p><p>Each course contains quizzes of several questions per chapter. Each student who completes a course gets credit for questions answered correctly. Students who have completed the most courses and answered the most questions successfully make up <a href="https://info.applitools.com/uc8IL">the TAU 100</a>.</p><p>Some of the students who lead on the TAU 100 include:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/738/1*PfDSD1FauX7WR0mK7GalVA.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/724/1*co7inFjg9paj12zmnU12Kg.png" /></figure><h4>Join the 75K!</h4><p>Get inspired by the engineers around the world who are learning new test automation skills through Test Automation University.</p><p>Through the courses on TAU, you’ll not only learn how to automate tests, but more importantly, you’ll learn to eliminate redundant tests, add automation into your continuous integration processes, and make your testing an integral part of your build and delivery processes.</p><p>Learn a new language. Pick up a new testing framework. 
Know how to automate tests for each part of your development process — from unit and API tests through user interface, on-device, and end-to-end tests.</p><p>No matter what you learn, your new skills in improving quality through automation will make you more valuable to your team and company.</p><p><em>Originally published at </em><a href="https://info.applitools.com/uc8IN"><em>https://applitools.com</em></a><em> on February 23, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bc9764b26c82" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Visual Testing for Mobile Apps]]></title>
            <link>https://applitools.medium.com/visual-testing-for-mobile-apps-cf4fbac5f40a?source=rss-803b5e1025b0------2</link>
            <guid isPermaLink="false">https://medium.com/p/cf4fbac5f40a</guid>
            <category><![CDATA[testing]]></category>
            <category><![CDATA[mobile-apps]]></category>
            <category><![CDATA[test-automation]]></category>
            <category><![CDATA[mobile-app-development]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Applitools]]></dc:creator>
            <pubDate>Mon, 22 Feb 2021 15:34:00 GMT</pubDate>
            <atom:updated>2021-02-22T18:16:23.045Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>This article was written by Applitools senior developer advocate, </strong><a href="https://twitter.com/techgirl1908"><strong>Angie Jones</strong></a><strong>.</strong></p><p>A common question is, “Does Applitools visual testing work on mobile apps?” The answer is a resounding <a href="https://info.applitools.com/uDlO">YES</a>! In this post, I’ll automate a couple of scenarios and show you the power of visual testing.</p><h4>The App</h4><p>We’re going to work with a Todo App that allows us to add tasks. A cool feature of this app is the ability to add colors to tasks, which is a great way to designate categories for our various types of tasks.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/506/0*jOvHtlXlk-wUpm3w.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/0*E8mb95oQBBLaOY1e.png" /></figure><h4>Scenario #1: Add multiple tasks of different categories</h4><p>Our first scenario is to add four tasks, one for each of the different colors/categories. So here, we declare the four tasks and add them to the app: one each for purple, blue, yellow, and pink. Next, we verify that the tasks were indeed added.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/1c64fc41c1288d788afb5b06d6942fe5/href">https://medium.com/media/1c64fc41c1288d788afb5b06d6942fe5/href</a></iframe><p>However, a key aspect of this scenario is to verify that the tasks were added as the right color. This is a native app, so we can’t make a call to get the CSS value, as that is unsupported. And when looking in the Inspector, there was nothing there that identified these tasks by color. Fortunately, visual testing can help us with this!</p><h4>What is Visual Testing?</h4><p>Visual testing is an automated technique for verifying the state of your application by using snapshots. 
This means your tests are not limited to interrogating the DOM to determine status, as most other automation tools do. Instead, with visual testing, a screenshot of your application is taken on every regression run and compared against a baseline image. In effect, you view and verify your application as your users would!</p><p>Applitools’ visual testing API is called <em>Eyes</em>. The <em>Eyes</em> API uses AI to compare the images and is much more reliable than all other forms of visual testing, namely the notoriously flaky pixel-to-pixel and DOM-based approaches offered by other vendors.</p><h4>Adding Visual Testing</h4><p>The <em>Eyes</em> API can be added to your existing tests, so there’s no need to throw out your test suite and start over! Let’s add visual testing to Scenario #1 so that we can make sure the tasks were added with the right colors.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/61f6d7d10d9555bc86168cfe37a070a3/href">https://medium.com/media/61f6d7d10d9555bc86168cfe37a070a3/href</a></iframe><p>Here we’ve added visual testing with 4 lines of code! 😲</p><p>On line 21, we’re simply initializing the <em>Eyes</em> API. Line 22 is where we tell <em>Eyes</em> that we would like to begin visually testing.</p><p>Line 23 is where the magic happens — we take a screenshot of our app. If this is the first time the test is run, this will become the baseline image. If there is already a baseline image, a new image will be taken and compared to the baseline image to determine if there are any differences. Notice the arguments sent into this call. This is telling Eyes to ignore the status bar region of our app because it is dynamic and will contain a timestamp and notifications. 
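</p><p>To make the baseline-plus-ignore-region idea concrete, here is a deliberately simplified, pixel-level sketch in Python. This is <em>not</em> how Eyes works internally (Eyes uses AI-based comparison rather than raw pixel diffing); it only illustrates comparing a new screenshot against a baseline while masking a dynamic region such as the status bar:</p>

```python
# Simplified illustration of a baseline comparison with an ignore region.
# Screenshots are modeled as 2D lists of pixel values for clarity.

def differs(baseline, current, ignore_regions):
    """Return True if any pixel outside the ignore regions changed."""
    for y, (base_row, cur_row) in enumerate(zip(baseline, current)):
        for x, (base_px, cur_px) in enumerate(zip(base_row, cur_row)):
            ignored = any(x0 <= x < x1 and y0 <= y < y1
                          for (x0, y0, x1, y1) in ignore_regions)
            if not ignored and base_px != cur_px:
                return True
    return False

baseline = [[0, 0, 0],
            [1, 1, 1]]
current  = [[9, 0, 0],   # a change in the top row (our "status bar")
            [1, 1, 1]]

status_bar = (0, 0, 3, 1)  # region (x0, y0, x1, y1) covering the whole top row
print(differs(baseline, current, [status_bar]))  # masked, so no difference
print(differs(baseline, current, []))            # unmasked, difference found
```

<p>Eyes handles all of this (far more robustly, and at the level of visible elements rather than raw pixels) from that single check call. 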
So this part of the app will be ignored every time the test runs!</p><p>Finally, on line 24, we’re done, so we can close our <em>Eyes</em>. At this point, the screenshots are sent to the Applitools cloud, analyzed, and stored in a nice dashboard. In the event of a failure, your team can easily review the images to determine where the application has changed, and annotate the image with bugs and remarks.</p><p>But wait…</p><p>We added visual testing to verify the colors of the tasks. But it’s doing so much more. It’s verifying the entire screen (sans the status bar, which we explicitly said to ignore). So that means all of our assertions above are already covered by <em>Eyes</em> and we can actually DELETE CODE!!! 🤯 In fact, we’ve deleted almost HALF of the code from this one test alone — with MORE coverage!!! Woohooo 🎉</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/437fa5a60bfa1c19196112f0959e0e61/href">https://medium.com/media/437fa5a60bfa1c19196112f0959e0e61/href</a></iframe><p>Let’s try another scenario.</p><h4>Scenario #2: Complete a task from the list</h4><p>In our next scenario, we need to complete one of the tasks by clicking on it, and then verify that it is crossed off.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/9a5ee110ada8b3a843989814347186cc/href">https://medium.com/media/9a5ee110ada8b3a843989814347186cc/href</a></iframe><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/0*N2FfzmpcGa7kX27X.png" /></figure><p>Now when it’s time to verify, we run into the same issue we did in Scenario #1. The strikethrough on the completed task is a CSS effect, and cannot be verified using the Inspector. 
So, let’s use visual testing.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/bf64f525a3ef50a4ed4f4c7f07a8f67f/href">https://medium.com/media/bf64f525a3ef50a4ed4f4c7f07a8f67f/href</a></iframe><p>Now we’re covered! Ok, one more scenario…</p><h4>Scenario #3: Add long tasks</h4><p>For the last scenario, we want to add long tasks. What is the purpose of this test? It’s to ensure that on this mobile view, the text is displayed properly and doesn’t get cut off or bleed off the edge of the page.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/504/0*-amgW4iRBvn6OzUC.png" /></figure><p>But how can we make sure the text is displayed correctly without using visual testing? We can’t! We can make sure it exists. We can make sure it’s visible. But only visual testing will make sure that it’s showing up just as we intend it to.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/55bcc34a49eee80f6fa739ee1dec48d1/href">https://medium.com/media/55bcc34a49eee80f6fa739ee1dec48d1/href</a></iframe><h4>Visual Bugs Love Mobile Viewports</h4><p>Mobile testing is such a challenge. In addition to not being able to access style attributes from the Inspector, the presentation of your apps can vary across the many different viewport sizes. This makes mobile apps ripe for visual bugs! To make matters worse, traditional mobile automation tools only offer so much and are incapable of catching these visual bugs, and heck, even some of the functional ones, as we’ve seen in Scenarios 1 and 2. If you’re doing mobile app test automation, you <em>need</em> visual testing! 
Wait no more; create your <a href="https://info.applitools.com/uw9g">forever-free Applitools account</a> and improve your testing today!</p><p><em>Originally published at </em><a href="https://info.applitools.com/uc8sQ"><em>https://applitools.com</em></a><em> on February 22, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cf4fbac5f40a" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Automate your Appium setup]]></title>
            <link>https://applitools.medium.com/automate-your-appium-setup-29c55b833aa4?source=rss-803b5e1025b0------2</link>
            <guid isPermaLink="false">https://medium.com/p/29c55b833aa4</guid>
            <category><![CDATA[testing]]></category>
            <category><![CDATA[appium]]></category>
            <category><![CDATA[test-automation]]></category>
            <category><![CDATA[dev]]></category>
            <dc:creator><![CDATA[Applitools]]></dc:creator>
            <pubDate>Fri, 19 Feb 2021 01:38:10 GMT</pubDate>
            <atom:updated>2021-02-19T18:33:02.357Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*dLQQ9oizj4XUzi9L.jpg" /></figure><p>Appium is an open source test automation framework for use with native, hybrid, and mobile web apps.</p><p>It drives iOS, Android, and Windows apps using the WebDriver protocol and gives you the ability to automate any mobile app from any language and any test framework.</p><p>Appium released its first major version almost 7 years ago. Since then, Appium has rolled out a lot of new features and its automation backend architecture has evolved quite a lot.</p><p>That said, implementing automated tests and executing them using Appium for the first time can be a daunting experience.</p><p>There are a lot of dependencies that need to be set up — Node, the Android command-line tools, Appium, etc.</p><h4>Appium Setup Scripts</h4><p>To make this task easier, and to avoid having to do it manually, one step at a time, I have written a script that will set up the Appium-Android environment automatically for you. The script is available for Mac OSX and Ubuntu and will set up the latest available versions of the dependencies &amp; Appium itself.</p><p>You can find the scripts here:</p><p>MacOSX: <a href="https://github.com/anandbagmar/AppiumJavaSample/blob/master/setup_mac.sh">setup_mac.sh</a></p><p>Ubuntu: <a href="https://github.com/anandbagmar/AppiumJavaSample/blob/master/setup_linux.sh">setup_linux.sh</a></p><p>These setup scripts install all the dependencies required for implementing and running tests on Android devices. 
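</p><p>As a taste of what such a script has to verify, here is a small pre-flight check in the spirit of appium-doctor. It is illustrative only, not an excerpt from the linked scripts; it simply reports which common Appium-for-Android prerequisites are already on your PATH:</p>

```python
# Hypothetical pre-flight check (not part of the linked setup scripts):
# report which common Appium-for-Android prerequisites are already installed.
import shutil

TOOLS = ["node", "npm", "java", "adb", "appium"]

missing = [tool for tool in TOOLS if shutil.which(tool) is None]
for tool in TOOLS:
    path = shutil.which(tool)  # None if the executable is not on the PATH
    print(f"{tool}: {'found at ' + path if path else 'MISSING'}")

if missing:
    print("Install the missing tools before running Appium tests:", ", ".join(missing))
else:
    print("All prerequisites found.")
```

<p>Each MISSING line corresponds to something the setup scripts would install for you. 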
To do the setup for iOS devices, run <strong><em>appium-doctor</em></strong> to see the list of missing dependencies, and install them.</p><p>To ensure the setup is working properly, you can clone or download this sample repository ( <a href="https://github.com/anandbagmar/AppiumJavaSample">https://github.com/anandbagmar/AppiumJavaSample</a>), which has two tests that can be executed to verify the setup.</p><p>Refer to the <a href="https://github.com/anandbagmar/AppiumJavaSample/blob/master/README.md">README</a> file for the specific prerequisites for the setup and for instructions on running the tests.</p><p>To get the full value of your functional test automation, add the power of Visual AI to your tests. You can find the tutorial for adding Applitools Visual AI to your Appium tests <a href="https://info.applitools.com/uDlO">here</a>.</p><h4>Summary</h4><p>Automate your Appium-Android test execution environment setup using the provided scripts for MacOSX ( <a href="https://github.com/anandbagmar/AppiumJavaSample/blob/master/setup_mac.sh">setup_mac.sh</a>) and Ubuntu ( <a href="https://github.com/anandbagmar/AppiumJavaSample/blob/master/setup_linux.sh">setup_linux.sh</a>).</p><p><em>Originally published at </em><a href="https://info.applitools.com/uc79Z"><em>https://applitools.com</em></a><em> on February 19, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=29c55b833aa4" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Thunderhead Speeds Quality Delivery with Applitools]]></title>
            <link>https://applitools.medium.com/thunderhead-speeds-quality-delivery-with-applitools-7a19f4920e2f?source=rss-803b5e1025b0------2</link>
            <guid isPermaLink="false">https://medium.com/p/7a19f4920e2f</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[dev]]></category>
            <category><![CDATA[test-automation]]></category>
            <category><![CDATA[tech]]></category>
            <dc:creator><![CDATA[Applitools]]></dc:creator>
            <pubDate>Tue, 16 Feb 2021 07:15:36 GMT</pubDate>
            <atom:updated>2021-02-16T16:00:52.942Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*XE2W9uBeqtrj-M3e.jpg" /></figure><p>Thunderhead is the recognised global leader in the Customer Journey Orchestration and Analytics market. The ONE Engagement Hub helps global brands build customer engagement in the era of digital transformation.</p><p>Thunderhead provides its users with great insights into customer behavior. To continue to improve user experience with their highly-visual web application, Thunderhead develops continuously. How does Thunderhead keep this visual user experience working well? A key component is Applitools.</p><h4>Before — Using Traditional Output Locators</h4><p>Prior to using Applitools, Thunderhead drove its UI-driven tests with Selenium for browser automation and Python as the primary test language. They used traditional web element locators both for setting test conditions and for measuring the page responses.</p><p>Element locators have been state-of-the-art for measuring page response because of precision. Locators get generated programmatically. Test developers can find any visual structure on the page as an element.</p><p>Depending on page complexity, a given page can have dozens, or even hundreds, of locators. Because test developers can inspect individual locators, they can choose which elements they want to check. But, locators limit inspection. If a change takes place outside the selected locators, the test cannot find the change.</p><p>These output locators must be maintained as the application changes. Unmaintained locators can cause test problems by reporting errors because the locator value has changed while the test has not. Locators may also remain the same but reflect a different behavior not caught by a test.</p><p>Thunderhead engineers knew about pixel diff tools for visual validation. 
They also had experience with those tools; they had concluded that pixel diff tools would be unusable for test automation because of the frequency of false positives.</p><h4>Introducing Applitools at Thunderhead</h4><p>When Thunderhead started looking to improve their test throughput, they came across Applitools. Thunderhead had not considered a visual validation tool, but Applitools made some interesting claims. The engineers thought that AI might be marketing buzz, but they were intrigued by a tool that could abstract pixels into visible elements.</p><p>As they began using Applitools, Thunderhead engineers realized that Applitools gave them the ability to inspect an entire page. Not only that, Applitools would capture visual differences without yielding bogus errors. Soon they realized that Applitools offered more coverage than their existing web locator tests, with less overall maintenance because of reduced code.</p><p>The net benefits included:</p><ul><li>Coverage — Thunderhead could write tests for each visible on-page element on every page</li><li>Maintainability — By measuring the responses visually, Thunderhead did not have to maintain all the web element locator code for the responses — reducing the effort needed to maintain tests</li><li>Visual Validation — Applitools helped Thunderhead engineers see the visual differences between builds under test, highlighting problems and aiding problem-solving.</li><li>Faster operation — Visual validation ran more quickly than traditional web element locator checks.</li></ul><h4>Moving Visual Testing Into Development</h4><p>After using Applitools in end-to-end testing, Thunderhead realized that Applitools could help in several areas.</p><p>First, Applitools could help with development. Often, when developers made changes to the user interface, unintended consequences could show up at check-in time. 
However, by waiting for end-to-end tests to expose these issues, developers often had to stop existing work and shift context to repair older code. By moving visual validation to check-in, Thunderhead could make developers more effective.</p><p>Second, developers often waited to run their full suite of element locator tests until the final build. These tests ran against multiple platforms, browsers, and viewports. The net test run would take several hours. The equivalent test using Applitools took five minutes. So, Thunderhead could run these tests with every build.</p><p>For Thunderhead, the net result was both greater coverage and tests run at the right time for developer productivity.</p><h4>Adding Visual Testing to Component Tests</h4><p>Most recently, Thunderhead has seen the value of using a component library in their application development. By standardizing on the library, Thunderhead looks to improve development productivity over time. Components ensure that applications provide consistency across different development teams and use cases.</p><p>To ensure component behavior, Thunderhead uses Applitools to validate the individual components in the library. Thunderhead also tests the components in mocks that demonstrate the components in typical deployment use cases.</p><p>By adding visual validation to components, Thunderhead expects to see visual consistency validated much earlier in the application development cycle.</p><h4>Other Benefits From Applitools</h4><p>Beyond the benefits listed above, Thunderhead has seen the virtual elimination of visual defects found through end-to-end testing. The check-in and build tests have exposed the vast majority of visual behavior issues during the development cycle. They have also made developers more productive by eliminating the context switches previously needed if bugs were discovered during end-to-end testing. 
As a result, Thunderhead has gained greater predictability in the development process.</p><p>In turn, Thunderhead engineers have gained greater agility. They can try new code and behaviors and know that visual validation will catch any unexpected behaviors. In doing so, they are learning previously unexplored dependencies in their code base. As they expose these dependencies, Thunderhead engineers gain greater control of their application delivery process.</p><p>With predictability and control comes confidence. Using Applitools has given Thunderhead increased confidence in the effectiveness of their design processes and product delivery. With Applitools, Thunderhead knows how customers will experience the ONE platform and how that experience changes over time.</p><p><em>Originally published at </em><a href="https://info.applitools.com/uc7qP"><em>https://applitools.com</em></a><em> on February 16, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7a19f4920e2f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Skeptics Who Recommend Cross Browser Testing]]></title>
            <link>https://applitools.medium.com/skeptics-who-recommend-cross-browser-testing-907d630d7395?source=rss-803b5e1025b0------2</link>
            <guid isPermaLink="false">https://medium.com/p/907d630d7395</guid>
            <category><![CDATA[test-automation]]></category>
            <category><![CDATA[dev]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[cross-browser-testing]]></category>
            <category><![CDATA[testing]]></category>
            <dc:creator><![CDATA[Applitools]]></dc:creator>
            <pubDate>Fri, 12 Feb 2021 15:50:42 GMT</pubDate>
            <atom:updated>2021-02-12T17:39:48.355Z</atom:updated>
            <content:encoded><![CDATA[<p>Who recommends cross browser testing to their organizations?</p><p>In this series, we discuss the results of the <a href="https://info.applitools.com/uKtb">Applitools Ultrafast Cross Browser Hackathon</a>. Today, we will explain how even those who were skeptical of cross browser testing would recommend that their organizations run <a href="https://info.applitools.com/uDTw">Applitools Ultrafast Grid</a> for cross browser testing.</p><h4>Reviewing Past Posts In This Series</h4><p>In this series we have covered the results of the Applitools Ultrafast Cross Browser Hackathon.</p><ul><li>My <a href="https://info.applitools.com/uc3Kt">first post</a> covered the overall results. I wrote specifically about the ease of creating cross browser tests with Applitools and introduced the topics to follow.</li><li>My <a href="https://info.applitools.com/uc4Ge">second post</a> covered how the speed of Applitools Ultrafast Grid makes cross browser testing a reality within the application build process.</li><li><a href="https://info.applitools.com/uc5Kw">In the third</a>, I explained how Applitools Visual AI tests provide much greater code stability compared with legacy cross browser tests, making the code easier to develop and maintain over time.</li></ul><p>With this post, I will cover the survey topic in the hackathon: would Hackathon participants recommend the legacy cross browser testing approach to their organizations, and would they recommend Applitools Ultrafast Grid for that purpose?</p><h4>Methodology</h4><p>For this survey, Applitools used an approach called <a href="https://www.netpromoter.com/know/">Net Promoter Score</a> (NPS). 
Net Promoter Score uses the survey question with the highest correlation to satisfaction:</p><p>“On a scale of 0 to 10, with 0 being not likely and 10 being highly likely, how likely are you to recommend [the survey object] to others?”</p><p>Respondents who give a 9 or 10 (highly likely to recommend) get classified as “promoters.” Promoters have high satisfaction. Those who give a 7 or 8 get classified as neutral — they are neither satisfied nor dissatisfied. Those who give a 6 or below get classified as detractors. Detractors have been dissatisfied with something and have no willingness to recommend the survey object.</p><p>To compute the score, tally the responses: add 1 for each 9 or 10, add 0 for each 7 or 8, and subtract 1 for each 6 or below. Then normalize to a 100-point scale by dividing the tally by the number of respondents and multiplying by 100.</p><p>Results can range from -100 to 100. According to Bain &amp; Co, the developers of NPS:</p><ul><li>80 rates as ‘world class’</li><li>50 rates as ‘excellent’</li><li>20 rates as ‘favorable’</li><li>Around 0 rates as ‘neutral’</li><li>-10 and lower rates as ‘negative’</li></ul><h4>Asking Hackathon Participants</h4><p>Applitools surveyed the 2,224 Hackathon participants about their willingness to recommend the use of Applitools Ultrafast Grid to their peers. The survey also asked about their willingness to recommend the use of legacy cross browser testing. Of the participants, 203 engineers were able to run both the legacy and the Ultrafast cross browser tests.</p><p>Here were the survey responses:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/762/0*G1cK9GJUdlNTBZAm" /></figure><h4>Few Fans Of Traditional Cross Browser Testing</h4><p>For legacy cross browser testing, using a traditional test application and validation process, 68% of participants got classified as detractors. 
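<p>The scoring arithmetic above can be sketched in a few lines of Python (an illustrative example, not Applitools code; the response counts below are hypothetical round numbers based on the reported percentages):</p>

```python
def nps(scores):
    """Net Promoter Score for a list of 0-10 survey responses."""
    tally = 0
    for s in scores:
        if s >= 9:      # promoter: +1
            tally += 1
        elif s <= 6:    # detractor: -1
            tally -= 1
        # 7 or 8 is passive and contributes 0
    return round(100 * tally / len(scores))

# 100 hypothetical responses matching the legacy-testing split reported here:
# 15 promoters, 17 passives, 68 detractors
print(nps([10] * 15 + [8] * 17 + [3] * 68))  # -> -53
```

<p>The small difference from the -54 reported below reflects rounding: the published score comes from the exact response counts rather than whole-number percentages.</p>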
17% got classified as passives. Only 15% promoted the legacy approach. This gave an NPS of -54 (rounded down). Participants were, in general, not fans of legacy cross browser testing.</p><p>This result mirrors how little the legacy approach to cross browser testing actually gets used. Companies listed in the graphic above provide the infrastructure to run tests. They don’t reduce the test load, or the code load. Their pricing reflects the lower cost for them to set up and maintain that infrastructure of devices, browsers, operating systems, and viewport sizes. Given the cost of setting up and maintaining legacy cross browser tests, it makes sense that few companies use cross browser testing actively.</p><h4>Willing to Recommend Applitools Ultrafast Grid for Cross Browser Testing</h4><p>The survey also asked participants about their willingness to recommend Applitools Ultrafast Grid for cross browser testing. Here, 75% gave a 10 or 9 and got classified as promoters. Another 20% responded 7 or 8 and got classified as neutral.</p><p>The promoters valued:</p><ul><li>Speed of tests — noting they could run their tests in the build process</li><li>Simplicity of management — no tests needed to be run and tuned across multiple platforms</li><li>Simplified code management — fewer locators made tests easier to set up and manage</li><li>Accuracy — the underlying Visual AI wasn’t plagued by false positives and caught all the code errors</li></ul><p>The promoters discovered the ease of creating and maintaining cross browser tests using Applitools Ultrafast Grid. The promoters also realized that, with tests completed and analyzed accurately in well under 10 minutes, Applitools Ultrafast Grid made it possible to run cross browser tests in the scope of a build or during unit tests. 
Legacy tests, even when run in parallel, took tens of minutes to complete and analyze.</p><h4>Implications of Recommending Cross Browser Testing</h4><p>If you read my earlier posts, you know that two camps existed related to cross browser testing. One camp ran cross browser tests because they had encountered issues in the past and saw cross browser tests as a safe approach. The other camp avoided it altogether and thought cross browser testing unnecessary.</p><p>There is a third camp using Applitools Ultrafast Grid. This camp recognizes that the combination of:</p><ul><li>test speed appropriate for software build processes</li><li>ease of deployment</li><li>lack of infrastructure to manage, and</li><li>simplified test code management</li></ul><p>makes cross browser testing feasible. This third camp can deploy Applitools Ultrafast Grid to run and validate rendering behavior for any combination of browser, operating system, and viewport size and run this test set quickly at the unit, build, merge, and final test timeframes.</p><p>What then are the implications of the Applitools Ultrafast Cross Browser Hackathon? 75% of the 203 engineers who completed both sets of tests could see the value of Applitools Ultrafast Grid just by using it. And they would be willing to recommend it. Whether or not they had previously released a bug to the field that cross browser testing could have caught, Ultrafast Grid changed their understanding of this kind of testing completely.</p><p>Applitools users know that Applitools Visual AI makes it possible to run visual tests as part of their unit tests. These users can incorporate visual tests at every build and merge. 
And with Applitools Ultrafast Grid, they can incorporate cross browser tests as well.</p><p>Importantly, Hackathon participants learned these lessons just through their time on the Hackathon.</p><h4>What These Recommendations Mean For You</h4><p>When you have 75% of engineers recommending something, it might be worth trying. Whether you classify yourself as a cross browser skeptic or a grudging participant, it is clear that Applitools Ultrafast Grid offers something that differs from your current conception of cross browser testing.</p><p>At the very least, read the Applitools results in detail. You will learn why the engineers gave these recommendations.</p><p>More importantly, why not give Applitools a try? Sign up for a <a href="https://info.applitools.com/uw9g">free account</a>, or, if you prefer, <a href="https://info.applitools.com/uw9h">request a demo</a> from an Applitools representative.</p><h4>Next Week</h4><p>Next week, in my last blog post in this series, I will help you draw your own conclusions about Modern Cross Browser Testing.</p><p><em>Originally published at </em><a href="https://info.applitools.com/uc6TV"><em>https://applitools.com</em></a><em> on February 12, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=907d630d7395" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>