Appium’s Pros, Cons & What The (Testing Framework) Future Might Look Like


Since Perfecto is sponsoring and attending the annual Appium 2018 conference today, it’s a great time to take a look at some pros and cons, how Appium stacks up against competing tools, and what might be around the corner.

Benefits of Appium

What’s great about Appium:

  • It has a strong active open source community
    • Appium is by far the leading open-source test framework for cross-platform (mobile) native test automation (iOS, Android)
    • Appium is consistently backed by a large, very dynamic community, with steady support, commits, etc.
  • It has strong support for end-to-end testing in multiple programming languages
    • Appium provides support for multiple development languages through Remote WebDriver language bindings (Java, JavaScript, Perl, Python, C#)
    • Appium can cover black box end-to-end test flows including outside-the-app scenarios (e.g. initiating a call, sending a text message)
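As a rough illustration of those Remote WebDriver language bindings, the sketch below builds the desired-capabilities payloads an Appium client would send for Android and iOS sessions. The server URL, device names, and app paths are placeholders, and the actual connection is left commented out since it requires a running Appium server.

```python
# Sketch only: capability names follow the Appium desired-capabilities
# convention; device names and app paths below are placeholders.
ANDROID_CAPS = {
    "platformName": "Android",
    "deviceName": "Android Emulator",   # placeholder device
    "automationName": "Espresso",       # or "UiAutomator2"
    "app": "/path/to/app.apk",          # placeholder path
}

IOS_CAPS = {
    "platformName": "iOS",
    "deviceName": "iPhone Simulator",   # placeholder device
    "automationName": "XCUITest",
    "app": "/path/to/app.ipa",          # placeholder path
}

def session_request(caps):
    """Shape of the JSON body a Remote WebDriver client posts to /session."""
    return {"desiredCapabilities": caps}

# With a running Appium server, a session would be started like this:
# from appium import webdriver
# driver = webdriver.Remote("http://localhost:4723/wd/hub", ANDROID_CAPS)
```

Because the bindings speak the same wire protocol, the identical test logic can drive Java, JavaScript, Python, or C# clients.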

Challenges with Appium

Some areas where Appium could be better:

  • Setting up Appium locally can be a challenge: Teams are required to download, install and configure their environment which means having a local Android and iOS device available and connected.
  • Working with app objects for both iOS and Android isn’t easy, and is among the top challenges reported by practitioners (below). Since Appium relies on XCUITest (iOS) and Espresso (Android) under the hood, users need to be familiar with each platform’s object structure and know how to use the Appium object spy correctly.
  • Slow test execution. Tests can be slow due to the Remote WebDriver dependency, network latency, and command processing.
  • Test framework stability – Stability issues can occur when executing through CI or at scale and in parallel.
  • Test automation coverage and keeping up with the latest mobile OS – Fully covering gestural inputs, environment conditions, device settings and more, as well as immediately supporting the latest beta and GA versions of iOS and Android, is hard. Community-supported solutions frequently move slower than handset vendor and mobile OS innovations.

Top Challenges from users on the Appium discussion board

Comparison of Mobile App Testing Frameworks

Despite Appium’s leadership today, DevOps teams are also adopting Espresso and XCUITest.

Since there is no perfect testing framework, your needs might be best met by mixing various test frameworks across the DevOps pipeline.

Here’s a comparison of the leading testing frameworks:

Where is Mobile Application Testing Headed?

My testing framework intuition tells me that:

  • Appium will share more of the testing framework market with Espresso and XCUITest.
  • Functional testing using Espresso/XCUITest will become part of commit-triggered build testing.
  • Full end-to-end testing using Appium will be leveraged during full regression testing.
  • Appium stability will improve and execution times will shrink – it will get better & faster!
  • Hybrid test execution will become supported: Appium scripts will be able to trigger embedded Espresso/XCUITest tests.


Appium is great! It’s got:

  • A strong open source community
  • Outstanding support for a number of programming languages
  • The ability to handle end-to-end test flows

Appium is weak:

  • In test performance and stability
  • In keeping up with the latest OS features (e.g. gestural)
  • In setup time

XCUITest and Espresso also have strong user bases and help fill in the gaps where Appium falls short – so keep an eye out for those tools!

It will be interesting to hear the upcoming Appium roadmap at the Appium 2018 Conference. Sign up for our live webinar to hear our take on the current and future state of testing frameworks and how Appium might fit into your DevOps toolchain.

The post Appium’s Pros, Cons & What The (Testing Framework) Future Might Look Like appeared first on Perfecto Blog.



How to Improve Your Continuous Testing While Balancing Velocity, Coverage, and UX Risk


A stage-based methodology

Continuous testing is one of the keys to the DevOps kingdom. Your pipeline needs to move fast to keep up with ever-shrinking release schedules, but you can’t afford to sacrifice quality or UX in the name of speed. Since a “test everything” approach isn’t practical, development teams are left with a balancing act: during each stage of development, they must weigh how many scenarios to test against the time needed to generate meaningful results.

This blog focuses on a continuous testing methodology for determining which devices to test at each stage of development. The highest-performing teams are the ones whose game plans match target platforms with each development stage; this stage-specific testing strategy is fundamental to meeting your fast-feedback needs while ensuring a great UX.

Breaking Down the DevOps Team Processes by Stage:

  • Unit Testing
    • Developers execute unit tests to get fast feedback – “does the code I just wrote behave as expected? Is it ready for integration and more rigorous testing?”  Maximizing platform coverage in this stage is inefficient and unnecessary. In this early stage of development, unit tests executed before or after a commit often use emulators and simulators to provide a quick thumbs up or down on whether the code works. In later test phases, most top teams agree that moving to real devices is required to assure user experience.
  • Acceptance Testing
    • Teams typically focus on verifying that new functionality, as well as old, works according to the user story, and tests are executed over a large set of platforms that mirrors realistic customer usage patterns.
  • Test in Production
    • As many teams adopt DevOps, testing in production becomes part of the continuous testing scope. Once code ships, the objective changes from “does it work?” to “is it still working as expected?” Teams recognize the value of hourly testing of key flows as an early warning mechanism: early awareness of production issues jump-starts resolution efforts while (hopefully) few users are negatively impacted.

Factors: Your Coverage Crib Sheet for Continuous Testing

We’ve established that it’s important to know which platforms to test against, in which environments, and when to execute, in order to streamline the continuous testing process. Everyone involved in the product release should understand both the testing trigger points that must be defined in each stage and how their tests fit into the overall pipeline in order to meet project schedules and reduce UX risk. Perfecto’s Factors reference guide gives you a head start with guidelines for determining which platforms you need to cover and how to fit them into your DevOps process. The table below summarizes Perfecto’s research.

Our methodology:

  • Unit testing should be executed by devs on a small subset of platforms that may include emulators and simulators, and should be triggered pre- and post-commit locally against the developer workstation.
  • Build acceptance tests should be executed on a larger number of platforms (real devices and web platforms) daily and as part of the continuous integration (CI) process.
  • Acceptance tests should be executed on the full set of platforms in the lab to get maximum coverage and quality visibility. These cycles should run on a nightly basis, orchestrated by the CI process.
  • Production testing should run hourly and continuously to detect regression defects, outages, or performance degradations in the service. Such tests should not focus on maximum coverage of platforms;  select the top 2-3 platforms from the web and mobile and execute against these.
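The four stages above can be expressed as a simple coverage table. The mapping below just restates the methodology from the bullets as data; none of the values are an official configuration.

```python
# Illustrative stage-to-coverage mapping, mirroring the bullets above.
PIPELINE = {
    "unit":       {"platforms": "small subset (emulators/simulators)",
                   "trigger": "pre/post-commit, local"},
    "build":      {"platforms": "larger set (real devices + web)",
                   "trigger": "daily, via CI"},
    "acceptance": {"platforms": "full lab set",
                   "trigger": "nightly, via CI"},
    "production": {"platforms": "top 2-3 web and mobile platforms",
                   "trigger": "hourly, continuous"},
}

def trigger_for(stage):
    """Look up when a given stage's tests should run."""
    return PIPELINE[stage]["trigger"]
```

Encoding the plan as data like this makes it easy to generate CI job definitions from a single source of truth.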

Up Your Application Testing Game Plan

In this blog, we’ve taken a look at DevOps team quality objectives and highlighted the differences in coverage levels required at each stage. In addition, we provide a methodology for tailoring platform coverage for mobile and web for each development stage in order to enhance continuous testing and minimize UX risk.  Perfecto’s Factors reference guide provides invaluable insight into developing your testing methodology as well as the current data you need to make critical coverage decisions.  Whether you’re a dev tester, a developer, or an R&D manager, it’s a tool that you need in your toolbox.  Grab your copy today!



Location Intelligence On The Rise: Three Use Cases to Optimize Your UX


In the age of Digital, there is still enough room for the physical.

It is clear that enterprises wish to bring the digital user experience into the branch or office. Today, the mobile and digital banking experience is already part of our visit to the branch: banks are digitizing the in-branch customer experience by placing tablets, reducing paperwork and, therefore, also the time we spend there (we all like an instant experience, right?). Ensuring a perfect experience in this competitive reality is receiving top priority. In my journey to help digital enterprises perfect their user experience, I see a growing need not only from bankers and insurance agents, who are now more dependent on their tablets, but also from pilots, aircraft technicians and others who work in the field. They use internal apps and services, and the criticality of these services leaves no room for mistakes or glitches.

Marketing forces have recently introduced a new model that combines the digital and physical aspects of targeted exposure: Cost-Per-Visit (CPV). This business metric aims to encourage in-store consumption of services and goods, with brands paying publishers for ads only when a customer is exposed to them while visiting a specific location. In other words, the CPV model not only helps brands increase foot traffic and boost sales, but also fosters a more trusting relationship between brands, agencies and vendors.

So now that it’s clear that digital enterprises with physical branches are placing greater emphasis on location intelligence, the question is: how do you ensure quality across all of these different locations?

Proximity advertising is a good example, but how about a more basic one: finding the right branch? Does your service have the right ingredients to allow location-based search (e.g. find the branches closest to my current location)? Let’s assume the answer is yes – how would you test that nationwide, assuming you’re representing a bank with branches across the country? Covering such a use case should be a trivial part of a standard daily/weekly regression cycle, right? In most cases, the answer is the ability to inject location intelligence into the automation suite. Here are use cases I often encounter while working with digital enterprises to optimize their UX:

  1. Localization testing:
    1. Challenge: How do I test the level of service nationwide?
    2. Solution: Implement a sustainable automation process that includes:
      1. A job triggered through CI (Jenkins) a couple of times per day.
      2. The job runs parallel test executions using the TestNG framework on 8 devices: 4 Android and 4 iOS.
      3. The test’s environment conditions include emulation of 30 different locations where the service is being evaluated.
      4. The build includes 4 basic cases:
        1. Initiate a voice call
        2. Send and receive a text message
        3. Run a speed test of the network’s upload/download metrics
        4. Open a YouTube page and play a video
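The CI job above can be sketched as a test matrix: 8 devices (4 Android, 4 iOS) crossed with the 30 emulated locations and the 4 basic cases. The device names and coordinates below are invented placeholders, not details from the customer example.

```python
# Placeholder device pool: 4 Android + 4 iOS, as in the example above.
DEVICES = [{"os": "Android", "name": f"android-{i}"} for i in range(4)] + \
          [{"os": "iOS", "name": f"ios-{i}"} for i in range(4)]

# 30 emulated locations where the service is evaluated (dummy lat/long pairs).
LOCATIONS = [(40.0 + i * 0.1, -74.0 - i * 0.1) for i in range(30)]

CASES = ["voice_call", "send_receive_sms", "network_speed_test", "play_youtube_video"]

def build_matrix():
    """Every device runs every case at every emulated location."""
    return [(d["name"], loc, case)
            for d in DEVICES for loc in LOCATIONS for case in CASES]
```

A CI runner (Jenkins plus TestNG in the original example) would then fan these tuples out as parallel executions.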

This is a real example where a customer is continuously testing the availability of services in critical locations and conditions.

2. Network condition testing:

  1. Challenge: Can I make sure that my service renders properly across different levels of network quality?
  2. Solution: Define the right set of network conditions your customers are using:
    1. Run your regression suite across each of these network conditions, for example:
      1. 3G
      2. 4G
      3. 5G
      4. Airplane mode (sales reps traveling may need to consume services from the app offline).
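One way to drive that cycle is to parameterize the suite over named network profiles. The bandwidth and latency figures below are rough illustrative numbers for demonstration, not measured or vendor-supplied values.

```python
# Illustrative network profiles (values are ballpark, for demonstration only).
NETWORK_PROFILES = {
    "3G":            {"down_kbps": 1_600,   "up_kbps": 768,    "latency_ms": 150},
    "4G":            {"down_kbps": 12_000,  "up_kbps": 5_000,  "latency_ms": 70},
    "5G":            {"down_kbps": 100_000, "up_kbps": 50_000, "latency_ms": 20},
    "airplane_mode": {"down_kbps": 0,       "up_kbps": 0,      "latency_ms": None},
}

def run_regression_under(profile_name, suite):
    """Pseudo-runner: look up the profile and pair it with each test case.
    A real runner would apply the profile via network virtualization first."""
    profile = NETWORK_PROFILES[profile_name]
    return [(profile_name, test, profile["down_kbps"]) for test in suite]
```

Iterating the same suite over each profile name then gives you a per-condition pass/fail and timing picture.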

3. Location testing on steroids:

  • Challenge: Can I test advanced location-based use cases like driving, walking, etc.?
  • Solution: Implement a sustainable automation process that includes one or more of the following:
    1. Inject GPS location intelligence as a capability in your script for static changes
    2. Inject mocked motion data to simulate a driving or walking session, including data from one or more smartphone sensors (GPS, accelerometer, gyroscope). A great example of this is Usage-Based Insurance (UBI) in the insurance industry.


As described in a previous blog – patterns are becoming more and more important.

The future of location-based testing will likely include the implementation of smart intelligence and interactions with IoT (including BLE devices, other smart terminals and POS).

The market is already heading in this direction: by analyzing historical location data and detailed behavioral patterns, digital enterprises can gain comprehensive insights into consumer preferences and habits which can be used for hyper-targeted campaigns and other engagement models.

Remember, the key for perfecting the experience is to BE IN THE RIGHT PLACE AT THE RIGHT TIME.

Don’t miss anything: Sign up for our blog today:







How To Use HAR Files To Find The Hidden Performance Bottlenecks In Your App



App performance can be a killer problem for any digital company, especially when performance issues take too long to identify. What I have found by working with many of our customers is that the answer can be hiding in the HAR file data, which is why you should always check your HAR files.

Step 1: What is a HAR File?

HAR (HTTP Archive) is a JSON-formatted file that contains a record of the network traffic between client and server. It captures all the end-to-end HTTP requests and responses sent and received between the two network components.

Step 2: What can I do with a HAR file?

HAR files let developers and testers see what actually happens when a transaction is executed, helping them find performance bottlenecks and security issues in original and 3rd-party code.
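Because a HAR file is just JSON, the standard library is enough to pull out timings. The snippet below builds a minimal two-entry HAR in memory (the URLs and times are made up for illustration) and finds the total time and the slowest call.

```python
import json

# A minimal HAR document; real files come from a proxy or browser dev tools.
# Entry "time" is the total elapsed time of the request in milliseconds.
har_text = json.dumps({
    "log": {"entries": [
        {"request": {"url": "https://example.com/search"},    "time": 1100},
        {"request": {"url": "https://example.com/analytics"}, "time": 2400},
    ]}
})

har = json.loads(har_text)
entries = har["log"]["entries"]

total_ms = sum(e["time"] for e in entries)        # end-to-end network time
slowest = max(entries, key=lambda e: e["time"])   # the likely bottleneck

print(total_ms, slowest["request"]["url"])
```

In the analytics-slowdown story below, exactly this kind of per-entry comparison is what exposed the 3rd-party call as the culprit.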


One of our customers came to me with a recurring performance degradation in their native mobile app. They had no idea what was causing the issue; no big changes had been pushed to the code, and the problem was not reproducible in the Dev/QA environments. After collecting the HAR file from the production environment, we found that the analytics calls were taking twice as long because of a change made by the 3rd-party analytics company.

Step 3: How to record a HAR file?

To record a HAR file, set up a proxy between the client and the server. All the data that goes through the proxy will be stored in the HAR file.

A couple of tools that can be used to record the data:


Step 4: Visualize your HAR file:

I would like to suggest two UI tools to help you visualize the HAR data and focus on what’s interesting.

  • HAR Viewer – a free web-based tool showing a waterfall graph of all the calls, with the ability to drill down to a specific request.
  • Charles Proxy – an HTTP proxy / HTTP monitor that allows you to record and inspect the data.

Step 5: How to analyze performance through the HAR file:

1. Execute one flow and record the data. In the following example I went to Amazon and searched for a laptop.

Total transaction time was 12.45s, a reasonable result, but what is causing the difference in the HTML display times?


2. Drill down to a specific request

In the drill-down I found that, as part of the display request, the server also executes a request to the mobile-ads service, which takes 80% of the time.



I did the same exercise with Best Buy by opening a page and searching for a laptop.

This site’s transaction took 42.8 seconds! I traced the longest calls and drilled down:


The search call took 1.1s.




As shown in Charles, the delay (server query) took 1 second.


One of the useful features in Charles is the ability to copy a request as a curl command – it gives you the full URL call to execute from your command line. In this case I found that this specific call for device details took around 2 seconds:

curl -H 'Host:' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Linux; Android 6.0.1; SM-G935F Build/MMB29K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.84 Mobile Safari/537.36' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8' -H 'Referer:' -H 'Accept-Language: en-US,en;q=0.9' -H 'Cookie……..c?id=pcmcat303600050004'

It helps to isolate the issue and now I can drill down to the code and understand where the bottleneck is occurring.




On the Best Buy page I also found that the ad image size is 1000×1000 – fine for a big desktop display, but when I search on mobile I should get a smaller image. On mobile, big images take time both to download (network) and to display on screen (rendering). Reducing the image size would not hurt the mobile user experience, and it would improve site performance.
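A check like this oversized-image finding can be automated from the same HAR data: flag image responses whose body size exceeds a mobile budget. The entry values and the 200 KB threshold below are arbitrary examples, not Best Buy’s real numbers.

```python
# Each entry mirrors the HAR "response" shape: mimeType plus body size in bytes.
entries = [
    {"url": "/ad-banner.png", "mimeType": "image/png", "bodySize": 850_000},
    {"url": "/icon.png",      "mimeType": "image/png", "bodySize": 12_000},
    {"url": "/search",        "mimeType": "text/html", "bodySize": 40_000},
]

MOBILE_IMAGE_BUDGET = 200_000  # bytes; arbitrary example threshold

def oversized_images(entries, budget=MOBILE_IMAGE_BUDGET):
    """Return URLs of image responses larger than the mobile budget."""
    return [e["url"] for e in entries
            if e["mimeType"].startswith("image/") and e["bodySize"] > budget]
```

Run against every HAR captured in regression, this turns a one-off manual finding into a repeatable gate.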





User expectations are raising the bar on app performance, and release velocity requires Dev & QA to fix issues fast. In most cases, performance issues are related to database access or the network. Analyzing the HAR file will give you more information about both and help you answer the following:

  • What was transferred over the network
  • How much time a specific transaction took to execute
  • How third-party integrations affect your app
  • Whether the right image was matched to the right screen size


Keeping in mind everything we just covered and how beneficial HAR files can be, it is also important to understand that setting up HAR capture manually is complicated. That is why the Perfecto cloud makes it simple to collect HAR file data as part of your automation scripts, giving customers the ability to analyze their mobile and web applications and improve quality based on the network data.

To read more on HAR files, check out:





Future-Proof Your Test Lab: Take the Guesswork Out Of Your Application Testing Strategy


Plan your digital application testing around expected market changes

If you’re a regular reader of this blog, or if you work in DevOps, you’ve probably already spent a fair amount of time considering which devices you need in your lab. Testing on the right platforms is key to ensuring a great experience for your users. However, in the scramble to make sure your lab is up to date today, you might overlook another critical component of a successful testing strategy: planning. Planning for future changes in the market makes the difference between success today and ongoing success.

Mobile and web markets are driven by recurring patterns in both platform/OS release schedules and customer adoption of these platforms. Here are a few examples of release patterns:

In the mobile space

  • Apple typically announces a major iOS release in June for release in September
  • iOS mobile updates occur monthly – historically, these releases are 80% bugfixes and 20% feature introductions
  • Google’s OS Dev Preview for a new major Android release happens each March

Minor updates to Android do not follow a set schedule – therefore, constant scanning of Android news sites can provide some early warning of coming changes.

In desktop browsers

  • Chrome and Firefox browsers are updated monthly
  • Safari and Edge receive 1 or 2 major updates per year

…and customer adoption patterns:

For mobile

  • When a new major Android OS is introduced, it triggers the non-Pixel vendors to update their devices to the previous major release; hence, adoption of Android “Latest -1” grows.
  • Apple pushes iOS releases to devices automatically, encouraging rapid adoption – this increases the urgency of application testing for the new OS

Desktop browsers

  • Chrome and Firefox monthly releases are automatically updated on the users’ desktops, therefore forcing high adoption rates

Use Perfecto’s market calendars to guide your test lab updates

Armed with an awareness of these patterns in market changes, developers, testers and lab managers can be proactive, setting trigger points to update their labs as well as their project planning.  Here are some of the key market-change points:

  • March is an important month in the mobile space with its post-MWC product launches
  • June is when Google releases its next major OS version
  • June is the initial major-iOS-release developer preview
  • September is when Apple releases its new major iOS version
  • Chrome and Firefox ship monthly updates to their major and beta versions

The Factors reference guide helps you plan for your future cross-platform testing needs by keeping track of these market patterns and giving you the advance notice you need to avoid project-planning headaches.

Use Perfecto’s calendars to align your entire lab configuration, and project plans for 2018 for your mobile and web applications.

The Bottom Line

In this article, we have looked at the importance of understanding mobile and web market patterns and the benefits of planning around them. We also highlighted the importance of proactive, continuous planning in order to accommodate market changes and mitigate UX risks. Perfecto’s Factors reference guide offers easy-to-use, pattern-based market calendars for both mobile and web; we track upcoming changes in the market so you can focus on your mobile and web application testing.

Download your copy of Factors reference guide now and future-proof your test lab!




Perfecto’s Proven Methodology for Cross-Platform Application Testing Success


If you make your living building or testing applications, there’s one question you can’t avoid: “With the market constantly changing, which platforms do I need to test against to make sure I’m not missing something important?” Enterprise app development teams need to ensure a great user experience (UX), and in order to deliver that experience, they must keep up with change;  this is where Perfecto can be an invaluable partner. We continuously scan the market and map out specific tipping points that happen throughout the year: for example, when the latest major OS versions for iOS and Android become widely adopted. We also track shifts in the web market, such as silent updates of Chrome and Firefox browsers, of which dev and test teams must be aware in order to guarantee flawless UX. Building these market events into your planning cycle reduces risk and helps keep your customers happy.

In August 2017, Marshmallow was the most-used Android version, but Nougat was starting to make inroads

The Challenge

In order to minimize UX risks, you need to build a proper cross-platform testing strategy. For both mobile and web platforms, a few criteria must be taken into consideration. Whether you are a dev, test, or lab manager, you need to answer the following questions:

  • What are the highest traffic platform/OS combinations in a given location?
  • What is the leading OS platform (iOS vs. Android) in the relevant region?
  • What is the trend in the market for the most-used platforms?
  • What are the most common display types?

Factors reference guide breaks down each market into concise, useful categorizations

The Solution

With its ongoing, worldwide market analysis, Perfecto does the heavy lifting for you by recommending which mobile devices and desktop browsers you should be testing based on the location of your customers. Our methodology simplifies your decision-making process by breaking down test coverage into 3 categories: Essential, Enhanced, and Extended.

Graph showing Perfecto’s device coverage methodology


Examining the bulk of mobile traffic by operating system reveals that the Android usage is essentially split across five OS families while iOS usage is split between three OS families. Therefore, eight smartphones are required to cover roughly eighty percent of mobile traffic. In addition to operating system family, displays represent a second critical coverage aspect. Analyzing display characteristics identifies important clusters between screen size and pixel density. We have grouped these into categories: small, normal, large, and extra-large. Our top-10 list of smartphones and tablets represents the highest-usage platforms when accounting for operating system, display characteristics, and model.

While traffic share is easily determined for mobile devices, desktop browsers are updated more often and thus require a different approach. Perfecto builds the desktop browser set based on two criteria: browser version and operating system. Browser version selection should normally consist of the following: current version, current version minus one, current version minus two, and the most recent beta version. Given the long life of computer operating systems, it is best to look not only at the most current Windows OS but the previous two as well. For Macs, it is best to include the current version as well as the previous one.
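The browser-version rule above (current, current minus one, current minus two, plus the latest beta) is easy to encode. The version numbers in the example are placeholders rather than a real release calendar.

```python
def browser_test_set(current, beta):
    """current/beta are integer major versions; returns the versions to cover:
    current, current-1, current-2, and the most recent beta."""
    return [current, current - 1, current - 2, beta]

# Placeholder numbers, not a real release calendar.
chrome_set = browser_test_set(current=65, beta=66)
```

Crossing this list with the recommended OS set (current Windows plus the previous two, current macOS plus the previous one) yields the desktop browser combinations for the lab.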

Dial in to the right criteria to identify the desktop browser combinations required for your test lab

Wrapping up

In this blog, we have taken a look at the importance of having a solid approach to cross-platform testing. In a market where devices and platforms are constantly changing, such a strategy is essential to the efficiency (and mental health) of app development and testing teams. We offered a set of planning criteria for ensuring good coverage and great UX; this is followed by our proven methodology for guaranteeing good mobile and web testing coverage. Proper planning in this area, which requires constant vigilance with respect to market changes, is critical to reducing UX risk. Take advantage of Perfecto’s extensive experience tracking market changes and put our methodology to work in your application testing strategy.

For the up-to-date, real-world data you need to make sure your mobile and web testing strategies are up to the challenge…

Download Your Copy of the Factors Reference Guide Now



XCUITest – The Emerging iOS UI Test Automation Framework

In the last year, there has been a growing trend of iOS development teams adopting XCUITest and additional frameworks built on top of the XCTest interface.

Development teams have started to adopt XCUITest to get fast and reliable feedback. There are a few clear drivers to this growing adoption:

  1. Intuitive – XCUITest is quite intuitive for developers, as it runs from within the Xcode IDE
  2. Fast – Test execution against iOS devices is faster than with any other UI test automation tool, due to the framework’s architecture
  3. Reliable – The framework’s architecture makes test execution with XCUITest more reliable and eliminates flakiness
  4. Mature – The APIs and the framework became significantly more mature during the last year
  5. Low test maintenance – Since the app is instrumented, the framework works directly at the object level, which reduces the maintenance effort that usually arises from changes in the application

A quick review of iOS instrumented testing frameworks and terminology

  • XCTest – Apple’s official framework for writing unit tests for classes and components at any level. These tests, like the app itself, can be written in Swift/Objective C.
  • XCUITest – a UI testing framework built on top of XCTest. It includes additional classes (such as UIAccessibility); these tests can be written in Swift or Objective-C. The tests are packaged in a test runner ipa (iOS packaged application) that executes them against the AUT (application under test) ipa.
  • KIF (Keep It Functional) – an iOS native framework that wraps XCTest and also uses undocumented iOS APIs. It requires the developer to add the KIF framework to the project, and it has a simple, intuitive syntax:

    [tester enterText:@"" intoViewWithAccessibilityLabel:@"Login User Name"];
    [tester enterText:@"thisismypassword" intoViewWithAccessibilityLabel:@"Login Password"];

  • EarlGrey – similar to KIF, but developed by Google. EarlGrey has an advanced synchronization mechanism, which means you don’t need explicit waits or sleeps. (For example, if tapping a button triggers a network request, EarlGrey will wait for the request to finish before proceeding with the test.) EarlGrey also makes extensive use of matchers, which give you the flexibility to interact with elements and write assertion logic in a variety of ways with simple APIs.
  • Cucumberish – a test automation framework for Behavior-Driven Development (BDD) that integrates into Xcode and uses the iOS XCTest/XCUITest interfaces.


The Challenge

The challenge we find is that although the above test frameworks solve significant problems that other test automation frameworks cannot, teams often adopt them before considering the proper setup and infrastructure. The promise of XCUITest depends on executing the tests on a robust, reliable, and scalable lab infrastructure.

Even though adoption of these automation frameworks is growing, many teams still execute their tests on simulators or a local device from the developer’s workstation. These teams understand that they would get significantly more value from XCUITest by executing it as part of their CI processes, providing continuous feedback on real devices under end-user conditions.

XCUITest advanced capabilities

Perfecto recently released advanced support for the above frameworks to enable development teams to leverage the advantages mentioned above alongside Perfecto’s cloud-based capabilities. In addition, Perfecto extended the XCUITest framework by adding the ability to control and set up the device the same way end users do, which enables teams to validate that their apps will function as expected in the real world.

To learn more about the Perfecto solution please visit our documentation website or to read more about the differences between XCUITest and Appium: The Rise of Espresso & XCUITest; The Fall of Appium, click here.


The 3 Big C’s of Agile Development and Testing

In the age of Agile and digital transformation strategies, every brand is looking to set itself apart. To excel in implementing your digital transformation, you need to offer services to end users on their terms, on their devices, at their convenience, while streamlining and differentiating features. On top of that, end users expect everything to look great and work perfectly… quickly.

When choosing your digital transformation strategy, there are key tradeoffs to understand between seemingly conflicting agendas: getting features to market faster and increasing presence on users' devices vs. maintaining high application quality. What's commonly known is that acceleration can come from adopting an agile process: highly independent dev teams, each responsible for a feature or area of the code, delivering incremental functionality from design to production. What is less well known is that a proper quality methodology can not only ensure a high-quality application at the end of each sprint, it can actually help the team accelerate.

When thinking about adopting agile schemes, some of the common concepts that come to mind are Continuous Integration (CI), Continuous Delivery (CD) and Continuous Testing (CT). While serving slightly different objectives, these elements can integrate to help the team achieve the goals we mentioned: velocity and quality.

Continuous Integration

The most dominant of the three is Continuous Integration, and it is a necessary practice for any agile team. The image below depicts a team that has not implemented a CI process: a 60-day development period, after which the team finally shares its code. The outcome of such a scenario is a new or extended post-sprint stabilization phase, where developers need to test and redo integration points. For an organization trying to accelerate time to market, this is a very expensive practice. Naturally, it is also very frustrating for developers and testers.

Using CI, the team continuously integrates increments into the main tree, and with test automation they can ensure the integration actually works (image below). With the CI approach, each sprint concludes on time and within the defined quality expectations. Not only is it possible to shrink the stabilization phase, it may be possible to get rid of it altogether. In a CI process, the ideal is a working product at the end of each sprint, maybe even each day.
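The gate a CI server applies on every integration can be sketched in a few lines. This is an illustrative Python sketch, not any particular CI product; `ci_gate` and the trivial commands it runs here are hypothetical stand-ins for a real test suite.

```python
import subprocess
import sys

def ci_gate(test_command):
    """Run the team's automated checks; merge only if everything passes.

    `test_command` is a hypothetical placeholder for whatever command
    runs your test suite on the CI server.
    """
    result = subprocess.run(test_command, capture_output=True, text=True)
    if result.returncode == 0:
        return "merge"   # integration verified, safe to merge
    return "reject"      # fail fast instead of deferring to a stabilization phase

# Simulate a passing and a failing check with trivial commands.
print(ci_gate([sys.executable, "-c", "exit(0)"]))  # merge
print(ci_gate([sys.executable, "-c", "exit(1)"]))  # reject
```

Running this gate on every commit is what turns the long stabilization phase into many small, cheap corrections.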

Continuous Testing

Continuous Testing, sometimes called Continuous Quality, is the practice of embedding and automating test activities into every commit. Teams are looking at CT because developers spend precious time fixing bugs in code that was written long ago. To fix such a bug, a developer first needs to recall which code it was, undo code that was written on top of it, and retest; it is an extended effort. Testing that takes place on every commit, every few hours, nightly and weekly not only increases confidence in application quality, it drives team efficiency. To achieve CT, use the checklist below:

  • Ensure a stable test lab, available 24×7
  • Allow a variety of test and dev tools within the pipeline for high productivity
  • Automate as much as possible, but only the high-value, stable tests
  • Properly size the platform and test coverage for your projects
  • Provide fast feedback through reporting and analytics
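The third checklist item, automating only high-value and stable tests, amounts to a simple selection step over your suite's history. The Python sketch below is illustrative; the `TestCase` fields and the pass-rate threshold are assumptions for the example, not a Perfecto API.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    value: str        # "high" or "low" business value (illustrative labels)
    pass_rate: float  # historical pass rate over recent runs, 0.0-1.0

def select_for_pipeline(tests, min_pass_rate=0.95):
    """Keep only the high-value tests that are stable enough to automate."""
    return [t.name for t in tests
            if t.value == "high" and t.pass_rate >= min_pass_rate]

suite = [
    TestCase("login_flow", "high", 0.99),
    TestCase("checkout", "high", 0.80),     # flaky: stabilize before automating
    TestCase("legacy_banner", "low", 1.0),  # stable but low value
]
print(select_for_pipeline(suite))  # ['login_flow']
```

Flaky tests are excluded rather than automated, because a pipeline that fails randomly erodes the fast-feedback goal of CT.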

Continuous Delivery

Continuous Delivery is the practice of streamlining and automating all the processes leading up to deployment. This includes many steps, such as validating the quality of the build in the previous environment (e.g., the dev environment), promoting to staging, and so on. Done manually, these steps can take significant effort and time. Using cloud technologies and proper orchestration, they can be automated.

As opposed to Continuous Delivery, Continuous Deployment takes agility to the next level. The working assumptions are that, first, the code works at any point in time (for example, developers must check their code before they commit) and, second, enough testing is automated that we have confidence the build is solid. That level of test and orchestration automation is difficult to achieve, but some agile SaaS organizations are certainly benefitting from this approach. To complete an efficient CD process, ensure you have a monitoring dashboard for your production environment in place so you can eliminate performance bottlenecks and respond quickly to issues.
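The promotion steps described above can be sketched as a small pipeline loop that gates each environment on a quality check. This is an illustrative Python sketch; `promote`, the environment names, and the `validate` callable are hypothetical stand-ins for real pipeline stages and checks.

```python
def promote(build, environments, validate):
    """Walk a build through each environment, gating every promotion.

    `validate` is a hypothetical callable standing in for whatever
    quality checks (smoke tests, monitoring checks) run in each stage.
    """
    reached = []
    for env in environments:
        if not validate(build, env):
            break            # stop the pipeline; later stages are never reached
        reached.append(env)
    return reached

# A build that clears dev and staging but fails the production gate.
checks = {"dev": True, "staging": True, "production": False}
print(promote("build-421", ["dev", "staging", "production"],
              lambda build, env: checks[env]))  # ['dev', 'staging']
```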


The biggest hang-up, or resistance, we see when it comes to agile development and digital transformation is that teams feel they can't move quickly with the same quality they are used to. This is simply not true. To succeed in a rapidly transforming marketplace, brands need to accelerate their time to market, increase their presence and ensure high quality. CI, CD and CT are methods that, layered on top of the agile development methodology, enable the velocity and quality needed. Combining the three into the right formula for your organization's goals and culture is the recommended next step.

Don’t want to miss anything new about all things Continuous Integration and DevOps? Sign up for our blog today and be the first to know.


Mobile Testing on iPhoneX – What Developers Need to Know

Apple (again) reinvented the display with the introduction of the notch on the iPhone X screen (soon on three more models and likely, eventually, across the fleet). From a developer perspective, that innovation may not have been as popular. So much so that Apple approved a "Notch Remover" app.

The introduction of the notch made it confusing for app developers to know exactly how to lay out their apps. There is a "safe area" that excludes the notch, and as a result some apps do decide to stay inside it, creating a somewhat ugly layout:

Others expand outside the “safe area”, which comes with its own set of challenges:

Above is an example of how the notch will affect how the content and images render on the screen of the iPhone X.
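The "safe area" decision behind these layouts is simple rectangle geometry. Below is an illustrative Python sketch; the screen and inset values approximate iPhone X portrait dimensions in points but are assumptions for the example, not official Apple constants.

```python
# Illustrative iPhone X portrait dimensions in points; the inset values
# are assumptions for this sketch, not official Apple constants.
SCREEN = (0, 0, 375, 812)      # x, y, width, height
SAFE_AREA = (0, 44, 375, 734)  # ~44pt notch inset on top, ~34pt on the bottom

def inside(rect, area):
    """True if `rect` (x, y, w, h) lies entirely within `area`."""
    x, y, w, h = rect
    ax, ay, aw, ah = area
    return ax <= x and ay <= y and x + w <= ax + aw and y + h <= ay + ah

banner = (0, 0, 375, 60)     # full-bleed banner overlapping the notch
button = (20, 700, 335, 44)  # control placed well inside the safe area
print(inside(banner, SAFE_AREA))  # False: may be clipped by the notch
print(inside(button, SAFE_AREA))  # True
```

An element that fails this check is exactly the kind of content the screenshots in this post show being clipped.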


To solve these problems, some apps take a hybrid approach (see the YouTube example above), where the video plays inside the safe area but the ads do not, so some clipping can occur.


Testing your app on the iPhone X presents even more difficulties. Taking a screenshot or even a video from the device results in a rectangular image. Observe the image above, and note how the Weather Channel logo on the left is cut off.


In contrast, the result of taking the video or screenshot from the device shows a perfectly rectangular shape above.

The examples above are the end result of working in the "safe area" and of venturing outside it. The iPhone X notch creates additional issues for developers; below are just a few pains you might experience if you try to develop an app without any additional help.

  1. Time – Time is of the utmost importance when delivering a mobile website or app. If you cannot see the real rendered website or app while testing, you will find issues late, which will force you to redo the code you wrote, possibly undo code built on top of it, and fix it all. Going back and forth like this is frustrating and takes you away from writing new code.
  2. Cost – “Time is money,” as they say, and if it's taking valuable time away from developers and testers, then it most certainly is costing you more money.
  3. UX – As a developer you are responsible for rendering, but what if you have no idea there is a problem until it’s too late? Unhappy users and poor reviews…not fun!

So the question becomes how can a developer prevent this problem and validate what users will really see and adjust accordingly?

You might need help! Perfecto now offers true rendered view from iPhone X. It shows accurately what will be shown to end users and developers can validate the rendered image. Whether in interactive or automated testing, the true rendered content is available to the developer/tester.

Still looking for tips on how to get your iPhone X app working great? Read iPhone X and iOS11: 5 Tips to Ensure Your App Works Well With Both to gain additional iPhone X knowledge.

Don’t miss anything: Sign up for our blog today:


Will Your Mobile UX get Sacked by Bounce Rate Measurability in 2018?

There are always new challenges in the mobile world, and as mobile usage continues to dominate almost every business vertical (both native and mobile web), having a testing strategy that can be modified to incorporate new use cases and interfaces is crucial.

Over the next few months I will dive into some of the hot topics and trends in the digital sphere and look at what's on the horizon. Today we will talk about mobile UX and bounce rate.

Mobile UX will be redefined with measurable bounce rate

“Bounce rate” is defined as “the percentage of visitors to a particular website/app who navigate away from the site after viewing only one page.” Bounce rate is a measurable indicator of engagement and stickiness on almost any digital platform – just not on the pieces of hardware we use most: smartphones. Decreasing bounce rate keeps UX experts and other digital leaders busy at all times, as they enhance and optimize the position of page components, customize the landing page experience and fit their digital products to the tastes, interests and behavior of their audience.

Since the dawn of mobile, these smart machines have provided the same lame experience: you use an app, then leave your smartphone aside (allowing the screen to lock). When you come back a few hours later and unlock the phone, the first thing you see is still the last app you were using.

In an age where ‘everything is implementing AI/ML practices’, the apps and screens displayed on smartphones still suffer from the limitation that they cannot be customized according to the user’s needs or context.

What it means for you: the apps you really need, when you need them

Why is bounce rate measurement on smartphones so important? Because smartphones are becoming really smart. In other words: smartphones will soon open and close your apps based on when you actually need them. But how is that even possible?

The natural evolution of this pattern might be into the smartphone’s display. As just mentioned, smartphones already know which apps users typically use and when, where users are when using a specific app, and more. Analyzing these patterns should allow the smartphone to know what users want and smartly serve it to them on any given device unlock.
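As a toy illustration of the kind of pattern analysis described above, even a simple frequency count over past launches yields a crude prediction of which app to surface. Everything below is hypothetical Python; no shipping OS is claimed to work this way.

```python
from collections import Counter

def predict_app(launch_log, hour):
    """Guess which app a user most likely wants at a given hour, from a
    log of (hour, app) launch events. A toy frequency model only."""
    by_hour = Counter(app for h, app in launch_log if h == hour)
    return by_hour.most_common(1)[0][0] if by_hour else None

# Hypothetical usage history: news in the morning, video in the evening.
log = [(8, "news"), (8, "news"), (8, "mail"), (19, "video"), (19, "video")]
print(predict_app(log, 8))   # news
print(predict_app(log, 19))  # video
```

A real system would weigh location, running apps and device state as well, but the principle is the same: learned patterns drive what appears on unlock.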

Ok, I get it. Smartphones are getting smarter & Bounce rate on smartphones will be measurable. What does it have to do with me?

The big deal here is the ability to distinguish between a bounce initiated by the smartphone and one initiated by the user. This is a whole new granular level of bounce-rate analysis that will create a new and accurate perspective on UX.

Smartphone-initiated bounces (where the page/app is closed) may be caused by:

  1. An incoming call
  2. A popup in the page/app
  3. The device locking (after a session expires)
  4. Analyzed usage patterns indicating the app can be closed

User-initiated bounces (where the user intentionally closed the app/page) may be caused by:

  1. Broken UX – a functional or UI issue prevents the user from completing the action on the first page or flow (consider how many apps’ UIs were broken by the iPhone X notch).
  2. The user was redirected without a true need to view the page/app, or opened it by mistake.
  3. The user was distracted by something else (a text message, etc.).
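The two lists above can be encoded directly as a small classifier, which is roughly how an analytics pipeline might tag bounce events. The reason codes below are illustrative names chosen for this sketch, not an established schema.

```python
# Illustrative reason codes mirroring the two lists above.
SMARTPHONE_INITIATED = {"incoming_call", "popup", "session_lock", "pattern_close"}
USER_INITIATED = {"broken_ux", "accidental_open", "distraction"}

def classify_bounce(reason):
    """Label a bounce event by who initiated it: smartphone or user."""
    if reason in SMARTPHONE_INITIATED:
        return "smartphone"
    if reason in USER_INITIATED:
        return "user"
    return "unknown"

print(classify_bounce("incoming_call"))  # smartphone
print(classify_bounce("broken_ux"))      # user
```

Only the user-initiated bucket reflects on the app's UX, which is why separating the two matters for the analysis that follows.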

This new reality will hold up a big mirror to digital enterprises with regard to their true mobile UX. Smartphone bounce rate (not really discussed during the last decade) will take center stage and draw attention to the smallest details of UX that need to be continuously tested.

How should you plan your testing to accommodate different usage patterns?

New questions about environment conditions and user types should be addressed constantly. Digital enterprises should strive to segment their main user groups and interfaces, naming these profiles personas that capture each group’s main characteristics.


Below are the main questions that will help you create these personas:

  1. Where is the app being used (one location or many)? Is it used while static, while walking, or perhaps while driving? (This affects which sensors are also used on the device: GPS, accelerometer, gyroscope.)
  2. What network conditions are used (WiFi, 2.5G/3G/4G, airplane mode)?
  3. Are there any app dependencies (a specific app that triggers the use, or runs in the background)?
  4. What is the main screen orientation during usage? Does the orientation change during an average flow?
  5. Which user interfaces are used (chatbots, physical proximity-based features, biometric authentication such as Touch ID or facial recognition, etc.)?
  6. What types of media are consumed (video, audio, other)?
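The answers to these questions can be captured in a simple persona record that test plans are then derived from. The sketch below is illustrative Python; the field names and sample values are assumptions for the example, not a Perfecto schema.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A testing persona capturing the questions above.

    Field names and sample values are illustrative, not a Perfecto schema.
    """
    name: str
    locations: list               # where the app is used
    network: str                  # e.g. "WiFi", "4G", "airplane"
    app_dependencies: list = field(default_factory=list)
    orientation: str = "portrait"
    interfaces: list = field(default_factory=list)   # e.g. "Touch ID", "chatbot"
    media: list = field(default_factory=list)        # e.g. "video", "audio"

commuter = Persona(
    name="commuting streamer",
    locations=["train", "street"],
    network="4G",
    orientation="landscape",
    interfaces=["Touch ID"],
    media=["video"],
)
print(commuter.network)  # 4G
```

Each persona then maps to a concrete device, network and sensor configuration in the lab, so the same test flow can run under each profile's real user conditions.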


Mobile services consumption faces a challenging future. In the near future we expect to see a booming focus on measuring and reducing smartphone bounce rate, which reinforces the need to increase test coverage and test against clear personas.

In my next article I will dive into how testing should be more focused on location intelligence.

Click here to learn more about personas and how to test mobile apps under real user conditions.

Sign Up for Our Blog:

