The 3 Big C’s of Agile Development and Testing

In the age of Agile and digital transformation strategies, every brand is looking to set itself apart. To excel strategically in your digital transformation, you need to offer services to end users on their terms, on their devices, and at their convenience, while streamlining and differentiating features. On top of that, end users expect everything to look great and work perfectly…quickly.

When choosing your digital transformation strategy, there are key tradeoffs to understand between seemingly conflicting agendas: getting features to market faster and increasing presence on users’ devices vs. maintaining high application quality. What’s commonly known is that acceleration can come from adopting an agile process: highly independent dev teams, each responsible for a feature or area of the code, delivering incremental functionality from design to production. What is less well known is that a proper quality methodology can not only ensure a high-quality application at the end of each sprint, it can actually help the team accelerate.

When thinking about adopting agile schemes, some of the common concepts that come to mind are Continuous Integration (CI), Continuous Delivery (CD) and Continuous Testing (CT). While they serve slightly different objectives, these elements integrate to help the team achieve the goals we mentioned: velocity and quality.

Continuous Integration

The most dominant of the three is Continuous Integration, and it is a necessary practice for any agile team. The image below depicts a team that has not implemented a CI process: a 60-day development period, and only after all that does the team share its code. The outcome of such a scenario is creating or extending the post-sprint stabilization phase, where developers need to test and redo integration points. For an organization trying to accelerate time to market, this is a very expensive practice. Naturally, it is also very frustrating for developers and testers.

Using CI, the team continuously integrates increments into the main tree, and test automation verifies that each integration actually works (see image below). With the CI approach, each sprint concludes on time and within the defined quality expectations. Not only is it possible to shrink the stabilization phase, it may be possible to get rid of it altogether. In a CI process, the ideal is a working product at the end of each sprint, maybe even each day.

Continuous Testing

Continuous Testing, sometimes called Continuous Quality, is the practice of embedding and automating test activities into every commit. Teams are looking at CT because developers spend precious time fixing bugs in code that was written long ago: to fix such a bug, they first need to remind themselves which code it was, undo code that was written on top of it, and retest. It’s an extended effort. Testing that takes place on every commit, every few hours, nightly and weekly not only increases confidence in the application’s quality, it drives team efficiency. To achieve CT, use the checklist below:

  • Ensure a stable test lab is available 24×7
  • Allow a variety of test and dev tools within the pipeline to keep productivity high
  • Automate as much as possible, but automate only the high-value and stable tests (see the sketch after this list)
  • Properly size the platform and test coverage for your projects
  • Provide fast feedback with reporting and analytics
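
To make the third point concrete, here is a minimal, hypothetical JUnit 5 sketch (the class name, test names and stubbed helpers are placeholders, not from the original post) showing how tags let a pipeline run a small, stable bucket on every commit and a longer bucket nightly, e.g. mvn test -Dgroups=commit vs. -Dgroups=nightly:

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

public class CheckoutTests {

    @Tag("commit")   // fast, stable, high-value: runs on every commit
    @Test
    void loginSucceeds() {
        assertTrue(login("demo-user", "demo-pass"));
    }

    @Tag("nightly")  // longer end-to-end flow: too slow to run on every commit
    @Test
    void fullCheckoutFlow() {
        assertTrue(checkout("demo-user", "sku-1234"));
    }

    // Placeholder stand-ins for real page flows; replace with actual driver logic.
    private boolean login(String user, String pass) { return user != null && pass != null; }
    private boolean checkout(String user, String sku) { return user != null && sku != null; }
}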

Continuous Delivery

Continuous Delivery is the practice of streamlining and automating all the processes leading up to deployment. This includes many steps, such as validating the quality of the build in the previous environment (e.g., the dev environment), promoting it to staging, and so on. Done manually, these steps can take significant effort and time. Using cloud technologies and proper orchestration, they can be automated.

As opposed to Continuous Delivery, Continuous Deployment takes agility to the next level. The working assumptions are that, first, the code is working at any point in time (for example, developers must test their code before they commit) and second, a significant amount of testing is done automatically, such that we have confidence the build is solid. That level of test and orchestration automation is difficult to find, but some agile SaaS organizations are certainly benefiting from this approach. To complete an efficient CD process, you need a monitoring dashboard for your production environment in place, so you can eliminate performance bottlenecks and respond quickly to issues.

Summary

The biggest hang-up or resistance we see when it comes to agile development and digital transformation is that teams feel they can’t move quickly with the same quality they are used to. This is simply not true. To ensure success in a rapidly transforming marketplace, brands need to accelerate their time to market, increase their presence and ensure high quality. CI/CD/CT are methods that, on top of the agile development methodology, enable the velocity and quality needed. Combining the three into the right formula for your organization’s goals and culture is the recommended next step.

Don’t want to miss anything new about all things Continuous Integration and DevOps? Sign up for our blog today and be the first to know.


Mobile Testing on iPhone X – What Developers Need to Know

Apple (again) reinvented the display with the introduction of the notch on iPhone X screens (soon on three more models and likely, eventually, across the fleet). From a developer perspective, that innovation has not been as popular. So much so that Apple approved a “Notch Remover” app.

The introduction of the notch made it confusing for app developers to know exactly how to lay out their apps. There is what’s called a “safe area” that excludes the notch, and as a result, some apps do indeed decide to stay inside the “safe area”, creating a somewhat ugly layout:

Others expand outside the “safe area”, which comes with its own set of challenges:

[Image: an example of how the notch affects the way content and images render on the iPhone X screen]

[Image: the YouTube app on iPhone X, playing video inside the safe area while ads extend beyond it]

To solve these problems, some apps take a hybrid approach (see the YouTube example above), where the video plays inside the safe area but the ads do not, so some clipping can occur.

[Image: app rendered on iPhone X with The Weather Channel logo on the left cut off by the notch]

Testing your app on the iPhone X presents even more difficulties: taking a screenshot or even a video from the device results in a rectangular image. Observe the image above and note how The Weather Channel logo on the left is cut off.

[Image: the same capture taken from the device, showing a perfectly rectangular frame]

In contrast, the result of taking the video or screenshot from the device, shown above, is a perfectly rectangular shape.

The examples above are the end results of working in the “safe area” and of venturing outside it. The iPhone X notch creates additional issues for developers; below are just a few pains you might experience if you try to develop the app without any additional help.

  1. Time – Obviously, time is of the utmost importance when delivering a mobile website or app. If you cannot see the real rendered website or app in your testing, you will find issues late, which will require you to redo the code you wrote, possibly undo code that’s built on top of it, and fix it all. Having to go back and forth is frustrating and takes you away from creating new code.
  2. Cost – “Time is money,” as they say, and if it’s taking valuable time away from developers and testers, then it most certainly is costing you more money.
  3. UX – As a developer, you are responsible for rendering, but what if you have no idea there is a problem until it’s too late? Unhappy users and poor reviews…not fun!

So the question becomes: how can a developer prevent this problem, validate what users will really see, and adjust accordingly?

You might need help! Perfecto now offers a true rendered view from the iPhone X. It accurately shows what end users will see, so developers can validate the rendered image. Whether in interactive or automated testing, the true rendered content is available to the developer or tester.

Still looking for tips on how to get your iPhone X app working great? Read iPhone X and iOS11: 5 Tips to Ensure Your App Works Well With Both to gain additional iPhone X knowledge.

Don’t miss anything: sign up for our blog today.


Will Your Mobile UX Get Sacked by Bounce Rate Measurability in 2018?

There are always new challenges in the mobile world, and as mobile usage continues to dominate almost every business vertical (both native apps and mobile web), having a testing strategy that can be modified to incorporate new use cases and interfaces is crucial.

Over the next few months I will dive into some of the hot topics and trends in the digital sphere and look at what’s on the horizon. Today we will be talking about mobile UX and bounce rate.

Mobile UX will be redefined with measurable bounce rate

“Bounce rate” is defined as “the percentage of visitors to a particular website/app who navigate away from the site after viewing only one page” – in other words, single-page sessions divided by total sessions. Bounce rate is a measurable indicator of engagement and stickiness on almost any digital platform – just not on the pieces of hardware we use the most: smartphones. Decreasing bounce rate keeps UX experts and other digital leaders busy at all times, as they enhance and optimize the position of page components, customize the landing-page experience, and fit their digital products to the tastes, interests and behavior of their audience.

Since the dawn of mobile, these smart machines have provided the same lame experience: you use an app and set your smartphone aside (allowing the screen to lock). You come back a few hours later, unlock the phone, and the first thing you see is still the last app you were using.

In an age when everything is implementing AI/ML practices, the apps and screens displayed on a smartphone still suffer from a basic limitation: they cannot be customized according to the user’s needs or context.

What it means for you: the apps you really need, when you need them

Why is bounce rate measurement on smartphones so important? Because smartphones are becoming really smart. In other words: smartphones will soon open and close your apps based on when you actually need and use them. But how is that even possible?

The natural evolution of this pattern might reach the smartphone’s display. Smartphones already know which apps users typically use and when, where users are when using a specific app, and more. Analyzing these patterns should allow the smartphone to know what users want and smartly serve it to them on any given device unlock.

OK, I get it: smartphones are getting smarter, and bounce rate on smartphones will be measurable. What does it have to do with me?

The big deal here is the ability to distinguish whether a bounce from a page/app was initiated by the smartphone or by the user. This is a whole new granular level of bounce rate analysis that will create a new and accurate perspective on UX.

Smartphone-initiated bounces (where the page/app is closed automatically) may be caused by:

  1. Incoming call
  2. Popup in the page/app
  3. Device is locked (after a “session” expires)
  4. An analyzed usage pattern indicates that the app can be closed

User-initiated bounces (where the user intentionally closes the app/page) may be caused by:

  1. Broken UX – a functional/UI issue prevents the user from completing the action in the first page/flow (for example: how many apps’ UIs were broken by the iPhone X?)
  2. The user was redirected without a real need to view the page/app, or opened it by mistake.
  3. The user is distracted by something else (a text message, etc.).

This new reality will hold up a big mirror to digital enterprises with regard to their true mobile UX. Smartphone bounce rate (which was not really discussed during the last decade) will take center stage and draw attention to the smallest details of UX, which need to be continuously tested.

How do I plan my testing to accommodate the different usage patterns?

New questions around environment conditions and user types should be addressed constantly. Digital enterprises should strive to segment their main user groups and interfaces, naming those profiles personas that capture their main characteristics.


Below are the main questions that will help create these personas:

  1. Where is the app being used (one location or many)? Is it used while stationary, while walking, or perhaps while driving? (This affects which sensors are used on the device: GPS, accelerometer, gyroscope.)
  2. What network conditions are in use (WiFi, 2.5G/3G/4G, airplane mode)?
  3. Are there any app dependencies (a specific app that triggers the usage or runs in the background)?
  4. What is the main screen orientation during usage? Does the orientation change during an average flow?
  5. Which user interfaces are being used (chatbots, physical proximity-based features, biometric authentication such as Touch ID or Facial Recognition, etc.)?
  6. What types of media are being consumed (video, audio, other)?


Summary

Mobile services consumption faces a challenging future. In the near term, we can expect a booming focus on measuring and reducing smartphone bounce rate, which reinforces the need to increase test coverage and test against clear personas.

In my next article I will dive into how testing should be more focused on location intelligence.

Click here to learn more about personas and how to test mobile apps under real user conditions.



Increase Performance in Cross-Browser Testing with Zero Effort – Here’s How

Introduction

Are you used to getting a certain amount of data from your testing practices? Did you know that today you can extract more data from your existing testing practice…with zero additional effort? This all plays into the shift-left movement, which delivers insight earlier and more easily. When thinking about shifting left, you should answer these questions:

1. What new insights can I gain earlier?
2. How easy is it to implement?

Shifting performance activities left is top of mind for many engineering teams. The reason for this trend is that late discovery of extreme application latency typically forces a bad choice: either the brand compromises on user experience in favor of time to market, or the release is delayed to allow an extended round of code rework – a very expensive task for developers, and one teams are looking to eliminate.

The Challenge

There are various reasons why performance activities are usually done late, or outside the development cycle. Some of these include team structure, an outdated perception of performance tests, or the tools being used. This article will describe the motivation for shifting performance activities left and a web-timing approach for doing so.

Web Page Timing

These are page-level stats. Web page timers, defined in the W3C Navigation Timing specification, aren’t necessarily new; however, they are very helpful in optimizing web content across pages and browsers. The data is extremely detailed and readily available for analysis, and almost all browsers support the API, so you don’t need any special setup to collect and report these metrics.

Grabbing the page timers is fairly easy; simply leverage the following:

// "w" is the WebDriver instance (it must implement JavascriptExecutor)
Map<String,String> pageTimers = new HashMap<String,String>();
Object pageTimersO = w.executeScript("var a = window.performance.timing; return a;", pageTimers);
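
The returned object arrives as a generic structure whose exact shape depends on the driver. A small normalization step along these lines – a sketch of mine, not part of the original snippet – produces the data map that the processing code below relies on:

// Sketch (assumption): normalize the returned object into the "data" map used below,
// treating it as a map of timer names to numeric timestamps.
Map<String, Long> data = new HashMap<String, Long>();
if (pageTimersO instanceof Map<?, ?>) {
    for (Map.Entry<?, ?> entry : ((Map<?, ?>) pageTimersO).entrySet()) {
        Object value = entry.getValue();
        if (value instanceof Number) {
            data.put(entry.getKey().toString(), ((Number) value).longValue());
        }
    }
}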

Here’s an example of the timers resulting from a single page load:
[Image: sample window.performance.timing values captured after a single page load]

Processing the timers can be done as follows:

// Pull the raw W3C navigation timestamps out of the collected data
long navStart = data.get("navigationStart");
long loadEventEnd = data.get("loadEventEnd");
long connectEnd = data.get("connectEnd");
long requestStart = data.get("requestStart");
long responseStart = data.get("responseStart");
long responseEnd = data.get("responseEnd");
long domLoaded = data.get("domContentLoadedEventStart");

// Derive the main phases of the page load from the timestamps
this.duration = loadEventEnd - navStart;
this.networkTime = connectEnd - navStart;
this.httpRequest = responseStart - requestStart;
this.httpResponse = responseEnd - responseStart;
this.buildDOM = domLoaded - responseEnd;
this.render = loadEventEnd - domLoaded;

Now that we’ve got the page-level timers, we can store them and drive some offline analysis:

[Image: page-level timers collected across runs, stored for offline analysis]

You can even decide within the test if you want to examine the current page load time or size, and pass/fail the test based on that:

// Compare the current page load time vs. what's been recorded in past runs;
// returns true when the current duration exceeds the reference by more than the KPI (ms).
public boolean comparePagePerformance(int KPI, CompareMethod method, WebPageTimersClass reference, Long min, Long max, Long avg) {
    switch (method) {
    case VS_BASE:
        System.out.println("comparing current: " + duration + " against base reference: " + reference.duration);
        return (duration - reference.duration) > KPI;
    case VS_AVG:
        System.out.println("comparing current: " + duration + " against avg: " + avg);
        return (duration - avg) > KPI;
    case VS_MAX:
        System.out.println("comparing current: " + duration + " against max: " + max);
        return (duration - max) > KPI;
    case VS_MIN:
        System.out.println("comparing current: " + duration + " against min: " + min);
        return (duration - min) > KPI;
    default:
        System.out.println("comparison method not defined; skipping check");
        return false;
    }
}
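
A typical use might look like the following sketch; the currentRun object, the stored baseline and the loadBaseline helper are assumptions for illustration, not part of the original post:

// Hypothetical usage: fail the test when the current run is more than
// 2000 ms slower than the stored baseline run.
WebPageTimersClass baseline = loadBaseline("home-page"); // assumed helper that reads a stored run
if (currentRun.comparePagePerformance(2000, CompareMethod.VS_BASE, baseline, null, null, null)) {
    throw new AssertionError("Page load regressed beyond the 2000 ms KPI");
}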

Web Page Resource Timing

So far, we’ve been talking about page-level timing. Web page resource timing is a more in-depth review of both your code and any third-party code you are using. This is good data because you can detect latency in page performance across any page and any browser, and immediately get a sense of whether the issue relates to DNS discovery, content lookup, download, etc.

In reality, when you’re doing this in-cycle, the big changes will come from the content being downloaded: large images downloaded to small screens over cellular networks, downloads of uncompressed content, repeated downloads of JS or CSS, etc.

Expert Tip:

How can developers get immediate, actionable insight to optimize page performance? This is where the Resource Timing API comes into play. It provides great insight into every object the browser requests: the server, timing, size, type, etc.

Again, to obtain access to the resource timing entries, all you need to do is the following:

// Retrieve the W3C Resource Timing entries for every object the browser requested
List<Map<String, String>> resourceTimers = new ArrayList<Map<String, String>>();
ArrayList<Map<String, Object>> resourceTimersO = (ArrayList<Map<String, Object>>) w.executeScript("return window.performance.getEntriesByType('resource');", resourceTimers);
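
To get a quick feel for what comes back, you can iterate over the entries and print a few fields. The loop below is a sketch of mine, but the field names (name, initiatorType, duration, transferSize) are standard W3C Resource Timing properties:

// Print a short summary line for each resource the browser fetched (sketch)
for (Map<String, Object> entry : resourceTimersO) {
    System.out.println(entry.get("name") + " | "
            + entry.get("initiatorType") + " | "
            + entry.get("duration") + " ms | "
            + entry.get("transferSize") + " bytes");
}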

And here’s an example of the data that is available. Lots of good stuff in here:

[Image: sample Resource Timing entries for a single page]

Each page will have a long list of resources like the above. You can summarize all the objects by type and produce a summary of totals and some distribution stats.

Below, for example, one can summarize the resources by type for each execution, as sketched in the code that follows:
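
Here is a minimal sketch of that summarization – my illustration, not the original post’s code – grouping the entries from resourceTimersO by initiatorType and totaling their duration and transfer size:

// Sketch: aggregate resource entries by initiatorType
Map<String, Long> durationByType = new HashMap<String, Long>();
Map<String, Long> bytesByType = new HashMap<String, Long>();
for (Map<String, Object> entry : resourceTimersO) {
    String type = String.valueOf(entry.get("initiatorType"));
    long duration = ((Number) entry.get("duration")).longValue();
    Object size = entry.get("transferSize");
    long bytes = (size instanceof Number) ? ((Number) size).longValue() : 0L;
    durationByType.merge(type, duration, Long::sum);
    bytesByType.merge(type, bytes, Long::sum);
}
durationByType.forEach((type, total) ->
        System.out.println(type + ": " + total + " ms, " + bytesByType.get(type) + " bytes"));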

Or, finally, simply keep direct access to all the raw resource entries for ad-hoc analysis.

Execution Time Comparison/Benchmarking

So far, we’ve gotten access to the raw data and conducted some level of analysis on it. At the beginning of this article we defined shift left as ‘deliver insight, early and easily’. Now, how about this: given a web page, we set a ‘baseline’, and from then on, on every execution, we measure responsiveness, provide a pass/fail, and produce a full comparison of the current page data vs. the ‘baseline’. Well, with a little code, that’s possible too.

Here’s the top-level, page-level summary of the current vs. ‘baseline’ run:

[Image: page-level summary comparing the current run to the baseline]

There isn’t a material difference in the number of items, but you can see that the page load time is almost 3 seconds longer. At first look, it seems the rendering time is the part that has grown.


Now, here’s the comparison of the summary by type:

[Image: comparison of total items, size and duration by content type vs. the baseline]

This table compares the total items, size and duration by type against the baseline. It’s not surprising that there aren’t any new types of content introduced on this page, nor massive changes in the number of elements per type, given that the last run was just a few days earlier.

Still, even though there is only one additional image in total, it appears that images drive most of the latency in loading the page.

To take a closer look, here are the images with the largest load times:

[Image: the individual images with the largest load times]

Interestingly, even images that were part of the older page still took longer:

[Image: load times for images that also existed on the older page]

Putting It All Together

As we’ve seen, it’s possible to examine page responsiveness across different browsers. It’s also possible to compare the page and resource metrics against a previous run to extract optimization actions or detect a defect. The nice thing is that this can be done for any test: smoke, regression, even production. It does not require any additional infrastructure, as it simply runs within the target browser. Results can be embedded into your reporting solution, and overall, performance can become part of your agile quality activity.

Code reference

The code used for this project is available as open source at https://github.com/AmirAtPerfecto/WebTimers

Follow up projects

  • More performance activities
    • HAR file: In addition to direct analysis of the page resources and metrics, it is also possible to analyze the HAR file. Unfortunately, there don’t seem to be API-based analyzers readily available (most are web UI-based tools), but perhaps one could be built.
    • OCR-based analysis: Some tools (including Perfecto) offer visual analysis to measure actual content render time. The accuracy of such measurement isn’t as high, and the details aren’t as easily translated into action. Still, it’s a good method for measuring user-experience performance across screens, and the OCR approach also works well for native apps.
    • Other tools: Google PageSpeed, YSlow, etc.
  • Other
    • Security: As with the performance analysis, given that the servers and resources downloaded are detailed in the logs, it should be possible to identify the set of servers and countries contributing to the web page. Possibly not all are acceptable; that would be good to know, and it is easy to add to the agile cycle.

Looking for more information on performance testing? Click here to read.


What it Takes to Implement and Advance Continuous Testing Successfully

2018 is quickly becoming the year of DevOps and Continuous Testing. Some experts suggest that organizations moving toward DevOps should operate with the highest percentage of test automation possible, and while this is a good suggestion, it takes more than that to be successful in DevOps.

You need a mature DevOps strategy with a robust continuous testing method that goes beyond simply automating functional and non-functional testing. While test automation and the ability to release software quickly are clear key enablers of agility, continuous testing (CT) requires additional practices, continuously measured, to achieve and sustain success.

The main question I get from organizations is how to implement continuous testing and advance DevOps maturity successfully. Here are five steps you can use to implement CT for your business:

1. Risk vs. Reward – It’s obviously about coverage but you know you can’t test everything. You need to understand the best coverage for browsers and mobile devices for your business.

2. End-to-end testing – You need automated end-to-end testing that complements your existing development process. To create this environment while excluding errors and allowing continuity throughout the SDLC, you need to implement the right tests, make sure your CT test buckets are correct, and leverage reporting appropriately. In addition, these tests, which support various team members and features, need to run on each code commit as part of a consolidated CI process.

3. Stable lab and test environment – The lab needs to be central to everything in your CT process. Your lab needs to be able to support your coverage requirements in addition to the test frameworks that were used to develop the tests.

4. Artificial Intelligence (AI) & Machine Learning (ML) – These can help you optimize your CT test suite and reduce the amount of time spent in release activities. If you are looking for more guidance on how to scale up your test automation, check out our latest ebook.

5. Software delivery pipeline and DevOps toolchain – CT needs to work seamlessly with everything. No matter the frameworks, environments (front- or back-end) and IDEs used in the DevOps pipeline, continuous testing needs to pick up all the appropriate tests, execute them automatically and provide feedback for a go/no-go decision on the release.

In 2018, we will continue to see more companies transition to DevOps and Continuous Testing. Those that want to stay ahead of the curve need to implement the correct foundation for continuous testing by adopting these five steps and creating a plan that is continuously optimized, maintained and adjusted as things change in the market or on your product roadmap.

Looking for even more insight on automation in DevOps and Continuous Testing?

Sign up for our Top 5 Test Automation Challenges and How to Solve Them webinar on January 30, 2018!
