The next generation of automation testing for video streaming apps, by Perfecto & Accenture S3

How many times have you heard “Touchdown!!” (or “GOAL!!”) while watching a game on your phone, only to be staring at a loading dial? How about this: you are sitting on an airplane, trying to watch a video over the in-flight Wi-Fi, and it just stops or pauses constantly. Or maybe you’ve tried to watch a home improvement or recipe video that was impossible to follow because it just kept stalling?

The proliferation of video consumption on a variety of devices is unsurprising when one considers the media vertical: sports, entertainment, drama, gaming, etc. Nearly 60% of sports video viewing in the UK happens on mobile devices.

What is interesting is the same phenomenon outside this vertical: for example, 80% of all banks are planning to offer video-enabled banking services as an evolution of their personalized digital presence (Source).

The main drivers: user satisfaction, competitive edge, and increased customer loyalty.

The same trend is happening in retail, travel, and many other verticals.

With high hopes and success stories comes the reality: content quality on its own is insufficient to drive user adoption, satisfaction, and business metrics. It’s all about the end-user experience: the sum of content quality, delivery, and rendering on the mobile device. All vendors in this ecosystem need to align on quality expectations and embed agile quality practices.

This is easier said than done when considering the pace at which mobile applications are refreshed, as well as the time pressure introduced with new video content and services. Teams need automated, stable, and scalable methods to continuously test and monitor video services on every phone, tablet, and browser, and in any environment: at home, on the road, inside a plane, etc.

A collaboration between Accenture S3 and Perfecto provides a solution for just that. Perfecto offers access to a variety of devices, in either interactive or automated mode. Tests can be carried out pre-production or in production, following DevOps team collaboration methodologies. Scripts can be written in any language or framework (BDD/Cucumber, Java, JavaScript, C#, etc.) in a stable and scalable form. Devices can be connected over Apple TV or Chromecast so that video quality can be analyzed by StormTest.
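
To make the scripting side concrete, here is a minimal sketch of what such an automated test could look like in Java with Selenium. The cloud URL, security token, and device capabilities are placeholders; check Perfecto’s documentation for the exact values your lab expects.

import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class StreamingSmokeTest {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        // Placeholder values: supply your own cloud name, token, and device selection.
        caps.setCapability("securityToken", "<perfecto-security-token>");
        caps.setCapability("platformName", "iOS");
        caps.setCapability("model", "iPhone.*");

        RemoteWebDriver driver = new RemoteWebDriver(
                new URL("https://<cloud-name>.perfectomobile.com/nexperience/perfectomobile/wd/hub"),
                caps);
        try {
            // Start playback in the app or browser; the device's video output can then be
            // routed (e.g., over Apple TV or Chromecast) to StormTest for quality analysis.
            driver.get("https://example.com/live-stream");
        } finally {
            driver.quit();
        }
    }
}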

Using this system, brands can conduct both functional testing and video quality testing in an automated, continuous manner.

The resulting report can contain both the details of the functional analysis and the StormTest video quality report. A sample of the video quality metrics:

VQM_monitoring_frame_914 : MOS avg = 73.921368599: good, TFQ avg: 74.4697488455, FQ avg: 75.4665172481, nb of jerkiness: 12, max jerkiness time in ms: 66.6666666667, nb of frames with jerkiness: 12, nb blockiness: 0, nb blur: 389, nb contrast: 453, temporal min: 0.64802035108, temporal max: 50.9574469522, temporal avg: 8.64656667633, temporal stdv: 7.90411712151, Audio MOS avg = 70.6908369241: good, nb of silence: 36, nb of saturation: 0, nb of breaks: 0, nb of distortion: 0
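
Metrics like these lend themselves to automated quality gates. Below is a minimal Java sketch that parses a few fields from such a report line and fails the run when they dip below a threshold; the threshold values are illustrative, and real ones would come from your quality SLAs.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VqmQualityGate {
    public static void main(String[] args) {
        // Excerpt of the report line shown above.
        String report = "VQM_monitoring_frame_914 : MOS avg = 73.921368599: good, "
                + "nb of jerkiness: 12, Audio MOS avg = 70.6908369241: good";
        double videoMos = extract(report, "MOS avg = ([0-9.]+)");
        double audioMos = extract(report, "Audio MOS avg = ([0-9.]+)");
        int jerkiness = (int) extract(report, "nb of jerkiness: ([0-9]+)");
        // Illustrative gates: fail the build when quality drops below agreed levels.
        if (videoMos < 70 || audioMos < 70 || jerkiness > 20) {
            throw new AssertionError("Video quality below threshold");
        }
        System.out.println("Quality gate passed: video MOS " + videoMos + ", audio MOS " + audioMos);
    }

    private static double extract(String line, String regex) {
        Matcher m = Pattern.compile(regex).matcher(line);
        if (!m.find()) throw new IllegalArgumentException("Metric not found: " + regex);
        return Double.parseDouble(m.group(1));
    }
}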

Closing Thoughts

Can brands get ready for the next revolution of digital engagement? Absolutely. Automated, scalable testing and monitoring of video engagements that fit tight release cycles and produce reliable data are here. It’s time to take advantage of them.

Looking for more about the importance of streaming video quality?

Perfecto Blog

Don’t Fail When You Upscale: 5 Best Practices for Scaling Up Your Test Automation

More and more markets are moving towards digital channels as their main means of engaging with customers.  A majority of companies in North America and Europe have embraced the agile philosophy;  close behind and coming up fast is the move towards DevOps practices, which allow teams to release higher quality software at a faster pace.  The importance of companies’ online presences is obvious and the advantages of a DevOps mindset are clear;  however, a recent Sogeti report shows that only 16% of testing is currently being automated.

What’s the holdup, you ask? With all the obvious upsides to automating your testing regimes, why is the adoption rate lagging? We’ve identified 5 potential reasons. We’re also giving you solutions, along with a nice acronym to help you remember what you’re up against: TUTOR.

Testing and automation dilemmas produced by tighter release schedules

In a recent research report by Perfecto, we found that the #1 pain for DevOps and Continuous Testing teams is time, or rather a lack of it. Under this umbrella, there are a few key sub-problems:

  • Integrating new automated tests into existing test frameworks
  • Innovation and new features requiring more tests in less time
  • New devices and browsers adding yet more testing to squeeze into existing schedules

Are there solutions to these dilemmas? Well, we have a checklist of best practices to make sure your DevOps strategy is firing on all cylinders. First, make sure that quality is a top priority for all of your agile teams. Then, check out our new eBook, “How to Scale (Up) Your Test Automation Practices”; it’ll walk you through a detailed strategy for tackling these obstacles and more.

Universally accepted best practices are nowhere to be found…yet

This one’s a biggie. Flaky tests are an ongoing problem for automation engineers. Properly identifying objects, understanding key performance indicators for test suites, and simple things like ensuring that platforms are in a proper state for stable operation: these fundamentals need to be covered in order to get the best results in the least time. If you want great results (that goes without saying, right?), you need to devote as much discipline to test automation as you do to coding.
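
One concrete example of those fundamentals: replace fixed sleeps with explicit waits, so a step runs only once the platform is actually in the expected state. A small sketch using Selenium’s WebDriverWait (Selenium 4 signature shown):

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class StableInteractions {
    // Waits until the element is actually clickable instead of sleeping a fixed time,
    // removing a common source of flakiness without slowing down the happy path.
    static WebElement clickWhenReady(WebDriver driver, By locator) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(15));
        WebElement element = wait.until(ExpectedConditions.elementToBeClickable(locator));
        element.click();
        return element;
    }
}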

We have some great strategies for making sure your DevOps pipeline is running as efficiently as possible.  Stay ahead of the curve; download our new eBook and let Perfecto help you out with the latest tips and tricks.

Testing the “right” things becomes increasingly difficult as testing scales up

Test suites are getting bigger and time constraints are ever tighter. Obsolete or duplicate tests, lack of parallel execution, and inefficient data management within test frameworks can lead teams to waste time testing the wrong things. The test automation industry is working to introduce machine learning and artificial intelligence to help analyze and optimize test suites and increase productivity. What can you do in the meantime? Make sure you have the best analytics you can get and use them to maximize your test suite’s efficiency.
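
Parallel execution, at least, is mostly plumbing. As a rough illustration (the device names are hypothetical and the suite call is stubbed out), fanning the same suite out across a device pool can be as simple as:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelDeviceRuns {
    public static void main(String[] args) {
        List<String> devices = List.of("iPhone-8", "iPhone-X", "Galaxy-S8"); // hypothetical pool
        ExecutorService pool = Executors.newFixedThreadPool(devices.size());
        for (String device : devices) {
            pool.submit(() -> runSmokeSuite(device)); // one session per device, side by side
        }
        pool.shutdown();
    }

    static void runSmokeSuite(String device) {
        // Stub: open a RemoteWebDriver session targeting `device` and run the suite.
        System.out.println("Running smoke suite on " + device);
    }
}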

Open source sometimes struggles to cover the edges

New technologies such as Face ID, image injection, and voice interaction are turning what were only recently edge cases into critical testing elements.  If you don’t have the proper tools to automate testing of these features, you’ll either be forced to do loads of manual testing or allow quality to suffer.  For mature agile organizations, the best way to ensure continuous quality is to employ a combination of open source and commercial solutions;  we love open source, but for enterprise-grade reporting, debugging, and test authoring capabilities, you really need commercial tools in your toolbox.

Rev up your test automation toolbox to fit your organizational structure

Organizations are made up of individuals, and all individuals have their own specific skills and strengths. When your tool stack isn’t tailored to individual team members’ skills, the overall product quality is impacted.  On top of that, with the new agile development mentality, QA is steadily being integrated into a more end-to-end DevOps process, which in turn requires more adjustments in terms of tools.  

In order to scale up your automated testing processes, you need to match your tool stack to your SDLC methodology; make sure that your dev and test teams use tools that interoperate well. Also, as mentioned previously, keep an eye on cutting-edge developments in AI/ML; they are very likely to find their way into test frameworks soon.

The bottom line

Achieving continuous quality in DevOps demands a careful balance between tools, people, and processes. All your teams need to share the same goal: fast, high-quality releases. The way to guarantee success in scaling up your automated testing is to get the right tools for the job, and for your people.

For a detailed breakdown on the issues mentioned in this post, and for some great insights and simple solutions that we didn’t have time to explore here, grab a free copy of our latest publication, How to Scale (Up) Your Test Automation Practices.  Armed with the wisdom, methodologies, and best practices in this ebook, you can help your organization take its application quality to new heights.  And, as we all know, great apps mean happy customers!

Perfecto Blog

iPhone X and iOS11: 5 Tips to Ensure Your App Works Well With Both

To celebrate the tenth anniversary of the introduction of the iPhone, Apple wanted to make a splash in 2017. On top of their traditional release schedule, they set out to offer their customers a new, premium experience with the iPhone X.  Along with iOS 11, they have introduced a raft of cool new tech and features; however, their frenetic release schedule has been accompanied by something we’re less used to from the masters of fit and finish: quality issues.  

With the iPhone X, Apple took a bigger jump in terms of innovation compared to previous iPhone releases.  After receiving criticism for not innovating enough with the iPhone 7 or iPhone 8, Apple wanted to remind us that they aspire to be the leaders in mobile innovation.  The X departs from previous designs with a unique new screen, removal of the physical home button, and the introduction of Face ID.

It’s also fair to say that the new iOS 11 has had more than its fair share of serious performance and functionality issues, ranging from unusual battery drain to unexpected data usage to security problems. In fact, Apple has been forced to release OS patches six times in the barely six weeks since its introduction. Possibly due to these teething pains, adoption rates for the new OS have been unusually low (roughly 30% in the first six weeks).

When it comes to the iPhone X / iOS 11 combo, as well as the rest of the current iOS ecosystem, DevOps professionals need to take a multipronged approach to make sure their code doesn’t get caught up in Apple’s current growing pains.

But hey, you came here for the hot tips for iPhone X / iOS 11 testing, right?  Well, let’s dig in then…

1. Test across all platforms

Even if you already understand the importance of cross-platform testing, the current state of affairs at Apple means you need to take extra care to make sure you have the coverage you need. Some of the most widely used iPhones can’t run iOS 11, and iOS 9 still has a significant market share, so you need to be testing on all three operating systems: iOS 9, 10, and 11.

2. Beware release numbers

Not only do you need to make sure you have three OSes covered, there’s another thing to keep in mind: there are different patch levels for different phones. The iPhone X comes out of the box with 11.0.1 but updates straight to 11.1, whereas previous phones such as the iPhone 7 and 8 received incremental updates, so many phones are out there with intermediate versions of iOS. If you want to be certain your app behaves properly under all circumstances, you should have at least one phone running an intermediate version, say an iPhone 7 on 11.0.3.
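
One way to keep that coverage honest is to encode the device/OS matrix as data and open a session per entry. A minimal sketch with Selenium-style capabilities; the device names and versions below mirror the examples above rather than a definitive list:

import java.util.List;
import org.openqa.selenium.remote.DesiredCapabilities;

public class IosCoverageMatrix {
    // Covers iOS 9, 10, and 11, plus one intermediate patch level as recommended above.
    static List<DesiredCapabilities> matrix() {
        return List.of(
                caps("iPhone 6", "9.3.5"),
                caps("iPhone 7", "10.3.3"),
                caps("iPhone 7", "11.0.3"), // intermediate iOS 11 version
                caps("iPhone X", "11.1"));
    }

    static DesiredCapabilities caps(String device, String version) {
        DesiredCapabilities c = new DesiredCapabilities();
        c.setCapability("platformName", "iOS");
        c.setCapability("deviceName", device);
        c.setCapability("platformVersion", version);
        return c;
    }
}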

3. Triple-check your responsive designs

You might call this a case of “you try to do something nice…” As previously mentioned, the iPhone X screen is one of its big marketing points. It has a much higher resolution than previous models, a higher PPI, and a different aspect ratio. It’s a thing of beauty, but it breaks responsive app designs. Bummer. Check out the examples of the Starbucks and CVS apps below:

[Screenshots: Starbucks and CVS apps rendering incorrectly on the iPhone X]

It’s pretty clear this isn’t rendering as intended on the iPhone X. On top of this, designs need to be tested in both landscape and portrait modes. We’ve found examples of broken interfaces, inaccessible buttons, missing text, and other weirdness in major apps, including some menus in Hulu being completely inaccessible. Developers need to check, double-check, and check again their responsive layouts, graphics, etc., to make sure that things still look and work great on the X.
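
If you automate with Appium, rotating the device under test is a one-liner, so both orientations can share the same layout assertions. A minimal sketch, assuming the Appium Java client:

import io.appium.java_client.ios.IOSDriver;
import org.openqa.selenium.ScreenOrientation;

public class OrientationChecks {
    // Runs the same layout assertions in portrait and then in landscape.
    static void verifyBothOrientations(IOSDriver<?> driver, Runnable layoutAssertions) {
        driver.rotate(ScreenOrientation.PORTRAIT);
        layoutAssertions.run();
        driver.rotate(ScreenOrientation.LANDSCAPE);
        layoutAssertions.run();
    }
}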

4. Go with the flow

The iPhone X has broken with the past and ditched the physical home button, whose function is now replaced with an upward swipe gesture. This means that actions such as starting the task manager, killing background apps, or simply returning to the home screen require new flows; you need to be testing these new flows, and if you’re going to automate (you do automate, right?), you need to adapt your code to cover them.
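
For the new home gesture specifically, an automated flow can approximate the swipe up from the bottom edge. A sketch assuming Appium’s Java client (version 6+ TouchAction API); coordinates are derived from the screen size rather than hard-coded:

import io.appium.java_client.ios.IOSDriver;
import io.appium.java_client.TouchAction;
import io.appium.java_client.touch.offset.PointOption;
import org.openqa.selenium.Dimension;

public class HomeGesture {
    // Approximates the iPhone X "go home" flow: swipe up from the bottom edge.
    static void swipeHome(IOSDriver<?> driver) {
        Dimension size = driver.manage().window().getSize();
        int x = size.width / 2;
        new TouchAction(driver)
                .press(PointOption.point(x, size.height - 1))
                .moveTo(PointOption.point(x, size.height / 2))
                .release()
                .perform();
    }
}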

5. So long, Touch ID, say hello to Face ID!

Jokes aside (Touch ID isn’t going anywhere soon), this introduction is a biggie, and there are some breaking-news caveats attached too; check the postscript! Face ID is cool new tech, but it is clear from a quick survey of leading apps that it has caught some developers off guard. Certain apps aren’t correctly identifying which technology is available and consequently offer incorrect or unavailable choices for authentication. We talk about it a lot, but this is clear proof of why you need to be testing early and often. Test on beta versions of OSes and software, and start testing new devices as early as humanly possible.
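
For automating the Face ID path itself, iOS simulators can help: recent Appium XCUITest drivers expose biometric helper commands. Command names and availability vary by driver version, so treat this as a sketch and verify against your driver’s documentation:

import java.util.Map;
import io.appium.java_client.ios.IOSDriver;

public class FaceIdSimulation {
    // Simulator-only sketch: enroll a biometric identity, then answer the app's
    // Face ID prompt with a matching face.
    static void passFaceId(IOSDriver<?> driver) {
        driver.executeScript("mobile: enrollBiometric", Map.of("isEnabled", true));
        // ...navigate the app to the point where it requests Face ID, then:
        driver.executeScript("mobile: sendBiometricMatch", Map.of("type", "faceId", "match", true));
    }
}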

The Bottom Line

Apple’s introduction of the iPhone X hints at an exciting new phase of mobile tech innovation. Its cool new features offer fascinating new ways to excite and engage end users and, by extension, enhance your company’s digital presence. However, in the rush to capitalize on these new capabilities, it’s important not to neglect quality; otherwise, you might find yourself losing customers rather than winning them over. This is a critical time to reevaluate your testing and QA processes. Make sure your apps don’t fail the new-tech test!

The Postscript

In the past 36 hours, Face ID has hit another speed bump. A Vietnamese security firm claims to have cracked Face ID security with a composite 3D mask. While this remains to be verified by other sources, it does correlate with separate reports of Face ID being fooled by non-twin siblings and other similar faces. Expect lots of talk about the Face ID debate in the coming weeks.

You might also like this blog: 7 Plays to Handle Bug Fixes in Your DevOps Process to Change the Game

Perfecto Blog

Website and App Testing Fails: Don’t Let Black Friday Ruin the Biggest Day of the Year

Here we are again, another post about Black Friday. Why should you read this one? What is different about it? Well, for starters, I promise to be brief. I also think you will find at least one useful fact to add to your checklist for making sure your site and app don’t become a Black Friday horror story.

Here are three common Black Friday pitfalls to be aware of:

1. Performance and User Conditions

Even though this might seem like old news, you want to confirm that you are prepared before the Black Friday rush. Black Friday will obviously bring more users to your website from various channels, the leading one most likely being mobile; therefore, you need to make sure your mobile web, desktop web, and mobile app will function seamlessly with the influx of visitors.

Macy’s Black Friday 2016

User experience is one of the biggest concerns on Black Friday. How are users going to navigate through your website or app? Are they going to use the in-app camera, push notifications, AR, VR, etc.? Will your website or app have trouble loading, or crash with the increase in visitors? You need to test in a lab based on real environments to make sure you are ready. Mimicking sensors and the camera, as well as the entire UX, is not simple; therefore, you need to have such a lab in place as part of your development and testing environment.

2. Context-Based UX Assurance

Apps, especially retail ones, are quickly adopting context-based methods to better serve their customers. Among the context-specific functionalities we see, you’ll mostly find location-based offerings and coupons, popup notifications about that day’s deals, and AR capabilities that may also be tied to the customer’s location. Looking at the location aspect: depending on where users are, different products might be on sale, or different items might be promoted to different demographics. You need your app to deliver a smooth UX regardless of location, so users can check out seamlessly without running into snags that lead to abandonment. Testing against various user conditions, including locations and network conditions, across various devices and OS versions is key to assuring that UX.
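
Location-dependent behavior is one of the easier conditions to automate. Appium lets a test set the device’s reported GPS position, so the same offers/checkout flow can be replayed from several representative locations; the coordinates below are just examples:

import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.html5.Location;

public class LocationScenarios {
    // Replays the offers/checkout flow from a few representative positions.
    static void checkOffersByLocation(AppiumDriver<?> driver) {
        double[][] positions = {
                {40.7128, -74.0060},  // New York
                {34.0522, -118.2437}, // Los Angeles
        };
        for (double[] p : positions) {
            driver.setLocation(new Location(p[0], p[1], 0));
            // ...reload the offers screen and assert the expected deals and coupons appear
        }
    }
}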

3. Platform

Making sure the same user experience is present across all digital platforms (learn how here), including mobile and desktop web, is important, and it becomes a little trickier when traffic volume increases 2x or even 3x in a single day. The digital experience across the entire platform is something you want to pay special attention to so you don’t lose any sales along the user journey.

Learn more about Cross-Browser Testing or App Testing Today!

Perfecto Blog

Mobile Testing: The Balance Between Real Devices and Emulators

I believe that the debate over mobile testing on real devices vs. emulators is one of the oldest and most emotional in the mobile space in recent years.

In this blog, I would like to try and make “peace” between the parties who are in favor of each.

Before I give my POV on that, let’s clarify the exact meaning of “testing mobile apps” so we’re all on the same page (I wrote an entire book about it 😊).

Mobile app testing has a wide scope that includes unit tests, functional end-to-end (E2E) tests, performance and UX tests, security and accessibility tests, and compatibility tests. Some innovative apps will also require advanced audio and gesture test cases for chatbots, fingerprint, and Face ID authentication.

While we’re all familiar with the traditional test pyramid, when we apply it to mobile the testing strategy often looks different, especially when we try to fit the large test scope and the scale of market platforms into the testing cycle.

That’s why organizations try to balance test types, the app’s software development life cycle (SDLC) phase, and other considerations to address the challenge of quality and velocity.

Emulators, on one hand, serve a very important purpose during the SDLC. They greatly reduce costs for developers and testers. Some would argue that they are faster to set up and execute. They have lower error rates and, in most cases, are already embedded in the developer’s environment, which is very convenient from a fast-feedback perspective.

On the other hand, emulators do not run the formal carrier OS version, do not run on real device hardware, cannot cover all required environment-based testing (such as specific carrier configurations or unique sensor and gesture testing), and, above all, cannot serve as the single Go/No-Go decision platform. End users consuming the app from various locations and environments around the world are not using emulators but real devices, with varying conditions, OS versions, and many competing apps running in the background. Real devices, therefore, should be the formal test bed for your advanced testing activity.

The best practice for mobile app testing should rely on a mix of tests that are handled by different personas in the mobile product team that are spread between emulators and real devices, based on the build phase. In the early sprint phases, when the features are only shaping up, it makes a lot of sense to run smoke tests, unit tests and other fast validations against emulators from the developer environment. Later in the build process, when the coverage requirements and the quality insights are greater, launching the full testing scope in parallel against real devices is the right way to go.
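
One way to wire that phase-based split into a framework is to pick the execution target from the build phase. A rough sketch with placeholder endpoints and capabilities (the "avd" capability is Appium’s way to boot a named Android emulator image):

import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class PhaseAwareTarget {
    // Early-sprint smoke runs hit a local emulator; later regression runs target
    // real devices in a cloud lab. URLs and names here are placeholders.
    static RemoteWebDriver forPhase(String phase) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        if ("smoke".equals(phase)) {
            caps.setCapability("avd", "Pixel_API_26"); // boot a local emulator image
            return new RemoteWebDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        }
        caps.setCapability("deviceName", "<real-device-id>");
        return new RemoteWebDriver(new URL("https://<device-cloud>/wd/hub"), caps);
    }
}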

Real Devices vs. Emulators – LinkedIn Case Study

I presented last month at Android Summit 2017. The event was awesome, and I enjoyed great sessions and community networking. One of the presentations at this conference was given by Drew Hannay, Staff Software Engineer from LinkedIn.

LinkedIn (acquired by Microsoft) now serves more than half a billion users across multiple platforms, including web and mobile. According to the company, it is constantly trying to attract new users, mostly young ones (college graduates, whose numbers LinkedIn’s university campaign estimates at 180M), and users from expanded geographies such as India and China, with 140M users in China alone.

LinkedIn used to suffer from poor quality, stability issues, and bad reviews. While trying to solve the quality issues and keep growing, LinkedIn announced project Voyager more than a year ago: a new release model and SDLC strategy aimed at improving both quality and release velocity for the organization. LinkedIn shifted to an admirable 3×3 release strategy: as Drew presented, LinkedIn is now able to push a new version to production 3 times a day, every 3 hours, from developers’ code commits.

[Slides: LinkedIn’s 3×3 release model and test execution scope]

While the release cadence improved, allowing the LinkedIn product team to quickly adapt to changes, fix bugs faster, and innovate, the end-user experience somehow dropped, and many mobile users are vocal about it in the app stores, stating that they would rather use the desktop browser than the mobile app. The thing about quality, especially in the mobile space, is that it is “a moment in time”. With such an aggressive release schedule, the build that was in production 3 hours ago is irrelevant from a quality and stability perspective; therefore, the product team must know at all times how the app works on real devices in production rather than assuming that it works based on emulator testing alone.

As shown in the image above, LinkedIn runs its entire test scope on 16 emulators in parallel as part of its test plan. There is zero coverage on real devices because, as Drew stated, “our implementation is agnostic across devices.”

If we check that statement and strategy against app store reviews and my own personal device experience, the implementation isn’t that device-agnostic, and many defects escape to production and impact real-device users.

[Screenshots: LinkedIn app store reviews reporting real-device issues]

The experiences above show crashes of the app on Android devices when switching from Wi-Fi to real carrier networks, mobile invites to connections not working properly, sync issues between the LinkedIn feed and what’s shown in the browser version, installation issues on real devices, and more.

With the majority of LinkedIn traffic coming from mobile devices, LinkedIn’s testing strategy should be tuned accordingly.

Summary

With the above stats and the quality reality LinkedIn users face, the testing strategy needs to change. Future usage growth is expected to come from India, China, and other non-U.S. demographics, building the case for covering different devices, OS versions, form factors, and network conditions.

LinkedIn needs to base its mobile testing strategy on real-life personas operating from different geographic locations, under varying conditions, with background apps running, and more.

Testing on emulators is essential and should be kept as part of the strategy, but emulators cannot be the only platform for testing this app; as seen above, they alone do not guarantee continuous quality and UX.