Visual regression testing


Automated regression testing is an important practice on large projects. Many teams use it to reduce the cost of manual regression testing, speed up delivery, and lower the cost of bug fixing. Automated UI testing, however, is not an easy task: it is tightly coupled with data, highly dependent on the HTML markup, and slow (communicating with the browser via a web driver and grabbing data from individual HTML elements takes time). It is also really hard to verify visual issues this way (for example, the positioning of a dialog in the viewport).

There is another approach which can be used alongside other types of automation: visual regression testing. The idea is quite simple. Instead of analyzing the resulting HTML, we concentrate on comparing screenshots of the application at critical moments in time.

Here is the set of tools I used on my project to make visual regression testing possible:

  • Fake back-end. It's not mandatory, but most likely you'll need something similar, because image diffing is very sensitive to the content of the application. Even small data changes will produce a lot of false-positive alerts;
  • Standardized environment. For example, a virtual machine or container with fixed versions of the OS and other software; good examples are Vagrant and Docker. Again, this is because any environment change can lead to false positives (a browser update changes how border radius is rendered, an OS update changes font rendering, etc.), so you need to control as much as possible;
  • Browser. In our case we currently focus only on headless Chrome, because it's fast. (I don't recommend PhantomJS: it's no longer supported, has a lot of bugs, and is quite unstable);
  • Testing framework. The good news is that most likely you don't need to introduce a new framework into the project. In our case we adopted Nightwatch, which was already used on the project for E2E testing;
  • Image diff tool. Good examples are Resemble.js and ImageMagick;
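To make the last point concrete, here is what an image diff tool does at its core, stripped of all the refinements real tools add. This is an illustrative sketch of the general idea, not the actual Resemble.js or ImageMagick algorithm:

```javascript
// Illustrative sketch only: count the fraction of pixel values that differ
// between two screenshots of the same dimensions. Real tools (Resemble.js,
// ImageMagick's `compare`) add fuzz tolerance and anti-aliasing detection
// so minor rendering noise does not fail the test.
function mismatchRatio(pixelsA, pixelsB) {
  if (pixelsA.length !== pixelsB.length) {
    throw new Error('Screenshots must have the same dimensions');
  }
  let differing = 0;
  for (let i = 0; i < pixelsA.length; i++) {
    if (pixelsA[i] !== pixelsB[i]) {
      differing++;
    }
  }
  return differing / pixelsA.length; // 0 = identical, 1 = completely different
}
```

A baseline run stores the screenshot; subsequent runs compare against it and fail when the mismatch ratio exceeds a threshold.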

I won't describe all the points in detail. I just want to highlight a couple of important things in the Nightwatch configuration.

The first step is to use headless Chrome instead of a regular one:

"desiredCapabilities": {
	"browserName": "chrome",
	"chromeOptions": {
		"args": [
			"--no-sandbox",
			"--headless",
			"--disable-gpu",
			"--window-size=1920,1080"
		]
	}
}

It's very easy, because Nightwatch lets you pass the arguments straight through. In our case we are interested in "--headless" and "--disable-gpu".

The second step is to add a custom assertion to Nightwatch. You can take a look at the assertions that are part of the Nightwatch repository. In my case I used ImageMagick and image-diff, so it was pretty straightforward.

Last but not least is a wrapper function for saving screenshots. This wrapper is inspired by this enhancement request: link. This is sample code; the intent is to demonstrate the idea rather than provide a fully working snippet (gm is used for the ImageMagick cropping):

let imageMagick = require('gm').subClass({ imageMagick: true });

function takeScreenshot(browser, selector) {
	browser.perform((done) => {
		// Measure the target element's position and size in the viewport
		browser.execute((selector) => {
			let element = document.querySelector(selector);
			return element.getBoundingClientRect();
		}, [selector], (data) => {
			// screenshotPath is assumed to be defined elsewhere,
			// e.g. derived from the current test name
			browser.saveScreenshot(screenshotPath, () => {
				// Crop the full-page screenshot down to the element's bounds
				imageMagick(screenshotPath)
					.crop(data.value.width, data.value.height, data.value.left, data.value.top)
					.write(screenshotPath, (error) => {
						// Run your image-diff assertion from the previous step here
						browser.verify.customAssertion();
						done();
					});
			});
		});
	});
}

The most important thing here is the additional browser.perform call. It's mandatory because saving and cropping the screenshot are asynchronous operations; without it, they would most likely finish after the test has already completed.

That's it: now you are ready to go and write your own visual tests.

Conclusion

Let me try to summarize the pros and cons of visual regression testing.

Pros:

  • These tests will most likely be faster. First, you can cover several UI components with a single screenshot, which reduces the overall number of tests. Second, the amount of communication between the browser and the E2E test is much lower because you don't analyze the HTML content. Most test scenarios will be simple, e.g.: click, click, type something, click, screenshot;
  • These tests can catch visual issues which are hard to detect with regular E2E testing (positioning, colors, etc.);

Cons:

  • You need to carefully select the area you want to analyze. If the area is too narrow, you can miss something important; if it's too wide, global changes (like adding a footer or changing the top menu) will break a lot of unrelated tests;
  • You are introducing additional complexity into the system (extra mocks or even a fake back-end, a virtual machine/container);
  • If you want to run these tests in multiple browsers, you'll most likely need to store a separate set of baseline screenshots for each browser;
