
Chrome Headless, Selenium, and AWS

There are dozens, if not hundreds, of solutions out there for handling application testing. For web applications, the ability to conduct accurate, exhaustive functional and load testing across a myriad of platforms, browser versions, screen sizes, and screen resolutions requires equally many testing environments. To mitigate that platform sprawl, many shops still restrict their production environments. “Given our resources, we just can’t test all possible browser configs” is a common refrain of development and test managers. This restriction places limits on scalability and imposes opportunity costs on the business, costs that vary in severity and visibility, as the workforce constrains itself to the “approved” environment even after that environment has long passed its prime. It is one thing to live with this constraint for native apps, but even with browser apps, many shops still cannot seem to shake this cramped approach.

The increasing pace of app evolution, particularly given DevOps trends, requires a capable and robust testing environment to keep up. The current reality screams out for a virtualized approach, and following from that, a cloud-based testing strategy. After all, web apps are accessed in a cloud-like arrangement in production, even if they are in-house apps. The ability to spin up whatever client stack is needed, whenever it is needed for testing (and however many are needed), is best handled in a virtualized cloud environment.

But far too often, cloud-based testing approaches require a paradigm shift, sometimes a profound one, in your test environment and test case construction. Commitment to a custom cloud, migration of your tests to another toolset, adoption of a one-size-fits-all test environment that may or may not accurately simulate your users’ software stack, or some other compromise driven by neither your app’s needs, your customers, nor your business: these are the concessions we commonly make in order to achieve any reasonable level of testing at all. This is often the point in the project when the Test Manager (or his/her boss) says “I don’t have the money….” The sad part is that this phenomenon ends up strangling the evolution of the production environment (how many of you work in shops with an “approved browser”?), and keeps many businesses perennially behind in productivity. As an opportunity cost it is difficult to detect or quantify, which of course makes it self-perpetuating.

A Testing Paradigm

Below we describe a testing paradigm that combines some typical legacy techniques with a cloud-based environment. The aim is to demonstrate that, without throwing out our old test suite, we can migrate to a cloud approach and run the same suite in the new environment, which opens the possibility of cheaply and easily expanding the test environment rather than the test suite itself. We use AWS, Headless Chrome, and Docker to spin up an environment that can take a typical legacy test suite and run it as-is. For demonstration, we assume you’ve got a set of Selenium / TestNG test cases written in Java. More importantly, we illustrate an approach that is fairly simple and inexpensive, and that provides a path to truly exhaustive testing, to whatever extent your app needs it. The strategy remains the same if, for instance, you are running tests in Python or Node.js, or most any other combination of test tools and production code.

The Preliminaries

Before proceeding further, there is an important disclosure to make: the approach described here, and essentially all “open platform” cloud testing, requires a headless browser. This means the browser that you would like to support in production must provide a “headless” mode, which you will use for test automation. You can do without one, but doing so completely vitiates the flexibility of the approach and sacrifices too much testing accuracy as well. Essentially you end up with the same narrowly focused testing climate that you have today, just with different tools.

Testing with a headless browser accomplishes two all-important goals: 1. We can run all of our tests, including those dependent on screen layout, in a totally automated fashion, and 2. We can run our tests against the same exact software that our users will employ in their production activities.

Until recently, the most commonly used headless approaches to testing have relied on one of a number of simulated browsers. As browser vendors release headless versions, this approach is rapidly becoming unnecessary. Chrome and Firefox already have headless versions, and Microsoft is working on one for Edge as of this writing.
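For instance, with Selenium’s Java bindings, switching an existing ChromeDriver-based test to headless mode is mostly a matter of passing a couple of extra flags. The sketch below is illustrative only; the factory class name and the chromedriver path are our own assumptions, not taken from any particular project:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class HeadlessDriverFactory {

    public static WebDriver createHeadlessChrome() {
        // Tell Selenium where ChromeDriver lives, if it is not already on the PATH
        // (the path shown is an assumption; adjust it to your installation).
        System.setProperty( "webdriver.chrome.driver", "/usr/local/bin/chromedriver" );

        ChromeOptions options = new ChromeOptions();
        options.addArguments( "--headless" );               // run Chrome with no display
        options.addArguments( "--disable-gpu" );            // recommended on some platforms
        options.addArguments( "--no-sandbox" );             // often needed when running as root in a container
        options.addArguments( "--window-size=1920,1080" );  // fix the viewport for layout-sensitive tests

        return new ChromeDriver( options );
    }
}

Everything else in the test, the element lookups, the form submissions, the assertions, stays exactly as it was.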

Alas, legacy browsers will not have the capability, and of course legacy browsers are a big reason that we test in the first place. If you find yourself needing to test old versions of Internet Explorer, this approach is not for you, at least not yet. Sorry about that! Hopefully, that era is coming to a close.

A Cloud-Based Testing Environment

We will build a Docker image to act as the client for our test cases. The AWS Cloud has all the features we need to employ it in our testing, efficiently and cheaply. For this example, AWS CodePipeline is our build stack, mainly because it is cheap. CodePipeline is by no means as flexible as Ansible, Chef, or the other prominent DevOps frameworks, and in fact it needs some obvious enhancements if Amazon wants to compete seriously in that space. But it is easy to set up and maintain (the build steps are driven by a simple YAML file), and it saves us the cost of a private GitHub account as well as the need to manage any DevOps servers. To emphasize, only the Docker image is germane to our approach; the fact that we run on AWS is a matter of preference, and we could just as easily run on any other cloud environment. For code that is already in GitHub, AWS CodePipeline integrates seamlessly, just as the other prominent DevOps tools do. It is also easy to host tools such as Ansible or Chef on AWS servers if desired.

Docker as a Test Client

Docker is the easiest and most reliable way to create our test machine images with the desired software stacks. We’ve created one on Docker Hub (also available on GitHub) for you to follow along with, or to download for your own projects. We describe it briefly below, to illustrate the simplicity of setting up the virtual client. Remember that the Docker image simulates the client’s environment, though most of us are accustomed to using Docker as a server virtualization tool.

Our Dockerfile starts with Ubuntu 16.04 (Xenial). This is not necessarily the best choice for a real testbed environment; there are other more commonly used operating systems for browser-based apps. But it is common enough, and more common as a development platform. So if you are using it, you can follow along on your development machine. The advantage of having at least one test image that exactly matches your development environment is obvious: each developer can quickly eliminate his own environment as the possible source of test failures (or quickly diagnose it as the problem).

Next, we install some minor utilities and then choose our Java platform. We install Java only because we want to use TestNG (or JUnit) to run the same Selenium test cases we’ve always run. In our case, we install Oracle JDK 9. OpenJDK works equally well here, but as of this writing it is trickier to install correctly on Xenial. Feel free to drop us a line if you want some help with it.

After Java, our Docker image installs Chrome (the latest stable versions support headless mode), and then ChromeDriver, which implements the WebDriver protocol that Selenium uses to drive the browser in our tests.

Next we install Maven, because we choose to manage our tests with TestNG, and Maven is the typical way to set up a TestNG project: the Maven Surefire plugin supports TestNG out of the box, and most TestNG projects use it. A viable alternative is Ant, but productivity tends to be higher with Maven.

Finally, we add the AWS CLI, only because we stand up our test server in the AWS Cloud and the CLI gives us some additional useful capabilities (copying test artifacts to S3, for example). In our context, it is just another utility.
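To give a feel for it, here is a condensed sketch of the kind of Dockerfile we are describing. It is illustrative rather than a copy of the published image (in particular, the Java step is simplified; the real image installs Oracle JDK 9 as noted above), so expect to adjust versions and details for your own stack:

FROM ubuntu:16.04

# Minor utilities
RUN apt-get update && apt-get install -y wget curl unzip gnupg2 ca-certificates

# Java platform (simplified here; the published image uses Oracle JDK 9)
RUN apt-get install -y openjdk-8-jdk

# Google Chrome stable channel, which supports headless mode
RUN wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
 && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" \
      > /etc/apt/sources.list.d/google-chrome.list \
 && apt-get update && apt-get install -y google-chrome-stable

# ChromeDriver, so Selenium can drive the browser (match the version to the installed Chrome)
RUN wget -q https://chromedriver.storage.googleapis.com/2.41/chromedriver_linux64.zip \
 && unzip chromedriver_linux64.zip -d /usr/local/bin/ \
 && chmod +x /usr/local/bin/chromedriver \
 && rm chromedriver_linux64.zip

# Maven, to build and run the TestNG suite
RUN apt-get install -y maven

# AWS CLI, just another utility in our context
RUN apt-get install -y python-pip && pip install awscli

# Run the test suite by default when the container starts
WORKDIR /tests
CMD ["mvn", "test"]

Changing the operating system, the browser channel, or the JDK is just a matter of editing this file and rebuilding the image.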

With our client running in Docker, running the environment of our choosing, and driving a headless browser, it is easy to see that we can run any tests we want and collect any results we need. We’ve effectively offloaded the entire testing environment to the cloud, driving the same browser software that our clients will use in production, and we’ve not had to alter our previous approach of using TestNG, Selenium, and Java, or give up any control of our test scripts. We are not forced to rewrite anything for a different environment, nor do we have to split our tests based on the common scenario where “this tool works best for this kind of testing, that tool for the other kind.” And we know that load can be simulated by standing up more of these test clients (though there are often better approaches, which fit just as well into our scenario).

A Typical Test

A typical test might be to fill out a form on the website with some test data. A Java snippet that we use in Selenium/TestNG for testing our own Contact Us form at www.sdpartners.com is:

 

// Get the url
theDriver.get( targetUrl );

// Populate the Form
WebElement emailElement = theDriver.findElement( By.id( "emailField" ));
emailElement.sendKeys( "noreply@sdpartners.com" );

WebElement subjectElement = theDriver.findElement( By.id( "subjectField" ));
subjectElement.sendKeys( "Selenium Test - contactUsCommentTest (" + targetUrl + ")" );

WebElement commentElement = theDriver.findElement( By.id( "commentField" ));
String markerString = "Comment Marker: " + Long.toString( marker );
commentElement.sendKeys( markerString );

// Submit the form
commentElement.submit();

In this code, “targetUrl” is of course the URL we want to test, and “marker” is a preset, unique numeric value that we pass into the test so that we can track this particular submission. What exactly does that buy us?

Leveraging Java

Here is where the flexibility of a higher-level, fully featured language such as Java makes itself useful in testing. To take a step back, it is probably fair to say that Java as a testing language is somewhat overkill; simpler languages with less boilerplate are often easier to work with.

But our form test is clearly submitting data to a persistent store somewhere. Since our test cases are written in Java, we can rest pretty comfortably knowing that Java will almost assuredly have some way to connect to that store and verify that “marker” and the other data we submitted landed where, when, and how we expected, and that it can do so in a way that is totally independent of the submission portion of the test. This is a rock-solid way of verifying that our test worked, or, what is often harder and probably more important, verifying that it did not. We can largely rule out false positives and missed negatives soon after we’ve written and perfected the test case itself. And we can be fairly confident that even if our persistent store changes completely, the test case will not change very much; we just need to assure connectivity. A less prominent testing language might have made the test case faster to write, but if we switch to a new backend technology in the future, will we be stuck waiting for a driver to be written? With Java, probably not.
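As a concrete illustration, a back-end confirmation of the marker might look something like the following sketch. The table name, column name, JDBC URL, and credentials are all hypothetical, and the real check depends on your store; the point is only that plain JDBC plus a TestNG assertion is enough to confirm, independently of the browser, that the submission landed:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.testng.Assert;

public class CommentStoreCheck {

    // Hypothetical connection details: substitute your own store, schema, and credentials.
    private static final String JDBC_URL = "jdbc:mysql://test-db.example.com:3306/contactus";

    public static void assertCommentStored( long marker ) throws Exception {
        String markerString = "Comment Marker: " + Long.toString( marker );

        try ( Connection conn = DriverManager.getConnection( JDBC_URL, "testuser", "testpass" );
              PreparedStatement stmt = conn.prepareStatement(
                  "SELECT COUNT(*) FROM comments WHERE comment_text = ?" )) {

            stmt.setString( 1, markerString );

            try ( ResultSet rs = stmt.executeQuery() ) {
                rs.next();
                // Exactly one stored row should carry our unique marker.
                Assert.assertEquals( rs.getInt( 1 ), 1,
                    "Expected exactly one stored comment for marker " + marker );
            }
        }
    }
}

The appropriate JDBC driver simply needs to be on the test classpath; because this check connects to the store directly, it passes or fails independently of the browser-side submission.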

This is one of the reasons test cases are written in Java in the first place. And with our Docker-based approach, there is no reason to change anything in your existing test cases. It’s usually just a matter of setting up the right client environment via Docker.

One aside about AWS (and other cloud environments): you usually cannot assume that your test clients will be virtualized into the same network cloud (AWS calls this a VPC, for Virtual Private Cloud) as your web infrastructure, and in fact this is better, because your real clients will not be either. But it does make it difficult to run direct confirmations from your test client (for example, immediately after submitting the form, hitting the database independently to see if the data got where it was supposed to go). Ideally such checks should not go through your web infrastructure, since they are not part of the behavior of the site. But how do we connect to our backend from a test client that is not on the same network, without creating a security hole? Truthfully, there are no ideal solutions, but there is a pretty good bootstrap approach with AWS, which may be the subject of another short article in the future. Drop us a line if you are interested in hearing more about it. AWS is also working on better ways of doing this.

Drawing Conclusions

Hopefully from the above, it’s easy to see that even with a totally different Docker stack, the overall testing strategy remains the same. We merely change the Dockerfile, rerun our tests, and assess any failures. The approach is totally adaptable to the testing environment you choose, and to whatever best fits your firm’s needs.

Furthermore, for load testing, we can stand up any number of test clients and run them simultaneously. Of course, most testing frameworks, including Selenium, provide better ways to orchestrate load tests, but those are beyond the scope of this article. Still, it’s obvious that we can spin up a lot of test clients quickly when the client is a Docker image.

We are interested in hearing more about how people are adopting headless testing, and cloud-based testing in general. Please leave a comment or drop us a line if you have a technique to share, or if you are interested in getting assistance with setting up an environment of your own.
