If a feature fails to load on a live site, does it make a sound?
It is HuffPost software engineer Anil Kadimisetty's job to make sure that it does.
Anil addressed more than 40 Meetup attendees on the sixth floor of AOL headquarters in New York on Wednesday night, explaining how engineers can verify that users are experiencing their products the way they should.
Anil, who came to HuffPost five months ago from Squarespace, has been working in test automation for almost 10 years. He described the "manual boring stuff" that used to be necessary to test software, and the audience audibly commiserated.
Automated tests act like a "proxy user," executing the functions of a web application the way a human customer would. Each run produces a list of errors: the specific places where the code failed, along with screenshots of the browser at the moment of each failure. Software engineers can then fix these errors before a real customer finds them.
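To make the idea concrete, here is a minimal sketch (not HuffPost's actual code) of how such a proxy-user run works: each check simulates one user action, and any failure is recorded with enough context, the check name, the failure message and a screenshot path, for an engineer to reproduce it. The check names and the screenshot path scheme are hypothetical.

```python
def run_checks(checks):
    """Run each named check; return a list of error records."""
    errors = []
    for name, check in checks:
        try:
            check()
        except AssertionError as exc:
            errors.append({
                "check": name,
                "message": str(exc),
                # In a real run, the harness would save a browser
                # screenshot here; this path is only a placeholder.
                "screenshot": "screenshots/%s.png" % name,
            })
    return errors

# Two toy checks standing in for real user actions.
def homepage_loads():
    assert True  # e.g. assert the page title is present

def comment_box_renders():
    assert False, "comment box missing"  # simulated failure

errors = run_checks([("homepage_loads", homepage_loads),
                     ("comment_box_renders", comment_box_renders)])
```

The report a real harness emails out is essentially this list of records, one entry per broken user action.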
The Huffington Post website might seem like an impossible product to test.
Anil, an engineer for HuffPost Live, runs tests against the HuffPost Live production server, but also against a controlled beta server (a beta site has the same layout and features as the live site but dummy content, which makes it a safe place to experiment with new code).
But even on a beta site, dynamic elements are still very difficult to test, particularly when images and Flash videos are involved.
As an engineer for HuffPost Live, Anil found a temporary way around this problem: he runs text comparisons, not image comparisons. Image comparisons are incredibly tedious; they involve setting coordinate constraints on where an image sits on the screen, and those coordinates differ depending on the device being used to view it (a waste of bandwidth and a headache to write). Instead of checking whether the correct image is appearing, Anil's tests look for the titles and descriptions associated with the image.
Below is an illustration of the kind of questions a test would run through to make sure a video segment (in this case "Budget Cut Conundrum") has transitioned successfully on the air.
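In code, that kind of metadata check might look like the following sketch. Only the segment title "Budget Cut Conundrum" comes from the talk; the page text and description here are dummy values, and the function is an illustration of the approach, not Anil's actual test.

```python
def segment_metadata_present(page_text, title, description):
    """Return True if both the title and description appear in the page text.

    Text matching is device-independent, unlike pixel coordinates,
    which is the whole point of testing this way.
    """
    return title in page_text and description in page_text

# Dummy page text standing in for what a browser driver would scrape.
page = """
    HuffPost Live -- Now Playing
    Budget Cut Conundrum
    Our panel debates where the axe should fall.
"""

ok = segment_metadata_present(page, "Budget Cut Conundrum",
                              "Our panel debates where the axe should fall.")
```

If the titles and descriptions are wrong or missing, the segment almost certainly failed to transition, and the test flags it without ever comparing a single pixel.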
Simultaneously, other tests perform a search in the search box (Anil uses the test word "Obama" because there should always be new results), write a comment in the comments section and browse the archives. All of this is executed on Sauce Labs, which has the hardware to run all of these tests.
This method is also a waste of computing power.
"Right now, there is currently no one framework that properly addresses our kind of content," he said.
What Anil proposed at the Meetup is an integration of two frameworks: Selenium and Sikuli. Selenium is now an industry standard for automating browsers, while Sikuli is a newcomer still being developed at the University of Colorado Boulder.
Here is what each framework brings to the table. Sikuli can spot changes to:
- Templates (in the CSS)
- Slideshow controls
- Drag and drop from desktop to the website
- Text color
Selenium can spot problems with everything else.
To know with 100% certainty that the correct video is playing on Live, or that the right photo appears on the front page next to its corresponding ALL CAPS HEADLINE in the splash, we need Sikuli's image-based framework. To know our articles are loading and our comments are populating, we should keep using Selenium.
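The hybrid idea boils down to routing each kind of check to the framework that handles it best. The sketch below follows the talk's split, visual checks go to Sikuli, DOM and text checks go to Selenium, but the check names and the mapping itself are illustrative, not a published HuffPost API.

```python
# Check categories Sikuli handles, per the list above: templates,
# slideshow controls, drag-and-drop, text color -- plus the visual
# cases called out here (video frames, splash images).
VISUAL_CHECKS = {"template", "slideshow_controls", "drag_and_drop",
                 "text_color", "video_frame", "splash_image"}

def framework_for(check):
    """Route a check to the framework best suited to run it."""
    return "sikuli" if check in VISUAL_CHECKS else "selenium"

assignments = {c: framework_for(c)
               for c in ["video_frame", "article_load",
                         "comments", "text_color"]}
```

A runner built on this dispatch could feed each check to a Sikuli script or a Selenium driver without the test author caring which framework does the work.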
Anil is close to implementing a hybrid framework to work for all of the HuffPost Media Group sites.
"The way I think automated tests should be: check out code, run maven test. Two steps," he said. "All basic actions are abstracted and reused."
Translation: the tools we use should produce code flexible enough to deal with the ever-evolving products they test.
And where better to test code against fast-changing, multi-platform content?
Anil's slides from the Meetup are available on his personal blog.