Software Testing Pitfalls

In this blog post I'd like to discuss some of the software testing pitfalls I have encountered over the last few years. Some of them relate to software testing in general, others specifically to test automation (because that is my main area of interest).

Certification
This might be a never-ending discussion. My personal opinion: certification is definitely no guarantee of a good tester, but it allows people to speak a common language. Furthermore, it gives you some proven practices and methods; how much of them you can use depends on the context. I have often seen people become testers with a background in development or operations, and it's funny to hear them talk about software testing. Meetings with people who lack a common vocabulary are not efficient at all!

Testing Techniques
I mostly work in environments where we have to test a web application. I often hear testers say that they don't see the value of applying testing techniques, because it's "just a web application". I find this really odd: why would you relate the use of testing techniques to the nature of the application under test (the test object)? It makes more sense to relate it to the business value of the application under test. You have to test more when the business value is high, and testing techniques help you produce more efficient test cases.

Testware
More and more often you see less time being spent on the preparation and specification phases of software testing, mostly in teams that say they apply some form of Agile. I think you should always define some sort of test plan and test cases or test charters, regardless of the development methodology you are using. The level of detail depends on the context you are working in.

Test Environments
One of the most frequently heard problems in software testing is the lack of test environments, or of up-to-date test environments. Using modern tools we can create and distribute test environments on the fly. We also often see that data differs completely across environments. With those two problems it's hard to automate testing.

Development Testing
Recently I heard something funny. At a company that applies Agile, they said: unit testing is up to the developers, we shouldn't even look at it (to save time…).

Developers are not testers and (probably) never will be, so we need to help them write proper unit tests. Think of positive and negative cases, and even of testing all possible outcomes of a decision (also known as decision testing or condition determination testing). I think it should also be our task to periodically check the quality of the unit tests and verify that testing techniques are still being applied.
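To make this concrete, here is a minimal sketch in JUnit 4 of what such unit tests could look like. The business rule and all names are made up for illustration; the point is that the positive case and every remaining combination of conditions in the decision get their own check.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Hypothetical production rule with one compound decision:
// a customer gets a discount when the order total is at least 100 AND the customer is a member.
class DiscountRule {
    static boolean qualifiesForDiscount(double orderTotal, boolean isMember) {
        return orderTotal >= 100 && isMember;
    }
}

public class DiscountRuleTest {

    // Positive case: both conditions true, so the decision evaluates to true.
    @Test
    public void memberWithLargeOrderGetsDiscount() {
        assertTrue(DiscountRule.qualifiesForDiscount(150.0, true));
    }

    // Negative cases: the remaining condition combinations, so every outcome of the decision is exercised.
    @Test
    public void memberWithSmallOrderGetsNoDiscount() {
        assertFalse(DiscountRule.qualifiesForDiscount(50.0, true));
    }

    @Test
    public void nonMemberWithLargeOrderGetsNoDiscount() {
        assertFalse(DiscountRule.qualifiesForDiscount(150.0, false));
    }

    @Test
    public void nonMemberWithSmallOrderGetsNoDiscount() {
        assertFalse(DiscountRule.qualifiesForDiscount(50.0, false));
    }
}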

These are just a couple of pitfalls in software testing.


Functional Test Automation on a Massive Machine

Functional test automation is relatively slow and heavy in execution. Selenium scripts, for example, need to launch a browser and open the website, and test data needs to be present. We can speed up testing by running the test scripts in parallel on distributed machines. This speeds up the in-browser tests enormously and gives quick and accurate feedback.

During my time at Spil Games we decided to set up a Virtual Desktop Infrastructure (VDI) for both manual and automated testing. It's essentially a single blade with two 6-core Intel Xeon E5-2630 CPUs @ 2.3 GHz, 1 TB of solid-state drives (SSDs) and about 130 GB of memory. Performance had been an issue in the past, which is why we chose solid-state drives (the response of a virtual machine is 'almost' instant).

Manual testing: We had a pool of preconfigured operating systems and browser versions. The Virtual Desktop Infrastructure creates a virtual machine on demand every time somebody wants to access a specific configuration. It is built with VMware View, which allows graphics acceleration over the PCoIP (PC-over-IP) protocol, making it possible to test anything you want in a browser at close to optimal performance.

Automated testing: We had a dedicated pool of virtual machines with DHCP-reserved IP addresses, which we used to set up a Selenium Grid. The virtual machines ran Windows 7 with 3 GB of memory and all the required browsers installed. With the described setup, remote execution of the Selenium scripts is even faster than execution on the local machine.
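To give an idea of what remote execution looks like from the test code side, here is a minimal sketch using Selenium's RemoteWebDriver (Selenium 2/3 style API; the hub URL is a made-up example, not our actual grid address):

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridExample {
    public static void main(String[] args) throws Exception {
        // Ask the Selenium Grid hub for a Firefox session on one of the Windows 7 virtual machines.
        DesiredCapabilities capabilities = DesiredCapabilities.firefox();
        WebDriver driver = new RemoteWebDriver(
                new URL("http://grid-hub.example.local:4444/wd/hub"), capabilities);
        try {
            driver.get("http://www.example.com/");
            System.out.println("Page title: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}

The test script itself stays the same as for local execution; only the driver construction changes, which is why running an existing suite against the grid is mostly a configuration exercise.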

[Diagram: VDI Selenium Grid setup, created with http://www.gliffy.com/]

Alternatives:

  • Sauce Labs: Run automated Selenium tests in the cloud or manually test your site on any browser instantly. Videos, screenshots, and developer tools make debugging a snap.
  • TestingBot: Cross browser testing with Selenium Online. Automatically test your website on various browsers with Selenium.
  • BrowserStack: A cross-browser testing tool to test public websites and protected servers on a cloud infrastructure of desktop and mobile browsers.


Get started with Selenium WebDriver in Java and C#

As a test consultant I visit different clients. I have seen a lot of projects of various natures using different programming languages; mostly I see projects using PHP, Java or C#. As consultants we need to be very flexible, so we can work in all those different environments. This is actually the best part of working as a consultant: you develop a very broad knowledge.

So, I decided to put some very basic test automation projects on my personal GitHub, written in C# and Java. These projects demonstrate the use of a dependency management tool, Selenium WebDriver and the basics of the Page Object Model.

Java

Github repository: https://github.com/roydekleijn/webdriver_java_example

C#

Github repository: https://github.com/roydekleijn/webdriver_csharp_example

Remember, these projects show you the basic usage of Selenium WebDriver and may help you get started with test automation.
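As a taste of what the Page Object Model looks like in Java, here is a minimal sketch (the page and element ids are invented for illustration, not taken from the repositories). Locators and interactions live in the page classes, so the test scripts read as business steps and only the page objects change when the UI does.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object for a hypothetical login page.
class LoginPage {
    private final WebDriver driver;
    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By submit = By.id("login");

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Encapsulate the interaction; tests call loginAs() instead of touching locators directly.
    HomePage loginAs(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
        return new HomePage(driver);
    }
}

// Page object for the page you land on after logging in.
class HomePage {
    private final WebDriver driver;

    HomePage(WebDriver driver) {
        this.driver = driver;
    }

    String welcomeMessage() {
        return driver.findElement(By.id("welcome")).getText();
    }
}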


Do not abuse JMeter for -complex- Automated Functional Web Service Testing

More and more often, JMeter is used to create complex test suites that execute functional test scripts against a web service. It's perfectly possible with JMeter, but is it the most efficient way? It depends on the project scope and priorities, I think. As stated on its website, JMeter is designed to test performance.

“Apache JMeter may be used to test performance both on static and dynamic resources (files, Servlets, Perl scripts, Java Objects, Data Bases and Queries, FTP Servers and more). It can be used to simulate a heavy load on a server, network or object to test its strength or to analyze overall performance under different load types.” JMeter website

In my view JMeter is not the best tool for writing a bunch of functional test scripts for a web service (infrastructure). You will mostly end up with unmaintainable throw-away test automation scripts that took quite some effort to create. However, JMeter is one of the best performance testing tools.

You might want to perform pre-actions (insert data in a database) and post-actions (assert complex XML/JSON responses) while executing functional test scripts. For this, JMeter is not that handy or flexible. It is better to write your test scripts in a programming language; this allows you to do more complex things and create a maintainable test suite (using abstraction). You can apply programming principles (such as DRY (Don't Repeat Yourself) and KISS (Keep It Simple, Stupid!)) and you can create an abstraction layer (each significant piece of functionality in a program should be implemented in just one place in the source code).
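As a rough illustration of the difference, here is a minimal sketch of a functional web service check written as a JUnit test using the JDK's HttpClient (Java 11+). The endpoint and the expected payload are made up; in a real suite the request building and the assertions would move into reusable helper classes, which is exactly the abstraction layer mentioned above.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class CustomerServiceTest {

    // Hypothetical base URL of the web service under test.
    private static final String BASE_URL = "http://localhost:8080/api";

    @Test
    public void getCustomerReturnsExpectedPayload() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + "/customers/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Assertions on status code and body; a real suite would parse the JSON properly.
        assertEquals(200, response.statusCode());
        assertTrue("response should contain the customer id",
                response.body().contains("\"id\":42"));
    }
}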

I will make my 'RESTful Web Service Automation Testing Framework' publicly available on GitHub if I find some spare time (this will include a detailed example).


Functional testing over a proxy with BrowserMob Proxy

Last week we experienced a little outage on our websites (we have websites with a lot of dependencies on external systems and third-party content). It gave us the opportunity to take a closer look at our products while having a production issue. One thing we noticed is that we don't tell the end user that we are having issues. We learned that we have to build in a more resilient user experience for when problems occur.

We can simulate these kinds of failures by testing over a proxy. BrowserMob Proxy can be controlled through a REST interface and has some great capabilities, such as blacklisting and whitelisting certain URL patterns, simulating various bandwidth limits and latencies, and controlling DNS and request timeouts.

As I said, BrowserMob Proxy is controlled through REST calls. In order to perform those calls we have to install cURL.

Windows

  • Download cURL from http://curl.haxx.se/
  • Extract the compressed file
  • Put curl.exe in c:/Windows/ (to enable curl from the command prompt)

Linux

  • sudo apt-get install curl

Or the equivalent installation command for your Linux distribution.

Starting the proxy

In the first command prompt

  • navigate to the bin directory
  • Start the proxy by entering
    browsermob-proxy.bat -port 9090

    or the Linux equivalent

    ./browsermob-proxy -port 9090

In the second command prompt

  • Perform the following curl command to create a new proxy,
    curl -X POST http://localhost:9090/proxy
  • This will return a new proxy port, something like: {"port":9091}
  • You can create a new HAR to start recording data, like this:
    curl -X PUT -d 'initialPageRef=newHar' http://localhost:9090/proxy/9091/har

[Screenshot: Internet Options dialog]

Set the proxy in browser

  • Navigate to Control Panel -> Internet Options
  • Go to LAN settings in the Connections tab
  • Tick 'Use a proxy server for your LAN'
  • Fill in the address: localhost
  • Fill in the port: 9091 (the one returned from the curl command)
  • Click on OK twice

Limit the connection speed
Perform the following curl command in the command prompt:

curl -X PUT -d "downstreamKbps=50" http://localhost:9090/proxy/9091/limit

Now you can visit the website with a low connection speed.

Blacklist third-party content

Perform the following curl command in the command prompt:

curl -X PUT -d "regex=http://example.com/*.*&status=404" http://localhost:9090/proxy/9091/blacklist

Now you can visit the website while blacklisting some content.
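If you drive the browser with Selenium WebDriver instead of clicking around manually, you can point the automated browser at the same proxy port, so your scripts run through the blacklist and bandwidth settings configured above. A minimal sketch (Selenium 2/3 style API, Firefox as an example):

import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

public class ProxiedBrowserExample {
    public static void main(String[] args) {
        // Route browser traffic through the BrowserMob Proxy port created earlier (9091).
        Proxy proxy = new Proxy();
        proxy.setHttpProxy("localhost:9091");
        proxy.setSslProxy("localhost:9091");

        DesiredCapabilities capabilities = DesiredCapabilities.firefox();
        capabilities.setCapability(CapabilityType.PROXY, proxy);

        WebDriver driver = new FirefoxDriver(capabilities);
        try {
            driver.get("http://example.com/");
            System.out.println("Loaded: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}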

Reference
Check the BrowserMob Proxy readme (on Github) for all available API commands.


Performance testing of an AJAX web application

For my current assignment I was asked to test the performance of an AJAX-based web application, which was quite interesting. The customer connects different local authorities, so they can use the same solution for their services and operations. Hosting and development are done by two different parties, and they blame each other for the (bad) performance of the web application. The customer hired Polteq as an independent party to measure server response times as well as client-side rendering times.

Server response measurement
JMeter was the tool of choice to simulate all the requests towards the server and measure their response times. Besides the obvious issue of JMeter not rendering JavaScript, the tricky part was that the server responded differently for the same request about 6 out of 10 times. This was solved with JMeter's If Controllers. The next thing was making sure that all the relevant request headers and request parameters were present, inherited and reused. You can easily record all requests and responses with the Fiddler 2 proxy.

[Screenshot: capturing all HTTP traffic]

A requirement for this project was to make it as maintainable and transferable as possible, since the client is not too technical. This can be achieved by using the 'CSV Data Set Config' and 'HTTP Request Defaults' elements in JMeter.

One of the most important parts of a performance test is the reporting facility. Martijn de Vrieze suggested that I use jmeter-plugins, with which we can easily generate sophisticated reports, such as response times vs. threads, response times over time, response latencies over time and transactions per second.

Client-side rendering measurement
Maven, TestNG and Selenium WebDriver were the tools of choice to measure the client-side rendering times. I built a command line tool that executes test scripts based on the given command line arguments (browser, test script, runs). The tool executes the test scenario in the browser and stores measurements while running the scenario. Based on the measurements, a graph is created after execution.
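To give an idea of the kind of measurement involved, here is a minimal sketch that reads the browser's Navigation Timing API (window.performance.timing) through WebDriver's JavascriptExecutor. This is an assumption about one possible approach, not the exact implementation of the tool described above.

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RenderTimingExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://example.com/");

            // Pull timing marks that the browser records for the page load.
            JavascriptExecutor js = (JavascriptExecutor) driver;
            long navigationStart = (Long) js.executeScript(
                    "return window.performance.timing.navigationStart;");
            long loadEventEnd = (Long) js.executeScript(
                    "return window.performance.timing.loadEventEnd;");

            System.out.println("Page load took " + (loadEventEnd - navigationStart) + " ms");
        } finally {
            driver.quit();
        }
    }
}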

The fun thing was that I could put the server under load with the JMeter scripts and then measure the client-side impact with my WebDriver tool. The first test results showed a badly performing system and a lot of 404s.

I will put a sanitized version of the client-side measurement code on my public GitHub after finishing this project.


Testautomation, it couldn’t be more fun!

As a technical test consultant I visit many companies to support them with test automation. Every time I notice that there are still a lot of things to improve in this area, and I mean test automation at all levels, from planning to the specification and execution of tests. Companies seem to think that the step towards implementing a structured test automation framework is too big, because the return on investment is not immediately visible. On top of that, there are not many success stories yet, which only strengthens the hesitation. Furthermore, sometimes the illusion arises that test automation makes the work of testers superfluous.

However, the opposite is true… The main benefits and added value of test automation are described below.

Tool integration
Test automation has many benefits that contribute to better-quality software. In the first place, test automation indirectly forces the testing process to be at a fairly mature level: you need specified test cases that you can automate. If there are no test cases specified, test automation is still possible, but you will have to think about two things at once while automating test scripts (the desired behavior and 'how' to automate it). With a proper test automation implementation, all tools, such as specification tooling, the bug tracker, test execution tooling, etc., communicate and integrate with each other. Through this total integration a lot of manual administrative tasks are no longer required and you can easily relate the test results to the original requirements.

Shortens the feedback loop
Adding value to a product is done by releasing new functionality, and the frequency at which this happens is the heartbeat of a project. In larger organizations you often see that they can release only a few times a year. That could be a lot more! By implementing test automation you can execute the regression tests much faster, so new functionality can be released more often.

Consistent quality factor
One of the goals of test automation is that the tester is able to execute test scripts repeatedly without doing a lot of maintenance on them. It's good practice to start with the core functionality, because this functionality is the least subject to change. You can speak of a consistent quality factor when all automated test scripts give the same positive results every time.

Tester’s motivation
By implementing test automation, you can repeatedly execute the exact same checks. The power of a check is that the outcome is always binary; it is either right or wrong, and no human interpretation is involved in verifying the result. The automated checks take a lot of work off the functional tester's hands. Testers now have the opportunity to focus on other (more challenging and creative) testing techniques, such as exploratory testing (simultaneous learning, design and execution of tests). In addition, the functional tester can take up the challenge of learning a programming language, so that they can write the test scripts themselves.

Modern software architecture
Today's software architectures are ideally suited for test automation. Think of a service-oriented architecture where the business logic is separated from the application and the application integrates with one or more interfaces. These interfaces can be invoked by multiple applications, so it is imperative that they work well and that no regression occurs.

Broaden skills
Test automation done by a tester is only possible if they develop a broader skill set, so they can test applications from a more technical perspective. Especially in agile software development, due to the iterative process, it is increasingly important that tests are automated and that the team gets technical testers who can do that. Not doing test automation means falling behind… The work can no longer be done manually; testers need to become more technical!

I look forward to the day when organizations recognize the benefits of test automation and start working goal-oriented instead of tool-oriented.

Testautomation, let’s do it!

This article is based on an article originally published in Dutch on the Polteq website (see http://www.polteq.com/weblog/testautomatisering-leuker-kunnen-we-het-niet-maken/).


Performance Testing

Over the last few months I have been working on several performance testing projects. They all involved Commercial Off-The-Shelf (COTS) applications, mostly SharePoint. For all projects it was mandatory to measure both server-side and client-side performance. With server-side performance testing it is interesting to see how the server behaves under different load conditions; with client-side performance testing it is interesting to see how the application presents the content to the user. I rather like those projects because all kinds of expertise come together: XML, JSON, regular expressions, XPath, JMeter, Selenium WebDriver, and all the different monitoring tools.

During my journey to select the right tools, I came across the jmeter-plugins WebDriver Set. This plugin allows you to use the features of JMeter (test executor and reporting engine) as well as the features of Selenium WebDriver (controlling a browser). The major benefit is that you end up with the same kind of graphs and result tables for both. So, it was the perfect choice.

In the next few weeks I will post more in-depth instructions on how to create a performance test plan.


Use Sonar to Check the Quality of Test Automation Code

[Logo: SonarSource]

I have been working for more than a year now on a test automation project with thousands of lines of code. I started this project on my own, but during the year many more people got involved, each with their own coding style. We needed to guarantee the quality of the code written by all those people, so we introduced Sonar, which performs static code analysis and can find violations of standards. The analysis includes:

  • Coding standards;
  • Code duplication;
  • Code complexity;
  • Potential bugs;
  • Code comments;
  • Unit Test coverage;
  • And more…

Writing test automation code is like doing "normal" development, so you have to apply coding standards, patterns to avoid duplication and reduce complexity, and code comments to describe what each function does. Sometimes, when testing safety-critical systems, you even have to write unit tests for the test automation code itself.

Sonar gives you insight into all those areas. It became very clear to me that some test automation developers have bad or uncommon practices (see the screenshot of the dashboard below).

[Screenshot: Sonar dashboard]

Sonar gives you the ability to fix those bad habits.

Installing Sonar is fairly easy.

Analyzing your project is even easier and can be done in three ways:

  • Sonar Runner
  • Ant Task
  • Maven Goal

That said, I think the first week of this year was very effective: we have taken the automation code to a higher level.

