2018: The start of Testsmith.io

2018 started with an exciting new challenge! I will continue my career as a freelance Test Automation engineer, under the name Testsmith.

With this venture I would like to achieve a couple of things. For one, I will definitely take the opportunity to speak at conferences.


(repost) Using HTTP mocks when developing and testing

When developing applications, the following can happen to you: external services not being available, external services not completely finished (yet), external services not having a test environment, or no ability to test unhappy flows (such as functional errors or more technical errors).

So, you might want to test your application in isolation with limited dependencies on external services.

This is where Mockhub.io comes in. Mockhub.io allows you to create HTTP response messages using an intuitive form; no programming experience is required. Mockhub.io works with sophisticated matching criteria to decide what kind of response it should return, and any HTTP response can be configured in order to simulate faulty responses. Mocks and their logs can be suspended for faster performance.
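To illustrate the idea of matching criteria, here is a hypothetical sketch in plain Java (not Mockhub's actual implementation): a mock inspects the incoming request and picks a canned response based on method and path.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: pick a canned HTTP response based on simple
// matching criteria (method + path), the way an HTTP mock service might.
public class MockMatcher {

    // Maps "METHOD path" to a canned "status|body" response.
    private final Map<String, String> rules = new LinkedHashMap<>();

    public void when(String method, String path, int status, String body) {
        rules.put(method + " " + path, status + "|" + body);
    }

    // Returns the matched canned response, or a 404 fallback.
    public String respond(String method, String path) {
        return rules.getOrDefault(method + " " + path, "404|no matching mock");
    }

    public static void main(String[] args) {
        MockMatcher mock = new MockMatcher();
        mock.when("GET", "/users/42", 200, "{\"name\":\"jane\"}");
        mock.when("POST", "/users", 500, "simulated server error");

        System.out.println(mock.respond("GET", "/users/42"));   // 200|{"name":"jane"}
        System.out.println(mock.respond("POST", "/users"));     // 500|simulated server error
        System.out.println(mock.respond("DELETE", "/users/1")); // 404|no matching mock
    }
}
```

Because the error case is just another rule, simulating faulty responses is as easy as registering a 500.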


The tool is not the challenge…

but the process can be… Nowadays there are several modern and robust test automation tools out there, which can automate nearly anything. So, that should not be the real problem anymore. The problem lies in the process and in how people approach test automation. I have seen people automate very big end-to-end flows, which results in a lot of maintenance work. Another common habit is to automate everything through the user interface, which results in very long test cycles.

When doing test automation it’s very important to find the right balance between the types of tests. If you don’t find the right balance, you will definitely end up with high maintenance costs and very long test cycles.

Some projects have a limited budget and are therefore very focused on the output teams can deliver; how teams deliver becomes less important to the decision makers. Strange, isn’t it? Because this way of thinking doesn’t contribute to the long-term goals of your product.

Imagine the situation where you can deliver fast, but code quality is poor and test coverage is even worse. After some time the costs will rise enormously; I’m talking about the costs of maintaining and testing the system.

So, the underlying message of this brief blog post is that technically everything is possible, but think twice before you put the delivery of new features above the delivery of high-quality software. In no time you will create technical debt, which is more expensive in the long run.


Romanian Testing Conference 2017

This year I was invited to do a workshop of my own choice. It turned out to be an “Improve your Selenium WebDriver Testing” workshop. So, on the 10th of May I went to Romania to join the conference. My first impression of the Romanian people is that they are really friendly, open and honest, and luckily their English is also really good. Anyway, those were very good ingredients to start an interactive workshop.

My workshop

There were 32 attendees for this workshop. It was a little bit overwhelming (in a positive way), because normally I set a limit of max. 15. Surprising for me was that half of them were women. (That hadn’t happened so far at workshops I gave in the past, mainly in The Netherlands.) It went pretty well. I started with a theoretical overview of why and what you should automate. After that, we went into the specifics of Selenium WebDriver. All participants enjoyed the locator game, which is available here: http://locator-game.selenium-in-action.io/ . After lunch, we started to iteratively fix tests and implement the Page Object Model to achieve better maintainable tests.


On my second day of the conference, I was delighted to attend the presentations of the day. So, herewith a brief summary of the presentations I attended.

Passion wanted to leave me, but I convinced it to stay, by Santhosh Tuppad (Twitter: @santhoshst)

A presentation delivered with a lot of passion. He started his career with a few one-week jobs before he found his passion as a software tester. He said: ‘You need to find your passion, because passion doesn’t find you’. I think he is very right with this statement.

Testing the energetic consumption of software: why and how, by Paulo Matos (Twitter: )

Poorly developed applications consume more energy and can drain the batteries of your phone / iPad / smartwatch. Energy might be cheap in the Western world, but it isn’t in, for example, Africa or remote areas. Also, the batteries of mobile devices are very limited. So, it might be good to optimize your application with regard to energy consumption, especially when developing mobile applications; CPU/GPU cycles are the most expensive. Finally, he demonstrated some tooling you could use to visualize the energy consumption per application.

Succeeding as an introvert, by Elizabeth Zagroba (Twitter: @ezagroba)

A nice presentation about how to survive in a project/organization as an introvert. People around them should be aware of how to interact with introverts (they won’t keep asking questions to identify all possible risks up front). The funny thing was that I actually recognized a lot of the things she said.

Debugging your test team, by Keith Klain (Twitter: @KeithKlain)

One of the points he made was to fire all test managers who spend more than 25% of their time on non-testing activities. Like the typical spreadsheet managers 🙂


Test automation – the bitter truth, by Viktor Slavchev (Twitter: @Mr_Slavchev)

He nailed the points on why automating everything is impossible or inefficient; really spot on. He also pointed out that we need to change our definitions in order to set the correct expectations: “programmatic testing” instead of “test automation” (so managers don’t expect a huge cost saving and traditional testers don’t expect that their testing will be automated).

Independent tester – A game changer, by Uros Stanisic (Twitter: )

Organizations are requesting more and more T-shaped testers. He described what value a tester can add other than ‘just’ testing.

12 vs. 18, who says “we” cannot be “them”?, by Harry Girlea

This was an amazing talk from a 12-year-old boy standing in front of about 500 people. He pointed out that children start playing games early; nevertheless, they are never part of testing (because they are legally too young). A solution he presented was that kids could be paid in game points.


All in all, an awesome conference! I’m very eager to attend next year as well.



Tips to make your (web) application testable

This blog post will provide you with some tips to make your (web) applications more testable. This is an initial list of tips and can of course be extended. The idea is that you can use this list when you have discussions with your developers about testability.

  1. Unique identifiers
    Especially when doing UI testing, it’s important to have unique identifiers on the page. There are different approaches: assign an `id` to every element, or assign a unique identifier to each component and input element. Implementing `id`s for every element is costly and unnecessary. Implementing unique identifiers for every component is a better approach, because you can then search relatively within that component. Say you have a search component with an input field and a button:

    <section id="search">
     <input type="text" name="query">
     <button type="submit" name="search">Search</button>
    </section>

    Now you can construct CSS locators along these lines:

     section#search input[name='query']
     section#search button[name='search']

    Although it’s not needed, I’m in favor of defining the type of the element (input, button) in the locator.

    When you don’t have these unique identifiers, you will probably end up with locators that are very hard to maintain. A few examples of such locators:

     /html/body/div[2]/div/ul/li[2]/form/input
     //li[2]/input

    The first is too long and therefore unreadable. The latter is much shorter, but is tightly coupled to the second list item (li[2]).

  2. Separate environment
    Once, I was on a project where the test database was shared between multiple test environments. I believe this was done to save some costs, as we depended on a huge Oracle database. Cost-saving is fine, but it had some drawbacks:

    • Different testers/teams are manipulating the same data;
    • Different versions of software are processing the same data;
    • Different message-consumers are messing with the data.

    This was not very convenient and after some debate we found the budget to duplicate the environment (including database and all services). The result was an isolated environment for running our automated tests.

  3. Mock third-party services
    When testing in general, depending on data that is outside of your control can become a nightmare, because of the following:

    • Third-party services can decide to switch off their servers;
    • Third-party services can decide to clean the (test) database, so the data is no longer present;
    • The third-party dataset is very limited.

    You might decide to stub/mock the third-party services if you recognize one or more of the above. When stubbing/mocking third-party services you have full control over the data, and you can even easily simulate error responses, timeouts, etc.
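As a minimal sketch of such a stub (an illustrative example using only the JDK’s built-in HttpServer, not any particular mocking tool; the `/payments` endpoint is a made-up stand-in for a third-party service), you can spin up an in-process fake that always returns an error and point your application at it:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Hypothetical sketch: a tiny in-process stub for a third-party service
// that always returns HTTP 500, so the error-handling path can be tested
// without the real dependency.
public class ThirdPartyStub {

    public static int callStubbedService() throws Exception {
        // Port 0 lets the OS pick a free port.
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/payments", exchange -> {
            byte[] body = "simulated outage".getBytes();
            exchange.sendResponseHeaders(500, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        stub.start();
        try {
            URL url = new URL("http://localhost:" + stub.getAddress().getPort() + "/payments");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            return conn.getResponseCode(); // the application under test would see this 500
        } finally {
            stub.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Stubbed service returned: " + callStubbedService());
    }
}
```

Swapping the 500 for a canned success body, or adding a `Thread.sleep` in the handler to simulate a timeout, gives you the full range of unhappy flows on demand.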

It would be great to hear your tips, so I can add them to the list. Feel free to leave a reply (tip + argument, and in what situation you benefited from it).


Automating Jumbo App – Shopping Apple Pie

This Testing Showcase demonstrates the automation of the Jumbo App (supermarket) for Android. The Jumbo App is automated with Appium.


A “Gated” environment

A little bit of context: in my current project we work with 7 scrum teams on the same product. In 2-week sprints, we implement new features on a legacy codebase. We have both manual and automation testers.

With 7 teams committing new code, it’s quite a challenge not to end up with a repository of functionally broken code. Over the last 1.5 years, we have had quite some issues with people committing SQL errors, conflicting bean names and non-compiling code. This all results in a broken test environment, frustration and a lot of wasted time. To overcome this, we introduced a ‘Gate’. Once a team tries to merge their team branch with the sprint branch, the following actions take place:

Merge -> Build (compile/unit test) -> Deploy -> Functional Tests (APIs) -> Store Build Artifacts

Every step explained:
Merge: merge the sprint branch into the team branch, and the team branch into the sprint branch
Build: check that the code compiles and that the merged code passes our first safety net, the unit tests
Deploy: artifacts are deployed to a separate environment with its own database, microservices and even stubs
Functional tests: the deployed application is tested by our second safety net, which consists of end-user flows implemented at the API level (to ensure a fast feedback cycle).

If one step fails the remaining steps won’t be executed.

For example:
- If one of the unit tests fails, the application won’t get deployed or functionally tested, and artifacts won’t be stored
- If one of the functional tests fails, the artifacts won’t be stored
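For illustration, a gate like this could be sketched as a declarative Jenkins pipeline (a hedged sketch; the stage commands, Maven profile and artifact path are assumptions, not our actual configuration). A declarative pipeline stops at the first failing stage, which gives exactly the behavior described above:

```groovy
pipeline {
    agent any
    stages {
        // Each stage mirrors one step of the Gate; if a stage fails,
        // the remaining stages are skipped.
        stage('Merge')            { steps { sh './merge-team-and-sprint-branch.sh' } }
        stage('Build')            { steps { sh 'mvn clean verify' } } // compile + unit tests
        stage('Deploy')           { steps { sh './deploy-to-gate-environment.sh' } }
        stage('Functional Tests') { steps { sh 'mvn test -Papi-tests' } }
        stage('Store Artifacts')  { steps { archiveArtifacts artifacts: 'target/*.war' } }
    }
}
```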

It can be visualized using the Jenkins Build Pipeline plugin:


So, now we have introduced a safety net and there is an incentive to push functionally working code. The moment existing tests fail due to newly introduced application code, you are fully responsible for repairing the existing tests.

The Second Gate…
Now we are sure that all the code is merged and tested properly (at the API level). However, a bug can still be introduced in the user interface; therefore we have introduced a second Gate, which runs front-end tests. In fact, the following actions take place:

1. The accumulated integrations will be versioned as a release candidate.
2. The release candidate will be deployed to a separate environment.
3. Front-end tests will be run against the release candidate.

The process described above takes place overnight, because these are long-running tests. But in fact, it can happen at any time…

I can imagine that you are interested in a ‘Gated’ environment like this. I’m willing to help you with the implementation; feel free to contact me.


Automating NS Reisplanner App

This Testing Showcase demonstrates the automation of the NS Reisplanner App for Android. The NS Reisplanner App is automated with Appium.


Automating ParkMobile App

This Testing Showcase demonstrates the automation of the ParkMobile App for Android.

[screencast url=”https://screencast.com/t/MtIBUIegcr”]


Clean(er) Test Automation Code

This blog post tries to explain why it is important to really care about your test (automation) code and to write clean code. I’m mainly involved in test automation, so some of the examples given are related to that, but most relate to programming in general. Test automation code is often less exposed to a formal review process. Keep in mind: test automation is code and should be treated like code.

I hope this blog post will bring you one step closer to clean(er) code.

Test Class Naming
Defining a classname for unit testing is rather easy; it’s the name of the implementation class suffixed with Test. In terms of functional or integration testing, the classname should reflect the feature or functionality you are testing.

For example:

public class LoginTest { }

public class RegistrationTest { }

public class OrderTest { }

Test Method Naming
I have been on many projects, some Greenfield*, others continuing development on an existing codebase. Every project has its own naming convention*, or not ☺. The following can happen in projects without a naming convention:

public void test1() { }

public void test2() { }

// or:

public void testLogin() { }

Do you get the point? It’s all very descriptive, isn’t it?

Imagine you have to share this with a colleague or you have to present the results to non-tech colleagues, they have no clue what’s going on.

I very much like the naming convention introduced by Roy Osherove (http://osherove.com/blog/2005/4/3/naming-standards-for-unit-tests.html). The basic idea is to describe the tested method, the expected input or state, and the expected behavior in the name of the test method.

public void nonExistingCredentialsGiven_ShouldThrowException() { }


We can adopt this principle for feature testing as well.

public void nonExistingCredentialsGiven_ShouldShowErrorMessage() { }


By following this naming convention we make sure that the intent of the test is clear to everyone. (Of course, the implementation should reflect the method name.) Method names become a bit longer, but self-explanatory.

Naming of fields and local variables

Another thing that strikes me is the use of non-explicit field or variable names, like:

• i for loop counters
• page when applying the Page Object Model (everything is a page, lol)
• and so on…

We need to be more explicit if we want to create readable code.

Magic numbers
Magic numbers indicate the direct use of a number in your code. By doing this, your code becomes less readable and harder to maintain.

Example with magic numbers:

public class Username {

	private String username;

	public void setUsername(final String username) {
		if (username.length() > 10) {
			throw new IllegalArgumentException("username");
		}
		this.username = username;
	}
}


After refactoring:

public class Username {

	private static final int MAX_USERNAME_SIZE = 10;
	private String username;

	public void setUsername(final String username) {
		if (username.length() > MAX_USERNAME_SIZE) {
			throw new IllegalArgumentException("username");
		}
		this.username = username;
	}
}


The refactored example allows you to easily update the maximum size of a username, even if it’s used in different methods.

A general rule of thumb when applying any naming convention is to be very explicit.

* Greenfield: a project set up from scratch.
* Naming convention: an agreed set of rules.

Other Test Naming Conventions: https://dzone.com/articles/7-popular-unit-test-naming
