This Testing Showcase demonstrates the automation of the Jumbo App (a supermarket app) for Android, using Appium.
A little bit of context: in my current project we work with 7 scrum teams on the same product. In two-week sprints we implement new features on a legacy codebase. We have both manual and automation testers.
With 7 teams committing new code, it is easy to end up with a repository containing functionally broken code. Over the last 1.5 years we have had quite a few issues with people committing SQL errors, conflicting bean names, and non-compiling code. This all results in a broken test environment, frustration, and a lot of wasted time. To overcome this, we introduced a ‘Gate’. Once a team tries to merge their team branch with the sprint branch, the following actions take place:
Merge –> Build (compile/unit test) –> Deploy –> Functional Tests (API’s) –> Store Build Artifacts
Every step explained:
Merge: merge the sprint branch into the team branch and the team branch into the sprint branch
Build: check that the code compiles and that the merged code passes our first safety net, the unit tests
Deploy: the artifacts will be deployed to a separate environment with its own database, microservices, and even stubs
Functional tests: the deployed application will be tested by our second safety net, which consists of end-user flows implemented at the API level (to ensure a fast feedback cycle).
If one step fails, the remaining steps won’t be executed.
– If one of the unit tests fails, the application won’t be deployed or functionally tested, and the artifacts won’t be stored
– If one of the functional tests fails, the artifacts won’t be stored
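The fail-fast behaviour of the Gate can be sketched as follows (a minimal Python sketch, standing in for the actual Jenkins pipeline; the stage names mirror the steps above, but the stage functions are illustrative placeholders, not our real build scripts):

```python
# Illustrative sketch of the Gate's fail-fast behaviour: each stage runs
# only if all previous stages succeeded.

def run_gate(stages):
    """Run stages in order; stop at the first failure.

    `stages` is a list of (name, callable) pairs where the callable
    returns True on success. Returns the list of (name, result) pairs
    for the stages that actually ran."""
    results = []
    for name, stage in stages:
        ok = stage()
        results.append((name, ok))
        if not ok:
            # Fail fast: e.g. a failing unit test means no deploy,
            # no functional tests, and no stored artifacts.
            break
    return results

# Example: the functional tests fail, so artifacts are never stored.
pipeline = [
    ("merge",            lambda: True),
    ("build",            lambda: True),   # compile + unit tests
    ("deploy",           lambda: True),
    ("functional-tests", lambda: False),  # API-level end-user flows
    ("store-artifacts",  lambda: True),   # never reached in this example
]
results = run_gate(pipeline)
```

In this example `results` stops at the failing `functional-tests` stage; the `store-artifacts` stage never runs, just as described above.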
It can be visualized using the Jenkins Build Pipeline plugin:
So now we have introduced a safety net, and there is an incentive to push functionally working code. The moment existing tests fail due to newly introduced application code, you are fully responsible for repairing them.
The Second Gate…
Now we are sure that all the code merges and is tested properly (at the API level). However, a bug can still be introduced in the user interface, so we have added a second Gate, which runs front-end tests. The following actions take place:
1. The accumulated integrations will be versioned as a release candidate.
2. The release candidate will be deployed to a separate environment.
3. Front-end tests will be run against the release candidate.
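The three steps above can be sketched in the same style (again a Python sketch, not the real pipeline; the version scheme, environment name, and the deploy/test callables are all made up for the example):

```python
# Illustrative sketch of the second Gate: version the accumulated work as
# a release candidate, deploy it to its own environment, then run the
# front-end tests against that exact version.

def second_gate(build_number, deploy, run_frontend_tests):
    """Run the front-end Gate; returns (rc_version, tests_passed)."""
    rc_version = f"rc-{build_number}"        # 1. version as a release candidate
    deploy(rc_version, env="frontend-test")  # 2. deploy to a separate environment
    passed = run_frontend_tests(rc_version)  # 3. run front-end tests against it
    return rc_version, passed

# Example with stubbed deploy/test steps:
deployed = []
version, ok = second_gate(
    42,
    deploy=lambda rc, env: deployed.append((rc, env)),
    run_frontend_tests=lambda rc: True,
)
```

Versioning first and then testing that exact artifact means the build that passes the front-end tests is the same one that could later be promoted.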
The process described above takes place overnight, because these are long-running tests. In principle, though, it can run at any time…
I can imagine that you are interested in a ‘Gated’ environment like this. I’m happy to help you with the implementation if you contact me.