In this article, I'll cover the theory and practice of visual testing using Robot Framework and the WatchUI library.
Just as the name implies, visual testing is when we test our GUI application with visual tools.
In the automation realm, this means testing our GUI application using computer vision techniques.
The common theme of visual testing is taking a snapshot of the application prior to test execution; this snapshot becomes the benchmark, and on each subsequent test run the GUI is compared against it using image difference calculations.
This usually involves techniques like Optical Character Recognition (OCR) and image segmentation.
Testing HTML documents with text comparison alone can be tedious and time-consuming.
If a page in your application has dozens of text elements, the corresponding test case needs dozens of text assertion statements.
Maintaining this quickly becomes overwhelming, and test code readability suffers.
Text assertions aren't always sufficient either: if the text content on your website is correct but the elements are out of alignment, your test case will still pass, yet the whole page is a mess, with highly adverse effects on user experience.
By taking a screenshot and comparing against it with computer vision, you catch all possible changes with a single assertion.
This doesn't mean text-based assertions should be abandoned entirely, but they can be significantly reduced.
Always remember that GUI testing sits at the top of the testing pyramid and should take second place to unit testing, integration testing, and API testing.
Visual testing is part of GUI testing, and the goal of GUI testing is to make sure the data renders properly for the user.
Most of the validation should take place at the API level, not in the UI.
I covered Robot Framework in a previous article.
Robot Framework is a Python-based open-source framework for software acceptance testing.
With its keyword-driven approach and tabular syntax, it can be used to write acceptance tests quickly, and it offers great readability both in the test code and in the final reports.
It offers a rich ecosystem with many tools readily available for multiple testing targets, from web testing to API testing to databases, mobile, or IoT.
WatchUI is a visual testing library for Robot Framework developed by Tesena.
Powered by Tesseract-OCR, its keywords allow the automation developer to take a screenshot of the application and diff it against a baseline image, with an assertion that enforces a minimum threshold of acceptable similarity.
It integrates easily with your existing Robot Framework scripts running Selenium or Playwright, and you can promote those tests to visual tests with only 2 lines of code 🙂
For more about browser testing tools, read my article here.
First, let's install all of our dependencies from pip:
pip install
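The package list in the original command is cut off; for a Selenium-based setup like the one in this article, the install would typically look like the following (package names as published on PyPI — double-check against the official docs if anything fails):

```shell
# Assumed package set for this tutorial: Robot Framework itself,
# the Selenium keyword library, and the WatchUI visual-testing library.
pip install robotframework robotframework-seleniumlibrary WatchUI
```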
With our dependencies installed, we next need to install Tesseract on our operating system.
You can read about installing Tesseract in the official WatchUI docs here.
Let's start by writing a basic web test case with Selenium and store it as test.robot:
To demonstrate how much visual testing can help us, I chose the epoch converter website (for a reason, you'll see :)).
The test uses Selenium to navigate the browser to the epoch converter main page, then waits until 3 elements are ready:
- A div container.
- The epoch time text field.
- The epoch-to-human-time conversion button.
Once all 3 of them are enabled, the inner text of the button is collected and we assert that the text equals "Timestamp to Human date".
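A sketch of what test.robot could look like, following the steps above. The XPath locators and the URL here are placeholders of my own (the article's real locators were extracted with SelectorHub), so treat them as illustrative only:

```robotframework
*** Settings ***
Library    SeleniumLibrary

*** Variables ***
${URL}         https://www.epochconverter.com
# Placeholder locators -- substitute the real XPaths extracted with SelectorHub.
${CONTAINER}   xpath=//div[@id='ec']
${EPOCH_BOX}   xpath=//input[@id='ts']
${CONV_BTN}    xpath=//button[@id='timestamp']

*** Test Cases ***
Epoch Converter Main Page Renders
    Open Browser    ${URL}    chrome
    # Wait for all 3 elements before asserting anything.
    Wait Until Element Is Visible    ${CONTAINER}
    Wait Until Element Is Visible    ${EPOCH_BOX}
    Wait Until Element Is Enabled    ${CONV_BTN}
    ${text}=    Get Text    ${CONV_BTN}
    Should Be Equal    ${text}    Timestamp to Human date
    [Teardown]    Close Browser
```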
Note — all the XPaths were extracted from the website with ease using SelectorHub, go check it out!
Let's run this:
robot -d results test.robot
And check out the report:
Great, the test passed.
Let's take a look at the epoch converter main page:
As you can see, there are many controls and widgets — about 20 different controls — and every one of them could be tested.
Writing explicit assertions for each control would be extremely time-consuming, difficult to maintain, and would clutter the test code.
So we've reached the point where we can upgrade our test into a visual test using tesseract-ocr and WatchUI 🙂
As promised, we can promote this test to a visual test by adding only 2 lines of code 🙂
First, we need to take a baseline image of our website.
I took a screenshot and saved it in the images folder as epoch.png.
We need to import the WatchUI library in the ***Settings*** section; this requires a full path to the Tesseract executable:
Library WatchUI tesseract_path=C:/Program Files/Tesseract-OCR/tesseract.exe
Now let's add the Compare Screen keyword from the WatchUI library to diff the screen against the baseline image:
Compare Screen    ./images/epoch.png    save_folder=${EXECDIR}/outputs    ssim=0.99
We're comparing the images and allowing a difference no greater than 1% (in other words, we expect at least 99% similarity).
This is the final script:
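Putting everything together, the final script might look like the following sketch. The locators and the URL are illustrative assumptions, and the tesseract_path should match your own installation:

```robotframework
*** Settings ***
Library    SeleniumLibrary
Library    WatchUI    tesseract_path=C:/Program Files/Tesseract-OCR/tesseract.exe

*** Variables ***
${URL}         https://www.epochconverter.com
# Placeholder locators -- substitute the real XPaths extracted with SelectorHub.
${CONTAINER}   xpath=//div[@id='ec']
${EPOCH_BOX}   xpath=//input[@id='ts']
${CONV_BTN}    xpath=//button[@id='timestamp']

*** Test Cases ***
Epoch Converter Visual Test
    Open Browser    ${URL}    chrome
    Wait Until Element Is Visible    ${CONTAINER}
    Wait Until Element Is Visible    ${EPOCH_BOX}
    Wait Until Element Is Enabled    ${CONV_BTN}
    ${text}=    Get Text    ${CONV_BTN}
    Should Be Equal    ${text}    Timestamp to Human date
    # The second of the 2 promised lines (the first is the WatchUI import):
    # screenshot the page and diff it against the baseline image.
    Compare Screen    ./images/epoch.png    save_folder=${EXECDIR}/outputs    ssim=0.99
    [Teardown]    Close Browser
```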
Now let's run this test:
robot -d results test.robot
And check out the report:
The test passed; let's examine the outputs of the Compare Screen keyword:
As you can see, the OCR mechanism detected all the differences between the baseline image and the current screen.
This test passed because the difference was lower than 1% (i.e., the similarity was higher than 99%).
Let's run this test again, but this time asserting 100% similarity.
This is easily done by changing the ssim option in the Compare Screen keyword:
Compare Screen    ./images/epoch.png    save_folder=${EXECDIR}/outputs    ssim=1.0
robot -d results test.robot
And check out the report:
We can see that the test failed, and the diff resulted in ~99.4% similarity, or ~0.6% difference.
The primary limitation is timing: the test can fail if the comparison is performed too early, while the page is still loading.
This can be overcome with waits, and sometimes even with fixed delays, but fixed delays are generally considered test smells.
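For example, the comparison can be guarded with an explicit wait instead of a fixed sleep. The keyword names come from SeleniumLibrary and WatchUI; the locator and timeout are placeholders:

```robotframework
*** Test Cases ***
Visual Check After Page Settles
    # Wait for a late-loading element before diffing, rather than sleeping.
    Wait Until Element Is Visible    xpath=//div[@id='ec']    timeout=10s
    Compare Screen    ./images/epoch.png    save_folder=${EXECDIR}/outputs    ssim=0.99
```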
Another limitation is that images must have the same dimensions in order to be compared.
Threshold tuning isn't always intuitive, especially when dynamic fields like dates are involved.
Setting the threshold too high can cause flakiness, while setting it too low leads to pseudo test cases that pass without really checking anything.
Scrollable pages can also be problematic, as scrolling can introduce artificial differences between the baseline and the current screen.
Storage can become another challenge as the number of your test cases grows.
Storing images (and any other binary data) in a git repository is bad practice, as git is optimized for analyzing text data, and git operations such as merge, rebase, cherry-pick, etc. are based on textual analysis.
Visual testing is awesome for adding robustness to your GUI tests.
Remember that GUI tests should be considered lowest in priority, and visual tests are part of that.
Due to some of the limitations around visual testing, the analysis of which tests are appropriate for visual diffing should be done with caution.