Automating Quality

My work for Proptech in 2024

In November 2023 I joined a team formed to create a new product. I was solely responsible for quality, and the excellent software developers left me with minimal manual testing, which allowed me to focus on automation. I was given the freedom to choose tools and methods, so my role became self-directed: I often didn't work from Jira, or I created my own tickets.

The product included a web UI plus an API for third-party integration. We passed information to our integration partner, who generated quotes that we displayed to the user.

Our company used GitHub, so I leaned heavily on GitHub Actions with cron schedules. My logic was that, because the codebase was in such a rapid state of flux, rather than forcing developers to wait for tests on every commit I ran the suites daily and monitored the results.
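
Each suite had its own scheduled workflow, roughly along these lines (the name, time and paths here are illustrative, not the real repo's):

```yaml
name: daily-public-api-tests
on:
  schedule:
    - cron: '10 8 * * *'   # 08:10 UTC daily; each suite gets its own offset
  workflow_dispatch:        # allow manual re-runs when investigating failures

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: dotnet test tests/PublicApi.Tests
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: public-api-log
          path: public-api-run.log
```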

Why am I blogging this?
The team developed a fairly complete solution in a short period of time, leaving me to build automation alongside that process. I've been in teams where automation was either an afterthought or poorly implemented. The latter simply wastes time: uninformative tests create unnecessary investigations and a "cry wolf" effect where results are ignored. So I wanted to provide an example of what could be achieved.

My test suites run daily between 8am and 9am, staggered by 10 minutes or so to stop them interfering with each other. Results are posted to Slack, and I would investigate any failures on starting work, raising issues at our 10am stand-up.

Image 1
Image 2

Integration tests

First to run every day: a "pulse" to our integration partner to ensure their service was up and running with no breaking changes. A few simple GETs checked for known records.
Written in C#, using RestSharp as the HTTP client and xUnit for assertions.
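
A minimal sketch of one such check (the base URL, endpoint, token and record id are placeholders, not our partner's real API):

```csharp
using System.Threading.Tasks;
using RestSharp;
using Xunit;

// Illustrative pulse check against a placeholder partner API.
public class PartnerPulseTests
{
    private readonly RestClient _client = new("https://partner.example.com/api");

    [Fact]
    public async Task KnownRecord_IsStillReturned()
    {
        var request = new RestRequest("quotes/12345")
            .AddHeader("Authorization", "Bearer <token>");

        var response = await _client.ExecuteAsync(request);

        // A failing pulse usually means the partner service is down or has changed shape.
        Assert.True(response.IsSuccessful, $"Partner returned {(int)response.StatusCode}");
        Assert.Contains("12345", response.Content);
    }
}
```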

Image 3

Public API

Next up, our Public API. Every endpoint was tested with each applicable response, i.e. Success / Bad Request / Not Found / Unauthorised, etc. Again in C# with RestSharp and xUnit. The log file clearly listed each endpoint with its expected and actual responses, and failures were repeated in a summary section, which made problems quick to identify. A public StreamWriter was created in the xUnit fixture and made available to all [Fact]s, as sketched below.
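
A rough sketch of that shared log writer pattern, with a placeholder endpoint and file name:

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;
using RestSharp;
using Xunit;

// Illustrative fixture: one StreamWriter shared by every [Fact] in the class.
public class LogFixture : IDisposable
{
    public StreamWriter Log { get; } = new("public-api-run.log") { AutoFlush = true };

    public void Dispose() => Log.Dispose();
}

public class PublicApiTests : IClassFixture<LogFixture>
{
    private readonly RestClient _client = new("https://api.example.com");
    private readonly StreamWriter _log;

    public PublicApiTests(LogFixture fixture) => _log = fixture.Log;

    [Fact]
    public async Task GetProperty_UnknownId_ReturnsNotFound()
    {
        var response = await _client.ExecuteAsync(new RestRequest("v1/properties/does-not-exist"));

        // Each test writes its endpoint plus expected vs actual response to the shared log.
        _log.WriteLine($"GET /v1/properties/does-not-exist expected 404, actual {(int)response.StatusCode}");
        Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
    }
}
```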

Image 4

Web API

Next, the Web API, whose purpose is to serve our UI.
Same approach, coverage and implementation as the Public API.

Image 5

Web UI

Then UI automation, implemented in Playwright with TypeScript. Our company was using Playwright elsewhere and I had heard good things about it. Multiple scenarios with alternative options were covered, each verifying the expected UI elements.
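
A cut-down sketch of one scenario (the URL, labels and test id are placeholders, not the real product's):

```typescript
import { test, expect } from '@playwright/test';

// Illustrative scenario: drive one journey, then verify the expected elements.
test('quote journey shows a quote', async ({ page }) => {
  await page.goto('https://app.example.com/quote');

  await page.getByLabel('Postcode').fill('SW1A 1AA');
  await page.getByRole('button', { name: 'Get quote' }).click();

  // Each scenario finishes by verifying the UI elements expected for that path.
  await expect(page.getByRole('heading', { name: 'Your quote' })).toBeVisible();
  await expect(page.getByTestId('quote-amount')).toContainText('£');
});
```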

Image 6

Admin portal

Our admin portal was also tested with Playwright. Every page visited and verified for elements and data as appropriate.
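
The page-by-page checks looked roughly like this (the routes and headings are placeholders):

```typescript
import { test, expect } from '@playwright/test';

// Placeholder routes and headings; the real suite walks every admin page.
const adminPages = [
  { path: '/admin/users', heading: 'Users' },
  { path: '/admin/quotes', heading: 'Quotes' },
  { path: '/admin/partners', heading: 'Partners' },
];

for (const { path, heading } of adminPages) {
  test(`admin ${path} renders its data`, async ({ page }) => {
    await page.goto(`https://admin.example.com${path}`);
    await expect(page.getByRole('heading', { name: heading })).toBeVisible();
    // Plus page-specific checks on tables, filters and record counts as appropriate.
  });
}
```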

Image 7

Email tests

Emails were sent to Mailinator and retrieved via its API; the HTML was then rendered in a browser window and screenshot-matched using Playwright's visual comparisons (pixelmatch under the hood). This had the added advantage of catching alignment, font and image changes. One idiosyncrasy was that the image generated on GitHub would sometimes differ from that produced locally (mainly due to font rendering), so I took the screenshot from the failed run and uploaded it as the expected result, which worked well.
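
A sketch of the idea; the Mailinator endpoints and JSON fields below are assumptions that may need adjusting, the point is the technique of fetching the message, rendering its HTML and screenshot-comparing:

```typescript
import { test, expect } from '@playwright/test';

// Assumed inbox name and Mailinator v2 endpoints; adjust to your account.
const API = 'https://api.mailinator.com/api/v2/domains/private/inboxes/qa-welcome';
const auth = { Authorization: process.env.MAILINATOR_TOKEN ?? '' };

test('welcome email renders as expected', async ({ page, request }) => {
  // Grab the newest message id from the inbox, then fetch the full message.
  const inbox = await (await request.get(API, { headers: auth })).json();
  const message = await (
    await request.get(`${API}/messages/${inbox.msgs[0].id}`, { headers: auth })
  ).json();

  // Render the HTML part in the browser and compare against the stored baseline.
  // toHaveScreenshot (pixelmatch) also flags font, image and alignment changes.
  const html = message.parts.find((p: any) => p.headers['content-type']?.includes('text/html')).body;
  await page.setContent(html);
  await expect(page).toHaveScreenshot('welcome-email.png');
});
```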

Anything else?

  • A test for integrator response duration. Results were logged to file with timestamps, and metrics were encapsulated in [] which allowed a reliable regex to extract the values. I then wrote a PowerShell tool that used the GitHub API to read artifacts, download selected logs, extract the metrics and export them to CSV (a rough sketch follows this list).
  • An "API tool" in PowerShell that provides a Windows GUI for all functions of our API. Testing is then much quicker than editing values in Postman JSON payloads.
  • Load testing an application with Playwright and Artillery running on AWS Fargate.
  • A bespoke webhook analyser which verified the requests and their payloads then posted the results to Slack.
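
The metric extraction looked roughly like this (repo name, log format and the [Nms] pattern are illustrative, not the real ones):

```powershell
# Illustrative only: repo, log layout and the "[123ms]" metric pattern are assumptions.
$headers = @{ Authorization = "Bearer $env:GITHUB_TOKEN" }

# List workflow artifacts via the GitHub API and download the newest log bundle.
$artifacts = Invoke-RestMethod -Uri 'https://api.github.com/repos/acme/quote-tests/actions/artifacts' -Headers $headers
Invoke-WebRequest -Uri $artifacts.artifacts[0].archive_download_url -Headers $headers -OutFile run.zip
Expand-Archive run.zip -DestinationPath run -Force

# Pull the timestamp and the bracketed metric out of each line, then export to CSV.
Select-String -Path 'run\*.log' -Pattern '^(?<time>\S+).*\[(?<ms>\d+)ms\]' |
    ForEach-Object {
        [pscustomobject]@{
            Timestamp  = $_.Matches[0].Groups['time'].Value
            DurationMs = [int]$_.Matches[0].Groups['ms'].Value
        }
    } |
    Export-Csv durations.csv -NoTypeInformation
```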