The first job that I worked as a QA/Software Tester was for a tiny company. There were 3 software developers at the time I joined, and I was their first tester. The product was a website that let users do some basic filing tasks, to keep track of their data a little more easily. It would remind them about things like their car insurance renewal dates.
Looking back on it, it was a reasonably straightforward system: little more than a website backed by a MySQL database.
Around 6 months into the job, my employers dropped the bombshell that I would be expected to set up automated tests to cover the website’s functionality. I wasn’t thrilled at the idea of this, but I went about researching ways of accomplishing the task I’d been set.
At that point in my life, I had always been interested in programming, but had never got past the “tinkering” phase. I would read a few tutorials for a language, set out with lofty goals of what I wanted to accomplish, and lose interest within a week or two. I had to translate that limited experience into something that would allow our company to run regression tests automatically all the way through our strict two-week sprints.
So where did I start out? I started with Ruby and Watir. Well, in reality Watir came first. I saw a few references to it in books I was reading at the time, and they had positive things to say about it. It ran on Ruby, and therefore Ruby was the language I started with. After a few days of trying it out, I was sold. Watir was simple and intuitive, and Ruby was extremely forgiving. Ruby has a reputation for being easy to start learning, and that is definitely true.
Within a month (or two) I had a framework set up, and almost every part of the site covered with at least a couple of tests. I use the word “framework” incredibly loosely here. My code was bad, its structure was bad, it was unreliable, but it was a start.
- I had no idea how to structure things.
- There were raw calls to Watir everywhere.
- Almost everything that wasn’t a test was in one huge file called basics.rb (this thing haunts my dreams).
- The browser object was global, which is a terrible idea if you’re ever going to want to run your tests in parallel.
- The tests relied on accounts in known states, and would break the moment anything unexpected happened.
- In an attempt to fix the above, branching code was written to handle various error states in accounts. This made it hard to know when things were genuinely going wrong, because the test code was quietly coping with problems on its own.
- So much more awfulness.
It was really bad, but… it got the job done. I would run it against every build, and things would inevitably fail and require attention, or a human “no this is actually fine, ignore it”. That was still significantly faster than trying to regression test functionality every time the developers released a new build, which was daily.
The main problem was that the automation effort was too difficult to maintain. A small change to the website would result in hours of altering element locators and logic to get the tests to stop ending in errors. With a little knowledge this is easy to avoid: implementing a Page Object Model (POM) and some basic abstraction solves it.
So that’s what I ended up doing. A class was made to represent each page, and basics.rb was split up so that all the code within it had a proper home. Each page object contained the code required to take different actions on a given page. The tests then instantiated the page objects and made calls to their methods, e.g.
What would have been:

```ruby
$browser.text_field(:id => "username").set("Goose")
$browser.text_field(:id => "password").set("Password1")
$browser.button(:id => "submit").click
```

became:

```ruby
login_page.fill_username_field("Goose")
login_page.fill_password_field("Password1")
login_page.click_login_button
```
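To make the idea concrete, here is a minimal sketch of what one of those page objects might have looked like. The method names come from the example above; everything else (the class layout, passing the browser in as an argument) is illustrative rather than the original code.

```ruby
# A hypothetical page object for the login page. In real use, `browser`
# would be a Watir::Browser instance.
class LoginPage
  def initialize(browser)
    # Taking the browser as an argument, instead of reaching for a global,
    # also avoids the parallel-run problem mentioned earlier.
    @browser = browser
  end

  def fill_username_field(username)
    @browser.text_field(:id => "username").set(username)
  end

  def fill_password_field(password)
    @browser.text_field(:id => "password").set(password)
  end

  def click_login_button
    @browser.button(:id => "submit").click
  end
end
```

The payoff is that tests only ever talk to the page object, so when a locator changes there is exactly one place to update.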
This was immediately a lot more maintainable. It still wasn’t great, but it was definitely an improvement. Lots of little improvements like this added up over time. The framework became more reliable and needed less of my time per sprint. New tests would be added to cover new functionality, but the older tests were less prone to causing errors and failures where there were none.
So yeah, my career in test automation started with being thrown in at the deep end: building something fairly terrible and gradually making it invaluable. By the end it was still awful, but we wouldn’t have been able to get all the testing in a sprint done without it.
As a little bit of context: it’s been 7 or 8 years since I started this journey, and I’m now at the point where I don’t write automated tests anymore. I set up test frameworks, and leave the test writing to other people in the department.
If there’s a lesson to be learned here, I’d say it’s that making mistakes is fine. I’m hoping I can fill this blog/site with tips on how to not be as bad at this as I was back then.