As a little experiment, I thought I’d collect the questions that popped up in my head during the session on automated testing at this year’s final Dutch Dynamics Community meeting, and humbly ask speaker Luc van Vugt to answer them on his blog. I realise that the talk’s focus was on getting the Microsoft built-in tests to run properly, but perhaps Luc would also be willing to share some of his experience with automated testing from ‘previous lives’? 🙂
1. First and foremost – how do you decide what (and how) to test?
When I first started writing automated tests, I found myself testing things that were so obvious that they probably never posed any risk to the stability of the application in the first place. Can you give some rules of thumb for what to focus our test effort on?
2. How does the need for automated testing affect development work?
You mentioned that testing NAV (ERP?) is different from testing most other systems, since practically everything goes through the database and there’s no easily available way to mock (simulate) this database interaction. Do the developers in your team have testability in mind when they are writing new features?
3. Using demo data as the basis for your test data
You mentioned that tests should ideally create (and clean up) their own data, returning the database to its pristine state after all the tests have run. In our experience, being overly strict about that costs time twice – once during test development, and again during each test run. How do you feel about isolating some of the data creation in a demo data creation tool, and running your tests in a database that already has that generated data on board?
4. Have you considered running chunks of tests in parallel?
I guess that could significantly reduce the execution time, right? And it becomes even more relevant when, for example, you want to do some form of gated check-in, where tests must pass before a changeset is accepted into your code repository?
Also, running in parallel forces you to make your tests fully independent of each other – as they should be.
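What I have in mind could be sketched like this (a Python toy, with hypothetical chunk names; in practice each chunk would run against its own NAV service tier or database copy so the runs cannot interfere):

```python
from concurrent.futures import ThreadPoolExecutor

def run_suite(name):
    """Stand-in for invoking the test runner for one chunk of tests,
    e.g. headlessly via PowerShell against a dedicated service tier."""
    return (name, "passed")

# Hypothetical, independent chunks of the full test suite.
chunks = ["sales-tests", "purchase-tests", "inventory-tests", "finance-tests"]

# Run the chunks concurrently; order of results matches the input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_suite, chunks))

# A gated check-in would only accept the changeset if every chunk passed.
assert all(status == "passed" for _, status in results)
```

The wall-clock gain is roughly the length of the longest chunk rather than the sum of all chunks, which is exactly what a gated check-in needs.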
5. How do you design new tests?
In my experience, designing your tests in a code editor leads to the worst results. I think it’s best to formalise your (existing, manual) tests, i.e. listing the steps and verifications, in a text editor, in plain English before converting them to code. Would you agree?
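The workflow I mean would look roughly like this: write the steps and verifications in plain English first, then translate them mechanically into code. A small Python illustration (the credit-limit rule and all names are invented for the example):

```python
# Plain-English design first, kept as comments above the test:
#
#   GIVEN a customer with a credit limit of 100
#   WHEN  an order of 150 is entered for that customer
#   THEN  the order is blocked

def check_credit(credit_limit, order_amount):
    """Hypothetical business rule under test."""
    return "blocked" if order_amount > credit_limit else "released"

def test_order_above_credit_limit_is_blocked():
    # GIVEN
    credit_limit = 100
    # WHEN
    status = check_credit(credit_limit, order_amount=150)
    # THEN
    assert status == "blocked"

test_order_above_credit_limit_is_blocked()
```

Writing the GIVEN/WHEN/THEN lines before any code keeps the test focused on behaviour rather than on whatever the editor makes easy to type.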
6. Most of our tests were (consciously) implemented as UI tests.
Only having access to fields that are visible from the GUI can be quite limiting – there is no straightforward way to get e.g. the Line No. from a Sales Line. Any advice on that (apart from using unit tests instead)?
7. The other day, you mentioned some strange differences between running the test suite from the Windows client and running it ‘headlessly’ from PowerShell.
Can you elaborate a little on that? Did you manage to solve that issue?