NAV Application Testing – the Story Continues

Following up on our recent blog posts/comments about testing, Luc van Vugt, Mark Brummel and I decided to get together and talk some more about testing best practices in NAV. Below are my notes and our conclusions.

What Data to Test With

  • As far as test data is concerned, tests should ideally be independent of each other, and independent of the data already present in the database.
  • However, (re)creating all data as part of your test scripts can get quite labour-intensive, and could slow down your test execution considerably.
  • Luc’s compromise, a single initialization codeunit that gets run before each test codeunit, is probably an excellent trade-off between test authoring effort and test decoupling (see the first sketch after this list).
  • In addition, using a stable, well-known set of base data that is already present in the database can save a lot of time (both during development and during test runs).
  • The CRONUS demo data may suffice in many cases; if not, consider using RapidStart or generating your own set of base test data. Keep this set as small as possible, e.g. by using only a handful of G/L accounts.
  • When generating your own test data, consider filling text and code fields with random strings of the defined length of the field (e.g. fill customer names with 50 characters); this tests for overflows in the application objects without any extra effort (see the second sketch after this list).
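As an illustration, here is a minimal sketch of a closely related pattern that Microsoft’s own test codeunits use: a local Initialize function called at the start of every test, guarded by a global IsInitialized boolean so the shared base data is only created once (CreateBaseTestData is a hypothetical helper):

    [Test]
    PROCEDURE TestPostingASalesOrder();
    BEGIN
      Initialize;
      // actual test steps go here
    END;

    LOCAL PROCEDURE Initialize();
    BEGIN
      // runs at the start of every test, but creates the shared data only once
      IF IsInitialized THEN
        EXIT;
      CreateBaseTestData; // hypothetical helper that sets up the well-known base data
      IsInitialized := TRUE;
      COMMIT; // lets the base data survive the rollback at the end of each test
    END;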
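And a sketch of max-length test data: PADSTR gives a deterministic fill of the field’s defined length, which is enough to flush out downstream overflows (for genuinely random content, the standard test library codeunit “Library - Utility” offers text-generation helpers):

    LOCAL PROCEDURE CreateCustomerWithMaxLengthName(VAR Customer : Record Customer);
    BEGIN
      Customer.INIT;
      Customer.INSERT(TRUE); // let the number series assign "No."
      // fill the name to its full defined length (50 characters)
      Customer.VALIDATE(Name, PADSTR('', MAXSTRLEN(Customer.Name), 'X'));
      Customer.MODIFY(TRUE);
    END;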

Unit Tests vs. Integration/System Testing

  • Often, the input of your system under test (SUT) is data in the database, and the same is true for the output (simply a consequence of an ERP solution that was not built with testability in mind?). This has certain consequences for the way we can test our applications.
  • Our conclusion was to use unit tests for any “pure functions”: code with few or no dependencies and side effects on the data, e.g. a function for turning some strings into a valid, localized address array (see the sketch after this list).
  • If you are testing logic that depends on, or affects, data in the database, use system testing. The same is true if the testability of your application logic’s building blocks leaves something to be desired.
  • In general, if a system test succeeds, it is safe to assume that the component parts (which would normally be tested separately with unit tests) are also correct.
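For illustration, a sketch of what a unit test of such a pure function could look like (AddressMgt and BuildAddress are hypothetical names; Assert is the standard test library codeunit, declared as a global):

    [Test]
    PROCEDURE TestAddressFormatting();
    VAR
      AddrArray : ARRAY [8] OF Text[50];
    BEGIN
      // pure function: the output depends only on the inputs, no database access
      AddressMgt.BuildAddress(AddrArray, 'Contoso', 'Main Street 1', '1234 AB', 'Metropolis');

      Assert.AreEqual('Contoso', AddrArray[1], 'Company name should be on the first address line');
      Assert.AreEqual('Main Street 1', AddrArray[2], 'Street should be on the second address line');
    END;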

Testing Through the User Interface (Typically Test Pages) vs. Testing Application Logic

  • UI testing is particularly useful for system testing (a sketch follows this list).
  • Pro: UI testing tests exactly what the user will experience (if somebody “accidentally” 😉 put some additional logic in the OnAction trigger, it will automatically become part of your system under test).
  • Pro: UI testing requires less maintenance. E.g. even if the logic called by a page action changes, it is likely that the page action itself will remain pretty much the same. A UI test would automatically test the new logic; an application logic test would need to be updated manually, but without any reminder to do so.
  • Pro: UI testing will still work even if you don’t have access to all of the source code, e.g. if Microsoft releases the base application as an extension somewhere in the future.
  • Pro: UI testing works against the sum of the installed extensions, and as such, allows you to test if your extension still behaves as expected when combined with other extensions.
  • Con: UI testing is significantly slower than C/AL testing. On the other hand, if you make your tests nicely independent of each other, you can parallelize them by dividing the test load between multiple servers, thus reducing the overall lead time.
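Here is a minimal sketch of a UI test driven through a test page, reusing the Initialize pattern and the Assert codeunit from the earlier sketches (CreateCustomer is a hypothetical helper; the page, field and action names follow the standard Customer Card):

    [Test]
    PROCEDURE TestBlockingACustomerFromTheCard();
    VAR
      Customer : Record Customer;
      CustomerCard : TestPage "Customer Card";
    BEGIN
      Initialize;
      CreateCustomer(Customer); // hypothetical helper

      // drive the scenario through the UI, exactly as a user would
      CustomerCard.OPENEDIT;
      CustomerCard.GOTORECORD(Customer);
      CustomerCard.Blocked.SETVALUE('All');
      CustomerCard.OK.INVOKE;

      // verify the result against the database
      Customer.GET(Customer."No.");
      Assert.AreEqual(Customer.Blocked::All, Customer.Blocked, 'Customer should be blocked');
    END;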

Testing: Who Does What?

  • Coming up with test scenarios is primarily a functional task; coding test scenarios obviously has a more technical nature.
  • Functional/domain specialists should own the system tests; technical specialists are the owners of the unit tests (since unit tests operate at a level of detail that functional specialists generally don’t need to be aware of).
  • For decoupling these two disciplines, perhaps we would need a shared language (DSL?) to express the scenarios in.
  • The vocabulary for this language could be reasonably small, since most tests consist primarily of:
    • opening a page;
    • setting fields on that page;
    • (optionally) invoking actions;
    • testing page field values; and
    • closing the page.
  • Ideally, the language would be convertible to step-by-step, human-readable test instructions (in Markdown format, of course :)) and to machine-readable C/SIDE test codeunits.
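To make this concrete, a scenario in such a language might read something like this (purely a sketch of a hypothetical syntax, mirroring the five verbs above):

    OPEN PAGE "Sales Order"
    SET "Sell-to Customer No." = "10000"
    SET "Posting Date" = WORKDATE
    INVOKE ACTION "Release"
    EXPECT "Status" = "Released"
    CLOSE PAGE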

Practical Test Script Matters

  • To cover all relevant scenarios, you typically need a substantial number of test functions. To find your way around these functions (today, and also in a few months’ time), we recommend a well-structured naming convention for your test functions. You could, for example, pick a limited set of “verbs” to prefix your function names with (a bit like what PowerShell does): Test…, Insert…, Delete…, etc. (see the sketch after this list).
  • Mark, Luc and I also discussed putting comments with tags in the test function code. Comment text can be extracted from a NAV translation export file. It should not be very hard to create a little tool that can take these extracted comments and convert them to proper test function documentation, similar to .NET XML comment documentation.
  • If you work for an independent software vendor (ISV), ship your test scripts along with your application. Not only does this demonstrate your commitment to, and confidence in, your code’s quality, it also documents the inner workings of the application, and allows resellers and/or customers to build upon your work when creating their own test sets.
  • Both Van Dijk/The Learning Network (Luc’s employer) and Mprise Indigo (Jan’s employer) have scripts that report test results to relevant stakeholders via e-mail. When starting from scratch, it might make sense to use RSS or a web-based dashboard instead for your test results.
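A sketch of how the verb convention and the tagged comments could look in practice (the tag format is hypothetical; the text after the tag is what a documentation tool would extract):

    // verb-prefixed test function names, a bit like PowerShell's verb-noun convention
    PROCEDURE InsertCustomerWithDefaultDimensions();
    PROCEDURE DeleteItemWithOpenLedgerEntries();

    // a tagged comment for extraction by documentation tooling
    [Test]
    PROCEDURE TestPostSalesOrder();
    BEGIN
      // [SCENARIO] Posting a released sales order creates a posted sales invoice.
      // test steps go here
    END;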

Actions

For a real-life project that Mark is working on, these are the first steps towards an acceptable level of code coverage:

  • Get as much of the standard Microsoft test suite running as is meaningful.
  • Optimize the application object code for testability (e.g. making functions more “pure” where possible; see the sketch after this list).
  • Optimize the application object user interface for testability (e.g. giving page actions meaningful and recognizable names for use in test pages).
  • Start with an empty test database; generate a minimal set of base data.
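As an example of the refactoring meant in the second bullet: instead of letting a function read its inputs from the database, pass them in as parameters, so the function can be unit-tested without any data setup (a before/after sketch with illustrative names):

    // before: impure - needs a General Ledger Setup record in the database
    PROCEDURE GetRoundedAmount(Amount : Decimal) : Decimal;
    BEGIN
      GLSetup.GET;
      EXIT(ROUND(Amount, GLSetup."Amount Rounding Precision"));
    END;

    // after: "pure" - the output depends only on the parameters
    PROCEDURE GetRoundedAmount(Amount : Decimal; RoundingPrecision : Decimal) : Decimal;
    BEGIN
      EXIT(ROUND(Amount, RoundingPrecision));
    END;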

1 Comment

  1. One more con for UI testing in NAV is the limitation on accessing page controls from code: if controls are not visible by default, they are not accessible from code to set or get their values. So you need to personalize the required pages before using the test versions of those pages in code.
