Thanks for taking the time to answer my questions, Luc! I’ve definitely learned a thing or two, and I’m glad that we apparently agree about most of this stuff. Just a few last responses below…
About #1. “what to test”
Like I said, my first few automated tests were unit tests checking the results of validations. That seemed like a good idea at the time, because it allowed the tests to be (a) very limited in scope and (b) well isolated from each other.
Would you consider making separate test functions for each (relevant, i.e., sufficiently complex) field validation, like you probably would for each (relevant) function, or is that the wrong scale as far as you are concerned?
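To make the question concrete, here is a rough sketch of what I mean, in Python rather than C/AL just for brevity. The validations and names are entirely hypothetical; the point is one small, focused test function per sufficiently complex field validation:

```python
# Hypothetical field validations (illustrative only, not real NAV logic).

def validate_vat_number(vat: str) -> bool:
    """Non-empty, starting with a two-letter country code."""
    return len(vat) > 2 and vat[:2].isalpha()

def validate_quantity(qty: int) -> bool:
    """Quantity must be strictly positive."""
    return qty > 0

# One focused test function per validation keeps failures easy to pinpoint
# and keeps the tests independent of each other.
def test_validate_vat_number():
    assert validate_vat_number("NL123456789")
    assert not validate_vat_number("123")

def test_validate_quantity():
    assert validate_quantity(1)
    assert not validate_quantity(0)

test_validate_vat_number()
test_validate_quantity()
```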
About #2. “developing for testability”
One of the things I’m trying to do in my development work is to use expressions instead of statements as much as possible. I want my functions to be as pure as they can practically be, i.e., fully deterministic and without observable side-effects, in order to optimise their testability.
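As a tiny illustration of why I care about this (again in Python, with made-up names): a pure, expression-style function needs no setup or teardown to test, because its result depends only on its inputs.

```python
# Pure function: a single expression, no mutation, no I/O.
# Deterministic and free of observable side effects, so trivially testable.

def line_amount(qty: int, unit_price: float, discount_pct: float) -> float:
    return qty * unit_price * (1 - discount_pct / 100)

# Testing is just: call it, check the value.
assert line_amount(2, 10.0, 0.0) == 20.0
assert line_amount(2, 10.0, 50.0) == 10.0
```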
Related prediction for NAV development: functional concepts will soon become the new design patterns. #markmywords
About #3. “predefined test data”
I’m not sure I fully understand: you say your test data baseline should be 100% stable and known, yet you are using CRONUS data? We have no real way of knowing what changes Microsoft makes to the demo data between releases, do we? Wouldn’t you be better off generating all of your own data in a fresh NAV company? Or is it just a trade-off between effort and safety?
The rest of your replies make perfect sense to me. 🙂