When and Why I Avoid WITH Statements in C/SIDE

On a regular basis (but not regular enough to remember all the subtleties involved), I need to explain to people why I think using the WITH statement in C/SIDE is potentially a bad idea. This blog post gives me something to refer them to, and to occasionally remind myself of my reasons. 😉

First of all, the WITH statement certainly has its applications. We all have a limited number of keystrokes we can type in our lives, so any time you can avoid typing something should be considered a win. However, if saving keystrokes increases your chances of wasting even more precious time debugging, the trade-off is an easy one to make.

In my own experience, WITH can cause really mind-boggling bugs in places that already have an explicit or implicit “scope variable”:

  • within another WITH statement (explicit; the outer WITH’s variable is your “scope variable”);
  • (sub)objects that have an implicit record variable, e.g. Rec in tables and pages, data item variables in reports etc.

The risk here is that the meaning of your code can change dramatically without any changes to the code itself if a named item is introduced in your WITH statement’s variable and something by that same name already was reachable outside of your WITH statement. Let me give an example.

Imagine we have a codeunit with the following code. Normally, of course, you would have more statements nested within your WITH statement; I left them out for simplicity’s sake.

Initial Codeunit

Running this gives us the following – unsurprising – result:

Initial result

However, if I now introduce a table function with the same name, the code in my codeunit gets a different meaning, without me even opening the codeunit. Perhaps somebody else made that change, and I don’t even have permissions to edit the codeunit. Or the table, for that matter.

A New Table Function

When I run my codeunit again, things will look like this:

Oops. An unexpected result.

In other words, WITH in situations with a “scope variable” is a time bomb: it may work flawlessly now, but there’s a risk it will blow up in your face in the future, without an obvious cause. That does not sound like the defensive code I normally try to write, which is why I tend to avoid WITH statements in general.
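The scenario described above can be sketched roughly as follows in C/AL (all object and function names are invented for illustration):

```
// Codeunit - initially, GetDisplayName is a local function of the
// codeunit itself, so that is what the call resolves to.
OnRun()
BEGIN
  WITH Customer DO
    MESSAGE(GetDisplayName);  // shows 'Codeunit function'
END;

LOCAL GetDisplayName() : Text
BEGIN
  EXIT('Codeunit function');
END;

// Now somebody adds a function with the same name to table Customer:
//
//   GetDisplayName() : Text
//   BEGIN
//     EXIT('Table function');
//   END;
//
// Inside the WITH block, the table function is now the closest match
// in scope, so the very same MESSAGE call silently shows
// 'Table function' - without the codeunit ever having been touched.
```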

Functions that act like properties

In C/SIDE, certain built-in functions like FILTERGROUP support two (semantically equivalent) calling syntaxes. Either you say

MyRecord.FILTERGROUP(MyFilterGroupLevel);

or you use

MyRecord.FILTERGROUP := MyFilterGroupLevel;

I always forget that some of this compiler cleverness also applies to our own functions. Sadly, I don’t think you can make your own “properties” read/write like FILTERGROUP is, but this technique does allow you to build e.g. codeunits that feel more like object-oriented classes with properties that are, in this case, either read-only or write-only. (For an example of a situation where that is not a serious limitation, please read on.)

A C/AL function that looks like this

SomeProperty(Value : Text)

can of course be called like this:

SomeProperty('Foo');

but this feels much more natural:

SomeProperty := 'Foo';

Note that your setter functions should have exactly one parameter, and your getters should obviously have a return value defined.

Imagine for example a codeunit for building connection strings to connect to external databases. The different elements of the connection string can be set using write-only “property” functions, after which the resulting string can be retrieved from a read-only “property”, effectively encapsulating the knowledge about how to build connection strings. Using such a codeunit might look something like this.

 ConnectionStringBuilder.ServerInstance := 'MyDatabaseServerInstance';
 ConnectionStringBuilder.DatabaseName := 'MyDatabaseName';
 ConnectionStringBuilder.IntegratedSecurity := true;


Not rocket science, still nice. 😉
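Internally, such a codeunit might look something like this minimal C/AL sketch (the function and variable names are assumptions, and the resulting connection string format is simplified):

```
// Write-only "properties": each setter has exactly one parameter
// and stores its value in a global variable of the codeunit.
ServerInstance(NewServerInstance : Text)
BEGIN
  ServerInstanceValue := NewServerInstance;
END;

DatabaseName(NewDatabaseName : Text)
BEGIN
  DatabaseNameValue := NewDatabaseName;
END;

IntegratedSecurity(NewIntegratedSecurity : Boolean)
BEGIN
  IntegratedSecurityValue := NewIntegratedSecurity;
END;

// Read-only "property": no parameters, only a return value.
// All knowledge about the connection string format lives here.
ConnectionString() : Text
BEGIN
  EXIT(
    STRSUBSTNO('Server=%1;Database=%2;Integrated Security=%3',
      ServerInstanceValue, DatabaseNameValue,
      FORMAT(IntegratedSecurityValue)));
END;
```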

Argument Completion for NAV PowerShell cmdlets

One of my pet peeves about the PowerShell cmdlets that ship with NAV is the lack of built-in argument completion: the module in question knows exactly which server instances exist (after all, it has a cmdlet that lets you retrieve a list of these instances), but lacks the ability to enumerate the instance names when you specify a -ServerInstance parameter and press the Tab key.

Luckily, PowerShell version 5.0 and up allow custom argument completers to be retrofitted to existing cmdlets. Let’s loop through the cmdlets that have a -ServerInstance parameter, and register an argument completer:

Get-Command `
    -Module Microsoft.Dynamics.Nav.Management `
    -ParameterName ServerInstance |
        ForEach-Object {
            Register-ArgumentCompleter `
                -CommandName $_ `
                -ParameterName ServerInstance `
                -ScriptBlock $ScriptBlock
        }
Before we can do that, of course, we will need to declare a scriptblock that contains the logic for our argument completer. The parameters that the scriptblock receives are defined (and passed upon invocation) by the PowerShell run-time. Our scriptblock may look something like this:

$ScriptBlock = {
    param($commandName, $parameterName, $wordToComplete)

    Get-NAVServerInstanceName |
        Where-Object { $_ -like "$wordToComplete*" } |
        ForEach-Object { [System.Management.Automation.CompletionResult]::new($_) }
}

where Get-NAVServerInstanceName is a function that I declared earlier to extract and normalize the server instance names, like so:

function Get-NAVServerInstanceName {
    Get-NAVServerInstance |
        Select-Object -ExpandProperty ServerInstance |
        ForEach-Object { $_ -replace '^MicrosoftDynamicsNavServer\$', '' }
}
Here’s the resulting user experience: you can simply tab through the list of server instances, or you can provide a wildcard pattern to further limit the instances that you tab through.

NAV Application Testing – the Story Continues

Following up on our recent blog posts/comments about testing, Luc van Vugt, Mark Brummel and I decided to get together and talk some more about testing best practices in NAV. Below are my notes and our conclusions.

What Data to Test With

  • As far as test data is concerned, tests should ideally be independent of each other, and independent of the data already present in the database.
  • However, (re)creating all data as part of your test scripts can get quite labour-intensive, and could slow down your test execution considerably.
  • Luc’s compromise, a single initialization codeunit that gets run before each test codeunit, is probably an excellent trade-off between test authoring effort and test decoupling.
  • In addition, using a stable, well-known set of base data that is already present in the database can save a lot of time (both during development and during test runs).
  • The CRONUS demo data may suffice in many cases; if not, consider using RapidStart or generating your own set of base test data. Keep this set as small as possible, e.g. using only a handful of G/L accounts.
  • When generating your own test data, consider filling text and code fields with random strings of the defined length of the field (e.g. fill customer names with 50 characters) to test for any overflows in the application objects without any extra effort.
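That last point might look something like this in a test codeunit (a sketch; LibraryUtility refers to the standard “Library - Utility” test codeunit, and GenerateRandomText is assumed to be available in your version of the test toolkit):

```
// Sketch: fill text fields to their maximum defined length, so that
// any downstream overflow (e.g. copying Name into a shorter field)
// surfaces during the test run without extra effort.
Customer.INIT;
Customer."No." := LibraryUtility.GenerateRandomCode(
  Customer.FIELDNO("No."), DATABASE::Customer);
Customer.Name :=
  LibraryUtility.GenerateRandomText(MAXSTRLEN(Customer.Name));
Customer."Name 2" :=
  LibraryUtility.GenerateRandomText(MAXSTRLEN(Customer."Name 2"));
Customer.INSERT(TRUE);
```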

Unit Tests vs. Integration/System Testing

  • Often, the input of your system under test (SUT) is data in the database, and the same is true for the output (simply a consequence of an ERP solution that was not built with testability in mind?). This has certain consequences for the way we can test our applications.
  • Our conclusion was to use unit tests for any “pure functions”: code with little or no dependencies and side effects on the data, e.g. a function for turning some strings into a valid, localized address array.
  • If you are testing logic that depends on, or affects, data in the database, use system testing. The same is true if the testability of your application logic’s building blocks leaves something to be desired.
  • In general, if a system test succeeds, it is safe to assume that the component parts (which would normally be tested separately with unit tests) are also correct.

Testing through user interface (typically testpages) vs. testing application logic

  • UI testing is particularly useful for system testing.
  • Pro: UI testing tests exactly what the user will experience (if somebody “accidentally” 😉 put some additional logic in the OnAction trigger, that will automatically become part of your system under test).
  • Pro: UI testing requires less maintenance. E.g. even if the logic called by a page action changes, it is likely that the page action itself will remain pretty much the same. A UI test would automatically test the new logic; an application logic test would need to be updated manually, but without any reminder to do so.
  • Pro: UI testing will still work even if you don’t have access to all of the source code, e.g. if Microsoft releases the base application as an extension somewhere in the future.
  • Pro: UI testing works against the sum of the installed extensions, and as such, allows you to test if your extension still behaves as expected when combined with other extensions.
  • Con: UI testing is significantly slower than C/AL testing. On the other hand – make your tests nicely independent of each other, and you can parallelize them by dividing the test load between multiple servers, thus reducing the overall lead time.

Testing: Who Does What?

  • Coming up with test scenarios is primarily a functional task; coding test scenarios obviously has a more technical nature.
  • Functional/domain specialists should own the system tests; technical specialists are the owners of the unit tests (since the subjects of unit tests are on a scale that functional specialists generally don’t need to be aware of).
  • For decoupling these two disciplines, perhaps we would need a shared language (DSL?) to express the scenarios in.
  • The vocabulary for this language could be reasonably small, since most tests consist primarily of:
    • opening a page;
    • setting fields on that page;
    • (optionally) invoking actions;
    • testing page field values; and
    • closing the page.
  • Ideally, the language would be convertible to step-by-step, human-readable test instructions (in MarkDown format, of course :)) and machine-readable C/SIDE test codeunits.
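Purely as an illustration, a scenario in such a language might read like this (the syntax is invented here, not an existing tool; page and field names are made up):

```
Scenario: Posting a sales order updates the customer balance
  Open page "Sales Order"
  Set "Sell-to Customer No." to "10000"
  Set "Posting Date" to WORKDATE
  Invoke action "Post"
  Open page "Customer Card" for "10000"
  Verify "Balance (LCY)" is greater than 0
  Close page
```

Each statement maps directly onto one of the vocabulary items above, which is what would make the conversion to both human-readable instructions and test codeunits feasible.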

Practical Test Script Matters

  • To cover all relevant scenarios, you typically need a substantial number of test functions. In order to find your way around these functions (today, and also in a few months’ time), we recommend using a well-structured naming convention for your test function names. You could, e.g. pick a limited set of “verbs” to prefix your function names with (a bit like what PowerShell does): Test…, Insert…, Delete… etc.
  • Mark, Luc and I also discussed putting comments with tags in the test function code. Comment text can be extracted from a NAV translation export file. It should not be very hard to create a little tool that can take these extracted comments and convert them to proper test function documentation, similar to .NET XML comment documentation.
  • If you work for an independent software vendor (ISV), ship your test scripts along with your application. Not only does this demonstrate your commitment to, and confidence in, your code’s quality, it also documents the inner workings of the application, and allows resellers and/or customers to build upon your work when creating their own test sets.
  • Both Van Dijk/The Learning Network (Luc’s employer) and Mprise Indigo (Jan’s employer) have scripts that report test results to relevant stakeholders via e-mail. When starting from scratch, it might make sense to use RSS or a web-based dashboard instead for your test results.


For a real-life project that Mark is working on, these are the first steps towards an acceptable level of code coverage:

  • Get the standard Microsoft test suite to run as much as meaningfully possible.
  • Optimize the application object code for testability (e.g. making functions more “pure” where possible).
  • Optimize the application object user interface for testability (e.g. giving page actions meaningful and recognizable names for use in test pages).
  • Start with an empty test database; generate a minimal set of base data.

NAV Help Server Full-Text Search Issues

A new help system for our add-on

Last year, we took the first steps towards proper on-line help for our add-on. Before, we had always had documentation in the form of Microsoft Word and PDF files on a network share, but the new system, based on the NAV Help Server, was intended to provide an easy-to-find, uniform, multi-language-enabled, extensible, searchable source of information for both our consultants and end-customers.

The system is starting to take shape

Step by step, the new on-line help, with an initial focus on conceptual topics, is starting to take shape. Our documentation specialist writes the topics in MarkDown – a file format we chose for its shallow learning curve and emphasis on document structure. In addition, MarkDown is a text-based format (as opposed to e.g. Microsoft Word files, which are binary), which enables meaningful version management (e.g. comparing revisions) in TFS. Topic post-processing and conversion from MarkDown to HTML (which is the file format that the NAV Help Server needs) is done as part of the build process.

The problem: full-text search is not working

Everything worked swimmingly, except for the NAV Help Server’s full-text search – it simply would not find any of our topics. No problem in finding the NAV base application help topics, though. Hmmm…


Maybe our topics are not indexed?

I initially assumed that it was a permissions issue – perhaps the Windows Search indexer did not have the required permissions to index our topics? However, after figuring out how to query Windows’ search index (more or less like this), I could see that our topics were indexed just fine.
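For reference, querying the index from PowerShell can be done through the Windows Search OLE DB provider; a sketch along these lines (the help folder path is an assumption):

```
# Query the Windows Search index via its OLE DB provider to check
# whether the files in the help folder were actually indexed.
$connection = New-Object System.Data.OleDb.OleDbConnection(
    "Provider=Search.CollatorDSO;Extended Properties='Application=Windows'")
$command = $connection.CreateCommand()
$command.CommandText =
    "SELECT System.ItemPathDisplay FROM SYSTEMINDEX " +
    "WHERE SCOPE = 'file:C:/inetpub/wwwroot/DynamicsNAV90Help/help/en'"
$connection.Open()
$reader = $command.ExecuteReader()
while ($reader.Read()) { $reader.GetString(0) }
$connection.Close()
```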

Maybe the help server only searches its own folders?

Another possible explanation for not finding our topics was that they lived in a subfolder of the NAV Help Server’s language folder (in this case, en). Maybe the help server only searches its own folders?

As a little experiment, I copied one of our topics to the en folder and searched for a word I knew it contained, but still no luck. This would turn out to be a bit of a red herring, as you can read below.

NAV Help Server’s inner workings

Flash-forward another 30 minutes. By then I had found out two rather essential things about how the NAV Help Server searches the index:

  • It only searches the language folder for the active language (‘active’ being the language referenced in the help server URL). However, the reason why copying my add-on topic to the language folder still did not render any results was due to the other thing I found out:

  • From what it finds in the index, the search handler only returns topics that have both a file name (which our topics obviously did) and a title (which – as you may have guessed – they did not).


Pandoc, the tool we use to convert our MarkDown to HTML, can use meta information in your MarkDown to add, among other things, a <title> tag to your HTML, but – of course – only if you provide such information. After the summer holiday, that will be the first thing to implement. Also, our documentation build process should copy its output to the language folders of the NAV Help Server, possibly prefixing our topic file names to differentiate them from the base help and to prevent file name collisions.
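For reference, a YAML metadata block at the top of a topic is enough for Pandoc to emit a <title> tag, provided you ask for standalone output (the topic name here is just an example):

```
---
title: Setting Up the Add-on
---

# Setting Up the Add-on

Topic text goes here...
```

Converting with `pandoc --standalone topic.md --output topic.html` then produces a complete HTML document whose <title> element is filled from the metadata block.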

Everything you always wanted to know about automated testing in NAV (cont’d)

Thanks for taking the time to answer my questions, Luc! I’ve definitely learned a thing or two, and I’m glad that we apparently agree about most of this stuff. Just a few last responses below…

About #1. “what to test”

Like I said, my first few automated tests were unit tests testing the results of validations. That seemed like a good idea at the time, because it allowed the tests to a. be very limited in scope, and b. be very isolated from each other.

Would you consider making separate test functions for each (relevant, i.e., sufficiently complex) field validation, like you probably would for each (relevant) function, or is that the wrong scale as far as you are concerned?

About #2. “developing for testability”

One of the things I’m trying to do in my development work is to use expressions instead of statements as much as possible. I want my functions to be as pure as they can practically be, i.e., fully deterministic and without observable side-effects, in order to optimise their testability.

Related prediction for NAV development: functional concepts will soon become the new design patterns. #markmywords

About #3. “predefined test data”

I’m not sure if I fully understand when you say that your test data baseline should be 100% stable and known, but you are using CRONUS data? We have no real way of knowing what changes Microsoft makes to the demo data between releases, do we? Wouldn’t you be better off generating all of your own data in a new NAV company? Or is it just a trade-off between effort and security?

The rest of your replies make perfect sense to me. 🙂

Everything you always wanted to know about automated testing in NAV but were afraid to ask ;-)

As a little experiment, I thought I’d collect the questions that popped up in my head during the session on automated testing at this year’s final Dutch Dynamics Community meeting, and humbly ask speaker Luc van Vugt to answer them on his blog. I realise that the talk’s focus was on getting the Microsoft built-in tests to run properly, but perhaps Luc would also be willing to share some of his experience with automated testing from ‘previous lives’? 🙂

1. First and foremost – how do you decide what (and how) to test?

When I first started writing automated tests, I found myself testing things that were so obvious that they probably never posed any risk to the stability of the application in the first place. Can you give some rules of thumb for what to focus our test effort on?

2. How does the need for automated testing affect development work?

You mentioned that testing NAV (ERP?) is different from testing most other systems, since practically everything goes through the database and there’s no easily available way to mock (simulate) this database interaction. Do the developers in your team have testability in mind when they are writing new features?

3. Using demo data as the basis for your test data

You mentioned that tests should ideally create (and clean up) their own data, returning the database to its pristine state after all the tests have run. In our experience, being overly strict about that costs time twice – once during test development, and once during each test run. How do you feel about isolating some of the data creation in a demo data creation tool, and running your tests in a database that already has that generated data on board?

4. Have you considered running chunks of tests in parallel?

I guess that could significantly reduce the execution time, right? And that becomes even more relevant e.g. when you want to do some form of gated check-in, where tests must pass before a changeset is accepted into your code repository?
Also, running in parallel forces you to make your tests fully independent of each other – as they should be.

5. How do you design new tests?

In my experience, designing your tests in a code editor leads to the worst results. I think it’s best to formalise your (existing, manual) tests, i.e. listing the steps and verifications, in a text editor, in plain English before converting them to code. Would you agree?

6. Most of our tests were (consciously) implemented as UI tests.

Only having access to fields that are visible from the GUI can be quite limiting – there is no straightforward way to get e.g. the Line No. from a Sales Line. Any advice on that (apart from using unit tests instead)?

7. You mentioned the other day some strange differences between running the test suite from the Windows client, and running it ‘headlessly’ from PowerShell.

Can you elaborate a little on that? Did you manage to solve that issue?

MS Connect Suggestion: Importance property for page controls within a repeater

Another suggestion I left on Microsoft Connect. Your vote could help make a difference!

Page controls have a property called Importance. Setting this property to the option “Additional” hides the control on card pages until the user clicks the “Show more fields” button. Toggling visibility like this requires fewer mouse clicks and is much more discoverable than the most common alternative (i.e. changing control visibility in “Customize this Page”/”Choose Columns”).

Sadly, nothing similar appears to exist for page controls within repeaters. One of the reasons behind this might be that there’s no obvious, meaningful way to handle one of the other options of the Importance property – Promoted – in the context of a list. Nonetheless, I feel that support for Importance = Additional for controls within repeaters would have a significant, positive impact on usability at (what I, as a layman, assess to be) a reasonable cost. 

P.S.: Having this kind of additional meta-information about list page controls could even help the platform to decide how to make optimal use of limited screen real-estate in e.g. the phone client.

Vote here!

MS Connect Suggestion: Refresh action for test page objects

The other day, I entered the following suggestion on Microsoft Connect. If you agree, don’t forget to vote!

Test page objects support a number of actions/functions that correspond to built-in page actions, such as navigation actions (NEXT, PREVIOUS, FIRST, LAST) and actions to toggle the page mode (VIEW, EDIT). However, there seems to be no corresponding test page action/function for invoking the built-in Refresh action that is present on most page types.

A built-in Refresh action would be useful to re-retrieve the current state of the record from the database, or to force recalculation of FlowFields.

Vote here!

Go Home, NAV, You’re Drunk

Look carefully at the numbers below and see if you find anything unusual… 😉

Compilation Progress Dialog
