Once Haven has been successfully installed and the client application can connect to the various web services of the system, end-to-end tests can be run from any remote client application using the API Test screen. Note that the account running the client application must be a member of the test role for the API Test menu to be displayed. The API tests themselves run with the authorization (method ACL) of whichever account was selected to connect to the system/web service. All tests are also run using the selected transport mode (clear text, compressed, or compressed and encrypted) for the messages exchanged between the application and the server. Running the tests under different modes is useful for observing the trade-off between the size of the request message and the CPU overhead of compression and/or encryption.
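The size-versus-overhead trade-off can be seen with a few lines of code. This is only an illustrative sketch using Python's zlib, not the platform's actual transport code; the payload is a made-up repetitive XML request.

```python
import zlib

# Repetitive XML compresses very well, so the "compressed" transport mode
# trades CPU time for a much smaller message on the wire.
payload = b"<Request>" + b"<Method>CustomerRead</Method>" * 50 + b"</Request>"
compressed = zlib.compress(payload)
ratio = len(compressed) / len(payload)  # well under 1.0 for XML like this
```

Encryption adds further CPU cost but, unlike compression, does not reduce (and may slightly increase) the message size.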
The remote API testing built into the Haven client application uses four features of the Distribware platform to make end-to-end testing relatively easy. First, it uses the ability to batch multiple method calls per request message (up to the configurable maximum number of methods per message). This batching a) allows the methods in a single test file to perform all the steps of a process using generated variable values that all “tie together”, and b) allows the test files to “clean up” after themselves with regard to the test data they create. For example, the CRUD and Proc (process) test files follow a pattern: first they create all the new records in the batch, second they read the records that were just written, third they modify/update those records, and finally they delete all the records that were created at the start of the batch. This matters because the client application can connect to any type of server environment, i.e., dev, test, or prod. It is very convenient that tests run in a production environment automatically clean up the test records they create (the test data generated by test files is also easy to identify, because the non-human-readable values stored in most fields make the records easy to hard delete).
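The create/read/update/delete batch pattern can be sketched as follows. This is a hypothetical illustration, not the platform's actual message format: the method names, field names, and `ZZTEST-` marker are all invented for the example.

```python
import uuid

def build_crud_batch(entity: str) -> list[dict]:
    # One generated ID ties all four steps of the batch together, and the
    # non-human-readable name makes the test row easy to spot (and hard
    # delete) if cleanup ever fails.
    record_id = uuid.uuid4().hex
    name = f"ZZTEST-{uuid.uuid4().hex}"
    return [
        {"Method": f"{entity}Create", "Id": record_id, "Name": name,         "ExpectedCode": "OK"},
        {"Method": f"{entity}Read",   "Id": record_id,                        "ExpectedCode": "OK"},
        {"Method": f"{entity}Update", "Id": record_id, "Name": name + "-v2", "ExpectedCode": "OK"},
        {"Method": f"{entity}Delete", "Id": record_id,                        "ExpectedCode": "OK"},
    ]

batch = build_crud_batch("Customer")
```

Because the delete step is part of the same batch, a successful run leaves no residual test data behind, even in production.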
The second feature is deferred execution. API messages are self-contained, so they can either be executed immediately or stored for execution at a later time (this is particularly useful for system integrations that have an event-driven source of data and an intermittently available destination system for that data). Test files are simply persisted request messages with some extra elements added to them.
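Because a request message is self-contained, "defer" amounts to serialize-now, execute-later. The following is a minimal sketch under that assumption; the function names, JSON storage, and `ReqGID` filename convention are illustrative, not the platform's actual mechanism.

```python
import json
import pathlib
import tempfile

def defer(request: dict, queue_dir: pathlib.Path) -> pathlib.Path:
    # Persist the self-contained request for later execution.
    path = queue_dir / f"{request['ReqGID']}.json"
    path.write_text(json.dumps(request))
    return path

def drain(queue_dir: pathlib.Path, execute) -> int:
    # When the destination system becomes available, replay everything queued.
    executed = 0
    for path in sorted(queue_dir.glob("*.json")):
        execute(json.loads(path.read_text()))
        path.unlink()  # the message is removed once it has been delivered
        executed += 1
    return executed

queue = pathlib.Path(tempfile.mkdtemp())
defer({"ReqGID": "a1b2", "MethodList": []}, queue)
handled = []
count = drain(queue, handled.append)
```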
The third feature that makes end-to-end testing relatively easy is the Tolerant Reader logic used to parse the request messages in the message processing pipeline. Each serialized method in a test file contains extra elements it would normally not have, such as ExpectedCode (OK, Error, Exception) and TestDescription (a description of the specific method call, such as “missing required fields”, etc.). These extra elements are used by the test harness but are ignored by the Tolerant Reader when parsing and processing the request.
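The essence of a tolerant reader is that the parser reads only the elements it knows about and silently skips the rest. A minimal sketch, assuming a simplified XML shape (the `Call`, `Method`, and `Params` element names are invented for the example; only ExpectedCode and TestDescription come from the text above):

```python
import xml.etree.ElementTree as ET

# The server-side parser only cares about these elements.
KNOWN = {"Method", "Params"}

def parse_method(xml_text: str) -> dict:
    node = ET.fromstring(xml_text)
    # Unknown elements are simply not collected -- no error, no warning.
    return {child.tag: child.text for child in node if child.tag in KNOWN}

call = parse_method(
    "<Call>"
    "<Method>CustomerRead</Method>"
    "<Params>42</Params>"
    "<ExpectedCode>OK</ExpectedCode>"                   # test-harness-only element
    "<TestDescription>read by id</TestDescription>"     # test-harness-only element
    "</Call>"
)
```

The same request therefore works both as a live API call and as a test file: the harness reads the extra elements, while the pipeline never sees them.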
The final feature is the generic assertions already built into the response message of every API method call. The last step in the message processing pipeline is the set of API mapper methods, which call the methods of the internal libraries that contain the actual system logic. The mapper methods then examine the results of those internal calls and categorize them as “OK” (happy path) or as one of the alternative paths such as “Error” (a logic error that did not throw an exception) or “Exception”. For each method in the response message, the test harness compares the returned result code with the expected result code to determine the pass/fail status of each individual test method call.
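The categorization and the pass/fail comparison can be sketched like this. The shape of the internal result (a success flag plus a value) and the field names are assumptions made for the example, not the platform's actual contract:

```python
def invoke(internal_call) -> dict:
    # Mapper-method sketch: categorize the internal call's outcome.
    try:
        ok, value = internal_call()  # assumed shape: (success, value)
    except Exception as exc:
        return {"ResCode": "Exception", "ResDetail": str(exc)}
    if not ok:
        # A logic error that did not throw an exception.
        return {"ResCode": "Error", "ResDetail": value}
    return {"ResCode": "OK", "ResVal": value}

def passed(method_result: dict, expected_code: str) -> bool:
    # The harness's generic assertion: returned code vs. expected code.
    return method_result["ResCode"] == expected_code
```

Note that a test can expect “Error” or “Exception” and still pass, which is how negative tests (e.g. “missing required fields”) are expressed.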
The API Test menu displays a single screen. The left-side grid shows a list of all the test files found in a selected folder; the browse button allows the user to select a specific folder containing test files. A test file is an XML request message that has been saved and modified to contain placeholders (variables) that are replaced at runtime with appropriate values such as new timestamps, unique ID values, etc. These saved request messages exercise the functionality of one or more API methods, and the service returns the corresponding response messages. Two buttons at the top of the screen run either the specific test files selected or all of the test files found in the folder. The tests can be run against either the REST or the SOAP service by selecting the appropriate radio button, and the summary metrics of the test run are displayed at the top of the screen (in milliseconds). The response messages are then analyzed to determine whether each test succeeded or failed and to calculate the various performance metrics, and the results are displayed in the right-side grid when the run finishes.
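Runtime placeholder substitution can be sketched in a few lines. The `{{NOW}}`/`{{GUID}}` placeholder syntax is invented for the example; the actual test files may use a different convention:

```python
import re
import time
import uuid

def expand(template: str) -> str:
    # Each placeholder name maps to a generator that produces a fresh value
    # at run time, so every test run gets new timestamps and unique IDs.
    generators = {
        "NOW": lambda: time.strftime("%Y-%m-%dT%H:%M:%S"),
        "GUID": lambda: uuid.uuid4().hex,
    }
    return re.sub(r"\{\{(\w+)\}\}", lambda m: generators[m.group(1)](), template)

msg = expand("<Created>{{NOW}}</Created><Id>{{GUID}}</Id>")
```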
The “Client” column shows the number of milliseconds measured for the request/response round trip by the remote client application itself (all measurements, on both the client and the server, are obtained from the ElapsedMilliseconds value of the Stopwatch class). The “Server” column shows the same start-to-finish measurement on the server for processing the request message and returning the response message. The “Network” column is the difference between the total client duration and the server duration. The “Method” column shows the start-to-finish measurement of each individual method call in the batch. The “Pipeline” column shows the difference between the total server duration and the sum of the method measurements, which represents the message processing overhead of the pipeline, such as message parsing, authentication/authorization, etc. The “ReqInitialSize” and “ReqFinalSize” columns show the size in bytes of the original request message vs. its compressed or encrypted size; the “RespInitialSize” and “RespFinalSize” columns show the same for the response message. The “MethodCount” column shows how many method calls were contained in each request and response message.
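The derived columns follow directly from the raw readings. A small sketch with invented numbers (the field names mirror the grid columns; the function itself is illustrative):

```python
def derive(client_ms: int, server_ms: int, method_ms: list[int]) -> dict:
    return {
        "Client": client_ms,
        "Server": server_ms,
        "Network": client_ms - server_ms,        # transport time outside the server
        "Pipeline": server_ms - sum(method_ms),  # parsing, authn/authz, routing overhead
        "MethodCount": len(method_ms),
    }

row = derive(client_ms=180, server_ms=150, method_ms=[40, 55, 30])
# Network = 180 - 150 = 30; Pipeline = 150 - 125 = 25
```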
Right-clicking on one of the result rows allows the matching response message to be viewed, as shown in the example above. The ReqGID value is used to match an asynchronous response to the original request in the client application. The AcctToken value is used to authenticate the server/service to the client application. The SvcDuration value is the number of milliseconds from the receipt of the request message to the return of the response message as measured on the server. The ReqCode value is used to return special error codes to the client application that pertain to the message processing pipeline itself, such as an expired or disabled user account, and so on (this type of error message would be shown in the ReqDetail field).
The MethodList element contains a serialized collection of the batch of method results. Each method can return a collection of one or more results; the client application reads each value from the collection by name, and the first result always has the default name “Default”. The other result fields are generally empty when the ResCode = OK, and are mainly used for errors and exceptions. The value returned by each result can be seen in the ResVal field.
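Reading results by name can be sketched as follows, assuming a simplified result shape (the `ResName` field and the `RowCount` result are invented for the example; “Default”, ResCode, and ResVal come from the text above):

```python
# A method's result collection: the first entry always carries the
# default name "Default"; additional named results may follow.
results = [
    {"ResName": "Default",  "ResCode": "OK", "ResVal": "42"},
    {"ResName": "RowCount", "ResCode": "OK", "ResVal": "1"},
]

def read_result(results: list[dict], name: str = "Default") -> dict:
    # The client looks a result up by name rather than by position.
    return next(r for r in results if r["ResName"] == name)
```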
Two buttons on the bottom right of the screen allow the results of a test run to be saved. The Save Results button exports the result grid to a CSV file, which can then be opened in a spreadsheet as shown above. The Upload Results button saves the contents of the result grid to a database table on the server the client application is currently connected to; server-side processes then merge those individual tables into a centralized repository for long-term analysis/trending. Note: details of both the client machine and the server are automatically added as test machines in the test results database when saving the data to the server. Also, the server state (CPU, memory, disk, network) is captured by including specific method calls in the test file batch. All result records are child records linked to the parent test run record.