
RUM: End-to-End Testing and Workload


RUM stands for “Real User Measurements”: metrics collected from a software system while it is being used as it would be in the real world, either in realistic pre-release testing or during actual production use once that becomes possible.  There are two important rules to keep in mind when collecting Real User Measurements:  first, the system must be tested end-to-end, as it would be used in a real-world scenario; and second, the metrics depend a great deal on the workload the system was running at the time they were collected.  For example, a system will report very different (much better) numbers serving a single user than it will serving many thousands of concurrent users.  A good shorthand way of looking at it is RUM = end-to-end testing + workload.

End-to-end system testing is not just the only way to collect realistic performance metrics; it is also the best way to test the functionality of a system.  Some complex issues will only occur while the system is under significant load (as various subsystems become performance bottlenecks).  End-to-end testing conducted while the system is processing a heavy workload is often the only way to expose these types of intermittent behavior.
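The effect of workload on the numbers can be shown with a minimal sketch (all names here are hypothetical): the same simulated operation is measured at two concurrency levels, with a shared lock standing in for a contended subsystem such as a database connection.

```python
import statistics
import threading
import time
from concurrent.futures import ThreadPoolExecutor

_db_lock = threading.Lock()  # stand-in for a contended subsystem (the bottleneck)

def handle_request() -> float:
    """Simulated end-to-end request: serialized work behind a shared lock."""
    start = time.perf_counter()
    with _db_lock:          # only one request at a time reaches the "database"
        time.sleep(0.002)   # 2 ms of serialized work
    return (time.perf_counter() - start) * 1000  # latency in milliseconds

def measure(concurrency: int, requests: int = 50) -> float:
    """Run a batch of requests at the given concurrency; return median latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: handle_request(), range(requests)))
    return statistics.median(latencies)

single = measure(concurrency=1)
loaded = measure(concurrency=25)
print(f"median latency, 1 worker:   {single:.1f} ms")
print(f"median latency, 25 workers: {loaded:.1f} ms")
```

Under contention the median latency climbs well above the single-user figure, which is exactly why metrics collected without a realistic workload are misleading.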




The H4v3n native application contains test harness functionality used to run remote end-to-end tests of the APIs, as outlined in the diagram above (refer to the H4v3n Testing page for more specific information).  Any account assigned as a member of one of the testing roles can use this functionality.  The test files are essentially request messages that have been saved as files, then modified to include variables which are replaced with realistic values at runtime.  The test files also rely on the Tolerant Reader parsing logic: extra elements needed by the test harness can be added to a request message without “breaking” the parsing of the messages in the API pipeline.
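A minimal sketch of those two ideas together (the template fields and element names are hypothetical): a saved request message with runtime placeholders, plus a Tolerant Reader that reads only the fields it knows and ignores the extra test-harness element.

```python
import string
import xml.etree.ElementTree as ET

# A saved request message used as a test file. ${...} placeholders are
# replaced with realistic values at runtime; <TestMeta> is an extra element
# for the test harness that a Tolerant Reader simply skips over.
TEST_FILE = """\
<Request>
  <Method>GetCustomer</Method>
  <CustomerId>${customer_id}</CustomerId>
  <TestMeta run="${run_id}">expected=200</TestMeta>
</Request>
"""

def render_test_file(template: str, **values: str) -> str:
    """Substitute runtime values into a saved request template."""
    return string.Template(template).substitute(values)

def tolerant_parse(message: str, known_fields: set) -> dict:
    """Tolerant Reader: read only the fields we know, ignore everything else."""
    root = ET.fromstring(message)
    return {el.tag: (el.text or "") for el in root if el.tag in known_fields}

rendered = render_test_file(TEST_FILE, customer_id="1042", run_id="r7")
parsed = tolerant_parse(rendered, known_fields={"Method", "CustomerId"})
print(parsed)  # the extra <TestMeta> element does not break parsing
```

The key design point is that the parser whitelists what it understands rather than rejecting what it does not, so test-only elements ride along harmlessly.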

In the environment shown above, remote end-to-end testing of the API methods tests the following functionality and/or performance (directly or indirectly):

  1. The remote end (number 1 above) starts with the client workstation network configuration, host OS configuration and security.
  2. [direct] The correct use of the keyfile for systems, connection endpoints and accounts.
  3. [direct] The connection from the client application to the chosen web service (endpoint URL), which includes the transport mode [cleartext message, compressed message, compressed and encrypted message] as well as the selected protocols (SOAP/RPC vs. HTTP/REST).
  4. [direct] The client application SDK library functionality (creation/format of request messages, etc).
  5. The remote user’s ISP network used to reach the organization’s DMZ firewall.
  6. The DMZ firewall rules which allow access to the reverse proxy.
  7. The DMZ network configuration.
  8. The reverse proxy configuration and functionality, which is generally only used as a port-forwarding pass-through.
  9. The internal firewall rules, which allow the reverse proxy to access the internal network load balancers.
  10. The internal network configuration.
  11. The internal load balancer configuration and functionality.
  12. The web server host OS configuration and security.
  13. The app server (IIS) configuration and security.
  14. [direct] The ASP.NET web service configuration and security, hosted under IIS.
  15. [direct] The web service message processing pipeline:
    • the request message was accepted and correctly parsed
    • the TTL check was successful
    • the authentication of the message account credentials was successful
    • the authorization of each specific method in the batch was successful
    • the mapping of the message to the correct API was successful
    • the mapping of message parameters to internal method parameters was successful
    • the DAC builder method correctly configured a SqlCommand object to be passed into a DAC utility method
    • the DAC utility method successfully executed the SqlCommand and returned the results
    • when all of the methods in the batch were completed, the collection of results was returned back up the pipeline
  16. The database server host OS configuration and security.
  17. [direct] The database server (MySQL) configuration and security.
  18. [direct] The response message was successfully serialized and returned to the client, with the correct compression and/or encryption.
  19. [direct] The client app SDK successfully parsed the response message.
  20. [direct] The returned results were the correct data and type.
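The message-processing stages in item 15 can be sketched as a fail-fast sequence.  This is a simplified stand-in, not the real pipeline: the account store, grants table, and API dispatch table below are hypothetical, and the DAC/SqlCommand steps are collapsed into a single callable.

```python
from dataclasses import dataclass, field

@dataclass
class Call:
    method: str
    params: dict

@dataclass
class Request:
    account: str
    token: str
    ttl: float                    # seconds of validity remaining
    batch: list = field(default_factory=list)

# Hypothetical stand-ins for the real stores behind each stage.
VALID_TOKENS = {"acct-7": "s3cret"}
GRANTS = {"acct-7": {"GetCustomer"}}
API_TABLE = {"GetCustomer": lambda params: {"id": params["id"], "name": "Ada"}}

def process(req: Request) -> list:
    """Walk a request through the pipeline stages in order; fail fast."""
    if req.ttl <= 0:                                    # TTL check
        raise ValueError("TTL check failed")
    if VALID_TOKENS.get(req.account) != req.token:      # authentication
        raise PermissionError("authentication failed")
    results = []
    for call in req.batch:
        if call.method not in GRANTS.get(req.account, set()):
            raise PermissionError(f"not authorized for {call.method}")
        api = API_TABLE[call.method]       # map message -> internal API
        results.append(api(call.params))   # execute and collect the result
    return results                         # returned back up the pipeline

req = Request("acct-7", "s3cret", ttl=30,
              batch=[Call("GetCustomer", {"id": 1042})])
print(process(req))  # [{'id': 1042, 'name': 'Ada'}]
```

Ordering matters here: the cheap checks (parse, TTL, authentication) run before the per-method work, so a bad message is rejected before it costs anything.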

The results of all the tests must be verified after they are run.  Positive tests are verified in the database server (number 2 above) by using the database server console to observe the data in the various tables, making sure the expected test records and values have been stored.  Negative tests are verified in the web service log files (number 3 above) using a text editor to observe that the error messages logged match the negative tests that were run.  Finally, database mirroring, replication and sharding (number 4 above) each require their own separate testing and verification.
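Those two verification steps can be sketched as follows.  This uses sqlite3 as a stand-in for the real database server and a hard-coded log string in place of the web service log files; the table, record values, and log lines are hypothetical.

```python
import sqlite3

# Stand-in database (in the real environment these checks are done against
# the database server via its console).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.execute("INSERT INTO customers VALUES (1042, 'Ada')")

def verify_positive(conn, customer_id: int, expected_name: str) -> bool:
    """Positive test: the expected record and values were stored."""
    row = conn.execute("SELECT name FROM customers WHERE id = ?",
                       (customer_id,)).fetchone()
    return row is not None and row[0] == expected_name

# Stand-in for the web service log files.
WEB_SERVICE_LOG = """\
2024-01-01 10:00:01 INFO  request accepted
2024-01-01 10:00:02 ERROR authentication failed for account acct-9
"""

def verify_negative(log: str, expected_error: str) -> bool:
    """Negative test: the expected error message was logged."""
    return any(expected_error in line for line in log.splitlines())

print(verify_positive(db, 1042, "Ada"))                           # True
print(verify_negative(WEB_SERVICE_LOG, "authentication failed"))  # True
```

In practice these checks would be scripted the same way so every test run ends with an automated pass/fail verdict rather than a manual inspection.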




Web application tests cover almost all of the same ground as the API tests, but use a browser-automation tool (Selenium, in this case) to also test the functionality of the application itself.  The web application consists mainly of server-side logic to merge the data returned from the API method calls into the HTML rendered for each page.  Selenium scripts simulate the actions of a user as they navigate through the web application and use the functionality of each page.
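The shape of such a script can be sketched as a scripted sequence of user actions run against a driver.  `FakeDriver`, the URL, and the selectors below are all hypothetical stand-ins so the flow can be shown without a real browser; an actual script would call the corresponding `selenium.webdriver` methods instead.

```python
class FakeDriver:
    """Records actions; a real script would use selenium.webdriver here."""
    def __init__(self):
        self.actions = []
    def get(self, url):
        self.actions.append(("get", url))
    def type_into(self, selector, text):
        self.actions.append(("type", selector, text))
    def click(self, selector):
        self.actions.append(("click", selector))

def login_and_search(driver, user, password, query):
    """Simulate a user logging in and then searching, page by page."""
    driver.get("https://example.test/login")   # hypothetical URL
    driver.type_into("#username", user)
    driver.type_into("#password", password)
    driver.click("#login-button")
    driver.type_into("#search-box", query)
    driver.click("#search-button")

driver = FakeDriver()
login_and_search(driver, "tester", "pw", "Ada")
print(len(driver.actions))  # 6 recorded user actions
```

Because each script is an ordered list of page interactions, the same scripts can double as a load generator: running many of them concurrently supplies the realistic workload that RUM collection depends on.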

