Testing Procedures for Enterprise Revenue Management and Pricing Systems

As I described in a previous blog, an Enterprise Revenue Management and Pricing System (ERMPS) is a long-term commitment by your company to increasing the sophistication and personalization of Revenue Management practices, leading to a significant improvement in your business's bottom line. Key stages of an ERMPS life cycle are (1) Discovery, (2) Prototype, (3) Development, (4) User Acceptance Testing, and (5) Rollout/Support. The robustness and scalability of an ERMPS depend on applying appropriate testing procedures at each stage of the life cycle.

Without these procedures, an ERMPS build will produce inaccurate results and waste resources. First, it's essential to conduct data validation tests to ensure all input data is in line with business expectations. After each module is prototyped and implemented, unit tests will make sure every module executes precisely in line with the design specifications. Next, system integration tests will evaluate the correctness of results for the whole system by running it end to end and investigating data transformations at the interaction points between modules. Finally, the system itself should be able to detect potential issues and throw meaningful alerts and errors that allow straightforward troubleshooting.

Data Validation Tests

During the Discovery phase, before any model design is considered, a thorough data validation of the analysis dataset should be executed, covering all available data elements that could be used in analytical modeling. In this stage, high-level statistics, such as data type, percentage of missing values, mean, standard deviation, extreme values, and positivity, should be generated and analyzed to understand data quality and limitations and to examine business reasonability.
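As an illustration, a minimal profiling sketch in Python with pandas could generate these statistics in one pass (the function name and the strict-positivity convention are assumptions for this example, not a prescribed implementation):

    import pandas as pd

    def profile_dataset(df: pd.DataFrame) -> pd.DataFrame:
        """High-level statistics for each column of the analysis dataset."""
        numeric = df.select_dtypes(include="number")
        return pd.DataFrame({
            "dtype": df.dtypes.astype(str),
            "pct_missing": df.isna().mean() * 100,
            "mean": numeric.mean(),
            "std": numeric.std(),
            "min": numeric.min(),
            "max": numeric.max(),
            # Share of strictly positive values, useful for fields such as
            # prices or quantities that should never be zero or negative.
            "pct_positive": numeric.gt(0).mean() * 100,
        })

Reviewing a table like this with business stakeholders early makes limitations visible before they are baked into the model design.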

During the Prototype and Development phases, once the model design is finalized, the code used for data quality checks should be maintained and reused every time the input data is refreshed, ensuring overall consistency. When new data elements are introduced, additional data quality checks should be added to ensure that the intended model design requirements are met and, more importantly, that they generate the desired business outcome. For example, while competitor shopped pricing data is an extremely valuable input for estimating market response models, we have to ensure that minimum observation counts are met, or fill in missing values with business-reasonable estimates using proximity-based algorithms.
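One way such a refresh-time check could look, sketched in Python with scikit-learn's k-nearest-neighbor imputer standing in for the proximity-based algorithm (the threshold, the market and price column names, and the feature set are all hypothetical):

    import pandas as pd
    from sklearn.impute import KNNImputer

    MIN_OBS_PER_MARKET = 30  # hypothetical business threshold

    def check_competitor_prices(df: pd.DataFrame) -> pd.DataFrame:
        """Rerun on every data refresh: enforce minimum observation counts,
        then fill remaining gaps with a proximity-based (k-NN) estimate."""
        counts = df.groupby("market")["competitor_price"].count()
        thin = counts[counts < MIN_OBS_PER_MARKET]
        if not thin.empty:
            raise ValueError(f"Too few shopped prices in markets: {list(thin.index)}")
        # Impute missing prices from the nearest neighbors in feature space.
        features = ["own_price", "demand_index", "competitor_price"]
        df[features] = KNNImputer(n_neighbors=5).fit_transform(df[features])
        return df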

Unit Tests

In the later stages of the Development phase, when system components run successfully, key testing activities should focus on verifying that system behavior is aligned with the detailed design specifications. For each component of the system, unit tests that identify differences between system-generated outputs and known good outcomes should be executed.

This process will be time-consuming and will require multiple iterations and ongoing cooperation between model designers, code writers, and business strategists. The goal of the tests is to 1) ensure the model is correctly implemented per the design and 2) cover as many extreme cases as possible, ensuring a robust system that continues to operate when outliers appear.
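A minimal unit test sketch of both goals, assuming a hypothetical pricing module and fixture files holding vetted known-good outputs, might compare results within a numerical tolerance and probe an extreme-case input:

    import unittest
    import pandas as pd
    from pandas.testing import assert_frame_equal

    from pricing_module import recommend_prices  # hypothetical module under test

    class TestPricingModule(unittest.TestCase):
        def test_matches_known_good_output(self):
            # Known good outcomes were vetted by model designers and business
            # strategists and saved as a fixture.
            inputs = pd.read_csv("tests/fixtures/inputs.csv")
            expected = pd.read_csv("tests/fixtures/known_good_output.csv")
            actual = recommend_prices(inputs)
            assert_frame_equal(actual, expected, check_exact=False, atol=1e-6)

        def test_extreme_demand_stays_in_bounds(self):
            # Extreme case: a demand spike should still produce positive,
            # bounded price recommendations rather than crashing the module.
            spike = pd.read_csv("tests/fixtures/demand_spike.csv")
            actual = recommend_prices(spike)
            self.assertTrue(actual["recommended_price"].between(0.01, 10_000).all())

    if __name__ == "__main__":
        unittest.main()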

Unit tests are also a good opportunity to identify bottlenecks in the data flows. For example, updates to the same output table across multiple modules should be avoided, since they create unnecessary dependencies between modules and make it much harder to test and update the model.

System Integration Tests

At the end of the Development phase, once every module is unit tested, system integration tests will ensure that each component is correctly assembled with the other components of the system. Failures in system integration tests could point to faulty integration points across any of the system modules, but they can also identify missed bugs or system design issues. In the case of a system design issue, a solution that benefits the whole system, not just the problematic module, should be chosen. For example, a column designed to store only decimal values between [0,1] will cause a failing integration test when bad data generates values that exceed the column width. In this case, removing or bounding the bad data is preferable to widening the column, because it prevents the bad data from impacting other related modules, which might not log the error but would certainly generate sub-optimal results.
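A sketch of the bounding approach, assuming a hypothetical discount_factor column that must stay within [0,1]:

    import pandas as pd

    def bound_unit_interval(df: pd.DataFrame, col: str = "discount_factor") -> pd.DataFrame:
        """Bound values to [0, 1] before they reach the shared column, instead
        of widening the column to accommodate bad data."""
        bad = ~df[col].between(0, 1)
        if bad.any():
            # Surface the issue so the root cause can be fixed upstream.
            print(f"Bounding {bad.sum()} out-of-range values in '{col}'")
            df[col] = df[col].clip(lower=0, upper=1)
        return df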

A good practice that helps ensure a smooth integration test process is saving important calculation steps that lead to the final output in intermediate tables, or at least in temporary tables. This makes it possible to identify issues without running additional ad-hoc testing code. All of these data structure designs and decisions need to be considered as early as possible.
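For instance, a module might persist each major transformation step as it runs; the sketch below assumes a SQLite connection and hypothetical table and column names, but the same pattern applies to whatever database the system runs on:

    import sqlite3
    import pandas as pd

    def run_demand_module(conn: sqlite3.Connection, raw: pd.DataFrame) -> pd.DataFrame:
        # Step 1: cleanse raw inputs and persist the result for inspection.
        cleansed = raw.dropna(subset=["price", "units_sold"])
        cleansed.to_sql("tmp_demand_cleansed", conn, if_exists="replace", index=False)

        # Step 2: aggregate to the modeling grain, again saving the step so an
        # integration test failure can be traced without ad-hoc code.
        aggregated = cleansed.groupby("product_id", as_index=False).agg(
            avg_price=("price", "mean"),
            total_units=("units_sold", "sum"),
        )
        aggregated.to_sql("tmp_demand_aggregated", conn, if_exists="replace", index=False)
        return aggregated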

Troubleshooting

During the User Acceptance Testing and Rollout/Support phases, the troubleshooting process is heavily influenced by the quality of all of the tests mentioned above. During this stage, there should be fewer test cases that need bug fixes, and the majority of test cases will lead to setting up additional parameters that can handle edge cases.

Best practices for troubleshooting and user adoption include saving all test cases, along with the actions taken, for future reference, and ensuring that design documentation is comprehensive and up to date. Comprehensive documentation is extremely helpful not only for understanding the design context of a system failure, but also for providing guidance on how to adjust system parameters to prevent the failure from recurring in the short run while a long-term solution is developed. A comprehensive document includes, but is not limited to, sections such as a module overview, input and output tables with key variables, parameters with recommended default values and calibration instructions, and detailed mathematical behavior.

In conclusion, the key to a successful ERMPS build is a comprehensive end-to-end testing and monitoring procedure at each stage of the build process, ensuring each component of the system is correctly designed, implemented, and integrated. Efficient testing procedures increase user adoption and the long-term sustainability of the system, and building them requires a robust data platform that maximizes the consistency, replicability, and integration of component outputs across the entire system. To achieve this, keep in mind the following guidelines:

  • Keep key data elements used for intermediate calculations in output table(s): A failure detected in a downstream model can have its root cause in an upstream model, so it's vital to keep relevant information in each module's output tables to make it easier to trace the issue back to its source.
  • Save intermediate results in temporary table(s): Some modules can be complicated, involving numerous data preparation steps before the core analytical model is executed. Saving the output of key transformation steps significantly reduces triaging time when bugs are identified and need to be reproduced.
  • Create a standard naming convention throughout the system: Having meaningful and consistent column and table names provides a self-explanatory data model, which makes it easier to troubleshoot and debug.
  • Make the test code reusable: During the entire process of building the system, there will be many sanity checks, model output checks, and business reasonability checks. Establishing these procedures as regular and repeatable system health checks (see the sketch after this list) is key to a successful rollout and support cycle and ensures ERMPS scalability over time.
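As a sketch of what such a reusable health-check suite could look like (the registry pattern, column names, and individual checks are illustrative assumptions, not a prescribed design):

    import pandas as pd

    HEALTH_CHECKS = []

    def health_check(fn):
        """Register a check so the whole suite can be rerun after each refresh."""
        HEALTH_CHECKS.append(fn)
        return fn

    @health_check
    def prices_are_positive(df: pd.DataFrame) -> bool:
        return df["recommended_price"].gt(0).all()

    @health_check
    def discount_factor_in_unit_interval(df: pd.DataFrame) -> bool:
        return df["discount_factor"].between(0, 1).all()

    def run_health_checks(df: pd.DataFrame) -> None:
        for check in HEALTH_CHECKS:
            print(f"{check.__name__}: {'PASS' if check(df) else 'FAIL'}")

Registering each check once and rerunning the full suite on every refresh keeps the sanity checks from drifting out of sync with the system they guard.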

Many thanks to Aaron Lu, Senior Consultant, Platform & Managed Analytics, for co-authoring this thought leadership piece with me.