Why do we test? To reduce or eliminate risk. Although patching and bug fixing are common terms in the world of software, identifying and eliminating “bugs” is a narrow view of testing.
We test to eliminate unexpected or unwanted events that negatively affect system performance or the user experience. These events can be categorized simply: those with an unacceptable impact, and those that occur with an unacceptable frequency.
The failure of customer-facing systems can be embarrassing – we’ve all seen the headlines detailing website crashes during periods of unexpectedly high demand, or customer service applications stretched to breaking point following extreme weather conditions. However, the connected nature of most organisations means that even the failure of an internal system is likely to affect the customer experience.
Remember, testing is not a one-off activity; it’s a process. It takes place throughout the software development lifecycle and is particularly important when systems are being upgraded or business processes are evolving. At its heart, change involves risk, and testing is all about reducing that risk.
The testing landscape is quite diverse. It’s not simply a case of testing or not testing. Different types of testing include user acceptance testing, functional testing, regression testing and load testing.
When dealing with ERP and other mission-critical software applications, a lot of time and effort is spent on functional testing (quite right too) to ensure that systems perform as expected – that a given input results in a given output.
However, functional testing alone is not enough to test the resilience and predictability of your system. Running a sequence of simulated functional tests is not an accurate reflection of the real-world demands that are likely to be placed on your systems.
During exceptionally busy periods, systems do not always perform as expected. That’s why stress testing or load testing your JD Edwards E1 system is so important.
Systems should never be designed with the lowest levels of utility in mind; they should always be built to perform during periods of sustained, heavy use. After all, it is during periods of highest demand that any system failure has the greatest impact.
Capacity-related systems failures are less common than functional errors, but you shouldn’t fall into the trap of thinking less likely means lower risk. Once a system is in production, any downtime could have a significant impact on everything from customer satisfaction to production timescales and revenue generation.
The larger your organization, the more connected your systems, and the more varied your workflows, the more important load testing becomes. In the real world, your systems may need to cope with hundreds, thousands, or even millions of concurrent workflows (in the case of large-scale consumer-facing websites or e-commerce platforms).
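The effect of concurrency can be sketched in a few lines. The snippet below is a minimal illustration, not a JD Edwards-specific tool: the `submit_order` workflow, its timings, and its failure rate are all hypothetical, standing in for a real transaction against the system under test.

```python
import concurrent.futures
import random
import time

def submit_order(user_id: int) -> bool:
    """Hypothetical workflow: simulate a transaction that takes a
    variable amount of time and occasionally fails (assumed ~2% rate)."""
    time.sleep(random.uniform(0.01, 0.05))  # simulated processing time
    return random.random() > 0.02           # True = workflow succeeded

def run_load(concurrent_users: int) -> float:
    """Fire one workflow per simulated user in parallel and
    return the overall success rate."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(submit_order, range(concurrent_users)))
    return sum(results) / len(results)

print(f"success rate: {run_load(concurrent_users=100):.1%}")
```

In a real load test the simulated transaction would be replaced by a scripted interaction with the application itself, but the shape is the same: many workflows in flight at once, with success and failure counted per run.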
Load testing in a JD Edwards E1 environment typically takes place towards the end of any development cycle. Resilience can only be established once functionality has been defined.
Load testing allows you to measure more than just the continuity of a desired outcome; it allows you to assess any latency or hang-time that may result from higher volumes of use. Again, this goes back to the user experience. Availability is one thing; performance is another.
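Latency under load is usually summarised with percentiles rather than averages, because a small tail of very slow responses can hide behind a healthy-looking mean. The sketch below uses simulated response times (the distribution and the 5% slow-outlier rate are assumptions for illustration) to show how p50/p95/p99 figures are derived:

```python
import random
import statistics

def measure_latencies(samples: int) -> list[float]:
    """Simulate response times in milliseconds. Most requests cluster
    around ~120 ms, but an assumed 5% tail is much slower, as often
    happens when a system nears capacity."""
    latencies = []
    for _ in range(samples):
        base = random.gauss(120, 20)        # typical response time
        if random.random() < 0.05:          # slow outliers under load
            base += random.uniform(300, 800)
        latencies.append(max(base, 1.0))
    return latencies

latencies = measure_latencies(1000)
q = statistics.quantiles(latencies, n=100)  # 99 cut points -> percentiles
p50, p95, p99 = q[49], q[94], q[98]
print(f"p50={p50:.0f}ms  p95={p95:.0f}ms  p99={p99:.0f}ms")
```

The p99 figure is the one most closely tied to user experience during peak demand: it tells you what your slowest one-in-a-hundred interactions look like, which an average alone never reveals.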
Load testing also needs to account for the fact that not all users are the same. The broader the spectrum of users, the greater the chance of variation in “input behaviour”. Straightforward functional testing often assumes all interactions are identical. Load testing a real-world scenario should involve a degree of individuality that more accurately reflects user behaviour.
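One common way to build that individuality into a simulation is to draw each virtual user's actions from a weighted mix, with randomised "think time" between interactions. The action names and weights below are purely hypothetical; in practice they would come from analysis of real production usage.

```python
import random

# Hypothetical mix of user actions and how often each occurs;
# real weights would be derived from production usage data.
ACTION_WEIGHTS = {
    "browse_catalogue": 0.50,
    "update_record":    0.30,
    "run_report":       0.15,
    "bulk_import":      0.05,
}

def simulated_session(steps: int, seed: int) -> list[tuple[str, float]]:
    """Generate one virtual user's session: a sequence of weighted
    actions, each paired with a randomised think time in seconds."""
    rng = random.Random(seed)   # per-user seed keeps each run repeatable
    actions = list(ACTION_WEIGHTS)
    weights = list(ACTION_WEIGHTS.values())
    return [
        (rng.choices(actions, weights)[0], rng.uniform(0.5, 5.0))
        for _ in range(steps)
    ]

# Two users produce different, but individually repeatable, scripts.
print(simulated_session(steps=3, seed=1))
print(simulated_session(steps=3, seed=2))
```

Seeding each virtual user separately gives you both properties a load test needs: variation across users, and exact repeatability when you rerun the test to compare results.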
If conducted manually, load testing can be a burden on both time and financial resources. That’s why it lends itself so well to test automation. However, in order to get the most value out of automated load testing, analysts need to be able to simulate and repeat a wide range of interactions.
When selecting a test automation product, it makes sense to choose one that is designed with your specific software in mind. A generic solution is likely to require either extensive customization or a compromise on quality.
As with any testing solution, it’s important to be able to monitor performance in real time and to extract and analyze test results down to a forensic level of detail.