Over the past few months, I’ve talked about the importance of end-to-end communications network testing. Today I’d like to bring that idea to its logical conclusion.
Below, I’ve consolidated some real-life use cases into a single connected view of end-to-end testing. Through these examples, you can see the benefits of testing session border controllers (SBCs), voice applications, IVR/voice portals, CTI data and routing, and agent desktop integrations.
SBC Testing
In one recent engagement, Empirix helped a company whose callers were complaining about long connection times and poor voice quality during a storm. The company tested its SBC to better understand the origin of the issue and discovered that, under a registration flood of 10,000 concurrent SIP registrations, SIP call setup time doubled and jitter on the SBC increased.
At this point, the company contacted their SBC provider, who informed them that their issue would go away if they upgraded to the latest firmware. They did – issue gone!
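A registration flood like the one in that test can be approximated with a small driver that builds RFC 3261 REGISTER requests and fires them concurrently while timing each one. This is a minimal sketch, not Empirix tooling; the `send` callable is a hypothetical stand-in for whatever transport (typically a UDP socket) carries the request and waits for the 200 OK, so the driver itself stays testable.

```python
import time
import uuid
from concurrent.futures import ThreadPoolExecutor

def build_register(user: str, domain: str, cseq: int = 1) -> str:
    """Construct a minimal SIP REGISTER request with the RFC 3261 mandatory headers."""
    call_id = uuid.uuid4().hex
    return (
        f"REGISTER sip:{domain} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP client.{domain};branch=z9hG4bK{call_id[:8]}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:{user}@{domain}>;tag={call_id[:6]}\r\n"
        f"To: <sip:{user}@{domain}>\r\n"
        f"Call-ID: {call_id}@client.{domain}\r\n"
        f"CSeq: {cseq} REGISTER\r\n"
        f"Contact: <sip:{user}@client.{domain}>\r\n"
        f"Expires: 3600\r\n"
        f"Content-Length: 0\r\n\r\n"
    )

def registration_flood(send, users, workers=50):
    """Fire one REGISTER per user concurrently; return per-request latencies in seconds."""
    def one(user):
        msg = build_register(user, "example.com")
        t0 = time.perf_counter()
        send(msg)  # transport is injected: real UDP in the lab, a stub in tests
        return time.perf_counter() - t0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(one, users))
```

Running the same flood before and after a firmware upgrade, and comparing the latency distributions, is exactly the kind of evidence that settles a vendor conversation quickly.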
Read: Testing SBCs: Why Do It?
Voice Application Testing
Here’s an example where voice application testing really proved beneficial.
A large financial services company was seeing an increase in abandoned calls in the IP PBX, even when agents were available to take calls. Empirix performed a load test that was designed to push the IP PBX to the peak load of 10,500 concurrent calls with 7,000 agents.
During the testing, once we pushed above 7,000 concurrent calls, abandon rates increased significantly. At the same time, the IP PBX was reporting port network blockages. By listening to the recorded calls, we determined that callers were hearing more than 30 seconds of silence after the greeting message, at which point the test callers hung up.
The root cause lay in how the customer had designed the application’s call flow: additional VAL boards were needed to support the required messages, so callers would not have to sit through such a prolonged silence.
IVR/Voice Portal Testing
In this example, IVR/voice portal testing proved to be the way to go. An airline found that callers were complaining about slow response times in its flight status voice self-service application. Yet in the airline’s own manual tests, response times seemed very quick. The customer decided to run a performance test to better understand how the application performed at different call volumes.
White Paper: The Future of IVR Customer Service Assurance
With the help of Empirix, the organization determined that with 200 or fewer calls in the system, performance was very good: about 0.5 seconds per response. However, when the concurrent call load was raised to 350, response time increased to almost 7 seconds. And the customer’s IVR was meant to support 700 concurrent calls!
While running the test, the customer was able to monitor its systems and determine that the cause of the performance degradation was a poorly designed database query.
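A ramp test like the airline’s can be sketched as a simple driver that replays the same scripted call at increasing concurrency levels and records how response time degrades at each step. This is an illustration only, not Empirix tooling; `place_call` is a hypothetical callable that executes one self-service interaction and returns its response time in seconds.

```python
import statistics
from concurrent.futures import ThreadPoolExecutor

def ramp_test(place_call, loads=(100, 200, 350, 500, 700)):
    """Replay the same scripted call at each concurrency level in `loads`.

    Returns {load: (mean_response_s, max_response_s)} so you can see
    exactly where the response-time curve bends, as it did here between
    200 and 350 concurrent calls.
    """
    results = {}
    for n in loads:
        with ThreadPoolExecutor(max_workers=n) as pool:
            times = list(pool.map(lambda _: place_call(), range(n)))
        results[n] = (statistics.mean(times), max(times))
    return results
```

While the ramp runs, the team watches the back end (database, application servers) so the knee in the curve can be matched to a specific resource, which is how the poorly designed query surfaced here.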
CTI Data and Routing Testing
A large retailer had agents complaining that they occasionally were not getting customer screen pops. To help identify the failure points, the company opted to performance test its home-grown IVR and its integration with its IP PBX.
Using an Empirix virtual agent simulator (VAS) to capture the screen pop data, the team determined that at 330 concurrent calls, the VAS stopped receiving data. Further investigation into the customer’s IVR and IP PBX revealed that the VXML servers driving the IVR were not passing the data to the IP PBX.
Once the customer added an additional VXML server, the test was re-executed. At that point, the team was able to validate that the system worked as expected, with up to 600 concurrent calls.
Agent Desktop Integration Testing
A retailer was experiencing issues with its agents’ CRM and CTI toolbar integration. During testing, screen pop performance was measured: with 250 concurrent calls, response time was under two seconds. However, when the load was increased to 310 concurrent calls, screen pop response time jumped to 38 seconds. Worse, while the agent waited for the screen pop, the entire custom CRM application froze, making it impossible for the agent to assist the customer in any way, even if the customer had simply asked for an account number.
After testing the agent desktop integration, the cause was found to be a poorly written set of database queries that logged the call and produced the screen pop.
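The failure mode above, a call-logging query stalling the screen pop, is easier to catch when each step of the CTI event handler is timed against a budget. A minimal sketch, with hypothetical `lookup_customer` and `log_call` callables standing in for the real CRM and logging integrations:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, budget_s, slow_log):
    """Time a block of work; record it in slow_log if it exceeds its budget."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        dt = time.perf_counter() - t0
        if dt > budget_s:
            slow_log.append((label, round(dt, 3)))

def handle_cti_event(lookup_customer, log_call, slow_log):
    """Serve the screen pop first, then log the call, timing each step separately.

    Keeping the two steps distinct (and timed) makes it obvious in test runs
    whether the lookup or the logging query is the one blowing the budget.
    """
    with timed("screen_pop_lookup", 2.0, slow_log):
        record = lookup_customer()
    with timed("call_logging", 2.0, slow_log):
        log_call(record)
    return record
```

During a load test, the `slow_log` entries pinpoint which query degrades first as concurrency rises, without an agent ever having to stare at a frozen CRM screen.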
Testing 1, 2, 3
As you can see, today’s communications environment is very complex. There are so many places where things can break down. If you go live without testing first, you’re asking for trouble.
Do you test new technology before adding it to your contact center? Want to know if others do too? Check out the Infographic and survey results in Customer Service: Who Cares?
Taming CX Disruption with Automated, Collaborative Testing