Conducting end-to-end CTI tests can be daunting, even to the most experienced professional. Yet if a few simple rules are followed, the level of effort and complexity in running a successful test can be reduced dramatically. Specifically, organizations should understand:
1. Who’s on First?
Define the metrics! Take "screen pop": in your organization, does screen pop mean the call data arriving at the desktop, or the customer data pulled from the back end after the call data arrives? In our load testing work, we've seen this ambiguity between the performance test team and the UAT test group delay an entire deployment by six weeks. The call data was delivered in 200 milliseconds, but due to an inefficient query to the back-end system, the customer's last five interactions from the CRM system were not available for up to 17 seconds. Make sure everyone is on the same page regarding metrics.
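To make the two definitions concrete, here is a minimal sketch of how a test harness might time them separately. The three callables (`place_call`, `wait_for_cti_event`, `fetch_crm_history`) are hypothetical placeholders for your own tool's hooks, not part of any vendor SDK:

```python
import time

def measure_screen_pop(place_call, wait_for_cti_event, fetch_crm_history):
    """Time the two stages often conflated as 'screen pop'.

    All three arguments are illustrative hooks into a test harness:
    place_call starts the call, wait_for_cti_event blocks until the
    call data reaches the desktop, and fetch_crm_history blocks until
    the back-end CRM query for recent interactions completes.
    """
    start = time.monotonic()
    call = place_call()

    wait_for_cti_event(call)                 # definition 1: call data at the desktop
    cti_arrival = time.monotonic() - start

    fetch_crm_history(call)                  # definition 2: CRM data available
    crm_ready = time.monotonic() - start

    return {"cti_arrival_s": cti_arrival, "crm_ready_s": crm_ready}
```

Reporting both numbers side by side, rather than a single "screen pop" figure, is what keeps the performance team and the UAT team arguing about the same thing.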
2. What’s on Second?
What is being tested? Are you doing a CTI test using a virtual agent simulator leveraging the vendor’s CTI SDK? Or are you testing through a CRM platform which has a custom communication layer back to the CTI SDK?
The correct answer is both, and here's why: you'll have clear evidence of where the performance bottleneck is, and you'll know who to hold accountable.
3. I Don’t Know is on Third
Default routing, bull's-eye routing, IVR exit-point mapping: you'll need the transitional states and exit points mapped to the customer experience journeys, along with the test data used to drive the test cases. You'll also need the IVR team, the routing team, and in some cases the security team to be aware of the automation platform's capabilities, so they understand what pass/fail IS and how it is deterministic based on:
- Environmental conditions/ backend
- Consumer status/choice
- Agent availability
This will almost certainly require run-time logic, synchronization, and interoperability with both sides of the phone call and the CTI/CRM layer. Make sure all the stakeholders are briefed on why the data is required, how it will be consumed, and why their role is critical in the pass/fail decision on rolling out the deployment.
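The point that pass/fail is deterministic given those three conditions can be sketched as a simple mapping. The routing outcomes and field names below are hypothetical; your own journeys would supply the real table:

```python
def expected_outcome(backend_up: bool, caller_status: str, agent_available: bool) -> str:
    """Illustrative mapping from run-time conditions to the routing outcome
    a test case should assert. Outcome names are made up for this sketch."""
    if not backend_up:
        return "default_route"       # environmental condition forces fallback routing
    if caller_status == "vip" and agent_available:
        return "priority_queue"      # consumer status/choice drives the journey
    if not agent_available:
        return "callback_offer"      # no agent available, offer a callback
    return "standard_queue"

def verdict(observed: str, backend_up: bool, caller_status: str, agent_available: bool) -> bool:
    """A test case passes only if the observed route matches the expected one."""
    return observed == expected_outcome(backend_up, caller_status, agent_available)
```

Writing the expectation down as code is exactly what lets the IVR, routing, and security teams agree in advance on what a failure means.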
4. What’s Coming Tomorrow
Keep your eye on the ball. If you've read my whitepaper, "Ten Tips for Developing a Powerful End-to-End Contact Center Testing Plan," you know the hard cost-savings metrics and why the business is deploying this release. Focus on the key business objectives of the organization and map them down the customer satisfaction pyramid into the key performance metrics captured in your automated load test plan. Make sure you can translate any defect or performance bottleneck you capture into dollars and cents, best case and worst case, for the decision makers.
5. Why You Need a Great Leftfielder
The logging paradox: turn logging all the way up and degrade performance, or cross your fingers and hammer the system until it breaks, without ever knowing why? You can have your cake and eat it too. Any automation tool worth its salt can integrate with and control the environment it tests; that's rule one of a test tool. Turn logging all the way up, break it, archive the logs, turn logging all the way down, and re-run the same scenario within 45 seconds, all without human intervention. You'll have to prepare for this: because you are using automated CTI agents, automated test data, automated callers, and automated CRM, you'll have to prepare the business for the test window. You'll also have to communicate the disaster recovery, failover, and contingency re-test strategies you are planning, so they understand why you need that extended test window. If you're feeling a little like "Third base" right now (I don't know), then call Empirix today and learn how the Hammer can put the Spotlight on your end-to-end test engagement.
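The automated verbose-run/quiet-run cycle described above is simple to express. This is a sketch under the assumption that your tool exposes hooks for setting log levels, running a scenario, and archiving logs; all three callables are placeholders, not a real API:

```python
def run_with_log_cycling(set_log_level, run_scenario, archive_logs):
    """Sketch of the automated logging-paradox workflow:
    a max-verbosity run, archive the evidence, then a minimal-logging
    re-run of the same scenario with no human intervention.
    The three callables are hypothetical hooks into your test tool.
    """
    set_log_level("DEBUG")          # turn logging all the way up
    first = run_scenario()          # break it under full instrumentation
    archive_logs(first)             # preserve the verbose logs for diagnosis
    set_log_level("ERROR")          # turn logging all the way down
    second = run_scenario()         # re-run the same scenario at full speed
    return first, second
```

Comparing the two runs tells you how much of the observed slowdown was the logging itself versus the system under test.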
Erik is Empirix's test automation expert. With over 15 years in the telecommunications, data networking, and call center industries, Erik has architected, coded, executed, and managed some of the world's largest and most complex stress tests for top global service providers and enterprises. His experience includes synchronizing web, voice, email, and screen-pop testing, enabling customers to test real-world conditions before they launch new platforms.