There is no denying that the rollout of the new Health Insurance Marketplace in the US has been a debacle. After all, the person ultimately responsible for the project testified before Congress and said, “Hold me accountable for the debacle; I’m responsible.”
Now the finger pointing has begun.
Who’s to Blame?
This sort of thing happens all the time. An organization deploys a new service and it does not perform as expected. The next two steps happen in parallel and at various levels of the organization: scramble to fix the problems and figure out who is going to take the blame.
Usually the blame game happens behind closed doors, with the IT staff answering to the CIO, and the CIO answering to the rest of the management team. In this case, it’s happening at congressional hearings and it’s painfully public for everyone involved.
It’s still too early to tell exactly what happened, but it’s already clear that the system was not sufficiently tested before going live. There are also indications that people knew, based on preliminary testing, that large-scale problems were likely.
This raises some interesting questions.
What’s the Right Call?
Hindsight is 20/20, but it looks like someone decided to take the risk and go live despite questionable test results. Apparently, the attitude was, “Let the users test the system for us and we’ll fix problems along the way.”
Clearly, the right decision would have been to delay the project and conduct end-to-end testing. No brainer, right? Not exactly.
Delaying a project, even if it means a much smoother rollout, has consequences. To avoid what we’re seeing now, you need to rewind this one even further and develop a solid plan from the get-go, one that includes end-to-end testing and monitoring.
What About the Side Effects?
Lots of people who designed and deployed the Health Insurance Marketplace Contact Centers were probably breathing a collective sigh of relief at one point, knowing that they were not making news headlines. I’m guessing that tranquility ended when they heard the president say, “And in the meantime, you can bypass the website and apply by phone or in person.”
That means they’re going to have to deal with a large volume of phone calls they weren’t initially expecting. Are their systems ready to handle that kind of traffic? We’ll see.
So let me ask you: what actions do you take to ensure your contact center communications system is ready to go live? And what are you doing to make sure everything keeps running smoothly once it’s been deployed? Crossing your fingers? Or something more?
Looking for more information about predeployment testing? Check out Ten Tips for Developing a Powerful End-to-End Contact Center Testing Plan.