I'm keeping a list of things to consider when you are deploying an IVR application. This is for a business with a customer service call center, looking to use an IVR server to automate the simple tasks, so agents can focus on higher-quality customer interactions. Here's what I have so far.
The goal of an IVR application is to reduce the cost of customer service transactions without sacrificing customer satisfaction. Build reports that measure how you are doing against that goal.
- Currently, you probably use switch reports to analyze where customers are going in your phone system, as well as where they finish/disconnect. Your IVR's built-in reports should provide exit reports that can match up to the switch reports.
- Beyond call routing, task and activity reports should count the tasks the IVR automates. You may have to build these entirely on your own, or your platform may include some built-in tools. Audit report accuracy as part of development.
- Keep in mind, task reports are whatever you need them to be. Don't expect custom reports to match other reports 100% unless you plan to devote time to auditing them and rolling out post-production fixes.
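One way to audit report accuracy is to reconcile the IVR's exit counts against the switch's counts for the same period. A minimal sketch, assuming both systems can export counts keyed by exit point (the exit-point names, counts, and 2% tolerance here are all illustrative, not from any particular platform):

```python
# Sketch of a report audit: compare switch exit counts with IVR exit
# counts for the same day. Exit-point names and the tolerance are
# hypothetical -- substitute whatever your switch and IVR export.

def reconcile(switch_counts, ivr_counts, tolerance=0.02):
    """Return exit points whose counts differ by more than `tolerance`
    (expressed as a fraction of the switch count)."""
    mismatches = {}
    for exit_point, switch_n in switch_counts.items():
        ivr_n = ivr_counts.get(exit_point, 0)
        if switch_n == 0:
            if ivr_n != 0:
                mismatches[exit_point] = (switch_n, ivr_n)
            continue
        if abs(switch_n - ivr_n) / switch_n > tolerance:
            mismatches[exit_point] = (switch_n, ivr_n)
    return mismatches

switch = {"transfer_to_agent": 1200, "caller_hangup": 800, "payment_done": 450}
ivr = {"transfer_to_agent": 1185, "caller_hangup": 640, "payment_done": 452}
print(reconcile(switch, ivr))  # flags caller_hangup: 800 vs 640
```

Small discrepancies are normal (the switch and IVR time-stamp events differently); the point of the tolerance is to surface only the gaps worth investigating.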
- To have a successful deployment, you should have a project manager running weekly meetings.
- There are many little tasks that need to be done. Sometimes it takes a few tries to get the technology working as expected. Sometimes tasks stall, with one group waiting on another. The PM is a big help in making sure people complete their tasks.
- On occasion you'll need to pull together a few groups to make sure all the technology is working from beginning to end.
- Contact center management will be a big help in deciding what your first application should do.
- Once the IVR application is consistently handling a percentage of the call center traffic, any application "outage" will dump that traffic back onto agents and overwhelm the contact center.
- Log errors that will need attention once you are in production (e.g., a timeout during payment processing).
- Log information that will help you analyze what happened during a call.
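The two kinds of logging above can be sketched with Python's standard logging module: informational entries to reconstruct a call, error entries for operations to act on. The call-ID format and message wording are assumptions; adapt them to your platform's call identifiers.

```python
import io
import logging

# In production this handler would write to a file or syslog; a
# StringIO keeps the sketch self-contained.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s call=%(call_id)s %(message)s"))
log = logging.getLogger("ivr")
log.setLevel(logging.INFO)
log.addHandler(handler)

call = {"call_id": "20240115-000123"}  # hypothetical call identifier

# Analysis-level detail: reconstruct what happened during the call.
log.info("entered payment menu", extra=call)
log.info("caller entered account number", extra=call)

# Production-attention error: operations must look at this.
log.error("payment gateway timeout after 30s", extra=call)

print(stream.getvalue())
```

Tagging every line with the call ID is what makes the analysis possible later: you can pull one call's complete history out of the log with a single grep.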
- One tester should triage all testing team issues.
- Don't use email to document issues. Use a shared spreadsheet (good) or an online issue tracker (better).
- For each issue, include the test case info needed to recreate it: the number called, the date and time of the test, the information entered, what was expected, what actually happened, and where in the design/spec/test case the problem is.
- Test the report features too.
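The issue fields above can be captured as one row per issue; writing them as CSV means they drop straight into a shared spreadsheet. The column names and sample values here are suggestions, not a standard:

```python
# Sketch of an issue record carrying the fields a tester should
# capture. Column names and sample values are hypothetical.
import csv
import io

FIELDS = ["tested_at", "number_called", "input_entered",
          "spec_reference", "expected", "actual"]

issues = [{
    "tested_at": "2024-01-15 14:32",
    "number_called": "555-0100",                  # hypothetical test line
    "input_entered": "account 1234, option 2",
    "spec_reference": "call flow 3.2, test case 17",
    "expected": "balance read back",
    "actual": "call disconnected",
}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(issues)
print(buf.getvalue())
```

A fixed set of columns is the real payoff: when every issue carries the same fields, the triage tester can sort, dedupe, and hand developers a reproducible case instead of an email thread.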
- Schedule a final go/no-go meeting.
- Have a rollout/rollback plan for everything. Use QA/UAT rollouts as an opportunity to build up the production deployment checklist.
- Send a communication to all concerned parties.
- Have two conference bridges during the rollout - one for the tech team that is doing the rollout and one for the high-level parties who just need the high-level progress report.
- After going live, someone will request an application change. You should have a parallel development environment for developing and testing the changes. You should also have a plan to deploy app changes to production, and to roll them back if there is an unforeseen problem.
- Roll out changes during the working day, if you can. Everyone is available and awake.
- If you are rolling out after hours, follow your deployment checklist.
- Don't roll out changes at the end of the workday or workweek.
- Provide IT with a list of log files to monitor, and when to trigger alerts.
- Use a log consolidation platform, so you can see what is happening across systems during each call.
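The monitoring list you hand IT can be expressed as explicit alert rules: a pattern to match, how many occurrences per window are tolerable, and what to do when the threshold is exceeded. A minimal sketch; the patterns, thresholds, and actions are illustrative assumptions:

```python
# Sketch of alert rules run against one window of consolidated logs.
# Patterns, thresholds, and actions are hypothetical examples.
import re

ALERT_RULES = [
    # (pattern, max occurrences tolerated per window, action)
    (re.compile(r"payment gateway timeout"), 0, "page on-call immediately"),
    (re.compile(r"ERROR"), 10, "email ops if exceeded"),
]

def check_window(log_lines):
    """Return triggered alerts for one monitoring window of log lines."""
    alerts = []
    for pattern, threshold, action in ALERT_RULES:
        hits = sum(1 for line in log_lines if pattern.search(line))
        if hits > threshold:
            alerts.append((pattern.pattern, hits, action))
    return alerts

window = [
    "2024-01-15 14:00 INFO call=1 entered main menu",
    "2024-01-15 14:01 ERROR call=2 payment gateway timeout",
]
print(check_window(window))
```

Writing the rules down this way, rather than as tribal knowledge, also gives IT something concrete to review with you before go-live: which errors page someone at 2 a.m., and which wait for the morning report.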