Automating the Workplace: A Podcast with IPsoft’s Allan Anderson
In this episode of the Designing Enterprise Platforms Podcast at Early Adopter Research (EAR), EAR’s Dan Woods explored the topic of robotic process automation (RPA). Woods spoke with Allan Anderson, director of enterprise solutions at IPsoft, one of the leading vendors in a space that includes RPA, conversational AI, and low code. RPA has become popular as companies have tried to expand automation throughout the front office. Many solutions have focused on the back office, but RPA is explicitly focused on extending automation to the people who are usually using the software, and many different systems can be tied together using these methods. Their conversation covered:
* 2:30 – The core dogmas behind RPA
* 7:30 – The range of IPsoft’s solutions using automation
* 23:00 – How IPsoft’s Amelia can help companies
* 31:30 – Technology leverage created by automation
Woods: First I want to talk about what I call the core dogmas. As part of my research process, I think the best way to understand a space is to identify the core beliefs that space has about the market and then to examine how each company’s beliefs differ. The core dogmas for robotic process automation, or RPA, are essentially these: there’s a huge opportunity for automation in the front office; automation is going to have to be intermediated by different techniques, not just APIs but also the ability to control software systems through screen scraping and other methods, so that you can get access to systems without a formal API; business users have to be heavily involved in conceiving of and implementing these automations; and these automations can eventually become companions or digital workers in their own right. Do you think that captures what most RPA vendors believe and are trying to do?
Anderson: Yeah, I think it does a good job of that. Let’s put the core RPA vendors in one area. They’re trying to do things that are going to be fairly easy for the business to get done, without needing engineers with programming skills, by using the existing applications, so there’s relatively little disruption to the processes that people have today. That’s a big plus for RPA. It actually is a relatively easy tool to get started with. But that does come with some negatives as well, because whenever you do UI work (and the tools have gotten an awful lot better from what started out as what we’d call desktop automation), sometimes things break. And they break because there are changes in the environment: new applications come out where fields are renamed or the function is completely different, so it becomes a little harder to maintain and you need to go back in and redo the work. Sometimes RPAs are like a band-aid, but you do need a band-aid from time to time, and sometimes after a while you choose to rip off that band-aid and do it in a better way. APIs, however, are a much more stable, backwards-compatible way of doing the integrations and the process automation inside these applications.
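Anderson’s contrast between brittle UI-driven bots and stable APIs can be sketched in a few lines of Python. Everything here (the field labels, the screen dictionaries, the endpoint) is a hypothetical illustration, not any real RPA tool or system:

```python
# Sketch: why selector-based RPA breaks when a UI changes, while an
# API call against a versioned contract does not. All names invented.

def rpa_fill_form(screen: dict, customer: dict) -> dict:
    """UI-style automation: depends on exact field labels on the screen.
    If the app renames 'Customer Name' to 'Client Name', this breaks."""
    filled = {}
    for label, key in [("Customer Name", "name"), ("Policy No.", "policy")]:
        if label not in screen:
            raise KeyError(f"UI changed: field '{label}' not found")
        filled[label] = customer[key]
    return filled

def api_update(customer: dict) -> dict:
    """API-style automation: a stable contract, unaffected by UI redesigns."""
    payload = {"name": customer["name"], "policy": customer["policy"]}
    return {"endpoint": "/v1/customers", "body": payload}  # hypothetical endpoint

old_ui = {"Customer Name": "", "Policy No.": ""}
new_ui = {"Client Name": "", "Policy No.": ""}   # a field got renamed
cust = {"name": "Ada", "policy": "P-42"}

print(rpa_fill_form(old_ui, cust))   # works against the original screen
print(api_update(cust))              # works regardless of the UI
try:
    rpa_fill_form(new_ui, cust)      # breaks after the UI change
except KeyError as e:
    print("bot broke:", e)
```

The renamed field is exactly the maintenance burden Anderson describes: the bot has to be reworked, while the API call keeps running.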
The idea is that, for IPsoft especially, it’s great to be able to do the desktop automations, but if you use mechanisms that reach software through the UI, that access can be brittle. When the UI changes, you have to change your access method. But IPsoft is arguing that it’s also important for those automations to be complex enough to take a step into the back office, so you’re not just doing desktop automation but more complicated, API-driven automations that go beyond a front office problem. IPsoft also takes a step into the low code world, which a lot of the time is about creating applications that glue everything together and providing a significant amount of automation. How do you distinguish between desktop-only RPA automation and something more robust that enters the back office or low code space?
There’s a huge difference between the two approaches. What you’d call the front office or the more RPA style model is where you generally create bots that do what humans are doing today, but they just do it faster and cheaper. When you go into the more complete end to end automation where you involve lots of back end systems, and potentially APIs, and where you even involve potentially replacing the human that is actually having a dialogue, that’s where we get into conversational interfaces. You can run an entire end to end process, and when you do that you also have the opportunity to say, “How should we actually do this?” You have an ability to rethink and re-imagine how things actually should be done when you can do them with pure technology, through integrations, robotic process automation, command line interfaces, conversational interfaces. All of those technologies now start working together to redefine the process.
What you’re saying is that if you create a more substantial automation, you are then able to automate intelligent workers?
Correct. We’ve been doing that for almost 15 years now with primarily IT things. But if you start looking at the business side of this, our conversational intelligence, which we call Amelia, has actually been selling car insurance for over two years.
You now believe that the next step is to take that automation you’ve delivered and to have an intelligent layer that can interact not just internally with people, but also with the customer. And that’s your Amelia conversational AI technology.
Yes. So you can imagine, in this case I mentioned before, selling car insurance, which is probably a 10 to 15 minute process where Amelia has a conversation, can context switch, can explain things, and then eventually is able to sell the car insurance by interacting with all the back end systems. Usually RPAs will go into a desktop application or a web application and start filling it out. They’re faster than humans, but they still take seconds: 5 seconds, 10 seconds, 25 seconds. When you are dealing with a human on the other end and you’re saying, let me enter all this information I’m getting from you so that I can go in and get you a quote for this car insurance, or I can add your wife to the policy, Amelia can’t say, “Oh, it’s going to take me 25 seconds to do that.” That’s where APIs come in. Most of these enterprise systems have full API access because you’ve already built that from a mobile or web perspective. You have to build those APIs among your core applications. We really want to look at the entire journey that people are going through and automate it end to end.
You’re saying that the advantage of having this larger, more robust automation is that you can do much more complicated end to end processes, have multiple of these things working together, or have layers of them, and then expose that in a conversational AI that has enough semantics and capabilities underneath it to do a complex thing like sell insurance in a completely un-intermediated way. In order to do this, you do have to have developers developing the automation. This is not going to be a completely do-it-yourself, self-service thing.
You do need some developers. You’re going to likely integrate Amelia into back end systems, whether that is your CRM systems or your systems of record like SAP, where she needs to act just like a user except she does it through a programmatic interface. But much of the core teaching of Amelia is actually done by business SMEs, who teach her the process and also build in the flexibility. Amelia’s not one of your run-of-the-mill chat bots that uses a decision tree to ask you about things. It becomes a lot more complex when you actually have these 10 to 15 minute conversations with individuals, because they ask for certain things in the middle of a sentence, they change their minds. Those changes are fairly complex to understand. So that’s what we have spent our time developing.
What I still don’t understand is this: you create the digital finite state machines in the IPcenter layer. That layer does revolve around training as well. It observes what people do, so it’s not a completely programmed method. It can observe actions and then suggest automations. Then the other thing that’s interesting about IPsoft is that when an exception is declared, it can observe what the person resolving the exception does and start to see if it can find patterns in those resolutions, so it can expand the automations, either by suggesting that a programmer jump in or by suggesting that the automation be improved in a certain way. The thing that I don’t understand yet about Amelia is that you have a richer semantic model that is also glued to the capabilities of the digital finite state machines. Amelia understands what those capabilities are so that it can understand, when something changes, how to go back. I still don’t understand how that richer semantic model gets developed and how Amelia then puts it to use to do these more advanced conversational AI-type interactions.
This is one of the key things about Amelia. Amelia’s not a simple one-trick pony from a chat bot perspective. She has multiple parts to her brain. The first thing that any virtual agent or chat bot will do is try to decide the intent, what people want to do. Now, very importantly, there can be multiple intents. Having that understanding and logic around multiple levels of intent is very, very important. The second layer is when Amelia goes and integrates into the business process. She will follow a slightly more rigid but still flexible way of handling the situation. She can, by the way, go into underlying systems and potentially see what type of information she needs to request from people. Amelia doesn’t follow a straightforward script to get through this. She has a goal, obviously, but she has many ways to fulfill that goal; in the case of her having to execute certain things in back end systems, there’s a certain amount of data that she needs to collect, and once she’s collected that information, she’ll be able to do it. The other key element of Amelia is that she can control what you see on the screen. So if you’re chatting with her over the web or on a mobile platform, she can start showing you things, and that allows you to make decisions based on what you see on the screen, which also furthers the dialogue.
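The first layer Anderson describes, detecting possibly multiple intents in one utterance, can be illustrated with a toy keyword classifier. This is a deliberate simplification: Amelia’s actual NLU is not public, and the intent names and cue phrases below are invented:

```python
# Toy multi-intent detection: one utterance can trigger several intents.
# Keyword matching stands in for real NLU; intent names are invented.

INTENTS = {
    "block_card":     {"block", "cancel my card", "lost my card"},
    "check_balance":  {"balance", "account balance"},
    "reset_password": {"password", "reset"},
}

def detect_intents(utterance: str) -> list[str]:
    """Return every intent whose cue phrases appear in the utterance."""
    text = utterance.lower()
    return [intent for intent, cues in INTENTS.items()
            if any(cue in text for cue in cues)]

# One sentence, two intents: the case Anderson calls out as important.
print(detect_intents("Please block my card and check my account balance"))
```

A single-intent chat bot would pick one of the two requests and drop the other; keeping the full list is what lets the dialogue layer queue both.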
Let’s say that I’ve done a bunch of digital finite state machines, and that they can help me implement an auto insurance policy, and it starts out where a person who’s a call center rep uses this technology and can automate a lot of things that they would otherwise have to do in 20 or 30 steps. Okay, great. And I do that for three or four different steps of the process. Now I have this landscape of automation that could be orchestrated by something like Amelia. How do you then get Amelia to know what each part of those digital workers can do and how to direct the conversation?
What we do is go in and interview people, we look at transcripts, we use a lot of existing materials that people have, flow diagrams and other things to initially teach Amelia how to follow the process. We usually call that a happy path. So in the case of an IT problem, for instance, we would expect people to come in and say, “I have forgotten my password to this system, I need to get it reset.”
How do you actually teach it?
The underlying business process is semantically driven, so it teaches Amelia how to work through the steps, but there are steps. Amelia can jump around in those steps, so she doesn’t need to follow them precisely. If she’s already collected certain information from you, let’s say you wanted to get your password reset and you also mentioned the application you need it reset for, then now that she knows it’s SAP, she’s not going to ask for that. She may validate it with you, but you don’t need to explicitly teach her how to do those things. If you didn’t give that information, obviously she would ask that question. Amelia has the ability to move more fluidly through the conversation. She can also predict where you’re going to go based on machine learning classifiers that are built into the process. She may determine from your utterances that it’s very clear what you want to do, and she will skip an awful lot of unnecessary steps and jump straight ahead to resolving your issue.
There’s a business process network that provides the underlying semantic understanding, and then there’s a contextual understanding that’s inside Amelia that lays on top of that, that can then navigate through that business process network.
Yes, and it also allows her to go in and do context switching. You may have started down a path and then either that was the wrong path or you change your mind about certain things. So you have the ability to also go and have Amelia switch over to a completely different process. So it may be that I have lost my credit card and I call into Amelia and I say, “Could you please block my credit card?” And then I suddenly realize, “By the way, could you go and check my wife’s account balance because now we’re going to use her card instead.” So before I finish the other thing, I now want to go over and check another account, which is a completely different process. And then once I’m finished with that, Amelia will be able to go and say, “Do you still want to cancel your card? Yes, okay, let’s jump back to that process.”
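Context switching of this kind, suspending the block-my-card process, running the balance check, then offering to resume, can be modeled as a stack of suspended processes. This is a sketch under that assumption, not a description of Amelia’s actual architecture, and the process names are invented:

```python
# Context-switch sketch: a stack of suspended dialogue processes,
# mirroring the lost-card / check-balance example. Names invented.

class Dialogue:
    def __init__(self):
        self.stack = []  # suspended processes, most recent on top

    def start(self, process: str) -> str:
        self.stack.append(process)
        return f"Starting: {process}"

    def switch(self, process: str) -> str:
        """Current process stays on the stack, suspended under the new one."""
        return f"Okay, pausing '{self.stack[-1]}'. " + self.start(process)

    def finish(self) -> str:
        """Complete the top process and offer to resume the one beneath it."""
        done = self.stack.pop()
        if self.stack:
            return f"Finished '{done}'. Do you still want to '{self.stack[-1]}'?"
        return f"Finished '{done}'. Anything else?"

d = Dialogue()
print(d.start("block credit card"))
print(d.switch("check spouse's balance"))
print(d.finish())   # offers to resume the suspended card-blocking process
print(d.finish())
```

Popping the stack is what produces the “Do you still want to cancel your card?” follow-up in Anderson’s example: the original goal is remembered, not restarted.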
There’s a conversational intelligence on top of the business process network and that’s the way it all comes together?
Correct. Inside of that are other things around clarifying questions that Amelia will use dynamically as part of this, so that if people get stuck at certain places, she’ll be able to use other methods to help them go past these various things. It doesn’t all have to be programmed because human dialogue is not programmed. I do want to mention one thing that I think is important to understand, is that what most of our customers are focusing on might initially be to do what they do today faster, better and cheaper. But think about the opportunities that you could have by going into new markets, new services that you deliver, a new product that you deliver, if you can completely change the way that you support and service these products. It’s about taking advantage of these new technologies to do something that you couldn’t have done before in a price competitive fashion.
The biggest benefit comes from refactoring and doing digital transformation after you have the core automation in place.
Exactly. Many of our customers may start out doing some of the simpler things, like working internally in IT. But once they get to know the technology and see what it can do, light bulbs go off in the company in areas where you never thought this technology could actually be used, and that’s when people start thinking a little bit out of the box and doing combinatorial innovation.
By combinatorial innovation, what do you mean?
Think about how you add something like Amelia to lots of other things: mobile interfaces, or, if you’re a bank, blockchain and financial applications that you may use together with a conversational intelligence, so you’re combining multiple things. A good example of that is Uber. Uber combines GPS, rating systems, and automatic payment systems with mobile interfaces and lots of great algorithms in the back end to optimize the journey, and it brings together a whole marketplace of drivers and people wanting to get driven around. All of those technologies are components that people put together in new ways so that they can go to market in a different way.
Now let’s talk about the digital workplace summit. What did you guys do last week in your digital workplace summit?
This was the third DWS, and the theme of the event was Realize. It was an amazing event. We had over a thousand people registered for it. The focus was really not on us talking about our technology; it was about our customers showcasing what they were doing. And not only did several of the customers, like a Telefónica or a BlackRock, present onstage about what they were doing, they actually brought the technology along with them, and we set up individual booths like a mini tradeshow where these customers and partners of ours showcased what they were doing with the technologies, the process they had gone through, the successes and the failures.
What were some of the problems that people ran into once they actually got the foundation of IPsoft and 1Desk in place?
One of the key challenges people have up front is choosing the right use cases. What is it that they want Amelia or 1Desk to actually do for them? A lot of people early on started in IT, so Amelia has probably reset more passwords than most large service desks put together. But once you get into the business and you start looking at what use cases you really want her to handle, there have to be certain characteristics to actually achieve the ROI. It has to do with volume, sometimes complexity, sometimes business impact. Sometimes we sit down with our customers and they just have something in mind that they want to do. Sometimes that is great and awesome, and sometimes it’s just not the best thing.
You’re saying that you understand the fit of a problem and a problem area to the kind of automation you provide, and when you have a great fit, you can achieve great victories, and when the fit isn’t good, the victories are harder to come by. One of the characteristics that would fit what you said is volume, meaning like how many times is somebody doing this? If somebody’s doing this once a year, that’s not a good candidate for this type of automation. The other is business impact, and that is are there cost inefficiencies or revenue opportunities available by doing this process better?
Obviously there’s the ROI and the volume. The third is really that it has to be a use case that Amelia can reasonably be expected to fulfill. If you choose something that is complex even for humans to do, it’s sometimes very hard for us to gather enough information about the process and build it in a way that is actually satisfactory. I’ll give an example, related more to that first point, from one of our customers. Amelia was being implemented as a service desk agent: resetting passwords, setting up printers, fixing Outlook issues, all those things. One of the things they wanted her to do as well was to handle cases where people had lost their laptop: manage the process of getting certain things set up in security, figuring out what has to happen, maybe even ordering a new laptop for them. The problem was that it was such a low-occurrence event that it was hard to get enough benefit out of it. And the complexity of how people lost it, what they needed to do, whether it was stolen or just dropped and thrown out, all of this proved really difficult to teach Amelia about.
What was the best refactoring story?
Electronic Arts is one of our customers, and last year at DWS they talked about what they had done. They had some issues with fraud: people calling in and trying to steal accounts, while other people are calling in very legitimately and saying that they’ve forgotten their password, or something was wrong with their account, or they didn’t really buy this product. So what we ended up doing with them was saying that the first thing any agent needs to do is actually go in and say, “I need to validate your identity.” But that’s a very uncomfortable thing for many agents. You just expect that people are truthful and honest, and they may come in with kids screaming in the background that they can’t play their games on their Xbox or whatever it is, and you will just run through some things and get them back up and running. From a human perspective, that’s sometimes very tricky. So they taught Amelia how to do it, integrating into back end systems and running fraud analytics, and she actually became better than humans at doing that. Once they’d done that, they figured out there was a lot of low-hanging fruit that people really don’t want to deal with, and they said, okay, now Amelia can do some of these things as well, whether it’s putting charges back on a credit card or fixing a problem with things that were bought, or with passwords or accounts. Their major benefit was that their agents could now spend more quality time with some of their players, because their number one reason for implementing this technology was not to reduce costs, it was to give much better service to their players. So they were able to free a lot of these agents up, especially when high volumes of calls were coming in, and thereby they could spend a lot more time with their players and actually give them much better service.
That shows a progression from initial solution to expansion into new ways and also changing the automation architecture. Are there any other challenges that people have besides fit?
They eventually realize what they can use the technology for, and then projects grow and become a lot bigger. Now, I would also say security: making sure that Amelia has access to all the various systems she needs can sometimes be a stumbling block for projects. But overall, I think the impact we’re seeing is where people are really rethinking some of these things. We have a lot of exciting things going on in the medical space, where doctors are using Amelia with the applications they need to operate when they’re doing surgery. Just imagine somebody, not necessarily a surgeon but somebody in the field, trying to fix certain things and technology like Amelia guiding them, so that when new people come onto the job, they can ask and actually get answers in ways that weren’t even possible before.