Appian World – bringing generative AI to heel
Let’s face it, generative AI is already upsetting quite a few apple carts one way or another, and there is yet one more waiting for it. That old hackneyed phrase, ‘This will end in tears’, is one of the greater understatements that could be made about what may well happen.
The apple cart in question here is the administration of the business, and in particular the running of the underlying business processes that make it tick. Those processes will be the point of direct interaction with a new gen AI implementation, and the point of initial – and quite often instant – impact.
Having this environment primed and ready for the job in hand, and ready to handle the outcomes in real time, will arguably be one of the most important steps any enterprise can take this year. As Matt Calkins, CEO and co-founder of process management company Appian, said at the company’s Appian World conference in Washington last week, the first thing that has to be understood about gen AI is that it does not operate in isolation:
We have to get this fundamental out there – AI is not a standalone technology. It thrives based on being accompanied by a couple of other things, data and process. We all know that AI is nothing without data. The trouble is, it’s not external generic data that makes AI truly powerful; it is your data. It is unfortunately also the most difficult to get, it’s the one you’re least likely to feel good about sharing, or using for training, and it’s probably the most difficult to source and dig up around your enterprise. To make it practical for you to make any value add, you have to find a way not just to learn this technology, but make it productive in your environment.
With a 25-year track record of developing tools for building and managing digital business processes, Appian already has a history of working with earlier implementations of AI technology and, with the new developments it introduced at the conference, now sees itself as one of the few capable of providing business users with secure and advanced process management, together with the ability to ‘socialize’ generative AI systems and bring them to heel.
This has even prompted the company to offer competitive one-month, fixed-price contracts to existing users looking to start down the road of integrating AI services into their Business Process Management environments, so that they can get a taste of what is possible. For those who are not existing customers, the company is offering a new Starter Kit, a basket of goods the firm feels gives the best chance of early success, at what Calkins pitches as “a very good price on it”.
Use, but protect, your data
Data is, of course, the lifeblood of both gen AI and business processes, and inevitably it is the same data that gets inextricably linked across both, so getting the right data in play and keeping it secure are equally important. Here lies the first, and perhaps most important, danger area of working with gen AI services.
Gen AI services can be trained up to a point on generic public data sources, but in the end it is putting internal, company-specific data in play that will generate the most benefit to the business, not least because that data will be updated as necessary, and the latest data will often be the most valuable in terms of business results. It is also the most valuable to a company’s competitors, and it simply must not leak out of the gen AI service to become part of its generic training data, said Calkins:
AI cannot be allowed to work alone. It’s what I call ‘mixed autonomy’, which basically says that AI is driving the car, but you probably want to keep your hands on the wheel, as you know it makes a lot of mistakes. AI doesn’t have human judgement. It can propose ideas, but you should check them – we’re at a moment of mixed autonomy. AI should be part of a team, and process gives structure to the team. A process gives you that structure; it’s a series of tasks and handoffs pointed toward a goal that matters.
A key part of providing this management capability is Appian’s Data Fabric, introduced at the back end of 2022. This now provides a virtual database layer that connects all the data in an enterprise through a common semantic layer, allowing it all to be addressed as if it were local data. It can also be optimized so that, for example, running a query repeatedly will automatically tune the process to run that query faster, in effect appearing to relocate the data without actually moving it. Calkins suggested this makes the technology respect the way an enterprise’s infrastructure is actually arranged in practice.
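The repeated-query tuning described above behaves much like a caching layer over remote sources. A minimal sketch of that behavior – the class, source names, and data here are invented for illustration, not Appian’s actual implementation:

```python
import time

# Hypothetical sketch: a virtual data layer that serves a repeated
# query from a local cache, so the data appears to have been
# relocated closer to the caller without actually moving.

class VirtualDataLayer:
    def __init__(self, sources):
        self._sources = sources   # source name -> dict of records
        self._cache = {}          # (source, key) -> cached result

    def query(self, source, key):
        cache_key = (source, key)
        if cache_key in self._cache:      # "tuned" path: local lookup
            return self._cache[cache_key]
        time.sleep(0.01)                  # simulate a remote round-trip
        result = self._sources[source].get(key)
        self._cache[cache_key] = result
        return result

fabric = VirtualDataLayer({"crm": {"acct-1": "Acme Corp"}})
first = fabric.query("crm", "acct-1")    # slower: remote fetch
second = fabric.query("crm", "acct-1")   # faster: served locally
```

The first call pays the round-trip cost; every repeat of the same query is a local lookup, which is the effect Calkins describes.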
This means users don’t have to train the AI via Machine Learning. Instead, the Data Fabric does the work: whenever there is a question for the AI service, the Data Fabric quickly finds whatever data corresponds to that question, wherever it sits in the enterprise, and presents it to the AI alongside the question. This does mean having a decentralized data strategy, however, which may not suit every business. But it also means questions, coupled with the relevant latest data, can be put to AI services with a greatly reduced risk of internal data leaking into the service’s generic data pool. Even if some does leak, it is likely to be just a set of numbers without any associated context to give it value, said Calkins:
We can provide an API with just the data that corresponds to the permission levels of the person asking the question. So if you care about security, that’s a big deal. And it’s auditable, because AI can say some strange things. It’s a black box that gives you an answer.
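The pattern described here – retrieve only what the asking user is permitted to see, and present it alongside the question rather than training on it – can be sketched in a few lines. The records, permission levels, and function names below are invented for illustration:

```python
# Hypothetical sketch of permission-scoped retrieval: the model sees
# the data only for this one question; nothing joins its training set.

RECORDS = [
    {"text": "Q3 pipeline is up 12%", "min_level": 2},
    {"text": "Board discussing an acquisition", "min_level": 5},
]

def retrieve(question, user_level):
    """Return only the records at or below the caller's permission level."""
    return [r["text"] for r in RECORDS if user_level >= r["min_level"]]

def build_prompt(question, user_level):
    # Pertinent, permission-filtered data rides along with the question.
    context = retrieve(question, user_level)
    return f"Context: {context}\nQuestion: {question}"

prompt = build_prompt("How is the sales pipeline?", user_level=2)
```

A level-2 user’s prompt carries only the pipeline figure; the board-level record never leaves the enterprise side of the boundary, which is the auditability point Calkins is making.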
This approach also bypasses another potential failing of AI systems – the ‘elephants never forget’ issue. AI doesn’t forget things because it doesn’t obsolete its own data stores, and unless a business has gone down the very expensive route of running its own isolated AI instance on premises, or on bare metal on long-term rent at a cloud service provider, a service that can selectively obsolete data stores is a non-starter. Because the data stays in the enterprise rather than in the AI, the business remains free to decide what information is pertinent to retire, and when to retire it.
It also creates the potential to work in an AI-agnostic environment without the risk of long-term lock-in. Because there is no in-depth training required and the AI algorithm is simply asked to process a question against a specific dataset, the algorithm can be changed if required – in other words, the user can switch to another AI service provider. This, as Calkins observed, can improve the negotiating position a company has with any AI vendor.
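The vendor-agnostic point can be made concrete with a common interface: because the model only answers a question against supplied context, the provider behind the interface can be swapped without retraining anything. The vendor classes and interface below are invented for illustration:

```python
from typing import Protocol

# Hypothetical sketch of an AI-agnostic calling layer. Any provider
# that satisfies the interface can be dropped in.

class AIProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorA:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"

class VendorB:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"

def ask(provider: AIProvider, question: str, context: str) -> str:
    # Question plus pertinent data in, answer out; no training step,
    # so nothing binds the caller to one provider.
    return provider.complete(f"Context: {context}\nQuestion: {question}")

answer = ask(VendorA(), "Where are the delays?", "process log excerpt")
answer = ask(VendorB(), "Where are the delays?", "process log excerpt")
```

Switching providers is a one-line change at the call site, which is the negotiating leverage Calkins alludes to.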
Going down the mine
Since Appian’s acquisition of Lana Labs in 2021, the potential of Process Mining has been growing, but not really fulfilled. Now, however, it should advance considerably, for the technology has been fully assimilated into Appian’s suite of tools, enabling something quite significant: real-time Process Mining, which is being made available under the name Process HQ.
So instead of just being able to go back to a process, plow through it slowly, and identify process errors so they can be amended, this can now be achieved in real time while the process is running, allowing users to go to any data source connected by the Data Fabric and drill down to an incident. Users can then measure the incident – delays, for example – or, on the other side of the coin, check the actual performance of changes or additions to the process. Calkins explained:
Where’s the inefficiency? Where’s the correlation between time and a certain activity? Whatever variables that correlate to that, then we need to tell you where are the slowdowns. And we furthermore analyze that relative to the volume of that instance. And we recommend certain things that should be done in order to improve your efficiency.
When it comes to working with AI, he indicated that a by-product of using Process HQ may well be found in helping to identify and manage AI hallucinations:
Hallucinations originate whenever either of the two data pools gets dangerously thin – the pool of current events, which forms the question, and the pool of past events, which forms the comparison. So if either one of those were to be depleted, it could lead to a hallucination. Now, if Process Mining is accelerated to real time, it could allow us to detect problems faster, notice them and remediate them.
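The detection idea in the quote is essentially a threshold check on the two pools. A minimal sketch of such a monitor – the threshold value and function name are assumptions, not an Appian feature:

```python
# Hypothetical sketch: flag when either data pool falls below a
# minimum size, so hallucination risk can be spotted before answers
# degrade. The threshold is an invented tuning parameter.

THIN_THRESHOLD = 10  # assumed minimum record count per pool

def hallucination_risk(current_events, past_events, threshold=THIN_THRESHOLD):
    """Return which pools, if any, are too thin to trust an answer."""
    risks = []
    if len(list(current_events)) < threshold:
        risks.append("current-events pool thin")
    if len(list(past_events)) < threshold:
        risks.append("past-events pool thin")
    return risks

flags = hallucination_risk(range(3), range(50))  # flags the question pool
```

Run continuously alongside a live process, a check like this is what “accelerated to real time” would buy: the warning arrives while the process is still running, not in a later post-mortem.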
Process HQ then leads on to the Elastic Process Execution feature, designed to handle scalability risk, the volatility around it, and the unpredictability of when it arises. Some businesses will know they are likely to be hit by a workload spike, but not where or when it is coming. The feature provides a scalability paradigm that manages processing on demand.
My take
There is no doubt that gen AI is one of the most important and most exciting developments to come out of the IT industry since the invention of the microprocessor, cloud services, and the serverless environment. It is also reasonable to suggest it is one of the most dangerous – perhaps the most dangerous – pieces of software ever created. It is likely to be a case not of what can it do, but what can’t it do?
In the day-to-day reality of every business, therefore, there will be a growing need for the ability to control the monster and fence it in. This is where Appian is firmly pitching its tent. I can imagine that what the company is offering will not meet the needs of every AI user – I suspect it could wreck attempts to run the monster research programs needed to unearth new drugs, for example – but for the typical enterprise, seeking valuable responses to typical business questions, I also suspect that Appian has found at least one workable, manageable solution.