There is no doubt Artificial Intelligence will play a pivotal role in our future, and every enterprise should now be analyzing its inevitable effect on its business. Yet there are basic misconceptions about how Artificial Intelligence (AI) can – and should – impact our enterprises.

One of the more pervasive misconceptions is that these AI systems will set their own rules. This stems from a widespread misunderstanding of how AI systems will actually fit into the enterprise.

Consider dispute management as an example. The AI system would not make up its own rules for processing a case. To start, it would need to know what its choices are – that it may deny a case, write a case off, or send a chargeback. It would also need to know which system to use to issue a credit to the customer, which system to use to send a chargeback, and, in each case, which fields need to be passed.
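The division of labor described above can be sketched as data that humans define and the AI merely selects from. Here is a minimal Python sketch; the system names and field names are hypothetical illustrations, not taken from any actual dispute platform:

```python
from dataclasses import dataclass, field
from enum import Enum

class Resolution(Enum):
    """The fixed set of outcomes a human has authorized the AI to choose from."""
    DENY = "deny"
    WRITE_OFF = "write_off"
    CHARGEBACK = "chargeback"

@dataclass
class ResolutionRule:
    """Human-defined routing: which system to call and which fields to pass."""
    resolution: Resolution
    target_system: str                 # hypothetical system identifier
    required_fields: list[str] = field(default_factory=list)

# Hypothetical, human-authored rule set: the AI selects among these entries;
# it never invents new outcomes, systems, or fields on its own.
RULES = {
    Resolution.WRITE_OFF: ResolutionRule(
        Resolution.WRITE_OFF, "credit-system",
        ["account_id", "amount", "reason_code"],
    ),
    Resolution.CHARGEBACK: ResolutionRule(
        Resolution.CHARGEBACK, "chargeback-system",
        ["account_id", "amount", "network_reason_code", "transaction_id"],
    ),
    Resolution.DENY: ResolutionRule(
        Resolution.DENY, "case-management",
        ["case_id", "denial_reason"],
    ),
}

def route(resolution: Resolution) -> ResolutionRule:
    """Look up the human-authored rule for an outcome the AI has chosen."""
    return RULES[resolution]
```

The point of the structure is that every choice the AI can make, and everything it must know to act on that choice, exists in the rule table before the AI ever sees a case.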

The AI will not be responsible for setting these rules. For the foreseeable future, humans will be responsible for defining the rules that govern AI behavior – at least until the machines rise up and take over the world one day, as the science fiction stories keep assuring us they will.

The other misconceptions about AI revolve around how these rules will be conveyed. Eventually, I will almost certainly be able to tell my kitchen “Make me Chicken Cacciatore” using a voice command, but I will be relying on the fact that someone has defined a set of rules the kitchen AI can follow in order to do so. And those rules would not have been entered into a computer verbally, because that would be a terrible way to communicate such a complex set of data.

Similarly, anyone in the dispute processing world knows there is no way to convey the rules governing that process verbally.

It is entirely possible that, at some point, an AI system could use optical character recognition (OCR) to read a scanned Chicken Cacciatore recipe and “learn” to cook it from that. Knowing the abilities of AI as I do, I suspect we are a long way from this – and I suspect many people would throw out a lot of chicken, onions and tomato sauce in the process.

When it comes to the rules that run any reasonably sized enterprise, documentation written for human consumption is far too ambiguous to serve as input to an automated, AI-driven system.

People sometimes use the term “legalese” disparagingly, but contracts are written the way they are to remove the ambiguity inherent in natural, human language.

So, theoretically, one possible solution would be to re-document every rule that governs an enterprise in some version of legal language that we could eventually feed to the AI system. But most people would agree that is a less than ideal approach. Instead, any enterprise that wants to take full advantage of AI technology will need an explicit, clear set of rules governing the AI’s behavior – and a new kind of language in which to define those rules.

This language needs to have the following characteristics:

  • It needs to define the rules in an explicit manner.
  • It needs to be writable by a significant percentage of the enterprise staff.
  • It needs to be readable by the entire staff of the enterprise.

I would suggest that, while not strictly a requirement, a visual format would be much more effective than straight prose.
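To make the three criteria concrete, even a plain-text rule can be written so that it is explicit and readable by non-programmers. A minimal sketch follows; the thresholds, conditions, and outcome names are invented purely for illustration and are not rules from any real dispute operation:

```python
# A hypothetical, explicit dispute rule: every condition and outcome is
# named, with no ambiguity about thresholds or order of evaluation.
# (All rule content here is invented for illustration.)

def decide(amount: float, merchant_responded: bool, days_open: int) -> str:
    """Return one of the human-authorized outcomes for a dispute case."""
    if amount < 25.00:
        return "write_off"      # small disputes cost more to fight than to absorb
    if not merchant_responded and days_open > 30:
        return "chargeback"     # merchant silence past the response window
    return "deny"
```

Whether expressed as text like this or in a visual format, the essential property is the same: a colleague can read the rule and know exactly what the system will do in every case.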

That is why Lean Industries has built – and is continuing to improve – a Visual Logic Language (VLL). This is not a stopgap until AI takes over. The VLL, or something like it, will be the key to taking full advantage of AI technology for the foreseeable future, defining the rules for AI in an explicit format.

Greg Cooper is the Vice President of Product Development at Lean Industries.