Artificial Intelligence allows a machine to perform tasks that require reasoning based on representations, inference, learning, and memorization: for example, image recognition, language translation, analysis, or decision-making. In its broadest sense, the discipline begins with the birth of computer science, relying on algorithms that date back to the Babylonian era. Theories of learning in neural networks emerged about fifty years ago.
However, we have seen a democratization of AI over the past decade thanks to computing power and data storage capacity (of which AI is a heavy consumer), now accessible to all in large quantities. Add to this the many environments and code libraries that greatly simplify the implementation of AI models.
That said, AI remains a complicated discipline that requires in-depth expertise and numerous delicate adjustments, often tedious and always specific to the topic addressed and the data used.
Our brains are remarkably good at reaching conclusions from little data in a relatively short time, whereas AI finds solutions by performing a multitude of calculations on large volumes of data. According to the philosopher Jean-Michel Besnier, co-author of "Do robots make love? Transhumanism in 12 questions", "The peculiarity of humans is the ability to think outside the box, while that of AI is based on the logic of repeating calculations." In the current state of our knowledge, AI does not know how to create new representations of the world from scratch.
What makes humankind unique is our extraordinary ability to learn by imitation: in other words, to capitalize on knowledge and then shift slightly away from these achievements in order to innovate. Contrary to popular belief, innovation is rarely a clean break; most often it is a shift from a commonly accepted practice. This is the idea of "de-coincidence" popularized by the philosopher François Jullien.
However, when the domain of representation is well defined, the inference and learning capabilities of AI are impressive. The level of investigation allowed by AI algorithms is far beyond our own capabilities. Take the example of DeepMind's AlphaGo, which defeated the best professional Go players in the world. The game was long considered impossible to program with classic algorithms because its rules are too subtle. The more advanced AlphaZero variant managed to learn the game in a matter of days by playing against itself, without any input from predefined tactics or human-played games. AlphaZero started from scratch, rediscovered the same instincts as those of the best human players, and finally developed its own, even more effective methods.
It is certain that more and more tasks will be done by machines, forcing humans to adapt to other tasks. As always, tools present both opportunities and dangers commensurate with their power. Ethics and education remain essential to frame the implementation of these technologies. Another ethical debate is the coupling of AI, Big Data, and the control of ideas through mass surveillance...
Enterprise Architecture (EA) is a method of continuous transformation of organizations that must constantly adapt to their ecosystem: regulations, customer expectations, new technologies, etc. This collaborative method is based on a knowledge graph that describes the constituents of the company and their interactions, as well as on a software governance tool to frame and orchestrate the transformation while remaining agile.
In the field of EA, classical algorithms are used to perform impact analyses, scenario comparisons, or the analysis of how incidents propagate within the structure of the company. But as one might imagine, a business is a very complex, non-linear system involving many variables. For such a system, it is difficult to model a priori the equations of its behavior and the rules that govern its evolution.
Artificial intelligence, especially Deep Learning, plays on different levels of abstraction - hence the notion of depth - to extract meaning from data and produce a result without needing to know the equations that govern the system.
Here are some fields of application of AI and Deep Learning relevant to EA:
Image recognition allows you to photograph drawings and instantly convert them into structured models: process models, data models, application structure models, computer networks, etc. Converting the drawing into a structured model then makes it possible to analyze it, for example to know which sensitive data is used within the framework of a given business process.
Many companies need to communicate internally in different languages: an official working language supplemented by local languages, allowing each linguistic community to contribute in its native language. Instant or batch machine translation works very well for building a multilingual repository and facilitating internal communication and alignment.
The particularity of any model is that it is not universal. Models are representations of reality designed for a particular use, so models must be transformed to produce different views for various uses. One may rely on a dual mechanism of formal graph transformation plus artificial intelligence to offer views that meet the concerns of various users in the company.
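The formal-graph-transformation half of this mechanism can be sketched in a few lines of Python. The model below is a toy graph; its node types, identifiers, and data shapes are illustrative assumptions, not an actual EA repository format.

```python
# Toy model graph: nodes carry a type, edges link node identifiers.
# All identifiers and types here are invented for illustration.
model = {
    "nodes": [
        {"id": "crm", "type": "application"},
        {"id": "billing", "type": "application"},
        {"id": "customer", "type": "data"},
        {"id": "onboard", "type": "process"},
    ],
    "edges": [
        ("crm", "customer"),
        ("billing", "customer"),
        ("onboard", "crm"),
    ],
}

def project_view(graph, keep_types):
    """Formal graph transformation: keep only nodes of the given types
    and the edges that connect two kept nodes."""
    kept = {n["id"] for n in graph["nodes"] if n["type"] in keep_types}
    return {
        "nodes": [n for n in graph["nodes"] if n["id"] in kept],
        "edges": [(a, b) for a, b in graph["edges"] if a in kept and b in kept],
    }

# A view for a data-architecture audience: applications and the data they use,
# with the process layer filtered out.
app_data_view = project_view(model, {"application", "data"})
```

The same projection function can serve several audiences by varying `keep_types`; the AI half of the mechanism would then choose or rank the relevant view for a given user, which is beyond this sketch.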
Natural language processing (NLP) techniques are based on the semantic analysis of a question and its approximation to information from the repository [via vector distances, for example - cf. Word2vec]. Providing results via queries and reports makes it easier for all employees of a company to consult the data, understand the company's operation and structure, and contribute to its transformation.
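The vector-distance idea can be sketched as follows. The embedding vectors and vocabulary below are toy values chosen for illustration; in practice they would come from a trained model such as Word2vec.

```python
from math import sqrt

# Toy embedding vectors (in practice, produced by a model such as Word2vec).
# The terms and numbers are illustrative only.
embeddings = {
    "application": [0.9, 0.1, 0.3],
    "software":    [0.8, 0.2, 0.35],
    "process":     [0.1, 0.9, 0.2],
    "invoice":     [0.2, 0.3, 0.9],
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 for near-synonyms, lower otherwise."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

def closest_term(query, vocabulary):
    """Map a word from a user's question onto the nearest repository term."""
    q = embeddings[query]
    return max(vocabulary, key=lambda t: cosine(q, embeddings[t]))

# A question mentioning "software" is matched to the repository
# term "application" rather than "process" or "invoice".
match = closest_term("software", ["application", "process", "invoice"])
```

This nearest-neighbor lookup is the core of the approximation step; a full query engine would also parse the question's structure, which this sketch leaves aside.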
On the same principle as NLP techniques, data normalization makes it possible to reconcile terms and present data summaries. Typically, if you perform a scan of your IS in search of applications or technologies deployed across the company, the raw data will contain duplicates and variants - name differences, typos, minor versions of software, etc. - or insignificant components that will drown out the useful information. It is, therefore, necessary to extract a clean, consolidated view, put it into perspective, and classify it in relation to the management levers of the company.
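A minimal sketch of this normalization step is shown below. The product names, the version-stripping regex, and the similarity threshold are illustrative assumptions, not a production recipe.

```python
import re
from difflib import SequenceMatcher

# Raw scan results with duplicates, a typo, and minor versions (illustrative).
raw = [
    "Apache Tomcat 9.0.1",
    "apache tomcat 9.0.4",
    "Apache Tomact 9",      # typo: "Tomact"
    "PostgreSQL 12",
    "postgres 12.3",
]

def normalize(name):
    """Lowercase and drop a trailing version number so variants compare equal."""
    name = name.lower()
    return re.sub(r"[\d.]+$", "", name).strip()

def consolidate(names, threshold=0.75):
    """Group names whose normalized forms are similar enough (fuzzy ratio).
    Returns a list of (canonical_name, members) pairs."""
    groups = []
    for n in names:
        norm = normalize(n)
        for group in groups:
            if SequenceMatcher(None, norm, group[0]).ratio() >= threshold:
                group[1].append(n)
                break
        else:
            groups.append((norm, [n]))
    return groups

# The five raw entries collapse into two consolidated technologies.
groups = consolidate(raw)
```

Real deduplication would combine such fuzzy matching with curated alias tables and human review, but the principle - normalize, compare, merge - is the same.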
New cloud tools embed more and more metadata, both in the context of big data and in the context of computer processing - descriptions of APIs, ETLs, etc. The semantic analysis of this metadata and its reconciliation with the information in the repository - portfolios of the company's processes, functions, and products - facilitates the connection between the teams in charge of operational management and those in charge of new developments and continuous transformation. It is a key contributor to the success of agility at scale.
By collecting operational data from projects coupled with architectural data - scope, domain, complexity, ramifications, technologies, scale of the transformation, timing, and resources ... - we can try to predict the risk level of a transformation project. This risk is of course different from one company to another and depends on many factors, such as the company's project culture. Here, enough data from a single company is needed to obtain a relevant signature that characterizes it.
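The kind of scoring described above can be illustrated with a toy logistic model. The feature names, weights, and sample projects below are invented for the sketch; a real predictor would be trained on a company's own project history, which is precisely why enough company-specific data is needed.

```python
from math import exp

# Hypothetical feature weights: positive features increase risk,
# team experience reduces it. Real weights come from training, not hand-tuning.
WEIGHTS = {
    "scope": 0.8,
    "complexity": 1.2,
    "tech_novelty": 0.9,
    "team_experience": -1.0,
}
BIAS = -1.5

def risk_score(project):
    """Logistic score in (0, 1): higher means a riskier transformation.
    Features are assumed to be pre-scaled to [0, 1]."""
    z = BIAS + sum(WEIGHTS[f] * project.get(f, 0.0) for f in WEIGHTS)
    return 1.0 / (1.0 + exp(-z))

# Two illustrative projects: a small effort with a seasoned team,
# and a large, novel, complex one with a junior team.
small_safe = {"scope": 0.2, "complexity": 0.3, "tech_novelty": 0.1, "team_experience": 0.9}
big_risky  = {"scope": 0.9, "complexity": 0.8, "tech_novelty": 0.9, "team_experience": 0.2}
```

The score itself is only useful relative to a company's own calibration: the same feature values can mean very different risk levels in organizations with different project cultures.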
The outlook is promising! As the examples above show, Enterprise Architecture is not immune to the attractiveness of AI's potential, and we will see its usage develop in the years to come. We are working on its application to the most difficult challenges for companies: continuous transformation and the design of the future-proof enterprise. But this requires a powerful representation (digital twin) of the company as well as large volumes of data, two drivers for new types of solutions.
We are also working on solutions to identify transformation opportunities and recommend transformation scenarios - for example, identifying and recommending strategies for migration to the Cloud ... To be continued!