What is AI 2.0? - On to the next stage of evolution

1 June 2021 | Basics

Many companies have only just begun to explore the potential applications of artificial intelligence, and only a few are already exploiting that potential to the full. Meanwhile, AI research is advancing rapidly, and a number of new techniques are emerging that take the application of AI to the next level. 

In his book "AI Superpowers", Kai-Fu Lee describes four waves of AI adoption. In the first phase, "Internet AI", the focus was on applications that use large amounts of data from the internet and the analysis of user behaviour to improve the user experience and personalise content. The development and application of AI mainly took place in large internet and e-commerce companies.  

This was followed by the second phase, AI 2.0, which is about "Business AI": the use of AI in companies. 

AI 2.0 in business 

In the AI 2.0 phase, analyses and predictions based on historical company data and other sources are used in a wide variety of areas to increase efficiency and build new business models. The users and, to some extent, the developers of AI applications are no longer only digital companies, but companies from all sectors and of all sizes. 

The economic growth potential of AI 2.0 is enormous: a study by PwC, for example, estimates it at $15.7 trillion. 

AI 2.0 Infrastructures 

There are also extensive new developments in terms of concepts and infrastructures. The most important thing here is the ability to develop scalable and production-ready AI applications. Data and AI products are implemented with the help of platforms and templates and transferred to production in an agile manner via MLOps processes. In addition, it is important to establish the necessary organisational structures and processes in the companies. For this, data strategies must be developed, data governance concepts implemented and roles and responsibilities defined. 

AI 2.0 Technologies 

According to a recent Forrester report, new technologies in the context of AI 2.0 include the following elements: 

  • Transformer networks 
  • Synthetic data 
  • Reinforcement Learning 
  • Federated Learning 
  • Causal Inference 

In AI 1.0, the focus was on pattern recognition, task-specific models and central training of models and their execution. In contrast, AI 2.0 is characterised by the establishment of models for the generation of language, images and other data, as well as the universal applicability of AI, centrally or locally - at the edge. 

Let's take a closer look at the five core elements of AI 2.0: 

Transformer  

Transformer networks can handle tasks with a time or context element, such as natural language processing and generation. This advancement makes it possible to train huge models that perform multiple tasks at once with higher accuracy and less data than individual models working separately. Currently, the most prominent representative of this category is OpenAI's enormously powerful GPT-3 model. 
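The core operation behind transformer networks, scaled dot-product self-attention, can be sketched in a few lines of NumPy. This is an illustrative toy (the function and variable names are our own), not how production models are implemented:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Each position attends to every other position in the sequence."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # pairwise query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ v                              # weighted sum of value vectors

# Toy example: a "sequence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)         # self-attention: q = k = v = x
print(out.shape)  # (4, 8)
```

In a real transformer this operation is wrapped in learned projections, multiple heads and many stacked layers; the sketch only shows why context enters the computation: every output row mixes information from all input positions.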

Synthetic data 

One of the biggest challenges in building AI models is the availability of a sufficiently large, usable training data set. Synthetic data solves this problem and improves the accuracy, robustness and generalisability of models. In applications for object recognition, autonomous driving, healthcare and many other fields, synthetic data can be used to build AI models. 
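As a toy illustration of the idea, the following sketch fits a simple multivariate Gaussian to a "real" data set and samples synthetic rows from it. Real synthetic-data tools use far more expressive generators (e.g. GANs or simulation engines); the function name and the data here are invented for the example:

```python
import numpy as np

def synthesize_gaussian(real_data, n_samples, seed=0):
    """Fit a multivariate Gaussian to the real data and sample synthetic rows.
    A deliberately simple stand-in for generative approaches such as GANs."""
    rng = np.random.default_rng(seed)
    mean = real_data.mean(axis=0)
    cov = np.cov(real_data, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Pretend these 1000 rows are scarce, sensitive "real" measurements.
rng = np.random.default_rng(42)
real = rng.normal(loc=[10.0, 50.0], scale=[2.0, 5.0], size=(1000, 2))

# Generate 500 synthetic rows with the same statistical profile.
synthetic = synthesize_gaussian(real, n_samples=500)
print(synthetic.shape)  # (500, 2)
```

The synthetic rows preserve the means and covariance of the original data, so a model trained on them behaves similarly, while no original record appears in the training set.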

Reinforcement Learning 

Reinforcement learning is not a new concept, but it has seen little practical use in the past. It enables AI applications to respond quickly to changes in data by learning from interaction with a real or simulated environment through trial and error.  
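The trial-and-error loop can be illustrated with tabular Q-learning on a deliberately tiny, made-up environment: a five-state corridor in which only reaching the rightmost state yields a reward.

```python
import numpy as np

# States 0..4, goal at state 4. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

rng = np.random.default_rng(0)
q = np.zeros((N_STATES, 2))                 # value estimate per (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):                        # episodes of trial and error
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        a = rng.integers(2) if rng.random() < eps else int(q[s].argmax())
        s2, r, done = step(s, a)
        q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])  # TD update
        s = s2

policy = q.argmax(axis=1)
print(policy[:GOAL])  # learned policy: always move right, i.e. [1 1 1 1]
```

The agent is never told the rule "go right"; it discovers it purely from interaction, which is exactly why reinforcement learning adapts when the environment (or the data it produces) changes.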

Federated Learning 

One obstacle to training AI models is the need to transfer data from different sources to a central data store. Transferring this data can be costly, difficult and often risky from a security, privacy or competitiveness perspective. Federated learning makes it possible to train AI models in a distributed way, for example directly on IoT devices, and to use data held in different locations.  
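The basic pattern, often called federated averaging, can be sketched as follows: each client computes an update on its own private data, and only the model weights travel to the server, never the raw data. The environment, data and function names here are illustrative:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=20):
    """One client's local gradient steps on its private data (linear regression)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_average(global_w, client_data, rounds=10):
    """FedAvg sketch: clients train locally; the server only averages weights."""
    for _ in range(rounds):
        local_ws = [local_update(global_w.copy(), X, y) for X, y in client_data]
        global_w = np.mean(local_ws, axis=0)   # server never sees raw data
    return global_w

# Three clients hold disjoint samples of the same underlying relation y = 3x.
rng = np.random.default_rng(1)
true_w = np.array([3.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 1))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = federated_average(np.zeros(1), clients)
print(w)  # close to the true coefficient [3.0]
```

Each client's data stays where it is; what is shared and aggregated is only the (much smaller, less sensitive) model state.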

Causal Inference 

Causal inference can be used to identify cause-effect relationships between the attributes of a data set and to distinguish them from mere correlations. This can help avoid incorrect business decisions based on spurious correlations, for example. 
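A small simulation shows the problem: when a hidden confounder drives two variables, they correlate strongly even though neither causes the other; adjusting for the confounder makes the correlation vanish. The data here are synthetic, and residualising on the confounder is just one simple adjustment technique among many:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A confounder z drives both x and y; x has NO causal effect on y.
z = rng.normal(size=n)
x = z + rng.normal(scale=0.5, size=n)
y = z + rng.normal(scale=0.5, size=n)

naive = np.corrcoef(x, y)[0, 1]          # strong spurious correlation (~0.8)

def residualize(v, z):
    """Remove the linear influence of the confounder z from v."""
    slope = np.cov(v, z)[0, 1] / np.var(z)
    return v - slope * z

adjusted = np.corrcoef(residualize(x, z), residualize(y, z))[0, 1]
print(naive, adjusted)  # naive is strongly positive, adjusted is close to 0
```

A purely correlational model would conclude that changing x moves y; the adjusted analysis correctly reveals that intervening on x would have no effect, which is precisely the kind of insight that protects business decisions from spurious patterns.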

In summary, AI 2.0 can be understood as an attempt to get closer to natural intelligence through imagination, trial and error, exchange of experience and understanding of the effects. As in nature, the resulting advantages can make a decisive difference to a company's ability to survive. 

Those responsible in a company should engage with the possibilities of AI 2.0 right at the start of an AI transformation and evaluate possible areas of application. In this way, potential "killer applications" for their own business model can be implemented at an early stage with the help of the new technologies. 

AI 2.0 in Europe 

In the context of AI 2.0, an important factor is that we in Europe take European values and quality standards into account in the development and application of AI. Ethical issues must be clarified and implemented through appropriate regulations. At the same time, the innovative power and economic potential of AI must not be restricted. Regulations must be defined with a sense of proportion, with a focus on specific application scenarios, taking into account existing measures and following a transparent and precise risk assessment. 

Only through an innovation push in the research and application of AI can we in Europe draw level with major players such as the USA and China and build our digital sovereignty. 

Author

ALEXANDER THAMM

Alexander Thamm is Founder, CEO and pioneer in the field of data & AI. His mission is to generate real added value from data and restore Germany's and Europe's international competitiveness. He is a founding member and regional manager of the KI-Bundesverband e.V., a sought-after speaker, author of numerous publications and co-founder of the DATA Festival, where AI experts and visionaries shape the data-driven world of tomorrow. In 2012, he founded Alexander Thamm GmbH [at], which is one of the leading providers of data science & artificial intelligence in the German-speaking world.
