What is AI 2.0? The next stage of the evolution

1 June 2021 | Basics

Many companies have only just begun to explore the potential applications of artificial intelligence. Only a few are already fully exploiting the potential.

Meanwhile, AI development in research and science is advancing rapidly, and a number of new techniques are emerging that will take the application of AI to the next level.

In his book “AI Superpowers,” Kai-Fu Lee describes four waves of the establishment of AI technologies. In the first phase, “Internet AI,” the focus is on applications based on big data from the Internet and the analysis of user behavior to improve user experience and personalized content delivery. The development and application of AI technology took place mainly at large Internet and e-commerce companies.

This is followed by the second phase, AI 2.0, which is about “business AI”: the use of AI in companies.

AI 2.0 in business 

In the AI 2.0 phase, analyses and predictions based on historical company data and other sources are used in a wide variety of areas to increase efficiency and build new business models. The users, and to some extent the developers, of AI applications are no longer only digital companies, but companies from all industries and of all sizes.

The economic growth potential of AI 2.0 is enormous and was estimated, for example, in a study by PwC at 15.7 trillion dollars.

AI 2.0 infrastructures 

There are also extensive new developments in terms of concepts and infrastructures. Most important here is the ability to develop scalable and production-ready AI applications. Data and AI products are implemented with the help of platforms and templates and transferred to production in an agile manner via MLOps processes. In addition, it is also important to establish the necessary organizational structures and processes in the companies. For this purpose, data strategies must be developed, data governance concepts implemented, and roles and responsibilities defined.

AI 2.0 technologies 

According to a recent Forrester report, the new technologies in the context of AI 2.0 include the following elements: 

  • Transformer networks 
  • Synthetic data  
  • Reinforcement Learning   
  • Federated Learning
  • Causal Inference  

In AI 1.0, the focus was on pattern recognition, task-specific models, and centralized training and execution of models. In contrast, AI 2.0 is characterized by models that generate language, images and other data, as well as the universal applicability of AI, whether centrally or locally – at the edge.

Let’s take a closer look at the five core elements of AI 2.0: 

Transformer  

Transformer networks can handle tasks with a time or context element, such as natural language processing and generation. This advancement makes it possible to train huge models that perform multiple tasks at once with higher accuracy and less data than individual models working separately. Currently, the most prominent representative of this category is OpenAI’s enormously powerful GPT-3 model.
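To make the idea concrete, here is a minimal, illustrative sketch (not from the article) of the scaled dot-product self-attention mechanism at the heart of transformer networks, using only NumPy. All names and dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X (seq_len x d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)    # pairwise token affinities
    weights = softmax(scores, axis=-1) # each row sums to 1
    return weights @ V                 # context-aware token representations

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))            # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Each output row mixes information from every token in the sequence, weighted by learned relevance; stacking many such layers (plus feed-forward blocks) yields models like GPT-3.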

Synthetic data 

One of the biggest challenges in building AI models is the availability of a sufficiently large, usable training data set. Synthetic data solves this problem and improves the accuracy, robustness and generalizability of models. In applications for object recognition, autonomous driving, healthcare, and many other fields, synthetic data can be used to build AI models.
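As a hedged illustration of the principle (not taken from the article), the sketch below generates a synthetic labeled dataset from a known generative process; real synthetic-data pipelines use far richer generators (e.g. simulators or GANs), but the idea of drawing unlimited labeled samples is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_synthetic_classification(n=1000):
    """Generate a labeled two-class dataset from a known generative process."""
    y = rng.integers(0, 2, size=n)                # class labels
    centers = np.array([[0.0, 0.0], [3.0, 3.0]])  # assumed class means
    X = centers[y] + rng.normal(scale=1.0, size=(n, 2))
    return X, y

X, y = make_synthetic_classification()
print(X.shape)  # (1000, 2)
```

Because the generator is under our control, we can produce as many training examples as needed, including rare edge cases that real data seldom covers.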

Reinforcement Learning 

Reinforcement learning is not a new concept, but it has seen little practical use in the past. AI applications can use reinforcement learning to respond quickly to changes in data by learning from interaction with a real or simulated environment through trial and error.
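The trial-and-error loop can be sketched with tabular Q-learning on a toy environment (an illustrative example, not from the article): an agent in a one-dimensional corridor learns, purely from interaction, that moving right reaches the reward.

```python
import numpy as np

# Tiny 1-D corridor: states 0..4, reward +1 for reaching state 4.
N_STATES, ACTIONS = 5, (-1, +1)   # actions: step left / step right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):               # episodes of trial and error
    s = 0
    for _ in range(100):           # cap episode length
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, ACTIONS[a])
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # TD update
        s = s2
        if done:
            break

policy = Q.argmax(axis=1)  # action 1 ("go right") should dominate in states 0..3
print(policy)
```

No labeled training data is involved: the agent improves its value estimates solely from the rewards it experiences, which is what lets RL systems adapt when the environment changes.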

Federated Learning 

One obstacle to training AI models is the need to transfer data from multiple sources to a central data store. Transferring this data can be costly, difficult, and often risky from a security, privacy or competitiveness perspective. Federated learning enables AI models to be trained in a distributed manner, for example directly on IoT endpoints, and to leverage data in different locations.
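The core mechanism, federated averaging, can be sketched as follows (an illustrative toy example, not from the article): each client trains on its own private data, and only model weights, never the data itself, are sent back and averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=20):
    """One client's local gradient-descent steps on its private data (linear regression)."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three clients, each holding private data drawn from the same true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(10):                            # communication rounds
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)       # federated averaging (FedAvg)

print(w_global)  # converges toward true_w; raw data never left the clients
```

Only the two weight values cross the network in each round, which is what makes the approach attractive when data is sensitive or expensive to move.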

Causal Inference 

Causal inference can be used to distinguish genuine cause-and-effect relationships between attributes of a data set from mere correlations. The practical use is, for example, to avoid incorrect business decisions based on spurious correlations.
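A minimal simulation (an illustrative example, not from the article) shows the danger: a confounder Z drives both X and Y, so X and Y correlate strongly even though X has no effect on Y; adjusting for Z, one of the simplest causal-inference techniques, makes the spurious association vanish.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Confounder Z drives both X and Y; X has NO direct effect on Y.
Z = rng.normal(size=n)
X = Z + rng.normal(scale=0.5, size=n)
Y = Z + rng.normal(scale=0.5, size=n)

naive = np.corrcoef(X, Y)[0, 1]  # strong spurious correlation via Z

# Adjust for Z: regress Z out of both variables, then correlate the
# residuals (the partial correlation of X and Y given Z).
def residualize(v, z):
    beta = np.dot(z, v) / np.dot(z, z)
    return v - beta * z

partial = np.corrcoef(residualize(X, Z), residualize(Y, Z))[0, 1]
print(round(naive, 2), round(partial, 2))
```

A decision-maker looking only at the naive correlation would wrongly conclude that changing X moves Y; the adjusted analysis reveals there is nothing to gain.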

In summary, AI 2.0 can be understood as an attempt to get closer to natural intelligence by means of imagination, trial and error, exchange of experience and understanding of modes of action. As in nature, the resulting advantages can make a decisive difference with regard to the survivability of a company.

Right at the start of an AI transformation, those responsible in a company should look into the possibilities of AI 2.0 and evaluate the potential areas of application. In this way, potential “killer applications” for the company’s own business model can be implemented at an early stage with the help of the new technologies.

AI 2.0 in Europe 

In the context of AI 2.0, an important factor is that we in Europe take European values and quality standards into account when developing and applying AI. Ethical issues must be clarified and implemented via appropriate regulations. At the same time, the innovative power and economic potential of AI must not be restricted. Regulations must be defined with a sense of proportion, focusing on specific application scenarios, taking into account existing measures and following a transparent and precise risk assessment.

Only by boosting innovation in AI research and application can we in Europe catch up with major players such as the U.S. and China and build our digital sovereignty.

ALEXANDER THAMM

Alexander Thamm is a founder, CEO and pioneer in the field of data & AI. His mission is to generate real added value from data and to restore the international competitiveness of Germany and Europe. He is a founding member and regional manager of the KI-Bundesverband e.V. (German AI Association), a sought-after speaker, the author of numerous publications, and co-founder of the DATA Festival, where AI experts and visionaries shape the data-driven world of tomorrow. In 2012, he founded Alexander Thamm GmbH [at], one of the leading providers of data science & artificial intelligence in the German-speaking world.
