What is Fog Computing?
Fog computing, also known as fogging, is a decentralised infrastructure located between the cloud and the data source. Just as the cloud hovers centrally above all end devices, fog computing sits, like fog, closer to the ground near those devices. By processing and storing data on site in mini data centres (the Fog Nodes), not all data has to be routed to the cloud. Fog computing thus brings the advantages and performance of the cloud closer to the end devices, reducing latency and processing times and lowering bandwidth utilisation thanks to the pre-processed data volumes.
How does Fog Computing work?
Fog computing uses so-called Fog Nodes, which sit between the cloud and the end devices. These Fog Nodes act as mini data centres that store and/or analyse the data collected from the end devices. Not all data has to be sent to the cloud; instead, it can be used closer to the data source to enable real-time decisions. For complex analyses, the data is forwarded to the cloud. This structure of Fog Nodes can be seen as a local cloud, and the Fog Nodes can interact and communicate with each other.
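This division of labour can be sketched in a few lines of Python. Everything here is illustrative: the function name fog_node_process, the alert threshold and the summary fields are assumptions for the sketch, not part of any real fog framework. The node reacts to critical readings locally in real time and forwards only a small aggregate to the cloud:

```python
def fog_node_process(readings, alert_threshold=80.0):
    """Pre-process raw sensor readings locally on the Fog Node.

    Returns (local_alerts, cloud_summary): alerts are handled on site,
    only the compact summary is forwarded to the cloud.
    """
    # Real-time decision on site: readings above the threshold trigger alerts
    local_alerts = [r for r in readings if r > alert_threshold]
    # Pre-processed aggregate: this is all that travels to the cloud
    cloud_summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    return local_alerts, cloud_summary

alerts, summary = fog_node_process([21.5, 85.2, 34.0, 90.1])
# alerts → [85.2, 90.1]; summary carries 3 numbers instead of the raw stream
```

The bandwidth saving in the sketch is the point: four raw readings shrink to a three-field summary before anything leaves the node.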
What are applications and examples of Fog Computing?
IoT and IIoT
Since sensors and control devices generate large amounts of data, fog computing makes a lot of sense for the IoT (Internet of Things) as well as for the IIoT (Industrial Internet of Things). The Fog Nodes process data on site, so less data has to be sent to the cloud. This saves time and money, as communication between the end devices and the Fog Nodes is faster and enables timely decisions.
Autonomous driving uses a combination of fog and edge computing. Control units, sensors and actuators generate large amounts of data, up to 20 terabytes per day. With fog computing, a local data analysis (code to data) is carried out in a mobile mini data centre: the data is evaluated on site and only the results are forwarded. Processing the required data in real time makes quick decisions possible, because delays can be life-threatening in ongoing road traffic.
Fog Computing vs. Cloud Computing
Fog computing complements cloud computing and can thus be seen as an intermediary layer of the cloud infrastructure. While cloud computing processes data in a central IT structure, the cloud, fogging does so in the Fog Nodes closer to the data source. Short-term and real-time analyses are therefore possible in fog computing, while time- and resource-intensive analyses of big data take place in the cloud. Fog computing thus figuratively brings the cloud closer to the end devices and offers faster decisions and shorter latency times.
Fog Computing vs. Edge Computing
Fog and edge computing are often used as synonyms, although they describe different approaches. Edge computing describes decentralised data processing at the edge of the network. Here, the data generated at the end device is pre-filtered and, if necessary, simple analyses are made. This data can then be forwarded to Fog Nodes to be stored or further analysed, for example. Since the Fog Nodes can communicate with each other and more computing power is available, more complex analyses are possible than with edge computing.
Fog, edge and cloud computing work particularly well together. First, edge computing is used to pre-filter and reduce the amount of data. Then, initial analyses are carried out in the Fog Nodes and finally, time-consuming and complex tasks are handled by Cloud Computing. In this way, the respective strengths of the different models can be profited from.
What is formal language?
A formal language is an abstract language used to express definitions, instructions and logic. It consists of a certain set of character strings (words), which in turn are formed from certain characters (an alphabet of symbols). Formal languages are used in the fields of computer science, mathematics and linguistics.
The definition is:
A formal language L over an alphabet Σ is a subset of the Kleene closure of the alphabet: L ⊆ Σ*.
The Kleene closure Σ* is the set of all words that can be formed from the characters of the alphabet Σ by arbitrary concatenation (joining of character strings). The empty word (a character string of length 0) is included.
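A small Python sketch can make the Kleene closure concrete. The helper name kleene_star is our own, and since Σ* is infinite, the enumeration is cut off at a maximum word length:

```python
from itertools import product

def kleene_star(alphabet, max_len):
    """Enumerate the words of Sigma* up to a length bound (Sigma* itself is infinite)."""
    words = [""]  # the empty word (length 0) is always included
    for n in range(1, max_len + 1):
        # all concatenations of exactly n characters from the alphabet
        words.extend("".join(p) for p in product(alphabet, repeat=n))
    return words

kleene_star("ab", 2)
# → ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

A formal language over Σ = {a, b} is then simply any subset of this (unbounded) word set.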
In general, a distinction is also made between syntax and semantics in formal languages. Syntax describes the grammar of the language, i.e. the rules with which the language is composed. Semantics, on the other hand, describes the meaning of the words. The sentence: "The lamp closes the window with a cow." is syntactically correct, but semantically nonsensical. Therefore, languages need the correct cooperation of syntax and semantics.
Formal language in computer science
In theoretical computer science, formal languages are used for modelling data processing and information processing, and for compiler construction. They are defined by certain substitution procedures: rules on how the characters of the alphabet may be combined. Common substitution procedures are, for example, Chomsky grammars, semi-Thue systems or Lindenmayer systems.
In applied computer science, formal language is used in the form of programming languages. The source code (the entire instructions in the form of a programming language) can be created in a simple text editor. This must be translated into a suitable machine language (a binary code). Depending on the time of translation, there are various possibilities for this. A compiler is used to translate the source code before the programme is executed. A JIT compiler (just-in-time compiler) or an interpreter translates the source code while the programme is running. A combination of both is also possible and is used, for example, in the Java programming language.
Programming languages can be divided into different classes, the so-called programming paradigms. The three best-known paradigms are object-oriented, functional and imperative programming.
Object-oriented programming is based on data and objects. An object belongs to a superordinate class, has certain attributes, and various methods are assigned to it.
In functional programming, all components of the computer program, even the program itself, are exclusively functions. The functions can be combined into higher-order functions, as in mathematics.
Imperative programming is used when the computer program consists of instructions which tell the computer exactly what to do and when. Loops or branches are used as control structures for this purpose.
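The three paradigms can be contrasted on one toy task. This is a minimal sketch in Python; all names (Accumulator, sum_imperative, sum_functional) are illustrative:

```python
from functools import reduce

# Imperative: explicit instructions, with a loop as the control structure
def sum_imperative(numbers):
    total = 0
    for n in numbers:      # step-by-step instructions telling the computer what to do
        total += n
    return total

# Functional: the same computation expressed purely through functions
def sum_functional(numbers):
    return reduce(lambda acc, n: acc + n, numbers, 0)

# Object-oriented: data (attributes) and behaviour (methods) bundled in a class
class Accumulator:
    def __init__(self):
        self.total = 0      # attribute
    def add(self, n):       # method operating on the object's state
        self.total += n
```

All three compute the same sum; what differs is whether the program is phrased as step-by-step state changes, as function composition, or as messages to an object.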
What are examples of formal language?
1. Programming languages, such as:
- Haskell, LISP and Scheme as functional programming
- ALGOL, Cobol, C and FORTRAN as imperative programming
2. Language of palindromes: a palindrome is a word that reads the same forwards and backwards. The formal expression is:
- A palindrome is a word u over the alphabet Σ with the property u=uR.
- The R operator reverses the character string.
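The membership test for the palindrome language is a one-liner. A small Python sketch (the helper name is our own):

```python
def is_palindrome(u):
    """Membership test for the palindrome language: u = u^R."""
    return u == u[::-1]    # u[::-1] is the reversed string u^R

is_palindrome("otto")      # True
is_palindrome("fog")       # False
```

Note that the empty word is a palindrome as well, since reversing a string of length 0 leaves it unchanged.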
3. Morse sequence (also called Morse-Thue sequence or Thue-Morse sequence): an infinite binary sequence formed according to fixed rules. As a formal language, its prefixes begin with 0, 01, 0110, 01101001, 0110100110010110, ...
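One common construction rule, repeatedly appending the bitwise complement of the sequence so far, can be sketched in Python (the function name is illustrative):

```python
def thue_morse(steps):
    """Build a Thue-Morse prefix by repeatedly appending the bitwise complement."""
    s = "0"
    for _ in range(steps):
        # append the complement: 0 -> 1, 1 -> 0
        s += s.translate(str.maketrans("01", "10"))
    return s

thue_morse(3)   # '01101001'
thue_morse(4)   # '0110100110010110'
```

Each step doubles the length, reproducing exactly the prefixes 0, 01, 0110, 01101001, ... listed above.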
What is Federated Learning?
In federated learning, models are trained simultaneously on several machines. This is done decentrally and without handing over or exchanging sensitive information, so the corresponding data remains with its respective owner at all times. The central analysis model only receives the learning results and the parameters from the individual models.
The great advantage is that the learning effect is massively strengthened by incorporating information from different training data. Training takes place on several devices in parallel, which increases the accuracy of the model.
Federated learning can give machine learning companies the opportunity to develop data-driven processes and services even with limited data. It has the potential to save costs and generate high added value.
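The core aggregation step, combining only parameters and never raw data, can be sketched in the style of federated averaging (FedAvg). The function name and the example weights below are illustrative assumptions:

```python
def federated_average(client_params):
    """FedAvg-style aggregation: only model parameters leave the clients,
    never the raw training data. Returns the element-wise mean."""
    n = len(client_params)
    return [sum(values) / n for values in zip(*client_params)]

# Three clients train locally on their private data and send only weights:
client_weights = [
    [0.5, 1.5],   # client A
    [1.5, 0.5],   # client B
    [1.0, 1.0],   # client C
]
global_weights = federated_average(client_weights)
# → [1.0, 1.0], the new central model, built without seeing any raw data
```

In a real system this averaged model would then be sent back to the clients for the next round of local training.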
How does Federated Learning of Cohorts (FLoC) work?
Federated learning of cohorts, FLoC for short, is part of Google's Privacy Sandbox initiative. It is a type of web tracking: the browser itself evaluates its user's usage behaviour and groups users into certain categories, the cohorts. Users then receive interest-based advertising.
It works via hashing: cohort IDs are generated within the browser using a SimHash algorithm, so the browser history is condensed into hash values. Privacy is meant to be protected because FLoC replaces third-party cookies and fingerprinting.
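The cohort-ID idea can be illustrated with a simplified SimHash sketch in Python. This is a generic illustration of the SimHash technique, not Google's production FLoC implementation; the feature lists and the 16-bit width are assumptions:

```python
import hashlib

def simhash(features, bits=16):
    """Simplified SimHash: similar feature sets tend to yield similar
    fingerprints. Illustration only - not the production FLoC algorithm."""
    v = [0] * bits
    for f in features:
        h = int(hashlib.md5(f.encode()).hexdigest(), 16)  # hash each feature
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1             # vote per bit position
    # majority vote per bit gives the final fingerprint
    return sum(1 << i for i in range(bits) if v[i] > 0)

cohort_a = simhash(["news", "sports", "weather"])
cohort_b = simhash(["news", "sports", "travel"])  # overlapping history
```

Because each bit is a majority vote over the feature hashes, histories that overlap heavily tend to land on nearby fingerprints, which is what makes grouping users into cohorts possible without sending the raw history anywhere.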
Nevertheless, targeted advertising remains possible: users are grouped according to the corresponding hash values and can then be targeted with advertising. Cohort IDs can be accessed via an API and are newly created every week. With TensorFlow Federated, for example, developers have a federated learning framework that enables computations on decentralised data and lets them derive custom user types for advertisers.
Which framework is used to set up federated learning?
One possibility is the well-known machine learning framework Flower. This framework was developed in 2020 and stands out for its wide distribution and high scalability. The infrastructure has proven very successful, especially thanks to Flower's high user-friendliness.
What is case-based reasoning?
The idea of case-based reasoning (CBR) comes from the psychological model that people react to similar problems by drawing on experience gained from tasks they have solved before. With the help of these analogies, machine learning generates and applies problem solutions.
Case-based reasoning captures machine experience as a problem together with a matching solution. This is the general model of a CBR system, in which cases from the past are used to solve current problems. Experience is also the basis of knowledge-based systems, but knowledge is used differently in CBR systems, so these applications are seen as a further development of knowledge-based systems.
How does a CBR cycle work?
Case-based reasoning is a well-founded paradigm of artificial intelligence (AI) for problem solving based on experience. It rests on the observation that similar problems usually require similar solutions. Accordingly, the idea of case-based reasoning is to reuse knowledge gained from solving problems in the past for similar problems in the current situation. For this purpose, the solutions of the former problems are adapted so that they can be transferred to the new problem.
The main component of any case-based problem solver is the case base: a collection of stored units of experience, the cases. Such a case contains a description of the problem and a corresponding solution. As a rule, the case base is stored in a database and forms the basic knowledge of the problem solver.
The new problems are solved by retrieving cases from a case base that are analogous to the current problem. The experiences stored in the similar cases are then reused. For example, parts of a solution can be adapted to a new problem and possibly combined.
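The retrieve step of this cycle can be sketched in Python. The attribute-overlap similarity measure and the help-desk cases are purely illustrative assumptions:

```python
def retrieve(case_base, problem):
    """Retrieve the stored case whose problem description is most similar."""
    def similarity(a, b):
        # fraction of attributes on which the two problem descriptions agree
        keys = set(a) | set(b)
        return sum(a.get(k) == b.get(k) for k in keys) / len(keys)
    return max(case_base, key=lambda case: similarity(case["problem"], problem))

# A tiny illustrative case base, e.g. for a help desk system:
case_base = [
    {"problem": {"device": "printer", "symptom": "paper jam"}, "solution": "clear tray 2"},
    {"problem": {"device": "printer", "symptom": "no power"},  "solution": "check fuse"},
    {"problem": {"device": "router",  "symptom": "no signal"}, "solution": "restart modem"},
]
best = retrieve(case_base, {"device": "printer", "symptom": "paper jam"})
# best["solution"] → 'clear tray 2'
```

In a full CBR cycle, the retrieved solution would next be adapted to the new problem and, once confirmed, stored back into the case base as a new case.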
What are applications for case-based reasoning?
Case-based reasoning has proven itself especially in application systems for customer service, the help desk systems, where it is used, for example, to diagnose customer enquiries. Recently, it has increasingly been used in consultancy systems for products, for example in e-commerce, and for structuring texts.
The advantage is that the method can be used even for incompletely described and poorly structured problems. Compared to neighbouring concepts, a rather small collection of references is sufficient at the beginning, which grows further and further by working with the CBR system. CBR is also suitable for application domains whose interdependencies are not always fully known.
What is a Feedforward Neural Network?
A feedforward neural network is a neural network of artificial neurons that has no feedback whatsoever. In such a network, the signals always run from the input layer towards the output layer. Multilayer perceptrons and radial basis function networks also belong to this class of networks. Feedforward neural networks are also referred to as forward networks or forward-propagating networks. In contrast, networks with feedback are referred to as recurrent networks.
In a feedforward neural network, there are connections between nodes that do not form a cycle. This network was the first and simplest artificial neural network. The information in such a network always flows in one direction only. The flow of information comes from the input nodes through the hidden nodes (if any) to the output nodes. There are neither cycles nor loops in such a network. There are single-layer perceptrons or multi-layer perceptrons.
Deep feedforward networks are part of deep learning. These deep learning models are intended to approximate some function f and are called feedforward because information flows from the input x through the intermediate computations that define f and finally arrives at the output y. There are no feedback connections in which outputs of the model are fed back into itself.
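Such a forward pass can be sketched in pure Python. The layer sizes, the tanh activation and the weights below are illustrative assumptions, chosen only to show the one-directional flow from x to y:

```python
import math

def forward(x, layers):
    """One forward pass: information flows input -> hidden -> output, no feedback."""
    for weights, biases in layers:
        # each neuron: weighted sum of the previous layer plus bias, then tanh
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# Tiny network with illustrative weights: 2 inputs -> 2 hidden units -> 1 output
layers = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1]),   # hidden layer
    ([[1.0, -1.0]],             [0.0]),          # output layer
]
y = forward([1.0, 2.0], layers)
```

Note that x is only ever overwritten with the next layer's activations; no value ever flows back to an earlier layer, which is exactly the feedforward property.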
Why is a neural model needed?
A network of perceptrons is used to solve a problem. For example, there are inputs to the network consisting of raw pixel data from a scanned, handwritten image of a digit. We want the network to learn weights and biases so that the network correctly classifies the digit in the output. We want it so that if there is a small change in the weighting, there is only a small change in the corresponding output from the network. This then makes learning possible.
How does a feedforward neural network work?
In its simplest form, a feedforward neural network is a single-layer perceptron. In this model, a series of inputs is taken and each is multiplied by its weight. The weighted input values are added to obtain a sum. If this sum is above a specified threshold (usually set to zero), the output is 1; if it falls below the threshold, the output is -1. The single-layer perceptron is very often used for classification tasks.
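This threshold model can be written directly in Python; a minimal sketch with illustrative inputs and weights:

```python
def perceptron(inputs, weights, threshold=0.0):
    """Single-layer perceptron: weighted sum of the inputs, then a hard threshold."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else -1

perceptron([1.0, 0.5], [0.6, 0.4])    # 0.6 + 0.2 = 0.8 > 0  → 1
perceptron([1.0, 0.5], [-0.6, 0.4])   # -0.6 + 0.2 = -0.4 ≤ 0 → -1
```

Training such a perceptron then amounts to adjusting the weights until this thresholded sum classifies the training examples correctly.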