First-order logic

What is first-order predicate logic?

First-order logic (FOL) is a mathematically based method for assigning properties to objects. Each sentence or statement is decomposed into its subject and its predicate. In first-order predicate logic, the relationship between them is expressed as P(x), where P stands for the predicate and the variable x for the corresponding subject.

It should be noted that predicates in first-order logic refer to only one subject at a time. Unlike in linguistics, a predicate is not necessarily a verb, but merely provides relevant information about the subject in question. Predicates also allow relations to be established, for example through comparisons (greater/smaller than, equal to, etc.).

In first-order predicate logic, the quantifiers are represented by the symbols ∀ (universal quantifier; read: "for all") and ∃ (existential quantifier; read: "there exists" or "for some"). Statements in first-order logic are written with mathematical symbols and consist of:

  • Terms [human, animal, plant, etc.]: names of objects. In the linguistic sense, these can be both objects and subjects!
  • Variables [a, b, c, ..., x, y, z, etc.]: these stand for objects that are not yet known.
  • Predicates [red, fragrant, is a flower, etc.]: these stand for properties and relations that are linguistically comparable to verbs or attributes.
  • Quantifiers [∀, ∃]: these allow statements about sets of objects for which a predicate applies.
  • Connectives and relations [∧ (and), ∨ (or), → (implies), ⇒ (follows from), ⇔ (is equivalent to), = (equality)]: these allow conclusions about the relations between statements.

Example of first-order predicate logic

The rose is red.

P(x) = red(rose)

The rose is fragrant.

P(x) = fragrant(rose)

The rose is a flower.

P(x) = Flower(rose)

We thus learn about the rose that it is red, is fragrant and is a flower.

Applying the universal quantifier ∀ yields:

All roses are red.

All roses are fragrant.

All roses are flowers.

However, not all roses are red and not every rose is fragrant.

That all roses are flowers, on the other hand, is a true statement.

∀x: Rose(x) → Flower(x)

So that the other two statements can also be expressed correctly, the existential quantifier is now used instead.

From the two statements "All roses are red." and "All roses are fragrant.", using ∃ we obtain:

"Some roses are red." and "Some roses are fragrant."

To translate this into first-order formulas, we define a variable x, a predicate Rose(x) stating that x is a rose, and predicates red(x) and fragrant(x) stating that x is red or fragrant, respectively:

∃x: Rose(x) ∧ red(x)

and

∃x: Rose(x) ∧ fragrant(x)

This states that there exist roses that are red and roses that are fragrant. In everyday language, such a statement also suggests that there are roses that are not red or that are not fragrant.
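
As a minimal sketch (the toy domain and object names below are invented purely for illustration), these quantified statements can be checked in Python by interpreting ∀ as all() and ∃ as any() over a small domain:

```python
# Hypothetical toy domain of objects and their properties
objects = [
    {"name": "rose1",  "is_rose": True,  "red": True,  "fragrant": False},
    {"name": "rose2",  "is_rose": True,  "red": False, "fragrant": True},
    {"name": "tulip1", "is_rose": False, "red": True,  "fragrant": False},
]

# ∀x: Rose(x) → red(x) corresponds to all() with an implication
all_roses_are_red = all(o["red"] for o in objects if o["is_rose"])

# ∃x: Rose(x) ∧ red(x) corresponds to any() with a conjunction
some_roses_are_red = any(o["is_rose"] and o["red"] for o in objects)
some_roses_are_fragrant = any(o["is_rose"] and o["fragrant"] for o in objects)

print(all_roses_are_red)        # False - not every rose in the domain is red
print(some_roses_are_red)       # True  - at least one rose is red
print(some_roses_are_fragrant)  # True  - at least one rose is fragrant
```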

Pathfinding

What is Pathfinding?

In computer science, pathfinding refers to algorithms that find the optimal path between two or more points. The optimal path can be defined on the basis of different parameters.

Which path is optimal always depends on the respective application. In addition to the shortest path, for example, the most cost-effective path can also be defined as the optimum. Other constraints, such as avoiding certain waypoints or route sections, can also influence the determination of the optimal path.

This behaviour is familiar from route planners, where motorways or toll roads can be avoided.

Algorithms

Depending on the requirements of the objective, various algorithms can be used within the framework of pathfinding.

The A* algorithm is what is known as an informed search algorithm, which determines the shortest path in a graph between two points using an estimation function (heuristic). Starting from the starting point, the search examines the node that is likely to lead quickly to the destination or reduce the distance to the destination node. If examining a node does not lead to the goal, it is marked as such and the search is continued with another node. In this way, the algorithm determines the shortest path to the destination node.

Another algorithm for determining the shortest path is the Dijkstra algorithm. It is not based on a heuristic, but on the procedure of always extending, starting from the start node, the node with the shortest partial route, so that the sum of the shortest partial routes yields the overall shortest total path. The procedure thus always works on the most promising partial solution.
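
A minimal sketch of this procedure with a priority queue, under the assumption of non-negative edge weights (the example graph is invented):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start to every reachable node; graph maps
    node -> list of (neighbour, non-negative edge weight)."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # outdated queue entry
        for neighbour, weight in graph.get(node, []):
            new_dist = d + weight
            if new_dist < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return dist

# Hypothetical example graph
graph = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)], "D": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```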

In contrast to the procedures described so far, the Bellman-Ford algorithm (or Moore-Bellman-Ford algorithm) also allows graphs with negative edge weights to be considered when determining the shortest paths. This means that the costs (e.g. time) between two nodes can also be negative. However, it must be ensured that cycles with a negative total weight are excluded, as otherwise the path could be shortened again and again by repeatedly traversing the negative edges. All the approaches considered so far optimise the path as seen from a particular node.

The min-plus matrix multiplication algorithm, on the other hand, searches for the optimum between all pairs of nodes.

The same applies to the Floyd-Warshall algorithm. The method makes use of the dynamic programming approach: in order to find the optimum, the overall problem is divided into similar subproblems, and by solving and storing these, the overall problem is optimised. The algorithm is split into two parts, with Floyd's part calculating the shortest distances between nodes, while Warshall's part is responsible for constructing the shortest paths. Negative edge weights can also be handled by this algorithm.
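
A compact sketch of the dynamic-programming idea for all pairs of nodes (the distance matrix is an invented example; negative edge weights are allowed as long as there are no negative cycles):

```python
INF = float("inf")

def floyd_warshall(dist):
    """dist[i][j] is the direct edge weight (INF if absent, 0 on the diagonal).
    Returns the matrix of shortest distances between all pairs of nodes."""
    n = len(dist)
    best = [row[:] for row in dist]  # copy so the input is not modified
    for k in range(n):               # allow node k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if best[i][k] + best[k][j] < best[i][j]:
                    best[i][j] = best[i][k] + best[k][j]
    return best

# Hypothetical 4-node example with one negative edge (no negative cycle)
dist = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   -1,  INF, 0],
]
print(floyd_warshall(dist))
```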

While some methods optimise the path between two nodes, others optimise all nodes in relation to each other, which naturally increases the required computing power. Therefore, in addition to the objective, the demand on resources is also a decisive factor when choosing an algorithm. Besides computing power, the required storage space and the runtime can also be relevant variables when choosing a method.

For some of the described methods, there are ready-made implementations that can be integrated into your own solutions. For example, the NetworkX library can be used in Python as a framework for pathfinding problems.
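
For example, shortest paths could be computed with NetworkX roughly as follows (the node names and weights form an invented example graph):

```python
import networkx as nx

# Hypothetical weighted example graph
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 2), ("B", "C", 1), ("A", "C", 5), ("C", "D", 1), ("B", "D", 4),
])

# Dijkstra (the default edge attribute is "weight")
print(nx.dijkstra_path(G, "A", "D"))

# A* with a trivial heuristic (always 0, which reduces it to Dijkstra)
print(nx.astar_path(G, "A", "D", heuristic=lambda u, v: 0))

# Bellman-Ford, which would also tolerate negative edge weights on directed graphs
print(nx.bellman_ford_path(G, "A", "D"))
```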

Examples for the application of pathfinding in practice

The possible applications of pathfinding are manifold. They range from simple and complex controls and route planning in the computer game sector, to solving transport logistics problems, to optimising routing problems in networks. To support such optimisation solutions, sub-areas of artificial intelligence can be implemented.

As mentioned at the beginning, the optimum to be achieved can be defined individually. Limiting the costs can mean minimising time, money, intermediate stops or many other parameters.

PyTorch

PyTorch is an open-source framework for machine learning, based on the programming language Python and the Torch library. It was developed in 2016 by Facebook's artificial intelligence research team to make the development and deployment of research prototypes more efficient. PyTorch computes with tensors, which can be accelerated by graphics processing units (GPUs). Over 200 different mathematical operations can be used with the framework.
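
A minimal sketch of computing with tensors, moving them to a GPU only if one is available:

```python
import torch

# Create tensors and perform a few of the built-in mathematical operations
a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.rand(2, 2)

device = "cuda" if torch.cuda.is_available() else "cpu"
a, b = a.to(device), b.to(device)

c = a @ b                 # matrix multiplication
d = torch.sin(c) + a.T    # element-wise operation and transpose
print(d.shape, d.device)
```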

Today, PyTorch is one of the most popular platforms for research in the field of deep learning and is mainly used for artificial intelligence (AI), data science and research. PyTorch is becoming increasingly popular because it makes it comparatively easy to create models for artificial neural networks (ANNs). PyTorch can also be used for reinforcement learning. It can be downloaded free of charge as open source from GitHub.

What is PyTorch Lightning?

PyTorch Lightning is an open-source library for Python that provides a high-level interface for PyTorch. The focus is on flexibility and performance, to enable researchers, data scientists and machine learning engineers to create suitable and, above all, scalable ML systems. PyTorch Lightning is also available as open source for download from GitHub.
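
A minimal sketch of how a training loop might be organised with PyTorch Lightning (model, data and hyperparameters are invented purely for illustration):

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class LitRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Hypothetical random data just to make the example runnable
dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
trainer = pl.Trainer(max_epochs=1)
trainer.fit(LitRegressor(), DataLoader(dataset, batch_size=16))
```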

What are the features and benefits of PyTorch?

Dynamic graph calculation

The network behaviour can be changed on the fly, without having to execute the complete code for this.
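
As a small illustration (the module below is invented), the forward pass can contain ordinary Python control flow whose behaviour depends on the data, and the computation graph is rebuilt on every call:

```python
import torch

class DynamicNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        # The number of layer applications depends on the input itself,
        # so the computation graph can differ from call to call.
        for _ in range(int(x.abs().sum().item()) % 3 + 1):
            x = torch.relu(self.linear(x))
        return x

net = DynamicNet()
print(net(torch.randn(1, 4)).shape)
```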

Automatic differentiation

Using backward passes in neural networks, the derivatives of functions are calculated automatically.
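
A minimal sketch: the gradient of a simple function is obtained via a backward pass with autograd:

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3 + 2 * x          # y = x^3 + 2x

y.backward()                # backward pass computes dy/dx via the chain rule
print(x.grad)               # dy/dx = 3x^2 + 2 = 14 at x = 2
```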

User-friendly interface

This interface is called TorchScript and makes seamless switching between eager mode and graph mode possible. It offers functionality, speed, flexibility and ease of use.
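
A small sketch of compiling an eager-mode function to TorchScript (the function itself is an invented example):

```python
import torch

def scale_positive(x, factor: float):
    # Ordinary Python control flow that TorchScript can compile
    if x.sum() > 0:
        return x * factor
    return x

scripted = torch.jit.script(scale_positive)   # compile to TorchScript
print(scripted(torch.ones(3), 2.0))
```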

Python support

Since PyTorch is based on Python, it is easy to learn and program, and all libraries compatible with Python, such as NumPy or SciPy, can be used. Furthermore, uncomplicated debugging with Python tools is possible.

Scalability

It is well supported on the major cloud platforms and is therefore easy to scale.

Dataset and DataLoader

It is possible to create your own dataset for PyTorch to store all the necessary data. The dataset is then managed by means of a DataLoader. Among other things, the DataLoader can iterate over the data, manage batches and transform the data.
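
A minimal sketch of a custom dataset and the corresponding DataLoader (the random data is purely illustrative):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    """Wraps features and labels so the DataLoader can index them."""
    def __init__(self, features, labels):
        self.features = features
        self.labels = labels

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

dataset = MyDataset(torch.randn(100, 8), torch.randint(0, 2, (100,)))
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for batch_features, batch_labels in loader:   # iterate over batches
    print(batch_features.shape, batch_labels.shape)
    break
```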

In addition, PyTorch can export learning models in the Open Neural Network Exchange (ONNX) standard format and has a C++ front-end interface option.
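
A minimal sketch of such an export (the model and file name are invented for illustration):

```python
import torch

# Hypothetical small model and an example input defining the expected shape
model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU(), torch.nn.Linear(4, 1))
dummy_input = torch.randn(1, 8)

# Export to the ONNX standard format
torch.onnx.export(model, dummy_input, "model.onnx")
```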

What are examples of the use of PyTorch?

  • Object detection
  • Segmentation (semantic segmentation)
  • LSTM (Long Short-Term Memory)
  • Transformer

PyTorch vs. Tensorflow

Tensorflow is also a deep learning framework and was developed by Google. It has been around longer than PyTorch and therefore has a larger developer community and more documentation. Both frameworks have their advantages and disadvantages, as they are intended for different projects.

While Tensorflow defines its computational graphs statically, PyTorch takes a dynamic approach. The dynamic graphs can be manipulated in real time in PyTorch, whereas in Tensorflow this is only possible at the end. Due to its simple handling, PyTorch is therefore particularly suitable for uncomplicated prototyping and research work. Tensorflow, on the other hand, is particularly suitable for projects that require scalable production models.

PyTorch vs. scikit-learn

Scikit-learn (also called sklearn) is a free library for Python that specialises in machine learning. It offers a range of classification, regression and clustering algorithms, such as random forests, support vector machines or k-means. Scikit-learn enables efficient and straightforward data analysis and is particularly suitable for classical machine learning algorithms, but is rather unsuitable for end-to-end training of deep neural networks, for which PyTorch, on the other hand, is very well suited.
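
As a small illustration of the scikit-learn workflow mentioned above (the random data merely stands in for a real dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical random data standing in for a real dataset
X = np.random.rand(200, 5)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)                 # classic fit/predict interface
print(clf.score(X_test, y_test))          # accuracy on the held-out data
```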

Pythia (Software)

The computer programme Pythia is used in particle physics, where it is used to simulate or generate collisions at particle accelerators such as those at CERN. It is also the most frequently used Monte Carlo event generator. The calculations are based on the algorithms of probability theory.

With the Pythia software, random samples are drawn from a distribution via random experiments. This is particularly useful for finding out which signals would become noticeable at the particle accelerator if physics models deviate from the Standard Model. It therefore makes sense to simulate the models numerically in advance with the Pythia software.

For which simulations is the Pythia software used?

A classic field of application for the Pythia software is particle physics with its most diverse areas of application. For example, if a physics model predicts a new particle, assumptions can be made in advance. Before the experiment phase, they help to obtain clues about the signals that are sought in the experiment. If necessary, the detectors can be optimised for this in the simulation.

Basic considerations that can be made in advance:

  • Which particles can be produced and how should they decay in the model?
  • How complex and limited is the measurability of decay products?

The simulation with the Pythia software produces a clear signal, which describes the number and momenta of the particles emerging from the collision. The aim is for the detectors to detect the particles created as part of the acceleration experiment.

Experiments in particle accelerators are among the most important sources for discovering new physical phenomena. The difficulty is that the experiments have become bigger and bigger over the decades. This is where programmes like the Pythia software help in the search for new particles. The Pythia software has an extensive portfolio of settings and functionalities and enables the simulation of different scenarios when exploring physical events.

In a typical application, protons are accelerated to enormous speeds in particle accelerators and collide with each other in one of the detectors. In the process, the energy contained in the particles is converted into new particles within an extremely short time. When these hit the detectors, the data analysis begins: from here on, traces are evaluated and tracks are read in order to reconstruct the events of the collision.

The amounts of data generated in the process are exorbitant. For years, physics has therefore been using artificial intelligence to classify, sort and order them.

Predictive Maintenance

Predictive maintenance (PdM) is a further development of the condition-based approach, as it is about more than just analysing the actual condition. The goal is no longer just the early detection of degenerative processes, but an additional, deeper diagnosis in order to predict expected anomalous process behaviour.

Predictive maintenance thus helps to estimate when machine maintenance should be carried out. Compared to routine or time-based maintenance strategies, PdM can save costs.