We offer the development and implementation of machine learning and deep learning solutions. The method we use in a specific case depends on the task, the available and obtainable data, and the required quality (precision) of the solution.

The basic functionality of machine learning and deep learning can be described as follows:

  • A system with a specific structure and parameters (the model) is selected or developed.
  • An algorithm (the training algorithm) uses data to determine the best possible values for the model parameters. This process is called “model training”.
  • The parameter optimisation yields the “trained model”, frequently referred to simply as “the algorithm”; this is a different kind of algorithm than the training algorithm.
  • Finally, the trained model is used to perform the task (a minimal code sketch follows this list).
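
As a minimal illustration of these steps, a short Python sketch; the scikit-learn model, the dataset and the task are assumptions chosen only for the example, not a recommendation for a concrete project.

    # Sketch of the workflow: select a model, train it on data, use it for the task.
    # Model, dataset and task are illustrative assumptions.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000)   # the model: structure and (untrained) parameters
    model.fit(X_train, y_train)                 # the training algorithm optimises the parameters
    print(model.score(X_test, y_test))          # the trained model performs the task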

On the one hand, “AI” refers to the entire field, commonly described as the “imitation of intelligent human behavior”; on the other hand, “AI” stands for a number of earlier methods, such as rule-based systems, which differ in principle from machine learning.

Machine learning refers both to a set of concrete methods and models, such as regressions and decision trees, and to the approach of programming (“training”) models with data.

Deep learning stands for a number of concrete models, the typical ones being neural networks: structures loosely modelled, on a very small scale, on biological nervous systems. Deep learning solutions can far exceed the precision of classical machine learning systems, but require much more data and computing power for training.

Machine learning applications arose in the 1980s and deep learning around 2010.

Machine Learning

In machine learning, solutions are “trained” rather than explicitly programmed. It is fair to say:

A new paradigm has emerged.

Training is based on the information contained in the data.

The quality of a solution depends decisively on the data, their information content and their preparation. Preparation refers to the pre-processing of raw data, which produces “new” data with properties suitable for training.
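
As a hedged sketch of such preparation (the raw values and the chosen transformations are assumptions for illustration): numeric features are scaled and categorical values are encoded so that the resulting data are suitable for training.

    # Illustrative data preparation: scale numeric features, encode categorical ones.
    # The raw values and transforms are example assumptions.
    import numpy as np
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    numeric_raw = np.array([[1200.0], [4500.0], [980.0]])      # e.g. raw measurements
    categorical_raw = np.array([["red"], ["blue"], ["red"]])   # e.g. a product attribute

    numeric_prepared = StandardScaler().fit_transform(numeric_raw)
    categorical_prepared = OneHotEncoder().fit_transform(categorical_raw).toarray()

    prepared = np.hstack([numeric_prepared, categorical_prepared])   # the “new” training data
    print(prepared)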

Training methods

There are three basic methods for training models (a short code sketch follows the list). These are:

  • Supervised Learning: Model and training algorithm get the data and the expected results (labels). Typical applications are prediction and classification.
  • Unsupervised Learning: Model and training algorithm receive data but no labels. A typical application is clustering.
  • Reinforcement Learning: No training data is used. The model learns from the environment, which is typically simulated. Frequently used in robotics; it was also used by DeepMind in AlphaGo.
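
The following sketch contrasts the first two methods on the same data; the dataset and the chosen algorithms are assumptions for illustration.

    # Supervised vs. unsupervised training on the same data; choices are illustrative.
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    # Supervised: the training algorithm receives the data and the labels y.
    classifier = DecisionTreeClassifier().fit(X, y)

    # Unsupervised: only the data is given; the algorithm finds clusters on its own.
    clustering = KMeans(n_clusters=3, n_init=10).fit(X)

    print(classifier.predict(X[:3]))   # predicted classes
    print(clustering.labels_[:3])      # cluster assignments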

Deep Learning


Compared to machine learning, deep learning has the following advantages and disadvantages:

Advantages

  • Higher performance of the solutions; the achievable accuracy is theoretically unlimited.
  • Features are learnt automatically from the data.
  • Makes lower demands on data preparation.

Disadvantages

  • A major hurdle is the very large amount of data needed for training: hundreds of thousands or even millions of examples may be required.
  • Training is very computationally intensive.

Deep Neural Networks (DNNs)


Commonly used deep learning models are:

Feed-forward Networks. Do not have (feedback) loops in the model structure (a minimal sketch follows this overview).

Convolutional Neural Networks (CNNs). Have different structures and functionality in different layers and are very effective for image recognition.

Recurrent Neural Networks (RNNs). Have feedback loops and therefore require internal storage (memory). RNNs are used for tasks with dynamic, sequential behavior.

Generative Adversarial Networks (GANs). Two models, which do not necessarily have to be deep neural networks, train each other by “playing against each other”. A related adversarial idea, self-play, was used by DeepMind in the development of AlphaGo.
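
As a minimal sketch of the simplest of these architectures, a feed-forward network; the use of PyTorch and the layer sizes are assumptions chosen only for illustration.

    # Minimal feed-forward network: data flows strictly forward, no feedback loops.
    # Framework choice and layer sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(16, 32),
        nn.ReLU(),
        nn.Linear(32, 4),
    )

    x = torch.randn(8, 16)   # a batch of 8 example inputs with 16 features
    logits = model(x)        # one forward pass, no recurrence, no internal memory
    print(logits.shape)      # torch.Size([8, 4])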

Transfer Learning

Transfer learning takes an already trained model (its optimised parameters) as the starting point for training with different data, i.e. for a different task. Transfer learning has great practical value because it reduces the development effort, but it usually cannot compete with the precision of models “hand-crafted” for a specific task.
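
A hedged sketch of the idea (the pretrained ResNet-18 backbone and the five target classes are assumptions for illustration): the optimised parameters are reused and frozen, and only a new output layer is trained for the new task.

    # Transfer-learning sketch: reuse a pretrained model as the starting point
    # and retrain only a new output layer. Backbone and class count are assumptions.
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet18(weights="DEFAULT")        # trained model = starting point

    for param in backbone.parameters():                  # freeze the optimised parameters
        param.requires_grad = False

    backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # new output layer for the new task

    # Only the new layer's parameters are now trained with the task-specific data.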