In our AI projects, we develop and implement AI solutions based on machine learning and deep learning models and methods. The choice of model depends on the respective use case or task: for some use cases, such as classification tasks, certain machine learning models are the right choice; for others, deep learning models are more suitable.
With our projects, we optimize the quality of AI solutions and their financial benefits, e.g. in terms of ROI (return on investment).
The core components of an AI project are:
- The AI project process, an innovation process adapted to the development of use case solutions
- The standard data science project process CRISP-DM and elements from the project process ASUM-DM
- MATLAB as a development platform
- Locally and internationally recruited freelancers and the support of the consulting department of The MathWorks, the developer of MATLAB
The AI projects are divided into the following phases:
- Use case definition and proof of concept (PoC)
- Solution development
- Implementation
AI project process
The AI project process is an innovation process in which use cases take on the role of ideas in innovation processes. The process is enriched with the data science project process CRISP-DM and elements of ASUM-DM.
In the AI project process, the use cases serve as sources for the creation of projects (use case projects).
The projects are managed as a project portfolio, i.e. they compete for the same resources. Resources are preferentially allocated to the better projects (use cases). "Bad" use cases are detected and stopped in the process; the sooner, the better.
Projects are evaluated in terms of feasibility, quality of solutions and financial benefits along the process. Multidisciplinary teams and management decide on the next steps for individual projects at several gates in the process.
The first phase of the project process consists of agreeing the use cases with the customers, examining the data necessary for training AI models, designing alternative solutions, and proving the basic technical feasibility of the solutions by creating proofs of concept (PoCs).
Our projects start with a workshop. During the workshop, employees of the customer present the company, its machines, plants, processes, products and services as well as their ideas regarding AI. The unicoKI team introduces basic and practical aspects of machine learning and deep learning, including applications in the industry sector of the customer.
We have over a decade of experience in managing technology innovations. That is one reason why we use the innovation process. We will also use creativity techniques if they benefit use cases and projects.
Use cases are fundamental to the value of AI solutions because they determine the project results to be achieved and thus the benefit for the customer.
Use cases are identified and refined during and after the workshops. These can be improvements to existing operations, e.g. the prevention of machine and plant failures through predictive maintenance. New use cases may also arise; these are often based on new data sources and on processing opportunities opened up by machine learning and deep learning. Once the use cases have been defined, AI solution approaches and their potential benefits are identified and presented within the company.
Design and data
Based on the selected AI solution approaches, suitable models (machine learning and/or deep learning models) are preselected for each use case, the data to be used for training the models is determined, and an inventory is made of the data that is available and the data that could still be procured. The data is collected and processed in two phases: first for the PoCs and then for the final solutions.
In our case, a PoC (proof of concept) is a simplified implementation of a use case solution that demonstrates that a solution is technically possible. A PoC is not as detailed as a prototype; it can be considered a simplified version of a solution development (see below) and does not use the complete data set used in solution development for training. unicoKI develops and trains PoC models on laptops. For complex models and/or high data volumes, model training is done with high-performance GPUs in data centers.
If a PoC is successful, a business case is created. The business case captures all relevant payments (cash in and out), such as savings resulting from the use case, revenue increases or new revenues, internal and external project costs, hardware (e.g. sensors and edge systems), maintenance of the solution and, where applicable, an IoT platform and computing time in data centers. In one example project, the business case showed an investment of about €600,000 (reached in the 4th quarter) and a cumulative cash flow (CCF) of about €3 million after five years (20th quarter). The results of PoCs and business cases are presented to management, which decides on the way forward.
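The mechanics of such a business case can be sketched in a few lines of code. The quarterly figures below are purely illustrative, chosen only to roughly resemble the example project above; they are not actual project data:

```python
# Hypothetical quarterly net cash flows (in k€) for an AI use case project:
# negative in early quarters (project costs, hardware), positive afterwards.
cash_flows = [-200, -200, -150, -50, 50, 150, 200, 200] + [250] * 12  # 20 quarters

def cumulative_cash_flow(flows):
    """Return the running cumulative cash flow (CCF) per quarter."""
    total, ccf = 0, []
    for f in flows:
        total += f
        ccf.append(total)
    return ccf

ccf = cumulative_cash_flow(cash_flows)

# Peak funding requirement (most negative CCF) and break-even quarter.
investment = -min(ccf)
break_even = next(q for q, v in enumerate(ccf, start=1) if v >= 0)

print(f"Peak investment: {investment} k€ in quarter {ccf.index(min(ccf)) + 1}")
print(f"Break-even in quarter {break_even}; CCF after 5 years: {ccf[-1]} k€")
```

With these illustrative flows, the investment peaks at €600k in the 4th quarter and the CCF reaches €3 million after 20 quarters, matching the shape of the example above.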
In the solution development phase, solutions that meet the requirements of the real use cases as accurately as possible are created on the basis of the PoCs. These solutions fulfil industrial or commercial requirements. CRISP-DM (Cross-Industry Standard Process for Data Mining) is an application-neutral standard process for development projects in the data science world. We use CRISP-DM at this stage, although we present the process graphically in a slightly different way.
Data collection and data preparation
The solution development begins with the collection of data suitable for training the selected models. This data may already be available to the company and its customers; it is also possible that parts of the data need to be obtained externally. Once the data has been gathered, the complex work of data preparation begins. The type of data preparation and the related effort differ between machine learning and deep learning.
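A minimal sketch of the kind of data preparation involved, assuming a single sensor channel: cleaning, normalisation and windowing. This is a hypothetical illustration of the steps, not the actual preparation pipeline used in a project:

```python
import statistics

def prepare_sensor_data(readings, window=4):
    """Minimal data-preparation sketch: drop missing values,
    z-score normalise, then cut sliding windows as training samples."""
    # 1. Cleaning: remove missing readings (None).
    clean = [r for r in readings if r is not None]
    # 2. Normalisation: zero mean, unit variance.
    mean, stdev = statistics.mean(clean), statistics.stdev(clean)
    normed = [(r - mean) / stdev for r in clean]
    # 3. Windowing: overlapping windows become individual training samples.
    return [normed[i:i + window] for i in range(len(normed) - window + 1)]

samples = prepare_sensor_data([1.0, None, 2.0, 3.0, 2.5, None, 4.0, 3.5], window=4)
print(len(samples), len(samples[0]))  # number of windows, window length
```

Real pipelines add steps such as outlier handling, resampling and labelling; for deep learning, much of the feature engineering shown here is replaced by the network itself.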
Model training, validation and testing
Models, or “algorithms”, are the structures or computation schemes whose properties are determined during training. Unlike in conventional programming, these properties are generated from the processed data by the training algorithm. Subsequently, the trained models are validated and tested. Where performance goals are not achieved, further data is collected and/or preprocessed.
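The division of the prepared data into training, validation and test sets can be sketched as follows. This is a generic Python sketch with hypothetical proportions (60/20/20), not the specific MATLAB workflow unicoKI uses:

```python
import random

def split(data, train=0.6, val=0.2, seed=0):
    """Shuffle and split data into training, validation and test sets.
    The remainder after train and val goes to the test set."""
    rng = random.Random(seed)  # fixed seed for reproducible splits
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

data = list(range(100))  # stand-in for 100 prepared training samples
train_set, val_set, test_set = split(data)
print(len(train_set), len(val_set), len(test_set))  # 60 20 20
```

The training set fits the model's parameters, the validation set guides model selection and tuning, and the test set gives an unbiased final performance estimate.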
The quality of the data and data preparation has a decisive influence on the success of the training and thus the quality of the solution.
unicoKI performs training, validation and testing with the development platform MATLAB, unless there are very specific reasons for using programming languages such as Python and libraries such as TensorFlow, Keras or Caffe. For complex models and/or large amounts of data, including big data, the MATLAB Parallel Server, a virtual implementation of MATLAB for cloud infrastructures such as AWS or Microsoft Azure data centers, is used. This provides access to GPUs with the highest performance and can shorten development times by orders of magnitude.
The solutions can be implemented in a data center or on-premise. Implementing on cloud infrastructures in data centers has advantages over on-premise implementations. A local implementation is typically used when the required latency cannot be met using data centers or when the customer does not want a data center solution.
The elements of a typical implementation are the following:
- Sensors and actuators on machines and systems
- I/O systems for connecting sensors and actuators
- Local networks that ensure the transfer of data between I/O systems and an IoT platform
- The IoT platform that connects the customer’s location(s) to data centers while ensuring security
- The machine learning and deep learning algorithms running e.g. in data centers with high-performance GPUs.
- MATLAB can generate CUDA code for NVIDIA GPUs, including parallel GPUs, from the development versions of the solutions. The solutions can then be deployed on single or multiple NVIDIA GPUs, on-premise or in data centers.
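Conceptually, the deployed solution runs an inference loop over the incoming sensor data. The sketch below illustrates this with a trivial deviation score standing in for the trained model; the threshold and the simulated stream are hypothetical:

```python
# Hypothetical anomaly threshold; in practice this would come from the
# trained model's validation results.
THRESHOLD = 3.0

def score(window):
    """Stand-in for the trained model: maximum deviation from the
    window mean. A real deployment would call the generated model code."""
    mean = sum(window) / len(window)
    return max(abs(x - mean) for x in window)

def monitor(stream, window_size=4):
    """Sliding-window inference loop: collect readings from the I/O
    system and record an alert whenever the score exceeds the threshold."""
    window, alerts = [], []
    for t, value in enumerate(stream):
        window.append(value)
        if len(window) > window_size:
            window.pop(0)  # keep only the most recent readings
        if len(window) == window_size and score(window) > THRESHOLD:
            alerts.append(t)
    return alerts

# Simulated sensor stream with a fault spike at index 6.
alerts = monitor([1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 9.0, 1.0, 1.1, 1.0])
print(alerts)
```

In a production setting, the stream would arrive via the I/O systems and IoT platform described above, and alerts would trigger maintenance actions rather than just being collected.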
WORKSHOPS & SEMINARS
We conduct interactive workshops with employees of our customers at the beginning of the AI projects. In the workshops, the unicoKI team presents various topics on machine learning and deep learning, depending on the level of knowledge and the needs of the customer's employees. The employees present the aspects of the company that are relevant to the AI project. These usually include machines and systems, processes, and products and services.
We conduct seminars that are geared to the needs and interests of our customers. Topics for seminars can include:
- Introduction to the basics and applications of machine learning and deep learning
- Deeper insights into theory and applications
- Development and implementation aspects
- Management of technology innovations, including the AI project process we use in AI projects
We offer advice in two areas:
Strategic AI Consulting
We address fundamental questions about the use of AI technologies in enterprises and organisations. We examine the technologies used in the company at a high level, determine the qualitative potential of AI and provide rough estimates of the quantitative benefits. On this basis, AI projects can then be initiated. We provide information about the status of AI in Germany at no additional cost.
AI expert advice
We advise on specific questions from customers, e.g. on certain industry-specific applications or technologies.