With Industry 4.0, machines can be connected to their manufacturing processes and can react faster and smarter to changing conditions in a factory. Previously, Internet of Things (IoT) devices could only collect data and send it to the cloud for analysis. However, the increasing computing capacity of today's devices allows them to perform complex computations on-device, giving rise to edge computing. Edge devices are a fundamental component of modern, distributed, real-world artificial intelligence (AI) systems in Industry 4.0 environments. Edge computing thus extends cloud computing by bringing services closer to the edge of the network, enabling a new variety of AI services and machine learning (ML) applications. However, there is a large gap between designing and training an ML model, potentially in the cloud, and turning it into an ML service that can be deployed and consumed on the edge. This article presents an ML workflow based on ML operations (MLOps) over the Thinger.io IoT platform to streamline the transition from model training to model deployment on edge devices. The proposed workflow comprises several elements, including an ML training pipeline, an ML deployment pipeline, and an ML workspace. The article also illustrates the ease of designing and deploying the proposed solution in a real environment, where an anomaly detection service is implemented to detect outliers in temperature and humidity measurements. Performance tests of the ML pipeline steps and of the ML service throughput on the edge indicate that the workflow adds minimal overhead, providing a more reliable, reusable, and productive environment.
Subjects: Computer Science
Keywords: edge computing; Internet of Things (IoT); machine learning operations (MLOps); Thinger.io