Published on: December 14, 2021
Machine vision systems typically collect large amounts of data that need to be stored and analyzed in order to make a decision. To do this effectively and at the lowest possible cost, businesses often turn to cloud computing. But the cloud is not always the best option: for critical inspection processes in particular, an AI-on-the-edge approach is often better suited.
Artificial Intelligence (AI) and machine vision projects are often associated with cloud technology. In industrial applications, cloud computing allows plants, equipment and machines equipped with sensors to collect and exchange massive amounts of operational data. The cloud is not only the place where data is stored, but also where it is processed and analyzed. The cloud has also made AI and machine vision more accessible at an acceptable cost, and as upload speeds improved, cloud-based AI and machine vision applications became more practical.
Edge computing brings the computing power closer to where the data is collected
However, in many quality control applications for the food, metallurgical, chemical and other industries, machine vision systems are not running in the cloud, although they are developed in the cloud. Instead, these applications choose a local processing approach. This is often referred to as edge computing, because it brings the computing power closer to where the data is collected, at the edge of the network. In other words, it shortens the distance between the collected data and the processing.
If you want to visually monitor processes or verify product quality by means of AI-driven machine vision, then edge processing is often preferred over cloud computing.
There are at least three reasons for that:
1. Low latency. A visual verification needs to be followed by a decision (for example: quality ‘OK’ or ‘not OK’). Sending a collected image to the cloud for processing takes time: there is always some degree of non-deterministic latency, the delay between the capture of the visual information by the camera or sensor and the cloud service provider’s response. For example, if your food sorting machine needs to inspect 100 objects per minute, a cloud service might never meet that reaction deadline. By processing your visual information locally, you do not lose time sending data to the cloud.
2. Availability. Critical production processes often need 24/7 availability of their machine vision capability. With cloud computing, there is always the risk that a lost network connection disrupts the production process. When machine vision processing happens locally, that risk is reduced.
3. Data privacy and security. Many businesses are not comfortable storing and processing their critical data with a third-party cloud service provider. With an edge computing approach, your bits do not leave your building or local network, which makes data privacy and security much easier to implement and enforce.
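The latency argument above comes down to simple arithmetic: at 100 objects per minute, each inspection has a budget of at most 600 ms end to end. The sketch below makes that explicit; the capture, inference and round-trip times are illustrative assumptions, not measurements.

```python
# Hypothetical latency-budget check for the sorting-machine example above.
# The capture, inference and network round-trip times are illustrative
# assumptions, not measured values.

OBJECTS_PER_MINUTE = 100
budget_ms = 60_000 / OBJECTS_PER_MINUTE  # 600 ms per object, end to end

def fits_budget(capture_ms, inference_ms, network_rtt_ms=0):
    """Return True if one inspection cycle fits within the per-object budget."""
    total = capture_ms + inference_ms + network_rtt_ms
    return total <= budget_ms

# Edge: no network hop, only local capture + inference.
edge_ok = fits_budget(capture_ms=30, inference_ms=80)

# Cloud: same capture and inference, plus an assumed (and variable)
# round-trip to the cloud service.
cloud_ok = fits_budget(capture_ms=30, inference_ms=80, network_rtt_ms=700)

print(edge_ok, cloud_ok)  # the edge cycle fits; this cloud round-trip does not
```

Note that the cloud round-trip is not only long but non-deterministic, so even an average that fits the budget offers no guarantee for the worst case.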
And yet, the benefits of cloud computing cannot be ignored. Scalability is just one of them. Using cloud computing, you can always add or reduce IT resources to meet your demands. Also, with cloud computing, you have no upfront costs for equipment or infrastructure. And last but not least, the cloud makes your application easily accessible and available from any device at any time and from any location.
So, is there a way to combine these advantages with the benefits of a local approach? There is.
Cloud and edge computing can perfectly work together and complement each other in machine vision projects. This way, you get the best of both.
For example, an AI algorithm can be developed and trained in the cloud, and then be deployed locally. The real-time detection happens on premise, while data that is useful for future insights is saved in the cloud. Conversely, captured data can be pre-processed on the edge device and sent to the cloud only when it is useful, or when a specific event triggers it. This way, low latency is maintained.
There are many ways in which the cloud and the edge device can work together, but the central idea is that what happens in the cloud should support what happens at the edge, and vice versa.
So, do you need an edge-based approach or a cloud solution? Kapernikov has no preference for either edge or cloud computing. But not every machine vision project is suited to the cloud, or to the edge. Your approach will depend on your application and expectations.
If fast reaction time or offline availability is not an issue, then cloud computing may be an option. But for critical inspection processes, where reaction time, privacy or offline availability are more important, an edge approach may be more suitable.
Before deciding on your approach, it is important to know your functional requirements and expectations. What does your production line look like? What critical reaction times do you need? What is the cost of deploying a machine vision system in the cloud versus locally?
Betting everything on the cloud is not very realistic in an AI-driven machine vision project. And if you need help deciding on a suitable approach, there is always a capable and knowledgeable Kapernikov expert to guide you through the selection process.