3 ways to improve your AI machine vision results

Published on: November 30, 2021

The success of an AI-driven machine vision system depends on the interaction of different factors. In this article, we discuss three ways to influence and improve AI machine vision performance.

The need for machine vision and artificial intelligence is increasing. As industry aims for higher productivity and stricter product quality requirements, the demand for high-performance quality inspection and automation has never been greater. No wonder: machine vision systems make better quality inspectors than humans. They make fewer errors, and they don't mind doing boring, repetitive tasks all day long.

But as production lines get faster and quality requirements more challenging, the question is whether your machine vision system will be able to keep up. Due in part to mediatized AI machine vision success stories, business leaders often have very high or even unrealistic expectations about their results. AI is expected to boost machine vision results to unseen heights. But in reality, the results are often less spectacular, if not disappointing.

How is this possible?

What defines the success of an AI machine vision project? And how can we improve our machine vision prediction results? If we know the answers to these questions, we might have more realistic expectations of our machine vision solutions.

In this article, we discuss three important success factors of AI-driven machine vision systems: latency, detection accuracy and data quality.

Latency versus speed

Let’s say the belt speed of your vegetable inspection line is 60 meters per minute. This means that your machine vision system needs to process the vegetables at this speed as well: capturing the image, communicating the image information to the processing device (at the edge or in the cloud), and then returning a command or decision based on that information (quality is OK or not OK).

The speed at which this is possible depends on the system’s latency: the delay between capturing the image and acting on the decision. Latency is influenced by several factors: the overall bandwidth, the performance of the hardware (sensor or camera), and the complexity of the analytical model (or: the number of parameters your model needs to take into account).
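To make the 60 m/min example concrete, here is a minimal sketch of a latency-budget calculation. The camera field of view of 0.5 m is an assumed value for illustration; the resulting budget is the time the whole capture–transfer–inference–decision chain has before the next frame arrives.

```python
BELT_SPEED_M_PER_MIN = 60
FIELD_OF_VIEW_M = 0.5  # assumed camera field of view along the belt

belt_speed_m_per_s = BELT_SPEED_M_PER_MIN / 60  # 1.0 m/s

# Time until a fresh field of view enters the frame: capture, transfer,
# inference and the decision must all fit inside this window.
latency_budget_s = FIELD_OF_VIEW_M / belt_speed_m_per_s

print(f"Latency budget per frame: {latency_budget_s * 1000:.0f} ms")
```

With these assumed numbers, the entire pipeline must complete in 500 ms per frame; a wider field of view or a slower belt relaxes the budget, a faster line tightens it.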

Fortunately, you can influence latency in several ways:

  • Optimize your software: Redesign and/or rewrite your software bottlenecks in a more efficient way.
  • Optimize your hardware: You may use more powerful vision hardware that captures and processes images at higher speeds.
  • Adjust your scope: By reducing the number of parameters your machine vision system needs to take into account, you reduce the complexity of your model, and as a result, increase detection speed.
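Before redesigning anything, it helps to know where the time actually goes. Below is a minimal sketch of per-stage timing with Python's standard `time.perf_counter`; the `capture` and `infer` functions are placeholders standing in for your real pipeline stages.

```python
import time

def timed(stage, fn, *args):
    """Run one pipeline stage and report its wall-clock time."""
    start = time.perf_counter()
    result = fn(*args)
    print(f"{stage}: {(time.perf_counter() - start) * 1000:.1f} ms")
    return result

# Placeholder stages standing in for real capture and inference code.
def capture():
    return "raw image"

def infer(image):
    return "OK"

image = timed("capture", capture)
decision = timed("inference", infer, image)
```

Timing each stage separately tells you whether your bottleneck is image capture, data transfer, or model inference, and therefore which of the three remedies above will pay off.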

Detection accuracy

The accuracy of your analytical model will determine your detection quality. Theoretically, we all want 100% accurate detection. In practice, this is usually not achievable, or developing a more accurate model is simply too costly.

So, the question remains: how accurate do you want your detection to be? And what is the added value of increasing your detection accuracy? Is it a big issue to have a few false negatives (bad quality you did not detect) or false positives (something you erroneously identified as bad quality)? In some industries, like the food industry, false negatives will have a bigger impact than in other sectors.
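The trade-off between false negatives and false positives can be made explicit with standard confusion-matrix metrics. The counts below are illustrative, assumed numbers for 1000 inspected items, where "positive" means the system flagged an item as bad quality.

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, recall and accuracy from inspection counts."""
    precision = tp / (tp + fp)  # of all items flagged bad, how many really were
    recall = tp / (tp + fn)     # of all truly bad items, how many we caught
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Illustrative counts for 1000 inspected items (assumed numbers).
precision, recall, accuracy = detection_metrics(tp=90, fp=10, fn=5, tn=895)
print(f"precision={precision:.2f} recall={recall:.2f} accuracy={accuracy:.3f}")
```

In a food inspection setting, recall (catching bad products) typically matters more than precision, since a missed defect reaches the customer while a false alarm only costs a good item.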

How to improve detection accuracy?

  • Improve the quality of your data used to train your machine learning model.
  • Use a more complex training network.
  • Tweak your model by adding data that gives more weight to specific use cases.
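The third point, giving more weight to specific use cases, can be as simple as oversampling a rare defect class in the training set. This is a minimal sketch with hypothetical labels; the oversampling factor is an assumption you would tune against a held-out validation set.

```python
import random

random.seed(0)

# Hypothetical training set: "crack" is a rare defect class we want
# the model to pay more attention to.
samples = [("img1", "ok"), ("img2", "ok"), ("img3", "crack"), ("img4", "ok")]

RARE_LABEL = "crack"
OVERSAMPLE_FACTOR = 3  # assumed factor; validate on a held-out set

extra = [s for s in samples if s[1] == RARE_LABEL] * (OVERSAMPLE_FACTOR - 1)
boosted = samples + extra
random.shuffle(boosted)

print(len(boosted))  # 4 original samples + 2 extra copies of the rare case
```

Class weights in the loss function achieve the same effect without duplicating data, but simple oversampling is easy to reason about and framework-independent.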

The quality of your training material

AI-based machine vision can only work if you have a sizable, high-quality dataset that can be used to find predictable patterns. Although both the quantity and quality of your data are important, the idea of ‘the more, the better’ often dominates. However, more data does not necessarily mean better results.

A food sorting application may need more data to decide whether it is seeing a good or a bad carrot, because of the product’s inherent variety in shape, and because of the wide variety of things that can be collected in the field. But inspecting a metal bar for defects may not need such a big dataset, because there is less variation in the inspected object and in the types of defects.

Data quality is at least as important. Poor machine vision performance is often the result of badly annotated datasets, inconsistent data, bad recordings, and more. Garbage in, garbage out, as the saying goes.

Things you can do to improve data quality are:

  • Data cleansing: You can improve data quality significantly by correcting, completing and updating data across various databases, either manually or by means of automated scripts.
  • Filling the data gaps: You may have data gaps for specific use cases, which are currently filled with less representative data. Adding specific data to fill those gaps will improve your data quality and machine vision results.
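One common cleansing step is detecting conflicting annotations: the same image labelled differently by different annotators. Below is a minimal sketch with hypothetical records; the image names and labels are invented for illustration.

```python
from collections import defaultdict

# Hypothetical annotation records: the same image labelled by different
# annotators. Conflicting labels are one flavour of "garbage in" that
# silently hurts model accuracy.
annotations = [
    {"image": "carrot_001.png", "label": "good"},
    {"image": "carrot_001.png", "label": "bad"},   # conflict
    {"image": "carrot_002.png", "label": "good"},
    {"image": "carrot_002.png", "label": "good"},  # consistent duplicate
]

labels_per_image = defaultdict(set)
for record in annotations:
    labels_per_image[record["image"]].add(record["label"])

conflicts = sorted(img for img, labels in labels_per_image.items()
                   if len(labels) > 1)
print("Images needing review:", conflicts)
```

Flagged images can then be sent back for re-annotation rather than being silently averaged into the training set.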

Making it all work…

Optimizing your AI-driven machine vision system for latency, detection accuracy and data quality are three paths you can follow to improve the quality of your results. The complexity of it – or the beauty of it, depending on how you look at it – is that these three success factors constantly interact with each other.

For example, improving detection accuracy may require a more complex network, which in turn may increase system latency. But improving the quality of your data may reduce complexity, which may reduce system latency.

Although success stories are not impossible, adding AI to your machine vision system will not improve results just like that. It’s much more realistic to see the performance as a result of a complex interplay of success factors, some of which we described in this article.

Nonetheless, we would love to make your machine vision project a success. Do you have a machine vision or automated inspection challenge? Contact us and talk shop with one of our machine vision experts.



    Bart Verhagen

    Bart is one of Kapernikov’s senior computer vision engineers, whose main role is to tackle challenges all the way from architecture to implementation.