Published on: June 4, 2018
Automated guided vehicles (AGVs) are no longer an uncommon sight on today's production floors. These robots move goods around the plant, often navigating along a fixed trajectory map. As such, they help companies avoid non-value-adding man-hours.
The flow of goods in a production process may look stable and fixed on paper. However, in reality, changes in planning and human interventions are a practical challenge for automated co-workers.
Humans do not always apply robot-like precision when placing goods in a particular location. This makes interaction between AGVs or collaborative robots and human co-workers error-prone.
When robots and humans need to work together, rudimentary position measurement solutions will not be sufficient. The answer lies in today's state of the art in sensor technology. Sensors already do a great job facilitating inspection activities, but they can also make robots more aware of their changing environment, and thus smarter on the production floor. To achieve this, robots need to be able to turn all of this sensor data into useful information.
Kapernikov specializes in object recognition in all kinds of image footage. Your imagination is the only limitation for our artificial intelligence solutions.
We have experience integrating data with MES or asset management systems to register the results of our visual observations.
We are independent and able to integrate off-the-shelf software or develop software tailored to your needs.
Modern web technologies allow us to create dashboards for real-time interaction with your production facility.
Imagine autonomous vehicles trying to find the optimal route, or pick-and-place applications that use vision for pose estimation from a moving camera position.
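To make the pose estimation idea concrete, here is a minimal NumPy sketch. It recovers a rigid transform (rotation and translation) from matched 3D point correspondences with the Kabsch algorithm; this is an illustration only, not Kapernikov's actual pipeline, and a real moving-camera setup would first extract and match image features, then typically solve a PnP problem instead.

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Estimate the rigid transform (R, t) mapping model points onto
    observed points, using the Kabsch algorithm on 3D correspondences."""
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_o - R @ mu_m
    return R, t

# Synthetic check: rotate and shift a small point set, then recover the pose.
rng = np.random.default_rng(0)
pts = rng.standard_normal((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
obs = pts @ R_true.T + t_true
R_est, t_est = estimate_pose(pts, obs)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

With noiseless correspondences the estimate is exact; with real sensor data, the same least-squares formulation degrades gracefully under noise.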
We advise our customers on sensor selection, both for visible-light imaging and for combined (multi-sensor) registration.
Computer vision helps robots to detect objects in an image, estimate their position and handle them with care, capabilities that come naturally to human beings. It relies on a wide range of sensors, from hyperspectral cameras to time-of-flight sensors.
Kapernikov is your partner to make efficient computer vision a reality. We configure the capture setup, select a lens system and lighting, and we develop adaptive behavior in the lens, the sensor and the algorithms. We use sensor fusion techniques to integrate multiple sensors in one solution.
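As a toy illustration of sensor fusion, the sketch below combines two independent estimates of the same quantity by inverse-variance weighting, the static special case of a Kalman update. The sensor names and numbers are hypothetical, and a production system would use a full Kalman or similar filter over time.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Fuse two independent estimates of the same quantity by
    inverse-variance weighting: the more precise sensor (lower
    variance) gets the larger weight, and the fused variance is
    always smaller than either input variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Illustrative example: a precise lidar range vs. a noisy ultrasonic range.
dist, var = fuse(2.00, 0.01, 2.30, 0.09)
print(round(dist, 3), round(var, 4))  # 2.03 0.009
```

Note how the fused distance stays close to the low-variance lidar reading while still incorporating the second sensor.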
We can also read text and barcodes on a moving target, or identify vegetation from a moving train. This allows us to create human-like responsiveness in an unconditioned real-life environment. When applicable, we use machine learning algorithms for object detection and localization. As an example, we have set up a learning stack that allows us to train an artificial brain on rendered images instead of real-life examples.
This way, a solution can be made production-ready before the first samples even exist.
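The idea of training on rendered data can be sketched in a few lines. The toy generator below produces labelled synthetic samples (a bright rectangle on a noisy background, plus its bounding box); this is purely illustrative, as a real stack would use a 3D renderer with CAD models, textures and lighting variation rather than NumPy primitives.

```python
import numpy as np

def render_sample(rng, size=64):
    """Generate one synthetic training sample: a noisy grey background
    with a bright rectangle at a random position, together with the
    rectangle's bounding box as the ground-truth label."""
    img = rng.normal(0.2, 0.05, (size, size))     # noisy background
    w, h = rng.integers(8, 20, size=2)            # random object size
    x = rng.integers(0, size - w)
    y = rng.integers(0, size - h)
    img[y:y + h, x:x + w] += 0.6                  # the "object"
    return img.clip(0.0, 1.0), (int(x), int(y), int(w), int(h))

# Build a small labelled dataset without a single real photograph.
rng = np.random.default_rng(42)
images, labels = zip(*(render_sample(rng) for _ in range(100)))
print(len(images), images[0].shape)  # 100 (64, 64)
```

Because both the images and their labels come from the generator, no manual annotation is needed, which is what makes a solution trainable before the first physical samples exist.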