The Saxion research group Mechatronics, together with Benchmark Electronics, a company specialized in electronics manufacturing, and Bronkhorst High-Tech, a company specialized in mass flow meters, is exploring the use of collaborative robots in production processes in the Tech for Future (TFF) project RoboTAO.
The partners are investigating the use of collaborative robotics in manufacturing and production environments. The focus of the research is real co-operation between human and robot rather than sequential task deployment. In this project, a vision system is used to recognize human handling of the product and the intended intervention in the production process. However, conventional computer vision algorithms struggle with robust localization and detection of products due to changing illumination and the shadows cast by a human sharing the same workspace as the robotic system. To enhance the robustness of the vision system, machine learning with neural networks is explored more thoroughly.
With both companies, we investigate a typical assembly step of a product in which the product housing, connectors and PCBs are represented by building blocks. In this representative case, we investigate the assembly of a DUPLO® (LEGO® Group, Billund, Denmark) house at different levels of human-machine interaction. First, the cobot recognizes human interference in the production process and continues where the human stopped. Next, the cobot recognizes that a human is sharing the same workspace, using a hand-detection algorithm. Finally, the cobot interacts with the human to assemble the DUPLO® house by sharing the blocks. Currently, the project partners are working on the first stage of human-machine interaction.
To compare conventional computer vision algorithms with machine learning, we defined a user scenario in which four types of DUPLO® blocks have to be identified and localized in order to pick them up. The four types were discriminated by color: red, green, blue and yellow. The setup is shown in the figure. The blocks were randomly placed in a predefined workspace. A CMOS camera was mounted above the workspace to image the blocks as well as a build plate. A UR5 collaborative robotic arm (Universal Robots, Denmark), equipped with an RG2 gripper from OnRobot (Odense, Denmark) as end effector, was used to pick up the blocks and place them on the build plate.
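The conventional approach identifies each block type from predefined color ranges. A minimal sketch of such a rule-based classifier, assuming the camera returns a mean RGB value per detected blob (the threshold is illustrative, not the project's actual value):

```python
def classify_block_color(r, g, b, threshold=128):
    """Classify a mean RGB patch (0-255 per channel) into one of the
    four block colors using fixed per-channel thresholds."""
    bright_r = r >= threshold
    bright_g = g >= threshold
    bright_b = b >= threshold
    if bright_r and bright_g and not bright_b:
        return "yellow"          # red + green channels high
    if bright_r and not bright_g and not bright_b:
        return "red"
    if bright_g and not bright_r and not bright_b:
        return "green"
    if bright_b and not bright_r and not bright_g:
        return "blue"
    return "unknown"

# A well-lit red block is recognized, but the same block in the shadow
# of a human hand falls below the fixed threshold and is rejected --
# exactly the brittleness described above.
print(classify_block_color(200, 30, 30))   # → red
print(classify_block_color(90, 20, 20))    # → unknown (shadowed red)
```

The hard-coded thresholds are precisely what breaks under changing illumination, which motivates the learned classifier discussed below.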
We used the Robot Operating System (ROS) to command the UR5. ROS is an open-source meta-operating system for robots: it provides the services you would expect from an operating system, including hardware abstraction and low-level device control.
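Under ROS, motion commands for a UR5 are ultimately translated into URScript by the robot driver. As a hedged illustration (the joint targets are made up, and the real project drives the arm through a ROS driver node rather than hand-built strings), a joint-space move can be composed like this:

```python
def movej_command(joints_rad, a=1.4, v=1.05):
    """Format a URScript movej command from six joint angles in radians.
    a = joint acceleration (rad/s^2), v = joint velocity (rad/s)."""
    q = ", ".join(f"{j:.4f}" for j in joints_rad)
    return f"movej([{q}], a={a}, v={v})"

# Example: send the arm to a home-like pose (illustrative values only).
print(movej_command([0, -1.57, 0, 0, 0, 0]))
```

In practice the vision system supplies the target block pose, which the ROS node converts to joint angles before issuing such a command.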
We analyzed the results and performance of two different computer vision approaches: one based on predefined color spaces, as used in previous studies, and the other based on machine learning, using multi-layer perceptrons (MLPs).
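To illustrate the machine learning approach, the sketch below trains a small multi-layer perceptron in plain NumPy to map RGB values to the four block classes. This is a toy reconstruction on synthetic pixel data, not the project's actual network or training set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: noisy RGB samples (0-1 range) around the four
# block colors -- a stand-in for real camera crops of the DUPLO blocks.
centres = np.array([[0.8, 0.1, 0.1],   # red
                    [0.1, 0.8, 0.1],   # green
                    [0.1, 0.1, 0.8],   # blue
                    [0.8, 0.8, 0.1]])  # yellow
X = np.repeat(centres, 100, axis=0) + rng.normal(0, 0.08, (400, 3))
y = np.repeat(np.arange(4), 100)
onehot = np.eye(4)[y]

# One-hidden-layer MLP (3 -> 16 -> 4) trained with softmax cross-entropy.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 4)); b2 = np.zeros(4)
lr = 0.5

for _ in range(500):
    h = np.maximum(0, X @ W1 + b1)              # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)                # softmax probabilities
    grad = (p - onehot) / len(X)                # cross-entropy gradient
    gh = grad @ W2.T * (h > 0)                  # backprop through ReLU
    W2 -= lr * h.T @ grad; b2 -= lr * grad.sum(0)
    W1 -= lr * X.T @ gh;   b1 -= lr * gh.sum(0)

def predict(rgb):
    """Return the predicted block color for one RGB triple (0-1 range)."""
    h = np.maximum(0, np.asarray(rgb) @ W1 + b1)
    return ["red", "green", "blue", "yellow"][int(np.argmax(h @ W2 + b2))]
```

Because the network is trained on samples that already include color variation, it generalizes over perturbed inputs instead of relying on fixed thresholds, which is the robustness gain reported above.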
Systems in manufacturing or inspection lines that use vision for pick-and-place applications and quality checks become more robust to changes in lighting conditions and product properties (such as material colors) when machine learning algorithms are used. In our case study, we showed that relatively simple feed-forward neural networks, such as MLPs, can be used to identify products of interest. Furthermore, mechanical alignment using a fixture could be avoided for the DUPLO® blocks. When processing speed is a critical parameter, it is recommended to invest in proper hardware and a stable (illumination) environment. The project in which we investigate the use of machine learning in collaborative robotics is still ongoing, but these preliminary results are very promising for further investigation.