Publication
IEEE Transactions on Pattern Analysis and Machine Intelligence

Calibrating a Cartesian Robot with Eye-on-Hand Configuration Independent of Eye-to-Hand Relationship

Abstract

This paper describes a new approach to the geometric calibration of Cartesian robots. It is part of a trio of papers on real-time 3-D robotics eye, eye-to-hand, and hand calibration that share a common setup and calibration object, as well as common coordinate systems, matrices, vectors, symbols, and operations throughout, and it is especially suited to the machine vision community. The method is easier and faster than existing techniques, is ten times more accurate in rotation than any existing technique using standard-resolution cameras, and matches the state-of-the-art vision-based technique in linear accuracy. The robot makes a series of automatically planned movements with a camera rigidly mounted on the gripper. At the end of each move, it takes a total of 90 ms to grab an image, extract image feature coordinates, and perform camera extrinsic calibration. After the robot finishes all the movements, the calibration itself takes only a few milliseconds. The key to this technique is that only a single rotary joint moves during each movement, while the robot motion can still be planned so that the calibration object remains within the field of view. This fully decouples the calibration parameters and converts a multidimensional problem into a series of one-dimensional problems. Another key is that the eye-to-hand transformation is not needed at all during the computation (a rough estimate is needed, however, to keep the calibration object within the field of view). Results of real experiments are reported. The approach used in viewpoint planning may also be of use for the general problem of automatic determination of camera placement. © 1989 IEEE
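
To make the decoupling idea concrete, the sketch below illustrates one way a one-joint-at-a-time motion plan can reduce part of the calibration to an isolated estimate. It is a minimal illustration under stated assumptions, not the authors' algorithm: it assumes a single rotary joint moves between stops while everything distal to it stays rigid, so the camera positions recovered by extrinsic calibration at each stop lie on a circular arc in a plane perpendicular to that joint's axis. Fitting the plane normal then gives the axis direction in the calibration-object frame without ever using the eye-to-hand transformation. The function name and the sample coordinates are hypothetical.

```python
import numpy as np

def rotary_axis_from_camera_positions(camera_positions):
    """Estimate a rotary joint's axis direction in the calibration-object
    (world) frame.  While only that joint moves, the rigidly mounted
    camera travels on a circular arc, so its positions (from extrinsic
    calibration at each stop) lie in a plane perpendicular to the axis.
    The best-fit plane normal, found with an SVD, is the axis direction;
    the eye-to-hand transformation never enters the computation."""
    P = np.asarray(camera_positions, dtype=float)
    centered = P - P.mean(axis=0)
    # Smallest right-singular vector = normal of the best-fit plane.
    _, _, vt = np.linalg.svd(centered)
    axis = vt[-1]
    return axis / np.linalg.norm(axis)

# Hypothetical data: camera positions (metres, world frame) recorded at
# four stops while a single joint rotates about the world Z axis.
stops = [[1.00, 0.00, 0.50],
         [0.87, 0.50, 0.50],
         [0.50, 0.87, 0.50],
         [0.00, 1.00, 0.50]]
print(rotary_axis_from_camera_positions(stops))  # approximately [0, 0, +/-1]
```

Repeating such a per-joint fit for each joint in turn would yield the axes' relative orientations independently of one another, which is one way a multidimensional calibration can fall apart into a series of one-dimensional estimates; the paper itself should be consulted for the actual parameterization, accuracy analysis, and viewpoint-planning procedure.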