Robot Primer 10: The Core Is Coordinates And Kinematics
In my last post, I talked about the development-time advantage the robot's integrated system brings. However, I think the core robot advantage is coordinate points, transforms, and kinematics, which all go together. They are also a big part of why development is faster: the robot deals directly in real-world coordinates.
Terminology And Capabilities
I am using Denso Robotics' terminology and capabilities as a rough basis for my posts, instead of continually saying "most robot controllers do X, some do Y, and a few do Z". Most robot controllers should have similar capabilities and equivalent terms.
A point in space is represented using a coordinate system, such as cartesian (XYZ or rectangular), spherical, or cylindrical. Using the coordinate system that best fits the problem can really help when you're doing physics or geometry, but in the robot world rectangular coordinates are the usual choice.
However, most controllers provide a choice of coordinate origins, including work (fixed relative to the robot base) and tool (fixed relative to the end of the robot’s arm).
The orientation of the end effector can be represented using angles such as Rx (rotation about the X axis), Ry (rotation about the Y axis), and Rz (rotation about the Z axis), also known as roll, pitch, and yaw.
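To make the orientation representation concrete, here is a minimal sketch that turns roll, pitch, and yaw angles into a 3x3 rotation matrix. The Z-Y-X application order used here is one common convention; controllers differ on order and sign, so check your controller's manual.

```python
import math

def rpy_to_matrix(roll, pitch, yaw):
    """Build a 3x3 rotation matrix from roll (about X), pitch (about Y),
    and yaw (about Z), applied in Z-Y-X order: R = Rz(yaw) Ry(pitch) Rx(roll)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
```

With all three angles zero you get the identity matrix (no rotation), which is a quick sanity check for whichever convention your controller uses.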
Kinematics Yet Again
The robot moves its joints, not coordinates. Kinematics (the science of motion) and inverse kinematics are how the robot figures out how to move its joints to reach the desired coordinate position and orientation.
The robot controller knows the position of each of its joints (using encoder feedback) and knows their relationships (length of each joint segment, which joint is connected to which, etc), so by doing a little fancy math it can always compute where the end effector is in a particular coordinate system (the forward kinematics part) or figure out how to move the joints to get to the desired position (the inverse kinematics portion).
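The "fancy math" can be sketched for the simplest interesting case: a two-link planar arm. The link lengths below are made-up example values; a real six-axis arm applies the same idea with more joints and 3-D transforms.

```python
import math

L1, L2 = 0.30, 0.25  # example link lengths in metres (made-up values)

def forward(theta1, theta2):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Inverse kinematics: (x, y) -> one joint-angle solution (elbow-down).
    Raises ValueError if the point is outside the arm's reach."""
    d2 = x * x + y * y
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2
```

Note that `inverse` returns only one of two valid solutions (elbow-down vs. elbow-up); real controllers let you pick the arm configuration.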
Let’s look at a very simple example: suppose we want to lay down glue on the path shown below at a constant velocity from P1 to P4.
It’s pretty simple if you are using a cartesian robot. For example, if you are using a Galil controller, the core code could be something like:
PA 0,0
BG
AM
LM AB
VS 10000
VA 50000
VD 50000
LI 0,5000
LI 10000,0
LI 0,-5000
LI -10000,0
LE
BGS
AM
But suppose we’re using a SCARA robot. Now it’s tough to use a normal motion controller: because the joints are rotary, moving any single joint changes both X and Y at once. To get straight lines, we have to move multiple joints at just the right, continuously varying relative speeds.
But it’s easy with a robot controller:
MOVE L, @E P[1 TO 4], S50
which moves the robot through positions P1, P2, P3, and P4 at 50% speed with square corners.
The bottom line: the robot controller makes using complex robots (such as articulated, SCARA, or delta) as easy as using a cartesian robot.
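Under the hood, a straight-line (linear-interpolated) move on a rotary-joint robot works roughly like this sketch: sample points along the Cartesian line, solve inverse kinematics at each sample, and command those joint setpoints. The two-link arm and link lengths here are made-up illustration values, not any particular robot.

```python
import math

L1, L2 = 0.30, 0.25  # example link lengths (made-up values)

def ik(x, y):
    """Elbow-down inverse kinematics for a two-link planar arm."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    t2 = math.acos(max(-1.0, min(1.0, c2)))
    t1 = math.atan2(y, x) - math.atan2(L2 * math.sin(t2),
                                       L1 + L2 * math.cos(t2))
    return t1, t2

def linear_move(p_start, p_end, steps):
    """Interpolate a straight line in Cartesian space and solve IK at
    each step; a controller would stream these joint setpoints out."""
    path = []
    for i in range(steps + 1):
        s = i / steps
        x = p_start[0] + s * (p_end[0] - p_start[0])
        y = p_start[1] + s * (p_end[1] - p_start[1])
        path.append(ik(x, y))
    return path

# The tool moves at constant Cartesian speed, but the per-step joint
# increments are NOT constant -- which is exactly why each joint needs
# its own continuously varying speed to trace a straight line.
joints = linear_move((0.40, 0.00), (0.40, 0.20), 10)
deltas = [joints[i + 1][0] - joints[i][0] for i in range(10)]
```

Printing `deltas` shows the joint-1 increments drifting from step to step even though the Cartesian increments are identical; the robot controller does this interpolation (plus acceleration planning) for you.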
Coordinate transforms are very useful; here are a few examples:
- Moving using the teach pendant in Tool mode (the robot has to do coordinate transforms between the Tool coordinates and its base coordinates)
- Easy use of multiple end effectors, such as dual grippers and a camera. For example, you can teach one location and then move any of the end effector tools over that location simply by changing the Tool mode.
- Getting machine vision information into a form usable by the robot (calibrate camera, get its positions, and then transform them into robot coordinates)
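The machine-vision case can be sketched with a 2-D homogeneous transform: calibration gives you the camera frame's rotation and offset relative to the robot's work frame, and every detected part position is mapped through it. The calibration numbers below are made up for illustration.

```python
import math

def make_transform(theta, tx, ty):
    """2-D homogeneous transform: rotate by theta, then translate by (tx, ty)."""
    return [[math.cos(theta), -math.sin(theta), tx],
            [math.sin(theta),  math.cos(theta), ty],
            [0.0,              0.0,             1.0]]

def apply(T, x, y):
    """Map a point through a homogeneous transform."""
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

# Made-up calibration: the camera frame is rotated 90 degrees and offset
# (200, 50) mm from the robot's work frame.
cam_to_robot = make_transform(math.pi / 2, 200.0, 50.0)

part_in_cam = (30.0, 10.0)                      # part found by vision, mm
part_in_robot = apply(cam_to_robot, *part_in_cam)
```

Chaining transforms the same way (camera-to-work, work-to-base, base-to-tool) is how a controller moves a gripper to a spot the camera saw.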
I will dig deeper into coordinate systems, transforms, and their uses in future posts.