While researching for this post, I reviewed the relevant Denso manual (the Setup Guide). Although its dry text is no match for my scintillating style, I have to say it gives a good, illustrated explanation of the various coordinate systems. I am not going to try to compete with it; instead, I will give my own summary with some videos and, in an effort to get your programming juices flowing, concentrate on why and where you might want to use these features.
I will be using a simulated robot recorded in WinCaps III simulation mode (kudos to Denso for providing a 90-day WinCaps III trial version, available to everyone). I chose a 6-axis articulated robot because it can do motions that are impossible with a SCARA or delta robot. I am using Denso in my examples primarily because I can use the simulator and am familiar with their robots.
As I've noted before, the basics should apply to other robots, but the details will vary for different robot controllers. Of course, the robot type determines what poses the robot can do (for example, a 4-axis SCARA can only roll about the Z axis (Rz), not the X or Y axis).
Work coordinates are rectangular coordinates fixed relative to the base of the robot. Work coordinate systems are defined relative to the Base coordinates by specifying:
- the coordinate origin (X, Y, Z) defined in base coordinates
- the angles of rotation (Rx, Ry, Rz) around the corresponding base coordinate axes.
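To make the definition concrete, here is a minimal Python sketch of converting a point from a work coordinate system into base coordinates. It assumes the rotations are applied in Z-Y-X order; the actual order and conventions are controller-specific, so check your robot's manual. The function names and sample numbers are mine, just for illustration.

```python
import math

def rot_matrix(rx, ry, rz):
    """3x3 rotation matrix from angles (degrees) about X, Y, Z,
    applied in Z-Y-X order (Rz @ Ry @ Rx)."""
    rx, ry, rz = (math.radians(a) for a in (rx, ry, rz))
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    # Rz @ Ry @ Rx written out by hand to avoid a matrix library
    return [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]

def work_to_base(point, origin, angles):
    """Convert a point in a work frame to base coordinates.
    origin = (X, Y, Z) of the work frame in base coordinates,
    angles = (Rx, Ry, Rz) in degrees."""
    r = rot_matrix(*angles)
    return tuple(
        sum(r[i][j] * point[j] for j in range(3)) + origin[i]
        for i in range(3)
    )

# A work frame rotated 180 degrees about Z (like Work2 below):
# its X and Y axes point opposite the base axes, so the point's
# X and Y components are negated before the origin is added.
print(work_to_base((100, 50, 0), (300, 200, 0), (0, 0, 180)))
```

The robot controller does this (and the inverse) for you every time you program a point in a Work coordinate system.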
Base coordinates are work coordinates with the origin at the base of the robot. In Denso terminology, the base coordinates are "3-dimensional Cartesian coordinates whose origin is at the center of the robot basement".
Example Base And Work Coordinates
I set up my robot workspace with a few objects:
- Denso robot with my simple end effector
- A table with two of my simple fixtures. The second fixture is rotated 180 degrees from the first fixture.
- Two work coordinate systems, Work1 for Fixture 1 and Work2 for Fixture 2. When the robot is in the appropriate Work coordinates, the fixtures' positions are exactly the same.
The picture above shows Base coordinates (Work0), Work1, and Work2. The lines show the direction of the positive axes (+X, +Y, and +Z). The bottom window shows the definition of Work1 and Work2 relative to Base coordinates. Note that Work2 has an Rz value of 180 degrees, and you can see that the directions of Work2's X and Y axes are exactly opposite Work1's.
Coordinate Axes and Angles of Rotation
The first picture above shows the Work1 coordinate axes (X, Y, Z) and angles of rotation (Rx, Ry, Rz), and the second picture shows the Work2 coordinate axes and angles of rotation. The lines and arrows point in the direction of positive movement.
The Base coordinate axis directions (X, Y, Z) are the same as Work1’s axis directions, but the origin is different.Â The angles of rotation are the same for both.
The coordinate axis directions are different between Work1 and Work2: because Work2 is rotated 180 degrees about the Z axis, its X and Y axes point in the opposite direction from Work1’s X and Y axes.
The angles of rotation define the attitude of the robot flange and are also called yaw, pitch, and roll. Their origin is always at the center of the robot flange surface (you can see that the origin is the same for both Work1 and Work2), but their directions follow the Work coordinate's X, Y, and Z axes (so Work2's Rx and Ry directions are reversed compared to Work1's). When you rotate about Rx, Ry, or Rz, the origin (the center of the flange surface) stays at the same X, Y, Z position, but the rest of the robot rotates around that axis.
Putting It All Together: Robot Movements in Base World Coordinates
My YouTube video shows some basic movements in Base World Coordinates, moving in CP (straight-line) mode. Try to match my descriptions above with what the robot is doing: making this video took a lot of time, so I hope it helps make my prose a lot clearer.
More on World Coordinates to come, of course, including potential applications.
September 12, 2014
I’ve been having major problems importing my end effector into Denso WINCAPS III and maintaining my desired colors.
WINCAPS III can only import Direct-3D (*.X) or VRML Ver 2 (*.WRL) files. On the other hand, most MCAD software won't export VRML files.
I used DesignSpark Mechanical (DSM) to create my design. DSM can export five 3D file formats: STL, OBJ, 3D PDF, SKP (SketchUp), and XAML. I was frustrated trying to set the colors I wanted in DSM; help (including blogs and forums) is still very limited, and I couldn't figure out how to change the color of imported STEP files. I was able to get to this:
Since I chose to export to STL, the next step was to convert from STL to VRML using meshconv, but when I imported the resulting VRML file into WINCAPS III I got this:
Yuck! All my color is gone, and my part is white hot and glowing purple. I'm pretty sure part of the problem is that the WINCAPS simulator has a bright light, which as far as I can tell can't be adjusted; when the part is rotated, the bright spots change. But the major problem, which took me a while to figure out, is that STL files normally do not retain any color information. After all, color isn't needed by most 3D printers, and STL was invented for 3D printing.
I did a little more research on the DSM export formats. I am using two conversion tools, meshconv (a command-line converter) and MeshLab (which includes a viewer and much more). Of the five DSM 3D export formats, meshconv and MeshLab can only import STL and OBJ. While OBJ can contain color information, the color wasn't retained when I tested exporting from DSM to OBJ and then importing into meshconv or MeshLab.
I tried using SketchUp. I was able to color the parts with a bit of effort (see below for an example) and export to VRML using an add-on, but WINCAPS III didn't like the resulting VRML file.
So I ended up using MeshLab: I exported from DSM to an STL file, imported the STL file into MeshLab, colored it in MeshLab (pretty easy), exported from MeshLab to VRML, and finally imported the VRML file into WINCAPS III. The colors in WINCAPS are quite different from MeshLab's colors, but they're much better than my first attempt.
March 25, 2014
In my last post, I talked about the development-time advantage the robot's integrated system brings. However, I think the core robot advantage is coordinate points, transforms, and kinematics, which all go together.
A robot also has much faster development time because it deals with real-world coordinates.
Terminology And Capabilities
I am using Denso Robotics' terminology and capabilities as a rough basis for my posts, instead of continually saying "most robot controllers do X, some do Y, and a few do Z". Most robot controllers should have similar capabilities and equivalent terms.
A point in space is represented using a coordinate system, such as Cartesian (XYZ or rectangular), spherical, or cylindrical. Using the coordinate system that best fits the problem can really help when you're doing physics or geometry, but in the robot world rectangular coordinates are the usual choice.
However, most controllers provide a choice of coordinate origins, including work (fixed relative to the robot base) and tool (fixed relative to the end of the robot’s arm).
The orientation of the end effector can be represented using systems such as Rx (rotation about the X axis), Ry (rotation about the Y axis), and Rz (rotation about the Z axis), or roll, pitch, and yaw.
Kinematics Yet Again
The robot moves its joints, not coordinates. Kinematics (the science of motion) and inverse kinematics are how the robot figures out how to move its joints to reach the desired coordinate position and orientation.
The robot controller knows the position of each of its joints (using encoder feedback) and knows their relationships (the length of each arm segment, which joint is connected to which, etc.), so with a little fancy math it can always know where the end effector is in a particular coordinate system (the kinematics part) or figure out how to move the joints to get to the desired position (the inverse kinematics part).
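To show what that fancy math looks like in the simplest possible case, here is a Python sketch of forward and inverse kinematics for a 2-link planar arm (roughly the first two joints of a SCARA). The link lengths are made-up values, and a real controller handles many more joints, both elbow solutions, singularities, and joint limits; this is just the core idea.

```python
import math

L1, L2 = 0.30, 0.25  # link lengths in meters (hypothetical values)

def forward(theta1, theta2):
    """Forward kinematics: joint angles (radians) -> end-effector (x, y)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Inverse kinematics: (x, y) -> one of the two joint solutions
    (the other "elbow" solution uses -theta2)."""
    d = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= d <= 1.0:
        raise ValueError("point is out of reach")
    theta2 = math.acos(d)  # elbow angle
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2

# Round trip: pick joint angles, compute the pose, then recover them.
x, y = forward(0.5, 0.8)
t1, t2 = inverse(x, y)
```

Note that even this toy version has two valid joint solutions for most points; real controllers let you pick the arm configuration ("lefty"/"righty" and so on).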
Let’s look at a very simple example: suppose we want to lay down glue on the path shown below at a constant velocity from P1 to P4.
It's pretty simple if you are using a Cartesian robot. For example, if you are using a Galil controller, the core code could be something like:
PA 0,0       'move to the start position P1
BG
AM           'wait for the move to complete
LM AB        'linear interpolation mode on axes A and B
VS 10000     'vector speed
VA 50000     'vector acceleration
VD 50000     'vector deceleration
LI 0,5000    'P1 to P2
LI 10000,0   'P2 to P3
LI 0,-5000   'P3 to P4
LI -10000,0  'P4 back to P1
LE           'end of the segment list
BGS          'begin the coordinated sequence
AM           'wait for it to finish
But suppose we're using a SCARA robot. Now it's tough to use a normal motion controller, because the joints are rotary, so every time we try to move a joint along the X or Y axis, we also move along the other axis (Y or X). To get straight lines, we have to move multiple joints at just the right relative speeds.
But it’s easy with a robot controller:
MOVE L, @E P[1 TO 4], S50
which moves the robot through positions P1, P2, P3, and P4 at 50% speed with square corners.
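Under the hood, a CP move like this is typically done by sampling the straight-line path at the control rate and running inverse kinematics on every sample. Here is a rough Python sketch of the sampling step for the glue path above; the function and parameter names are mine, and a real controller also blends corners and shapes acceleration rather than running at constant speed from the first sample.

```python
import math

def linear_waypoints(p_from, p_to, speed, dt=0.01):
    """Sample a straight-line (CP) move at roughly constant speed.
    Yields one Cartesian set-point per control cycle; a robot
    controller would run inverse kinematics on each one to get
    the joint commands."""
    dist = math.dist(p_from, p_to)
    steps = max(1, round(dist / (speed * dt)))
    for i in range(steps + 1):
        t = i / steps
        yield tuple(a + t * (b - a) for a, b in zip(p_from, p_to))

# The glue path: P1 -> P2 -> P3 -> P4 (units of mm, speed in mm/s)
path = [(0, 0), (0, 5000), (10000, 5000), (10000, 0)]
points = []
for a, b in zip(path, path[1:]):
    points.extend(linear_waypoints(a, b, speed=10000))
```

The point of the robot controller is that you never have to write any of this: `MOVE L` does the sampling, the inverse kinematics, and the corner handling for you.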
The bottom line: the robot controller makes using complex robots (such as articulated, SCARA, or delta) as easy as using a Cartesian robot.
Coordinate transforms are very useful; here are a few examples:
- Moving using the teach pendant in Tool mode (the robot has to do coordinate transforms between the Tool coordinates and its base coordinates)
- Easy use of multiple end effectors, such as dual grippers and a camera. For example, you can teach one location and then move any of the end-effector tools over that location simply by changing the Tool mode.
- Getting machine vision information into a form usable by the robot (calibrate the camera, get its positions, and then transform them into robot coordinates)
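The machine vision case is a good example of a transform you might write yourself. Here is a minimal Python sketch of a planar camera-to-robot transform; it assumes the camera looks straight down at the work plane, so the calibration reduces to a rotation, a scale, and an offset. The calibration numbers are hypothetical, and a real calibration would be fit from several points measured by both the camera and the robot.

```python
import math

def camera_to_robot(u, v, cal):
    """Map a camera measurement (u, v) into robot base coordinates
    using a planar calibration.
    cal = (angle_deg, scale, x_off, y_off): rotation of the camera
    frame relative to the robot frame, mm-per-camera-unit scale,
    and the camera origin's position in robot coordinates."""
    a = math.radians(cal[0])
    s = cal[1]
    x = s * (u * math.cos(a) - v * math.sin(a)) + cal[2]
    y = s * (u * math.sin(a) + v * math.cos(a)) + cal[3]
    return x, y

# Hypothetical calibration: camera rotated 90 degrees relative to
# the robot, 1:1 scale, camera origin at (250, 100) in robot frame.
x, y = camera_to_robot(10, 0, (90, 1.0, 250, 100))
```

With the transform in hand, a vision result becomes just another robot position: compute it, then MOVE to it.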
I will dig deeper into coordinate systems, transforms, and their uses.
November 1, 2013
A Programmer’s Introduction To Industrial Robots
Since many books and articles on industrial robotics have already been written, why am I spending time writing this? Because I am frustrated by the existing material. The books are typically textbooks or academic works, going into the nuances of robot control theory or painting a broad overview. In magazines and online, I've seen marketing white papers and application stories, which can be useful but don't go into depth.
I've subtitled this series A Programmer's Introduction To Industrial Robots because I am writing from a software developer's point of view, and my goal is to give some idea of what industrial robots can do, not to write an authoritative text. (I'm using Robot Primer for the title because it's short but still gives the basic idea.)
My robotics experience has been using Adept and Denso robots to do precision assembly and test. I don't claim to be a robot expert, but I did have to do some uncommon operations and thus became familiar with Adept and Denso tech support. I had good experiences with both companies' robots and technical support, and would use them again, but I've also heard good things about other vendors.
I view robots as an alternative to other automation options, such as ball screw stages, pneumatic cylinders, and the like, not as a replacement for people. This view probably reflects my Silicon Valley background.
Since I am most familiar with Denso robots I might be a little biased towards them, but most of what I cover should be applicable to other manufacturers.
I am only going to cover traditional robot arms with controllers (such as SCARA and articulated robots), not autonomous robots, mobile robots, etc.
The goal of this series is to provide a basic understanding of industrial robots, with an emphasis on the controller's programming and capabilities, so that by the end you should have an idea of when a robot might be a good choice and can then do more in-depth research on your own.
June 27, 2013