Universal Robots has come to be known as the leader in the collaborative robotics industry, with a strong reputation for being easy to implement and program. There is no doubt that, compared to other collaborative robot products and traditional industrial robots, Universal Robots has a very user-friendly interface and, for most applications, a quick “out of the box to production” implementation. Instead of needing a team of engineers or a dedicated programmer, Universal Robots are being programmed by technicians and sometimes even by operators who previously performed the task for which the robot is being implemented.
With that being said, in my years of experience supporting and being involved with new UR applications, along with programming applications and tests myself, I have seen a recurring theme among new users: learning a new interface can take a user’s focus away from being as efficient as possible. The user’s thought goes into where to find a certain button or menu, rather than into asking, “what can I do to make this process more efficient and reliable?”
This entry is the first of a multipart series of best practices and tips, drawn from my experience, to make programming the UR faster and easier for a new programmer. These are not intended to be detailed program instructions as you would find in the UR manual. Rather, these will help a new user take advantage of years of experience to make their current and future application programming more effective, even if they’ve already read the UR manual cover to cover. (Note: It is a good idea to have read the manual and completed UR’s online Academy so that you are familiar with the terms and features.)
Programming Important Application Features
Before diving into programming the task that the robot will perform, it is a good idea to take a step back and think about what it will be useful for the robot to “know.” For ease of understanding, we’ll personify the robot, so don’t worry, it won’t rise up and start disobeying humanity. The robot knows nothing about its surroundings, and so cannot move with respect to your machines or other application features; it only knows where it is with respect to its own base coordinate system.
The first thing you will want to do is make note of everything in the application that would be helpful for the robot to be able to move with respect to - these can range from machine features such as clamps and locating jigs to tables and load/unload pallets. Be sure to select physical features that are points of interest to the application (think about where the robot will physically interact with something). Being able to move the robot around these physical features in a known way (i.e., in defined coordinate systems) will save time and frustration during the programming phase.
Figure 1: Features Menu
To configure new features, navigate to the Installation Tab in the robot (see Figure 1), and select Features. Base and Tool are built-in coordinate systems that the robot can be moved with respect to, and here you can add your own based on application needs. I recommend using the Plane feature because it is the easiest to visualize and properly define, as it takes only three separate points to do so. In this way, you can precisely control where the origin and coordinate system axes will lie. We humans like right angles, so this will likely lend itself well to our machines.
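To make the three-point idea concrete, here is a minimal sketch in plain Python of how an orthonormal frame can be built from three taught points: the first point is the origin, the second sets the X axis, and the third fixes the XY plane. This illustrates the geometry behind a Plane feature, not UR's actual implementation, and the point values are hypothetical.

```python
import math

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def norm(v):
    m = math.sqrt(sum(c * c for c in v))
    return [c / m for c in v]

def plane_frame(p0, p1, p2):
    """Build an orthonormal frame from three taught points:
    p0 = origin, p1 sets the X axis, p2 fixes the XY plane."""
    x = norm(sub(p1, p0))
    z = norm(cross(x, sub(p2, p0)))  # Z is perpendicular to the plane
    y = cross(z, x)                  # already unit length: z and x are unit and orthogonal
    return p0, [x, y, z]             # origin and the three unit axes

# Three hypothetical taught points on a table surface (meters, base frame)
origin, (x, y, z) = plane_frame([0.2, 0.1, 0.0],
                                [0.5, 0.1, 0.0],
                                [0.2, 0.4, 0.0])
print(x, y, z)  # here the axes come out aligned with the base X, Y, Z
```

Teaching the second and third points far apart, as here, keeps the computed axes numerically stable; the same is true on the real pendant.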
Figure 2: Taught Plane
Once a feature is configured, as in Figure 2, you will see three check boxes on the bottom left side of the screen. The options “Show Axes” and “Joggable” are selected by default. “Show Axes” means that the origin and axes are shown on screens with graphics, as above. It also means that on the Move Tab, when that feature is selected, all “move tool” arrows will be color coded to the appropriate axis! (Red = X axis; Green = Y axis; Blue = Z axis) This can be very handy when teaching program waypoints.
Additionally, if you select Base or Tool in the Features menu, you can select “Show Axes” on each for the same effect on the Move Tab. I highly recommend this!
Figure 3: Feature on Move Tab
In Figure 3, you can see the taught plane selected on the Move Tab. I renamed the plane “Machine” and selected it in the Features menu on the top right of the screen. You can see the Move Tool arrows are all color coded, and we can now move the robot with respect to that Feature. Over the course of programming, this can save precious time, and - perhaps more significantly - the hassle of trying to “zig-zag” in one coordinate system that does not line up with the physical machines.
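To see why a feature frame eliminates the zig-zagging, the sketch below maps a point taught in feature coordinates into base coordinates with a single rotation-plus-translation step. This is plain illustrative Python with hypothetical values, not UR code; in URScript, the pose functions such as pose_trans() serve this purpose.

```python
def feature_to_base(origin, axes, p_feat):
    """Map a point given in feature coordinates into base coordinates.
    origin: feature origin in the base frame; axes: feature's unit X/Y/Z axes
    expressed in base coordinates. base = origin + X*px + Y*py + Z*pz."""
    return [origin[i] + sum(axes[j][i] * p_feat[j] for j in range(3))
            for i in range(3)]

# Hypothetical "Machine" feature: origin 200 mm / 100 mm from base,
# rotated 90 degrees about Z (the machine's X runs along base Y).
machine_origin = [0.2, 0.1, 0.0]
machine_axes = [[0.0, 1.0, 0.0],    # feature X, in base coordinates
                [-1.0, 0.0, 0.0],   # feature Y
                [0.0, 0.0, 1.0]]    # feature Z

# "100 mm along the machine's X, 50 mm up" -- one clean move, no zig-zag.
print(feature_to_base(machine_origin, machine_axes, [0.1, 0.0, 0.05]))
# prints [0.2, 0.2, 0.05]
```

Jogging in the feature frame on the Move Tab is exactly this mapping done for you: you think in the machine's directions, and the controller handles the conversion to base coordinates.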
As you saw in Figure 2, there was a third check box called “Variable.” I will cover this item in a later part of this series as we get a little more advanced, but as a small teaser, it becomes possible to shift entire sections of program waypoints based on a variable feature! This is especially useful for applications where the robot is mobile or when something that the robot will interact with may not be in the same position.
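As a small sketch of that teaser idea (illustrative only, with hypothetical values): if waypoints are stored relative to a feature, updating the feature's position shifts every waypoint at once. Orientation is ignored here for brevity; a real variable feature carries a full pose.

```python
def to_base(feature_origin, p_feat):
    """Translation-only sketch: base position = feature origin + offset."""
    return [feature_origin[i] + p_feat[i] for i in range(3)]

# Waypoints taught once, in feature coordinates (meters)
waypoints = [[0.00, 0.00, 0.10],   # approach
             [0.00, 0.00, 0.00],   # pick
             [0.05, 0.00, 0.10]]   # retract

pallet_a = [0.30, 0.20, 0.00]
pallet_b = [0.30, 0.50, 0.00]      # same program, pallet moved 300 mm in Y

# Every waypoint follows the feature -- nothing is retaught.
for wp in waypoints:
    print(to_base(pallet_a, wp), to_base(pallet_b, wp))
```

This is the payoff of planning features up front: the program describes the task relative to the fixture, and only the fixture's pose needs updating when things move.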
The most important takeaway from all of this information is not programming-related; it is the importance of planning ahead.
Many projects need to be done as quickly as possible, and users jump into programming the robot right away because they know the application backward and forward. But they only know how it was done before involving robotics. This difference is critical! Without taking advantage of the tools built into the system, the application will likely take much longer than necessary. This theme will continue in the next part of this series on the subject of program organization: specifically, the Folders function in the UR.