Category:Robotic
+ | |||
+ | ==== The latest generation: Collaborative industrial robots ==== | ||
+ | |||
+ | There is a new qualifier that has just recently been used to classify an industrial robot, that is to say, if it can collaborate with its human co-workers. Collaborative robots are made in such a way that they respect some safety standards so that they cannot hurt a human. While traditional industrial robots generally need to be fenced off away from human co-workers for safety reasons. Collaborative robots can be used in the same environment as humans. They can also usually be taught instead of programmed by an operator. Examples of collaborative robots are: | ||
+ | |||
+ | ABB | ||
+ | https://new.abb.com/products/robotics | ||
+ | |||
+ | KUKA | ||
+ | https://www.kuka.com/en-de | ||
+ | |||
+ | Rethink Robotics Sawyer & Baxter | ||
+ | [https://www.rethinkrobotics.com/] | ||
+ | |||
+ | Universal Robots UR3, UR5 & UR10 | ||
+ | https://blog.robotiq.com/bid/61616/Robot-Gripper-for-Universal-Robots | ||
+ | |||
==== Robot Programming with Kuka|prc ====
[Part of the information explained here comes from: http://mkmra2.blogspot.com/2016/01/robot-programming-with-kukaprc.html]

KUKA|prc is a set of Grasshopper components that provide Procedural Robot Control for KUKA robots (thus the name PRC). These components are very straightforward to use, and it's actually quite easy to program the robots with them.

=== Rhino File Setup ===

When you work with the robots using KUKA|prc, your units in Rhino must be configured for the Metric system using millimetres. The easiest way to do this is to use the pull-down menus and select File > New..., then from the dialogue presented choose "Small Objects - Millimeters" as your template.

Once installed, KUKA|prc has a user interface (UI) much like other Grasshopper plug-ins. The UI consists of the palettes in the KUKA|prc menu.

[[File:kuka prc.jpg]]
There are five palettes which organize the components. These are:

01 | Core: The main Core component is here (discussed below). There are also the components for the motion types (linear, spline, etc.).
02 | Virtual Robot: The various KUKA robots are here. We'll mostly be using the KUKA Agilus KR6-10 R900 component, as those are what are used in the Agilus work cell.
03 | Virtual Tools: The tools (end effectors) are here. We'll mostly be using the Custom Tool component.
04 | Toolpath Utilities: Approach and Retract components are here (these determine how the robot should move before a toolpath begins and after it has completed). There are also components for dividing up curves and surfaces and generating robotic motion based on that division.
05 | Utilities: The components dealing with inputs and outputs are stored here. These will be discussed later.
'''KUKA|prc CORE'''

The component you always use in every definition is called the Core. It is what generates the KUKA Robot Language (KRL) code that runs on the robot. It also provides the graphical simulation of the robot motion inside Rhino. Everything else gets wired into this component.

[[File:kuka prc _ Core.jpg]]

The Core component takes five inputs. These are:

SIM: This is a numeric value. Attach a default slider with values from 0.00 to 1.00 to control the simulation.

CMDS: This is the output of one of the KUKA|prc Command components. For example, a Linear motion command could be wired into this socket.

TOOL: This is the tool (end effector) to use. It gets wired from one of the Tool components available in the Virtual Tools panel. Usually, you'll use the KUKA|prc Custom Tool option and wire in a Mesh component whose geometry will be shown in the simulation.

ROBOT: This is the robot to use. The code will be generated for this robot, and the simulation will graphically depict this robot. You'll wire in one of the robots from the Virtual Robot panel. For the Agilus Workcell, you'll use the Agilus KR6-10 R900 component.

COLLISION: This is an optional series of meshes that define collision geometry. Enable collision checking in the KUKA|prc settings to make use of this. Note that collision checking has a large, negative impact on KUKA|prc performance.

There are two outputs as well:

GEO: This is the geometry of the robot at the current position, as a set of meshes. You can right-click on this socket and choose Bake to generate a mesh version of the robot for any position in the simulation. You can use this for renderings, for example.

ANALYSIS: This provides a detailed analysis of the simulation values. It has to be enabled for anything to appear: use the Settings dialog, Advanced page, Output Analysis Values checkbox. Then use the Analysis component from the Utilities panel. For example, if you wire a Panel component into the Axis Values socket, you'll see all the axis values for each command that's run.
[[File:kuka prc_Analysis.jpg]]

'''Settings'''

The grey KUKA|prc Settings label at the bottom of the Core component gives you access to its settings. Simply left-click on the label and the dialog will appear.

The settings are organized into pages which you select from along the top edge of the dialog (Settings, Advanced, and Analysis). The dialog is modeless, which means you can operate Rhino while it is open. To see the effect of your changes in the viewport, click the Apply button. These settings will be covered in more detail later.

[[File:kuka prc_Settings1.jpg]]
'''Basic Setup'''

There is a common set of components used in nearly all definitions for use with the Agilus Workcell. Not surprisingly, these correspond to the inputs on the Core component. Here is a very typical setup:

[[File:kuka prc_BasicSetup.jpg]]

SIM SLIDER: The simulation slider goes from 0.000 to 1.000. Dragging it moves the robot through all the motion specified by the Command input. It's often handy to drag the right edge of this slider to make it much wider than the default size. This gives you greater control when you scrub to watch the simulation. You may also want to increase the precision from a single decimal place to several (say 3 or 4). Without that precision, you may not be able to scrub to all the points you want to visualize the motion going through.

You can also add a Play/Pause component. This lets you simulate without dragging the time slider.

CMDS: The components wired into the CMDS slot of the Core are really the heart of your definition and will obviously depend on what you intend the robot to do. In the example above, a simple Linear Move component is wired in.

TOOL: We normally use custom tools with the Agilus Workcell. Therefore a Mesh component gets wired into the KUKA|prc Custom Tool component (labelled TOOL above), which in turn gets wired into the TOOL slot of the Core. The Mesh component points to a mesh representation of the tool drawn in the Rhino file. See the section below on tool orientation and configuration.

ROBOT: The robots we have in the Agilus Workcell are KUKA KR6 R900s, so that component is chosen from the Virtual Robots panel. It gets wired into the ROBOT slot of the Core.

COLLISION: If you want to check for collisions between the robot and the work cell (table), wire in the meshes which represent the work cell. As noted above, this has a large negative impact on performance, so use it only when necessary.
=== Robot Position and Orientation ===

The Agilus workcell has two robots named Mitey and Titey. Depending on which one you are using, you'll need to set up some parameters so your simulation functions correctly. These parameters specify the location and orientation of the robot within the workcell 3D model.

Note: The latest revision of KUKA|prc contains a custom robot for the Agilus workcell. It has two output sockets, Mitey and Titey. Simply wire in the robot you intend to use and no more configuration is required.

[[File:Kuka_prc_MiteyTiteyComp.jpg]]

If you don't have the latest version, see below for how to set them up.
'''Mitey'''

Mitey is the name of the robot mounted in the table. Its base is at 0,0,0. The robot is rotated about its vertical axis 180 degrees. That is, the cable connections are on the right side of the robot base as you face the front of the workcell.

[[File:kuka prc_Workcell.jpg]]

To set up Mitey, do the following:

Bring up the Settings dialog by left-clicking on the KUKA|prc Settings label on the Core component. The dialog presented is shown below:

[[File:kuka prc_BaseSetup.jpg]]
You specify the X, Y, and Z offsets in the Base X, Base Y, and Base Z fields of the dialog. Again, for Mitey these should all be 0. In order to rotate the robot around the vertical axis, you specify 180 in the Base A field. You can see that the A axis corresponds to vertical in the diagram.

Base X: 0
Base Y: 0
Base Z: 0
Base A: 180
Base B: 0
Base C: 0
After you hit Apply, the robot position will be shown in the viewport. You can close the dialog with the Exit button in the upper right corner.

'''Titey'''

The upper robot hanging from the fixture is named Titey. It has different X, Y and Z offset values and rotations. Use the settings below when your definition should run on Titey.

[[File:Kuka prc BaseSetup.jpg]]

Note: These values are all in millimetres.
Base X: 1102.5
Base Y: 0
Base Z: 1125.6
Base A: 90
Base B: 180
Base C: 0
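If you want to sanity-check these base values outside of KUKA|prc, you can rebuild the base transform yourself. Below is a minimal numpy sketch, assuming the standard KUKA convention that A rotates about Z, B about Y, and C about X; verify this against your controller documentation before relying on it:

<pre>
import numpy as np

def kuka_base_frame(x, y, z, a, b, c):
    """4x4 transform from KUKA-style X/Y/Z offsets (mm) and A/B/C angles
    (degrees). Assumes KUKA order: rotate about Z (A), then Y (B), then X (C)."""
    a, b, c = np.radians([a, b, c])
    rz = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
    ry = np.array([[ np.cos(b), 0, np.sin(b)],
                   [ 0,         1, 0        ],
                   [-np.sin(b), 0, np.cos(b)]])
    rx = np.array([[1, 0,          0         ],
                   [0, np.cos(c), -np.sin(c)],
                   [0, np.sin(c),  np.cos(c)]])
    t = np.eye(4)
    t[:3, :3] = rz @ ry @ rx   # compose the rotation
    t[:3, 3] = [x, y, z]       # then the translation
    return t

# Mitey: at the origin, spun 180 degrees about the vertical axis
print(kuka_base_frame(0, 0, 0, 180, 0, 0).round(3))
# Titey: offset and flipped upside down to hang from the fixture
print(kuka_base_frame(1102.5, 0, 1125.6, 90, 180, 0).round(3))
</pre>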
+ | |||
+ | '''Code Output''' | ||
+ | |||
+ | The purpose of KUKA|prc is to generate the code which runs on the robot controller. This code is usually in the Kuka Robot Language (KRL). You need to tell KUKA|prc what directory and file name to use for its code output. Once you've done this, as you make changes in the UI, the output will be re-written as necessary to keep the code up to date with the Grasshopper definition. | ||
+ | |||
+ | To set the output directory and file name follow these steps: | ||
+ | Bring up the Settings dialogue via the Core component. | ||
+ | On the main Settings page, enter the project filename and choose an output directory. Note: See the? button in the dialogue for recommendations on the filename (which characters to avoid). | ||
+ | |||
+ | [[File:Kuka_prc_Output.jpg]] | ||
+ | |||
+ | |||
'''Start Position / End Position'''

When you work with robots, there are certain issues you always have to deal with:
Reach: Can the robot's arm reach the entire workpiece?
Singularities: Will any joint positions result in singularities? (See below for more on this topic.)
Joint Limits: During the motion of the program, will any of the axes hit their limits?

One setting which has a major impact on these is the Start Position. The program needs to know how the tool is positioned before the motion starts. This value is VERY important, because it establishes an initial placement for the joint limits. Generally, you should choose a start position that doesn't have any of the joints near their rotation limits; otherwise, your programmed path may cause them to hit a joint limit. This is a really common error, so make sure you aren't unintentionally near any of the axis limits. Also, the robot will move from its current position (wherever that may be) to the start position. It could move right through your workpiece or fixture setup. So make sure you are aware of where the start position is, and make sure there's a clear path from the current position of the robot to the start position. In other words, jog the robot near to the start position to begin. That'll ensure the motion won't hit your setup.
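Because axis limits are such a common failure mode, it can help to check a candidate start position against the joint limits before running anything. Here is a minimal Python sketch; the limits are nominal KR6 R900 (Agilus) values and should be verified against your robot's datasheet:

<pre>
# Nominal KR6 R900 axis limits in degrees (min, max) - confirm with the
# official datasheet before trusting these numbers.
KR6_R900_LIMITS = {
    "A1": (-170, 170), "A2": (-190, 45), "A3": (-120, 156),
    "A4": (-185, 185), "A5": (-120, 120), "A6": (-350, 350),
}

def check_start_position(axes, margin=20.0):
    """Warn when any axis is outside, or within `margin` degrees of, a limit."""
    for name, value in axes.items():
        lo, hi = KR6_R900_LIMITS[name]
        if not lo <= value <= hi:
            print(f"{name} = {value}: OUTSIDE limits ({lo}, {hi})")
        elif value - lo < margin or hi - value < margin:
            print(f"{name} = {value}: within {margin} deg of a limit")

# Example start position read from the pendant display
check_start_position({"A1": 0, "A2": -90, "A3": 100, "A4": 0, "A5": 30, "A6": 0})
</pre>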
+ | |||
+ | You specify these start and end position values in the Settings of the Core. Bring up the settings dialog and choose the Advanced page. | ||
+ | |||
+ | Under the Start / Endposition section, you enter the axis values for A1 through A6. This begs the questions "how do I know what values to use?". | ||
+ | |||
+ | |||
+ | [[File:kuka prc_StartEndPositions.jpg]] | ||
+ | |||
+ | You can read these directly from the physical robot pendant. That is, you jog the robot into a reasonable start position and read the values from the pendant display. Enter the values into the dialog. Then do the same for the End values. See the section Jogging the Robot in topic Taubman College Agilus Workcell Operating Procedure. | ||
+ | |||
+ | You can also use KUKA|prc to visually set a start position and read the axis values to use. To do this you wire in the KUKA|prc Axis component into the Core component. You can "virtually jog" the robot to a specific position using a setup like this: | ||
+ | |||
+ | [[File:kuka prc_StartPositionSetup.jpg]] | ||
+ | |||
+ | Then simply read the axis values from your sliders and enter these as the Start Position or End Position. | ||
+ | |||
+ | Another way is to move the simulation to the start point of the path. Then read the axis values from the Analysis output of the Core Settings dialog. You can see the numbers listed from A01 to A06. Jot these down, one decimal place is fine. Then enter them on the Advanced page. | ||
+ | |||
+ | [[File:kuka prc_JointAnalysisValues.jpg]] | ||
+ | |||
=== Initial Posture ===

Related to the Start Point is the Initial Posture setting. If you've set the Start Position as above and are still seeing unwanted motion (like a big shift in one of the axes to reorient), try the As Start option. This sets the initial posture to match the start position.

[[File:Kuka prc_InitialPosture.jpg]]
==== Robot Programming with Robots plugin ====
[Part of the information explained here comes from: https://github.com/visose/Robots/wiki/How-To-Use#grasshopper]

Robots is a Grasshopper plugin for programming ABB, KUKA and UR robots for custom applications. Special care is taken to have feature parity between all manufacturers and to have them behave as similarly as possible. The plugin can also be used as a .NET library to create robot programs through scripting inside Rhino (using Python, C# or VB.NET). Advanced functionality is only exposed through scripting.

=== How To Use ===

The basic Grasshopper workflow (a plain-Python sketch of these steps follows the list):

1. Select your robot model using the "Load robot" component.

2. Define your end effector (TCP, weight and geometry) using the "Create tool" component.

3. Create a flat list of targets that define your tool path using the "Create target" component.

4. Create a robot program connecting your list of targets and robot model to the "Create program" component.

5. Preview the tool path using the "Simulation" component.

6. Save the robot program to a file using the "Save program" component. If you're using a UR robot, you can also use the "Remote UR" component to stream the program through a network.
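To make the data flow concrete, here is an illustrative plain-Python sketch of those six steps. The class and field names are invented for illustration only; the real plugin exposes Grasshopper components and a .NET API with its own types and signatures:

<pre>
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Tool:                     # step 2: end effector
    name: str
    tcp: tuple                  # TCP offset relative to the flange (mm)
    weight: float = 0.0         # kilograms

@dataclass
class Target:                   # step 3: one pose in the tool path
    pose: tuple                 # 6 joint values, or a plane for Cartesian
    kind: str = "joint"         # "joint" or "cartesian"
    motion: str = "joint"       # "joint" or "linear" (Cartesian targets only)
    tool: Optional[Tool] = None

@dataclass
class Program:                  # step 4: targets + robot -> program
    robot: str
    targets: list = field(default_factory=list)

    def check(self):
        # The real plugin checks kinematics; here we only check ordering.
        if self.targets and self.targets[0].kind != "joint":
            print("warning: first target should be a joint target")

gripper = Tool("gripper", tcp=(0, 0, 120))
home = Target(pose=(0, -90, 100, 0, 30, 0))            # joint target first
pick = Target(pose="plane_at_workpiece", kind="cartesian",
              motion="linear", tool=gripper)
program = Program(robot="UR5", targets=[home, pick])   # steps 1 and 4
program.check()                                        # steps 5/6 not modeled
</pre>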
+ | |||
+ | |||
+ | === Parameters === | ||
+ | |||
+ | '''Target''' | ||
+ | |||
+ | [[File:ROBOTS_target.JPG]] | ||
+ | |||
+ | A target defines a robot pose, how to reach it and what to do when it gets there. A tool path is made out of a list of targets. Besides the pose, targets have the following attributes: tool, speed, zone, frame, external axes and commands. | ||
+ | |||
+ | There are two types of targets, joint targets and Cartesian targets: | ||
+ | |||
+ | Joint target: The pose of the robot is defined by 6 rotation values corresponding to the 6 axes. This is the only way to unambiguously define a pose. The first target of a robot program should be a joint target. | ||
+ | |||
+ | Cartesian target: The pose of the robot is defined by a plane that corresponds to the desired position and orientation of the TCP. Cartesian targets can produce singularities, the most common being wrist singularities. This happens when the desired position and orientation requires the 4th and 6th joints to be parallel to each other. | ||
+ | |||
+ | |||
+ | |||
+ | Cartesian targets contain two optional attributes, configuration and motion type: | ||
+ | |||
+ | === Configuration === | ||
+ | |||
+ | Industrial robots have 8 different joint configurations to reach the same TCP position and orientation. By default, the configuration in which the joints have to rotate the least is selected. This is determined using the least squares method, which is also the closest distance between targets in joint space. All joints are weighted equally. You can explicitly define a configuration by assigning a value (from 0 to 7) to the Configuration variable. Forcing a configuration doesn't define a pose unambiguously since the joints might rotate clockwise or counter-clockwise depending on the previous target. | ||
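The default selection rule is straightforward to replicate. A minimal sketch, assuming you already have the candidate inverse-kinematics solutions for a target:

<pre>
import numpy as np

def closest_configuration(previous, solutions):
    """Return the index of the candidate solution (up to 8 of them) whose
    joint values are closest to `previous` in the least-squares sense,
    with all joints weighted equally."""
    prev = np.asarray(previous, dtype=float)
    costs = [np.sum((np.asarray(s, dtype=float) - prev) ** 2)
             for s in solutions]
    return int(np.argmin(costs))

solutions = [(10, -80, 95, 5, 40, 0),     # configuration 0
             (170, -80, 95, 5, 40, 0)]    # configuration 1, big A1 swing
print(closest_configuration((0, -90, 100, 0, 30, 0), solutions))  # -> 0
</pre>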
+ | |||
+ | === Motion type === | ||
+ | |||
+ | A robot can move towards a Cartesian target following either a joint motion or a linear motion: | ||
+ | |||
+ | Joint: This is the default motion type. In a joint motion, the controller calculates the joint rotation values on the target using inverse kinematics and moves all of the joints at proportional but fixed speeds so that they will stop at the same time at the desired target. The motion is linear in joint space but the TCP will follow a curved path in world space. It's useful if the path that the TCP follows is not critical, like in pick and place operations. Since inverse kinematics only needs to be calculated at the end of the path, it's also useful to avoid singularities. | ||
+ | |||
+ | [[File:ROBOTS_target_joint.JPG]] | ||
+ | |||
+ | Linear: The robot moves towards the target in a straight line in world space. This is useful if the path that the TCP follows is critical, like while milling or extruding material. If the path goes through a singularity at any point it will not be able to continue. If it moves close to a singularity it might slow down below the programmed speed. | ||
+ | |||
+ | [[File:ROBOTS_target_linear.JPG]] | ||
+ | |||
+ | '''Castings''' | ||
+ | |||
+ | A string containing 6 numbers separated by commas will create a joint target with default attribute values. | ||
+ | A plane will create a Cartesian target with default attribute values. | ||
+ | |||
+ | === Tool === | ||
+ | |||
+ | [[File:ROBOTS_create a tool.JPG]] | ||
+ | |||
+ | This parameter defines a tool or end effector mounted to the flange of the robot. In most cases a single tool will be used throughout the tool path, but each target can have a different tool assigned. You might want to change tool if your end effector has more than one TCP, or due to load changes during pick and place. Contains the following attributes: | ||
+ | |||
+ | Name: Name of the tool (should not contain spaces or special characters). The name is used to identify the tool in the pendant and create variable names in post-processing. | ||
+ | |||
+ | TCP: Stand for "tool center point". Represents the position and orientation of the tip of the end effector in relation to the flange. The default value is the world XY plane (the center of the flange). | ||
+ | |||
+ | Weight: The weight of the end effector in kilograms. The default value is 0 kg. | ||
+ | |||
+ | Mesh: Single mesh representing the geometry of the tool. Used for visualization and collision detection. | ||
+ | |||
=== Coordinate systems ===

[[File:Coordinate systems.jpg]]

As with Rhino, the plugin uses a right-handed coordinate system. The main coordinate systems are:

World coordinate system: This is the Rhino document's coordinate system. Cartesian robot targets are defined in this system. They are transformed to the robot coordinate system during post-processing.

Robot coordinate system: Used to position the robot in reference to the world coordinate system. By default, robots are placed in the world XY plane. The X axis points away from the front of the robot, the Z axis points vertically.

Tool coordinate system: Used to define the position and orientation of the TCP relative to the flange. The Z axis points away from the flange (normal to the flange), the X axis points downwards.
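These frames chain together by plain matrix multiplication. A small numpy sketch with made-up flange and tool offsets, just to show the composition (millimetres, right-handed):

<pre>
import numpy as np

def frame(rotation, translation):
    """Build a 4x4 homogeneous transform."""
    t = np.eye(4)
    t[:3, :3] = rotation
    t[:3, 3] = translation
    return t

world_T_robot  = frame(np.eye(3), [0, 0, 0])      # robot base at the origin
robot_T_flange = frame(np.eye(3), [600, 0, 800])  # from forward kinematics
flange_T_tcp   = frame(np.eye(3), [0, 0, 120])    # tool tip 120 mm off flange

# A Cartesian target lives in world coordinates; the TCP pose in world
# space is the product of the chain:
world_T_tcp = world_T_robot @ robot_T_flange @ flange_T_tcp
print(world_T_tcp[:3, 3])   # -> [600. 0. 920.]
</pre>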
+ | |||
+ | === Robot === | ||
+ | [[File:ROBOTS_load robot system.jpg]] | ||
+ | |||
+ | Represents a specific robot model. It's used to calculate the forward and inverse kinematics for Cartesian targets, to check for possible errors and warnings on a program, for collision detection and simulation. If your robot model is not included in the assembly, check the wiki on how to add your own custom models. | ||
+ | |||
+ | Remote connection: You can use the robot parameter to connect to the robot controller through a network. Currently, this is only supported on UR robots. | ||
+ | |||
=== Create a program ===

Units: The plugin always uses the same units, irrespective of the robot type or document settings:

Length: Millimeters
Angle: Radians
Weight: Kilograms
Time: Seconds
Linear speed: Millimeters per second
Angular speed: Radians per second
'''Uploading the program to a robot'''

A program defines a complete toolpath and creates the necessary robot code to run it. To create a program you need a list of targets and a robot model.

[[File:ROBOTS_create a program.JPG]]

When a program is created, the following post-processing is done:

It will clean up and fix common mistakes.

It will run through the sequence of targets checking for kinematic or other errors.

It will return warnings for unexpected behaviour.

It will generate a simulation to preview the toolpath.

It will calculate an approximate duration of the program.

If there are no errors, it will generate the necessary code in the robot's native language.

'''Errors'''

After the first error is found, processing stops and the output is a program that ends at the error. Most errors are due to kinematics (the TCP not being able to position itself at the target). There are other errors, like exceeding the maximum payload. To identify the error, preview the simulation of the program at the last target. Programs that contain errors won't create native code.

'''Warnings'''

The program will also inform you of any warnings to take into account. Warnings may include changes in configuration, maximum joint speed reached, targets with unassigned values, or the first target not set as a joint target. Programs that contain warnings will create native code and might be safe to run if the warnings are believed to not cause any issues.
+ | |||
+ | '''Code''' | ||
+ | |||
+ | To run the program, a code has to be generated in the specific language used by the manufacturer (RAPID for ABB robots, KRL for KUKA robots and URScript for UR robots). If necessary, this code can then be edited manually. A program containing edited code will not check for warnings or errors and can't be simulated. | ||
+ | |||
+ | '''Simulation''' | ||
+ | |||
+ | The program contains a simulation of the tool path. The simulation currently doesn't take into account acceleration, deceleration or approximation zones. It simulates both linear and joint motions, actual robot speed, including slowdowns when moving close to singularities and wait times. | ||
+ | |||
+ | === Zone === | ||
+ | |||
+ | Defines an approximation zone for a target. Two variables make up a zone, a distance (in mm) and a rotation (in radians). The default value is 0 mm. | ||
+ | |||
+ | [[File:ROBOTS_zone parameter.JPG]] | ||
+ | |||
+ | Targets can be stop points or way points: | ||
+ | |||
+ | Stop points have a distance and rotation value of 0. All axis will completely before moving to the next target. Commands associated with this target will run just after the TCP reaches the target. | ||
+ | |||
+ | Way points have a distance or rotation value greater than 0. Once the TCP position is within the distance value to the target, it will start moving towards the next target. One the TCP orientation is within the rotation value, it will start orienting towards the next target. This is useful to create a continuous path and avoid the robot stopping (decelerating and accelerating) at the cost of precision. Commands associated with this target will usually run a bit before the TCP enters the zone area. | ||
+ | |||
+ | IMPORTANT: If multiple targets use the same zone, first use a string or number to cast into a zone parameter, then assign the parameter to the different targets. Don't assign a string or number directly as a zone to multiple targets, as different zone instances will be created (even if they have the same value) and will create unnecessary duplication in the robot code. | ||
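The instance-sharing issue is the same one you would see in any object model. An illustrative plain-Python sketch (the Zone class here is a stand-in, not the plugin's actual type):

<pre>
class Zone:
    def __init__(self, distance_mm, rotation_rad=0.0):
        self.distance_mm = distance_mm
        self.rotation_rad = rotation_rad

# Good: cast once, share one instance across targets ->
# a single zone declaration in the generated robot code.
blend = Zone(5)
shared = [blend, blend, blend]
print(len({id(z) for z in shared}))       # 1 distinct instance

# Bad: a fresh instance per target -> duplicated declarations
# in the robot code, even though the values are all equal.
duplicated = [Zone(5), Zone(5), Zone(5)]
print(len({id(z) for z in duplicated}))   # 3 distinct instances
</pre>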
+ | |||
+ | |||
+ | |||
+ | |||
Here you can find a very brief overview of robots used in manufacturing. It talks about what it is that makes a machine a robot, what differentiates the various types of robots, different ways robots can move, and three types of power sources for robots.
Iaac - Robotic fabrication workshop with Tom Pawlofsky. 2013
File:RoboFab 2018 - Robot Intro Página 02.jpg
The term Robot
Karel Čapek coined the term robot in 1920. He was a Czech playwright who wrote R.U.R., which stands for Rosumovi Univerzální Roboti (Rossum's Universal Robots).
Robots
What is a robot?
Rather than defining what a robot is right away, let's pause for a moment and discuss whether we need to answer a question like this at all. Everybody knows that a robot is some sort of machine that can move around and, depending on what movie you saw or which book you read, it can either help humans in their day-to-day life or mean the end of humanity. It's clear that there is some controversy and there are lots of misunderstandings about robots and their role in the past, the present, and the future. In order to better understand the situation, let's first examine closely the term "robot" itself. Then, we will try to define it a bit more formally to prevent any misunderstanding or controversy.

History of the term robot

The term "robot" was used for the first time by Karel Čapek, a Czech writer, in his play Rossum's Universal Robots (R.U.R.), which he wrote in 1920, to denote an artificial human made out of synthetic organic matter. These robots (roboti in Czech) were made in factories and their purpose was to replace human workers. While they were very efficient and executed the orders they were given perfectly, they lacked any emotion. It seemed that humans would not need to work at all because robots seemed to be happy to work for them. This changed after a while, and a robot revolt resulted in the extinction of the human race. R.U.R. is quite dark and disturbing, but it does not leave the future hopeless. It was considered quite a success back in the day and we certainly recommend reading it. As its copyright had already expired in many countries at the time of writing this book, it should not be a problem to find a version online, which is in the public domain.
"When he (Young Rossum) took a look at human anatomy he saw immediately that it was too complex and that a good engineer could simplify it. So he undertook to redesign anatomy, experimenting with what would lend itself to omission or simplification. Robots have a phenomenal memory. If you were to read them a twenty-volume encyclopedia they could repeat the contents in order, but they never think up anything original. They'd make fine university professors." – Karel Capek, R.U.R. (Rossum's Universal Robots), 1920
While many attribute the term robot to Karel Čapek, as he wrote the play in which it appeared for the first time, there are sources suggesting that it was actually Čapek's brother Josef who came up with the term (it seems that there was an article in a Czech daily written by Karel Čapek himself, in which he wanted to set the record straight by telling this story). Karel wanted to use the term laboři (from the Latin labor, work), but he did not like it. It seemed too artificial to him, so he asked his brother for advice. Josef suggested roboti, and that was what Karel used in the end.
Now that we know when the term robot was used for the first time and who actually created it, let's find out where it comes from. The explanation that many use is that it comes from the Czech words robota and robotník, which literally mean "work" and "worker" respectively. However, the word robota also means "work" or "serf labour" in Slovak. We should also take into account that some sources suggest that by the time Karel was writing R.U.R., he and his brother often visited their father in a small Slovak spa town called Trenčianske Teplice. Therefore, it might very well be that the term robot was inspired by the usage of the word "robota" in the Slovak language, which is, coincidentally, the native language of one of the authors of this book.
Whether the term robot comes from Czech or Slovak, the word robota might be a matter of national pride, but it does not concern us too much. In both cases, the literal meaning is "work", "labour", or "hard work", and that was the purpose of Čapek's robots. However, robots have evolved dramatically over the past hundred years. To say that they are all about doing hard work would probably be an understatement. So, let's try to define the notion of a robot as we perceive it today.
Modern definition of a robot
When we try to find a precise definition of some term, our first stop is usually some sort of encyclopedia or a dictionary. Let's try to do this for the term robot. Our first stop will be Encyclopedia Britannica. Its definition of a robot is as follows:
"Any automatically operated machine that replaces human effort, though it might not resemble human beings in appearance or perform functions in a humanlike manner."
This is quite a nice definition, but there are a few problems with it. First of all, it's a bit too broad. By this definition, a washing machine should also be considered a robot. It does operate automatically (well, most of them do), it does replace human effort (although not by doing the same tasks a human would do), and it certainly does not resemble a human being.
Secondly, it's quite difficult to imagine what a robot actually is after reading this definition. With such a broad definition, there are way too many things that can be considered a robot, and this definition does not provide us with any specific features. It turns out that while Encyclopedia Britannica's definition of a robot does not fit our needs well enough, it's actually one of the best ones that one can find. For example, The Free Dictionary defines a robot as "A mechanical device that sometimes resembles a human and is capable of performing a variety of often complex human tasks on command or by being programmed in advance."
This is even worse than what we had, and it seems that a washing machine should still be considered a robot. The inherent problem with these definitions is that they try to capture the vast range of machines that we call robots these days. The result is that it's very difficult, if not impossible, to come up with a definition that will be comprehensive enough and not include a washing machine at the same time. Joseph Engelberger, founder of the world's first robotics company and a father of industrial robotics (as we know it today), once famously said, "I can't define a robot, but I know one when I see one."
So, is it even possible to define a robot? Maybe not in general. However, if we limit ourselves just to the scope of this book, there may be a definition that will suit our needs well enough. In her very nice introductory book on the subject of robotics called The Robotics Primer (which we also highly recommend), Maja J. Mataric uses the following definition:
"A robot is an autonomous system which exists in the physical world, can sense its environment, and can act on it to achieve some goals."
At first sight, it might not seem like a vast improvement over what we have so far, but let's dissect it part by part to see whether it meets our needs. The first part says, "A robot is an autonomous system". By autonomous, we mean that a robot makes decisions on its own—it's not controlled by a human. This already seems to be an improvement as it weeds out any machine that's controlled by someone (such as our famous washing machine). Robots that we will talk about throughout this book may sometimes have some sort of a remote function, which allows a human to control it remotely, but this functionality is usually built-in as sort of a safety measure so that if something goes wrong and the robot's autonomous systems fail to behave as we would expect them to, it's still possible to get the robot to safety and diagnose its problems afterwards. However, the main goal still stays the same, that is, to build robots that can take some direction from humans and are able to act and function on their own. However, just being an autonomous system will certainly not be enough for a robot in this book. For instance, we can find many computer programs that we can call autonomous systems (they are not controlled by an individual and make decisions on their own) and yet we do not consider them to be robots. To get around this obstacle, we need the other part of the sentence that says, "which exists in the physical world".
Given the recent advances in the fields of artificial intelligence and machine learning, there is no shortage of computer systems that act on their own and perform some work for us, which is what robots should be for. As a quite notorious example, let's consider spam filters. These are computer programs that read every e-mail that reaches your e-mail address and decides whether you may want to read it (and that the e-mail is indeed legitimate) or whether it's yet another example of an unwanted e-mail.
There is no doubt that such a system is helpful (if you disagree, try to read some of the e-mails in your Spam folder—I am pretty sure it will be a boring read). It's estimated that over 60 per cent of all e-mail traffic in 2014 can be attributed to spam e-mails. Being able to automatically filter them can save us a lot of reading time. Also, as there is no human involved in the decision process (although we can help it by marking an e-mail as spam), we can call such a system autonomous. Still, we will not call it a true robot. Rather, we call such programs "software robots" or just "bots" (the fact that their name is shorter may come from the fact that they are short of the physical parts of true robots).
While software robots are definitely an interesting group in their own right, it's the physical world in which robots operate that makes the process of creating them so exciting and difficult at the same time. When creating a software robot, you can count on the fact that the environment it will run in (usually the operating system) will be quite stable (as in, not too many things may change unexpectedly). However, when you are creating a real robot, you can never be sure. This is why a real robot needs to know what is happening in the environment in which it operates. Also, this is why the next part of the definition says, "can sense its environment".
Sensing what is happening around a real robot is arguably its most important feature. To sense their surrounding environments, robots usually have sensors. These are devices that measure physical characteristics of the environment and provide this information back to the robot so that it can, for instance, react to sudden changes of temperature, humidity, or pressure. This is quite a big difference from software robots. While they just get the information they need in order to operate somewhat magically, real robots need to have a subsystem or subsystems that take care of obtaining this information. If we look at the differences between robots and humans, we will not find many (in our very high-level view, of course). We can think of sensor subsystems as artificial replacements for human organs that provide this sort of information to the brain.
One important consequence of this definition is that anything that does not sense its environment cannot be called a robot. This includes any devices that just "drive blind" or move in a random fashion, because they do not have any information from the environment to base their behaviour on. Any roboticist will tell you that robots are very exciting machines. Many will also argue that what makes them so exciting is actually their ability to interact with the outside world (which is to move or otherwise change the environment they are in). Without this, they are just another static machine that might be useful, but rather unexciting.
Our definition of a robot reflects this in its last part when it says, "can act on it to achieve some goals".
Acting on the environment might sound like a very complex task for a robot, but in this case, it just means changing the world in some (even very slight) way. We call the parts of robots that perform this effectors. If we look at our robot vs human comparison, effectors are the artificial equivalents of hands, legs, and other body parts that allow it to move. Effectors make use of some lower-level systems, such as motors or muscles, that actually carry out the movement. We call these actuators. Although the artificial ones may seem to function similarly to the biological ones, a closer look will reveal that they are actually quite different. You may have noticed that this part is not only about acting on the robot's environment, but also about achieving some goals. While many hobby roboticists build robots just for the fun of it, most robots are built in order to carry out (or, should we rather say, to help with) some tasks, such as moving heavy parts in a factory or locating victims in areas affected by natural disasters.
As we said before, a system or a machine that behaves randomly and does not use information from its environment cannot really be considered a robot. But how should it use this information? The easiest thing to do is something useful, which we can rephrase as trying to reach some goal that we consider useful, which in turn brings us back to our definition. A goal of a robot does not necessarily need to be something as complex and ambitious as "hard labour for humans". It can easily be something simple, such as "do not bump into obstacles" or "turn the light switch on".
Now, as we have at least a slight idea of what a robot is, we can move on to briefly discuss where robots come from, in other words, the history of robotics.
Where do robots come from?
As the title suggests, this part of the chapter should be about the history of robots. We already know a few quite important facts, such as that the term robot was coined by the Czech author Karel Čapek in 1920. As it turns out, many more interesting events happened over the years, other than this one. In order to keep things organized, let's start from the beginning.
It's quite difficult to pinpoint a precise date in history that we can mark as the date of birth of the first robot. For one, we have established quite a restrictive definition of a robot previously; thus, we will have to wait until the 20th century to actually see a robot in the proper sense of the word. Until then, let's at least discuss the honourable mentions.
The first one that comes close to a robot is a mechanical bird called "The Pigeon". This was postulated by the Greek mathematician Archytas of Tarentum in the 4th century BC and was supposed to be propelled by steam. It cannot be considered a robot by our definition (not being able to sense its environment already disqualifies it), but it comes pretty close for its age. Over the following centuries, there were many attempts to create automatic machines, such as clocks measuring time using the flow of water, life-sized mechanical figures, or even the first programmable humanoid robot (it was actually a boat with four automatic musicians on it). The problem with all of these is that they are very disputable, as there is very little (or no) historically trustworthy information available about these machines.
It would have stayed like this for quite some time if it was not for Leonardo da Vinci's notebooks that were rediscovered in the 1950s. They contain a complete drawing of a 1495 humanoid (a fancy word for a mechanical device that resembles a human), which looks like an armoured knight. It seems that it was designed so that it could sit up, wave its arms, move its head, and, most importantly, amuse royalty. In the 18th century, following the amusement line, Jacques de Vaucanson created three automata: a flute player that could play twelve songs, a tambourine player, and the most famous one, "The Digesting Duck". This duck was capable of moving, quacking, flapping its wings, or even eating and digesting food (not in the way you probably think—it just released matter stored in a hidden compartment). It was an example of "moving anatomy"—modeling human or animal anatomy using mechanics.
Our list would not be complete if we omitted the robot-like devices that came about in the centuries that followed. Many of them were radio-controlled, such as Nikola Tesla's boat, which he showcased at Madison Square Garden in New York. You could command it to go forward, stop, turn left or right, turn its lights on or off, and even submerge. All of this did not seem too impressive at that time, because the press reports attributed it to "mind control".
At this point, we have once again reached the time when the term robot was used for the first time. As we said many times before, it was in 1920 when Karel Čapek used it in his play, R.U.R. Two decades later, another very important term was coined. Isaac Asimov used the term robotics for the first time in his story "Runaround" in 1942. Asimov wrote many other stories about robots and is considered to be a prominent sci-fi author of his time.
However, in the world of robotics, he is known for his three laws of robotics:
• First law: A robot may not injure a human being or through inaction allow a human being to come to harm.
• Second law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
• Third law: A robot must protect its own existence, as long as such protection does not conflict with the first or second law.
After a while, he added a zeroth law:
• Zeroth law: A robot may not harm humanity or by inaction allow humanity to come to harm.
These laws somehow reflect the feelings people had about machines they called robots at that time. Seeing enslavement by some sort of intelligent machine as a real possibility, these laws were supposed to be guiding principles one should at least keep in mind, if not directly follow, when designing a new intelligent machine. Also, while many were afraid of a robot apocalypse, time has shown that it's still yet to come. In order for it to take place, machines will need to get some sort of intelligence, some ability to think and act based on their thoughts. Also, while we can see that over the course of history the mechanical side of robots went through some development, the intelligence simply was not there yet. This was part of the reason why, in the summer of 1956, a group of very wise gentlemen (which included Marvin Minsky, John McCarthy, Herbert Simon, and Allen Newell) got together; they were later called the founding fathers of the newly founded field of Artificial Intelligence. It was at this very event that they discussed creating intelligence in machines (thus, the term artificial intelligence).
Although their goals were very ambitious (some sources even mention that their idea was to build this whole machine intelligence during that summer), it took quite a while until some interesting results could be presented. One such example is Shakey, a robot built by the Stanford Research Institute (SRI) in 1966. It was the first robot (in our modern sense of the word) capable of reasoning about its own actions. The robots built before this usually had all the actions they could execute preprogrammed. On the other hand, Shakey was able to analyze a more complex command and split it into smaller problems on his own.
The following image of Shakey is taken from https://en.wikipedia.org/wiki/File:ShakeyLivesHere.jpg:
Shakey, resting in the Computer History Museum in Mountain View, California
His hardware was quite advanced too. He had collision detectors, sonar range finders, and a television camera. He operated in a small closed environment of rooms, which were usually filled with obstacles of many kinds. In order to navigate, he needed to find a way around these obstacles while not bumping into anything. Shakey did it in a very straightforward way.
At first, he carefully planned his moves around these obstacles and slowly (the technology was not as advanced back then) tried to move around them. Of course, getting from a stable position to movement wouldn't be possible without some shaky moves. The problem was that Shakey's movements were mostly of this shaky nature, so he could not be called anything other than Shakey. The lessons learned by the researchers who were trying to teach Shakey how to navigate in his environment turned out to be very important. It comes as no surprise that one of the results of the research on Shakey is the A* search algorithm (an algorithm that can very efficiently find the best path between two points). This is considered to be one of the most fundamental building blocks not only in the field of robotics or artificial intelligence, but in the field of computer science as a whole. Our discussion on the history of robotics could go on and on for a very long time. Although one can definitely write a book on this topic (as it's a very interesting one), it's not this book; we shall try to get back to the question we tried to answer, which was: where do robots come from?
In a nutshell, robots evolved from very basic mechanical automation, through remotely-controlled objects, to devices or systems that can act (or even adapt) on their own in order to achieve some goal. If this sounds way too complicated, do not worry. The truth is that to build your own robot, you do not really need to deeply understand any of this. The vast majority of robots you will encounter are built from simple parts that are not difficult to understand when you see the big picture. So, let's figure out how we will build our own robot. Let's find out what robots are made of.
What can we find in a robot?
In the very first part of this chapter, we tried to come up with a good (modern) definition of a robot. It turns out that the definition we came up with does not only describe a robot as we know it (or would like to know it), but also gives us some great pointers as to what parts we can most definitely find in (or on) a robot. Let's see our definition again: "A robot is an autonomous system which exists in the physical world, can sense its environment, and can act on it to achieve some goals." So, what will these most important parts be? Here is what we think should be on this list.
Information from: Lentin Joseph, Learning Robotics Using Python, Packt Publishing Ltd, May 27, 2015.
Industrial robots with different types of movements
Cartesian robots: Robots that can do 3 translations using linear slides.

SCARA robots: Robots that can do 3 translations plus a rotation around a vertical axis.

6-axis robots: Robots that can fully position their tool in a given position (3 translations) and orientation (3 rotations).

Redundant robots: Redundant robots can also fully position their tool in a given position. But while 6-axis robots can only have one posture for one given tool position, redundant robots can accommodate a given tool position under different postures. This is just like the human arm, which can hold a fixed handle while moving the shoulder and elbow joints.

Dual-arm robots: Dual-arm robots are composed of two arms that can work together on a given workpiece.
The type of movement is dictated by the arrangement of joints (placement and type) and linkages.
Industrial robots for different applications
The application is the type of work that the robot is designed to do. Robot models are created with specific applications or processes in mind. Different applications will have different requirements. For instance, a painting robot will require a small payload but a large movement range and be explosion proof. On the other hand, an assembly robot will have a small workspace but will be very precise and fast. Depending on the target application, the industrial robot will have a specific type of movement, linkage dimension, control law, software and accessory packages. Below are some types of applications:
Welding robots
Material handling robots
Palletizing robots
Painting robots
Assembly robots
Serial or parallel industrial robots: Serial robots are the most common. They are composed of a series of joints and linkages that go from the base to the robot tool.

Parallel robots come in many forms. Some call them spider robots. Parallel industrial robots are made in such a way that you can close loops from the base, to the tool, and back to the base again. It's like many arms working together with the robot tool. Parallel industrial robots typically have a smaller workspace (try to move your arms around while holding your hands together vs the space you can reach with a free arm) but higher accelerations, as the actuators don't need to be moved: they all sit at the base.
Basic terminologies
Work Cell: All the equipment needed to perform the robotic process (robot, table, fixtures, etc.)
Work Envelope: All the space the robot can reach.
Degrees of Freedom: The number of independently movable joints in the robot. To be considered a robot, there needs to be a minimum of 4 degrees of freedom. The KUKA Agilus robots have 6 degrees of freedom.
Payload: The amount of weight a robot can handle at full arm extension and moving at full speed.
End Effector: The tool that does the work of the robot. Examples: Welding gun, paint gun, gripper, etc.
Manipulator: The robot arm (everything except the End of Arm Tooling).
TCP: Tool Center Point. This is the point (coordinate) that we program in relation to.
Positioning Axes: The first three axes of the robot (1, 2, 3). Base / Shoulder / Elbow = Positioning Axes. These are the axes near the base of the robot.
Orientation Axes: The other joints (4, 5, 6). These joints are always rotary. Pitch / Roll / Yaw = Orientation Axes. These are the axes closer to the tool.
Coordinate Systems
Classification
Industrial robots can be classified into six categories based on the following characteristics:
Degrees of Freedom
Arm Geometry
Power Source
Types of Motion
Path Control
Intelligence
Degrees of Freedom
The number of movable joints in a robot defines its degrees of freedom. Articulated robots such as those in the Fab Lab have at least 6 degrees of freedom. These joints, or axes, are broken into two categories. The three joints nearest the base of the manipulator are called the positioning axes. The three closest to the tool are called the orientation axes. Robots can have more degrees of freedom through external axes; for instance, the entire robot can be mounted on a sledge which moves along a track. This would be the seventh degree of freedom.
Arm geometry
The arm geometry, that is the configuration and type of joints used, determines the shape of the work envelope.
Rectangular (Cartesian): The work envelope is a box. All three axes are linear.
Cylindrical
The work envelope is a cylinder. Axis 1 is rotary. Other axes are linear.
SCARA: This is a variation of the cylindrical work envelope robot. SCARA is an acronym for Selective Compliance Articulated Robot Arm. Joints 1 and 2 of this type are rotary and in the same plane. Joint 3 is linear. These are often called Pick and Place robots.
Spherical
This type of arm geometry produces a ball-shaped work envelope.
Axes 1 and 2 are rotary. Axis 3 is linear.
Articulated: This type of arm geometry, which is what we use in the Fab Lab, is also referred to as Jointed Spherical.
Types of Motion
Robot Programming allows us to develop several motion types:
- PTP: POINT TO POINT
The robot guides the TCP along the fastest path to the endpoint. The fastest path is generally not the shortest path and is thus not a straight line. As the motions of the robot axes are rotational, curved paths can be executed faster than straight paths. The exact path of the motion cannot be predicted.
- CIRC: Circular
The robot guides the TCP at a defined velocity along a circular path to the endpoint. The circular path is defined by a start point, auxiliary point and endpoint.
- LIN: Linear
The robot guides the TCP at a defined velocity along a straight path to the end point. This path is predictable.
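To make the difference between PTP and LIN motion concrete, here is a minimal sketch in plain Python (not robot code) for an illustrative two-joint planar arm; the link lengths and poses are assumptions chosen for the example:

```python
import numpy as np

L1, L2 = 0.45, 0.42  # illustrative link lengths in metres (assumed)

def fk(q):
    """Forward kinematics of a planar 2-joint arm: joint angles -> TCP position."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

q_start = np.array([0.2, 0.8])   # start pose in joint space (radians)
q_end   = np.array([1.4, -0.6])  # end pose in joint space

# PTP: interpolate linearly in JOINT space -> the TCP traces a curved path.
ptp_path = [fk(q_start + t * (q_end - q_start)) for t in np.linspace(0, 1, 51)]

# LIN: interpolate linearly in WORLD space -> the TCP traces a straight line
# (a real controller would solve inverse kinematics at every step).
p_start, p_end = fk(q_start), fk(q_end)
lin_path = [p_start + t * (p_end - p_start) for t in np.linspace(0, 1, 51)]

print("PTP midpoint:", ptp_path[25])  # strays from the straight line
print("LIN midpoint:", lin_path[25])
```

Both paths end at the same TCP position, but the PTP midpoint strays from the straight line, which is why PTP motion is fast but its exact path in world space is unpredictable.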
Singularities
This is a condition in which the manipulator loses one or more degrees of freedom, and a change in joint variables does not result in a change in the end effector's location and orientation. This occurs when the determinant of the Jacobian matrix is zero, i.e. the Jacobian is rank-deficient.
Singularities play a significant role in the design and control of robot manipulators. Singularities of the kinematic mapping, which determines the position of the end-effector in terms of the manipulator's joint variables, may impede control algorithms, lead to large joint velocities, forces and torques, and reduce instantaneous mobility.
However, they can also enable fine control, and the singularities exhibited by trajectories of points in the end-effector can be used to mechanical advantage. A number of attempts have been made to understand kinematic singularities and, more specifically, singularities of robot manipulators, using aspects of the singularity theory of smooth maps.
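As a worked illustration of the rank-deficiency condition, the sketch below (plain Python/NumPy, with an assumed two-joint planar arm) computes the Jacobian determinant. For this arm det(J) = L1·L2·sin(q2), which vanishes when the elbow is fully stretched or folded back:

```python
import numpy as np

L1, L2 = 0.45, 0.42  # illustrative link lengths (assumed)

def jacobian(q1, q2):
    """Jacobian of the planar 2-joint arm's TCP position w.r.t. the joint angles."""
    return np.array([
        [-L1*np.sin(q1) - L2*np.sin(q1+q2), -L2*np.sin(q1+q2)],
        [ L1*np.cos(q1) + L2*np.cos(q1+q2),  L2*np.cos(q1+q2)],
    ])

# det(J) = L1*L2*sin(q2): it goes to zero as q2 approaches 0 or pi,
# i.e. as the arm becomes fully stretched or folded on itself.
for q2 in (np.pi/2, 0.1, 0.0):
    print(f"q2 = {q2:.2f} rad -> det(J) = {np.linalg.det(jacobian(0.3, q2)):.4f}")
```

Near the singular pose the determinant approaches zero, so inverting the Jacobian demands ever larger joint velocities for the same TCP velocity, which is exactly the behaviour described above.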
Power source
The three most common methods of powering robots are air pressure (pneumatic), fluid pressure (hydraulic) and electricity. The main characteristics of each of these methods are listed below:
Pneumatic
Weakest
Fastest
Clean
Inexpensive
Low tech
Open loop (non-servo)
Stop-to-stop path control
Uses hard stops to determine program locations
Loud (referred to as "bang bang" robots)
Hydraulic
Most powerful (greatest payload)
Messy to repair
Closed loop (servo)
More flexible than pneumatic
Mid-range in noise
Oil used can contaminate paints
Most expensive (have to buy both hydraulic and electronic systems)
Most costly to repair (have to fix both hydraulic and electronic systems)
Electric
Most popular
Clean
Quiet
Closed loop (servo motors)
Most flexible
Can use sealed motors for painting
Robotic Processes
To make working with robots easier for everybody, the IAAC Atelier Lab has divided the workflow into two groups: existing robotic processes and new robotic processes. Existing robotic processes have been tested in the recent past, which means they can be accessed easily, though their current availability still has to be checked with the lab. New robotic processes are techniques not yet developed or still in progress, so they may need extra support for setting up and additional time for calibration.
Following are the existing robotic processes:
The latest generation: Collaborative industrial robots
There is a new qualifier that has recently been used to classify industrial robots: whether they can collaborate with their human co-workers. Collaborative robots are made in such a way that they respect certain safety standards so that they cannot hurt a human. While traditional industrial robots generally need to be fenced off from human co-workers for safety reasons, collaborative robots can be used in the same environment as humans. They can also usually be taught by an operator instead of programmed. Examples of collaborative robots are:
ABB https://new.abb.com/products/robotics
KUKA https://www.kuka.com/en-de
Rethink Robotics Sawyer & Baxter https://www.rethinkrobotics.com/
Universal Robots UR3, UR5 & UR10 https://blog.robotiq.com/bid/61616/Robot-Gripper-for-Universal-Robots
Robot Programming with Kuka|prc
[Part of the information explained here comes from: http://mkmra2.blogspot.com/2016/01/robot-programming-with-kukaprc.html]
KUKA|prc is a set of Grasshopper components that provide Procedural Robot Control for KUKA robots (thus the name PRC). These components are very straightforward to use and it's actually quite easy to program the robots using them.
Rhino File Setup
When you work with the robots using KUKA|prc, your units in Rhino must be configured for the metric system using millimetres. The easiest way to do this is to use the pull-down menus and select File > New..., then from the dialog presented choose "Small Objects - Millimeters" as your template.
Once installed, KUKA|prc has a user interface (UI) much like other Grasshopper plug-ins. The UI consists of the palettes in the KUKA|prc menu.
There are five palettes which organize the components. These are:
01 | Core: The main Core component is here (discussed below). There are also the components for the motion types (linear, spline, etc.).
02 | Virtual Robot: The various KUKA robots are here. We'll mostly be using the KUKA Agilus KR6-10 R900 component, as those are what are used in the Agilus work cell.
03 | Virtual Tools: Approach and Retract components are here (these determine how the robot should move after a toolpath has completed). There are also components for dividing up curves and surfaces and generating robotic motion based on that division.
04 | Toolpath Utilities: The tools (end effectors) are here. We'll mostly be using the Custom Tool component.
05 | Utilities: The components dealing with inputs and outputs are stored here. These are discussed later.
KUKA|prc CORE
The component you always use in every definition is called the Core. It is what generates the KUKA Robot Language (KRL) code that runs on the robot. It also provides the graphical simulation of the robot motion inside Rhino. Everything else gets wired into this component.
The Core component takes five inputs. These are:
SIM- This is a numeric value. Attach a default slider with values from 0.00 to 1.00 to control the simulation.
CMDS- This is the output of one of the KUKA|prc Command components. For example, a Linear motion command could be wired into this socket.
TOOL- This is the tool (end effector) to use. It gets wired from one of the Tool components available in the Virtual Tools panel. Usually, you'll use the KUKA|prc Custom Tool option and wire in a Mesh component that will show the tool geometry in the simulation.
ROBOT - This is the robot to use. The code will be generated for this robot and the simulation will graphically depict this robot. You'll wire in one of the robots from the Virtual Robot panel. For the Agilus Workcell, you'll use the Agilus KR6-10 R900 component.
COLLISION - This is an optional series of meshes that define collision geometry. Enable collision checking in the KUKA|prc settings to make use of this. Note that collision checking has a large, negative impact on KUKA|prc performance.
There are two outputs as well:
GEO: This is the geometry of the robot at the current position - as a set of meshes. You can right-click on this socket and choose Bake to generate a mesh version of the robot for any position in the simulation. You can use this for renderings for example.
ANALYSIS: This provides a detailed analysis of the simulation values. It has to be enabled for anything to appear: enable it in the Settings dialog, Advanced page, Output Analysis Values checkbox, then use the Analysis component from the Utilities panel. For example, if you wire a Panel component into the Axis Values socket, you'll see all the axis values for each command that's run.
Settings
The grey KUKA|prc Settings label at the bottom of the Core component gives you access to its settings. Simply left click on the label and the dialog will appear.
The settings are organized into pages which you select from along the top edge of the dialog (Settings, Advanced, and Analysis). The dialog is modeless which means you can operate Rhino while it is open. To see the effect of your changes in the viewport click the Apply button. These settings will be covered in more detail later.
Basic Setup

There is a common set of components used in nearly all definitions for use with the Agilus Workcell. Not surprisingly, these correspond to the inputs on the Core component. Here is a very typical setup:
SIM SLIDER: The simulation Slider goes from 0.000 to 1.000. Dragging it moves the robot through all the motion specified by the Command input. It's often handy to drag the right edge of this slider to make it much wider than the default size. This gives you greater control when you scrub to watch the simulation. You may also want to increase the precision from a single decimal point to several (say 3 or 4). Without that precision, you may not be able to scrub to all the points you want to visualize the motion going through.
You can also add a Play/Pause component. This lets you simulate without dragging the time slider.
CMDS: The components which get wired into the CMDS slot of the Core are really the heart of your definition and will obviously depend on what you intend the robot to do. In the example above, a simple Linear Move component is wired in.
TOOL: We normally use custom tools with the Agilus Workcell. Therefore a Mesh component gets wired into the KUKA|prc Custom Tool component (labelled TOOL above). This gets wired into the TOOL slot of the Core. The Mesh component points to a mesh representation of the tool drawn in the Rhino file. See the section below on Tool orientation and configuration.
ROBOT: The robots we have in the Agilus Workcell are KUKA KR6 R900s, so that component is chosen from the Virtual Robots panel. It gets wired into the ROBOT slot of the Core.
COLLISION: If you want to check for collisions between the robot and the work cell (table) wire in the meshes which represent the work cell. As noted above this has a large negative impact on performance so use this only when necessary.
Robot Position and Orientation
The Agilus workcell has two robots named Mitey and Titey. Depending on which one you are using you'll need to set up some parameters so your simulation functions correctly. These parameters specify the location and orientation of the robot within the workcell 3D model.
Note: The latest revision of Kuka|prc contains a custom robot for the Agilus workcell. It has two output sockets, Mitey and Titey. Simply wire in the robot you intend to use and no more configuration is required.
If you don't have the latest version, see below for how to set them up.
Mitey
Mitey is the name of the robot mounted in the table. Its base is at 0,0,0. The robot is rotated about its vertical axis 180 degrees. That is, the cable connections are on the right side of the robot base as you face the front of the workcell.
To set up Mitey, do the following:
Bring up the Settings dialog by left-clicking on the KUKA|prc Settings label on the Core component. The dialog presented is shown below:
You specify the X, Y, and Z offsets in the Base X, Base Y, and Base Z fields of the dialog. Again, for Mitey these should all be 0. In order to rotate the robot around the vertical axis, you specify 180 in the Base A field. You can see that the A axis corresponds to vertical in the diagram.
Base X: 0
Base Y: 0
Base Z: 0
Base A: 180
Base B: 0
Base C: 0
After you hit Apply the robot position will be shown in the viewport. You can close the dialog with the Exit button in the upper right corner.
Titey
The upper robot hanging from the fixture is named Titey. It has different X, Y and Z offset values and rotations. Use the settings below when your definition should run on Titey.
Note: These values are all in millimetres.
Base X: 1102.5
Base Y: 0
Base Z: 1125.6
Base A: 90
Base B: 180
Base C: 0
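To see what these offsets and angles mean geometrically, here is a minimal Python sketch that assembles each robot's base frame as a 4x4 transform. It assumes the usual KUKA ABC convention (A about Z, B about Y, C about X, applied in that order); verify against your controller documentation:

```python
import numpy as np

def kuka_base_frame(x, y, z, a, b, c):
    """4x4 base transform from KUKA-style XYZ offsets (mm) and ABC angles
    (degrees), assuming A about Z, B about Y, C about X (Rz @ Ry @ Rx)."""
    a, b, c = np.radians([a, b, c])
    rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
    ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    rx = np.array([[1, 0, 0], [0, np.cos(c), -np.sin(c)], [0, np.sin(c), np.cos(c)]])
    t = np.eye(4)
    t[:3, :3] = rz @ ry @ rx
    t[:3, 3] = [x, y, z]
    return t

mitey = kuka_base_frame(0, 0, 0, 180, 0, 0)             # rotated 180° about vertical
titey = kuka_base_frame(1102.5, 0, 1125.6, 90, 180, 0)  # hanging from the fixture
print(np.round(mitey, 3))  # note the flipped X and Y axes from Base A = 180
```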
Code Output
The purpose of KUKA|prc is to generate the code which runs on the robot controller. This code is usually in the Kuka Robot Language (KRL). You need to tell KUKA|prc what directory and file name to use for its code output. Once you've done this, as you make changes in the UI, the output will be re-written as necessary to keep the code up to date with the Grasshopper definition.
To set the output directory and file name, follow these steps: bring up the Settings dialog via the Core component, then on the main Settings page enter the project filename and choose an output directory. Note: See the ? button in the dialog for recommendations on the filename (which characters to avoid).
Start Position / End Position
When you work with robots there are certain issues you always have to deal with:

Reach: Can the robot's arm reach the entire workpiece?
Singularities: Will any joint positions result in singularities? (See below for more on this topic.)
Joint Limits: During the motion of the program, will any of the axes hit their limits?

One setting which has a major impact on these is the Start Position. The program needs to know how the tool is positioned before the motion starts. This value is VERY important, because it establishes an initial placement for the joint limits. Generally, you should choose a start position that doesn't have any of the joints near their rotation limits; otherwise, your programmed path may cause them to hit a joint limit. This is a really common error, so make sure you aren't unintentionally near any of the axis limits (a quick check like the sketch below can help).

Also, the robot will move from its current position (wherever that may be) to the start position. It could move right through your workpiece or fixture setup, so make sure you are aware of where the start position is and that there is a clear path from the robot's current position to it. In other words, jog the robot near to the start position to begin. That will ensure the motion won't hit your setup.
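A minimal sketch of such a check in Python, using assumed joint limits (look up the real values for your robot in its datasheet; the limits below are illustrative):

```python
# Illustrative joint limits in degrees -- assumptions for the sketch,
# not authoritative values for any particular robot.
AXIS_LIMITS = {
    "A1": (-170, 170), "A2": (-190, 45), "A3": (-120, 156),
    "A4": (-185, 185), "A5": (-120, 120), "A6": (-350, 350),
}

def check_start_position(axes, margin=20.0):
    """Warn if any axis of the start position is within `margin` degrees
    of its limit -- the common error described above."""
    for name, value in axes.items():
        lo, hi = AXIS_LIMITS[name]
        if value - lo < margin or hi - value < margin:
            print(f"WARNING: {name} = {value} deg is near its limits {lo}..{hi}")

check_start_position({"A1": 0, "A2": -90, "A3": 140, "A4": 0, "A5": 30, "A6": 0})
# -> warns about A3, which is only 16 degrees from its upper limit
```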
You specify these start and end position values in the Settings of the Core. Bring up the settings dialog and choose the Advanced page.
Under the Start / Endposition section, you enter the axis values for A1 through A6. This begs the question: "How do I know what values to use?"
You can read these directly from the physical robot pendant. That is, you jog the robot into a reasonable start position and read the values from the pendant display. Enter the values into the dialog. Then do the same for the End values. See the section Jogging the Robot in topic Taubman College Agilus Workcell Operating Procedure.
You can also use KUKA|prc to visually set a start position and read the axis values to use. To do this, you wire the KUKA|prc Axis component into the Core component. You can "virtually jog" the robot to a specific position using a setup like this:
Then simply read the axis values from your sliders and enter these as the Start Position or End Position.
Another way is to move the simulation to the start point of the path. Then read the axis values from the Analysis output of the Core Settings dialog. You can see the numbers listed from A01 to A06. Jot these down, one decimal place is fine. Then enter them on the Advanced page.
Initial Posture
Related to the Start Point is the Initial Posture setting. If you've set the Start Position as above and are still seeing motion (like a big shift in one of the axes to reorient), try the As Start option. This sets the initial posture to match the start position.
File:Kuka prc InitialPosture.jpg
Robot Programming with Robots plugin
[part of the information explained here is coming from: https://github.com/visose/Robots/wiki/How-To-Use#grasshopper]
Robots is a Grasshopper plugin for programming ABB, KUKA and UR robots for custom applications. Special care is taken to have feature parity between all manufacturers and have them behave as similarly as possible. The plugin can also be used as a .NET library to create robot programs through scripting inside Rhino (using Python, C# or VB.NET). Advanced functionality is only exposed through scripting.
How To Use
The basic Grasshopper workflow:
1- Select your robot model using the "Load robot" component.
2- Define your end effector (TCP, weight and geometry) using the "Create tool" component.
3- Create a flat list of targets that define your tool path using the "Create target" component.
4- Create a robot program connecting your list of targets and robot model to the "Create program" component.
5- Preview the tool path using the "Simulation" component.
6- Save the robot program to a file using the "Save program" component. If you're using a UR robot, you can also use the "Remote UR" component to stream the program through a network.
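The same six steps can also be expressed in code. The sketch below is a self-contained toy model in Python: the class and function names are illustrative stand-ins, NOT the plugin's actual scripting API (see the Robots wiki for the real .NET types):

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

# A toy model of the six-step workflow above; all names are hypothetical.

@dataclass
class Tool:
    name: str
    tcp: Tuple[float, float, float]  # TCP offset from the flange, in mm
    weight: float = 0.0              # kilograms

@dataclass
class Target:
    pose: Union[Tuple[float, ...], str]  # 6 joint values, or a plane description
    tool: Tool
    speed: float = 100.0                 # mm/s

@dataclass
class Program:
    robot: str
    targets: List[Target]

    def save(self, path: str) -> None:
        # A real program would emit KRL / RAPID / URScript here.
        print(f"writing {len(self.targets)} targets for {self.robot} to {path}")

robot = "KUKA.KR6R900"                                  # 1- load a robot model
gripper = Tool("gripper", tcp=(0, 0, 120), weight=1.5)  # 2- define the end effector
targets = [Target((0, -90, 90, 0, 45, 0), gripper)]     # 3- first target: a joint target
program = Program(robot, targets)                       # 4- create the program
program.save("C:/temp/job.src")                         # 5/6- then simulate and save
```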
Parameters
Target
A target defines a robot pose, how to reach it and what to do when it gets there. A tool path is made out of a list of targets. Besides the pose, targets have the following attributes: tool, speed, zone, frame, external axes and commands.
There are two types of targets, joint targets and Cartesian targets:
Joint target: The pose of the robot is defined by 6 rotation values corresponding to the 6 axes. This is the only way to define a pose unambiguously. The first target of a robot program should be a joint target.
Cartesian target: The pose of the robot is defined by a plane that corresponds to the desired position and orientation of the TCP. Cartesian targets can produce singularities, the most common being wrist singularities. This happens when the desired position and orientation requires the 4th and 6th joints to be parallel to each other.
Cartesian targets contain two optional attributes, configuration and motion type:
Configuration
Industrial robots can reach the same TCP position and orientation in up to 8 different joint configurations. By default, the configuration in which the joints have to rotate the least is selected. This is determined using the least squares method, which also gives the closest distance between targets in joint space; all joints are weighted equally. You can explicitly select a configuration by assigning a value (from 0 to 7) to the Configuration variable. Forcing a configuration doesn't define a pose unambiguously, since the joints might rotate clockwise or counter-clockwise depending on the previous target.
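A minimal sketch of this default selection rule (plain Python/NumPy; the candidate solutions are made-up illustrative values, not real inverse-kinematics output):

```python
import numpy as np

def pick_configuration(previous, candidates):
    """Choose, among the IK solutions for a Cartesian target, the one whose
    joints rotate the least from the previous pose: least squares in joint
    space, all joints weighted equally, as described above."""
    previous = np.asarray(previous)
    dists = [np.sum((np.asarray(c) - previous) ** 2) for c in candidates]
    return int(np.argmin(dists))

# Two illustrative IK solutions (degrees) for the same TCP pose:
prev = [10, -80, 95, 5, 40, 0]
solutions = [
    [12, -78, 93, 4, 42, -2],      # close to the previous pose
    [12, 102, -93, 184, -42, 178], # same TCP pose, "flipped" configuration
]
print("configuration:", pick_configuration(prev, solutions))  # -> 0
```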
Motion type
A robot can move towards a Cartesian target following either a joint motion or a linear motion:
Joint: This is the default motion type. In a joint motion, the controller calculates the joint rotation values on the target using inverse kinematics and moves all of the joints at proportional but fixed speeds so that they will stop at the same time at the desired target. The motion is linear in joint space but the TCP will follow a curved path in world space. It's useful if the path that the TCP follows is not critical, like in pick and place operations. Since inverse kinematics only needs to be calculated at the end of the path, it's also useful to avoid singularities.
Linear: The robot moves towards the target in a straight line in world space. This is useful if the path that the TCP follows is critical, like while milling or extruding material. If the path goes through a singularity at any point it will not be able to continue. If it moves close to a singularity it might slow down below the programmed speed.
Castings
A string containing 6 numbers separated by commas will create a joint target with default attribute values. A plane will create a Cartesian target with default attribute values.
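A sketch of what the joint-target casting amounts to (illustrative Python, not the plugin's actual parser):

```python
def cast_joint_target(text):
    """Parse the '6 numbers separated by commas' casting described above
    into six joint rotation values (a sketch, not the plugin's code)."""
    values = [float(v) for v in text.split(",")]
    if len(values) != 6:
        raise ValueError("a joint target needs exactly 6 joint values")
    return values

print(cast_joint_target("0, -1.57, 1.57, 0, 0.78, 0"))
```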
Tool
This parameter defines a tool or end effector mounted to the flange of the robot. In most cases a single tool will be used throughout the tool path, but each target can have a different tool assigned. You might want to change tools if your end effector has more than one TCP, or due to load changes during pick and place. It contains the following attributes:
Name: Name of the tool (should not contain spaces or special characters). The name is used to identify the tool in the pendant and create variable names in post-processing.
TCP: Stands for "tool center point". It represents the position and orientation of the tip of the end effector in relation to the flange. The default value is the world XY plane (the center of the flange).
Weight: The weight of the end effector in kilograms. The default value is 0 kg.
Mesh: Single mesh representing the geometry of the tool. Used for visualization and collision detection.
Coordinate systems
As with Rhino, the plugin uses a right-handed coordinate system. The main coordinate systems are:
World coordinate system: This is the Rhino document's coordinate system. Cartesian robot targets are defined in this system and are transformed into the robot coordinate system during post-processing.
Robot coordinate system: Used to position the robot in reference to the world coordinate system. By default, robots are placed in the world XY plane. The X axis points away from the front of the robot; the Z axis points vertically.
Tool coordinate system: Used to define the position and orientation of the TCP relative to the flange. The Z axis points away from the flange (normal to the flange); the X axis points downwards.
Robot
File:ROBOTS load robot system.jpg
Represents a specific robot model. It's used to calculate the forward and inverse kinematics for Cartesian targets, to check for possible errors and warnings in a program, and for collision detection and simulation. If your robot model is not included in the assembly, check the wiki on how to add your own custom models.
Remote connection: You can use the robot parameter to connect to the robot controller through a network. Currently, this is only supported on UR robots.
Create a program
Units: The plugin always uses the same units irrespective of the robot type or document settings.
Length: Millimeters
Angle: Radians
Weight: Kilograms
Time: Seconds
Linear speed: Millimeters per second
Angular speed: Radians per second
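Because these units are fixed, values taken from a degree-based pendant display or a metre-based model need converting first; a trivial sketch:

```python
import math

# The plugin expects radians and millimetres regardless of document units,
# so convert before building targets or speeds.
def deg_to_rad(deg):
    return deg * math.pi / 180.0

def m_to_mm(metres):
    return metres * 1000.0

print(deg_to_rad(90))  # 1.5707... rad, e.g. a joint rotation value
print(m_to_mm(0.25))   # 250.0 mm, e.g. a linear speed of 0.25 m/s
```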
Program
A program defines a complete toolpath and creates the necessary robot code to run it. To create a program you need a list of targets and a robot model.
File:ROBOTS create a program.JPG
When a program is created, the following post-processing is done:
It cleans up and fixes common mistakes.
It runs through the sequence of targets, checking for kinematic or other errors.
It returns warnings for unexpected behaviour.
It generates a simulation to preview the toolpath.
It calculates an approximate duration of the program.
If there are no errors, it generates the necessary code in the robot's native language.
Errors
After the first error is found, the plugin stops and outputs a program ending at the error. Most errors are due to kinematics (the TCP not being able to reach the target). There are other errors, like exceeding the maximum payload. To identify the error, preview the simulation of the program at the last target. Programs that contain errors won't create native code.
Warnings
The program will also inform you of any warnings to take into account. Warnings may include changes in configuration, maximum joint speed reached, targets with unassigned values, or the first target not being set as a joint target. Programs that contain warnings will create native code and might be safe to run if the warnings are believed not to cause any issues.
Code
To run the program, code has to be generated in the specific language used by the manufacturer (RAPID for ABB robots, KRL for KUKA robots and URScript for UR robots). If necessary, this code can then be edited manually. A program containing edited code will not check for warnings or errors and can't be simulated.
Simulation
The program contains a simulation of the tool path. The simulation currently doesn't take into account acceleration, deceleration or approximation zones. It simulates both linear and joint motions at the actual robot speed, including slowdowns when moving close to singularities, as well as wait times.
Zone
Defines an approximation zone for a target. Two variables make up a zone: a distance (in mm) and a rotation (in radians). The default value is 0 mm.
Targets can be stop points or way points:
Stop points have a distance and rotation value of 0. All axes will stop completely before moving to the next target. Commands associated with this target will run just after the TCP reaches the target.
Way points have a distance or rotation value greater than 0. Once the TCP position is within the distance value of the target, it will start moving towards the next target. Once the TCP orientation is within the rotation value, it will start orienting towards the next target. This is useful to create a continuous path and avoid the robot stopping (decelerating and accelerating), at the cost of precision. Commands associated with this target will usually run a bit before the TCP enters the zone area.
IMPORTANT: If multiple targets use the same zone, first use a string or number to cast into a zone parameter, then assign the parameter to the different targets. Don't assign a string or number directly as a zone to multiple targets, as different zone instances will be created (even if they have the same value) and will create unnecessary duplication in the robot code.
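A minimal sketch of the idea, using an illustrative Zone class (not the plugin's actual type):

```python
# Create ONE zone object and assign it to every target, instead of casting
# a fresh zone per target. The Zone class here is a hypothetical stand-in.

class Zone:
    def __init__(self, distance_mm, rotation_rad=0.0):
        self.distance = distance_mm
        self.rotation = rotation_rad

shared = Zone(5.0)                                           # cast once...
targets = [{"plane": p, "zone": shared} for p in range(10)]  # ...assign many times

# Every target references the same instance, so the post-processor can emit
# a single zone definition instead of ten duplicates in the robot code.
assert all(t["zone"] is shared for t in targets)
```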
All this information comes from:
Alexandre Dubor, Kunal Chadha, Ricardo Mayor Luque and many people at IAAC
Learning Robotics Using Python, Lentin Joseph. Packt Publishing Ltd, May 27, 2015.
Singularities of Robot Manipulators, Peter Donelan.
http://mkmra2.blogspot.com/