V.1.1. Pick and place with simple reference conditioning

Difficulty: easy

Time: ~20 min

Note

This tutorial was written for:
Version v3.0.0 of the ned_ros_stack
Version v3.0.0 of Niryo Studio
Version v3.1.0 of PyNiryo

Introduction

The goal of this application is to show several examples of what you can do with the Vision Set and Ned/Ned2, and to give a first approach to the industrial processes that can be carried out with them. The process consists of picking objects of any type from a working area and conditioning them in a packing area.

Requirements

Knowledge:
  • Basic knowledge of Python

  • Being able to use Ned/Ned2

  • Having studied the Vision Set’s User Manual

  • Having looked at the PyNiryo library documentation

Hardware:
  • A Vision Set

  • A Ned (or a Ned2)

  • The PyNiryo library installed on your computer
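
If the PyNiryo library is not installed yet, it can be installed from PyPI:

    pip install pyniryo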

Ned

The application’s script can be found here: Vision conditioning one reference

Script’s operation

This script shows an example of how to use Ned/Ned2’s Vision Set to perform conditioning with any supplied objects.

The script works in two ways:
  • One where all the vision processing is done on the robot.

  • One where the vision processing is done on the computer.

The first aims to show how easy it is to use Ned/Ned2’s Vision Set with PyNiryo.

The second shows how to do the image processing on the user’s computer, highlighting that you can implement any kind of processing you can imagine there. The objects are conditioned in a grid of dimensions grid_dimension; once the grid is full, the objects are packed on top of the lower layer. Both modes are sketched below.
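
As a minimal sketch of the two modes, assuming a reachable robot and a workspace already recorded in Niryo Studio (the IP address and workspace name below are placeholders):

    from pyniryo import *

    robot = NiryoRobot("10.10.10.10")  # placeholder, use robot_ip_address
    robot.calibrate_auto()
    robot.update_tool()

    # Mode 1 - vision processing on the robot:
    # vision_pick detects an object in the workspace, picks it, and
    # returns whether an object was found, plus its shape and color.
    obj_found, shape, color = robot.vision_pick("workspace_1",
                                                shape=ObjectShape.ANY,
                                                color=ObjectColor.ANY)

    # Mode 2 - vision processing on the computer:
    # fetch the compressed camera image and process it locally.
    img_compressed = robot.get_img_compressed()
    img = uncompress_image(img_compressed)
    img_workspace = extract_img_workspace(img, workspace_ratio=1.0)
    # ... run your own image processing on img_workspace here ...

    robot.close_connection()

For the grid conditioning, a helper along these lines could map the number of objects already placed to a place pose, starting a new layer once the grid is full. This compute_place_pose function and its offsets are illustrative, not part of the original script:

    def compute_place_pose(count, grid_dimension, center_conditioning_pose,
                           spacing=0.05):
        # Illustrative helper: grid slot for the next object, new layer when full.
        slots_per_layer = grid_dimension[0] * grid_dimension[1]
        layer, slot = divmod(count, slots_per_layer)
        row, col = divmod(slot, grid_dimension[1])
        return center_conditioning_pose.copy_with_offsets(
            x_offset=(row - (grid_dimension[0] - 1) / 2) * spacing,
            y_offset=(col - (grid_dimension[1] - 1) / 2) * spacing,
            z_offset=layer * 0.03)  # stack 3 cm higher per layer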

Note

The code is commented so that you can easily understand how this application works.

Before running the application, you will have to set the variables below (a configuration sketch follows these lists):
  • robot_ip_address

  • tool_used

  • workspace_name

You can change the variables below to adjust how the process runs:
  • grid_dimension

  • vision_process_on_robot

  • display_stream

Finally, you may change the poses below to adapt the application precisely to your environment:
  • observation_pose

  • center_conditioning_pose

  • sleep_pose
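
As an illustration, these variables might be set at the top of the script as follows. Every value below (IP address, tool, workspace name, and especially the pose coordinates) is a placeholder to replace with values from your own setup:

    from pyniryo import PoseObject, ToolID

    robot_ip_address = "192.168.1.52"   # IP shown in Niryo Studio
    tool_used = ToolID.GRIPPER_1        # tool currently mounted on the robot
    workspace_name = "workspace_1"      # workspace recorded in Niryo Studio

    grid_dimension = (3, 3)             # 3 x 3 packing grid
    vision_process_on_robot = True      # False -> process images on the computer
    display_stream = True               # show the camera stream while running

    # Poses must be taught on your own setup; these numbers are placeholders.
    observation_pose = PoseObject(x=0.20, y=0.0, z=0.35,
                                  roll=0.0, pitch=1.57, yaw=0.0)
    center_conditioning_pose = PoseObject(x=0.0, y=-0.25, z=0.12,
                                          roll=0.0, pitch=1.57, yaw=-1.57)
    sleep_pose = PoseObject(x=0.14, y=0.0, z=0.20,
                            roll=0.0, pitch=1.57, yaw=0.0)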