Visual picking with artificial intelligence using TensorFlow

V.1.1

Vision pick with TensorFlow

Difficulty: medium

Time: ~30 min

Note

This tutorial works with:
Version v3.0.0 of the ned_ros_stack
Version v3.0.0 of Niryo Studio
Version v3.1.0 of PyNiryo

If you are using a Niryo One, please refer to this tutorial.

Introduction

This application uses TensorFlow, an open-source machine learning framework developed by Google, to allow Ned/Ned2 to recognize multiple objects on its workspace, thanks to the Vision Set, image processing and machine learning.

Requirements

  • Ned (or a Ned2)

  • The Large Gripper (or the Vacuum Pump if you want to use the colored pawns provided with the Education Set)

  • The Vision Set and its workspace

  • Various objects to put on the workspace (in our case, Celebration chocolates)

Set-up

Hardware

Start by calibrating the workspace with Niryo Studio. The workspace’s name must be the same as the one you filled in the robot’s program (by default, the name is “default_workspace”).

If your workspace is not in front of the robot, you will have to change the “observation_pose” variable so the robot can see the workspace’s four landmarks.

Note

It is important to firmly attach the robot and the workspace so that precision is maintained over successive manipulations.

Software

First, install PyNiryo with:

pip install pyniryo

Then, download the application’s source code from our GitHub, which you can find here. Let’s assume you will clone the repository into a folder named ‘tensorflow_ned’. You can clone it with the command:

git clone https://github.com/NiryoRobotics/ned_applications.git

Open the robot_gui.py file and set the robot_ip_address and workspace variables to your robot’s current private IP address and workspace name:

robot_ip_address = "IP address of your robot"
workspace = "workspace of your robot"

If you use the Vacuum Pump, also change the z_offset variable, which is the offset between the workspace and the target height. This allows the Vacuum Pump to reach the pawns and grab them. Since the Vacuum Pump is shorter than the Large Gripper, set z_offset to a small negative value, for example:

z_offset = -0.01
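
To give an idea of how these three variables are used, here is a minimal, hedged sketch of a vision-guided pick with the standard PyNiryo API (the actual logic lives in robot_gui.py and robot.py; the IP address and the relative coordinates passed to get_target_pose_from_rel() are placeholder values):

    from pyniryo import NiryoRobot

    robot_ip_address = "192.168.1.52"   # placeholder: use your robot's IP address
    workspace = "default_workspace"     # must match the workspace calibrated in Niryo Studio
    z_offset = -0.01                    # height offset used with the Vacuum Pump

    robot = NiryoRobot(robot_ip_address)
    robot.calibrate_auto()
    robot.update_tool()

    # Convert a position expressed relative to the workspace (values between 0 and 1)
    # into a robot pose, applying the height offset, then pick at that pose.
    # The relative coordinates (0.5, 0.5) and the yaw (0.0) are placeholders: in the
    # application they come from the TensorFlow detection step.
    pick_pose = robot.get_target_pose_from_rel(workspace, z_offset, 0.5, 0.5, 0.0)
    robot.pick_from_pose(pick_pose)

    robot.close_connection()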

The steps to follow to launch the application depend on the operating system you are using.

Note

The software installation part only needs to be done once on your computer. If you have already installed all the required libraries, go directly to the Launching the program section.

On Windows

You must start by installing Anaconda to use the application’s installation script. Anaconda must be installed in its default location (C:\Users\<username>\anaconda3).

Click here to install Anaconda.

Two solutions are available:

  • Simplified installation:

    In the application’s folder:

    • Launch setup.bat to install all the used libraries.

    • Accept the installation of these libraries.

    • Launch run.bat to launch the program.

The program should launch. If it doesn’t, launch a manual installation.

  • Manual installation:

  1. Open a terminal from Anaconda Navigator (CMD.exe Prompt, “Launch”).

You should see “(base)” displayed to the left of your terminal.

  2. Update Anaconda:

    conda update -n base -c defaults conda
    
  3. Create a TensorFlow 2 environment with Python 3.6:

    conda create -n tf_ned tensorflow=2 python=3.6
    
  4. Activate the TensorFlow environment:

    conda activate tf_ned
    

You should now see “(tf_ned)” instead of “(base)” on the left of your terminal.

  5. Update TensorFlow:

    pip install --upgrade tensorflow

  6. Install the opencv, pygame and pygame-menu libraries:

    pip install opencv-python pygame pygame-menu
    
  7. Go to the application folder:

    cd Desktop/tensorflow_ned
    
  8. Launch the program:

    python robot_gui.py
    

On Linux

You must start by installing Anaconda to use the application’s installation script.

Click here to install Anaconda.

  1. Open a terminal. You should find “(base)” displayed on the left of your username.

  2. Update Anaconda:

    conda update -n base -c defaults conda
    
  3. Create a TensorFlow 2 environment with python 3.6:

    conda create -n tf_ned tensorflow=2 python=3.6
    
  4. Activate the TensorFlow environment:

    conda activate tf_ned
    

You should now see “(tf_ned)” instead of “(base)” on the left of your terminal.

  5. Update TensorFlow:

    pip install --upgrade tensorflow

  6. Install the opencv, pygame and pygame-menu libraries:

    pip install opencv-python pygame pygame-menu
    
  7. Go to the application’s folder:

    cd tensorflow_ned
    
  8. Launch the program:

    python robot_gui.py
    

Note

If you want to deactivate the conda environment once you’re done, use:

conda deactivate

Launching the program

If you already followed the previous steps once, and you simply want to launch the program:

On Windows:

  • Just launch:
    run.bat
    
  • Or in the application’s directory:
    conda activate tf_ned
    python robot_gui.py
    

On Linux, enter the commands:

conda activate tf_ned
python3 robot_gui.py

Note

Make sure you are always in an environment that has TensorFlow and the necessary Python libraries installed, with:

conda activate tf_ned

How to use

When the program is launched, if Ned cannot see the workspace’s four landmarks from its observation pose, it will automatically switch to learning mode and the graphic interface will turn red.

Setup observation pose

You will then have to move the robot so it can see the four landmarks, which makes the graphic interface turn green. Clicking on the screen or pressing Enter confirms the current position, which is saved for the next use (you can still change it later from the Observation pose entry in the Settings menu).
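
For reference, here is a minimal sketch of the kind of check performed to decide whether the four landmarks are visible, using the PyNiryo vision helpers (the IP address is a placeholder; the actual application does this continuously while displaying the camera stream):

    from pyniryo import NiryoRobot, uncompress_image, undistort_image, extract_img_workspace

    robot = NiryoRobot("192.168.1.52")   # placeholder: use your robot's IP address

    # Grab a frame from the robot's camera and correct the lens distortion.
    mtx, dist = robot.get_camera_intrinsics()
    img = undistort_image(uncompress_image(robot.get_img_compressed()), mtx, dist)

    # extract_img_workspace() returns the cropped workspace image,
    # or None when the four landmarks are not all visible.
    if extract_img_workspace(img, workspace_ratio=1.0) is None:
        print("Landmarks not visible: adjust the observation pose")
    else:
        print("Workspace detected: the observation pose is valid")

    robot.close_connection()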

Once the workspace is seen by Ned/Ned2, you can access the graphic interface:

Graphic interface home page

Features

Graphic interface usable with a keyboard, a mouse/trackpad or a touch screen.

Details

  • Play
    • [object_name] [object_image]

One or more pages listing the objects and their thumbnails. Click on the name of the object of your choice to ask Ned/Ned2 to grab it.

Play menu

  • Settings

    • Observation pose

      The “Observation pose” button allows you to change the position from which the robot observes the workspace.

    • Drop pose

      The “Drop pose” button allows you to change the position at which the robot drops the requested objects.

    • Labelling

      • Name: [selector]

        The “Name” selector allows you to pick the name of an object already in the database, or a new object named “obj_x”.

      • Name: [text_input]

        The “Name” text field allows you to enter the name of the object you want to add to the database. To add images to an existing object, use the selector or type that object’s exact name.

      • Add img

        The “Add img” button adds a picture of the current workspace to the database, under the “data/” directory. When you add a new object to the database, we recommend taking at least twenty pictures of it. The added objects must contrast sufficiently with the workspace (we recommend avoiding white and highly reflective objects).

    • Train

      • Full Training:

        This button launches the training of the neural network on the current content of the “data” folder. The interface is not usable during the whole training (roughly 1 to 10 minutes). When the training is over, the network is saved in the “model” folder and the interface is automatically refreshed.

      • Lite Training:

        This button launches a lite training. Compared with the full training, the number of steps per epoch is halved, which makes the training quicker but slightly reduces the model’s accuracy.

    • Update

      The “Update” button rescans the “data”, “logo” and “data_mask” folders, reloads the saved neural network and updates each menu (similar to restarting the program).

Settings menu

  • Quit

    Ned goes to its home position and activates learning mode. The program ends.

Other features:

  • Replace the images in the “logo” folder with customized logos (black is used as a transparency color).

  • Add or remove images and folders in the database from a file management tool (use the “Update” button to ask the application to rescan the folders).

  • Provided data sets:
    • Two data sets based on the Celebration chocolates.

    • A data set with 963 images, which allows you to train a model with 95 to 99.5% accuracy (1 to 15 minutes of training).

How it works

Creation of the database (labelling.py)

To create your database, you need to take pictures of the objects you want to use. Take at least 20 pictures of each object to get good results.

The aim is to take pictures of each object from a variety of angles and under different lighting conditions. The pictures are stored in a folder named after the object, inside the “data” folder.

Several pictures of the same object

In order to create your database, you can either:

  • use the Labelling button in the graphic interface

  • or launch the program:
    python labelling.py
    

    You will then have to press Enter whenever you want to add a new picture to your database (a rough sketch of this capture loop is shown below).
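
As an illustration only, here is a rough sketch of such a capture loop (this is not the actual labelling.py: the IP address and object name are placeholders, and the folder layout follows the data/<object_name>/ convention described above):

    import os
    import cv2
    from pyniryo import NiryoRobot, uncompress_image, undistort_image, extract_img_workspace

    robot = NiryoRobot("192.168.1.52")         # placeholder: use your robot's IP address
    obj_name = "obj_1"                         # placeholder: name of the object being labelled
    save_dir = os.path.join("data", obj_name)  # one folder per object
    os.makedirs(save_dir, exist_ok=True)

    mtx, dist = robot.get_camera_intrinsics()
    count = len(os.listdir(save_dir))

    while True:
        input("Press Enter to capture a new image (Ctrl+C to stop)...")
        img = undistort_image(uncompress_image(robot.get_img_compressed()), mtx, dist)
        img_workspace = extract_img_workspace(img, workspace_ratio=1.0)
        if img_workspace is None:
            print("Landmarks not visible, image skipped")
            continue
        path = os.path.join(save_dir, "img_{}.jpg".format(count))
        cv2.imwrite(path, img_workspace)
        print("Saved", path)
        count += 1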

Tracking of the objects (utils.py)

  • Image shooting (with the “take_workspace_img()” function):

    Uses the PyNiryo API to ask the robot for an image, crop it and compensate for the lens distortion.

  • Calculation of the mask (with the “objs_mask()” function):

    The code uses cv2.cvtColor() to convert the image from RGB to HLS, then cv2.inRange() to obtain a mask that approximately delineates the objects to detect. To keep only objects with a sufficient surface, it combines cv2.dilate() and cv2.erode() to remove impurities from the image.

    The result is a black and white picture corresponding to the shapes of the objects placed on the workspace.

    Shapes of objects in the workspace

  • Objects’ extraction (with the “extract_objs()” function)

    It uses cv2.findContours() to obtain the contours of the objects present in the previously calculated mask, then computes the center and the angle of each object with the PyNiryo vision functions get_contour_barycenter(contour) and get_contour_angle(contour).

    With cv2.minAreaRect() we obtain the smallest rotated rectangle containing each object and use it to extract the object from the image and set it upright (giving all these images the same orientation makes recognition easier for TensorFlow). A rough sketch of these mask and extraction steps is shown after this list.

    Objects picture with same orientation
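
The sketch below illustrates these mask and extraction steps with plain OpenCV (assuming OpenCV 4); the HLS thresholds, kernel size and minimum area are illustrative values, not the ones used in utils.py:

    import cv2
    import numpy as np

    def objs_mask_sketch(img_bgr):
        """Rough equivalent of the mask step: threshold in HLS, then dilate/erode."""
        img_hls = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HLS)
        # Keep pixels that are saturated enough and not too bright, i.e. that
        # stand out from a light, low-saturation workspace background.
        mask = cv2.inRange(img_hls, np.array([0, 0, 60]), np.array([180, 200, 255]))
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.dilate(mask, kernel, iterations=1)
        mask = cv2.erode(mask, kernel, iterations=1)
        return mask

    def extract_objs_sketch(img_bgr, mask, min_area=400):
        """Rough equivalent of the extraction step: crop each object and set it upright."""
        objs = []
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for cnt in contours:
            if cv2.contourArea(cnt) < min_area:
                continue
            (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)  # smallest rotated rectangle
            # Rotate the whole image so the object becomes vertical,
            # then crop around its center.
            rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
            rotated = cv2.warpAffine(img_bgr, rot, (img_bgr.shape[1], img_bgr.shape[0]))
            x0, y0 = max(int(cx - w / 2), 0), max(int(cy - h / 2), 0)
            objs.append(rotated[y0:int(cy + h / 2), x0:int(cx + w / 2)])
        return objs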

Training (training.py)

Launch training.py with:

python training.py

or click on the “Full training” button of the “Train” menu in the graphic interface. The script creates a TensorFlow model (a neural network), then builds a list containing all the images from the “data” folder and a list containing the label corresponding to each image.

It uses model.fit(images, labels) to train the model on the database. When the training is over, the script evaluates the model’s performance and saves it in the “model” folder.
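
As a hedged illustration of that training step (not the actual training.py: the image size, network architecture and number of epochs are placeholder choices), a minimal TensorFlow 2 version could look like this:

    import os
    import cv2
    import numpy as np
    import tensorflow as tf

    DATA_DIR = "data"
    IMG_SIZE = 64                         # placeholder input size

    # Build the image and label lists from the data/<object_name>/ folders.
    class_names = sorted(os.listdir(DATA_DIR))
    images, labels = [], []
    for label, name in enumerate(class_names):
        folder = os.path.join(DATA_DIR, name)
        for fname in os.listdir(folder):
            img = cv2.imread(os.path.join(folder, fname))
            images.append(cv2.resize(img, (IMG_SIZE, IMG_SIZE)) / 255.0)
            labels.append(label)
    images = np.array(images, dtype=np.float32)
    labels = np.array(labels)

    # Small illustrative convolutional network.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(IMG_SIZE, IMG_SIZE, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(len(class_names), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(images, labels, epochs=10, validation_split=0.1)
    model.save("model")                   # reused later by the prediction step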

You can also use the “Lite training” button in the Train menu, to launch a quicker training.

Prediction (robot.py)

Launch robot.py with:

python robot.py

Then enter the name of the object you want Ned/Ned2 to grab, or use the graphic interface’s “Play” menu. The program uses the previously trained model to recognize the different objects on the workspace.
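
For illustration, here is a hedged sketch of how the saved model could be queried for each object image extracted from the workspace (the class names and image size are placeholders and must match what was used at training time):

    import cv2
    import numpy as np
    import tensorflow as tf

    IMG_SIZE = 64                          # placeholder: must match the training input size
    model = tf.keras.models.load_model("model")
    class_names = ["obj_1", "obj_2"]       # placeholder: derived from the data/ sub-folders

    def predict_object(img_bgr):
        """Return the predicted class name and confidence for one extracted object image."""
        img = cv2.resize(img_bgr, (IMG_SIZE, IMG_SIZE)) / 255.0
        probs = model.predict(np.expand_dims(img.astype(np.float32), axis=0))[0]
        return class_names[int(np.argmax(probs))], float(np.max(probs))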

FAQ

The AI’s predictions don’t appear:

AI predictions

If the artificial intelligence’s predictions do not appear on the screen, it is probably because:

  • Your model is not yet trained for this database. In that case, click on “Train” in the Settings.

  • The saved model does not match the objects currently in your “data” folder. In that case, restore the “data” folder to the same configuration as at the last training and click on “Update”.

Wrong predictions:

If the workspace’s observation angle is too different from the one used in the database, the results might not be correct anymore. If an object is often mistaken for another one, try adding more images of these two objects to the database.