Vision

This file presents the Vision - Command functions, Vision - Enums, Vision - Niryo Topics and Vision - Namedtuple sections available with the Vision API.

Vision - Command functions

This section references all the existing functions to control your robot's vision module, which include:

  • Getting camera image

  • Detecting objects

  • Managing workspaces

All functions to control the robot are accessible via an instance of the class NiryoRobot

robot = NiryoRobot(<robot_ip_address>)

robot.vision.vision_pick("workspace_1", 0.0, ObjectShape.ANY, ObjectColor.ANY)
robot.vision.detect_object("workspace_1", ObjectShape.ANY, ObjectColor.ANY)
...

See examples in the Examples section.

List of function subsections:

  • Camera functions

  • Detection functions

  • Workspace functions

class Vision(client, arm=None, tool=None)[source]

Vision robot functions

Example:

ros_instance = NiryoRos("10.10.10.10") # Hotspot
vision_interface = Vision(ros_instance)
Parameters

client (NiryoRos) – Niryo ROS client

Camera functions

property Vision.get_img_compressed

Get the image from the video stream in a compressed format. Use uncompress_image from the vision package to uncompress it.

Examples:

import pyniryo

img_compressed = vision.get_img_compressed()
camera_info = vision.get_camera_intrinsics()
img = pyniryo.uncompress_image(img_compressed)
img = pyniryo.undistort_image(img, camera_info.intrinsics, camera_info.distortion)
Returns

string containing a JPEG compressed image

Return type

NiryoTopic

property Vision.get_camera_intrinsics

Get the calibration object: camera intrinsics and distortion coefficients. The topic returns a namedtuple(intrinsics: list[list[float]], distortion: list[list[float]]).

Examples:

vision.get_camera_intrinsics()
vision.get_camera_intrinsics.value

def camera_info_callback(camera_info):
    print(camera_info.intrinsics)
    print(camera_info.distortion)
    vision.get_camera_intrinsics.unsubscribe()

vision.get_camera_intrinsics.subscribe(camera_info_callback)
Return type

NiryoTopic

property Vision.get_image_parameters

Return the NiryoTopic to get the last stream image parameters: brightness factor, contrast factor and saturation factor. The topic returns a namedtuple(brightness_factor: float, contrast_factor: float, saturation_factor: float).

Brightness factor: How much to adjust the brightness. 0.5 will give a darkened image, 1 will give the original image while 2 will enhance the brightness by a factor of 2.

Contrast factor: A factor of 1 gives the original image. A factor towards 0 makes the image greyer, while a factor greater than 1 increases the contrast of the image.

Saturation factor: 0 will give a black and white image, 1 will give the original image while 2 will enhance the saturation by a factor of 2.

Examples:

vision.get_image_parameters()
vision.get_image_parameters.value

def image_parameters_callback(image_parameters):
    print(image_parameters.brightness_factor)
    print(image_parameters.contrast_factor)
    print(image_parameters.saturation_factor)

    vision.get_image_parameters.unsubscribe()

vision.get_image_parameters.subscribe(image_parameters_callback)
Returns

ImageParameters namedtuple containing the brightness factor, contrast factor and saturation factor.

Return type

NiryoTopic

Vision.set_brightness(brightness_factor)[source]

Modify image brightness

Parameters

brightness_factor (float) – How much to adjust the brightness. 0.5 will give a darkened image, 1 will give the original image while 2 will enhance the brightness by a factor of 2.

Return type

None

Vision.set_contrast(contrast_factor)[source]

Modify image contrast

Parameters

contrast_factor (float) – A factor of 1 gives the original image. A factor towards 0 makes the image greyer, while a factor greater than 1 increases the contrast of the image.

Return type

None

Vision.set_saturation(saturation_factor)[source]

Modify image saturation

Parameters

saturation_factor (float) – How much to adjust the saturation. 0 will give a black and white image, 1 will give the original image while 2 will enhance the saturation by a factor of 2.

Return type

None
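These setters act on the same factors exposed by get_image_parameters. A minimal sketch, reusing the vision instance from the examples above with illustrative factor values:

# Darken the stream slightly, boost contrast and keep the original saturation
vision.set_brightness(0.8)
vision.set_contrast(1.5)
vision.set_saturation(1.0)

# Read the factors back from the stream parameters topic
image_parameters = vision.get_image_parameters()
print(image_parameters.brightness_factor,
      image_parameters.contrast_factor,
      image_parameters.saturation_factor)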

Detection functions

Vision.get_target_pose_from_cam(workspace_name, height_offset=0.0, shape=<ObjectShape.ANY: 'ANY'>, color=<ObjectColor.ANY: 'ANY'>)[source]

First detects the specified object using the camera and then returns the robot pose in which the object can be picked with the current tool

Parameters
  • workspace_name (str) – name of the workspace

  • height_offset (float) – offset between the workspace and the target height

  • shape (ObjectShape) – shape of the target

  • color (ObjectColor) – color of the target

Returns

object_found, object_pose, object_shape, object_color

Return type

(bool, PoseObject, ObjectShape, ObjectColor)
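A minimal sketch, assuming a connected NiryoRobot instance named robot that has already reached an observation pose over an existing workspace called "workspace_1" (placeholder names), with the arm moved as in the vision_pick example below:

obj_found, pick_pose, obj_shape, obj_color = robot.vision.get_target_pose_from_cam(
    "workspace_1", height_offset=0.0, shape=ObjectShape.ANY, color=ObjectColor.ANY)
if obj_found:
    # pick_pose is a PoseObject that can be sent directly to the arm
    robot.arm.move_pose(pick_pose)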

Vision.vision_pick(workspace_name, height_offset=0.0, shape=<ObjectShape.ANY: 'ANY'>, color=<ObjectColor.ANY: 'ANY'>)[source]

Picks the specified object from the workspace. This function has multiple phases:

1. detect object using the camera
2. prepare the current tool for picking
3. approach the object
4. move down to the correct picking pose
5. actuate the current tool
6. lift the object

Example:

robot = NiryoRobot(ip_address="x.x.x.x")
robot.arm.calibrate_auto()
robot.arm.move_pose(<observation_pose>)
obj_found, shape_ret, color_ret = robot.vision.vision_pick(<workspace_name>,
                                                    height_offset=0.0,
                                                    shape=ObjectShape.ANY,
                                                    color=ObjectColor.ANY)
Parameters
  • workspace_name (str) – name of the workspace

  • height_offset (float) – offset between the workspace and the target height

  • shape (ObjectShape) – shape of the target

  • color (ObjectColor) – color of the target

Returns

object_found, object_shape, object_color

Return type

(bool, ObjectShape, ObjectColor)

Vision.move_to_object(workspace_name, height_offset, shape, color)[source]

Same as get_target_pose_from_cam but directly moves to this position

Parameters
  • workspace_name (str) – name of the workspace

  • height_offset (float) – offset between the workspace and the target height

  • shape (ObjectShape) – shape of the target

  • color (ObjectColor) – color of the target

Returns

object_found, object_shape, object_color

Return type

(bool, ObjectShape, ObjectColor)
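A minimal sketch, assuming a connected robot observing an existing workspace called "workspace_1" (placeholder name):

# Move above any red object found in the workspace, hovering 5 cm over it
found, shape, color = robot.vision.move_to_object(
    "workspace_1", height_offset=0.05, shape=ObjectShape.ANY, color=ObjectColor.RED)
if not found:
    print("No red object detected")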

Vision.detect_object(workspace_name, shape=<ObjectShape.ANY: 'ANY'>, color=<ObjectColor.ANY: 'ANY'>)[source]

Detect object in workspace and return its pose and characteristics

Parameters
  • workspace_name (str) – name of the workspace

  • shape (ObjectShape) – shape of the target

  • color (ObjectColor) – color of the target

Returns

object_found, object_pose, object_shape, object_color

Return type

(bool, PoseObject, str, str)
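A minimal sketch, assuming a connected robot and an existing workspace called "workspace_1" (placeholder name):

# Look for any circle in the workspace without moving the robot
found, pose, shape, color = robot.vision.detect_object(
    "workspace_1", shape=ObjectShape.CIRCLE, color=ObjectColor.ANY)
if found:
    print("Detected a", color, shape, "at", pose)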

Workspace functions

Vision.get_target_pose_from_rel(workspace_name, height_offset, x_rel, y_rel, yaw_rel)[source]

Given a pose (x_rel, y_rel, yaw_rel) relative to a workspace, this function returns the robot pose in which the current tool will be able to pick an object at this pose.

The height_offset argument (in m) defines how high the tool will hover over the workspace. If height_offset = 0, the tool will nearly touch the workspace.

Parameters
  • workspace_name (str) – name of the workspace

  • height_offset (float) – offset between the workspace and the target height

  • x_rel (float) – x relative pose (between 0 and 1)

  • y_rel (float) – y relative pose (between 0 and 1)

  • yaw_rel (float) – Angle in radians

Returns

target_pose

Return type

PoseObject
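A minimal sketch computing the pose above the center of a workspace, assuming the same placeholder names as in the previous examples:

# Pose 10 cm above the center of the workspace, tool aligned with the workspace axes
center_pose = robot.vision.get_target_pose_from_rel(
    "workspace_1", height_offset=0.1, x_rel=0.5, y_rel=0.5, yaw_rel=0.0)
robot.arm.move_pose(center_pose)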

Vision.save_workspace_from_robot_poses(workspace_name, pose_origin, pose_2, pose_3, pose_4)[source]

Save a workspace by giving the poses of the robot pointing to its 4 corners with the calibration tip. Corners should be given in the right order. The markers' poses will be deduced from these poses.

Poses should be either a list [x, y, z, roll, pitch, yaw] or a PoseObject

Parameters
  • workspace_name (str) – name of the workspace

  • pose_origin (list[float] or PoseObject) – pose of the workspace origin corner

  • pose_2 (list[float] or PoseObject) – pose of the second corner

  • pose_3 (list[float] or PoseObject) – pose of the third corner

  • pose_4 (list[float] or PoseObject) – pose of the fourth corner

Return type

None
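A minimal sketch with illustrative corner poses given as [x, y, z, roll, pitch, yaw] lists (in practice these would be the poses recorded while pointing each corner with the calibration tip):

robot.vision.save_workspace_from_robot_poses(
    "workspace_1",
    pose_origin=[0.25, 0.1, 0.001, 0.0, 1.57, 0.0],
    pose_2=[0.25, -0.1, 0.001, 0.0, 1.57, 0.0],
    pose_3=[0.05, -0.1, 0.001, 0.0, 1.57, 0.0],
    pose_4=[0.05, 0.1, 0.001, 0.0, 1.57, 0.0])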

Vision.save_workspace_from_points(workspace_name, point_origin, point_2, point_3, point_4)[source]

Save a workspace by giving the points of the workspace's 4 corners. Points are written as [x, y, z]. Corners should be given in the right order.

Parameters
  • workspace_name (str) – name of the workspace

  • point_origin (list[float]) – [x, y, z] coordinates of the workspace origin corner

  • point_2 (list[float]) – [x, y, z] coordinates of the second corner

  • point_3 (list[float]) – [x, y, z] coordinates of the third corner

  • point_4 (list[float]) – [x, y, z] coordinates of the fourth corner

Return type

None
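A minimal sketch with illustrative [x, y, z] corner points in meters, given in order:

robot.vision.save_workspace_from_points(
    "workspace_1",
    point_origin=[0.25, 0.1, 0.0],
    point_2=[0.25, -0.1, 0.0],
    point_3=[0.05, -0.1, 0.0],
    point_4=[0.05, 0.1, 0.0])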

Vision.delete_workspace(workspace_name)[source]

Delete workspace from robot’s memory

Parameters

workspace_name (str) – name of the saved workspace

Return type

None

Vision.get_workspace_ratio(workspace_name)[source]

Get workspace ratio from robot’s memory

Parameters

workspace_name (str) – workspace name

Return type

float

Vision.get_workspace_list()[source]

Get the list of workspace names stored in the robot's memory

Return type

list[str]
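A minimal sketch combining the workspace helpers above (the workspace name passed to delete_workspace is a placeholder):

# Print every saved workspace together with its ratio
for workspace_name in robot.vision.get_workspace_list():
    print(workspace_name, robot.vision.get_workspace_ratio(workspace_name))

# Remove a workspace that is no longer needed
robot.vision.delete_workspace("old_workspace")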

Vision - Niryo Topics

The use of these functions is explained in the NiryoTopic section. They allow the acquisition of data in real time, either through callbacks or by direct call.

Vision's Niryo Topics

Name                                           Function                 Return type
/niryo_robot_vision/compressed_video_stream   get_img_compressed       list[numpy.uint8]
/niryo_robot_vision/camera_intrinsics         get_camera_intrinsics    CameraInfo
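As with the other Niryo Topics, the video stream can also be consumed through a callback. A minimal sketch, reusing the vision instance and the pyniryo helpers shown earlier:

import pyniryo

def video_stream_callback(img_compressed):
    # Called for every frame published on /niryo_robot_vision/compressed_video_stream
    img = pyniryo.uncompress_image(img_compressed)
    print(img.shape)

vision.get_img_compressed.subscribe(video_stream_callback)

# Later, stop receiving frames
vision.get_img_compressed.unsubscribe()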

Vision - Enums

List of enums:

  • ObjectColor

  • ObjectShape

  • ManageWorkspace

class ObjectColor(value)[source]

Enumeration of Colors available for image processing

RED = 'RED'
BLUE = 'BLUE'
GREEN = 'GREEN'
ANY = 'ANY'

class ObjectShape(value)[source]

Enumeration of Shapes available for image processing

SQUARE = 'SQUARE'
CIRCLE = 'CIRCLE'
ANY = 'ANY'

class ManageWorkspace(value)[source]

Enumeration of actions available for workspaces management

SAVE = 1
SAVE_WITH_POINTS = 2
DELETE = -1

Vision - Namedtuple

class CameraInfo(intrinsics, distortion)

Create new instance of CameraInfo(intrinsics, distortion)

distortion

Alias for field number 1

intrinsics

Alias for field number 0

class ImageParameters(brightness_factor, contrast_factor, saturation_factor)

Create new instance of ImageParameters(brightness_factor, contrast_factor, saturation_factor)

brightness_factor

Alias for field number 0

contrast_factor

Alias for field number 1

saturation_factor

Alias for field number 2
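Since both classes are namedtuples, their fields can be read by name or unpacked by position. A minimal sketch, reusing the vision instance from the previous examples:

camera_info = vision.get_camera_intrinsics()
print(camera_info.intrinsics, camera_info.distortion)

# Namedtuples also unpack like regular tuples
brightness, contrast, saturation = vision.get_image_parameters()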