2.5 Gazebo sensors

For sensor properties, the official SDF reference manual is available at the following link.

Introduction

Now that we have built a simulated robot that we can drive around in the Gazebo environment, we'll start adding various sensors to it, such as a camera, an IMU and a lidar.

To add a sensor to the Gazebo simulation, we need to change two files of the URDF description:

  1. The robot_3d.urdf.xacro : here we define the position, orientation and other physical properties of the sensor. This part is not simulation dependent; it would look the same for a real robot with a real sensor.

  2. The mec_mobile.gazebo : this file is fully simulation dependent; here we define the properties of the simulated sensor.

Camera

1. Conventional RGB Camera

To add a camera for the Gazebo simulation, we need to change two files of the URDF:

  1. The robot_3d.urdf.xacro : update the camera’s physical properties.

Camera physical properties

The camera is attached to the base_link with a fixed joint. There is, however, another link, camera_link_optical, connected to the camera_link through the camera_optical_joint. This extra link resolves the conflict between two different coordinate-system conventions:

  • By default, URDF uses the right-handed coordinate system with X forward, Y left, and Z up.

  • However, many ROS drivers and vision processing pipelines expect a camera’s optical axis to be aligned with Z forward, X to the right, and Y down.

camera_optical_joint applies a static rotation so that the camera data will be interpreted correctly by ROS tools that assume the Z-forward convention for image and depth sensors.
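Once the simulation is running (see the launch command later in this section), this static rotation can be sanity-checked from the TF tree; the frame names below are the ones defined in the URDF snippet that follows:

ros2 run tf2_ros tf2_echo camera_link camera_link_optical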

  <!-- 7 - Camera -->
  <joint type="fixed" name="camera_joint">
    <origin xyz="0.225 0 0.075" rpy="0 0 0"/>
    <child link="camera_link"/>
    <parent link="base_link"/>
    <axis xyz="0 1 0" />
  </joint>

  <link name='camera_link'>
    <pose>0 0 0 0 0 0</pose>
    <inertial>
      <mass value="0.1"/>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <inertia
          ixx="1e-6" ixy="0" ixz="0"
          iyy="1e-6" iyz="0"
          izz="1e-6" />
    </inertial>

    <collision name='collision'>
      <origin xyz="0 0 0" rpy="0 0 0"/> 
      <geometry>
        <box size=".03 .03 .03"/>
      </geometry>
    </collision>

    <visual name='camera_link_visual'>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
        <box size=".03 .03 .03"/>
      </geometry>
    </visual>

  </link>

  <gazebo reference="camera_link">
    <material>Gazebo/Red</material>
  </gazebo>

  <joint type="fixed" name="camera_optical_joint">
    <origin xyz="0 0 0" rpy="-1.5707 0 -1.5707"/>
    <child link="camera_link_optical"/>
    <parent link="camera_link"/>
  </joint>

  <link name="camera_link_optical">
  </link>
  2. The mec_mobile.gazebo : define the simulation properties of the camera.

Camera simulation properties

With the plugin we define a couple of things for Gazebo:

  • <gazebo reference="camera_link">, we have to reference the camera_link that we defined in the URDF

  • <horizontal_fov>1.3962634</horizontal_fov>, the field of view of the simulated camera

  • width, height, format and update_rate, properties of the video stream

  • <optical_frame_id>camera_link_optical</optical_frame_id>, we have to use the camera_link_optical link discussed in detail above to ensure the correct static transformation between the coordinate systems

  • <camera_info_topic>camera/camera_info</camera_info_topic>, certain tools like RViz require a camera_info topic that describes the physical properties of the camera. Its name must share the prefix of the camera's image topic (in this case both start with camera/)

  • <topic>camera/image</topic>, we define the camera topic here

  <!-- 7 - Camera Gazebo Plugin -->
  <gazebo reference="camera_link">
    <sensor name="camera" type="camera">
      <camera>
        <horizontal_fov>1.3962634</horizontal_fov>
        <image>
          <width>640</width>
          <height>480</height>
          <format>R8G8B8</format>
        </image>
        <clip>
          <near>0.1</near>
          <far>15</far>
        </clip>
        <noise>
          <type>gaussian</type>
          <!-- Noise is sampled independently per pixel on each frame.
               That pixel's noise value is added to each of its color
               channels, which at that point lie in the range [0,1]. -->
          <mean>0.0</mean>
          <stddev>0.007</stddev>
        </noise>
        <optical_frame_id>camera_link_optical</optical_frame_id>
        <camera_info_topic>camera/camera_info</camera_info_topic>
      </camera>
      <always_on>1</always_on>
      <update_rate>20</update_rate>
      <visualize>true</visualize>
      <topic>camera/image</topic>
    </sensor>
  </gazebo>

Since the topics of the new Gazebo simulator are not forwarded automatically, we have to update the parameter_bridge of the ros_gz_bridge package. Its README describes in detail which topic types can be forwarded between ROS and Gazebo.

Extend the arguments of the parameter_bridge in our launch file:
    # Node to bridge /cmd_vel and /odom
    gz_bridge_node = Node(
        package="ros_gz_bridge",
        executable="parameter_bridge",
        arguments=[
            "/clock@rosgraph_msgs/msg/Clock[gz.msgs.Clock",
            "/cmd_vel@geometry_msgs/msg/Twist@gz.msgs.Twist",
            "/odom@nav_msgs/msg/Odometry@gz.msgs.Odometry",
            "/joint_states@sensor_msgs/msg/JointState@gz.msgs.Model",
            "/tf@tf2_msgs/msg/TFMessage@gz.msgs.Pose_V",
            "/camera/image@sensor_msgs/msg/Image@gz.msgs.Image",
            "/camera/camera_info@sensor_msgs/msg/CameraInfo@gz.msgs.CameraInfo",

        ],
        output="screen",
        parameters=[
            {'use_sim_time': LaunchConfiguration('use_sim_time')},
        ]
    )

Let's rebuild the workspace and try it again:

ros2 launch mec_mobile_gazebo spawn_robot.launch.py model:=robot_3d.urdf.xacro
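If the bridge is set up correctly, the camera topics should now be visible on the ROS side; a quick way to verify this from the command line (topic names as defined above):

ros2 topic list | grep camera
ros2 topic hz /camera/image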

2. Image transport with compressed images

We can see that both the /camera/camera_info and /camera/image topics are forwarded. ROS has a very handy feature with its image_transport plugins, but this feature doesn't work together with parameter_bridge. Without compression, the 640x480 camera stream consumes almost 20 MB/s of network bandwidth (640 × 480 pixels × 3 bytes per pixel × 20 FPS ≈ 18.4 MB/s), which is unacceptable for a wireless mobile robot.

There is a dedicated image_bridge node in the ros_gz_image package to compress images:

Update our launch file with the image_bridge node
    # Node to bridge /cmd_vel and /odom
    gz_bridge_node = Node(
        package="ros_gz_bridge",
        executable="parameter_bridge",
        arguments=[
            "/clock@rosgraph_msgs/msg/Clock[gz.msgs.Clock",
            "/cmd_vel@geometry_msgs/msg/Twist@gz.msgs.Twist",
            "/odom@nav_msgs/msg/Odometry@gz.msgs.Odometry",
            "/joint_states@sensor_msgs/msg/JointState@gz.msgs.Model",
            "/tf@tf2_msgs/msg/TFMessage@gz.msgs.Pose_V",
            #"/camera/image@sensor_msgs/msg/Image@gz.msgs.Image",
            "/camera/camera_info@sensor_msgs/msg/CameraInfo@gz.msgs.CameraInfo",

        ],
        output="screen",
        parameters=[
            {'use_sim_time': LaunchConfiguration('use_sim_time')},
        ]
    )

    # Node to bridge camera image with image_transport and compressed_image_transport
    gz_image_bridge_node = Node(
        package="ros_gz_image",
        executable="image_bridge",
        arguments=[
            "/camera/image",
        ],
        output="screen",
        parameters=[
            {'use_sim_time': LaunchConfiguration('use_sim_time'),
             'camera.image.compressed.jpeg_quality': 75},
        ],
    )
    
    # add the new node to the launchDescription
    launchDescriptionObject.add_action(gz_image_bridge_node)    

After a rebuild we can try it with rqt and see a huge improvement in bandwidth thanks to compression (a command-line check is shown after the plugin list below).

If compressed images are not visible in rqt, you have to install the plugins you want to use:

  • sudo apt install ros-jazzy-compressed-image-transport: for jpeg and png compression

  • sudo apt install ros-jazzy-theora-image-transport: for theora compression

  • sudo apt install ros-jazzy-zstd-image-transport: for zstd compression
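The effect of compression can also be measured from the command line; the compressed topic name below assumes the default image_transport naming:

ros2 topic bw /camera/image
ros2 topic bw /camera/image/compressed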

RViz has a problem with the compressed camera stream: it always expects the image and the camera_info topics under the same prefix, which works well for:

/camera/image → /camera/camera_info

But it doesn't work for:

/camera/image/compressed → /camera/image/camera_info

There is another useful tool that we can use, the relay node from the topic_tools package:

Add the relay node to the launch file
    # Relay node to republish /camera/camera_info to /camera/image/camera_info
    relay_camera_info_node = Node(
        package='topic_tools',
        executable='relay',
        name='relay_camera_info',
        output='screen',
        arguments=['camera/camera_info', 'camera/image/camera_info'],
        parameters=[
            {'use_sim_time': LaunchConfiguration('use_sim_time')},
        ]
    )
    
    # add the new node to the launchDescription
    launchDescriptionObject.add_action(relay_camera_info_node)

If topic_tools is not installed: sudo apt install ros-jazzy-topic-tools

Rebuild the workspace and let's try it!

We already set up the jpeg quality in the image_bridge node with the following parameter:

'camera.image.compressed.jpeg_quality': 75

To find the exact parameter name and any other settings we can change, we can use the rqt_reconfigure tool. First start the simulation, then rqt_reconfigure:

ros2 launch mec_mobile_gazebo spawn_robot.launch.py
ros2 run rqt_reconfigure rqt_reconfigure
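The JPEG quality can also be changed at runtime from the command line; this is only a sketch that assumes the bridge node appears as /image_bridge in ros2 node list (the actual node name may differ):

ros2 param set /image_bridge camera.image.compressed.jpeg_quality 50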

3. Wide angle camera

Using a wide-angle or fisheye lens on mobile robots is quite common. To increase the field of view and get the wide-angle distortion, we need a different sensor type, the wideanglecamera.

Replace the camera simulation properties in the mec_mobile.gazebo
  <!-- 7 - Gazebo Camera Sim -->
  <gazebo reference="camera_link">
    <sensor name="wideangle_camera" type="wideanglecamera">
      <camera>
        <horizontal_fov>3.14</horizontal_fov>
        <image>
          <width>640</width>
          <height>480</height>
        </image>
        <clip>
          <near>0.1</near>
          <far>15</far>
        </clip>
        <optical_frame_id>camera_link_optical</optical_frame_id>
        <camera_info_topic>camera/camera_info</camera_info_topic>
      </camera>
      <always_on>1</always_on>
      <update_rate>20</update_rate>
      <topic>camera/image</topic>
      <gz_frame_id>camera_link</gz_frame_id>
    </sensor>
  </gazebo>

Rebuild the workspace and try it again:

ros2 launch mec_mobile_gazebo spawn_robot.launch.py model:=robot_3d.urdf.xacro

Note that the image topic works in both RViz and rqt, while the Camera display does not work in RViz because of a known issue between ROS 2 Jazzy and Gazebo Harmonic.

IMU

An Inertial Measurement Unit (IMU) typically consists of a 3-axis accelerometer, 3-axis gyroscope, and sometimes a 3-axis magnetometer. It measures linear acceleration, angular velocity, and possibly magnetic heading (orientation).

An IMU can't measure motion when the velocity is constant (the acceleration is zero) and the orientation doesn't change. Therefore we cannot replace the robot's odometry with an IMU, but with the right technique we can fuse the two into a more precise pose estimate.
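One common technique for such fusion (not covered further in this lesson) is an Extended Kalman Filter, for example the ekf_node of the robot_localization package. The snippet below is only an illustrative sketch for a launch file; the ekf.yaml config path is a placeholder that would list /odom and /imu as inputs:

    # Hypothetical EKF node fusing wheel odometry and IMU (requires ros-jazzy-robot-localization)
    ekf_node = Node(
        package='robot_localization',
        executable='ekf_node',
        name='ekf_filter_node',
        output='screen',
        parameters=[
            '/path/to/ekf.yaml',  # placeholder config with odom0: /odom and imu0: /imu
            {'use_sim_time': LaunchConfiguration('use_sim_time')},
        ]
    )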

To add an IMU for the Gazebo simulation, we need to change two files of the URDF:

  1. The robot_3d.urdf.xacro : update the IMU's physical properties.

IMU physical properties
  <!-- 8 - IMU -->
  <joint name="imu_joint" type="fixed">
    <origin xyz="0 0 0" rpy="0 0 0" />
    <parent link="base_link"/>
    <child link="imu_link" />
  </joint>

  <link name="imu_link">
  </link>
  </link>
  1. The mec_mobile.gazebo : define the simulation properties of the IMU.

IMU simulation properties
  <!-- 8 - IMU Gazebo Plugin -->
  <gazebo reference="imu_link">
    <sensor name="imu" type="imu">
      <always_on>1</always_on>
      <update_rate>50</update_rate>
      <visualize>true</visualize>
      <topic>imu</topic>
      <enable_metrics>true</enable_metrics>
      <gz_frame_id>imu_link</gz_frame_id>
    </sensor>
  </gazebo>

Adding the IMU to the URDF isn't enough: with the new Gazebo we also have to make sure that our simulated world loads the right system plugin within its <world> tag.

    <plugin
      filename="gz-sim-imu-system"
      name="gz::sim::systems::Imu">
    </plugin>

Finally, we also have to bridge the topics from Gazebo to ROS using the parameter_bridge. Let's add the imu topic - or whatever we defined as <topic> in the Gazebo plugin - to the parameter bridge, rebuild the workspace, and we are ready to test it!

Add the imu topic to the launch file
    # Node to bridge /cmd_vel and /odom
    gz_bridge_node = Node(
        package="ros_gz_bridge",
        executable="parameter_bridge",
        arguments=[
            "/clock@rosgraph_msgs/msg/Clock[gz.msgs.Clock",
            "/cmd_vel@geometry_msgs/msg/Twist@gz.msgs.Twist",
            "/odom@nav_msgs/msg/Odometry@gz.msgs.Odometry",
            "/joint_states@sensor_msgs/msg/JointState@gz.msgs.Model",
            "/tf@tf2_msgs/msg/TFMessage@gz.msgs.Pose_V",
            #"/camera/image@sensor_msgs/msg/Image@gz.msgs.Image",
            "/camera/camera_info@sensor_msgs/msg/CameraInfo@gz.msgs.CameraInfo",
            "/imu@sensor_msgs/msg/Imu@gz.msgs.IMU",
        ],
        output="screen",
        parameters=[
            {'use_sim_time': LaunchConfiguration('use_sim_time')},
        ]
    )

To properly visualize the IMU in RViz, install the following plugin: sudo apt install ros-jazzy-rviz-imu-plugin

Rebuild and test it.
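A quick way to check the simulated IMU from the command line (topic name as defined in the plugin above):

ros2 topic echo /imu --once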

Lidar

LIDAR is a sensing technology that uses laser light to measure distances. LIDAR provides a 2D or 3D map of distances to any objects around the robot, and it's widely used in SLAM algorithms to build a map in real time and estimate the robot’s pose (position and orientation) within that map.

First, we start with a simple 2D lidar. It is similar to the other sensors, but this time we put all the related code into a single separate file and include it in the robot URDF:

lidar.urdf.xacro
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro">
  <xacro:property name="M_PI" value="3.1415926535897931" />

  <xacro:macro name="lidar_sensor" params="
    prefix:=''
    parent:='base_link'
    frame_id:='laser_frame'
    mesh_file:='file://$(find mec_mobile_description)/meshes/rplidar/rplidar_s2.stl'
    xyz_offset:='0 0 0.0825'
    rpy_offset:='0 0 0'
    mesh_xyz_offset:='0 0 0'
    mesh_rpy_offset:='${-M_PI/2} 0 0'
    mesh_scale:='0.001 0.001 0.001'
    collision_xyz_offset:='0 0 0'
    collision_rpy_offset:='0 0 0'
    inertial_xyz_offset:='0 0 0'
    inertial_rpy_offset:='0 0 0'
    lidar_width:=0.077
    lidar_height:=0.0398
    mass:=0.185
    material_name:='rplidar_black'
    material_color:='0.1 0.1 0.1 1'
    gazebo_material_ambient:='0.1 0.1 0.1 1'
    gazebo_material_diffuse:='0.1 0.1 0.1 1'
    gazebo_material_specular:='0.1 0.1 0.1 1'
    update_rate:=10
    ray_count:=720
    min_angle:='-3.14159'
    max_angle:='3.14159'
    min_range:='0.20'
    max_range:='30.0'
    range_resolution:='0.013'
    topic_name:='scan'
    always_on:=true
    visualize:=false
    enable_collision:=false">

    <xacro:property name="radius" value="${lidar_width/2}" />
    <xacro:property name="ixx_iyy" value="${(mass/12) * (3 * radius * radius + lidar_height * lidar_height)}" />
    <xacro:property name="izz" value="${(mass/2) * (radius * radius)}" />

    <!-- 2.5.3 Lidar physical properties -->
    <link name="${prefix}${frame_id}">
      <visual>
        <origin xyz="${mesh_xyz_offset}" rpy="${mesh_rpy_offset}"/>
        <geometry>
          <mesh filename="${mesh_file}" scale="${mesh_scale}"/>
        </geometry>
        <material name="${material_name}">
          <color rgba="${material_color}"/>
        </material>
      </visual>

      <xacro:if value="${enable_collision}">
        <collision>
          <origin xyz="${collision_xyz_offset}" rpy="${collision_rpy_offset}"/>
          <geometry>
            <cylinder radius="${lidar_width/2}" length="${lidar_height}"/>
          </geometry>
        </collision>
      </xacro:if>

      <inertial>
        <mass value="${mass}" />
        <origin xyz="${inertial_xyz_offset}" rpy="${inertial_rpy_offset}"/>
        <inertia
          ixx="${ixx_iyy}"
          ixy="0.0"
          ixz="0.0"
          iyy="${ixx_iyy}"
          iyz="0.0"
          izz="${izz}" />
      </inertial>
    </link>

    <joint name="${prefix}${frame_id}_joint" type="fixed">
      <parent link="${prefix}${parent}"/>
      <child link="${prefix}${frame_id}" />
      <origin xyz="${xyz_offset}" rpy="${rpy_offset}"/>
    </joint>

    <!-- 2.5.3 Lidar gz sim properties -->
    <gazebo reference="${prefix}${frame_id}">
      <material>
        <ambient>${gazebo_material_ambient}</ambient>
        <diffuse>${gazebo_material_diffuse}</diffuse>
        <specular>${gazebo_material_specular}</specular>
      </material>
      <sensor name="${prefix}lidar_sensor" type="gpu_lidar">
        <topic>${topic_name}</topic>
        <update_rate>${update_rate}</update_rate>
        <always_on>${always_on}</always_on>
        <visualize>${visualize}</visualize>
        <lidar>
          <scan>
            <horizontal>
              <samples>${ray_count}</samples>
              <resolution>1</resolution>
              <min_angle>${min_angle}</min_angle>
              <max_angle>${max_angle}</max_angle>
            </horizontal>
            <vertical>
              <samples>1</samples>
              <resolution>1</resolution>
              <min_angle>0</min_angle>
              <max_angle>0</max_angle>
            </vertical>
          </scan>
          <range>
            <min>${min_range}</min>
            <max>${max_range}</max>
            <resolution>${range_resolution}</resolution>
          </range>
        </lidar>
        <gz_frame_id>${prefix}${frame_id}</gz_frame_id>
      </sensor>
    </gazebo>

  </xacro:macro>

</robot>
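To actually use this macro, the file has to be included and instantiated in robot_3d.urdf.xacro. The sketch below assumes the file is saved as lidar.urdf.xacro in the urdf folder of the mec_mobile_description package; adjust the path to your own layout. Parameters that are not overridden fall back to the defaults listed in the macro:

  <!-- Include and instantiate the lidar macro (file path is an assumption) -->
  <xacro:include filename="$(find mec_mobile_description)/urdf/lidar.urdf.xacro" />
  <xacro:lidar_sensor parent="base_link" frame_id="laser_frame" xyz_offset="0 0 0.0825" />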

Finally, we also have to bridge the topic from Gazebo to ROS using the parameter_bridge. Let's add the scan topic in the launch file, then rebuild the workspace and we are ready to test it!

            "/scan@sensor_msgs/msg/LaserScan@gz.msgs.LaserScan",

Run the launch file and check the lidar in RViz with a LaserScan display and in Gazebo with the Visualize Lidar plugin. If we increase the Decay Time of the LaserScan display in RViz and drive the robot around, we can do a rough "mapping" of the environment.
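If no scan shows up, it is worth checking that the simulated world also loads the sensors system plugin inside its <world> tag (similarly to the IMU system plugin we added earlier), because the gpu_lidar sensor is rendered by it; a typical snippet looks like this:

    <plugin
      filename="gz-sim-sensors-system"
      name="gz::sim::systems::Sensors">
      <render_engine>ogre2</render_engine>
    </plugin>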

3D lidar

To simulate a 3D lidar, we only need to increase the number of vertical samples and adjust the vertical minimum and maximum angles. For example, the following vertical parameters correspond to a Velodyne VLP-32 sensor:

          <vertical>
              <samples>32</samples>
              <min_angle>-0.5353</min_angle>
              <max_angle>0.1862</max_angle>
          </vertical>

Finally, we bridge the topic in the launch file to visualize a 3D point cloud in RViz:

            "/scan/points@sensor_msgs/msg/PointCloud2@gz.msgs.PointCloudPacked",

Rebuild and test; we can also increase the decay time in RViz, just as we did with the 2D scan.

RGBD Camera

Another way to get 3D point clouds around the robot is to use an RGBD camera, which gives us not just the color but also the depth of every single pixel. To add an RGBD camera, let's replace the conventional camera with the following:

RGBD camera sim code
  <gazebo reference="camera_link">
    <sensor name="rgbd_camera" type="rgbd_camera">
      <camera>
        <horizontal_fov>1.25</horizontal_fov>
        <image>
          <width>320</width>
          <height>240</height>
        </image>
        <clip>
          <near>0.3</near>
          <far>15</far>
        </clip>
        <optical_frame_id>camera_link_optical</optical_frame_id>
      </camera>
      <always_on>1</always_on>
      <update_rate>20</update_rate>
      <visualize>true</visualize>
      <topic>camera</topic>
      <gz_frame_id>camera_link</gz_frame_id>
    </sensor>
  </gazebo>

And let's forward two topics with the parameter_bridge:

  • the /camera/depth_image topic, which provides a grayscale image stream in which each pixel value corresponds to the measured distance. RViz can render the depth image topic and the color image topic together as a depth cloud.

  • the /camera/points topic, which is a 3D point cloud of the same type as the 3D lidar's point cloud. We can visualize it in RViz just like any other point cloud.

Let's add the topics to the parameter bridge:

            "/camera/depth_image@sensor_msgs/msg/Image@gz.msgs.Image",
            "/camera/points@sensor_msgs/msg/PointCloud2@gz.msgs.PointCloudPacked",

The orientation of the 3D point cloud isn't correct because it is interpreted in the camera_link_optical frame; let's change the Gazebo plugin a little:

        <optical_frame_id>camera_link</optical_frame_id>

After a rebuild we can try it out. Just as before, we can adjust the decay time to keep rendering the previous points.

Gazebo supports many more sensors that we won't cover in this lesson; you can find more examples at the following link.
