For sensor properties, the official SDF reference manual is available at the following link.
Introduction
After building a simulated robot that we can drive around in the Gazebo environment, we'll start adding various sensors to it, like a camera, IMU, lidar, etc.
To add a sensor to the Gazebo simulation, we need to change two files of the URDF:
The robot_3d.urdf.xacro: here we define the position, orientation and other physical properties of the sensor. This part is not simulation dependent; it would be the same for a real robot with a real sensor.
The mec_mobile.gazebo: this file is fully simulation dependent; here we define the properties of the simulated sensor.
Camera
1. Conventional RGB Camera
To add a camera to the Gazebo simulation, we again need to change the same two files of the URDF:
The camera is attached to the base_link with a fixed joint, but there is another link, camera_link_optical, connected to the camera_link through the camera_optical_joint. This extra link resolves the conflict between two different coordinate-system conventions:
By default, URDF uses the right-handed coordinate system with X forward, Y left, and Z up.
However, many ROS drivers and vision processing pipelines expect a camera’s optical axis to be aligned with Z forward, X to the right, and Y down.
camera_optical_joint applies a static rotation so that the camera data will be interpreted correctly by ROS tools that assume the Z-forward convention for image and depth sensors (a sketch of such a joint follows below).
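As an illustration only, the camera_optical_joint could be defined like this in robot_3d.urdf.xacro (the link names match the ones above; the rpy values are the usual rotation from the body convention to the optical convention):

<link name="camera_link_optical"/>

<joint name="camera_optical_joint" type="fixed">
  <parent link="camera_link"/>
  <child link="camera_link_optical"/>
  <!-- rotate the body-frame axes (X forward, Z up) into the optical
       convention (Z forward, X right, Y down) -->
  <origin xyz="0 0 0" rpy="${-pi/2} 0 ${-pi/2}"/>
</joint>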
The mec_mobile.gazebo: here we define the simulation properties of the camera.
Camera simulation properties
With the plugin we define a couple of things for Gazebo:
<gazebo reference="camera_link">, we have to refer to the camera_link that we defined in the urdf
<horizontal_fov>1.3962634</horizontal_fov>, the field of view of the simulated camera
width, height, format and update_rate, properties of the video stream
<optical_frame_id>camera_link_optical</optical_frame_id>, we have to use the camera_link_optical that we examined in detail above to ensure the right static transformations between the coordinate systems
<camera_info_topic>camera/camera_info</camera_info_topic>, certain tools like RViz require a camera_info topic that describes the physical properties of the camera. The topic's name must match the camera's topic (in this case both are camera/...)
<topic>camera/image</topic>, we define the camera topic here
<!-- 7 - Camera Gazebo Plugin -->
<gazebo reference="camera_link">
  <sensor name="camera" type="camera">
    <camera>
      <horizontal_fov>1.3962634</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.1</near>
        <far>15</far>
      </clip>
      <noise>
        <type>gaussian</type>
        <!-- Noise is sampled independently per pixel on each frame.
             That pixel's noise value is added to each of its color channels,
             which at that point lie in the range [0,1]. -->
        <mean>0.0</mean>
        <stddev>0.007</stddev>
      </noise>
      <optical_frame_id>camera_link_optical</optical_frame_id>
      <camera_info_topic>camera/camera_info</camera_info_topic>
    </camera>
    <always_on>1</always_on>
    <update_rate>20</update_rate>
    <visualize>true</visualize>
    <topic>camera/image</topic>
  </sensor>
</gazebo>
With the new Gazebo simulator, topics are not automatically forwarded to ROS, so we have to update the parameter_bridge of the ros_gz_bridge package. The package has a very detailed README about what kind of topic types can be forwarded between ROS and Gazebo.
Extend the arguments of the parameter_bridge in our launch file:
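Based on the topic names defined in the plugin above, the two new bridge arguments look like this:

"/camera/image@sensor_msgs/msg/Image@gz.msgs.Image",
"/camera/camera_info@sensor_msgs/msg/CameraInfo@gz.msgs.CameraInfo",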
We can see that both the /camera/camera_info and /camera/image topics are forwarded. ROS has a very handy feature with its image_transport plugins, but this feature doesn't work together with the parameter_bridge. Without compression the 640x480 camera stream consumes almost 20 MB/s of network bandwidth, which is unacceptable for a wireless mobile robot.
There is a dedicated image_bridge node in the ros_gz_image package that publishes the camera stream through image_transport, so the images can be compressed:
Update our launch file with the image_bridge node:
# Node to bridge /cmd_vel and /odom
gz_bridge_node = Node(
    package="ros_gz_bridge",
    executable="parameter_bridge",
    arguments=[
        "/clock@rosgraph_msgs/msg/Clock[gz.msgs.Clock",
        "/cmd_vel@geometry_msgs/msg/Twist@gz.msgs.Twist",
        "/odom@nav_msgs/msg/Odometry@gz.msgs.Odometry",
        "/joint_states@sensor_msgs/msg/JointState@gz.msgs.Model",
        "/tf@tf2_msgs/msg/TFMessage@gz.msgs.Pose_V",
        #"/camera/image@sensor_msgs/msg/Image@gz.msgs.Image",
        "/camera/camera_info@sensor_msgs/msg/CameraInfo@gz.msgs.CameraInfo",
    ],
    output="screen",
    parameters=[
        {'use_sim_time': LaunchConfiguration('use_sim_time')},
    ]
)

# Node to bridge camera image with image_transport and compressed_image_transport
gz_image_bridge_node = Node(
    package="ros_gz_image",
    executable="image_bridge",
    arguments=[
        "/camera/image",
    ],
    output="screen",
    parameters=[
        {'use_sim_time': LaunchConfiguration('use_sim_time'),
         'camera.image.compressed.jpeg_quality': 75},
    ],
)

# add the new node to the launchDescription
launchDescriptionObject.add_action(gz_image_bridge_node)
After a rebuild we can try it with rqt and see a huge improvement in bandwidth thanks to the compression:
If compressed images are not visible in rqt, you have to install the plugins you want to use:
sudo apt install ros-jazzy-compressed-image-transport: for jpeg and png compression
sudo apt install ros-jazzy-theora-image-transport: for theora compression
sudo apt install ros-jazzy-zstd-image-transport: for zstd compression
RViz has a problem with the compressed camera stream, because RViz always expects the image and the camera_info topics under the same prefix. This works well for the raw /camera/image topic, where RViz looks for /camera/camera_info, but for the compressed stream /camera/image/compressed it looks for /camera/image/camera_info, which doesn't exist by default.
There is another useful tool that we can use, the relay node from the topic_tools package:
Add the relay node to the launch file:
# Relay node to republish /camera/camera_info to /camera/image/camera_info
relay_camera_info_node = Node(
    package='topic_tools',
    executable='relay',
    name='relay_camera_info',
    output='screen',
    arguments=['camera/camera_info', 'camera/image/camera_info'],
    parameters=[
        {'use_sim_time': LaunchConfiguration('use_sim_time')},
    ]
)

# add the new node to the launchDescription
launchDescriptionObject.add_action(relay_camera_info_node)
If topic_tools is not installed: sudo apt install ros-jazzy-topic-tools
Rebuild the workspace and let's try it!
We already set up the jpeg quality in the image_bridge node with the following parameter:
'camera.image.compressed.jpeg_quality': 75
To find the name of this parameter and any other settings we can change, we can use the rqt_reconfigure node. First start the simulation, then rqt_reconfigure:
ros2 launch mec_mobile_gazebo spawn_robot.launch.py
ros2 run rqt_reconfigure rqt_reconfigure
3. Wide angle camera
Using a wide-angle or fisheye lens on mobile robots is quite common. To increase the field of view and get the wide-angle distortion, we need a different plugin, the wideangle_camera.
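A rough sketch of how such a sensor could be defined (the element names follow the SDF wide-angle camera specification; the lens type and parameters below are placeholder values to tune, and the topic and frame names reuse the ones from the RGB camera):

<gazebo reference="camera_link">
  <sensor name="wideangle_camera" type="wideanglecamera">
    <camera>
      <horizontal_fov>3.14</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
      </image>
      <clip>
        <near>0.1</near>
        <far>15</far>
      </clip>
      <!-- the <lens> element controls the fisheye projection -->
      <lens>
        <type>stereographic</type>
        <scale_to_hfov>true</scale_to_hfov>
        <cutoff_angle>1.5707</cutoff_angle>
      </lens>
      <optical_frame_id>camera_link_optical</optical_frame_id>
      <camera_info_topic>camera/camera_info</camera_info_topic>
    </camera>
    <always_on>1</always_on>
    <update_rate>20</update_rate>
    <topic>camera/image</topic>
  </sensor>
</gazebo>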
Note that the image stream works both in RViz and in rqt, but the Camera display in RViz does not work because of a ROS 2 Jazzy and Gazebo Harmonic issue.
IMU
An Inertial Measurement Unit (IMU) typically consists of a 3-axis accelerometer, 3-axis gyroscope, and sometimes a 3-axis magnetometer. It measures linear acceleration, angular velocity, and possibly magnetic heading (orientation).
An IMU can't measure motion when the velocity is constant (the acceleration is zero) and the orientation doesn't change. Therefore we cannot replace the odometry of the robot with an IMU, but with the right technique we can combine the two into a more precise measurement.
To add an IMU to the Gazebo simulation, we need to change the same two files of the URDF:
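On the simulation side, a minimal sketch of the Gazebo sensor could look like this (it assumes an imu_link is already defined in robot_3d.urdf.xacro and publishes on an imu topic):

<gazebo reference="imu_link">
  <sensor name="imu" type="imu">
    <always_on>1</always_on>
    <update_rate>50</update_rate>
    <visualize>true</visualize>
    <!-- Gazebo topic the IMU data is published on -->
    <topic>imu</topic>
  </sensor>
</gazebo>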
Adding the IMU to the URDF isn't enough; with the new Gazebo we also have to make sure that our simulated world loads the right plugins within its <world> tag.
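For the IMU this means loading the IMU sensor system, which in Gazebo Harmonic looks like this:

<plugin filename="gz-sim-imu-system"
        name="gz::sim::systems::Imu">
</plugin>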
Finally, we also have to bridge the topic from Gazebo towards ROS using the parameter_bridge. Let's add the imu topic - or whatever we defined as <topic> in the Gazebo plugin - to the parameter bridge, rebuild the workspace, and we are ready to test it!
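Assuming the topic is called imu, the corresponding bridge argument would be:

"/imu@sensor_msgs/msg/Imu@gz.msgs.IMU",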
To properly visualize the IMU in RViz, install the following plugin: sudo apt install ros-jazzy-rviz-imu-plugin
Rebuild and test it.
Lidar
LIDAR (Light Detection and Ranging) is a sensing technology that uses laser light to measure distances. It provides a 2D or 3D map of distances to the objects around the robot, and it's widely used in SLAM algorithms to build a map in real time and estimate the robot's pose (position and orientation) within that map.
First, we start with a simple 2D lidar. It's similar to the other sensors, but this time we put all the related code into a single separate file and include it from the robot URDF:
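The Gazebo part of that file might look roughly like this (a minimal sketch; the lidar_link name, the scan topic and the range/sample values are assumptions to adapt to the robot):

<gazebo reference="lidar_link">
  <sensor name="lidar" type="gpu_lidar">
    <!-- Gazebo topic for the laser scan -->
    <topic>scan</topic>
    <update_rate>10</update_rate>
    <lidar>
      <scan>
        <horizontal>
          <samples>360</samples>
          <min_angle>-3.14159</min_angle>
          <max_angle>3.14159</max_angle>
        </horizontal>
      </scan>
      <range>
        <min>0.1</min>
        <max>12.0</max>
      </range>
    </lidar>
    <always_on>1</always_on>
    <visualize>true</visualize>
    <!-- frame_id used in the published messages -->
    <gz_frame_id>lidar_link</gz_frame_id>
  </sensor>
</gazebo>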
Finally, we also have to bridge the topic from Gazebo to ROS using the parameter_bridge. Let's add the lidar's scan topic in the launch file, then rebuild the workspace and we are ready to test it!
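With the scan topic name from the sketch above, the bridge argument would be:

"/scan@sensor_msgs/msg/LaserScan@gz.msgs.LaserScan",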
Run the launch file and check the lidar in RViz with the LaserScan display and in Gazebo with the Visualize Lidar plugin. If we increase the decay time of the LaserScan display in RViz and drive the robot around, we can do a rough "mapping" of the environment.
3D lidar
To simulate a 3D lidar, we only need to increase the number of vertical samples and set the vertical minimum and maximum angles. For example, the following vertical parameters approximate a Velodyne VLP-32 sensor:
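A sketch of the <vertical> block inside the lidar's <scan> element; the angles below roughly correspond to the VLP-32's -25° to +15° vertical field of view:

<vertical>
  <samples>32</samples>
  <min_angle>-0.4363</min_angle>
  <max_angle>0.2618</max_angle>
</vertical>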
Rebuild and test, and we can also increase the decay time in RViz just as we did with the 2D points.
RGBD Camera
Another way to get 3D point clouds around the robot is using an RGBD camera, which tells us not only the color but also the depth of every single pixel. To add an RGBD camera, let's replace the conventional camera with this one:
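A minimal sketch of how the replacement could look, using the rgbd_camera sensor type and reusing the topic prefix and frames of the RGB camera above:

<gazebo reference="camera_link">
  <sensor name="rgbd_camera" type="rgbd_camera">
    <camera>
      <horizontal_fov>1.3962634</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
      </image>
      <clip>
        <near>0.1</near>
        <far>15</far>
      </clip>
      <optical_frame_id>camera_link_optical</optical_frame_id>
    </camera>
    <always_on>1</always_on>
    <update_rate>20</update_rate>
    <!-- the topic acts as a prefix: the sensor publishes camera/image,
         camera/depth_image, camera/points and camera/camera_info -->
    <topic>camera</topic>
  </sensor>
</gazebo>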
And let's forward two topics with the parameter_bridge (see the arguments after this list):
the /camera/depth_image topic, which provides a grayscale-looking image stream whose pixel values correspond to the distance of the individual pixels. RViz is able to render the depth image topic and the color image topic together as a depth cloud.
the /camera/points topic, which is a 3D point cloud of the same type as the 3D lidar's point cloud. We can visualize it in RViz just like any other point cloud.
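With these topic names, the two new bridge arguments could be:

"/camera/depth_image@sensor_msgs/msg/Image@gz.msgs.Image",
"/camera/points@sensor_msgs/msg/PointCloud2@gz.msgs.PointCloudPacked",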
The orientation of the 3D point cloud isn't correct because it's interpreted in the camera_link_optical frame; let's change the Gazebo plugin a little bit:
<optical_frame_id>camera_link</optical_frame_id>
After a rebuild we can try it out. Just as before, we can adjust the decay time to keep rendering the previous points:
Gazebo supports many more sensors that we won't cover in this lesson; you can find more examples at the following link.