Building the Andino robot
10 Oct 2025 - 40 min read
A great learning resource if you want to build your first mobile robot.
Written by: gleniosp
I was searching for a mobile robot project that was open-source, well-maintained and had great documentation behind it. The robot should be relatively simple and easy to build using off-the-shelf components. I also wanted ROS 2 support with good default configurations and programs to test things like motion and overall autonomous navigation. The idea was to learn how to build a robot from scratch that could be programmed with ROS 2 and potentially other frameworks.
That’s when I found the Andino project from Ekumen. The project met all my requirements, so I decided to build an Andino robot for myself. As the documentation for the project is already great, there isn’t much to add about the build and usage process. Therefore, this post serves as a complement to the Andino documentation, highlighting my own experience building the robot and providing some useful tips for beginners. This post also assumes that you have read through the Andino documentation in the humble branch of the repository, so you have an overall idea of the build process.
If you’re just starting with ROS 2, you might want to check out the official documentation. The University of Eastern Finland and Henki Robotics also created a course based on the Andino project to give you basic hands-on experience with ROS 2, which I highly recommend. Besides, if you’re interested in the jazzy version of ROS 2, the Andino project also maintains a jazzy branch.
Hardware
The images below show my finalized Andino robot. I built it by following the instructions in the andino_hardware package.
Robot chassis
The top and bottom plates for the robot chassis and the camera mount were 3D printed from the models provided by the project. I don’t remember exactly which printing settings I used, but it was something between 15-25% infill density with a Grid (or Gyroid) infill pattern. I first printed the camera mount. Then, I printed one of the plates and the motor holders. I wasn’t paying much attention, so I ended up printing the wheel encoder pieces as well, but they aren’t required if you’re using the suggested Sparkfun motors, as they already have encoders. Finally, I printed the second plate. In total, the whole printing process took around 4.5 hours and 130 g of PLA on my Bambu Lab A1 printer. As an alternative, if you don’t want to 3D print the chassis, a PDF for it is available so you can use a laser cutter to produce it.
During the build, I noticed that the motor holder pieces weren’t long enough and didn’t properly match the holes in the Sparkfun motors. So, I imported the original STL file into OnShape to use it as a reference and created an updated model for the holders. If you have the same issue, you can use my view-only public OnShape document for the motor holder and export an STL from it for printing. If you want to make further changes to the model, you first need an OnShape account; then you can open this document here to make a copy and edit it.
Fixing and mounting
I mostly used nylon M3 sets (standoffs, screws, nuts and washers) to fix and mount the components and the overall chassis. I only used M2.5 sets when mounting the Raspberry Pi and the RPLidar. If you want to buy these sets, the documentation already provides some suggestions, and I’m also leaving a few here for reference:
However, buying these sets only for this project is overkill. So, unless you intend to use them in other projects, it might be better to go to a hardware store and buy only a few M3 and M2.5 standoffs, screws, nuts and washers between 6 mm and 30 mm in length. Remember that you can stack standoffs on top of each other if they are too short for your use case.
One extra tip: try to align the motor wheels and the caster wheel when assembling them, so that your robot doesn’t end up uneven on the floor. Consider that the extra components will also add weight to the robot, potentially lowering the motor wheels and caster wheel a bit. I used two M3x25+5 standoffs, two M3x6 screws and two M3 nuts to fix the caster wheel. It isn’t perfect, but it is good enough for my tests. You can always adjust this later.
Electronics
Instead of mounting some of the electronic components upside down on the bottom plate, like the motor driver board, I tried to squeeze most of them onto the top side of the bottom plate. I didn’t want jumper wires hanging against gravity (upside down) under the bottom plate, to avoid them being accidentally pulled out for whatever reason while the robot was moving.
One thing that you’ll notice is that the robot plates are more or less generic, meaning that they won’t have pre-defined holes to properly fix all components. For some boards, I was only able to fix them through 2 or 3 holes (out of 4). For other components, I used double sided tape to fix them, like the Powerbank, the BNO055 IMU board, the Arduino Nano, and the Raspberry Pi camera flat cable. After mounting the electronic components, I didn’t have much space to route the camera cable and connect it to the camera. So, I passed it along the bottom side of the bottom plate and fixed it with double sided tape. This is the 30 cm cable that I used.
For the Arduino Nano board, I placed it on a mini breadboard, wired all its connections using jumper wires, and fixed the mini breadboard to the robot plate using double sided tape. As a tip, if you buy an Arduino board and have to solder the header pins, first place the header pins in the breadboard, then the Arduino on top of them. It makes it much easier to solder the pins properly aligned with the breadboard holes.
Even though the holes on the robot plates aren’t properly aligned with the components, you can easily mount them all, except for the RPLidar. The Lidar already comes with its own M2.5 threaded spacers at the correct height to fix it. However, I could only align two spacers with the holes on the top side of the top plate. As I didn’t want to drill holes in the plate, and thought that being roughly close to the front of the robot was enough, I only fixed the two front Lidar spacers to the top plate, using the long horizontal cut at the front of the top plate. I guess some people just look at the URDF files, find the correct position for the Lidar on the plate and drill the holes in the right spot. Another option is to print a different robot plate model with better positioned holes. There is an updated model available in the issues section of the project if you want to have a look.
Wiring
Just a few quick observations about wiring:
- To make it easier to share the 5 V and ground lines, I used a D2-6 wire splice connector.
- I had to use three USB cables to connect: the Lidar to the Raspberry Pi (micro USB to USB-A); the Arduino Nano to the Raspberry Pi (USB-C to USB-A); and the Powerbank to the Raspberry Pi (USB-C to USB-C). In some cases I didn’t have shorter cables, so you can clearly see the cables standing out in the robot.
- Pay attention to the wiring and double check it. For example, it was not an actual wiring issue, but I only realized while testing the robot navigation that I had to swap two wires for one of the motor encoders. I talk more about it in the Motor control section.
One last thing about wiring is the power distribution. The documentation suggests powering the circuit using a Powerbank connected to the Raspberry Pi. The Raspberry Pi then outputs 5 V directly from the Powerbank through its 5 V pins. This way, if using a Powerbank capable of outputting at least 3 A, there will be around 1 A to 2 A left to power the Arduino and the motors via the motor driver (using the DC-DC step-up regulator module). This is enough and should be fine. However, it is worth noting that, as an alternative, some people may prefer a separate power source (e.g. another Powerbank or a pair of 18650 batteries) to exclusively power the motor driver, to minimize any potential risk in case the motor driver draws too much current.
BNO055 IMU board
I bought this BNO055 board from AliExpress. I wasn’t aware of this, but this board has two communication modes: I2C and UART. The mode is dictated by the jumper pads on the board. If the jumpers are open, the board uses UART; otherwise, it uses I2C. As the Andino software expects I2C communication with the IMU board, I had to solder these pads. You can see what I mean in the images below:
I realized this while testing the board separately with the Arduino Nano. The serial communication wasn’t being initialized and I couldn’t get any readings from the board. The code to test it is the same as the one for the Adafruit BNO055 board, using their own library. Be aware that you might need to change the I2C address in the code, like:
```cpp
/* Here I'm adding the 0x29 I2C address. If you don't provide it,
   it'll be 0x28 by default. Your board probably uses one of these
   addresses, although the address could be a completely different one. */
Adafruit_BNO055 bno = Adafruit_BNO055(55, 0x29);
```

Once the jumper pads were soldered, I could get the accelerometer and gyroscope readings from the board. Then, to finish this part, I fixed the IMU board at the front of the robot with double sided tape, with the +X axis label on the board pointing forward.
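For reference, my standalone test sketch looked roughly like the following, based on the Adafruit BNO055 library examples. The 0x29 address is specific to my board and the output format is just what I found convenient; adjust both for your setup:

```cpp
#include <Wire.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_BNO055.h>
#include <utility/imumaths.h>

// Use the 0x29 I2C address; your board may use 0x28 instead.
Adafruit_BNO055 bno = Adafruit_BNO055(55, 0x29);

void setup() {
  Serial.begin(57600);
  if (!bno.begin()) {
    Serial.println("No BNO055 detected, check wiring and I2C address!");
    while (true) {}
  }
}

void loop() {
  // Read the raw accelerometer and gyroscope vectors.
  imu::Vector<3> accel = bno.getVector(Adafruit_BNO055::VECTOR_ACCELEROMETER);
  imu::Vector<3> gyro = bno.getVector(Adafruit_BNO055::VECTOR_GYROSCOPE);

  Serial.print("accel: ");
  Serial.print(accel.x()); Serial.print(" ");
  Serial.print(accel.y()); Serial.print(" ");
  Serial.println(accel.z());

  Serial.print("gyro: ");
  Serial.print(gyro.x()); Serial.print(" ");
  Serial.print(gyro.y()); Serial.print(" ");
  Serial.println(gyro.z());

  delay(500);
}
```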
Building order
Unless you test all the electronic parts individually before building the robot, I recommend that you first mount the motors and the caster wheel on the robot bottom plate. Then, mount the Arduino Nano and the motor driver, wire the Arduino to the driver, and mount the Raspberry Pi (just to power the Arduino and the motor driver; no code is needed on the Pi at this point). This way, you can test whether the Arduino is properly driving the motors. I talk more about this test in the Arduino section.
Once everything is fine with the motor control, you can add the rest of the components and “close” the robot with the top plate. Then, you can install Ubuntu on the Raspberry Pi, install the ROS 2 software and test the integration of everything. This is just a suggestion; testing everything separately also works. I wouldn’t recommend building everything at once and only then testing individual things, though, unless you’re very confident about what you’re doing.
Software
The software is divided in two parts: one for the microcontroller (Arduino Nano) and one for the Raspberry Pi. The Arduino is responsible for controlling the motors and reading the motor encoders and the IMU sensor. These Arduino actions are dictated by serial commands received on its USB port. The Raspberry Pi is the one sending the serial commands to the Arduino, acting as the robot brain: it tells the Arduino how the robot should move (motor control) and checks with the Arduino how the robot has actually moved so far (motor encoders and IMU). The Raspberry Pi uses ROS 2 nodes to process the sensor readings returned by the Arduino, along with Pi camera images and Lidar readings. By gathering all this information, the ROS 2 nodes can decide how the robot should move at each step to perform complex tasks, like autonomous navigation. This division of responsibilities between a microcontroller (MCU) and a Single Board Computer (SBC) properly decouples the low and high level operations.
Arduino
Firmware upload
As I described in the hardware section, I built the base of the robot first with the main electronic parts and tested the motor control using the Arduino. I only mounted the Raspberry Pi at this point to be able to power the Arduino and the motor driver for my initial tests. Once everything was right on the Arduino side, I knew that loading the software for the Raspberry Pi (e.g. ROS 2) to talk to the Arduino, and adding the rest of the components, like the Lidar, would be easier to test and validate.
Therefore, to test that the Arduino software (firmware) was working and interpreting the serial commands properly, I basically followed the instructions for the firmware upload using PlatformIO as described in the repository. One observation here: depending on the Arduino Nano board, it might be using the new Nano bootloader, so you might have to update the environment configuration for the Nano in the platformio.ini file, like so:
```ini
...
; Environment for Arduino Nano.
[env:nanoatmega328new]
extends = base_build, base_atmelavr
board = nanoatmega328new
...
```

One extra thing that I did was to add the monitor_echo config to the [base_atmelavr] configuration:
```ini
; Base configuration for Atmel AVR based Arduino boards.
[base_atmelavr]
...
monitor_speed = 57600
monitor_echo = yes
test_ignore = desktop/*
lib_deps = ...
```

This way, when I was testing sending serial commands via the serial monitor interface with PlatformIO, I could see what I was typing, and not only the Arduino’s serial response to my commands.
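If you prefer the PlatformIO CLI over the IDE, the upload and serial monitor steps look roughly like this (the environment name assumes the nanoatmega328new entry shown above; the repository instructions may use the IDE buttons instead):

```bash
# Build and upload the firmware using the new-bootloader Nano environment.
pio run -e nanoatmega328new -t upload

# Open the serial monitor to type commands and see the firmware responses.
pio device monitor
```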
Motor control
In the description and Test it! sections of the Andino firmware documentation, we can see which serial commands should be sent to the firmware so it can perform actions like reading the motor encoders or driving the motors.
So, if we want to read the encoders, we simply send an e character through the serial port (using the PlatformIO serial monitor, for example). We should get a pair of numbers, one for each motor encoder. Then, if we want to drive both motors at full speed, in the same direction, we simply send o 255 255. The documentation is very clear about it. One important thing that I’d like to mention here, though, is the importance of validating the motor encoder readings before moving forward with the rest of the build and software installation.

We need to be sure that we get positive values for both motor encoders when turning the wheels manually in the forward direction. We basically send the r command via the serial port to set the encoders to zero, then we use our hands to move each wheel a bit in the robot’s forward direction, and finally send the e command to read the encoder values. If we get two positive readings, we’re fine. The opposite is also true: move each wheel a bit in the robot’s backward direction and check that we get two negative readings. When I was building the robot I only checked one of the wheels and moved forward with the build. Then, when testing the robot’s autonomous navigation later, I noticed that the robot wasn’t moving correctly.
By checking the odometry, via the ROS /odom topic in RViz, I noticed that whenever I drove the robot forward, the odom topic was indicating that the robot was actually rotating. This was because the left motor encoder was returning a positive value and the right motor encoder was returning a negative value, which is essentially a rotation (one wheel moving forward and the other moving backward). It was a sensor reading issue, not an actual drive issue, because the robot was moving forward correctly. Even though I had connected the wires in the same way as in the documentation (I double checked it), the encoder readings were inverted for the right wheel for whatever reason. Then, instead of changing the code to compensate for that, I simply swapped the Encoder - Pin (A) and Encoder - Pin (B) connections for the right motor encoder (it should return positive readings when moving forward). In the end, instead of having Encoder - Pin (A) connected to the Arduino Nano A2 pin as in the documentation, I connected it to the A3 pin. Consequently, I connected Encoder - Pin (B) to the A2 pin instead of the A3 pin.
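To summarize the sanity check, a serial monitor session looks roughly like this (the numeric responses are only illustrative; the exact output format of the firmware may differ):

```
r                                     <- reset both encoder counts to zero
e                                     <- read the encoders: expect "0 0"
(rotate both wheels forward by hand)
e                                     <- expect two positive values, e.g. "142 137"
r                                     <- reset again
(rotate both wheels backward by hand)
e                                     <- expect two negative values
```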
Raspberry Pi
After testing the Arduino behavior and making sure that everything works fine on its side, we can move on to the software installation for the Raspberry Pi. Installing the software for the Raspberry Pi is nicely described in the Single Board Computer (SBC) section of the Andino project.
I just want to highlight something regarding the Ubuntu Mate 22.04 ARM64 installation. When using the Raspberry Pi Imager, don’t bother configuring WiFi or similar settings; the Andino documentation also mentions this: “No extra configuration should be necessary.”. Installing Ubuntu Mate is not like installing the official Raspberry Pi OS, where we can provide some settings beforehand and have the Raspberry Pi boot up with them already applied, so we can start using the WiFi right away, for example. When installing Ubuntu Mate it is very useful to connect the Raspberry Pi to an external display, mouse and keyboard. When the Pi boots Ubuntu Mate for the first time, it shows a series of screens to configure WiFi, timezone, username, password, hostname, keyboard layout and other things, in the same way as installing regular Ubuntu on a PC.

Another thing that I did at this point was to set the user password but disable the Require my password to log in option. This makes it easier to just power up the robot and connect to it remotely via SSH once the system is fully booted, without first having to attach an external display, keyboard and mouse just to type the password before logging in to Ubuntu. Bear in mind, though, that we still need a user and password to connect via SSH. I’m sure there are better ways to keep a login password for improved security (in case someone else has physical access to the robot) while still making it easy to connect to the robot remotely, but I didn’t want to complicate things at this point. Then, when the Ubuntu installation is done, we can install the SSH server so we can connect to the Pi via SSH later. I’ll talk more about remote communication later.
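Installing and enabling the SSH server is a one-off step on the Pi (a minimal sketch; your image may already ship with it enabled):

```bash
# Install the OpenSSH server on the Raspberry Pi (Ubuntu Mate).
sudo apt update
sudo apt install -y openssh-server

# Make sure the SSH service starts now and on every boot.
sudo systemctl enable --now ssh
```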
Arduino communication
Once the Raspberry Pi software is installed, we can test its communication with the Arduino board. The Raspberry Pi needs a program to communicate with the Arduino via its serial interface (using the USB cable). The program is implemented as a differential drive interface for ROS 2 control, to be used by the other Andino ROS 2 nodes. The implementation lives inside the andino_base package. This package also has a motor_driver_demo example application that we can quickly use to test the motor encoder readings and the motor drive in the same way the Raspberry Pi would when communicating with the Arduino. This is because both the motor_driver_demo and the differential drive interface rely on the MotorDriver class.
So, we can easily start our tests by running the demo application. We first need to build the ROS 2 workspace, which we already did by completing the Raspberry Pi software installation section of the documentation, in particular, the create workspace part. Then, assuming that you also fixed the USB port names like me, we can open a terminal on Ubuntu Mate, on the Raspberry Pi, navigate to the workspace, source it with source install/setup.bash just to be sure, then:
- Read the encoder values for the motors with: motor_driver_demo --serial_port=/dev/ttyUSB_ARDUINO --msg='e'
- Drive both motor wheels at full speed forward with: motor_driver_demo --serial_port=/dev/ttyUSB_ARDUINO --msg='o 255 255'
We can also check again that the encoder readings are fine, just to be sure, for the reasons explained before.
At this point, you might face an issue, like I did. When running the demo I was constantly getting Response to X timed out errors, where X represents one of the provided commands like e or o.
If these errors are showing up here, they will probably show up later as well when running the robot bringup code.
After seeing these errors I wanted to verify whether this was an issue with the firmware installed on the Andino or not.
I installed picocom on the Raspberry Pi with sudo apt install picocom and opened a serial connection with the Arduino using picocom -b 57600 /dev/ttyUSB_ARDUINO. Then, I sent the e command to read from the motor encoders and got a proper reading, no timeout. I also sent an o 255 255 command to see if the motors were being driven properly as well, and everything was fine. Thus, the issue was probably related to the serial communication opened by the MotorDriver class itself.
As the Andino project uses Libserial to help with the serial communication, I suspected issues with the library usage, as each library might have different configurations to open a serial port and so on. So, I decided to create two small C++ programs using different serial libraries to test the Arduino communication directly from my PC instead. This way, I could figure out whether the Libserial library was the problem, or something else in the code or in the way it was used. You can check out the cpp_serial project with these two C++ programs. I simply wanted the C++ programs to send a single serial command to the Arduino, connected via USB to my PC (running Linux), in a similar fashion to the demo application, using the same serial configurations but with different serial libraries. I also targeted /dev/ttyUSB0 directly in these programs, so change this port in the project code if your Arduino board shows up as another USB port on your PC. By the way, nothing stops you from running these programs directly on the Raspberry Pi as well. The project uses Pixi as the package manager, so it is very easy to use.
That said, with the two C++ programs I noticed that I could read the Arduino encoder values 100% of the time, without a timeout, using both the Boost and Termios serial libraries. After all the tests, I knew that the Arduino firmware was working properly and that I could use the same serial configurations as the Andino project to talk to it. So, it was certainly an issue in the way Libserial was opening the serial port to communicate with the Arduino.
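To give an idea of the approach, here is a rough sketch of a Boost.Asio based test. This is illustrative and not the actual cpp_serial code; the port name, baud rate and carriage-return terminator are assumptions that match my setup:

```cpp
// Minimal Boost.Asio serial test: send 'e' and print the firmware response.
#include <boost/asio.hpp>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

int main() {
  boost::asio::io_context io;
  boost::asio::serial_port port(io, "/dev/ttyUSB0");
  port.set_option(boost::asio::serial_port_base::baud_rate(57600));

  // Opening the port usually toggles DTR and resets the Nano, so give the
  // firmware a moment to boot before sending anything.
  std::this_thread::sleep_for(std::chrono::seconds(2));

  // Send the 'e' command to request the encoder values.
  boost::asio::write(port, boost::asio::buffer(std::string("e\r")));

  // Block until a '\r'-terminated response arrives and print it.
  boost::asio::streambuf response;
  boost::asio::read_until(port, response, '\r');
  std::cout << &response << std::endl;
  return 0;
}
```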
To solve this mystery, I dug into the motor demo application and the MotorDriver class. I wanted to understand why I was getting timeouts and whether I could handle it without replacing the Libserial library. After my findings, and as a way to contribute to this amazing Andino project, I’ve created an issue in the Andino repository and also a pull request (PR) to tackle this same issue. You can check them out, if you want 🙂.
Remote communication
Once I knew that the Raspberry Pi could communicate properly with the Arduino, it was time to test some of the ROS 2 programs. For that, we first need to bring up the robot, that is, start all the ROS 2 nodes that perform the essential robot initialization tasks: connect to the Arduino, reset the values of the motor encoders, create the /odom topic, start reading from the Lidar sensor and the IMU, etc. At this point, the robot is ready to start running more complex tasks, like navigation. The bringup program should always be running in a terminal, while we open new terminal sessions to run other programs.
OK, so we need to run a set of programs to do useful things with the Andino. Do we connect the Andino robot to an external display, mouse and keyboard every time and then start the bringup programs? And what about starting the autonomous navigation program and then following the navigation path through a visualization program (e.g. RViz) while the robot is moving? Does all of this still sound reasonable? In my opinion, it might be doable, but very cumbersome. Instead of physically accessing the Andino every time we want to run something on it, it’s better to just connect to it remotely, as I’ve been hinting at.
As we have an SSH server configured on the Andino at this point, we can simply use Visual Studio Code with the Remote SSH extension to easily connect to the robot via SSH from our PCs. Then, we can navigate through the Andino filesystem normally from the editor and also open a terminal to run commands directly on the Raspberry Pi as we would if a keyboard were connected to it. As I mentioned previously in this post, if you have installed Ubuntu with the Require my password to log in option disabled, connecting to the robot becomes even easier: you can simply power up the robot, give it some seconds to boot up and then try to connect to it via SSH until it succeeds.

Once you’re in, you can run the ROS 2 bringup command. Just be aware that sometimes the bringup exits for whatever reason when you first start it. What I do is simply run it again, until ROS 2 has spawned all bringup nodes and the terminal session stops outputting logs without exiting the bringup program, showing that everything is up. If you want to check the expected output of the bringup program, see the Expected behavior section in the issue description that I created.
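For reference, a plain SSH session works just as well as the VS Code extension. The hostname, user and workspace path below are placeholders for whatever you configured during the Ubuntu Mate setup:

```bash
# Connect to the robot over the local network (replace user and hostname).
ssh andino@andino.local

# Inside the SSH session: source the workspace and start the bringup.
cd ~/ws                      # adjust to wherever you created the workspace
source install/setup.bash
ros2 launch andino_bringup andino_robot.launch.py
```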
Now that we can bring up the Andino remotely, I’d like to give another tip. Previously I mentioned RViz, which is a nice visualization tool that helps us see the data coming from the robot, and more. For example, RViz makes it easier to see the robot mapping and localization, and also to give navigation goals to the robot. However, even though we can connect to the Andino remotely via SSH, we can’t see a graphical interface running on the Andino through SSH; the protocol doesn’t have this feature. We could use a VNC program for that, allowing us to connect remotely to the Andino using a graphical interface instead. VNC is an option, but I believe there is a better way, as we’re using ROS 2.
ROS 2 is a distributed framework, meaning that as long as the nodes can find each other on a network, they can easily communicate with each other via topics, services or actions.
For that, we first need to set some environment variables so the nodes can find each other. We’ll start by defining these variables in the ~/.bashrc file on the Raspberry Pi. This way, every time we open a new terminal session on the Raspberry Pi, these variables will be exported and made available to the ROS 2 nodes:
```bash
# For ROS 2
export ROS_DISTRO=humble
export ROS_DOMAIN_ID=0
export ROS_LOCALHOST_ONLY=0
source /opt/ros/humble/setup.bash
```

Notice that we’re setting the ROS distribution to humble, as we’ve installed it per the instructions in the Andino documentation, if you followed along with the links in this post. We also set ROS_LOCALHOST_ONLY=0, meaning that ROS 2 communication can happen outside localhost. Finally, we define ROS_DOMAIN_ID=0, which allows ROS 2 nodes on the same domain to freely discover and send messages to each other using DDS (the default middleware protocol used for distributed communication in ROS 2); ROS 2 nodes on different DDS domains cannot find each other. You can read more about the DOMAIN ID and LOCALHOST ONLY in the ROS 2 documentation. To help us further, we also source the ROS 2 humble installation, which makes the ROS 2 commands available in the terminal session as well.
With the ROS environment variables configured on the Andino side, we can set up the Andino workspace and similar environment variables on our PCs as well. This allows us to run other Andino ROS 2 programs, like navigation, and visualize the robot data using RViz directly on the PC over our local WiFi network. This is possible because ROS 2 programs (or nodes) can find each other on a network when it is properly configured, as we’re doing. The nodes communicate with each other by exchanging messages through topics, services and actions. So, the navigation nodes running on our PC can read the /odom topic published by the Andino, for example, and they can publish velocity commands to the cmd_vel topic subscribed by the Andino in order to move the robot.
Once the configuration is done on the PC side, we can easily teleoperate the robot over WiFi using the teleoperation node.
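Once that configuration is in place, a quick check that discovery works across the network looks like this from a sourced terminal on the PC (assuming the robot bringup is already running):

```bash
# List the topics visible from the PC; /odom should show up if discovery works.
ros2 topic list

# Drive the robot from the PC keyboard over WiFi.
ros2 launch andino_bringup teleop_keyboard.launch.py
```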
To make it easier to set up the Andino workspace on the PC along with ROS 2, we can use a Docker container.
The process is relatively easy once we have Docker installed. Then, we just need to clone the Andino repository, humble branch, open it with VS Code and use the Dev Containers extension.
The Dev Containers extension requires a .devcontainer folder with a devcontainer.json file in it to work. The extension reads the configuration provided in the devcontainer.json file to set up our project within an isolated Docker container that we can interact with from inside VS Code as if we were working locally in a folder on our PC. The Dev Containers extension can detect this devcontainer.json file automatically and prompt us with actions like Rebuild container to spin up the Docker container for us and open the container filesystem inside VS Code. If you’re not sure how to use it, have a look at the Developing inside a Container documentation.
I’m assuming that you’re using a Linux OS on your PC. This remote communication process might also work on Mac, but I haven’t tested it. On Windows, I tried to make it work using WSL 2 and Docker Desktop for Windows, but something in the way WSL 2 sets up its network and uses Docker wasn’t quite working. I also tried installing the Docker engine directly inside WSL 2 and I even configured the Mirrored mode networking for WSL 2, without luck. That said, someone else might be able to persist a bit more with this Windows setup than I did.
Moving on to the devcontainer.json file, I have seen one example in this PR opened by Romu10 in the Andino repository.
I used it as an example to create my own (feel free to copy and paste it):
{ "name": "Andino", "build": { "context": "../", "dockerfile": "../docker/Dockerfile", "args": { "USERID": "${localEnv:USER_UID:1000}", "USER": "${localEnv:USER}" }, "options": ["--ssh=default"] }, "workspaceFolder": "/home/${localEnv:USER}/ws", "mounts": [ "source=${localWorkspaceFolder},target=/home/${localEnv:USER}/ws/src,type=bind", "source=/tmp/.X11-unix,target=/tmp/.X11-unix,type=bind", "source=${localEnv:HOME}/.ssh,target=/home/${localEnv:USER}/.ssh,type=bind" ], "runArgs": [ "--net=host", "--hostname=${localEnv:HOSTNAME}", "--name=ros2_humble_andino_container", "--gpus=all", "--runtime=nvidia" ], "containerEnv": { "DISPLAY": "${localEnv:DISPLAY}", "ROS_DOMAIN_ID": "0", "ROS_LOCALHOST_ONLY": "0", "NVIDIA_DRIVER_CAPABILITIES": "all" }, "postCreateCommand": "/bin/bash -c 'rosdep install --from-paths src --ignore-src -i -y && colcon build && source install/setup.bash'", "shutdownAction": "stopContainer", "customizations": { "vscode": { "extensions": [ "ms-vscode.cpptools-extension-pack", "ms-python.python", "ms-iot.vscode-ros", "redhat.vscode-xml", "redhat.vscode-yaml" ], "settings": { "terminal.integrated.shell.linux": "/bin/bash" } } }}
// REMINDER// Please run xhost +local:docker on your host machine.
// Add the following line to .bashrc:// export USER_UID=$(id -u)// export USER=$(whoami)// export HOSTNAME=$(hostname)Note that it is recommended that you export some environment variables on your system:
```bash
export USER_UID=$(id -u)
export USER=$(whoami)
export HOSTNAME=$(hostname)
```

These environment variables will share your Linux user with the Docker container, so the processes inside the container can run as your local user instead of the root user.
It’s also recommended to run the terminal command xhost +local:docker before launching this Dev Container in VS Code. The xhost command allows your Docker container to access your host’s X display server so it can open graphical interfaces, like RViz.
We also set the ROS_DOMAIN_ID and ROS_LOCALHOST_ONLY variables for the container with the same values as we set for the Raspberry Pi. Now, the ROS 2 nodes
running inside the container on the PC will be able to find the ROS 2 nodes running on the Raspberry Pi over the WiFi network. This is also possible because
we’re passing the --net=host argument to the container, which shares the PC’s local network with the container.
You might have seen some NVIDIA and GPU configurations in the devcontainer.json file as well. If you’re planning to run simulations inside the Docker container, which I’ll mention later, it might be useful to allow the container to use the NVIDIA container runtime (of course, only if you have an NVIDIA GPU on your PC). For this to work, we need the NVIDIA drivers configured on the PC and Docker configured to use the NVIDIA container runtime. If you don’t have an NVIDIA GPU, you can simply remove the following lines from your .devcontainer/devcontainer.json file, making Docker use the PC’s CPU by default:
{ ... "runArgs": [ ... "--gpus=all", "--runtime=nvidia" ], "containerEnv": { ... "NVIDIA_DRIVER_CAPABILITIES": "all" }, ...}Another optional thing is the SSH configuration provided to the container. As I’m cloning GitHub repositories using SSH,
I also want to commit Git changes that I make inside the container and push them to GitHub. For this, I need to forward my SSH config to the container during development. If you don’t use SSH, or you use another authentication mechanism like personal access tokens or a credential helper, you might have to use another procedure to be able to commit changes inside the container. In that case, check the Sharing Git credentials with your container documentation from VS Code. You can then also remove the following lines from your .devcontainer/devcontainer.json file:
{ ... "build": { ... "options": ["--ssh=default"] }, ... "mounts": [ ... "source=${localEnv:HOME}/.ssh,target=/home/${localEnv:USER}/.ssh,type=bind" ], ...}Once you have your devcontainer.json file ready, you might want to read the Simulation
section as well, which might have changes complementary to the ones mentioned in this section for the Docker container. Finally, be aware that depending on your WiFi network
configuration you might have issues with the remote connection. DDS might have trouble over some WiFi networks, especially regarding multicast.
That’s why some people are replacing the default ROS middleware implementation with Zenoh. There is a great talk about this topic, Actuate 2024 | Chris Lalancette | Zenoh and ROS 2: Not a Paradox, that you can check out. I didn’t have any issues with DDS on my home WiFi network, though.
Robot navigation
We’re finally ready to make the Andino drive autonomously for the first time. One of the advantages of the ROS 2 ecosystem is the number of great packages that provide us with state-of-the-art algorithms and tooling for robotics. Nav2 is one of these packages. The andino_navigation package implements Nav2 for the Andino and describes how to run the navigation for the real or the simulated robot. If you’re familiar with Nav2, this should be enough information. Otherwise, keep reading this section if you feel like you need to know a bit more.
First of all, as I already mentioned in this post, we need to be careful with the motor encoder readings, otherwise the odometry won’t work. Secondly, we also need to have the bringup code running on the Andino with: ros2 launch andino_bringup andino_robot.launch.py. It’s recommended to start RViz in another terminal as well with ros2 launch andino_bringup rviz.launch.py. These are the steps we need to perform every time before running other processes.
As for the navigation process, the other steps are: create a map of the desired place where we want the Andino to drive autonomously, and use this map to perform the actual autonomous navigation later.
To create the map we can start the teleoperation node in another terminal session with ros2 launch andino_bringup teleop_keyboard.launch.py.
If you have a joystick, it’s easier to pilot the robot with it. Otherwise, you can do just fine with your PC’s keyboard. Then, we start the SLAM node
in another terminal session with ros2 launch andino_slam slam_toolbox_online_async.launch.py. This node will start mapping the robot’s surroundings while we’re driving the robot around with the teleoperation node. We’ll see on RViz that a map
starts to show up. Once we have driven the Andino robot enough and we can see a good initial map for our desired place, we can save the map with ros2 run nav2_map_server map_saver_cli -f ./src/mymap. Here, I’m saving the map
with the mymap name inside the src folder in the workspace. I’m assuming you’ll be running these commands from the root folder of your workspace as well (which will be the case if you opened the workspace in your PC using the devcontainer.json file that I shared).
Saving the map will actually create two files: mymap.pgm and mymap.yaml. Before we move forward, I have two more tips here. Sometimes the SLAM node dies for no apparent reason and you might see the Robot Model in RViz failing to render due to the lack of TF transformations, because the /map topic provided by the SLAM node isn’t available. In this case, you can simply restart the SLAM node and possibly RViz as well. You might even need to restart the robot bringup. For the second tip, I noticed that the SLAM node was printing some warnings in the terminal about the RPLidar having a minimum laser range equal to 0.2 instead of 0.0 as in the Andino SLAM configuration. So, I just updated the line min_laser_range: 0.0 to min_laser_range: 0.2 in the config/slam_toolbox_online_async.yaml file in the andino_slam package. This seems to be only a minor thing, though.
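The change in config/slam_toolbox_online_async.yaml boils down to this (surrounding keys elided; the exact nesting follows the usual slam_toolbox parameter layout, so double check it in the file):

```yaml
slam_toolbox:
  ros__parameters:
    # ...
    min_laser_range: 0.2  # was 0.0; match the RPLidar's actual minimum range
    # ...
```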
Lastly, now that we have a saved map, we can stop the SLAM node (with CTRL + C in the terminal, or just by closing the terminal) and start the navigation nodes, providing the YAML map file to the navigation bringup: ros2 launch andino_navigation bringup.launch.py map:=./src/mymap.yaml. Then, in RViz, we need to provide an estimated robot pose via the UI so the navigation stack knows roughly where the robot currently is. Once the estimated pose is provided, we can give a goal pose on the map to make the robot navigate to it autonomously.
The below image shows these RViz buttons:
As an observation, you might not see the robot properly rendered in RViz sometimes. I briefly mentioned before that this might be due to the missing /map topic. This topic is created by the Nav2 and SLAM packages when you run either of them. The RViz configuration for the Andino also uses map as the global fixed frame by default. If you only want to see the robot in RViz to check something and don’t intend to use SLAM or navigation, you can change the RViz global fixed frame to base_footprint or base_link. Otherwise, if map is the global fixed frame, check that you have either the SLAM or navigation nodes running properly so the /map topic becomes available.
In this section I’m only describing a basic example of first creating the map and then navigating using it. There are other options as well, such as localization without mapping. If you want to know more, the Nav2 documentation is a great place to start. I also recommend these two videos from the Articulated Robotics channel: Easy SLAM with ROS using slam_toolbox and Making robot navigation easy with Nav2 and ROS!. Even though the videos show a different robot, Nav2 is used for navigation and the same ideas apply to the Andino navigation packages.
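To recap the whole mapping-then-navigation flow, this is the sequence of commands I ran, one per terminal (the bringup on the robot; the rest can run on the PC inside the Dev Container, with every terminal sourcing the workspace first):

```bash
# On the robot: bring up the base, sensors and controllers.
ros2 launch andino_bringup andino_robot.launch.py

# Visualization.
ros2 launch andino_bringup rviz.launch.py

# Drive the robot around while mapping.
ros2 launch andino_bringup teleop_keyboard.launch.py
ros2 launch andino_slam slam_toolbox_online_async.launch.py

# Save the map once it looks good.
ros2 run nav2_map_server map_saver_cli -f ./src/mymap

# Stop SLAM, then start navigation with the saved map.
ros2 launch andino_navigation bringup.launch.py map:=./src/mymap.yaml
```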
Simulation
We can simulate the Andino robot with Gazebo. By default, the humble version of the Andino project uses Gazebo Classic. You can see how to start a simulation in the andino_gz_classic package inside the Andino repository. However, as Gazebo Classic has not been supported since January 2025, I updated the docker/Dockerfile and docker/requirements.txt in my local clone of the Andino repository to use Gazebo Fortress for testing things on the simulation side. Luckily, the Ekumen team also created a new Andino simulation package for Gazebo Fortress, which I added to the docker/requirements.txt file (ros-humble-andino-gz). The file changes are:
```dockerfile
FROM osrf/ros:humble-desktop-full

# Arguments for building
ARG USERID
ARG USER

# Setup environment
ENV TERM linux
ENV DEBIAN_FRONTEND noninteractive
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections

# Copy requirement files and install dependencies
COPY docker/requirements.txt .
RUN apt-get update && apt-get install --no-install-recommends -y $(cat requirements.txt)
RUN rm requirements.txt

# Install Gazebo Fortress
RUN sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list'
RUN wget http://packages.osrfoundation.org/gazebo.key -O - | sudo apt-key add -
RUN sudo apt-get update
RUN sudo apt-get install ignition-fortress -y

# Create a user with passwordless sudo
RUN adduser --uid $USERID --gecos "ekumen developer" --disabled-password $USER
RUN adduser $USER sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN echo "export QT_X11_NO_MITSHM=1" >> /home/$USER/.bashrc
RUN echo "export IGN_IP=127.0.0.1" >> /home/$USER/.bashrc
USER $USER

# Adds USER to dialout and plugdev group.
# This is needed to access the serial ports, for further references check
# the libserial documentation.
RUN sudo usermod -a -G dialout $USER
RUN sudo usermod -a -G plugdev $USER

# Creates the src folder of the workspace.
RUN mkdir -p /home/$USER/ws/src

# Adds to bashrc the ros humble overlay sourcing.
RUN echo "source /opt/ros/humble/setup.bash" >> /home/$USER/.bashrc
# Adds to bashrc the project's workspace sourcing.
RUN echo "source /home/$USER/ws/install/setup.bash" >> /home/$USER/.bashrc
# Adds colcon autocomplete
RUN echo "source /usr/share/colcon_argcomplete/hook/colcon-argcomplete.bash" >> /home/$USER/.bashrc

# Updates
RUN sudo apt upgrade -y && sudo apt update && rosdep update

# Download models from fuel
RUN ign fuel download --url "https://fuel.gazebosim.org/1.0/OpenRobotics/models/Office Desk"
RUN ign fuel download --url "https://fuel.gazebosim.org/1.0/OpenRobotics/models/Office Chair"
RUN ign fuel download --url "https://fuel.gazebosim.org/1.0/OpenRobotics/models/Dining Table"
RUN ign fuel download --url "https://fuel.gazebosim.org/1.0/OpenRobotics/models/FoodCourtTable1"
RUN ign fuel download --url "https://fuel.gazebosim.org/1.0/OpenRobotics/models/Depot"
RUN ign fuel download --url "https://fuel.gazebosim.org/1.0/malbonico/models/Table"
RUN ign fuel download --url "https://fuel.gazebosim.org/1.0/openrobotics/models/fridge/2"
RUN ign fuel download --url "https://fuel.gazebosim.org/1.0/openrobotics/models/table/4"
RUN ign fuel download --url "https://fuel.gazebosim.org/1.0/makerspet/models/tv_65in/1"
RUN ign fuel download --url "https://fuel.gazebosim.org/1.0/openrobotics/models/table/4"

# Defines a workspace folder.
WORKDIR /home/$USER/ws

CMD ["/bin/bash"]
```

And the docker/requirements.txt file:

```
apt-utils
bash-completion
build-essential
curl
debian-archive-keyring
debian-keyring
evtest
gdb
git
gnupg2
gpg-agent
joystick
jstest-gtk
locales
lsb-release
mercurial
nano
openssh-server
python3
python3-pip
python3-setuptools
ros-humble-andino-gz
software-properties-common
sudo
tmux
xterm
wget
```

You can read the Andino simulation package documentation to see which commands to use. As we already did the Docker setup on our PC with VS Code, we can simply run the simulation inside the container as well. Very nice!
Summary
It was a fun learning experience building my own autonomous robot. I was able to make a small software contribution to the Andino project (and might do others in the future), and I even got to work with CAD (to update the design of the motor holder) and electronics (by soldering some of the components and wiring them up). It was a complete robotics project for me. I believe Andino is one of the best open-source robots that a serious beginner could build and program: a robot with enough processing power to run things like ROS 2 and more. For example, Ekumen now maintains a Rust version of the project, which replaces ROS 2 with dora-rs (and potentially other frameworks in the future). Ekumen also has other open-source projects worth checking out.
For the next steps, it would be good to dive deeper into the Andino project and understand how each piece was programmed. Also, from what I briefly checked, even though the BNO055 is initialized by the Andino firmware, the sensor readings are not being used. I also don’t think the camera is being used to perform any task, like visual navigation. I know that the andino-rs project has some working examples that use the camera, though. That said, I believe it’d be good to have a look at these things too; I can even see some issues in the project roadmap regarding them, like the BNO055 one. Lastly, if you liked the project as much as I did, consider giving it a star on GitHub 😉.