In order to support a new humanoid design, there is a whole stack of software that needs to be developed.
A humanoid essentially contains 30+ motors (not counting the motors in its fingers). These motors are communicated with over the CAN bus protocol, and each motor gets a unique slave ID and master ID.
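As a rough illustration of what talking to one motor looks like, here is a minimal sketch using the python-can library over SocketCAN. The channel name, the slave/master IDs, and the payload are assumptions for illustration, not values from any specific motor's datasheet.

```python
# Minimal sketch (python-can over SocketCAN) of exchanging frames with one motor.
# The channel, IDs, and payload below are placeholder assumptions.
import can

SLAVE_ID = 0x01   # assumed ID the motor listens on
MASTER_ID = 0x00  # assumed ID the motor replies to

bus = can.interface.Bus(channel="can0", interface="socketcan")

# Send an 8-byte command frame addressed to the motor's slave ID.
command = can.Message(
    arbitration_id=SLAVE_ID,
    data=[0xFF] * 8,          # placeholder payload
    is_extended_id=False,
)
bus.send(command)

# Wait for the motor's feedback frame (identified here by the master ID).
reply = bus.recv(timeout=0.01)
if reply is not None and reply.arbitration_id == MASTER_ID:
    print("feedback:", reply.data.hex())

bus.shutdown()
```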
The motors are controlled in "MIT mode", which means we send each motor commands setting immediate targets for its position, velocity, and torque. We define the targets, and the motor does the lower-level work of achieving them using its internal PID loops, which typically run at 1 kHz.
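The exact bit layout varies by vendor, but MIT-mode drivers commonly pack a 16-bit position, a 12-bit velocity, 12-bit Kp and Kd gains, and a 12-bit torque into one 8-byte CAN payload. Here is a sketch of that packing; the value ranges are assumptions and should be taken from the motor's datasheet.

```python
# Sketch of packing one MIT-mode command into an 8-byte CAN payload.
# Bit layout follows the packing commonly used by MIT-mode drivers; the
# value ranges below are motor-specific assumptions, not real limits.

def float_to_uint(x: float, x_min: float, x_max: float, bits: int) -> int:
    """Scale a float into an unsigned integer of the given bit width."""
    x = min(max(x, x_min), x_max)
    return int((x - x_min) * ((1 << bits) - 1) / (x_max - x_min))

def pack_mit_command(pos, vel, kp, kd, tau) -> bytes:
    p = float_to_uint(pos, -12.5, 12.5, 16)
    v = float_to_uint(vel, -45.0, 45.0, 12)
    kp_u = float_to_uint(kp, 0.0, 500.0, 12)
    kd_u = float_to_uint(kd, 0.0, 5.0, 12)
    t = float_to_uint(tau, -18.0, 18.0, 12)
    return bytes([
        p >> 8, p & 0xFF,
        v >> 4, ((v & 0xF) << 4) | (kp_u >> 8),
        kp_u & 0xFF,
        kd_u >> 4, ((kd_u & 0xF) << 4) | (t >> 8),
        t & 0xFF,
    ])

# Example: hold position 0 rad with pure damping.
payload = pack_mit_command(pos=0.0, vel=0.0, kp=0.0, kd=1.0, tau=0.0)
```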
The CAN communication with the motors is typically abstracted behind a service, like a ROS2 service. This gives the rest of the system an easier interface to the motors. The ROS2 service is also a useful bridge that lets us switch between the hardware and the simulated robot under the hood, so it acts as the interface for the rest of the system.
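A minimal sketch of such a service node with rclpy is below. The `SetMotorTargets` service type (and the `humanoid_msgs` package it comes from) is hypothetical; a real setup would define its own .srv interface.

```python
# Minimal rclpy sketch of wrapping the CAN layer behind a ROS2 service.
import rclpy
from rclpy.node import Node
from humanoid_msgs.srv import SetMotorTargets  # hypothetical custom interface package

class MotorService(Node):
    def __init__(self):
        super().__init__("motor_service")
        self.srv = self.create_service(
            SetMotorTargets, "set_motor_targets", self.handle_request
        )

    def handle_request(self, request, response):
        # Here the node would pack MIT-mode frames and write them to the CAN
        # bus, or forward the same targets to the simulator instead of hardware.
        self.get_logger().info(f"targets for {len(request.positions)} joints")
        response.success = True
        return response

def main():
    rclpy.init()
    rclpy.spin(MotorService())
    rclpy.shutdown()
```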
Finally, there is often a simulator, which uses an MJCF or URDF file to simulate the robot in a physics engine (typically MuJoCo or Isaac Lab). These robot description files are XML-like files that describe the entire mechanical design of the robot.
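For example, loading a description file and stepping the physics with the MuJoCo Python bindings looks roughly like this; "humanoid.xml" is a placeholder path.

```python
# Sketch of loading a robot description and stepping the physics in MuJoCo.
import mujoco

model = mujoco.MjModel.from_xml_path("humanoid.xml")  # placeholder MJCF path
data = mujoco.MjData(model)

# Step the simulation for one second of simulated time.
steps = int(1.0 / model.opt.timestep)
for _ in range(steps):
    data.ctrl[:] = 0.0          # zero actuator commands
    mujoco.mj_step(model, data)

print("base position:", data.qpos[:3])
```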
These simulators are also used to train the AI models that control the robot, via the ROS2 services described above.
These simulators, along with PyTorch code, allow us to learn a model for doing some useful task with the robot's motors. The models are exported to an interoperable format called ONNX. The ONNX model is then loaded into the C++ program that generates targets for the humanoid's motors, usually at over 200 Hz.
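The export-and-load flow looks roughly like the sketch below, shown in Python for brevity (the deployed inference runs in C++ via ONNX Runtime). The observation/action sizes and the tiny MLP are placeholder assumptions standing in for a trained policy.

```python
# Sketch of exporting a PyTorch policy to ONNX and running it with ONNX Runtime.
import torch
import torch.nn as nn
import numpy as np
import onnxruntime as ort

OBS_DIM, ACT_DIM = 48, 12  # assumed dimensions

policy = nn.Sequential(      # placeholder for the trained policy network
    nn.Linear(OBS_DIM, 256), nn.ELU(),
    nn.Linear(256, ACT_DIM),
)

# Export with a fixed input shape.
dummy_obs = torch.zeros(1, OBS_DIM)
torch.onnx.export(policy, dummy_obs, "policy.onnx",
                  input_names=["obs"], output_names=["actions"])

# Load and run one inference step, as the control loop would at >200 Hz.
session = ort.InferenceSession("policy.onnx")
obs = np.zeros((1, OBS_DIM), dtype=np.float32)
actions = session.run(["actions"], {"obs": obs})[0]
print(actions.shape)  # (1, ACT_DIM)
```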
In order to move the motors and learn to do useful tasks, the robot must be able to estimate fairly well the pose of its entire body at any given moment, given all the information available to it (from the IMU sensor and position feedback from the motor encoders). This task requires algorithms built around Kalman filters. Typically, we use an open-source implementation of this, like Pinocchio, or a closed-source but extensible library like leggedcontrol2.
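To show the structure such estimators build on, here is a generic linear Kalman filter predict/update step in NumPy. A real floating-base estimator fuses IMU data with leg kinematics (e.g. computed with Pinocchio) and keeps a much richer state; the matrices here are illustrative placeholders.

```python
# Generic linear Kalman filter predict + update step (illustrative only).
import numpy as np

def kf_step(x, P, z, A, H, Q, R):
    """x: state estimate, P: covariance, z: measurement,
    A: state transition, H: measurement model, Q/R: noise covariances."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: 1D position/velocity state, position measurement, 200 Hz step.
dt = 0.005
A = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, z=np.array([0.01]), A=A, H=H, Q=Q, R=R)
```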
In general, building robots involves many sub-workflows. One needs to build the motor control loop, involving the CAN bus, multithreading, ONNX inference, etc.; one needs to build a pipeline that goes from video recordings, to extracting human motions from them, to transferring those motions onto the humanoid's body in simulation; and one also needs to be able to take such motion datasets, learn those behaviors in simulation, and then deploy them on real hardware. A rough sketch of the control loop that ties these pieces together is below.
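In this skeleton, `read_sensors`, `estimate_state`, and `send_mit_commands` are hypothetical placeholders standing in for the CAN, estimation, and packing code sketched earlier; the 200 Hz rate matches the target mentioned above.

```python
# Rough skeleton of the real-time control loop, showing how the pieces fit
# together at roughly 200 Hz. The helper functions are hypothetical placeholders.
import time
import numpy as np
import onnxruntime as ort

CONTROL_HZ = 200
session = ort.InferenceSession("policy.onnx")

def control_loop():
    period = 1.0 / CONTROL_HZ
    while True:
        t0 = time.monotonic()

        imu, joint_positions = read_sensors()        # CAN feedback + IMU
        obs = estimate_state(imu, joint_positions)   # Kalman-filter-based pose estimate
        actions = session.run(None, {"obs": obs.astype(np.float32)[None, :]})[0][0]
        send_mit_commands(actions)                   # pack + send MIT-mode frames

        # Sleep out the remainder of the period to hold roughly 200 Hz.
        time.sleep(max(0.0, period - (time.monotonic() - t0)))
```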
In short, there are a lot of pieces. Luckily, there are a lot of good open-source projects that allow us to build most of these components by following their approaches closely. Many of these we are currently using with slight modifications.