Related Blog Post: For behind-the-scenes details and the full development journey, check out the companion Medium article: How I'm Building an Autonomous Pick-and-Place System with ROS 2 Jazzy and Gazebo Harmonic
The blog dives into simulation setup, robotic control, MoveIt Task Constructor, and lessons learned — perfect if you're curious about the engineering side or want to replicate the project from scratch.
This project integrates the Robotiq 2-Finger Gripper with a Universal Robots UR3 arm using ROS 2 Humble / Jazzy and Ignition Gazebo. It includes URDF models, ROS 2 control configuration, simulation launch files, MoveIt Task Constructor pick-and-place, vision-based object detection, LLM-driven task planning (Ollama), and demonstration recording for behavior cloning.
Note: This setup uses a fixed mimic-joint configuration for the Robotiq gripper to support simulation in newer Gazebo (Harmonic). Only the primary `finger_joint` receives commands; the mimic joints follow automatically.
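As a rough illustration, a mimic pair in a URDF looks like this (joint and link names here are illustrative, not copied from this repo's model):

```xml
<!-- Primary joint: the only one the controller commands -->
<joint name="finger_joint" type="revolute">
  <parent link="robotiq_base"/>
  <child link="left_knuckle"/>
  <axis xyz="0 0 1"/>
  <limit lower="0.0" upper="0.8" effort="10" velocity="2.0"/>
</joint>

<!-- Mimic joint: tracks finger_joint at a fixed ratio, never commanded directly -->
<joint name="right_knuckle_joint" type="revolute">
  <parent link="robotiq_base"/>
  <child link="right_knuckle"/>
  <axis xyz="0 0 1"/>
  <limit lower="0.0" upper="0.8" effort="10" velocity="2.0"/>
  <mimic joint="finger_joint" multiplier="1.0" offset="0.0"/>
</joint>
```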
Make sure you have ROS 2 Humble or ROS 2 Jazzy and Ignition Gazebo installed.
git clone https://github.com/darshmenon/UR3_ROS2_PICK_AND_PLACE.git
cd UR3_ROS2_PICK_AND_PLACE

# Set to humble or jazzy
export ROS_DISTRO=humble
sudo apt install ros-$ROS_DISTRO-rviz2 \
ros-$ROS_DISTRO-joint-state-publisher \
ros-$ROS_DISTRO-robot-state-publisher \
ros-$ROS_DISTRO-ros2-control \
ros-$ROS_DISTRO-ros2-controllers \
ros-$ROS_DISTRO-controller-manager \
ros-$ROS_DISTRO-joint-trajectory-controller \
ros-$ROS_DISTRO-position-controllers \
ros-$ROS_DISTRO-gz-ros2-control \
ros-$ROS_DISTRO-ros2controlcli \
ros-$ROS_DISTRO-moveit \
ros-$ROS_DISTRO-moveit-ros-perception \
ros-$ROS_DISTRO-simple-grasping \
ros-$ROS_DISTRO-cv-bridge \
ros-$ROS_DISTRO-tf2-ros \
ros-$ROS_DISTRO-tf2-geometry-msgs \
ros-$ROS_DISTRO-pcl-ros

Jazzy only — add these two extra packages:
sudo apt install ros-jazzy-ros-gz-sim ros-jazzy-ros-gz-bridge \
  ros-jazzy-moveit-planners-stomp

STOMP is not packaged for Humble, so leave it out there — the planner init fails silently and is harmless.
pip3 install -r requirements.txt
# Ollama is required for the LLM planner:
# Install from https://ollama.com
# Then pull your preferred model:
ollama pull llama2:latest

colcon build --symlink-install
source install/setup.bash

This project supports MoveIt Task Constructor (MTC) for advanced pick-and-place planning.
This repo already includes a patched MTC source in src/moveit_task_constructor/ that works for both ROS 2 Humble and Jazzy — no extra cloning needed. Just build normally:
colcon build --symlink-install

MTC uses warehouse_ros_mongo to persist planning scenes and trajectories. MongoDB must be installed and running before launching the demo:
curl -fsSL https://www.mongodb.org/static/pgp/server-7.0.asc | \
sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | \
sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
sudo apt-get update && sudo apt-get install -y mongodb-org
sudo systemctl start mongod && sudo systemctl enable mongod

Verify it is running: mongosh should connect to mongodb://127.0.0.1:27017.
For Humble/Jazzy API differences and troubleshooting, see ur_mtc_pick_place_demo/README.md.
bash ur_mtc_pick_place_demo/scripts/robot.sh

Launches Gazebo + MoveIt + planning scene server + MTC demo in sequence.
ros2 launch ur_gazebo ur.gazebo.launch.py

bash ur_mtc_pick_place_demo/scripts/pointcloud.sh

ros2 launch ur_description view_ur.launch.py ur_type:=ur3

ros2 launch robotiq_2finger_grippers robotiq_2f_85_gripper_visualization/launch/test_2f_85_model.launch.py

ros2 action send_goal /arm_controller/follow_joint_trajectory control_msgs/action/FollowJointTrajectory \
'{
"trajectory": {
"joint_names": [
"shoulder_pan_joint",
"shoulder_lift_joint",
"elbow_joint",
"wrist_1_joint",
"wrist_2_joint",
"wrist_3_joint"
],
"points": [
{
"positions": [0.0, -1.57, 1.57, 0.0, 1.57, 0.0],
"time_from_start": { "sec": 2, "nanosec": 0 }
}
]
}
}'

python3 ~/UR3_ROS2_PICK_AND_PLACE/ur_system_tests/scripts/arm_gripper_loop_controller.py

Estimates grasp poses from the Intel D435 point cloud. Two backends:
| Backend | Method | Dependency |
|---|---|---|
| simple_grasping (primary) | PCL RANSAC → moveit_msgs/Grasp[] | ros-$ROS_DISTRO-simple-grasping |
| numpy centroid (fallback) | Colour HSV filter + centroid + height | built-in |
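The fallback backend's idea can be sketched in a few lines of plain Python (illustrative only — the actual node works on ROS point-cloud messages and uses numpy; `centroid_grasp` is a made-up helper name):

```python
# Sketch of the centroid-fallback idea: after the HSV colour filter,
# average the surviving points and grasp at the top of the object.

def centroid_grasp(points, approach_clearance=0.10):
    """points: list of (x, y, z) tuples that passed the colour filter."""
    if not points:
        return None  # mirrors the node's "no point cloud yet" warning
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    top_z = max(p[2] for p in points)          # object height off the table
    return {
        "position": (cx, cy, top_z),           # grasp pose at the object top
        "pre_grasp": (cx, cy, top_z + approach_clearance),
    }

cube = [(0.4, 0.0, 0.0), (0.5, 0.1, 0.05), (0.45, 0.05, 0.025)]
print(centroid_grasp(cube))
```

The pre-grasp pose hovers `approach_clearance` metres above the grasp so the gripper can descend vertically onto the object.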
ros2 launch ur_grasp grasp_detection.launch.py colour:=red
python3 testing/test_grasp.py --colour red --execute

source install/setup.bash
python3 ur_llm_planner/scripts/robot_gui.py

Features: live camera feed, preset poses, gripper control (Open/Half/Close), per-joint sliders, Pilz PTP execution.
ros2 run ur_moveit_demos custom_zigzag_motion

Wait at least 45 seconds after launching the simulation before running this.
chmod +x ~/UR3_ROS2_PICK_AND_PLACE/ur_mtc_pick_place_demo/scripts/robot.sh
~/UR3_ROS2_PICK_AND_PLACE/ur_mtc_pick_place_demo/scripts/robot.sh

This script launches the Gazebo simulation, MoveIt 2, the planning scene server, and the MTC pick-and-place demo.
This workspace has four main AI-facing pieces: grasping, perception, language planning, and RL policy execution. The commands below are the shortest path to getting each one alive without digging through the repo.
Point-cloud grasp estimation for tabletop objects from the Intel D435 depth stream.
Verified in this workspace:
- package imports successfully after `source install/setup.bash`
- installed executable: `ros2 run ur_grasp grasp_node`
Launch:
source install/setup.bash
ros2 run ur_grasp grasp_node

Trigger one detection:
ros2 service call /ur_grasp/detect std_srvs/srv/Trigger {}

Healthy signs:
- advertises `/ur_grasp/detect`
- subscribes to `/camera_head/depth/color/points`
- publishes `/ur_grasp/grasp_pose`
- publishes `/ur_grasp/grasp_marker` for RViz
- falls back to the built-in numpy centroid detector if `simple_grasping` is not installed
- warns and returns no grasp if a point cloud has not arrived yet
Color-based object detection with optional YOLO and PCL cluster extraction from the Intel D435 camera.
Launch:
source install/setup.bash
ros2 launch ur_perception perception.launch.py

Watch detections:
ros2 topic echo /detected_objects

Run the node directly:
source install/setup.bash
ros2 run ur_perception object_detector_node.py

Verified in this workspace:
- package imports successfully after `source install/setup.bash`
- installed executable: `ros2 run ur_perception object_detector_node.py`
Healthy signs:
- publishes detected objects on `/detected_objects`
- publishes annotated images on `/detection_image`
- publishes collision objects on `/planning_scene`
- waits for `/camera_head/color/image_raw`, `/camera_head/depth/image_rect_raw`, and `/camera_head/camera_info`
- warns and keeps color detection enabled if `use_yolo:=true` is set but `ultralytics` is missing
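The colour-gating idea behind the non-YOLO path can be illustrated with a tiny pure-Python sketch (thresholds are made up for illustration; the real node filters whole OpenCV images, not single pixels):

```python
# Sketch of HSV colour gating for "red": convert RGB to HSV and accept
# pixels whose hue sits near the red band, with enough saturation/value.
import colorsys

def is_red(r, g, b):
    """r, g, b in 0..255. True if the pixel falls in a loose 'red' HSV band."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    red_hue = h < 0.05 or h > 0.95   # hue wraps around at 0/1, so red needs two bands
    return red_hue and s > 0.5 and v > 0.3

print(is_red(200, 30, 30))   # bright red pixel
print(is_red(30, 200, 30))   # green pixel
```

The two-band hue check is the detail that most often trips people up when tuning a red filter, since red straddles the hue wrap-around.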
Natural-language task planning backed by a local Ollama model and connected to perception plus the MoveIt/gripper execution path.
Verified in this workspace:
- package imports successfully after `source install/setup.bash`
- installed executable: `ros2 run ur_llm_planner llm_planner_node.py`
- command topic exists in code at `/llm_planner/command`
- planner converts text into a JSON task list and passes it to `MotionExecutor`
Launch:
source install/setup.bash
ros2 run ur_llm_planner llm_planner_node.py

Or use the launch file:
source install/setup.bash
ros2 launch ur_llm_planner llm_planner.launch.py

Send a text instruction:
ros2 topic pub --once /llm_planner/command std_msgs/msg/String \
"{data: 'pick up the red object and place it to the left of the robot'}"Healthy signs:
- subscribes to `/detected_objects`
- listens on `/llm_planner/command`
- asks Ollama for a JSON task plan
- executes actions like `move_to_named_pose`, `pick`, `place`, `open_gripper`, and `close_gripper`
- warns and returns an empty task list if Ollama is not available at `http://localhost:11434`
- may plan successfully but fail execution if MoveIt or gripper action servers are unavailable
Ollama setup:
ollama serve
ollama pull llama3.2:3b
ros2 launch ur_llm_planner llm_planner.launch.py ollama_model:=llama3.2:3b

This stack trains in MuJoCo and deploys the learned single-arm policy into Gazebo. The main runtime node is `shared_arm_policy_node`.
Run the policy in Gazebo:
# Terminal 1
ros2 launch ur_gazebo ur.gazebo.launch.py
# Terminal 2
ros2 run mujoco_ur_rl_ros2 shared_arm_policy_node \
--ros-args \
-p model_path:=/path/to/best_model.zip \
-p object_x:=0.45 -p object_y:=0.0 -p object_z:=0.045 \
-p drop_x:=0.45 -p drop_y:=0.2 -p drop_z:=0.025

Or launch Gazebo and the policy together:
ros2 launch mujoco_ur_rl_ros2 gazebo_shared_arm_policy.launch.py \
model_path:=/path/to/best_model.zip \
launch_policy:=true

The policy reads `/joint_states` and publishes trajectories to `/arm_controller/joint_trajectory` and `/gripper_controller/joint_trajectory`.
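A minimal sketch of that joint-states-in, trajectory-out bridge, with assumed per-tick scaling (the real node's limits, rates, and ROS message types differ):

```python
# Sketch: map a normalized policy action in [-1, 1] per joint onto the next
# joint-trajectory target, capped by a per-tick step so motion stays smooth.

UR3_JOINTS = ["shoulder_pan_joint", "shoulder_lift_joint", "elbow_joint",
              "wrist_1_joint", "wrist_2_joint", "wrist_3_joint"]
MAX_STEP_RAD = 0.05   # assumed per-tick joint delta cap, radians

def action_to_positions(current, action):
    """current: joint positions from /joint_states; action: values in [-1, 1]."""
    clipped = [max(-1.0, min(1.0, a)) for a in action]  # guard against outliers
    return [q + MAX_STEP_RAD * a for q, a in zip(current, clipped)]

home = [0.0, -1.57, 1.57, 0.0, 1.57, 0.0]
print(action_to_positions(home, [1.0, 0.0, -2.0, 0.0, 0.5, 0.0]))
```

Clipping before scaling keeps a misbehaving policy from commanding large jumps, which matters most right after the sim-to-sim transfer.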
Train a Gazebo-aligned policy:
python3 mujoco_ur_rl_ros2/train_gazebo_single_arm.py \
--timesteps 2000000 \
--n-envs 8 \
--curriculum grasp_focus

Resume training:
python3 mujoco_ur_rl_ros2/train_gazebo_single_arm.py \
--timesteps 2000000 \
--n-envs 8 \
--curriculum grasp_focus \
--resume models/gazebo_single_arm/<run>/best_model.zip

Saved artifacts per run:

- `models/gazebo_single_arm/<run>/best_model.zip`
- `models/gazebo_single_arm/<run>/best_model_replay_buffer.pkl`
- `models/gazebo_single_arm/<run>/ckpt_<steps>_steps.zip`
- `models/gazebo_single_arm/<run>/ckpt_replay_buffer_<steps>_steps.pkl`
Key notes:
- `--curriculum grasp_focus` is the main setting to keep for grasp learning
- checkpoints are written every 100k steps
- resume now works best from `best_model.zip` because a matching `best_model_replay_buffer.pkl` is saved beside it
- `ur_gazebo_single_arm_env.py` is the main transfer environment for Gazebo
- keeping spawned objects inside the UR3 reachable workspace improves transfer stability
Quick run summary:
python3 mujoco_ur_rl_ros2/summarize_single_arm_runs.py
python3 mujoco_ur_rl_ros2/summarize_single_arm_runs.py gazebo_single_arm_20260418_0905 gazebo_single_arm_20260418_0928

ros2 launch ur_gazebo full_demo.launch.py
ros2 launch ur_gazebo full_demo.launch.py use_llm_planner:=true

Pull requests and issues are welcome, especially around simulation stability, transfer learning, and perception-to-action integration.
train_gazebo_single_arm.py trains in MuJoCo, not in Gazebo.
python3 mujoco_ur_rl_ros2/train_gazebo_single_arm.py \
--timesteps 2000000 \
--n-envs 1 \
--curriculum grasp_focus \
--render \
--resume models/gazebo_single_arm/gazebo_single_arm_20260415_1430/best_model.zip

Use `--render` only with `--n-envs 1`.
python3 mujoco_ur_rl_ros2/train_gazebo_single_arm.py \
--timesteps 2000000 \
--n-envs 8 \
--curriculum grasp_focus \
--resume models/gazebo_single_arm/gazebo_single_arm_20260415_1430/best_model.zip

Launch Gazebo and the policy together:
ros2 launch mujoco_ur_rl_ros2 gazebo_shared_arm_policy.launch.py \
model_path:=/home/asimov/UR3_ROS2_PICK_AND_PLACE/models/gazebo_single_arm/gazebo_single_arm_20260415_1430/best_model.zip \
launch_policy:=true

Notes:

- set `launch_policy:=true` if you want the RL node to start automatically
- wait about 55 seconds before expecting motion
- that delay gives Gazebo, `gz_ros2_control`, and the arm/gripper controllers time to settle before commands begin
Or run Gazebo and the policy separately:
ros2 launch ur_gazebo ur.gazebo.launch.py

ros2 run mujoco_ur_rl_ros2 shared_arm_policy_node \
--ros-args \
-p model_path:=/home/asimov/UR3_ROS2_PICK_AND_PLACE/models/gazebo_single_arm/gazebo_single_arm_20260415_1430/best_model.zip \
-p arm_trajectory_topic:=/arm_controller/joint_trajectory \
-p gripper_trajectory_topic:=/gripper_controller/joint_trajectory \
-p object_x:=0.35 -p object_y:=0.0 -p object_z:=0.045 \
-p drop_x:=0.35 -p drop_y:=0.20 -p drop_z:=0.025

The bundled Gazebo flow uses `single_arm_transfer.world`, which matches the single-arm MuJoCo training scene.
ros2 launch mujoco_ur_rl_ros2 dual_view_single_arm.launch.py

Useful variants:
# Gazebo + live MuJoCo viewer
ros2 launch mujoco_ur_rl_ros2 dual_view_single_arm.launch.py \
launch_training:=true \
training_render:=true

# Training only
ros2 launch mujoco_ur_rl_ros2 dual_view_single_arm.launch.py \
launch_gazebo:=false \
launch_training:=true \
training_render:=true

# Gazebo only with a specific checkpoint
ros2 launch mujoco_ur_rl_ros2 dual_view_single_arm.launch.py \
launch_training:=false \
model_path:=/absolute/path/to/best_model.zip

- MuJoCo viewer shows the live RL training environment
- headless mode runs the same trainer without opening the viewer
- Gazebo shows the saved policy on the ROS 2 simulation stack
- `train_gazebo_single_arm.py` is named for Gazebo transfer, but the RL loop itself runs in MuJoCo
- Add a vision-language-action pipeline for task-conditioned robot control.
- Bring back LLM task planning as a stable feature for high-level commands such as pick, place, sort, and multi-step tabletop tasks.
- Connect perception, RL, and language planning more tightly so detected objects can be selected and manipulated from natural-language instructions.
- Improve MuJoCo-to-Gazebo transfer so learned grasping policies behave more consistently on the UR3 with the Robotiq gripper.
