
# ShapeForce: Low-Cost Soft Robotic Wrist for Contact-Rich Manipulation


ShapeForce is a low-cost, plug-and-play soft robotic wrist that converts external forces and torques into measurable deformations, which are then estimated via marker-based pose tracking and converted into force-like signals for contact-rich robotic manipulation. This repository supports both rule-based (search-and-control) policies and imitation learning policies.

## Overview

- **Core Idea:** Like human touch, many contact-rich tasks depend on relative force changes rather than precise force magnitudes. ShapeForce captures contact information through the deformation of its compliant core, eliminating the need for an expensive six-axis force-torque sensor.
- **Hardware:** 3D-printed compliant core (TPU) + rigid connectors (PLA) + wrist-mounted RGB camera + ChArUco marker board (see Fabrication Details)
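As a mental model, the deformation-to-force mapping is a spring law: multiply the tracked 6-DoF deformation by a stiffness matrix. A minimal numpy sketch, in which the matrix `K`, its units, and `force_like_signal` are all illustrative placeholders (the repository calibrates an actual stiffness matrix in `scripts/K_calculate_6d.py`):

```python
import numpy as np

# Hypothetical diagonal 6x6 stiffness matrix; in the repo this would come
# from the calibration in scripts/K_calculate_6d.py.
K = np.diag([8.0, 8.0, 12.0, 0.5, 0.5, 0.3])  # illustrative values

def force_like_signal(rest_pose, current_pose):
    """Map the tracked marker deformation to a 6D force-like signal.

    Poses are [x, y, z, roll, pitch, yaw] in mm/deg, as estimated from
    the ChArUco board by the wrist camera.
    """
    deformation = np.asarray(current_pose, float) - np.asarray(rest_pose, float)
    return K @ deformation  # [Fx, Fy, Fz, Tx, Ty, Tz]

# No contact: zero deformation -> zero signal.
print(force_like_signal([0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]))
# Pressing down compresses the core along z, giving a negative Fz-like value.
print(force_like_signal([0, 0, 0, 0, 0, 0], [0, 0, -0.5, 0, 0, 0]))
```

Because only relative changes matter, the absolute scale of `K` is less important than its consistency: thresholds in the control code are tuned against this signal, not against true newtons.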

## Hardware & Software

- **Robot:** UFactory xArm7 (or any arm compatible with the xArm Python SDK)
- **Camera:** Intel RealSense D435/D435i (wrist-mounted)
- **Teleoperation:** GELLO (for imitation learning data collection)
- **OS & Stack:** Ubuntu 22.04, Python 3.9+, PyTorch 2.x, CUDA 12.x

## Quick Setup

```bash
git clone https://github.com/shapeforce/Deformable-Force-Sensor.git
cd Deformable-Force-Sensor

conda create -n shapeforce python=3.9 && conda activate shapeforce
pip install -r scripts/requirements.txt
pip install xarm-python-sdk pyrealsense2

export PYTHONPATH="${PYTHONPATH}:$(pwd)/third_party:$(pwd)"
```

Set the robot IP in `third_party/3rdparty/xarm7/xarm7_interface/__init__.py`.

**Calibration:** Run camera intrinsic calibration (`aruco_detect/capture_images.py` + `calibration.py`), then hand-in-eye calibration (`hand-eye-calibration/hand_in_eye/`). Update `exst_name` in `third_party/utils_repo/peg_insertion/return_transformation.py` to point to your calibration result directory.
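Once both calibrations are done, the marker pose detected in the camera frame is chained through the hand-in-eye result to express the deformation in the wrist frame. A minimal sketch of that chaining with made-up transform values (the repo's `return_transformation.py` implements the real version using your calibration output):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hand-in-eye result: camera pose in the end-effector frame (illustrative values).
T_ee_cam = make_T(np.eye(3), [0.0, 0.05, 0.03])

# Marker pose in the camera frame, as estimated from the ChArUco board.
T_cam_marker = make_T(np.eye(3), [0.0, 0.0, 0.10])

# Chain the transforms to get the marker pose in the end-effector frame.
T_ee_marker = T_ee_cam @ T_cam_marker
print(T_ee_marker[:3, 3])  # -> [0.   0.05 0.13]
```

Comparing `T_ee_marker` against its rest value over time yields the deformation signal the policies consume.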


## I. Rule-Based (Search-and-Control) Policies

Use ShapeForce's force-like signals to drive the search and force-control policies.
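The search phase can be pictured as a grid scan that watches the force-like Fz channel for the drop that indicates the peg has slipped into the hole. A simplified, hardware-free sketch; the real logic lives in `force_insert.py`, and `read_force` / `move_to` here are hypothetical stand-ins for the robot and ShapeForce interfaces:

```python
CONTACT_FZ = -1.0  # same role as the `force_soft[2] < -1.0` contact check

def grid_search(read_force, move_to, deltas=(-3, -1.5, 0, 1.5, 3)):
    """Scan a square grid of (dx, dy) offsets in mm; return the first
    offset where Fz no longer indicates contact with the base surface."""
    for dx in deltas:
        for dy in deltas:
            move_to(dx, dy)
            fz = read_force()[2]
            if fz > CONTACT_FZ:   # contact lost: likely over the hole
                return dx, dy
    return None                   # hole not found on this grid

# Toy usage: pretend the hole is at offset (1.5, -1.5).
pose = [None, None]
def fake_move(dx, dy):
    pose[:] = [dx, dy]
def fake_force():
    return [0.0, 0.0, -0.2] if pose == [1.5, -1.5] else [0.0, 0.0, -2.0]

print(grid_search(fake_force, fake_move))  # -> (1.5, -1.5)
```

The relative-force design matters here: the loop only needs Fz to cross a tuned threshold, not to read out a calibrated force.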

### Tasks & Scripts

| Task | Script |
| --- | --- |
| Peg Insertion | `scripts/search_and_control_policies/peg_insertion.py` |
| USB Insertion | `scripts/search_and_control_policies/usb_insertion.py` |
| Bottle Cap Tightening | `scripts/search_and_control_policies/screw.py` |
| Whiteboard Wiping | `scripts/search_and_control_policies/clean_board.py` |
| Toy Desk Assembly | `scripts/search_and_control_policies/install_desk.py` |
| Maze Exploration | `scripts/search_and_control_policies/getoutmaze.py` |

### How to Run

```bash
export PYTHONPATH="$(pwd)/third_party:$(pwd)"
cd scripts/search_and_control_policies
python peg_insertion.py   # or clean_board.py, screw.py, etc.
```

### Tunable Parameters (Rule-Based)

| Parameter | Location | Description |
| --- | --- | --- |
| `init_pose`, `direct_pose`, `end_pose` | each task script (e.g. `peg_insertion.py`) | Start, approach, and retract poses (x, y, z, roll, pitch, yaw in mm/deg); must be tuned for your workspace |
| `delta` | `peg_insertion.py`, `usb_insertion.py` | Grid-search step sizes (mm) for hole finding, e.g. `[-3, -1.5, 1.5, 3]` |
| `step_x_size`, `step_y_size`, `step_z_size` | `third_party/utils_repo/peg_insertion/force_insert.py` | Grid-search step sizes (mm) in `_phase_grid_search_forpeg`, `_phase_clean_board`, etc. |
| `force_soft[2] < -1.0` (peg) | `force_insert.py`, line ~1440 | Fz threshold for detecting contact with the base surface |
| `abs(force[2]) > 0.5` (wipe) | `force_insert.py`, line ~1932 | Fz threshold for contact detection while wiping |
| `target_force`, `kp`, `max_adjust` | `force_insert.py`, `ForceController` (~line 800) | PID for wiping: target Fz (e.g. -0.9), gain `kp`, max z-adjustment |
| `FORCE_THRESHOLD_COLLISION`, `FORCE_THRESHOLD_SAFE` | `force_insert.py`, `_phase_get_out_maze` | Collision (1.0) and safe-retreat (0.4) thresholds for the maze |
| `delta_for_each_step` | `getoutmaze.py` | Step size (mm) for maze exploration |

Scripts with the `_gt` suffix use a commercial force-torque sensor as a ground-truth baseline.
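The wiping controller's `target_force` / `kp` / `max_adjust` parameters amount to a clamped proportional law on the Fz channel. A hardware-free sketch using the example values from the table above (the sign convention and gain are illustrative, not read from `ForceController`):

```python
def z_adjustment(fz, target_force=-0.9, kp=0.5, max_adjust=0.3):
    """Proportional z-correction (mm) to hold a target contact force.

    Returns a negative value (press down) when measured contact is weaker
    than the target, positive (back off) when it is stronger, clamped to
    +/- max_adjust so a noisy reading cannot command a large jump.
    """
    error = target_force - fz          # negative -> press harder
    adjust = kp * error
    return max(-max_adjust, min(max_adjust, adjust))

print(z_adjustment(0.0))    # no contact yet: press down, clamped to -0.3
print(z_adjustment(-0.9))   # at target: 0.0
print(z_adjustment(-2.0))   # pressing too hard: back off, clamped to +0.3
```

The clamp (`max_adjust`) is what keeps the compliant wrist stable: the soft core already absorbs small errors, so the controller only needs slow, bounded corrections.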


## II. Imitation Learning Policies

Use ACT or Diffusion Policy with ShapeForce force-like signals, images, and joint states.

### 1. Data Collection (GELLO)

```bash
cd gello_software
git submodule init && git submodule update
pip install -r requirements.txt && pip install -e . && pip install -e third_party/DynamixelSDK/python

# Launch robot and GELLO
python experiments/launch_nodes.py --robot=<your_robot>
python experiments/run_env.py --agent=gello --use-save-interface
```

### 2. Convert to HDF5

```bash
python gello/data_utils/demo_to_gdict.py --source-dir=<path_to_collected_pkls>
```

Output goes to `_conv/multiview/train/none/` (or similar). Note the path for `dataset_dir` in `constants.py`.
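Before training, it is worth sanity-checking a converted episode with `h5py`. The key names below (`observations/images/wrist`, `observations/qpos`, `action`) follow the ACT-style layout such pipelines commonly use; treat them as assumptions and adjust to whatever `demo_to_gdict.py` actually writes:

```python
import h5py
import numpy as np

# Write a tiny fake episode in the assumed ACT-style layout, then read it
# back the way a dataloader would. Replace the path and keys with the real
# output of demo_to_gdict.py.
path = "episode_0.hdf5"
with h5py.File(path, "w") as f:
    f.create_dataset("observations/images/wrist",
                     data=np.zeros((10, 480, 640, 3), dtype=np.uint8))
    f.create_dataset("observations/qpos", data=np.zeros((10, 7)))
    f.create_dataset("action", data=np.zeros((10, 7)))

with h5py.File(path, "r") as f:
    for name in ("observations/images/wrist", "observations/qpos", "action"):
        print(name, f[name].shape)
```

Checking that every episode's first dimension (timesteps) agrees across keys catches most conversion problems early.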

### 3. Configure Task & Dataset

Edit `saywhen_learning_base/ModelTrain/constants.py`:

```python
TASK_CONFIGS = {
    'train_gello': {
        'dataset_dir': '/path/to/your/_conv/multiview/train/none/',  # path from demo_to_gdict
        'episode_len': 4000,
        'train_ratio': 0.95,
        'camera_names': ['wrist']
    },
    # ...
}
```

### 4. Train

```bash
cd saywhen_learning_base/ModelTrain

# Fix sys.path in model_train_act.py / model_train_diff.py to point to robomimic,
# e.g. sys.path.append('/path/to/Deformable-Force-Sensor/saywhen_learning_base/robomimic-r2d2/robomimic')

pip install -r ../robomimic-r2d2/requirements.txt
pip install -e ../robomimic-r2d2

# ACT
python model_train_act.py --ckpt_dir ./ckpt/your_task --task_name train_gello --batch_size 64 --num_steps 9000

# Diffusion Policy
python model_train_diff.py --ckpt_dir ./ckpt/your_task_diff --task_name train_gello --num_steps 30000
```

### 5. Inference

```bash
# Edit ckpt_dir and model_name in the inference script
python model_inference_test_gello_act.py    # ACT
python model_inference_test_gello_diff.py   # Diffusion Policy
```

### Tunable Parameters (Imitation Learning)

| Parameter | Location | Description |
| --- | --- | --- |
| `dataset_dir` | `constants.py` | Path to the HDF5 dataset from `demo_to_gdict` |
| `episode_len` | `constants.py` | Max episode length (frames) |
| `camera_names` | `constants.py` | Camera keys, e.g. `['wrist']` |
| `--ckpt_dir` | `model_train_act.py` / `model_train_diff.py` | Checkpoint save directory |
| `--task_name` | training scripts | Must match a key in `TASK_CONFIGS` |
| `--batch_size` | training scripts | Default 64 |
| `--num_steps` | training scripts | ACT: 9000, Diffusion: 30000 |
| `--chunk_size` | training scripts | ACT: 45, Diffusion: 48 |
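`--chunk_size` sets how many future actions the policy predicts per query; at inference, ACT-style pipelines typically blend the overlapping chunks with exponentially weighted temporal ensembling. A numpy sketch of that blending, where the weighting constant and data layout are illustrative rather than taken from this repo:

```python
import numpy as np

def temporal_ensemble(chunks, t, k=0.01):
    """Blend every chunk's prediction for timestep t with exponential weights.

    chunks: dict mapping the timestep a chunk was predicted at -> array of
    shape (chunk_size, action_dim). Older predictions receive weight
    exp(-k * age), in the spirit of ACT's temporal ensembling.
    """
    preds, weights = [], []
    for start, chunk in chunks.items():
        offset = t - start
        if 0 <= offset < len(chunk):          # this chunk covers timestep t
            preds.append(chunk[offset])
            weights.append(np.exp(-k * offset))
    weights = np.array(weights) / np.sum(weights)
    return np.average(preds, axis=0, weights=weights)

# Two overlapping chunks (chunk_size=3, action_dim=1) both covering t=2.
chunks = {0: np.array([[0.], [1.], [2.]]),
          2: np.array([[4.], [5.], [6.]])}
print(temporal_ensemble(chunks, t=2))  # a weighted mix of 2.0 and 4.0
```

Larger chunk sizes smooth the executed trajectory but react more slowly to new observations, which is why the ACT and Diffusion defaults here differ slightly.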

## Project Structure

```text
├── aruco_detect/                      # ChArUco detection, pose estimation, calibration
├── hand-eye-calibration/              # Hand-in-eye calibration
├── scripts/
│   ├── calculate_transformation.py    # Real-time deformation visualization
│   ├── K_calculate_6d.py              # Stiffness matrix calibration (optional)
│   └── search_and_control_policies/   # Rule-based task scripts
├── third_party/utils_repo/peg_insertion/
│   ├── force_insert.py                # Core search & force control logic
│   └── return_transformation.py       # ShapeForce deformation signal
├── gello_software/                    # GELLO teleoperation & data collection
└── saywhen_learning_base/ModelTrain/  # ACT & Diffusion Policy training
```

## Citation

```bibtex
@article{zhu2025shapeforce,
  title={ShapeForce: Low-Cost Soft Robotic Wrist for Contact-Rich Manipulation},
  author={Zhu, Jinxuan and Yan, Zihao and Xiao, Yangyu and Guo, Jingxiang and Tie, Chenrui and Cao, Xinyi and Zheng, Yuhang and Shao, Lin},
  journal={arXiv preprint arXiv:2511.19955},
  year={2025}
}
```


## About

[ICRA 2026] ShapeForce: Low-Cost Soft Robotic Wrist for Contact-Rich Manipulation
