This repo will be updated soon!
Paper | Video (Bilibili) | Video (YouTube)
If you find our work useful, please cite.
# only for testing
git clone https://github.com/ChengYaofeng/PCF-Grasp.git
cd PCF-Grasp
└── PCF-Grasp
    └── pcfgrasp_method
# for training
git clone https://github.com/ChengYaofeng/PCF-Grasp.git
cd PCF-Grasp
mkdir acronym
└── PCF-Grasp
    ├── acronym
    └── pcfgrasp_method
Create the conda environment. PyTorch 1.8+, CUDA 11.1.
conda env create -f pcf.yaml
If you want to pretrain, use the following to set up the Chamfer Distance (CD) extension.
conda activate pcf
cd extensions/chamfer_distance
python setup.py install
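This extension builds the Chamfer Distance loss used when pretraining the completion network. For reference, here is a naive pure-Python version of the symmetric squared Chamfer distance between two point sets (the CUDA extension's exact normalization may differ):

```python
def chamfer_distance(a, b):
    """Naive symmetric (squared) Chamfer distance between point sets a and b.

    For each point in one set, find the squared distance to its nearest
    neighbor in the other set; average per set and sum both directions.
    """
    def sq(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    d_ab = sum(min(sq(p, q) for q in b) for p in a) / len(a)
    d_ba = sum(min(sq(q, p) for p in a) for q in b) / len(b)
    return d_ab + d_ba

print(chamfer_distance([(0, 0, 0)], [(0, 0, 1)]))  # → 2.0
```

The CUDA extension exists because this nearest-neighbor search is O(N·M) and must run on large point clouds every training step.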
Download our trained models from the Baidu cloud disk, extraction code:.
Our dataset follows contact-graspnet, but we place only one object in each scene.
Create the acronym folder following the steps above. After this step, the acronym folder should look like:
└── acronym
    ├── grasps
    └── meshes
Then, you can follow contact-graspnet to create new scenes, or just download our scenes here. Extract them into acronym as:
└── acronym
    ├── grasps
    ├── meshes
    ├── scene_contacts
    └── splits
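If you prefer to create the empty layout up front before extracting the downloads into it, the tree above can be made in one command (run from the PCF-Grasp repo root):

```shell
# Create the expected acronym directory layout in one step.
mkdir -p acronym/grasps acronym/meshes acronym/scene_contacts acronym/splits
ls acronym
```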
We recommend a batch_size of at least 5, because the object and camera are placed randomly, so the virtual camera sometimes fails to capture the object at all.
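The failure mode above can be sketched as a rejection-sampling loop; `render_scene` below is a hypothetical stand-in for the actual virtual-camera renderer, not part of the repo:

```python
import random

def render_scene(rng):
    # Hypothetical stand-in for the virtual-camera render: returns the
    # number of object points captured (0 means the camera missed the object).
    return rng.choice([0, 120, 340, 512])

def sample_valid_scene(rng, max_tries=20):
    # Re-sample random object/camera placements until the camera actually
    # captures the object. With a larger batch, at least some views in each
    # batch are informative even when individual renders come up empty.
    for _ in range(max_tries):
        n_points = render_scene(rng)
        if n_points > 0:
            return n_points
    raise RuntimeError("no valid view found after %d tries" % max_tries)

print(sample_valid_scene(random.Random(0)))
```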
- Point Cloud Completion
cd /PCF-Grasp/pcfgrasp_method
bash ./scripts/pretrain.sh
- 6-DoF Grasp
cd /PCF-Grasp/pcfgrasp_method
bash ./scripts/train.sh
- Grasp Inference
cd /PCF-Grasp/pcfgrasp_method
bash ./scripts/inference.sh
- Point Completion Inference
cd /PCF-Grasp/pcfgrasp_method
bash ./scripts/pre_inference.sh
- Real-World Inference
We use a RealSense D435 camera in our code. If you have the same camera and want to test in real-world scenes, you can use our code directly. First, download detectron2.
# download detectron2
cd /PCF-Grasp/pcfgrasp_method
git clone https://github.com/facebookresearch/detectron2.git
python -m pip install -e detectron2
# run code
bash ./scripts/real_world_inference.sh
What's more, if you want to test on a robot, you can create a msg file named objects_grasp_pose.msg with the following content. The real-world inference code publishes a rostopic named '/grasp_pose'; you can use grasp_pose = rospy.wait_for_message('/grasp_pose', objects_grasp_pose)
to receive the grasp poses.
int32[] obj_index
geometry_msgs/Pose[] grasp_pose
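Each received geometry_msgs/Pose carries a position and an orientation quaternion, which most robot controllers want as a 4×4 homogeneous transform. Here is a minimal pure-Python sketch of that conversion; the Pose is mocked as a dict (with the standard position/orientation field names) so it runs outside a ROS environment:

```python
def pose_to_matrix(pose):
    """Convert a geometry_msgs/Pose-like dict to a 4x4 homogeneous transform."""
    x, y, z, w = (pose["orientation"][k] for k in ("x", "y", "z", "w"))
    # Rotation matrix from a unit quaternion (standard formula).
    r = [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ]
    p = pose["position"]
    return [
        r[0] + [p["x"]],
        r[1] + [p["y"]],
        r[2] + [p["z"]],
        [0.0, 0.0, 0.0, 1.0],
    ]

# Identity orientation, translated 0.5 m along x.
pose = {"position": {"x": 0.5, "y": 0.0, "z": 0.0},
        "orientation": {"x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0}}
print(pose_to_matrix(pose))
```

In a real ROS node you would index into the message as msg.grasp_pose[i] and read msg.obj_index[i] to know which detected object each grasp belongs to.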
MIT License