PCF-Grasp

This repo will be updated soon!

PCF-Grasp: Converting Point Completion to Geometry Feature to Enhance 6-DoF Grasp

paper, video-bilibili, video-youtube

Citation

If you find our work useful, please cite our paper.

Download Model and Dataset

Prepare code and files

#only for test
git clone https://github.com/ChengYaofeng/PCF-Grasp.git
cd PCF-Grasp

└── PCF-Grasp
    └── pcfgrasp_method

#for train
git clone https://github.com/ChengYaofeng/PCF-Grasp.git
cd PCF-Grasp
mkdir acronym

└── PCF-Grasp
    ├── acronym
    └── pcfgrasp_method

Installation

Create the conda environment (PyTorch 1.8+, CUDA 11.1):

conda env create -f pcf.yaml

If you want to pretrain, use the following to set up the Chamfer Distance (CD) extension.

conda activate pcf
cd extensions/chamfer_distance
python setup.py install
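
For intuition, the Chamfer Distance behind this extension can be sketched in plain PyTorch as below. This naive version is only a reference; the compiled CUDA extension above is what the pretraining should actually use for speed.

# Naive Chamfer Distance sketch (reference only, not the compiled CUDA version)
import torch

def chamfer_distance(pred, gt):
    # pred: (B, N, 3) predicted points, gt: (B, M, 3) ground-truth points
    dist = torch.cdist(pred, gt)                         # (B, N, M) pairwise distances
    cd_pred_to_gt = dist.min(dim=2).values.mean(dim=1)   # nearest gt point for each pred point
    cd_gt_to_pred = dist.min(dim=1).values.mean(dim=1)   # nearest pred point for each gt point
    return (cd_pred_to_gt + cd_gt_to_pred).mean()

loss = chamfer_distance(torch.rand(2, 1024, 3), torch.rand(2, 2048, 3))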

Model

Download our trained models from Baidu cloud disk (extraction code: ).

Dataset

Our dataset follows contact-graspnet, but we place only one object in each scene.

The acronym folder should be created following this. After this step, the acronym folder should look like:

└── acronym
    ├── grasps
    └── meshes

Then, you can follow contact-graspnet to create new scenes, or just download our scenes here. Extract them into acronym as:

└── acronym
    ├── grasps
    ├── meshes
    ├── scene_contacts
    └── splits

Running

Train

We recommend a batch_size of at least 5, because the virtual camera sometimes fails to capture an object when the object and camera are placed randomly.

  • Pointclouds Completion
cd /PCF-Grasp/pcfgrasp_method
bash ./scripts/pretrain.sh
  • 6-DoF Grasp
cd /PCF-Grasp/pcfgrasp_method
bash ./scripts/train.sh

Inference

  • grasp inference
cd /PCF-Grasp/pcfgrasp_method
bash ./scripts/inference.sh
  • point completion inference
cd /PCF-Grasp/pcfgrasp_method
bash ./scripts/pre_inference.sh
  • real world inference

We use a RealSense D435 camera in our code. If you have the same camera and want to test in real-world scenes, you can use our code directly. Download detectron2 first:

# download detectron2
cd /PCF-Grasp/pcfgrasp_method
git clone https://github.com/facebookresearch/detectron2.git
python -m pip install -e detectron2

# run code
bash ./scripts/real_world_inference.sh
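
For reference, grabbing aligned color and depth frames from a D435 with pyrealsense2 looks roughly like the sketch below; the stream resolutions are illustrative, and the repo's script may configure the camera differently.

import numpy as np
import pyrealsense2 as rs

# start the D435 with depth + color streams (resolutions are illustrative)
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)   # align depth to the color frame

try:
    frames = align.process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())   # (H, W) uint16
    color = np.asanyarray(frames.get_color_frame().get_data())   # (H, W, 3) uint8
finally:
    pipeline.stop()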

What's more, if you want to test on a robot, you can create a msg file named objects_grasp_pose.msg with the following content. It will publish a rostopic named '/grasp_pose'. You can use grasp_pose = rospy.wait_for_message('/grasp_pose', objects_grasp_pose) to receive the grasp pose.

int32[] obj_index
geometry_msgs/Pose[]  grasp_pose
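
A minimal receiver sketch is shown below; the package name that holds the message is an assumption and should match wherever you put objects_grasp_pose.msg.

#!/usr/bin/env python
# Minimal sketch: wait for one grasp-pose message on /grasp_pose
import rospy
from your_package.msg import objects_grasp_pose   # 'your_package' is a placeholder for the package holding the .msg

rospy.init_node('grasp_pose_listener')
msg = rospy.wait_for_message('/grasp_pose', objects_grasp_pose)
for idx, pose in zip(msg.obj_index, msg.grasp_pose):
    rospy.loginfo('object %d: position (%.3f, %.3f, %.3f)',
                  idx, pose.position.x, pose.position.y, pose.position.z)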

License

MIT-LICENSE
