Assistive Robots: Grasping, Skeleton Detection and Motion Generation

✔ Introduction

  • Many countries face the dual challenge of an aging population driven by low birth rates and increased life expectancy, leading to escalating demand for elderly care and disability services.
  • However, current assistive robots are limited to performing specific tasks and often struggle to adapt to different objects and to the diverse shapes of the human body.
  • To address this limitation, we implement a skill-based approach whose skills can be reused when learning novel tasks and can adapt to diverse environments.

✔ Project Purpose

In this project, we aim to accomplish three main goals.

  1. Detect human skeletons to provide a more personalized assistive service.
  2. Enable robots to assist the elderly effectively with natural movements generated from learned movement representations.
  3. Enhance robustness by enabling detection of various objects from natural-language instructions.

✔ Project Outline

(demo video)

(project outline image)

✔ Demonstration Execution

  • If you encounter a permissions error, prepend sudo to the command.
  • If you are using Windows (e.g., running the container from WSL with an X server on the Windows host), set the display first:
export DISPLAY={YOUR_IP}:0 # you can find your IP with "ipconfig" in cmd
export LIBGL_ALWAYS_INDIRECT=

(1) Docker Execution

  1. Build the Docker image
$ cd docker_folder
$ docker build --tag robot_ai_project:1.0 .
  2. Run the image as a container
$ docker run -it -v {clone_path}:/workspace --gpus all --env DISPLAY=unix$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --name robot_ai_project_container robot_ai_project:1.0 /bin/bash
  3. If the container status is Exited (0), restart and attach to it
$ docker start robot_ai_project_container

$ docker exec -it robot_ai_project_container /bin/bash

If you encounter the following GPU error when running docker run:
  • docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
install the NVIDIA Container Toolkit and restart Docker:
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit

$ sudo systemctl restart docker
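
After restarting Docker, a common way to verify that containers can see the GPU is to run nvidia-smi inside a CUDA container (the image tag here is only an example; any CUDA base image available to you works):
$ docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi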

(2) GUI Setting

  1. Allow Docker containers to access the X server - on the local host (Ubuntu)
$ xhost +local:docker
  2. Set the display - inside the container (when your X server runs on Windows)
export DISPLAY={your_local_ip}:0

(3) Simulation Execution

$ cd src # move to the src folder
$ python main.py --style {shaking,circular,...} --instruction {grasp target object} --goal_point {head,right_arm,...}
# run the simulation with the environment parameters described below; a full example invocation follows the parameter lists
  • instruction examples

    • 'Give me a meat can' => meat can
    • 'Give me something to cut' => scissors
    • 'I want to eat fruit' => banana
    • 'Pour the sauce' => mustard bottle
    • (enter your own instruction) => inferred target object
  • goal_point candidates

    • head
    • left_arm
    • right_arm
    • left_leg
    • right_leg
    • left_hand
    • right_hand
  • style candidates

    • circular
    • linear
    • massage
    • shaking
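
Putting it together, an example invocation (the values are illustrative picks from the candidate lists above):
$ python main.py --style shaking --instruction "Give me a meat can" --goal_point right_arm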

(4) Movement Primitives (optional)

  1. Generate movement in simulation
  • You can generate a motion by dragging the robot while holding down the left mouse button.
  • The motion is recorded over 1000 timesteps and saved to '/workspace/data/traj_data/{style}/{style}.csv' (a quick sanity check of the recording is sketched below).
$ python movement_primitive/path_generate.py --style {style}
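
The recorded CSV can be inspected before training. This is a minimal sketch assuming a plain numeric CSV with one row per timestep; the actual column layout depends on path_generate.py:

import numpy as np

# Load a recorded trajectory; assumed layout: one row per timestep,
# one column per recorded dimension. Add skiprows=1 if the file has a header row.
traj = np.loadtxt('/workspace/data/traj_data/shaking/shaking.csv', delimiter=',')
print(traj.shape)  # roughly (1000, n_dims) given the 1000-timestep recording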
  2. Train a VMP with the trajectory data
  • You can use more than one trajectory file.
  • The trained weights are saved to '/workspace/data/weight/{style}'.
$ python movement_primitive/train_vmp.py --style {style}
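
For background, a VMP (Via-point Movement Primitive) represents a motion as an elementary trajectory plus an RBF-weighted shape modulation learned from demonstrations. The following is a minimal 1-D illustration of that idea only; it is not the repository's train_vmp.py implementation, and the function names and RBF parameterization are assumptions:

import numpy as np

def rbf_features(x, n_basis=20, width=0.01):
    # Normalized Gaussian radial basis features over the canonical phase x in [0, 1].
    centers = np.linspace(0, 1, n_basis)
    psi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width))
    return psi / psi.sum(axis=1, keepdims=True)

T = 1000                                    # matches the 1000-timestep recordings
x = np.linspace(0, 1, T)                    # canonical phase
demo = np.sin(2 * np.pi * x) * np.exp(-x)   # toy stand-in for one CSV column

h = demo[0] + (demo[-1] - demo[0]) * x      # elementary trajectory: start-to-goal line

Phi = rbf_features(x)                       # learn weights for the residual shape f(x)
w, *_ = np.linalg.lstsq(Phi, demo - h, rcond=None)

y = h + Phi @ w                             # reproduction: y(x) = h(x) + f(x)
print('max reconstruction error:', np.abs(y - demo).max())

Changing the start or goal point changes only the elementary trajectory h while the learned shape modulation is preserved, which is what lets one recorded style generalize to new targets.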

✔ Folder Structure

workspace/
  |-- data/ # trajectory data and VMP weights
  |-- docker_folder/ # Dockerfile
  |-- docs/
  |-- src/ # skeleton detection, grasping, movement primitive, simulation, and util packages
      |-- grasping/
      |-- movement_primitive/
      |-- simulation/
      |-- skeleton/
      |-- main.py # demo entry point

✔ Team Members

Kim, Seonho

Cha, Seonghun

Choi, Daewon
