- Many countries face the dual challenge of an aging population due to low birth rates and increased life expectancy, leading to escalating demands for elderly care and disability services.
- However, current assistive robots are limited to performing specific tasks and often struggle to adapt to different objects and to handle the diverse shapes of the human body.
- To address this limitation, we implement a skill-based approach whose skills can be reused when learning novel tasks and can adapt to diverse environments.
- We seek to detect human skeletons to provide a more personalized assistive service.
- We aim to enable robots to assist the elderly effectively through natural movements and movement representations.
- We strive to enhance robustness by enabling the detection of various objects through natural language.
If you encounter a permission error, prepend sudo to the command.
If you are using Windows, you can refer to the following:
- WSL2 install: https://gaesae.com/161#google_vignette
- GUI in Windows: https://bmind305.tistory.com/110
- Run these commands in the container:
export DISPLAY={YOUR_IP}:0 # you can find your IP by running "ipconfig" in cmd
export LIBGL_ALWAYS_INDIRECT=
- Build the Docker image
$ cd docker_folder
$ docker build --tag robot_ai_project:1.0 .
- Run the Docker container
$ docker run -it -v {clone_path}:/workspace --gpus all --env DISPLAY=unix$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --name robot_ai_project_container robot_ai_project:1.0 /bin/bash
- If the container status is Exited (0), restart and attach to it:
$ docker start robot_ai_project_container
$ docker exec -it robot_ai_project_container /bin/bash
If you encounter a GPU error while running docker run
(docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].),
install the NVIDIA Container Toolkit as follows (a quick GPU check appears after these commands):
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
&& curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker
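- (Optional) Check that Docker can now see the GPU. This is only a rough check and assumes the robot_ai_project:1.0 image is built on a CUDA base image, so that nvidia-smi gets injected by the toolkit:
$ docker run --rm --gpus all robot_ai_project:1.0 nvidia-smi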
- X host execution - on the local machine (Ubuntu)
$ xhost +local:docker
- Display setting - in the container (Windows)
- First, you can refer to this website: https://bmind305.tistory.com/110
- Second, set the display as below in the container:
export DISPLAY={your_local_ip}:0
$ cd src # Move to src folder
$ python main.py --style {shaking,circular,...} --instruction {grasp target object} --goal_point {head,right_arm,...}
# Run the simulation through Python with the simulation environment parameters (an example invocation follows the candidate lists below)
- instruction examples (a toy mapping sketch follows this list)
- 'Give me a meat can' => meat can
- 'Give me something to cut' => scissors
- 'I want to eat fruit' => banana
- 'Pour the sauce' => mustard bottle
- Enter your own instruction => detected target object
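- The mapping from a free-form instruction to a target object is performed by the grasping module. The snippet below is only a toy illustration of the idea using keyword matching; the names and the matching strategy are assumptions, not the actual implementation.

# Toy sketch of instruction -> target object mapping (hypothetical; the real module may use a learned model)
OBJECT_KEYWORDS = {
    "meat can": ["meat", "can"],
    "scissors": ["cut", "scissors"],
    "banana": ["fruit", "banana"],
    "mustard bottle": ["sauce", "mustard"],
}

def guess_target_object(instruction):
    # Score each candidate object by how many of its keywords appear in the instruction
    words = instruction.lower()
    scores = {obj: sum(kw in words for kw in kws) for obj, kws in OBJECT_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(guess_target_object("Give me something to cut"))  # -> scissors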
- goal_point candidates
- head
- left_arm
- right_arm
- left_leg
- right_leg
- left_hand
- right_hand
- style candidates
- circular
- linear
- massage
- shaking
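- Example invocation using values from the lists above (adjust the options to your scenario):
$ python main.py --style shaking --instruction "Give me a meat can" --goal_point right_arm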
- Movement generation with the simulation
- You can generate motion by dragging the robot while holding down the left mouse button.
- The motion is then recorded over 1000 timesteps and saved to the file '/workspace/data/traj_data/{style}/{style}.csv' (a loading sketch follows the command below).
$ python movement_primitive/path_generate.py --style {style}
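- The exact column layout of the CSV depends on the simulator; as a rough sketch (assuming a plain numeric CSV with no header and one row per timestep), you can inspect a recorded trajectory like this:

import numpy as np

# Load a recorded trajectory; the path follows the convention above, the column layout is an assumption
traj = np.loadtxt("/workspace/data/traj_data/shaking/shaking.csv", delimiter=",")
print(traj.shape)  # roughly (1000, n_dims): 1000 timesteps, one column per recorded dimension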
- Train VMP with trajectory data
- You can use more than one trajectory file (a gathering sketch follows the command below).
- The trained weights are saved to '/workspace/data/weight/{style}'.
$ python movement_primitive/train_vmp.py --style {style}
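- Since training can use more than one trajectory, a simple way to gather all recordings for a style (a sketch, assuming they are stored as CSV files in the style folder described above) is:

import glob
import numpy as np

style = "shaking"
# Collect every recorded trajectory for the chosen style before training
files = sorted(glob.glob("/workspace/data/traj_data/" + style + "/*.csv"))
trajectories = [np.loadtxt(f, delimiter=",") for f in files]
print(len(trajectories), "trajectories loaded for style", style)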
workspace/
|-- data/ # Trajectory data and VMP weights
|-- docker_folder/ # Dockerfile
|-- docs/
|-- src/ # face detection, grasping, movement primitive, simulation, and util packages
|-- grasping/
|-- movement_primitive/
|-- simulation/
|-- skeleton/
|-- main.py # entry point for running the demo