lintglitch/vr-assembly

VR-assisted training for manufacturing tasks; learns by imitation.

Setup

Tested with CUDA 10.1, TensorFlow 2.3.1, and Unity3D 2019.4.0f1.

Tested on the Oculus Rift.

JSON Parameters

Processing Settings

  • length - length of each sequence, i.e. the number of frames fed into the model at once
  • action_classes - number of possible actions
  • ws_stepsize - step size for window slicing
  • values_per_tracked_object - number of values for each object in a CSV row (excluding the name)
  • csv_extra_values - additional values in the CSV, aside from the object information
  • csv_name_index - index of the name column in the CSV file
  • csv_y_index - index of the y-value column in the CSV file
  • stride - parses only every N-th line of a CSV; set to 1 to disable
  • normalization_enabled - normalizes the CSV values on a per-column basis
  • the normalization min and max values are set per hand for each x, y, z coordinate
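
As an illustration, a processing settings file combining these keys might look like the sketch below. The values are placeholders, not recommended defaults, and the per-hand normalization bounds are omitted since their exact key names depend on the scripts. Note how length and ws_stepsize interact: with length 60 and ws_stepsize 10, window slicing would turn a 120-frame recording into 7 overlapping sequences.

{
  "length": 60,
  "action_classes": 5,
  "ws_stepsize": 10,
  "values_per_tracked_object": 7,
  "csv_extra_values": 2,
  "csv_name_index": 0,
  "csv_y_index": 1,
  "stride": 1,
  "normalization_enabled": true
}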

Augmentation Settings

  • use_original_csv - if true, the input CSV files are kept in the output data (useful for observing only the augmentations)
Gaussian Noise
  • gaussian_noise_enabled - enables the use of Gaussian noise
  • gaussian_std_deviation - standard deviation of the Gaussian distribution
  • gaussian_multiplier - specifies how many noised recordings are created per input recording and per recording produced by the other enabled augmentations
Window Warping
  • window_warping_enabled - enables the use of window warping (iterates repeatedly through the input recordings, creating one new recording per input recording per pass)
  • window_warping_amount - specifies how many recordings are created in total
  • window_warping_compression_limit - specifies the maximum compression factor; values in the range [0.5, 1.0)
  • window_warping_extension_limit - specifies the maximum extension factor; values in the range (1.0, 2.0]
Random Transformation
  • random_transformation_enabled - enables the use of random transformations (iterates repeatedly through the input recordings, creating one new recording per input recording per pass)
  • random_transformation_amount_synthetic_data - specifies how many recordings are created in total
  • random_transformation_x_translation_limit - specifies the maximum x translation
  • random_transformation_y_translation_limit - specifies the maximum y translation
  • random_transformation_z_translation_limit - specifies the maximum z translation
  • random_transformation_x_rotation_limit_degrees - specifies the maximum x rotation in degrees
  • random_transformation_y_rotation_limit_degrees - specifies the maximum y rotation in degrees
  • random_transformation_z_rotation_limit_degrees - specifies the maximum z rotation in degrees
Path Joining
  • path_joining_enabled - enables the use of path joining (iterates repeatedly through the input recordings, creating one new recording per input recording per pass)
  • path_joining_amount - specifies how many recordings are created in total
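
For orientation, an augmentation settings file using these keys could look like the following sketch. All numbers are illustrative placeholders, not tuned values, and should be adapted to your data.

{
  "use_original_csv": true,
  "gaussian_noise_enabled": true,
  "gaussian_std_deviation": 0.01,
  "gaussian_multiplier": 2,
  "window_warping_enabled": true,
  "window_warping_amount": 100,
  "window_warping_compression_limit": 0.8,
  "window_warping_extension_limit": 1.2,
  "random_transformation_enabled": true,
  "random_transformation_amount_synthetic_data": 100,
  "random_transformation_x_translation_limit": 0.05,
  "random_transformation_y_translation_limit": 0.05,
  "random_transformation_z_translation_limit": 0.05,
  "random_transformation_x_rotation_limit_degrees": 10,
  "random_transformation_y_rotation_limit_degrees": 10,
  "random_transformation_z_rotation_limit_degrees": 10,
  "path_joining_enabled": false,
  "path_joining_amount": 50
}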

Model Settings

  • batch_size - input batch size
  • filters - number of convolutional filters
  • kernel_size - this value minus 1 is used as the width of the individual convolutional units
  • depth - the number of InceptionTime units the network stacks
  • residual - if enabled, generates residual connections
  • use_bottleneck - if enabled, generates a bottleneck layer in each InceptionTime unit
  • epochs - number of epochs for training
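
A model settings file with these keys might look like the following sketch; the values are placeholders rather than tuned defaults.

{
  "batch_size": 32,
  "filters": 32,
  "kernel_size": 41,
  "depth": 6,
  "residual": true,
  "use_bottleneck": true,
  "epochs": 100
}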

Workflow

This section describes the steps needed to train a new model. Set the JSON parameters as desired before starting.

Record within virtual environment

Open the project in Unity and start Philipps Scene. All recording parameters can be changed within the "Manager" class and its children. Make sure that the TransformTrackerManager is not set to live writing.

Start the project. Pressing the right controller button starts recording; the "Record" text will turn green.

Application of Augmentations

Pre-processing

First, the data can be split into training, evaluation, and test datasets with the split_dataset.py script.

The following example creates a split of 60% evaluation, 10% training, and 30% test data.

python .\scripts\split_dataset.py PATH\TO\INPUT_RECORDINGS\ PATH\TO\OUTPUT_DIRECTORY\ --eval 0.6 --test 0.3

Use of Augmentations

The following step is only required when using path joining. It creates the path-joining data as CSV files.

python .\scripts\pre_path_joining.py PATH\TO\INPUT_RECORDINGS\ PATH\TO\OUTPUT_DIRECTORY\ --processing .\processing_settings.json --augmentation .\augmentation_settings.json --basename path_joined 

In this step Window Warping, Random Transformation (Rotation), Gaussian Noise, and Window Slicing are applied. It creates an .npz file containing the input data for the model. Apply these augmentations with preprocess.py.

The following example creates the training dataset with augmentations.

python .\scripts\preprocess.py PATH\TO\RECORDINGS\ --processing .\processing_settings.json --augmentation .\augmentation_settings.json --name train.npz

The following example creates the evaluation dataset without augmentations. The test dataset can be created in the same way.

python .\scripts\preprocess.py PATH\TO\RECORDINGS\ --processing .\processing_settings.json --augmentation .\augmentation_disabled.json --name evaluation.npz
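
The augmentation_disabled.json referenced here is presumably an augmentation settings file with every *_enabled flag set to false, along the lines of the following sketch (key names as in the Augmentation Settings section above):

{
  "use_original_csv": true,
  "gaussian_noise_enabled": false,
  "window_warping_enabled": false,
  "random_transformation_enabled": false,
  "path_joining_enabled": false
}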

Training

To train the model, use the train.py script.

python .\scripts\train.py PATH\TO\train.npz PATH\TO\evaluation.npz PATH\TO\OUTPUT_DIRECTORY\ --settings .\model_settings.json

Use live assembly assistance

Open the VR environment in Unity. Open Philipps Scene. Make sure that the TransformTrackerManager is set to live writing.

Start the VR environment. Within that environment, start recording.

Then run from a terminal:

python .\scripts\run.py --processing .\processing_live.json --model INSERT_PATH\model\ --sleep 1.0 INSERT_PATH\recording.csv --send

The --sleep parameter is entirely optional but recommended when running predictions from a hard drive, since the currently recorded data is written to and read from the drive. It is not necessary when using an SSD or a RAM disk.

If everything worked, the "Prediction" text should turn green and you should receive predictions.
