This repository has been archived by the owner on Jan 3, 2023. It is now read-only.
Improved CPU performance for SSD and batchnorm inference; added Dockerfile
- Optimized SSD MKL backend performance (~3x speedup over the previous version)
- Bumped aeon version to v1.3.0
- Fixed an inference performance issue in MKL batchnorm
- Fixed a batch prediction issue in the GPU backend
- Enabled subset_pct for the MNIST_DCGAN example
- Updated "make clean" to clean up MKL artifacts
- Added a Dockerfile for Intel Architecture (IA) MKL
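
A minimal sketch of what such a Dockerfile might look like; the base image, package list, and build steps below are assumptions for illustration, not the repository's actual file:

```dockerfile
# Hypothetical sketch: build neon with the MKL backend on Intel Architecture.
# Base image and package names are assumptions, not the shipped Dockerfile.
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y \
    git python-dev python-pip \
    libhdf5-dev libyaml-dev pkg-config

# Fetch neon and install system-wide; neon's Makefile pulls in MKL
# dependencies as part of the build
RUN git clone https://github.com/NervanaSystems/neon.git /neon
WORKDIR /neon
RUN make sysinstall

# Run an example on the MKL backend by default (-b mkl)
CMD ["python", "examples/mnist_mlp.py", "-b", "mkl"]
```

Selecting the MKL backend at runtime is done with the `-b mkl` flag that neon's examples accept, so the same image can be pointed at other examples by overriding `CMD`.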