Overview and Installation

Vitis AI Optimizer Overview

Vitis™ AI is a Xilinx® development kit for AI inference on Xilinx hardware platforms. Inference in machine learning is computation-intensive and requires high memory bandwidth to meet the low-latency and high-throughput requirements of various applications.

The Vitis AI optimizer optimizes neural network models. Currently, it includes a single tool, the pruner. The Vitis AI pruner (VAI pruner) prunes redundant connections in neural networks, reducing the overall number of required operations. Pruned models produced by the VAI pruner can be further quantized by the VAI quantizer and deployed to an FPGA. For more information on the VAI quantizer and deployment, see the Vitis AI User Guide in the Vitis AI User Documentation (UG1431).

Figure 1: VAI Optimizer

The VAI pruner supports four deep learning frameworks: TensorFlow, PyTorch, Caffe, and Darknet. The corresponding tool names are vai_p_tensorflow, vai_p_pytorch, vai_p_caffe, and vai_p_darknet, where the "p" in the middle stands for pruning.

Vitis AI Optimizer requires a commercial license to run. Contact xilinx_ai_optimizer@xilinx.com to access the Vitis AI Optimizer installation package and license.

Navigating Content by Design Process

Xilinx® documentation is organized around a set of standard design processes to help you find relevant content for your current development task. All Versal™ ACAP design process Design Hubs can be found on the Xilinx.com website. This document covers the following design processes:

Machine Learning and Data Science
Importing a machine learning model from Caffe, PyTorch, TensorFlow, or another popular framework into Vitis™ AI, and then optimizing and evaluating its effectiveness. Topics in this document that apply to this design process include:

Installation

There are two ways to obtain the Vitis AI Optimizer:

Docker Image
Vitis AI provides a Docker environment for the optimizer. The Docker image contains three optimizer-related conda environments: vitis-ai-optimizer_tensorflow, vitis-ai-optimizer_caffe, and vitis-ai-optimizer_darknet, with all requirements pre-installed. The image ships with CUDA 10.0 and cuDNN 7.6.5. After obtaining a license, you can run the VAI pruner directly in the Docker container.
Note: The optimizer for PyTorch is not in the docker image and can only be installed using the conda package.
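Inside the running Docker container, you select the optimizer for your framework by activating the corresponding conda environment. A minimal sketch, using the environment names listed above:

```shell
# List the conda environments available in the Vitis AI Docker image,
# then activate the TensorFlow optimizer environment.
conda env list
conda activate vitis-ai-optimizer_tensorflow
```

Once the environment is active, the pruner for that framework (for example, vai_p_tensorflow) should be available on the PATH.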
Conda Packages
Conda packages are also available for Ubuntu 18.04. Contact xilinx_ai_optimizer@xilinx.com to access the Vitis AI Optimizer installation package and license. Follow the installation steps below to install the prerequisites and the Vitis AI Optimizer.

Hardware Requirements

An NVIDIA GPU with CUDA Compute Capability 3.5 or higher is required. A Tesla P100 or Tesla V100 is recommended.

Software Requirements

Note: This section is only required for installing the Conda package. If you are using the Docker image, you can skip this section.
GPU-related Software
Install GPU-related software according to your operating system. For Ubuntu 16.04, install CUDA 9.0, cuDNN 7, and driver 384 or later. For Ubuntu 18.04, install CUDA 10.0, cuDNN 7, and driver 410 or later.
NVIDIA GPU Drivers
Install the GPU driver with apt-get, or install a CUDA package that bundles the driver. For example:
apt-get install nvidia-384   # Ubuntu 16.04
apt-get install nvidia-410   # Ubuntu 18.04
CUDA Toolkit
Download the CUDA package matching your Ubuntu version from https://developer.nvidia.com/cuda-toolkit-archive and install the NVIDIA CUDA runfile package directly.
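As a sketch, assuming Ubuntu 18.04 and a CUDA 10.0 runfile (the exact file name depends on the release you download from the archive):

```shell
# Run the CUDA runfile installer downloaded from the CUDA toolkit archive.
# cuda_10.0.130_410.48_linux.run is an example file name; use the one you obtained.
chmod +x cuda_10.0.130_410.48_linux.run
sudo sh cuda_10.0.130_410.48_linux.run
```

The installer prompts whether to also install the bundled driver; skip that step if you already installed the driver through apt-get.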
cuDNN SDK
Get cuDNN from https://developer.nvidia.com/cudnn and append its installation directory to the $LD_LIBRARY_PATH environment variable.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cudnn-7.0.5/lib64
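A typical sequence, assuming the cuDNN tar archive for CUDA 10.0 (the archive name below is an example; the archive unpacks into a cuda/ directory):

```shell
# Unpack the cuDNN archive and make its libraries visible to the dynamic loader.
# cudnn-10.0-linux-x64-v7.6.5.32.tgz is an example file name.
tar xzvf cudnn-10.0-linux-x64-v7.6.5.32.tgz
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/cuda/lib64
```

Alternatively, copy the extracted headers and libraries into your CUDA installation directory (for example, /usr/local/cuda).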
CUPTI
CUPTI is required by vai_p_tensorflow and is installed together with CUDA. You must add the CUPTI directory to the $LD_LIBRARY_PATH environment variable. For example:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64
NCCL
NCCL is required by vai_p_caffe. Download NCCL from its homepage (https://developer.nvidia.com/nccl/nccl-legacy-downloads) and install it:
sudo dpkg -i nccl-repo-ubuntu1804-2.6.4-ga-cuda10.0_1-1_amd64.deb
sudo apt update
sudo apt install libnccl2=2.6.4-1+cuda10.0 libnccl-dev=2.6.4-1+cuda10.0

VAI Pruner

Note: This section applies only to the Conda package installation. If you are using the Docker image, skip this section; the VAI pruner already exists in the corresponding Conda environments.

To install VAI pruner, first install Conda and then install the Conda package for your framework.

Install Conda

For more information, see the Conda installation guide.
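If Conda is not yet installed, a Miniconda installation is sufficient. A sketch for Linux x86_64 (the installer file name and interactive prompts may differ by release):

```shell
# Download and run the Miniconda installer, then reload the shell
# configuration so that the conda command is available.
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
source ~/.bashrc
```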

vai_optimizer_tensorflow

vai_p_tensorflow is based on TensorFlow 1.15. Install the vai_optimizer_tensorflow package to get vai_p_tensorflow.

$ tar xzvf vai_optimizer_tensorflow.tar.gz
$ conda install vai_optimizer_tensorflow_gpu -c file://$(pwd)/vai-bld -c conda-forge/label/gcc7 -c conda-forge

vai_optimizer_pytorch

vai_optimizer_pytorch is a Python library that you use by calling its APIs.

$ tar xzvf vai_optimizer_pytorch.tar.gz
$ conda install vai_optimizer_pytorch_gpu -c file://$(pwd)/vai-bld -c pytorch

vai_optimizer_caffe

The vai_p_caffe binary is included in the vai_optimizer_caffe conda package.

$ tar xzvf vai_optimizer_caffe.tar.gz
$ conda install vai_optimizer_caffe_gpu -c file://$(pwd)/vai-bld -c conda-forge/label/gcc7 -c conda-forge

vai_optimizer_darknet

$ tar xzvf vai_optimizer_darknet.tar.gz
$ conda install vai_optimizer_darknet_gpu -c file://$(pwd)/vai-bld

VAI Pruner License

There are two types of licenses: a floating license and a node-locked license. The VAI pruner finds licenses through the XILINXD_LICENSE_FILE environment variable. For a floating license server, specify the path in the form port@hostname, for example, export XILINXD_LICENSE_FILE=2001@xcolicsvr1. For a node-locked license, specify either a particular license file or a directory where all the .lic files are located.

To specify a particular file:
export XILINXD_LICENSE_FILE=/home/user/license.lic
To specify a directory:
export XILINXD_LICENSE_FILE=/home/user/license_dir
If you have multiple licenses, you can specify them at the same time, each separated by a colon:
export XILINXD_LICENSE_FILE=1234@server1:4567@server2:/home/user/license.lic

A node-locked license can also be installed by copying the license file to the $HOME/.Xilinx directory.
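For example, assuming the license file is at /home/user/license.lic (an example path; substitute your own):

```shell
# Copy a node-locked license into the per-user Xilinx license directory.
# LICENSE below is an example path; replace it with your actual license file.
LICENSE=/home/user/license.lic
mkdir -p "$HOME/.Xilinx"
if [ -f "$LICENSE" ]; then
    cp "$LICENSE" "$HOME/.Xilinx/"
fi
```

Licenses placed in $HOME/.Xilinx are picked up without setting XILINXD_LICENSE_FILE.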