Quantized Neural Networks (QNNs) deliver excellent recognition accuracy without costly floating-point operations, and they are blazingly fast and highly energy-efficient when implemented on FPGAs. If you're interested in learning how to train neural networks with reduced precision, and curious about how to deploy them on Xilinx FPGAs, then this session is for you. We'll guide you through training QNNs with our new open-source PyTorch library, Brevitas, and provide a preview of our FINN toolflow for taking your trained QNNs all the way down to customized hardware architectures on Xilinx FPGA platforms.
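To give a flavour of what training with Brevitas looks like, here is a minimal sketch of a small quantized classifier built from Brevitas quantized layer types (QuantConv2d, QuantReLU, QuantLinear). The network shape, the 4-bit weight and activation widths, and the MNIST-sized input are illustrative assumptions, not part of the session material, and exact layer defaults may vary between Brevitas versions.

```python
import torch
import torch.nn as nn
from brevitas.nn import QuantConv2d, QuantLinear, QuantReLU

class TinyQNN(nn.Module):
    """Small image classifier with 4-bit weights and activations (illustrative)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            # Quantized convolutions: weights are learned at 4-bit precision
            QuantConv2d(1, 16, kernel_size=3, weight_bit_width=4, bias=False),
            nn.BatchNorm2d(16),
            QuantReLU(bit_width=4),   # 4-bit quantized activations
            QuantConv2d(16, 32, kernel_size=3, weight_bit_width=4, bias=False),
            nn.BatchNorm2d(32),
            QuantReLU(bit_width=4),
        )
        # Quantized fully-connected classifier head
        self.classifier = QuantLinear(32 * 24 * 24, num_classes, bias=True,
                                      weight_bit_width=4)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# The model trains like any other PyTorch module (standard loss and optimizer);
# quantization is handled inside the Brevitas layers during training.
model = TinyQNN()
logits = model(torch.randn(8, 1, 28, 28))  # dummy MNIST-sized batch
```

A trained model along these lines is the kind of artifact the FINN toolflow then takes down to a customized FPGA hardware architecture.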