Xilinx AI Inference Acceleration Helps “DisCERN” Answers to the Universe’s Biggest Scientific Questions

11-13-2019 12:17 AM

What are the origins of the universe? What is matter and energy?

In a quest to answer the world’s most challenging scientific questions, a consortium of some 20,000 scientists at CERN, the European Laboratory for Particle Physics, is attempting to reconstruct the origin of the universe. But to do this, researchers must push the limits of technology. During a keynote at XDF Europe, Dr. Thomas James, senior research fellow at CERN, explained how Xilinx FPGAs buried 100 meters underground play a role in finding answers.


Built underneath Geneva, Switzerland, the Large Hadron Collider (LHC) is the largest particle accelerator in the world. It is a 27-kilometer ring of superconducting magnets that accelerates particles to unprecedented energy levels. Each proton traverses the ring about 11,000 times per second, traveling at nearly the speed of light. At four points on the ring, protons collide every 25 nanoseconds. The conditions of each collision are captured by particle detectors; one of these is the CMS detector.

The CMS detector has a diameter of 15 meters, is 21 meters long, and weighs more than the Eiffel Tower. It contains hundreds of millions of individual sensors that together detect the thousands of particles created by each collision. Because the LHC produces 2.4 billion collisions per second, generating about 500 terabits per second of measurement data, it is impossible to store it all. So the CERN team developed a tiered “trigger” system that selects only the most interesting collisions for analysis and discards the rest.
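As a rough illustration of the idea (not CERN’s actual trigger code), a tiered trigger can be thought of as a cascade of increasingly expensive filters: a cheap first-level cut runs on every event, and only survivors reach the fuller selection. The sketch below uses hypothetical event fields and thresholds.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical, simplified event summary as seen by a first-level trigger.
struct EventSummary {
    float total_energy_gev;   // summed calorimeter energy
    int   num_muon_stubs;     // coarse muon-chamber hit count
};

// Level-1-style cut: cheap, fixed criteria applied to every bunch crossing.
bool level1_accept(const EventSummary& e) {
    return e.total_energy_gev > 100.0f || e.num_muon_stubs >= 2;
}

// Higher-level trigger: runs only on the small fraction that survives Level 1.
bool high_level_accept(const EventSummary& e) {
    // Placeholder for full event reconstruction and selection.
    return e.total_energy_gev > 250.0f;
}

std::vector<EventSummary> select_events(const std::vector<EventSummary>& crossings) {
    std::vector<EventSummary> kept;
    for (const auto& e : crossings) {
        if (level1_accept(e) && high_level_accept(e)) {
            kept.push_back(e);   // everything else is discarded
        }
    }
    return kept;
}
```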


This trigger system is implemented in two layers, with the Level-1 trigger being the most demanding: it requires fixed, extremely low-latency AI inference of about 3 microseconds per event, along with massive bandwidth. CPUs and GPUs cannot meet these requirements. So, 100 meters underground, in an area shielded from radiation, a network of Xilinx FPGAs runs algorithms designed to filter the data in real time and identify novel particle substructures that could be evidence of dark matter and other physical phenomena. These FPGAs run both classical algorithms and convolutional neural networks: they receive and align sensor data, perform tracking and clustering, and run machine-learning object identification and trigger functions, all before formatting and delivering the event data. The result has been extremely low-latency inference on the order of 100 nanoseconds.
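To give a feel for how neural-network inference reaches fixed, nanosecond-scale latency on an FPGA, here is a minimal sketch of a single fully connected layer written in the high-level-synthesis C++ style, where loops are unrolled into parallel multipliers and the whole function is pipelined to accept a new event every clock cycle. The layer sizes and use of floating point are illustrative assumptions, not CERN’s design; real trigger networks are sized and quantized to fit the latency and resource budget of the target device.

```cpp
#include <cstddef>

// Hypothetical layer sizes for illustration only.
constexpr std::size_t N_IN  = 16;
constexpr std::size_t N_OUT = 8;

// One dense layer with ReLU, written so an HLS compiler can unroll it into
// parallel multiply-accumulate units and pipeline it at a fixed latency.
void dense_relu(const float in[N_IN],
                const float weights[N_OUT][N_IN],
                const float bias[N_OUT],
                float out[N_OUT]) {
#pragma HLS PIPELINE II=1   // accept a new event every clock cycle
#pragma HLS ARRAY_PARTITION variable=weights complete dim=0
    for (std::size_t o = 0; o < N_OUT; ++o) {
#pragma HLS UNROLL          // compute all output neurons in parallel
        float acc = bias[o];
        for (std::size_t i = 0; i < N_IN; ++i) {
#pragma HLS UNROLL          // all multiply-accumulates in parallel
            acc += weights[o][i] * in[i];
        }
        out[o] = acc > 0.0f ? acc : 0.0f;   // ReLU activation
    }
}
```

Because every loop iteration becomes dedicated hardware, the latency of such a layer is fixed at design time, which is exactly the property a Level-1 trigger needs.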

CERN has evolved its hardware designs across multiple generations of Xilinx technology, beginning decades ago with the 180nm Virtex-E family and continuing through the 16nm Virtex UltraScale+ architecture. Using Xilinx devices, CERN scientists implement a huge depth and breadth of algorithms within these strict latency constraints, including energy clustering and particle tracking and identification with complex techniques such as Hough transforms and Kalman filters.
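For readers unfamiliar with Kalman filters in this context: a track fit repeatedly predicts where a particle should appear at the next detector layer and then corrects that prediction with the measured hit. The toy example below is a one-dimensional sketch with made-up numbers; real track fits use multi-dimensional state vectors (position, direction, curvature) and matrix algebra.

```cpp
#include <cstdio>

// Minimal one-dimensional Kalman filter, illustrative only.
struct Kalman1D {
    float x;  // state estimate (e.g. track position at a detector layer)
    float p;  // estimate variance
};

// Predict: propagate the state to the next layer and inflate the
// uncertainty by the process noise q (e.g. multiple scattering).
Kalman1D predict(Kalman1D k, float q) {
    return {k.x, k.p + q};
}

// Update: fold in a new hit measurement z with measurement variance r.
Kalman1D update(Kalman1D k, float z, float r) {
    float gain = k.p / (k.p + r);        // Kalman gain
    return {k.x + gain * (z - k.x),      // corrected estimate
            (1.0f - gain) * k.p};        // reduced variance
}

int main() {
    Kalman1D track{0.0f, 1.0f};
    const float hits[] = {0.9f, 1.1f, 1.0f};   // hypothetical hit positions
    for (float z : hits) {
        track = predict(track, 0.01f);
        track = update(track, z, 0.25f);
    }
    std::printf("fitted position %.3f (variance %.3f)\n", track.x, track.p);
    return 0;
}
```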


In addition to the powerful processing capabilities of our FPGAs, CERN benefits from the increasing ease with which Xilinx FPGAs can be programmed.  James discussed how the latest Xilinx chips and Vitis unified software platform are making the power of Xilinx more accessible to a broader range of CERN scientists, not just engineers. James commented, “Now algorithms that were long thought to be impossible to implement in FPGA are a reality. We expect that over the next decade, this trend will continue, leading to some amazing new discoveries in particle physics.”

For more about our work with CERN and how Xilinx FPGAs are delivering performance advantages unachievable by GPUs and CPUs, check out the CERN case study here.