SDAccel Profiling and Optimization Features
The SDAccel™ environment generates various reports on kernel resource usage and performance during compilation. It also collects profiling data during application execution in emulation mode and on the FPGA acceleration card. The reports and profiling data provide you with information on performance bottlenecks in the application and optimization techniques that can be used to improve performance. This chapter describes how to generate the reports and collect, display, and read the profiling results in the SDAccel environment.
System Estimate
Generating FPGA programming files is the step in the SDAccel development environment with the longest execution time. It is also the step in which the execution time is most affected by the target device and the number of compute units placed on the FPGA fabric. Therefore, it is essential for the application programmer to have a quicker way to understand the performance of the application before running it on the target device, so they can spend more time iterating and optimizing their applications instead of waiting for the FPGA programming file to be generated.
The system estimate in the SDAccel development environment takes into account the target hardware device and each compute unit in the application. Although an exact performance metric can only be measured on the target device, the estimation report in the SDAccel environment provides an accurate representation of the expected behavior.
GUI Flow
This report is automatically generated during the hardware emulation flow. There is one report generated for each kernel and a top report for the complete binary container. It is easy to access the reports from the Assistant window in the Emulation-HW folder.
The following figure shows the Assistant window with a System Estimate report for the binary_container_1 and the kernel named run.
Command Line
The following command generates the system performance estimate report system_estimate.xtxt for all kernels in kernel.cl:
xocc -c -t hw_emu --platform xilinx:adm-pcie-7v3:1ddr:3.0 --report estimate kernel.cl
The performance estimate report generated by the xocc --report estimate option provides information on every binary container in the application, as well as every compute unit in the design. The report is structured as follows:
- Target device information
- Summary of every kernel in the application
- Detailed information on every binary container in the solution
Data Interpretation
The following example report file represents the information generated for the estimate report:
---------------------------------------------------------------------
Design Name: _xocc_compile_kernel_bin.dir
Target Device: xilinx:adm-pcie-ku3:2ddr-xpr:3.3
Target Clock: 200MHz
Total number of kernels: 1
---------------------------------------------------------------------
Kernel Summary
Kernel Name    Type  Target              OpenCL Library  Compute Units
-------------  ----  ------------------  --------------  -------------
smithwaterman  clc   fpga0:OCL_REGION_0  xcl_xocc        1
----------------------------------------------------------------------
OpenCL Binary: xcl_xocc
Kernels mapped to: clc_region
Timing Information (MHz)
Compute Unit     Kernel Name    Module Name    Target Frequency  Estimated Frequency
---------------  -------------  -------------  ----------------  -------------------
smithwaterman_1  smithwaterman  smithwaterman  200               202.020203
Latency Information (clock cycles)
Compute Unit     Kernel Name    Module Name    Start Interval  Best Case  Avg Case  Worst Case
---------------  -------------  -------------  --------------  ---------  --------  ----------
smithwaterman_1  smithwaterman  smithwaterman  29468           29467      29467     29467
Area Information
Compute Unit     Kernel Name    Module Name    FF    LUT   DSP  BRAM
---------------  -------------  -------------  ----  ----  ---  ----
smithwaterman_1  smithwaterman  smithwaterman  2925  4304  1    10
---------------------------------------------------------------------
Design and Target Device Summary
All design estimate reports begin with an application summary and information about the target device. The device information is provided in the following section of the report:
---------------------------------------------------------------------
Design Name: _xocc_compile_kernel_bin.dir
Target Device: xilinx:adm-pcie-ku3:2ddr-xpr:3.3
Target Clock: 200MHz
Total number of kernels: 1
---------------------------------------------------------------------
The design summary provides the design name and the selected target device. This section also reports the target board and the clock frequency.
- Target Board
- The name of the board that runs the application compiled by the SDAccel development environment.
- Clock Frequency
- Defines how fast the logic runs for compute units mapped to the FPGA fabric.
Both of these parameters are fixed by the device developer and cannot be modified from within the SDAccel environment.
Kernel Summary
The Kernel Summary section lists all of the kernels defined for the current SDAccel solution. The following example shows the kernel summary:
Kernel Summary
Kernel Name Type Target OpenCL Library Compute Units
------------- ---- ------------------ -------------- -------------
smithwaterman clc fpga0:OCL_REGION_0 xcl_xocc 1
In addition to the kernel name, the summary also provides the execution target and the type of the input source. Because there is a difference in compilation and optimization methodology for OpenCL™ C and C/C++ source files, the type of kernel source file is specified.
The Kernel Summary section is the last summary information in the report. From here, detailed information on each binary container and its compute units is presented.
Timing Information
For each binary container, the detail section begins with the execution target of all compute units. It also provides timing information for every compute unit. As a general rule, if an estimated frequency is higher than that of the device target, the compute unit will be able to run in the device. If the estimated frequency is below the target frequency, the kernel code for the compute unit needs to be further optimized for the compute unit to run correctly on the FPGA fabric. This information is shown in the following example:
OpenCL Binary: xcl_xocc
Kernels mapped to: clc_region
Timing Information (MHz)
Compute Unit     Kernel Name    Module Name    Target Frequency  Estimated Frequency
---------------  -------------  -------------  ----------------  -------------------
smithwaterman_1  smithwaterman  smithwaterman  200               202.020203
It is important to understand the difference between the target and estimated frequencies. Compute units are not placed in isolation into the FPGA fabric. Compute units are placed as part of a valid FPGA design that can include other components defined by the device developer to support a class of applications.
Because the compute unit custom logic is generated one kernel at a time, an estimated frequency higher than the device target indicates that there should not be any timing problems during the creation of the FPGA programming files.
Latency Information
The latency information presents the execution profile of each compute unit in the binary container. When analyzing this data, it is important to keep in mind that all values are measured from the compute unit boundary through the custom logic. In-system latencies associated with data transfers to global memory are not reported as part of these values. Also, the latency numbers reported are only for compute units targeted at the FPGA fabric. Following is an example of the latency report:
Latency Information (clock cycles)
Compute Unit     Kernel Name    Module Name    Start Interval  Best Case  Avg Case  Worst Case
---------------  -------------  -------------  --------------  ---------  --------  ----------
smithwaterman_1  smithwaterman  smithwaterman  29468           29467      29467     29467
The latency report provides the following metrics:
- Start interval
- Best case latency
- Average case latency
- Worst case latency
The start interval defines the amount of time that has to pass between invocations of a compute unit for a given kernel.
The best, average, and worst case latency numbers refer to how much time it takes the compute unit to generate the results of one ND Range data tile for the kernel. For cases where the kernel does not have data dependent computation loops, the latency values will be the same. Data dependent execution of loops introduces data specific latency variation that is captured by the latency report.
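For example, using the report above, the smithwaterman compute unit has a worst case latency of 29467 cycles; at the estimated frequency of roughly 202 MHz, one invocation therefore takes about 29467 / 202.02e6 s ≈ 0.146 ms, and a new invocation can be started every 29468 cycles (the start interval).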
The latency values are reported as undefined for:
- OpenCL kernels that do not have an explicit reqd_work_group_size(x,y,z) attribute
- Kernels that have loops with variable bounds
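As an illustration, the following minimal hypothetical OpenCL kernel (vadd, not one of the kernels in this chapter) fixes its work-group size with the reqd_work_group_size attribute so that the estimate report can compute a defined latency:
// Hypothetical example: the fixed 16x16x1 work-group size gives the tool
// static bounds, so the estimate report can compute a defined latency.
__kernel __attribute__((reqd_work_group_size(16, 16, 1)))
void vadd(__global const int *a, __global const int *b, __global int *c)
{
    size_t i = get_global_id(1) * get_global_size(0) + get_global_id(0);
    c[i] = a[i] + b[i];
}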
Area Information
Although the FPGA can be thought of as a blank computational canvas, there are a limited number of fundamental building blocks available in each FPGA. These fundamental blocks (FF, LUT, DSP, block RAM) are used by the SDAccel development environment to generate the custom logic for each compute unit in the design. The number of each fundamental resource needed to implement the custom logic in a compute unit determines how many compute units can be simultaneously loaded into the FPGA fabric. The following example shows the area information reported for a compute unit:
Area Information
Compute Unit Kernel Name Module Name FF LUT DSP BRAM
--------------- ------------- ------------- ---- ---- --- ----
smithwaterman_1 smithwaterman smithwaterman 2925 4304 1 10
HLS Report
After compiling a kernel using the SDx™ development environment GUI or the XOCC command line, the Vivado® High-Level Synthesis (HLS) report is available. The HLS report includes details about the performance and logic usage of the custom hardware logic generated from user kernel code. These details provide advanced users many insights into the kernel compilation results to guide kernel optimization.
GUI Flow
After compiling a kernel using the SDx environment GUI, you can view the HLS Report in the Assistant window. The report is found under the Emulation-HW or System build configuration, within the <binary container> and <kernel> items. This is illustrated in the following Assistant window:
Command Line
The HLS Report is designed to be viewed in the SDAccel environment GUI. However, for command line users, a textual representation of this report is also published. It can be found inside the report directory located under the kernel synthesis directory in the Vivado High-Level Synthesis (HLS) tool solution directory.
Because the xocc command generates several additional levels of hierarchy above this synthesis directory, it is easiest to locate the file by name:
find . -name <module>_csynth.rpt
Where <module> is the name of the kernel. The find command also supports wildcards, so the following command looks up all synthesis reports in any subdirectory:
find . -name "*_csynth.rpt"
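For example, for the smithwaterman kernel used earlier in this chapter, the report would be located with:
find . -name smithwaterman_csynth.rpt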
Data Interpretation
The left pane of the HLS Report shows the module hierarchy. Each module generated as part of the high level synthesis run is represented in this hierarchy. You can select any of these modules to present the synthesis details of the module in the right side of the Synthesis Report window.
The Synthesis Report is separated into several sections, namely:
- General Information
- Performance Estimates (timing and latency)
- Utilization Estimates
- Interface Information
If a module is part of a hierarchical block, its report sums up the information of the blocks contained in the hierarchy. The hierarchy can therefore be navigated from within the report, making it clear which instance contributes what to the overall design.
Profile Summary Report
The SDAccel runtime automatically collects profiling data on host applications. After the application finishes execution, the profile summary is saved in HTML, .csv, and Google Protocol Buffer formats in the solution report directory or working directory. These reports can be reviewed in a web browser, spreadsheet viewer, or the integrated Profile Summary Viewer in the SDAccel environment. The profile reports are generated in both SDAccel GUI and XOCC command line flows.
GUI Flow
When you compile and execute an application from the SDAccel environment, the profile summary is automatically generated. To control the generation of profile information, edit the run configuration through the context menu of the build configuration.
After the configuration is run, the Assistant window provides easy access to the report below the Run Configuration item, and the configuration can be modified directly through the context menu of the run configuration item in the Assistant window.
Double-click the report to open it.
Command Line
Command line users execute standalone applications outside the SDAccel environment. To generate the profile summary data, you can compile your design without any additional options. However, linking the bitstream file (xclbin) requires the --profile_kernel option.
The argument provided through the --profile_kernel option can be used to limit data collection, which might be required in large systems. The general syntax of the --profile_kernel option with respect to the profile summary report is:
--profile_kernel <[data]:<[kernel_name|all]:[compute_unit_name|all]:[interface_name|all]:[counters|all]>
The three fields kernel_name, compute_unit_name, and interface_name determine the interface to which the performance monitor is applied. Alternatively, you can specify the keyword all to apply the monitoring to all existing kernels, compute units, and interfaces with a single option. The last field, <counters|all>, allows you to restrict the information gathering to counters only for large designs, while all (the default) also collects actual trace information.
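For example, the following hypothetical link command (the platform and file names are placeholders) instruments the data ports of all kernels, compute units, and interfaces, restricting collection to counters:
xocc -l -t hw --platform <platform> --profile_kernel data:all:all:all:counters kernel.xo -o kernel.xclbin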
The --profile_kernel option is additive and can be used multiple times on the link line.
Executing the program creates an sdaccel_profile_summary.csv file if profile = true is specified in the sdaccel.ini file:
[Debug]
profile = true
The .csv file needs to be manually converted to Google Protocol Buffer format (.xprf) before the profiling result can be viewed in the integrated Profile Summary Viewer. The following command line example generates an .xprf file from the .csv input file:
sdx_analyze profile sdaccel_profile_summary.csv
Display the Profile Summary
Web Browser
Before the HTML profile summary can be displayed in a web browser, the following command needs to be executed to create an HTML file representing the data:
sdx_analyze profile -i sdaccel_profile_summary.csv -f html
This creates an HTML file that can be opened by the web browser of your choice. The file contains the same profiling result as presented in GUI Flow.
Profile Summary View
Use the integrated Profile Summary view to display the profile summary generated by the command line flow.
Follow these steps to open the profile summary in the Profile Summary view:
- Convert the .csv data file into the protobuf format:
sdx_analyze profile -i sdaccel_profile_summary.csv -f protobuf
- Start the SDAccel tool GUI by running the sdx command:
$sdx
- Choose the default workspace when prompted.
- Browse to and open the .xprf file created by the sdx_analyze command run in step 1.
The following graphic shows the Profile Summary view that displays OpenCL API calls, kernel executions, data transfers, and profile rule checks (PRCs).
Profile Summary Window
Data Interpretation
The profile summary includes a number of useful statistics for your OpenCL application. This can provide you with a general idea of the functional bottlenecks in your application. The profile summary consists of the following sections:
- Top Operations
- Top Data Transfer: Kernels and Global Memory: This table displays the profile data for top data transfers between FPGA and device memory.
- Device: Name of device
- Compute Unit: Name of compute unit
- Number of Transfers: Sum of write and read AXI transactions monitored on device
- Average Bytes per Transfer: (Total Read Bytes + Total Write Bytes) / (Total Read AXI Transactions + Total Write AXI Transactions)
- Transfer Efficiency (%): (Average Bytes per Transfer) / min(4K, (Memory Bit Width / 8 * 256)). The AXI4 specification limits the maximum burst length to 256 and the maximum burst size to 4K bytes. (A worked example follows this table's field list.)
- Total Data Transfer (MB): (Total Read Bytes + Total Write Bytes) / 1.0e6
- Total Write (MB): (Total Write Bytes) / 1.0e6
- Total Read (MB): (Total Read Bytes) / 1.0e6
- Transfer Rate (MB/s): (Total Data Transfer) / (Compute Unit Total Time)
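As a worked illustration of the efficiency formula, assume a hypothetical 512-bit memory interface: the denominator is min(4K, 512/8 * 256) = min(4096, 16384) = 4096 bytes, so an average of 1024 bytes per transfer corresponds to a transfer efficiency of 1024 / 4096 = 25%.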
- Top Kernel Execution
- Kernel Instance Address: Host address of kernel instance (in hex)
- Kernel: Name of kernel
- Context ID: Context ID on host
- Command Queue ID: Command queue ID on host
- Device: Name of device where kernel was executed (format: <device>-<ID>)
- Start Time (ms): Start time of execution (in ms)
- Duration (ms): Duration of execution (in ms)
- Global Work Size: NDRange of kernel
- Local Work Size: Work group size of kernel
- Top Memory Writes: Host and Device Global Memory
- Buffer Address: Host address of buffer (in hex)
- Context ID: Context ID on host
- Command Queue ID: Command queue ID on host
- Start Time (ms): Start time of write transfer (in ms)
- Duration (ms): Duration of write transfer (in ms)
- Buffer Size (KB): Size of write transfer (in KB)
- Writing Rate (MB/s): Writing Rate = (Buffer Size) / (Duration)
- Top Memory Reads: Host and Device Global Memory
- Buffer Address: Host address of buffer (in hex)
- Context ID: Context ID on host
- Command Queue ID: Command queue ID on host
- Start Time (ms): Start time of read transfer (in ms)
- Duration (ms): Duration of read transfer (in ms)
- Buffer Size (KB): Size of read transfer (in KB)
- Reading Rate (MB/s): Reading Rate = (Buffer Size) / (Duration)
- Kernels & Compute Units
- Kernel Execution (includes estimated device times): This table displays the profile data summary for all kernel functions scheduled and executed.
- Kernel: Name of kernel
- Number of Enqueues: Number of times kernel is enqueued
- Total Time (ms): Sum of runtimes of all enqueues (measured from START to END in the OpenCL execution model)
- Minimum Time (ms): Minimum runtime of all enqueues
- Average Time (ms): (Total Time) / (Number of Enqueues)
- Maximum Time (ms): Maximum runtime of all enqueues
- Compute Unit Utilization (includes estimated device times): This table displays the summary profile data for all compute units on the FPGA.
- Device: Name of device (format: <device>-<ID>)
) - Compute Unit: Name of Compute Unit
- Kernel: Kernel this Compute Unit is associated with
- Global Work Size: NDRange of kernel (format is x:y:z)
- Local Work Size: Local work group size (format is x:y:z)
- Number of Calls: Number of times the Compute Unit is called
- Total Time (ms): Sum of runtimes of all calls
- Minimum Time (ms): Minimum runtime of all calls
- Average Time (ms): (Total Time) / (Number of Work Groups)
- Maximum Time (ms): Maximum runtime of all calls
- Clock Frequency (MHz): Clock frequency used for a given accelerator (in MHz)
- Data Transfers
- Data Transfer: Host and Global Memory: This table displays the profile data for all read and write transfers between the host and device memory via the PCI Express® link.
- Context:Number of Devices: Context ID and number of devices in context
- Transfer Type: READ or WRITE
- Number of Transfers: Number of host data transfers. Note: This might include printf transfers.
- Transfer Rate (MB/s): (Total Bytes Sent) / (Total Time in usec), where Total Time includes software overhead.
- Average Bandwidth Utilization (%): (Transfer Rate) / (Max. Transfer Rate), where Max. Transfer Rate = (256/8 bytes) * (300 MHz) = 9.6 GBps. (A worked example follows this list.)
- Average Size (KB): (Total KB sent) / (number of transfers)
- Total Time (ms): Sum of transfer times
- Average Time (ms): (Total Time) / (number of transfers)
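For example, given the stated maximum transfer rate of 9.6 GBps, a measured host transfer rate of 2400 MB/s corresponds to an average bandwidth utilization of 2400 / 9600 = 25%.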
- Data Transfer: Kernels and Global Memory: This table displays the profile data for all read and write transfers between the FPGA and device memory.
- Device: Name of device
- Compute Unit/Port Name: <Name of Compute Unit>/<Name of Port>
- Kernel Arguments: List of arguments connected to this port
- DDR Bank: DDR bank number this port is connected to
- Transfer Type: READ or WRITE
- Number of Transfers: Number of AXI transactions monitored on device. Note: This might include printf transfers.
- Transfer Rate (MB/s): (Total Bytes Sent) / (Compute Unit Total Time), where:
  - Compute Unit Total Time = Total execution time of the compute unit
  - Total Bytes Sent = Sum of bytes across all transactions
- Average Bandwidth Utilization (%): (Transfer Rate) / (0.6 * Max. Transfer Rate), where Max. Transfer Rate = (512/8 bytes) * (300 MHz) = 19200 MBps
- Average Size (KB): (Total KB sent) / (number of AXI transactions)
- Average Latency (ns): (Total latency of all transactions) / (number of AXI transactions)
- OpenCL API Calls: This table displays the profile data for all OpenCL host API function calls executed in the host application.
- API Name: Name of API function (e.g., clCreateProgramWithBinary, clEnqueueNDRangeKernel)
- Number of Calls: Number of calls to this API
- Total Time (ms): Sum of runtimes of all calls
- Minimum Time (ms): Minimum runtime of all calls
- Average Time (ms): (Total Time) / (Number of Calls)
- Maximum Time (ms): Maximum runtime of all calls
Application Timeline
Application Timeline collects and displays host and device events on a common timeline to help you understand and visualize the overall health and performance of your systems. These events include:
- OpenCL API calls from the host code.
- Device trace data including AXI transaction start/stop, kernel start/stop, etc.
While useful for debugging and profiling the application, timeline and device trace data are not collected by default because the runtime needs to periodically unload the trace data from the FPGA, which can add additional time to the overall application execution. However, the device data is collected with dedicated hardware inside the FPGA, so the data collection does not affect kernel functionality on the FPGA. The following sections describe the setup required to enable timeline and device trace data collection.
Turning on device profiling is intrusive and can negatively affect overall performance. This feature should be used for system performance debugging only.
GUI Flow
Timeline and device trace data collection is part of run configuration for an SDAccel™ project created from the integrated SDAccel environment. Follow the steps below to enable it:
- Instrumenting the code is required for system execution. This is done through the Hardware Function Settings dialog box. In the Assistant window, right-click the kernel under the System [Hardware] configuration, and select Settings.
With respect to application timeline functionality, you can enable Data Transfer, Execute Profiling, and Stall Profiling. These options instrument all ports of each instance of any kernel. Because they insert additional hardware, instrumenting all ports might be excessive; finer control is available through the command line options detailed in the Command Line section. These options are only valid for system runs; during hardware emulation, this data is generated by default.
- Data Transfer
- This option enables monitoring of data ports.
- Execute Profiling
- This option provides minimum port data collection during system run. This option records the execution times of the compute units. Execute profiling is enabled by default for data and stall profiling.
- Stall Profiling
- This option includes the stall monitoring logic in the bitstream.
- Specify what information is actually going to be reported during a run. Note: Only information actually exposed from the hardware during system execution is reported.
To configure reporting, click the down arrow next to the Debug or Run button, and then select Run Configurations to open the Run Configurations window.
- In the Run Configurations window, click the Profile tab.
Ensure the Enable profiling check box is selected. This enables basic profiling support. With respect to trace data, ensure that the Generate timeline trace report setting actually gathers the information in the build configuration you are running. The Default setting implies that trace data capturing is not supported in system execution but is enabled by default in hardware emulation.
Additionally, you can select the amount of information to gather during runtime. Select the granularity for trace data collection independently for Data Transfer Trace and Stall Trace.
The Data Transfer Trace options are as follows:
- Coarse
- Show compute unit transfer activity from beginning of first transfer to end of last transfer (before compute unit transfer ends).
- Fine
- Show all AXI-level burst data transfers.
- None
- Turn off reading and reporting of device-level trace during runtime.
The Stall Trace Options are as follows:
- None
- Turn off any stall trace information gathering.
- All
- Record all stall trace information.
- External Memory Stall
- Memory stalls to DDR (for example, AXI4 read from DDR).
- Internal Dataflow Stall
- Intra-kernel streams (for example, writing to a full FIFO between data flow blocks).
- Inter CU Pipe Stall
- Inter-kernel pipe (for example, writing to a full OpenCL™ pipe between kernels).
If you have multiple run configurations for the same project, you must change the profile settings for each run configuration.
- After running configurations, in the Assistant window, double-click Application Timeline to open the Application Timeline window.
Command Line
Complete the following steps to enable timeline and device trace data collection in the Command Line flow:
- This step instruments the FPGA bitstream with SDx Accel Monitors (SAM) and SDx Performance Monitors (SPM). The instrumentation is performed through the --profile_kernel option, which has three distinct instrumentation options (data, stall, exec). Note: The --profile_kernel option is ignored except for system compilation and linking; during hardware emulation, this data is generated by default.
The --profile_kernel option has three fields that determine the specific kernel interface to which the monitors are applied. However, if resource usage is not an issue, the keyword all enables you to apply the monitoring to all existing kernels, compute units, and interfaces with a single option. Otherwise, you can specify the kernel_name, compute_unit_name, and interface_name explicitly to limit instrumentation. The last option, <counters|all>, allows you to restrict the information gathering to counters only for large designs, while all (the default) also collects actual trace information. Note: The --profile_kernel option is additive and can be used multiple times on the link line.
  - data: This option enables monitoring of data ports through the SAM and SPM IPs, and needs to be set only during linking:
    -l --profile_kernel <[data]:<[kernel_name|all]:[compute_unit_name|all]:[interface_name|all]:[counters|all]>
  - stall: This option includes the stall monitoring logic (using the SAM IP) in the bitstream. It requires stall ports to be present on the kernel interface; to facilitate this, the option must be applied during compilation of the C/C++/OpenCL kernel modules:
    -c --profile_kernel <[stall]:<[kernel_name|all]:[compute_unit_name|all]:[counters|all]>
    and during linking:
    -l --profile_kernel <[stall]:<[kernel_name|all]:[compute_unit_name|all]:[counters|all]>
  - exec: This option provides minimal port data collection during a system run. It simply records the execution times of the kernel through the SAM IP. This feature is enabled by default on any port that uses data or stall collection, and needs to be provided only during linking:
    -l --profile_kernel <[exec]:<[kernel_name|all]:[compute_unit_name|all]>:[counters|all]
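As an illustration, a hypothetical compile-and-link sequence (the platform and file names are placeholders) that instruments all kernels for data, stall, and execution monitoring might look like this:
xocc -c -t hw --platform <platform> --profile_kernel stall:all:all:all kernel.cl -o kernel.xo
xocc -l -t hw --platform <platform> --profile_kernel data:all:all:all:all --profile_kernel stall:all:all:all --profile_kernel exec:all:all:all kernel.xo -o kernel.xclbin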
- After the kernels are instrumented, data gathering must be enabled during runtime execution. Do this through the sdaccel.ini file located in the same directory as the host executable. The following sdaccel.ini file enables maximum information gathering during runtime:
[Debug]
profile=true
timeline_trace=true
data_transfer_trace=coarse
stall_trace=all
  - profile=<true|false>: When this option is set to true, basic profile monitoring is enabled. Without any additional options, this implies that the host runtime logging profile summary is enabled. However, without this option enabled, no monitoring is performed at all.
  - timeline_trace=<true|false>: This option enables timeline trace information gathering. Without profiling IP in the FPGA (data), it only shows host information. At a minimum, to get compute unit start and end execution times in the timeline trace, the compute unit must be linked with --profile_kernel exec.
  - data_transfer_trace=<coarse|fine|off>: This option enables device-level AXI data transfer tracing:
    - coarse: Show compute unit transfer activity from the beginning of the first transfer to the end of the last transfer (before the compute unit transfer ends).
    - fine: Show all AXI-level burst data transfers.
    - off: Turn off reading and reporting of device-level trace during runtime.
  - stall_trace=<dataflow|memory|pipe|all|off>: Specifies which types of stalls to capture and report in the timeline trace. The default is off.
    - off: Turn off all stall trace information gathering. Note: Enabling stall tracing can often fill the trace buffer, resulting in incomplete and potentially corrupt timeline traces; this can be avoided by setting stall_trace=off.
    - all: Record all stall trace information.
    - dataflow: Intra-kernel streams (for example, writing to a full FIFO between dataflow blocks).
    - memory: External memory stalls (for example, an AXI4 read from the DDR).
    - pipe: Inter-kernel pipes (for example, writing to a full OpenCL pipe between kernels).
- In command line mode, CSV files are generated to capture the trace data. These CSV reports need to be converted to the Application Timeline format using the sdx_analyze utility before they can be opened and displayed in the SDAccel environment GUI:
sdx_analyze trace sdaccel_timeline_trace.csv
This creates the sdaccel_timeline_trace.wdb file by default, which can be opened from the GUI.
- To view the timeline report host and device waveforms, do the following:
  - Start the SDx environment by running the command:
    $sdx
  - Choose a workspace when prompted.
  - Browse to the .wdb file generated during hardware emulation or system run, and open it.
Data Interpretation
The following figure shows the Application Timeline window that displays host and device events on a common timeline. This information helps you to understand details of application execution and identify potential areas for improvements.
The application timeline trace has two main sections, Host and Device. The Host section shows the trace of all activity originating from the host side, and the Device section shows the activity of the compute units on the FPGA.
Under Host, the activities are categorized as OpenCL™ API calls, Data Transfer, and Kernel Enqueues.
- Host
  - OpenCL API Calls: All OpenCL API calls are traced here. The activity time is measured from the host perspective.
    - General: All general OpenCL API calls, such as clCreateProgramWithBinary(), clCreateContext(), and clCreateCommandQueue(), are traced here.
    - Queue: OpenCL API calls that are associated with a specific command queue, such as clEnqueueMigrateMemObjects() and clEnqueueNDRangeKernel(), are traced here. If the user application creates multiple command queues, this section shows each queue and its activities.
  - Data Transfer: In this section, the DMA transfers from the host to the device memory are traced. There are multiple DMA threads implemented in the OpenCL runtime, and there is typically an equal number of DMA channels. A DMA transfer is initiated by the user application by calling OpenCL APIs such as clEnqueueMigrateMemObjects(); these requests are forwarded to the runtime, which delegates them to one of the threads. Data transfers from the host to the device appear under Write, and transfers from the device to the host appear under Read.
  - Kernel Enqueues: The active kernel executions are shown here. These kernels should not be confused with the kernels/compute units on the device; here, kernels means the NDRange kernels and tasks created by the clEnqueueNDRangeKernel() and clEnqueueTask() APIs, plotted against time as measured from the host's perspective. Multiple kernels can be scheduled for execution at the same time, and each is traced from the point it is scheduled to run until the end of its execution; this is why there can be multiple entries. The number of rows depends on the number of overlapping kernel executions. Note: Overlapping kernels should not be mistaken for actual parallel execution on the device, as the process might not be ready to execute right away.
- Device "name"
  - Binary Container "name"
    - Accelerator "name": This is the name of the compute unit (also called an accelerator) on the FPGA.
      - User Functions: In the case of Vivado High-Level Synthesis (HLS) tool kernels, functions that are implemented as dataflow processes are traced here. The trace for these functions shows the number of active instances currently executing in parallel. These names are generated in hardware emulation when the waveform is enabled. Note: Function-level activity is only possible in hardware emulation.
        - Function: "name a"
        - Function: "name b"
      - Read: A compute unit reads from the DDR over AXI-MM ports. The trace of data read by a compute unit is shown here. The activity is shown as transactions, and the tooltip for each transaction shows more details of the AXI transaction. These names are generated when --profile_kernel data is used.
        - m_axi_<bundle name>(port)
      - Write: A compute unit writes to the DDR over AXI-MM ports. The trace of data written by a compute unit is shown here. The activity is shown as transactions, and the tooltip for each transaction shows more details of the AXI transaction. This is generated when --profile_kernel data is used.
        - m_axi_<bundle name>(port)
Waveform Viewer
The SDx development environment can generate a waveform view and launch a live waveform viewer when running hardware emulation. It displays in-depth details of the emulation results at the system level, compute unit level, and function level. The details include data transfers between the kernel and global memory, and data flow through both inter-kernel and intra-kernel pipes. These views provide insight into performance bottlenecks from the system level down to individual function calls, helping developers optimize their applications.
By default, the waveform and live waveform viewers are not enabled. This is because the viewers require that the runtime generates a simulation waveform during hardware emulation, which consumes more time and disk space. The following sections describe the setup required to enable data collection.
GUI Flow
Follow the steps below to enable waveform data collection and to open the viewer:
- Open the Application Project Settings window, and select the Kernel debug check box.
- Click the down arrow next to the Run button, and select Run Configurations to open the Run Configurations window.
- On the Run Configurations window, click the Main tab, and select the Use waveform for kernel debugging check box. Optionally, you can select Launch live waveform to bring up the Simulation window to view the live waveform while the hardware emulation is running.
- In the Run Configurations window, click the Profile tab, and ensure the Enable profiling check box is selected. This enables basic profiling support. If you have multiple run configurations for the same project, you must change the profile settings for each run configuration.
- If you did not select the Live Waveform viewer to launch automatically, open the Waveform view from the SDx development environment: in the Assistant window, double-click Waveform to open the Waveform view window.
Command Line
Follow these instructions to enable waveform data collection from the Command Line during hardware emulation and open the viewer:
- Turn on debug code generation during kernel compilation:
xocc -g -t hw_emu ...
- Create an sdaccel.ini file in the same directory as the host executable with the contents below:
[Debug]
profile=true
timeline_trace=true
This enables maximum observability. The options in detail are:
- profile=<true|false>
- Setting this option to true enables profile monitoring. Without any additional options, this implies that the host runtime logging profile summary is enabled. However, without this option enabled, no monitoring is performed at all.
- timeline_trace=<true|false>
- This option enables timeline trace information gathering of the data.
- Execute hardware emulation. The hardware transaction data is collected in the file <hardware_platform>-<device_id>-<xclbin_name>.wdb.
- To see the live waveform and additional simulation waveforms, add the following to the emulation section of the sdaccel.ini file:
[Emulation]
launch_waveform=gui
A Live Waveform viewer is spawned during the execution of the hardware emulation, which allows you to examine the waveforms in detail.
- If no Live Waveform viewer was requested, follow the steps below to open the Waveform view:
  - Start the SDx IDE by running the following command:
    $sdx
  - Choose a workspace when prompted.
  - Browse to the .wdb file generated during hardware emulation, and open it.
Data Interpretation: Waveform View
The following image shows the Waveform view:
The waveform view is organized hierarchically for easy navigation.
The hierarchy tree and descriptions are:
- Device “name”
- Target device name
- Binary Container “name”
- Binary container name.
- Memory Data Transfers
- For each DDR Bank, this shows the trace of all the read and write request transactions arriving at the bank from the host.
- Kernel “name” 1:1:1
- For each kernel and for each compute unit of that kernel, this section breaks down the activities originating from the compute unit.
- Compute Unit: “name”
- Compute unit name.
- CU Stalls (%)
- Stall signals are provided by the HLS tool to inform you when a portion of the circuit is stalling because of external memory accesses, internal streams (i.e., dataflow), or external streams (i.e., OpenCL pipes). The stall bus, shown in detailed kernel trace, compiles all of the lowest-level stall signals and reports the percentage stalling at any point in time. This provides a measure of how much of the kernel is stalling at any point in the simulation.
For example: If there are 100 lowest level stall signals, and 10 are active on a given clock cycle, then the CU Stall percentage is 10%. If one goes inactive, then it would be 9%.
- Data Transfers
- This shows the read/write data transfer accesses originating from each Master AXI port of the compute unit to the DDR.
- User Functions
- This information is available for the HLS tool kernels and
shows the user functions.
Function: <name>
- Function Stalls
- Shows the different type stalls experienced by the process. It contains External Memory and Internal-Kernel Pipe stalls. The number of rows is dynamically incremented to accommodate the visualization of any concurrent execution.
- Intra-Kernel Dataflow
- FIFO activity internal to the kernel.
- Function I/O
- Actual interface signals.
Data Interpretation: Live Waveform
The following figure shows the live waveform viewer while running hardware emulation.
- Device “name”: Target device name.
- Binary Container “name”: Binary container name.
- Memory Data Transfers: For each DDR Bank this shows the trace of all the read and write request transactions arriving at the bank from the host.
- Kernel “name” 1:1:1: For each kernel and for each compute unit of that kernel, this section breaks down the activities originating from the compute unit.
- Compute Unit: “name”: Compute unit name.
- CU Stalls (%): Stall signals are provided by the Vivado High-Level Synthesis (HLS) tool to inform you when a portion of the circuit is stalling because of external memory accesses, internal streams (i.e., dataflow), or external streams (i.e., OpenCL™ pipes). The stall bus, shown in detailed kernel trace, compiles all of the lowest-level stall signals and reports the percentage stalling at any point in time. This provides a measure of how much of the kernel is stalling at any point in the simulation.
For example: If there are 100 lowest level stall signals, and 10 are active on a given clock cycle, then the CU Stall percentage is 10%. If one goes inactive, then it would be 9%.
- Data Transfers: This shows the read/write data transfer accesses originating from each Master AXI port of the compute unit to the DDR.
- User Functions: This information is available for the HLS kernels and shows the user functions.
  - Function: “name”
    - Dataflow/Pipeline Activity: This shows the number of parallel executions of the function if the function is implemented as a dataflow process.
      - Active Iterations: This shows the currently active iterations of the dataflow. The number of rows is dynamically incremented to accommodate the visualization of any concurrent execution.
      - StallNoContinue: This is a stall signal that tells if there were any output stalls experienced by the dataflow processes (function is done, but it has not received a continue from the adjacent dataflow process).
      - RTL Signals: These are the underlying RTL control signals that were used to interpret the above transaction view of the dataflow process.
    - Function Stalls: Shows the different types of stalls experienced by the process.
- External Memory: Stalls experienced while accessing the DDR memory.
- Internal-Kernel Pipe: If the compute units communicated between each other through pipes, then this will show the related stalls.
- Intra-Kernel Dataflow: FIFO activity internal to the kernel.
- Function I/O: Actual interface signals.
Guidance
The Guidance view is designed to provide feedback to users throughout the development process. It presents in a single location all issues encountered from building the actual design all the way through runtime analysis.
It is crucial to understand that the Guidance view is intended to help you identify potential issues in the design. These issues might be source code related or due to missed tool optimizations. Also, the rules are generic, based on experience with a vast set of reference designs, and might not be applicable to a specific design. Therefore, it is up to you to understand each guidance rule and take appropriate action based on your specific algorithm and requirements.
GUI Flow
The Guidance view is automatically populated and displayed in the lower central tab view. After running hardware emulation, the Guidance view might look like the following:
To simplify visualizing the guidance information, the GUI flow allows you to search and filter the Guidance view to locate specific guidance rule entries. It is also possible to collapse or expand the tree view, or even to suppress the hierarchical tree representation and visualize a condensed representation of the guidance rules. Finally, it is possible to select what is shown in the Guidance view: you can enable or disable the visualization of warnings, as well as met rules, and restrict the content based on the source of the messages, such as build and emulation.
By default, the Guidance view shows all guidance information for the project selected in the drop down.
To restrict the content to an individual build or run step, do the following:
- Use the command
- Select the category .
- Deselect Group guidance rule checks by project.
Command Line
The Guidance data is best analyzed through the GUI, which consolidates all guidance information for the flow. Nevertheless, the tool automatically generates HTML files containing the guidance information. As guidance information is generated throughout the tool flow, several guidance files are generated. The simplest way to locate the guidance reports is to search for the guidance.html files.
find . -name "*guidance.html" -print
This command lists all generated guidance files, which can be opened with any web browser.
Data Interpretation
The Guidance view places each entry in a separate row. Each row might contain the name of the guidance rule, threshold value, actual value, and a brief but specific description of the rule. The last field provides a link to reference material intended to assist in understanding and resolving any of the rule violations.
In the GUI Guidance view, guidance rules are grouped by categories and unique IDs in the Name column and annotated with symbols representing the severity. These are listed individually in the HTML report. In addition, as the HTML report does not show tooltips, a full Name column is included in the HTML report as well.
The following list describes all fields and their purpose as included in the HTML guidance reports.
- Id
- Each guidance rule is assigned a unique id. Use this id to uniquely identify a specific message from the guidance report.
- Name
- The Name column displays a mnemonic name uniquely identifying the guidance rule. These names are designed to assist in memorizing specific guidance rules in the view.
- Severity
- The Severity column allows the easy identification of the importance of a guidance rule.
- Full Name
- The Full Name provides a less cryptic name compared to the mnemonic name in the Name column.
- Categories
- Most messages are grouped within different categories. This allows the GUI to display groups of messages within logical categories under common tree nodes in the Guidance view.
- Threshold
- The Threshold column displays an expected threshold value, which determines whether or not a rule is met. The threshold values are determined from many applications that follow good design and coding practices.
- Actual
- The Actual column displays the values actually encountered on the specific design. This value is compared against the expected value to see if the rule is met.
- Details
- The Details column provides a brief but specific message describing the specifics of the current rule.
- Resolution
- The Resolution column provides a pointer to common ways the model source code or tool transformations can be modified to meet the current rule. Clicking the link brings up a pop-up window or the documentation with tips and code snippets that you can apply to the specific issue.
Using Implementation Tools
Exploring Kernel Optimizations Using Vivado HLS
All kernel optimizations using OpenCL or C/C++ can be performed from within the SDAccel environment. The primary performance optimizations, such as those discussed in this chapter (pipelining function and loops, applying dataflow to enable greater concurrency between functions and loops, unrolling loops, etc.), are performed by the Xilinx® FPGA design tool, Vivado® High-Level Synthesis (HLS) tool.
The SDAccel environment automatically calls the HLS tool. However, to use the GUI analysis capabilities, you must launch the HLS tool directly from within the SDAccel environment. Using the HLS tool in standalone mode enables the following enhancements to the optimization methodology:
- Focusing solely on the kernel optimization, there is no requirement to execute emulation.
- The ability to create multiple solutions, compare their results, and explore the solution space to find the optimal design.
- The ability to use the interactive Analysis Perspective to analyze the design performance.
To open the HLS tool in standalone mode, from the Assistant window, right-click the hardware function object, and select Open HLS Project, as shown in the following figure.
Controlling FPGA Implementation with the Vivado Design Suite
The SDx development environment provides a smooth flow from an OpenCL/C/C++ model all the way to an FPGA-accelerated implementation. In most cases, this flow completely abstracts away the fact that the programmable region in the FPGA is configured to implement the kernel functionality, fully isolating the developer from typical hardware constraints such as routing delays and kernel placement. In some cases, however, these concerns must be addressed, especially when large designs are implemented. To this end, the SDx development environment allows you to fully control the Vivado Design Suite back-end tool.
The SDAccel environment calls the Vivado Design Suite to automatically run RTL synthesis and implementation. You also have the option of launching the design suite directly from within the SDAccel environment. When invoking the Vivado Integrated Design Environment (IDE) in standalone mode in the SDAccel environment, you can open the Vivado synthesis project or the Vivado implementation project to edit, manage, and control the project.
The Vivado project can be opened in the SDAccel environment after the build targeting the System configuration has completed.
To open Vivado IDE in standalone mode, from the Xilinx drop-down menu, select Vivado Integration and Open Vivado Project. Choose between the Vivado synthesis and implementation projects, and click OK.
Using the Vivado IDE in standalone mode enables the exploration of various synthesis and implementation options for further optimizing the kernel for performance and area. Familiarity with the design suite is recommended to make the most use of these features.
Vivado tool properties, such as a Tcl script to run after a specific implementation step, can be passed using the --xp option for xocc. For example:
--xp "vivado_prop:run.impl_1.{STEPS.PLACE_DESIGN.ARGS.TCL.POST}={<File and path>}"
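For instance, attaching a post-placement script stored at a hypothetical path /home/user/post_place.tcl would look like:
--xp "vivado_prop:run.impl_1.{STEPS.PLACE_DESIGN.ARGS.TCL.POST}={/home/user/post_place.tcl}"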
This optimization flow is supported in the command line flow by calling xocc --interactive to bring up the Vivado IDE on the current project. In the IDE, generate a design checkpoint (DCP), which can be saved and reused during linking with xocc. The specific options are:
- --interactive allows the Vivado IDE to be launched from within the xocc environment, with the right project loaded.
- --reuse_synth allows a pre-synthesized Vivado Design Suite tool design checkpoint (.dcp) file to be brought in and used directly in the SDx environment flow to complete implementation and xclbin generation.
- --reuse_impl allows a pre-implemented and timing-closed Vivado tool design checkpoint (.dcp) file to be brought in and used directly in the SDx environment flow for xclbin generation.
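A minimal sketch of this flow (exact arguments can vary by SDx release; the platform and file names here are placeholders):
# Bring up the Vivado IDE on the current project and save a routed,
# timing-closed design checkpoint from within the IDE
xocc --interactive
# Reuse the saved checkpoint during linking to generate the xclbin
# without re-running implementation
xocc -l -t hw --platform <platform> --reuse_impl routed.dcp kernel.xo -o kernel.xclbin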