Creating Embedded Platforms in Vitis
Platform Creation Basics
In the Vitis™ environment acceleration application development flow, the project is divided into two distinct elements: the platform and the processing subsystem. The platform contains essential IP blocks (such as PS for SoCs, NoC and AI Engine for Versal™ ACAPs) and board interface IP blocks (such as high-speed I/Os and memory controllers). The processing subsystem contains the application-specific part of the system and can be composed of both programmable logic and AI Engine blocks. This approach promotes separation of concerns, facilitates concurrent development, and encourages reusability. The application developer is insulated from the low-level details of the platform and can focus on the specifics of the processing subsystem. The platform developer can focus on system bring-up and tuning I/O performance without having to worry about the processing subsystem. This means that the application developer can integrate the subsystem on different platforms, and a platform can be reused with different processing subsystems.
Xilinx provides pre-built platforms for Alveo™ cards and embedded evaluation boards. You can download these platforms from the Xilinx Download Center. Efficiently leveraging the decoupling of platforms and subsystems is central to the methodology and the productivity gains offered by the Vitis environment. For embedded designs, Xilinx recommends a parallel development process where the application team starts working on the subsystem using a Xilinx pre-built platform while the platform team works independently on bringing-up the custom platform. Rapid progress can be made by working in this manner. Using a pre-built platform means that the subsystem can be developed, integrated, and tested independently using a pre-verified, known-good foundation. After the subsystem is in a sufficiently advanced and stable state, the subsystem can be integrated with appropriate versions of the custom platform. Overall, this approach greatly streamlines the system integration process.
The following sections describe how to create a customized embedded platform.
To create a platform, you must have a base bootable design as a starting point. This design can be a Xilinx base platform design, an existing working design, or a design created from scratch. The following base components must be included in your base bootable design:
- A base hardware design exported from Vivado® Design Suite
- A base software design that includes Linux kernel, root file system, and device tree
After you have a working hardware design and board bring-up in the Vivado® Design Suite, converting the design into a Vitis environment platform requires adding properties to the base components to meet the requirements of the Vitis environment. In general, platform creation consists of the following steps:
- Add hardware interface parameters and interrupt support in your Vivado® Design Suite project and export the XSA.
- Update the software platform components to enable application acceleration software stacks (enable XRT, update device tree, and so on).
- Package and generate the platform using XSCT commands or the Vitis IDE.
The Vitis environment uses the properties in the hardware project to recognize the resources in the platforms and link kernels to the platforms. The Vitis environment uses the software stacks to take control of the kernels.
For details on Vitis environment embedded platform creation, see the Vitis Unified Software Platform Documentation (UG1416). For step-by-step instructions, see the Vitis Platform Creation tutorial.
Platform Creation Requirements
The base design included in a Vitis platform is static after the platform creation process is complete. However, Vitis can modify the parameters of certain IP (for example, SmartConnect and NoC) by adding additional master/slave interfaces. In some situations, PS/CIPS interfaces can also be modified, and on Versal™ ACAPs the AI Engine IP is instantiated in the platform.
The following table shows the workflows to validate the base system on your board.
Workflow | Development | Validation |
---|---|---|
Basic board bring-up | Processor basic parameter setup. | Standalone Hello world and Memory Test application run properly. |
Advanced hardware setup | Enable advanced I/O in the processing system (such as USB, Ethernet, Flash, or PCIe® RC). Add I/O-related IP in the PL (such as MIPI, EMAC, or HDMI_TX). Add non-Vitis IP (such as AXI BRAM Controller or Video Processing Subsystem (VPSS) IP). | If these IP have standalone drivers, test them. |
Base software setup | Create a PetaLinux project based on the hardware platform. Enable kernel drivers. Configure boot mode. Configure rootfs. | Linux boots up successfully. Peripherals work properly in Linux. |
Base Component Requirements
Every hardware platform design must contain a Processing System IP block from the IP catalog.
- Versal ACAP, Zynq® UltraScale+™ MPSoC, and Zynq-7000 SoC devices are supported.
- MicroBlaze™ processors are not supported for controlling acceleration kernels, but can be part of the base hardware.
Creating an Embedded Platform
Adding Hardware Interfaces
The following table shows the possible Vitis inputs and the minimal requirements for an acceleration embedded platform.
Inputs | Types Vitis Can Use | Minimum Requirements for AXI MM Kernels |
---|---|---|
Control Interfaces | AXI Master Interfaces from PS or from AXI Interconnect IP or SmartConnect IP | One AXI4-Lite Master for kernel control |
Memory Interfaces | AXI Slave Interfaces | One memory interface for data exchange |
Streaming Interfaces | AXI4-Stream Interfaces | Not required |
Clock | Multiple clock signals | One clock |
Interrupt | Multiple interrupt signals | One Interrupt |
General Requirements
- Every IP used in the platform design that is not part of the standard Vivado IP catalog must be local to the Vivado Design Suite project. References to IP repository paths external to the project are not supported when creating extensible XSA.
- Any platform interface used for linking to kernels by the Vitis compiler must be an AXI4, AXI4-Lite, AXI4-Stream, interrupt, clock, or reset type of interface.
- Any platform IP that has an AXI interface for linking to kernels by the Vitis compiler must also have associated clock pins to enable v++ to correctly infer and insert clock domain crossing logic when needed.
- Custom bus types and hardware interfaces on the platform or on kernels are not supported through the v++ linker --connectivity.sp and --connectivity.sc directives. If a data bus with a custom bus type needs to be connected to kernels by the Vitis compiler, it must be converted to an AXI4, AXI4-Lite, or AXI4-Stream interface.
Project Type
The Vivado project type needs to be set to extensible Vitis platform type.
When creating a new project, select Project is an extensible Vitis platform.
To change an existing Vivado project into an extensible Vitis platform project, enable Project is an extensible Vitis platform in the project settings, or use the following Tcl command:
set_property platform.extensible true [current_project]
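For a fully scripted flow, the property can be set right after project creation. The following is a minimal Tcl sketch; the project name, directory, and part number are placeholders to replace with your own values.
# Sketch: create a project and mark it as an extensible Vitis platform
# (project name, directory, and part are examples)
create_project my_platform ./my_platform -part xczu9eg-ffvb1156-2-e
set_property platform.extensible true [current_project]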
Adding Platform Interfaces
If a component in block design has a PFM property, this component can be recognized
by v++
linker and can be used by the acceleration kernel.
In the Vivado IDE, the PFM interface properties can be set in the Platform Setup window if the project is created as an extensible platform project. They can also be defined manually in the Tcl Console, or by a Tcl script.
The four Platform Interface Tcl APIs include:
- AXI memory-mapped interfaces:
set_property PFM.AXI_PORT { <port_name> {parameters} <port2> {parameters} ...} [get_bd_cells <cell_name>]
- AXI4-Stream interfaces:
set_property PFM.AXIS_PORT { <port_name> {parameters} <port2> {parameters} ...} [get_bd_cells <cell_name>]
- Clocks and resets:
set_property PFM.CLOCK { <port_name> {parameters} <port2> {parameters} ...} [get_bd_cells <cell_name>]
- Interrupts:
set_property PFM.IRQ {pin_name {id id_number range irq_count}} [get_bd_cells <cell_name>]
The requirements for the PFM Properties are:
- The value of a PFM interface property must be specified as a Tcl dictionary, a list of name/"value" pairs. IMPORTANT: The "value" must be quoted, and both the name and value are case-sensitive.
- A bd_cell can have multiple PFM interface definitions. However, for each type of PFM interface, all ports must be set in a single set_property Tcl command.
- For each PFM interface property, the name specified for the port object must match the name of an external port or interface on a bd_cell. Each external port or interface object may only have one PFM interface definition.
- Each different type of PFM interface can have different parameters.
- Setting the PFM property with a NULL ("") string will delete previously defined PFM interfaces.
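For example, previously defined AXI port declarations on a cell can be cleared before being redefined by setting the property to an empty string (the cell name here is only an example):
set_property PFM.AXI_PORT "" [get_bd_cells /smartconnect_0]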
Adding AXI Interfaces
To support AXI memory-mapped kernels, the platform needs to declare at least one AXI control interface with an AXI memory-mapped master port (M_AXI_GP) and one memory interface with an AXI slave port. They can be exported from the PS block directly or through a connected interconnect IP. If the platform is not intended to work with AXI memory-mapped kernels, these interfaces are not required.
The following is the Tcl command syntax:
set_property PFM.AXI_PORT { <port_name> {parameters} <port2> {parameters} ...} [get_bd_cells <cell_name>]
The AXI control interfaces and AXI memory interfaces share the same PFM.AXI_PORT property. They have different memport types.
- AXI control interface can be defined as M_AXI_GP. Memory interfaces use other types: S_AXI_HP, S_AXI_ACP, S_AXI_HPC, or MIG.
- The sptags property for M_AXI_GP port is not supported.
- sptag ID: (Optional) A user-defined ID that should start with an alphabetic character. The ID is case-sensitive. The system port tag (sptag) is a symbolic identifier that represents a class of platform port connections, such as S_AXI_HP, S_AXI_ACP, or M_AXI_GP. Multiple block design platform ports can share the same sptag.
- memory: (Optional) Specify the associated MIG IP instance and address_segment. The memory tag is a unique identifier that combines the Cell Name and Base Name columns in the IP integrator Address Editor. This tag will be associated with connections to the Memory Subsystem HIP, where multiple block design platform ports can share the same memory tag.
Exporting AXI interconnect master and slave ports involves the following requirements:
- All ports on the interconnect used within the platform must precede in index order any declared platform interfaces.
- There can be no gaps in the port indexing.
- The maximum number of master IDs for the S_AXI_ACP port is 8, so on a connected AXI interconnect, the available ports to declare must be among {S00_AXI, S01_AXI, ..., S07_AXI}. Do not declare any ports that are used within the platform itself. Declaring as many ports as possible allows v++ to avoid cascaded axi_interconnects.
- The maximum number of master IDs for an S_AXI_HP or MIG port is 16, so on a connected AXI interconnect, the available ports to declare must be among {S00_AXI, S01_AXI, ..., S15_AXI}. Do not declare any ports that are used within the platform itself. Declaring as many ports as possible allows v++ to avoid cascaded axi_interconnects in generated user systems.
- The maximum number of master ports declared on an interconnect connected to an M_AXI_GP port is 64, so on a connected AXI interconnect, the available ports to declare must be among {M00_AXI, M01_AXI, ..., M63_AXI}. Do not declare any ports that are used within the platform itself. Declaring as many ports as possible allows v++ to avoid cascaded axi_interconnects in generated user systems.
The following shows an example of defining AXI master ports on the AXI Interconnect IP:
set parVal []
for {set i 2} {$i < 64} {incr i} {
lappend parVal M[format %02d $i]_AXI \
{memport "M_AXI_GP"}
}
set_property PFM.AXI_PORT $parVal [get_bd_cells /axi_interconnect_0]
The following shows an example of defining AXI memory ports with MIG on SmartConnect IP:
set parVal []
for {set i 1} {$i < 16} {incr i} {
lappend parVal S[format %02d $i]_AXI \
{memport "MIG" sptag "Bank0"}
}
set_property PFM.AXI_PORT $parVal [get_bd_cells /smartconnect_0]
The following is an example of the PFM.AXI_PORT setting for the control interface and memory interfaces:
set_property PFM.AXI_PORT {
M_AXI_HPM1_FPD {memport "M_AXI_GP"}
S_AXI_HPC0_FPD {memport "S_AXI_HPC" sptag "HPC0" memory "zynq_ultra_ps_e_0 HPC0_DDR_LOW"}
S_AXI_HPC1_FPD {memport "S_AXI_HPC" sptag "HPC1" memory "zynq_ultra_ps_e_0 HPC1_DDR_LOW"}
S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "HP0" memory "zynq_ultra_ps_e_0 HP0_DDR_LOW"}
S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "HP1" memory "zynq_ultra_ps_e_0 HP1_DDR_LOW"}
S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "HP2" memory "zynq_ultra_ps_e_0 HP2_DDR_LOW"}
} [get_bd_cells /ps_e]
In this example, zynq_ultra_ps_e_0 is the instance name of the Zynq UltraScale+ MPSoC module, and HPC0_DDR_LOW is the address range name.
Adding AXI4-Stream Interfaces
To support AXI4-Stream stream kernels, the platform needs to declare the corresponding master or slave AXI4-Stream interfaces.
AXI4-Stream kernel interfaces are specified with the PFM.AXIS_PORT sptag interface property and a matching connectivity.sc command argument to the v++ linker.
The following is the Tcl command syntax:
set_property PFM.AXIS_PORT { <port_name> {parameters} <port2> {parameters} ...} [get_bd_cells <cell_name>]
Argument Description
- Port_name
- AXI4-Stream port name.
- Parameters
- type value: Streaming interface port type. Valid values for type include:
- M_AXIS: A general-purpose AXI master port
- S_AXIS: A high-performance AXI slave port
Example
set_property PFM.AXIS_PORT {AXIS_P0 {type "S_AXIS"}} [get_bd_cells /zynq_ultra_ps_e_0]
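On the application side, the declared platform port is then referenced when linking kernel streams. The following is a minimal v++ configuration file sketch, assuming a kernel instance named my_kernel_1 with a stream output port named out that connects to the AXIS_P0 port declared above; check the v++ linker documentation for the exact --connectivity.sc syntax.
[connectivity]
sc=my_kernel_1.out:AXIS_P0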
Refer to the v++ linker configuration file syntax for how to link an AXI4-Stream interface between kernels and platforms (--connectivity.sc).
Adding Clock and Resets
You can export any clock source with the platform, but for each clock you must also export synchronized reset signals using a Processor System Reset IP block in the platform. The PFM.CLOCK property can be set on a BD cell, external port, or external interface.
The following is the Tcl command for setting the PFM.CLOCK property:
set_property PFM.CLOCK { <port_name> {parameters} \
<port2> {parameters} ...} [get_bd_cells <cell_name>]
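The following is an example sketch that exports two PL clocks from a Zynq UltraScale+ MPSoC PS. It assumes Processor System Reset blocks named proc_sys_reset_0 and proc_sys_reset_1 exist in the design; the instance names and clock IDs are examples only.
set_property PFM.CLOCK {
pl_clk0 {id "0" is_default "true" proc_sys_reset "/proc_sys_reset_0" status "fixed"}
pl_clk1 {id "1" is_default "false" proc_sys_reset "/proc_sys_reset_1" status "fixed"}
} [get_bd_cells /zynq_ultra_ps_e_0]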
Adding Interrupts
Vitis provides a way to automatically connect the kernel output IRQ signal to an IRQ in the platform during the v++ link stage. The following shows the Tcl command syntax:
set_property PFM.IRQ {pin_name {id id_number}} [get_bd_cells <cell_name>]
set_property PFM.IRQ {port_name {id id_number range irq_count}} [get_bd_cells <cell_name>]
Argument Description
- Port_name
- IRQ port name of the bd_cell.
- id_number
- Integer from 0 to 127 to specify the IRQ number, or the starting number if range is specified.
- irq_count
- Used for labeling interfaces that are otherwise subject to parameter propagation for specifying the sizing of a bus (for example, the interrupt controller intr interface).
The following example shows how to enable 32 IRQ inputs to the axi_intc_0 intr port.
set_property PFM.IRQ {intr {id 0 range 32}} [get_bd_cells /axi_intc_0]
The following example shows how to enable 63 IRQs using a cascaded interrupt controller in the VCK190 base platform.
set_property PFM.IRQ {intr {id 0 range 32}} [get_bd_cells /axi_intc_cascaded_1]
set_property PFM.IRQ {In0 {id 32} In1 {id 33} In2 {id 34} In3 {id 35} In4 {id 36} In5 {id 37} In6 {id 38} In7 {id 39} In8 {id 40} \
In9 {id 41} In10 {id 42} In11 {id 43} In12 {id 44} In13 {id 45} In14 {id 46} In15 {id 47} In16 {id 48} In17 {id 49} In18 {id 50} \
In19 {id 51} In20 {id 52} In21 {id 53} In22 {id 54} In23 {id 55} In24 {id 56} In25 {id 57} In26 {id 58} In27 {id 59} In28 {id 60} \
In29 {id 61} In30 {id 62}} [get_bd_cells /xlconcat_0]
Exporting Extensible Platforms
Hardware platforms are encapsulated in XSA file format. There are two kinds of XSA formats: fixed XSA for software development and extensible XSA for acceleration projects. To create a Vitis embedded platform for acceleration flow, you must use an extensible XSA.
When the Vivado project type is set to extensible Vitis platform, the Export Platform option becomes available in the Vivado IDE.
In the Export Hardware Platform window, select the platform type. There are four types of platforms. If the exported platform will only be used to generate binaries that run on hardware boards, choose Hardware. If you expect to run hardware emulation with this platform, choose Hardware Emulation or Hardware and Hardware Emulation. The difference between these options is that if some modules in the current design are not supported by emulation, you should create an emulation-specific design, export it as a Hardware Emulation platform, and then use the Combine XSAs option to combine a hardware XSA and a hardware emulation XSA into one XSA that is capable of performing both jobs.
For simple designs:
- Select Hardware and Hardware Emulation, click Next.
- Select Pre-synthesis for Platform State. Post-implementation is only needed when creating DFX platforms. Click Next.
- Input Platform Properties. Click Next.
- Input the XSA file name and the export target directory. Click Next.
- Check summary and click Finish.
You can also perform this in the command line using the following command:
set_property pfm_name {vendor:board:name:version} [get_files <bd_file>]
write_hw_platform -hw -force <XSA file>
To create and combine a hardware XSA and a hardware emulation XSA, use the following commands:
write_hw_platform -hw <hw_platform>
write_hw_platform -hw_emu <hw_emu_platform>
combine_hw_platform -hw <hw_platform> -hw_emu <hw_emu_platform> -o <combined_platform>
Updating Software Components
Adding XRT to the Root Filesystem
Vitis acceleration applications use XRT to control hardware. XRT provides a unified programming interface from Alveo™ Data Center accelerator cards to embedded use cases.
You must add the XRT kernel driver (zocl) and the user space library (xrt-dev) to the rootfs and sysroot. The xrt-dev package enables you to compile Vitis applications that use the XRT API.
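In a PetaLinux-based flow, one common way to do this is to list the packages in user-rootfsconfig and enable them with petalinux-config -c rootfs. The following is a sketch only; the package names follow the Xilinx XRT recipes and should be confirmed against your PetaLinux and XRT versions.
# project-spec/meta-user/conf/user-rootfsconfig (sketch)
CONFIG_xrt
CONFIG_xrt-dev
CONFIG_zocl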
Updating the Device Tree for ZOCL
The zocl driver interface requires a device tree node to enable the interrupt connection.
The following is an example of the zocl device node.
&amba {
zyxclmm_drm {
compatible = "xlnx,zocl";
status = "okay";
interrupt-parent = <&axi_intc_0>;
interrupts = <0 4>, <1 4>, <2 4>, <3 4>,
<4 4>, <5 4>, <6 4>, <7 4>,
<8 4>, <9 4>, <10 4>, <11 4>,
<12 4>, <13 4>, <14 4>, <15 4>,
<16 4>, <17 4>, <18 4>, <19 4>,
<20 4>, <21 4>, <22 4>, <23 4>,
<24 4>, <25 4>, <26 4>, <27 4>,
<28 4>, <29 4>, <30 4>, <31 4>;
};
};
For more information, refer to the XRT documentation: https://xilinx.github.io/XRT/master/html/yocto.html.
Update Interrupt Controller Input Number
In the block diagram, the interrupt controller is not connected to acceleration kernels. The auto-generated device tree reflects the hardware design of the block diagram and does not take into account that the v++ linker will connect the interrupt signals of kernels to the interrupt controller. To enable these interrupts, override the interrupt input number of the interrupt controllers.
The following is an example of how to override the interrupt controller node parameters in system-user.dtsi.
&axi_intc_0 {
xlnx,kind-of-intr = <0x0>;
xlnx,num-intr-inputs = <0x20>;
interrupt-parent = <&gic>;
interrupts = <0 89 4>;
};
Declaring the Platform with /etc/xocl.txt
The platform name can be written into /etc/xocl.txt in the embedded platform rootfs so that XRT knows which platform it is running on. The host application can use the XRT API to get the platform name and check the compatibility of the xclbin and host application with the platform.
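For example, assuming a platform named custom_zcu104_base (a hypothetical name), /etc/xocl.txt contains just that name on a single line:
custom_zcu104_base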
Adjusting the CMA Size
XRT uses CMA for buffer object allocation. You must reserve sufficient memory for CMA in bootargs or the device tree to prevent running out of memory during acceleration application runtime.
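One way to reserve the memory is through the cma= kernel boot argument. The following is a sketch of a chosen node in system-user.dtsi; the 512M value and the other boot arguments are examples and must be adjusted for your design.
/ {
    chosen {
        /* cma=512M reserves 512 MB of CMA; size this for your application */
        bootargs = "console=ttyPS0,115200 root=/dev/mmcblk0p2 rw rootwait cma=512M";
    };
};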
Packaging a Vitis Acceleration Platform
With all requirements prepared for Vitis acceleration platforms, you can package them together and generate the final Vitis acceleration platform. You can do this using either the Vitis IDE or the Xilinx Software Command-Line Tool (XSCT).
- In the Vitis IDE, create a platform project to create a Vitis platform.
- Using XSCT, you can use the platform command to create a platform and the domain command to add domains into a platform. For more information about XSCT, refer to Xilinx Software Command-Line Tool in the Embedded Software Development flow.
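The following is a minimal XSCT sketch; the platform name, XSA path, processor, and output directory are examples, and additional domain configuration (sysroot, boot directory, BIF, and so on) is usually required. See the XSCT reference for the full option list.
# XSCT sketch for packaging a platform (names and paths are examples)
platform create -name custom_platform -hw ./custom_hw.xsa -out ./platform_repo -no-boot-bsp
domain create -name xrt -proc psu_cortexa53 -os linux
platform generate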
The platform is an encapsulation of multiple hardware and software components. This encapsulation makes it easier to hand off deliverables from hardware-oriented engineers to application developers.
The following files and information are packaged into the platform.
- Hardware Specification
- This is an extensible XSA file.
- Software Components
- These are added to the platform as a Linux domain that enables OpenCL runtime.
Root Filesystem
FAT32 and Ext4 partition types are supported by Vitis. The root filesystem is optional in the platform creation step because it can be assigned during the Vitis application creation step.
An image directory needs to be set during platform creation. All contents of this directory will be packaged into the final SD card image. If the target filesystem is FAT32, the files will be placed in the SD card root directory; if the target filesystem is Ext4, the files will be placed in the root directory of the first FAT32 partition.
Boot Components
A BIF file must be provided so that the application build process can package the boot image.
The following is an example of a BIF file:
/* linux */
the_ROM_image:
{
[fsbl_config] a53_x64
[bootloader] <fsbl.elf>
[pmufw_image] <pmufw.elf>
[destination_device=pl] <bitstream>
[destination_cpu=a53-0, exception_level=el-3, trustzone] <bl31.elf>
[destination_cpu=a53-0, exception_level=el-2] <u-boot.elf>
}
A boot components directory, including all the files described in the BIF, should also be provided. In this example, the components directory provides fsbl.elf, pmufw.elf, bl31.elf, and u-boot.elf. These boot components can be generated by PetaLinux.
In the Vitis application build and package stage, v++ looks for the files in the boot components directory, replaces the placeholders with the real file names and paths, and then calls Bootgen to generate BOOT.BIN.
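For reference, the application-side packaging step is typically a v++ --package invocation along the following lines; the file names and paths are examples, and the full set of --package options is described in the Vitis documentation.
v++ -p -t hw --platform custom_platform.xpfm \
    --package.out_dir ./package \
    --package.boot_mode sd \
    --package.rootfs ./rootfs.ext4 \
    --package.kernel_image ./Image \
    ./binary_container_1.xclbin -o binary_container_1_packaged.xclbin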
Testing Your Platform
Before delivering the platform to the application developers, you should run some basic platform tests to make sure it works properly for acceleration applications.
In general, make sure the platform can pass the following tests:
- Boot test
- The BIT file produced by the Vivado project implementation (from Adding Hardware Interfaces) and the PetaLinux-generated images (from Updating Software Components) should successfully boot to the Linux console.
- Platforminfo test
- The platform generated in Packaging a Vitis Acceleration Platform should have a proper platforminfo report for clock and memory information.
- XRT basic test
- The XRT xbutil query utility should be able to run on the target board and properly report platform information (see the command sketch after this list).
- Vadd test
- Use Vitis to generate a vector addition sample application with the platform. The generated application and xclbin should print test pass on the board.
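The platforminfo and xbutil checks above can be run as follows; the platform path shown is an example.
# On the development host:
platforminfo ./platform_repo/custom_platform/export/custom_platform/custom_platform.xpfm
# On the target board, after booting Linux:
xbutil query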
Enabling Hardware Emulation for Extensible XSA
The following steps are used for custom platform developers.
- Create a Vivado project with the necessary BD, RTL, test bench, and other sources.
- Note that in 2020.1, only a BD can be used in hardware emulation, but starting with 2020.2, other sources are also allowed.
- For Versal ACAPs only, the test bench needs to include the BD wrapper instead of including the BD directly, because Vitis performs work at this level to insert the NoC into simulation.
- For Versal ACAPs only, to enable AI Engine in the Vitis platform, the AI Engine block needs to be configured to have only one slave AXI4 memory-mapped port enabled and connected to the NoC. Vitis, based on the AI Engine graph software, makes additional auto-connections during the v++ linking stage.
- For DFX platforms, specify the correct PFM properties in the dynamic region BD so that the Vitis tools can attach accelerators correctly.
- Update the design for hardware emulation and package it into an XSA.
- Before packaging the design into an XSA, it is important that your design steps through the simulator correctly.
- For Versal ACAPs only, prepare the platform design to enable SystemC models. Update the CIPS and NoC IP settings to change the SELECTED_SIM_MODEL property to TLM. This ensures that for the CIPS IP, the design uses the QEMU model on which software can run. Similarly, for Zynq®-7000 and Zynq® UltraScale+™ MPSoC devices, set SELECTED_SIM_MODEL on the processing system IP instance. The following Tcl commands can be used in the design; also set the parameter to enable SystemC simulation in Vivado:
foreach tlmCell [get_bd_cells * -hierarchical -filter {VLNV =~ "*:*:axi_noc:*" || VLNV =~ "*:*:versal_cips:*"}] {
    set_property SELECTED_SIM_MODEL tlm $tlmCell
}
set_param bd.generateHybridSystemC true
- Create a test bench in the sim_1 fileset and instantiate the <top> module of your design. For Versal ACAPs, Vivado requires that the user test bench not instantiate the <top> module directly. Instead, it should instantiate the <top>_sim_wrapper module. A file called <top>_sim_wrapper.v is generated when you call the launch_simulation -scripts_only command. The interface of this module is the same as your <top> module, but it instantiates additional simulation models related to an aggregated NoC module created from the various logical NoC modules instantiated in the design. Use the following Vivado Tcl commands to generate the necessary NoC simulation files and use them in your simulation sources.
# Ensure that your top of synthesis module is also set as top for simulation
set_property top <rtl_top> [get_filesets sim_1]
# Generate simulation top for your entire design which would include
# aggregated NOC in the form of xlnoc.bd
launch_simulation -scripts_only
update_compile_order -fileset sim_1
# Set the auto-generated <rtl_top>_sim_wrapper as the sim top
set_property top <rtl_top>_sim_wrapper [get_filesets sim_1]
update_compile_order -fileset sim_1
# Generate the final simulation script which will compile
# the <syn_top>_sim_wrapper and xlnoc.bd modules also
launch_simulation -scripts_only
launch_simulation -step compile
launch_simulation -step elaborate
- Compile the design, go through the above steps, and start simulation. Because the design is configured to use QEMU, the CIPS IP will not generate any transactions, as there is no software present when simulating in the Vivado simulator. You will see the following ERROR message in the Vivado simulation, but it indicates that the basic design loads correctly in the simulator.
##############################################################
#
# Simulation does not work as Versal CIPS Emulation (SELECTED_SIM_MODLE=tlm) only works with Vitis tool(launch_emulator.py tool in Vitis)
#
##############################################################
ERROR: [Simtcl 6-50] Simulation engine failed to start: The Simulation shut down unexpectedly during initialization.
Note: To confirm that the design will have correct transactions, you can optionally perform a simulation of the design using the CIPS VIP first, before changing it to use TLM (QEMU). First, keep the SELECTED_SIM_MODEL property set to RTL for the NoC and CIPS IP. Also, create a different test bench that drives the CIPS VIP and meets the requirements of the NoC Verilog model. Refer to the CIPS VIP and NoC IP documentation for additional details on how to set up a test bench for Verilog-based simulation.
- Package the hardware-emulation-only XSA.
IMPORTANT: The source files for all elements of the Vivado project must be local to the project prior to exporting it as an XSA, or an error can be returned when using the platform in the Vitis tool.
- Use the following Tcl commands to export a hardware emulation platform:
set_property platform.platform_state "pre_synth" [current_project]
write_hw_platform -hw_emu -file platform_hw_emu.xsa
- This XSA can be used with pre-built Linux images, or with PetaLinux to create a custom Linux image, to create a full platform. Then, the remainder of the Vitis tools can be used to add a kernel to the design with XRT.
Special Considerations for Embedded Platform Creation
Divide Logic Functions to Platform and Kernel
As designs on FPGAs and SoCs become more complex, it is common for multiple developers or teams to work on a design together. The Vitis software platform provides a clear boundary between application developers and platform developers. Platform developers might include board developers, BSP developers, system software developers, and so on.
In the view of a system architect, some logic functions might be in a gray area: they can be packaged with the platform, or they can work as an acceleration kernel. To help divide the system blocks, here are some general guidelines.
- The basic consideration for classifying a function as a kernel or as platform logic is whether it is application-related logic.
- Platforms should be more stable than applications. Application function changes should only happen in the software and kernel.
- Platforms abstract hardware. When changing the hardware board, the application should need no change, or very little change if necessary, to target the new hardware.
- Follow the constraints and limitations of the Vitis tool. For example:
- Only three types of interfaces are supported by Vitis acceleration kernels: AXI MM, AXI4-Lite, and AXI4-Stream.
- AXI kernels do not support external I/O pins.
The following table shows the recommended platforms and kernels for logic types.
Logic | Platform | Kernel |
---|---|---|
Hard Processors (PS of Zynq and Zynq UltraScale+ MPSoC) | Only in Platform | |
Soft Processors | Preferred in Platform | OK as an RTL kernel |
I/O Block (External pins, MIPI, PHY, etc.) | Only in Platform | |
Related IP for I/O Block (DMA for PCIe®, MAC for Ethernet, etc.) | Generally in platform because the interface between I/O and IP are not AXI. | OK as Kernel if the interfaces between I/O block and IP are AXI. |
IP with non-AXI interface | Only in Platform | OK if the interface can be changed to AXI MM or AXI4-Stream |
Traditional memory mapped IP which has Linux driver (VPSS, etc.) | Only in Platform | |
HLS AXI memory mapped IP | OK in Platform. You have to write control software. | Preferred as Kernel. Controlled by XRT. |
Acceleration memory mapped IP that follows the Vitis kernel register standard and is open to XRT | | Preferred as Kernel |
Vitis Libraries | | Only work as Kernel |
Free running IP with AXI4-Stream interface | OK | OK |
References
For more information on embedded platforms, see the following links: