Versal AI Edge Series

Delivering Breakthrough AI Performance/Watt For Real-Time Systems

Product Advantages

The Versal™ AI Edge series delivers 4X AI performance/watt vs. leading GPUs¹ for real-time systems in automated driving, predictive factory and healthcare systems, multi-mission payloads in aerospace & defense, and a breadth of other applications. More than just AI, the Versal AI Edge series accelerates the whole application from sensor to AI to real-time control, all with the highest levels of safety and security to meet critical standards such as ISO 26262 and IEC 61508. As an adaptive compute acceleration platform, the Versal AI Edge series allows developers to rapidly evolve their sensor fusion and AI algorithms while leveraging the world’s most scalable device portfolio for diverse performance and power profiles from edge to endpoint.


1: Versal AI Edge VE2802 vs. NVIDIA Jetson AGX Xavier (MAXN mode), ResNet50 224x224, batch=1


Scalar Engines

Scalar Engines deliver power-efficient embedded compute with the safety and security required in real-time systems. The dual-core Arm® Cortex®-A72 application processor is ideal for running Linux-class applications, while the dual-core Arm Cortex-R5F real-time processor runs safety-critical code for the highest levels of functional safety (ASIL and SIL). The platform management controller (PMC) is based on a triple-redundant processor and manages device operation, including platform boot, advanced power and thermal management, security, safety, and reliability across the platform.


Adaptable Engines

At the heart of the Versal architecture’s flexibility are its Adaptable Engines, enabling the integration of any sensor, connectivity to any interface, and the flexibility to handle any workload. Capable of both parallelism and determinism, Adaptable Engines can implement and adapt sensor fusion algorithms, accelerate pre- and post-processing of data across the pipeline, implement deterministic networking and motor control for real-time response, isolate safety-critical functions for fail-safe operation, and allow for hardware redundancies and fault resilience.


Intelligent Engines

Consisting of both AI Engines and DSP Engines, Intelligent Engines support a breadth of workloads common in edge applications, including AI inference, image processing, and motion control. AI Engines are a scalable array of vector processors with distributed local memory, delivering breakthrough AI performance/watt. DSP Engines are based on the proven slice architecture of previous-generation Zynq™ adaptive SoCs, now with integrated floating-point support, and are ideal for wireless and image signal processing, data analytics, motion control, and more.

 


Safety and Security

Versal adaptive SoCs were built from the ground up to meet the most stringent safety requirements in industrial and automotive applications, including ISO 26262 and IEC 61508 for safety and IEC 62443 for security. The Versal architecture is partitioned with safety features in each domain, as well as global resources to monitor and eliminate common-cause failures. New security features over previous-generation adaptive SoCs improve protection against cloning, IP theft, and cyberattacks, including higher-bandwidth AES and SHA encryption/decryption, glitch detection, and more.


Accelerator RAM

The accelerator RAM features 4MB of on-chip memory. The memory block is accessible to all compute engines and helps eliminate the need to go to external memory for critical compute functions such as AI inference. This enhances the already flexible memory hierarchy of the Versal architecture and improves AI performance/watt. The accelerator RAM is also ideal for holding safety-critical code that exceeds the capacity of the real-time processor’s OCM, improving the ability to meet ASIL-C and ASIL-D requirements.
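
As a rough illustration of the sizing argument above, the sketch below (plain C++, with hypothetical layer shapes that are not from any AMD reference design) adds up the INT8 weight footprint of a small network and checks it against the 4 MB accelerator RAM and the 256 KB real-time processor OCM.

```cpp
// Back-of-the-envelope sizing check: does a quantized model's working set fit
// in the 4 MB (32 Mb) accelerator RAM? Layer shapes below are hypothetical
// placeholders, not a specific AMD reference design.
#include <cstdio>
#include <cstdint>
#include <vector>

struct ConvLayer {
    int in_ch, out_ch, k;          // channels and square kernel size
    std::size_t weightBytesInt8() const {
        return static_cast<std::size_t>(in_ch) * out_ch * k * k;  // 1 byte per INT8 weight
    }
};

int main() {
    constexpr std::size_t kAccelRamBytes = 4u * 1024u * 1024u;   // 4 MB accelerator RAM
    constexpr std::size_t kRpuOcmBytes   = 256u * 1024u;         // 256 KB R5F on-chip memory

    // Hypothetical backbone tail kept on-chip for low-latency inference.
    std::vector<ConvLayer> layers = {
        {128, 128, 3}, {128, 256, 3}, {256, 256, 3}, {256, 512, 1},
    };

    std::size_t total = 0;
    for (const auto& l : layers) total += l.weightBytesInt8();

    std::printf("INT8 weights: %zu bytes (%.2f MB)\n", total, total / (1024.0 * 1024.0));
    std::printf("Fits in accelerator RAM: %s\n", total <= kAccelRamBytes ? "yes" : "no");
    std::printf("Would fit in R5F OCM alone: %s\n", total <= kRpuOcmBytes ? "yes" : "no");
    return 0;
}
```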


Programmable I/O

The Versal adaptive SoC’s programmable I/O allows connection to any sensor or interface, as well as the ability to scale for future interface requirements. Designers can configure the same I/O for sensors, memory, or network connectivity and budget device pins as needed. Different I/O types provide a wide range of speeds and voltages for both legacy and next-generation standards, e.g., 3.2Gb/s DDR for server-class memory interfacing, 4.2Gb/s LPDDR4x for the highest memory bandwidth per pin, and native MIPI support to handle 8-megapixel sensors and beyond, critical to Level 2 ADAS and above.
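
To show how the quoted figures translate into pin and lane budgeting, the hedged estimate below works through an assumed 8-megapixel, 30 fps, RAW12 sensor stream against LPDDR4x pin bandwidth; the frame rate, bit depth, and 32-bit bus width are illustrative assumptions, not device specifications.

```cpp
// Rough I/O bandwidth budgeting for the figures quoted above: an 8 MP sensor
// stream versus LPDDR4x pin bandwidth. Frame rate, bit depth, and bus width
// are illustrative assumptions, not device specifications.
#include <cstdio>

int main() {
    // Assumed sensor: ~8 MP (3840 x 2160), RAW12, 30 fps.
    const double pixels     = 3840.0 * 2160.0;
    const double bitsPerPix = 12.0;
    const double fps        = 30.0;
    const double sensorGbps = pixels * bitsPerPix * fps / 1e9;

    // LPDDR4x at 4.2 Gb/s per pin, assuming a 32-bit channel.
    const double pinGbps  = 4.2;
    const double busWidth = 32.0;
    const double memGBps  = pinGbps * busWidth / 8.0;   // raw peak, GB/s

    std::printf("Sensor stream: %.2f Gb/s (%.2f GB/s)\n", sensorGbps, sensorGbps / 8.0);
    std::printf("LPDDR4x x32 raw peak: %.1f GB/s\n", memGBps);
    return 0;
}
```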



Applications

Breakthrough Compute from Edge to Endpoint


ADAS and Automated Drive

AI compute performance is a key requirement for automotive Tier-1s and OEMs targeting SAE Level 3 and beyond, alongside stringent thermal, reliability, security, and safety requirements. The Versal™ AI Edge series was architected for the highest AI performance/watt in power- and thermally-constrained systems. As a heterogeneous compute platform, Versal AI Edge adaptive SoCs match the right processing engine to the workload across the vehicle: custom I/O for any needed combination of radar, LiDAR, infrared, GPS, and vision sensors; Adaptable Engines for sensor fusion and pre-processing; AI Engines for inference and perception processing; and Scalar Engines for safety-critical decision making. Versal AI Edge adaptive SoCs are part of the AMD automotive-qualified (XA) product portfolio and are architected to meet stringent ISO 26262 requirements.


Collaborative Robotics

Robotic systems integrate precision control, deterministic communications, machine vision, responsive AI, cybersecurity, and functional safety into a single ‘system of systems.’ Versal AI Edge adaptive SoCs enable a modular and scalable approach to robotics by providing a single device that fuses heterogeneous sensors for robotic perception, delivers precise and deterministic control over a scalable number of axes, isolates safety-critical functions, accelerates motion planning, and applies AI to augment safety controls for dynamic, context-based execution. The Versal AI Edge series also accelerates real-time analytics with machine learning to support predictive maintenance and deliver actionable insights via cybersecure (IEC 62443) network connectivity.


Unmanned Aerial Vehicles & Multi-Mission Payloads

The Versal AI Edge series is optimized for real-time, high-performance applications in the most demanding environments, such as multi-mission drones and UAVs. A single Versal AI Edge device can support multiple inputs including comms datalinks, navigation, radar for target tracking and IFF (Identification Friend or Foe), and electro-optical sensors for visual reconnaissance. The heterogeneous engines aggregate and pre-process incoming data and sensor input, perform waveform processing and signal conditioning, and ultimately perform low-latency AI for target tracking and flight-path optimization, as well as cognitive RF to identify adversarial signals or channel attacks. The Versal AI Edge series delivers both the intelligence and the low SWaP (size, weight, and power) needed for multi-mission, situationally aware UAVs.


Ultrasound Imaging

Medical devices are increasingly required to be smaller, portable, and battery-powered for point-of-care applications, all without compromising patient safety or regulatory compliance. The Versal AI Edge series accelerates parallel beamforming and real-time image processing to create higher-quality images and analysis, with the power efficiency needed for long-running, battery-powered portable ultrasound units. As a heterogeneous compute platform, the Versal AI Edge series implements all the different structures across the pipeline. Adaptable Engines perform acquisition functions, including control of the analog front end. AI Engines accelerate advanced imaging techniques, as well as machine learning for diagnostic assistance and efficiency improvements. The Arm® subsystem hosts the Linux-class OS for orchestrating, updating, and providing infrastructure across the data pipeline. The Versal AI Edge series allows for scalability from portable, to desktop, to cart-based ultrasound solutions.
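
For readers unfamiliar with the beamforming step mentioned above, the following minimal C++ sketch shows scalar delay-and-sum receive beamforming for a single focal point; the array geometry, sampling rate, and RF data are placeholders, and a production design would map this arithmetic onto the DSP and AI Engines rather than run it as written.

```cpp
// Minimal scalar reference for delay-and-sum receive beamforming, the kind of
// kernel that would be parallelized across DSP and AI Engines in a real
// design. Array geometry, sampling rate, and focal point are placeholders.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int    numElements = 64;        // transducer elements
    const int    numSamples  = 2048;      // samples per element (one RF line)
    const double fs          = 40e6;      // sampling rate, Hz
    const double c           = 1540.0;    // speed of sound in tissue, m/s
    const double pitch       = 0.3e-3;    // element pitch, m
    const double focusDepth  = 30e-3;     // focal depth, m

    // Placeholder RF data: one channel per element (all zeros here).
    std::vector<std::vector<float>> rf(numElements, std::vector<float>(numSamples, 0.0f));

    std::vector<float> beamformed(numSamples, 0.0f);
    for (int e = 0; e < numElements; ++e) {
        // Geometric path difference from element e to the focal point.
        double x      = (e - (numElements - 1) / 2.0) * pitch;
        double path   = std::sqrt(x * x + focusDepth * focusDepth);
        int delaySmp  = static_cast<int>(std::round((path - focusDepth) / c * fs));

        for (int n = 0; n < numSamples; ++n) {
            int idx = n + delaySmp;
            if (idx >= 0 && idx < numSamples)
                beamformed[n] += rf[e][idx];    // coherent sum across the aperture
        }
    }
    std::printf("Beamformed %d samples from %d channels\n", numSamples, numElements);
    return 0;
}
```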

Product Table

Versal™ AI Edge Series Features Overview

AI / ML Performance

  VE2002 VE2102 VE2202 VE2302 VE1752 VE2602 VE2802
AI Engine – INT8x4 (TOPS) 11 16 32 45 101 202 405
AI Engine – INT8 (TOPS) 5 8 16 23 101 101 202
DSP Engine – INT8 (TOPS) 0.6 1.2 2.2 3.2 9.1 6.8 9.1
Adaptable Engine – INT4 (TOPS) 2 5 13 19 56 47 65
Adaptable Engine – INT8 (TOPS) 1 1 3 5 14 12 17

Scalar Engines Features

  VE2002 VE2102 VE2202 VE2302 VE1752 VE2602 VE2802
Application Processing Unit Dual-core Arm® Cortex®-A72, 48KB/32KB L1 Cache w/ parity & ECC; 1MB L2 Cache w/ ECC
Real-Time Processing Unit Dual-core Arm Cortex-R5F, 32KB/32KB L1 Cache, and 256KB TCM w/ECC
Memory 256KB On-Chip Memory w/ECC
Connectivity Ethernet (x2); UART (x2); CAN-FD (x2); USB 2.0 (x1); SPI (x2); I2C (x2)

Intelligent Engines Features

  VE2002 VE2102 VE2202 VE2302 VE1752 VE2602 VE2802
AI Engine-ML 8 12 24 34 0 152 304
AI Engines 0 0 0 0 304 0 0
DSP Engines 90 176 324 464 1,312 984 1,312

Adaptable Engines Features

  VE2002 VE2102 VE2202 VE2302 VE1752 VE2602 VE2802
System Logic Cells (K) 44 80 230 329 981 820 1,139
LUTs  20,000 36,608 105,000 150,272 448,512 375,000 520,704

Platform Features

  VE2002 VE2102 VE2202 VE2302 VE1752 VE2602 VE2802
Accelerator RAM (Mb) 32 32 32 32 0 0 0
Total Memory (Mb) 46 54 86 103 253 243 263
NoC Master / NoC Slave Ports 2 2 5 5 21 21 21
CCIX & PCIe® w/ DMA (CPM) - - - - 1x Gen4x16 w/ CCIX 1x Gen4x16 w/ CCIX 1x Gen4x16 w/ CCIX
PCI Express® - - 1x Gen4x8 1x Gen4x8 4x Gen4x8 4x Gen4x8 4x Gen4x8
40G Multirate Ethernet MAC 0 0 1 1 2 2 2
Video Decoder Engines (VDEs) - - - - - 2 4
GTY Transceivers 0 0 0 0 44 0 0
GTYP Transceivers 0 0 8 8 0 32 32

Documentation


Versal Design Guidance and Documentation

AMD provides a breadth of documentation, resources, and methodologies to accelerate your development on the Versal™ architecture. If you’re not sure where to begin with Versal adaptive SoCs, the Design Flow Assistant is an interactive guide to help you create a development strategy, while the Design Process Hubs are a visual and streamlined reference to all Versal documentation by design process.


Get Started

Early Access Program

The Versal™ AI Edge series is currently in Early Access. Contact your local AMD sales representative to apply for the Early Access program or visit the Contact Sales page. Leverage the resources below to learn more about design tools and design methodologies for the Versal adaptive SoC architecture. 


Integrated Software and Hardware Platform for All Developers

With an inherently software-programmable silicon infrastructure, the Versal adaptive SoC is designed from the ground up to be software-centric. The enhanced Vivado™ ML Editions introduce a new system design methodology and development capabilities such as a traffic analyzer, a NoC compiler, data flow modeling, and more. A high-speed, unified, cohesive debug environment accelerates debug and trace across Scalar, Adaptable, and Intelligent Engines.
Download Vivado ML Editions >

The Vitis™ unified software platform provides comprehensive core development kits and libraries that use hardware-acceleration technology. The platform provides an efficient, convenient, and unified software environment from the cloud to the edge. The Vitis unified software platform is free and offers an extensive set of open-source, performance-optimized libraries that deliver out-of-the-box acceleration with minimal to zero code changes to your existing applications.
Download Vitis Unified Software Platform >
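
As a sketch of what minimal host-code changes can look like, the example below drives a hypothetical "vadd" accelerator through the XRT native C++ API; the xclbin path, kernel name, and argument order are assumptions for illustration only.

```cpp
// Sketch of Vitis/XRT native C++ host code driving a hypothetical "vadd"
// accelerator kernel. The xclbin path and kernel name are assumptions for
// illustration; the xrt::device / xrt::kernel / xrt::bo calls follow the
// XRT native API.
#include <cstdio>
#include "xrt/xrt_device.h"
#include "xrt/xrt_kernel.h"
#include "xrt/xrt_bo.h"

int main() {
    constexpr int N = 4096;
    const std::size_t bytes = N * sizeof(int);

    auto device = xrt::device(0);                          // first enumerated device
    auto uuid   = device.load_xclbin("vadd.xclbin");       // hypothetical binary
    auto kernel = xrt::kernel(device, uuid, "vadd");       // hypothetical kernel name

    // Device buffers bound to the kernel's argument memory banks.
    auto bo_a   = xrt::bo(device, bytes, kernel.group_id(0));
    auto bo_b   = xrt::bo(device, bytes, kernel.group_id(1));
    auto bo_out = xrt::bo(device, bytes, kernel.group_id(2));

    auto a = bo_a.map<int*>();
    auto b = bo_b.map<int*>();
    for (int i = 0; i < N; ++i) { a[i] = i; b[i] = 2 * i; }

    bo_a.sync(XCL_BO_SYNC_BO_TO_DEVICE);
    bo_b.sync(XCL_BO_SYNC_BO_TO_DEVICE);

    auto run = kernel(bo_a, bo_b, bo_out, N);              // launch with scalar size arg
    run.wait();

    bo_out.sync(XCL_BO_SYNC_BO_FROM_DEVICE);
    auto out = bo_out.map<int*>();
    std::printf("out[10] = %d\n", out[10]);                // expect 30 for vadd
    return 0;
}
```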


Start Developing Using the Versal AI Core VCK190 Evaluation Kit

Designers who are targeting a Versal AI Edge device can get started now with the Versal AI Core VCK190 Evaluation Kit. Versal AI Edge devices are based on the same architecture as the Versal AI Core series, with common architectural blocks such as Scalar Engines (Arm® processing subsystem), Adaptable Engines (programmable logic), AI Engines*, network on chip (programmable NoC), and connectivity blocks including PCIe® and DDR4. The evaluation kit has everything you need to jump-start your design, including the ability to perform system testing, evaluate key interfaces, and adopt the adaptive SoC design methodology. Versal AI Edge adaptive SoC Evaluation Kits will be available in the 2nd half of 2022. 

Learn more about the Versal AI Core series VCK190 Evaluation Kit >

*AI Engines are available in the VE1752 device; all other Versal AI Edge devices feature AI Engine-ML.


Training Courses



Video

Featured Videos

