
Xplorer X1600P-Q PCIe Accelerator

Accelerate low-latency edge AI inference with higher efficiency and lower power than CPU, GPU, and FPGA solutions

The Blaize® Xplorer X1600P-Q accelerator is designed for AI inference at the edge. Its plug-in PCIe interface lets industrial PCs, servers, and other products integrate AI inference easily. The X1600P-Q is based on the Blaize Graph Streaming Processor® (GSP®) architecture, which delivers high processing power with the energy efficiency that edge AI inference workloads demand. The card comes in two form factors, half-height, half-length (HHHL) and full-height, full-length (FHFL) PCIe, making it suitable for many different industrial PC and server systems. With low power, low latency, and efficient use of memory, the X1600P-Q can serve computer vision applications and new AI inference solutions across a range of edge smart vision use cases.

Programmability to Build Complete AI Apps and Keep Pace with the Rapid Evolution of AI Models

The X1600P-Q is a software-defined AI inference accelerator, making it easy to update and maintain after deployment. Its Graph Streaming Processor (GSP) architecture is designed to run workloads efficiently in a streaming fashion and is fully programmable via the Blaize Picasso SDK and Blaize AI Studio software. The hardware and software are purpose-built so developers can create complete edge AI inference applications optimized both for deployment and for consistent updates by end users.
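To illustrate the "streaming fashion" of execution described above, here is a minimal, hypothetical Python sketch of a graph-streaming pipeline. None of these function names come from the Blaize Picasso SDK or AI Studio; they are stand-ins showing how chained streaming stages let each frame flow through the whole graph without materializing full intermediate buffers between nodes.

```python
# Hypothetical sketch of graph-streaming execution (not the Picasso SDK API):
# each node consumes results as they arrive instead of waiting for a full
# intermediate buffer from the previous node.

def preprocess(frames):
    for f in frames:
        yield [x / 255.0 for x in f]   # normalize pixel values

def infer(tensors):
    for t in tensors:
        yield sum(t)                   # stand-in for a model forward pass

def postprocess(scores):
    for s in scores:
        yield "object" if s > 1.0 else "background"

frames = [[10, 200, 255], [0, 1, 2]]
# Stages are chained as generators, so frame 0 streams through the whole
# graph before frame 1 is even read: streaming, not batch-and-wait.
results = list(postprocess(infer(preprocess(frames))))
print(results)  # ['object', 'background']
```

Because the stages are lazy generators, per-frame latency stays low and memory use is bounded by a single frame in flight, which is the property the GSP architecture targets in hardware.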

Edge and Enterprise AI Inference Applications

  • Smart Parking & Traffic Management
  • Smart Retail
  • Industrial PCs
  • Warehouse and Factory Safety
  • Autonomous Optical Inspection
  • Network Video Recorders, Security Systems

Features

  • Four Blaize 1600 SoCs, each with 16 GSP cores, providing 64-80 TOPS
  • Soft ISP available to run on Blaize 1600 SoC
  • 16 GB LPDDR4
  • PCIe Gen 3.0, 16 lanes
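The PCIe Gen 3.0 x16 host interface listed above bounds how fast frames and results can move between the host and the accelerator. A quick back-of-the-envelope calculation (using the PCIe 3.0 rate of 8 GT/s per lane with 128b/130b encoding) gives the theoretical peak link bandwidth:

```python
# Theoretical peak bandwidth of the X1600P-Q's PCIe Gen 3.0 x16 link.
# PCIe 3.0 signals at 8 GT/s per lane with 128b/130b line encoding.
GT_PER_SEC = 8e9          # transfers per second, per lane
ENCODING = 128 / 130      # 128b/130b encoding efficiency
LANES = 16

bytes_per_sec = GT_PER_SEC * ENCODING * LANES / 8
print(f"{bytes_per_sec / 1e9:.2f} GB/s")  # ~15.75 GB/s per direction
```

Real-world throughput is lower once protocol and DMA overheads are accounted for, but this ceiling comfortably exceeds the bandwidth needed to stream multiple high-resolution camera feeds to the card.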