Xplorer X1600E EDSFF Accelerator
Accelerate low-latency edge AI inference with higher system-level efficiency and lower power vs. CPU, GPU and FPGA solutions
The Blaize Xplorer X1600E is an enterprise-grade accelerator designed for AI inference at the edge. Its plug-in EDSFF interface lets servers and custom products easily integrate AI inference. The X1600E is based on the Blaize Graph Streaming Processor (GSP®) architecture, which combines high processing power with the energy efficiency ideal for AI inferencing workloads at the edge. The EDSFF form factor integrates easily into a 1U rack system for large AI inference deployments. With low power, low latency, and more efficient use of memory, the X1600E supports computer vision applications and new AI inferencing solutions across a range of edge smart vision use cases, such as automated optical inspection, traffic and parking management, and more.
Programmability to Build Complete AI Apps, Keep Pace with Rapid Evolution of AI Models
The X1600E is a software-defined AI inference accelerator, making it easy to update and maintain after deployment. The X1600E GSP architecture is designed to run workloads efficiently in a streaming fashion, and it is fully programmable via the
Blaize Picasso SDK and AI Studio. The hardware and software are purpose-built so developers can create complete edge AI inference applications, optimized for deployment and for consistent updates by end users.
Edge and Enterprise Servers & Applications
Features