The Google Coral USB Accelerator adds an Edge TPU coprocessor to your system. It includes a USB Type-C socket you can connect to a host computer, enabling high-speed machine learning inferencing on a wide range of systems simply by plugging it into a USB port.
The on-board Edge TPU is a small ASIC designed by Google that accelerates TensorFlow Lite models in a power-efficient manner: it is capable of performing 4 trillion operations per second (4 TOPS) using 2 watts of power, which works out to 2 TOPS per watt. For example, one Edge TPU can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 frames per second. This on-device ML processing reduces latency, increases data privacy, and removes the need for a constant internet connection.
Key features:
Performs high-speed ML inferencing
The on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 FPS, in a power-efficient manner.
Supports all major platforms
Connects via USB to any system running Debian Linux (including Raspberry Pi), macOS, or Windows 10.
Supports TensorFlow Lite
No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU, as sketched in the example below.
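The snippet that follows is a minimal sketch of that workflow, assuming the Edge TPU runtime (libedgetpu) and the tflite_runtime Python package are installed, and using "model_edgetpu.tflite" as a placeholder name for a model already compiled with the Edge TPU Compiler:

    # Minimal inference sketch. Assumptions: tflite_runtime and the Edge TPU
    # runtime are installed; "model_edgetpu.tflite" is a placeholder for a
    # model compiled with the Edge TPU Compiler.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Load the model and attach the Edge TPU delegate so supported ops run on
    # the accelerator. The delegate library is libedgetpu.so.1 on Linux,
    # libedgetpu.1.dylib on macOS, and edgetpu.dll on Windows.
    interpreter = tflite.Interpreter(
        model_path="model_edgetpu.tflite",
        experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy uint8 tensor shaped like the model's input, then run it.
    dummy_input = np.zeros(input_details[0]["shape"], dtype=np.uint8)
    interpreter.set_tensor(input_details[0]["index"], dummy_input)
    interpreter.invoke()

    # Retrieve the raw output; a real application would decode this into labels.
    output = interpreter.get_tensor(output_details[0]["index"])
    print(output.shape)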
System requirements:
One of the following operating systems:
Debian Linux 6.0 or higher, or any derivative thereof (such as Ubuntu 10.0 or higher), with an x86-64 or ARM64 system architecture
macOS 10.15, with either MacPorts or Homebrew installed
Windows 10
One available USB port (for the best performance, use a USB 3.0 port); a quick way to check that the device is detected is sketched after this list
Python 3.5, 3.6, or 3.7
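Once the runtime is installed and the accelerator is plugged in, the following sketch (assuming the optional pycoral Python package is installed) lists any Edge TPUs the host can see:

    # Quick detection check; assumes the pycoral package is installed on top
    # of the Edge TPU runtime.
    from pycoral.utils.edgetpu import list_edge_tpus

    devices = list_edge_tpus()
    if devices:
        for device in devices:
            # Each entry reports the interface type (e.g. 'usb') and the device path.
            print(device)
    else:
        print("No Edge TPU found; check the USB connection and runtime install.")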
Warranty Period: 12 months
Features:
Google Edge TPU ML accelerator
4 TOPS total peak performance (int8)
2 TOPS per watt
USB 3.0 (USB 3.1 Gen 1) Type-C socket
Supports Linux, macOS, and Windows on the host CPU
Dimensions: 65 mm x 30 mm x 8 mm
Specifications:
Edge TPU ML accelerator: ASIC designed by Google that provides high performance ML inferencing for TensorFlow Lite models
Arm 32-bit Cortex-M0+ microprocessor (MCU): up to 32 MHz, 16 KB flash memory with ECC, 2 KB RAM
Connections: USB 3.1 (Gen 1) port and cable (SuperSpeed, 5 Gb/s transfer speed); the included cable is USB Type-C to Type-A