COM-HPC® Overview

COM-HPC is a modular open standard for building high-performance embedded computing systems. Rising to meet the increasing bandwidth and data intelligence requirements of high-end IoT client devices and embedded edge servers in multiple market segments, the new COM specification family supports state-of-the-art interfaces such as PCI Express Gen 5, USB4, DisplayPort 2.0, and 25G Ethernet, as well as server-class processors.

Though not compatible with its predecessor, COM Express, COM-HPC designs are likewise based on a two-board architecture consisting of a compute module and a carrier card that interface over high-speed, high-pin-count connectors. This enables easy upgradability of the processor and memory subsystems. The COM-HPC specification calls for a pair of 400-pin connectors that support up to 65 PCIe 5.0 lanes running at 32 GT/s each, as many as eight 25 GbE channels, 40 Gbps USB4/Thunderbolt data transfer speeds, 80 Gbps DisplayPort signals, and more.
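
As a rough back-of-the-envelope check on what those connector figures add up to, the short sketch below totals the headline interface rates. It assumes PCIe 5.0's 32 GT/s per lane with 128b/130b encoding; the per-port counts and the calculation itself are illustrative, not taken from the specification.

    # Rough aggregate-bandwidth estimate for a fully populated COM-HPC pinout.
    # Assumes PCIe 5.0 at 32 GT/s per lane with 128b/130b encoding; other
    # figures are the headline rates quoted above.
    PCIE_LANES = 65
    pcie_gbps_per_lane = 32 * 128 / 130              # ~31.5 Gb/s usable per lane
    pcie_total_gbps = PCIE_LANES * pcie_gbps_per_lane

    ethernet_gbps = 8 * 25                           # eight 25 GbE channels
    usb4_gbps = 40                                   # per USB4/Thunderbolt port
    dp_gbps = 80                                     # DisplayPort 2.0 (UHBR20 x4)

    print(f"PCIe 5.0: {pcie_total_gbps:.0f} Gb/s (~{pcie_total_gbps / 8:.0f} GB/s) per direction")
    print(f"25GbE: {ethernet_gbps} Gb/s, USB4: {usb4_gbps} Gb/s, DisplayPort 2.0: {dp_gbps} Gb/s")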

COM-HPC defines two pinout Types – Server and Client – and five different module sizes ranging from 120 mm x 95 mm to 160 mm x 200 mm. A sixth, smaller size, dubbed the “Mini” form factor, is slated for standardization in the near future.

Comparatively larger module sizes versus COM Express allow COM-HPC modules to accommodate up to 1 TB of DRAM and power budgets of up to 300W each. Combined with high-bandwidth interface support, the increased power envelope enables the use of high-performance CPUs, GPUs, FPGAs, accelerators, and heterogeneous multicore SoCs with up to 150W power dissipation in COM-HPC systems.

While the specification provides a ramp to data center-class performance and the option for IT/networking features like out-of-band (OOB) management, it also retains technologies you’d expect of an embedded edge device, including UART, I2C, SPI/eSPI, and USB 2.0 serial interfaces across both pinouts. The recently released COM-HPC 1.15 Functional Safety (FuSa) sub-specification defines an additional SPI interface dedicated to communication between safety blocks on the host processor and a safety controller located on the carrier card, extending status and health monitoring to the entire COM-HPC-based system.
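
For illustration only, the sketch below shows what a host-side health poll of the carrier's safety controller over that dedicated SPI link might look like on a Linux system, using the Python spidev bindings. The bus/chip-select numbers, the command byte, and the response layout are placeholders; the actual framing is defined by the FuSa sub-specification and is not reproduced here.

    # Hypothetical host-side health poll of a carrier-board safety controller
    # over the dedicated FuSa SPI link, assuming it is exposed as a Linux
    # spidev device. Command and status bytes below are placeholders only.
    import spidev

    STATUS_CMD = 0x01                     # placeholder "report health" command

    spi = spidev.SpiDev()
    spi.open(1, 0)                        # assumed bus / chip-select of the safety SPI
    spi.max_speed_hz = 1_000_000
    spi.mode = 0

    resp = spi.xfer2([STATUS_CMD, 0x00, 0x00])   # send command, clock back two bytes
    carrier_ok = bool(resp[1] & 0x01)            # placeholder "carrier healthy" bit
    print("carrier safety controller reports OK:", carrier_ok)
    spi.close()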

In short, COM-HPC complements the mid-range use case coverage provided by COM Express by extending performance and feature sets for more demanding and emerging applications.

Higher performance, more interfaces

  • Preview Specification
  • Carrier Board Design Guide R2
  • Embedded EEPROM specification for COM-HPC
  • Platform Management Interface specification for COM-HPC

COM-HPC Feature Summaries

Client Module

The COM-HPC Client Module Type targets use in high-end embedded client products that need one or more displays, a full set of low, medium, and very high bandwidth I/O, powerful CPUs, and modest size. Typical uses are in medical equipment, high-end instrumentation, industrial equipment, casino gaming equipment, ruggedized field PCs, transportation and defense systems, and much more. Client Modules typically will use SO-DIMM or soldered-down memory. Up to four SO-DIMM memories may be supported on COM-HPC PCB Size C (160 mm x 120 mm).

Client Modules operate from either a fixed 12V power source or, optionally, a wide-range input supply spanning 8V to 20V. This facilitates their use in battery-powered equipment. Client Modules can accept up to 251W of input power (using the connector vendor's recommended 20% current capacity derating) over the 28 Module VCC pins at the 8V end of the input range. This allows CPUs with up to about 100W dissipation or more to be used on Client Modules that implement the wide-range power input. Some situations may require a more conservative current derating. Higher power operation is possible if the fixed 12V supply is used.
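
The 251W figure follows from simple pin-current arithmetic. A minimal sketch is shown below; the roughly 1.4 A nominal per-pin rating (about 1.12 A after the 20% derating) is an assumption used to reproduce the number, and the connector vendor datasheet remains the authoritative source.

    # Worked estimate of the Client Module input-power budget at the 8 V end
    # of the wide-range supply. The 1.4 A nominal per-pin rating is assumed.
    VCC_PINS = 28
    derated_amps_per_pin = 1.4 * 0.8      # 20% derating -> ~1.12 A per pin (assumed)
    vin_min = 8.0                         # low end of the 8-20 V wide-range input

    budget_w = VCC_PINS * derated_amps_per_pin * vin_min
    print(f"Client input power budget at {vin_min:.0f} V: {budget_w:.0f} W")   # ~251 W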

Server Module

The COM-HPC Server Type targets use in high-end headless (no display) embedded servers that require intensive CPU capability, large memory capacity, and extensive high-bandwidth I/O, including multiple 10 Gbps or 25 Gbps Ethernet channels and up to 65 PCIe lanes at up to PCIe Gen 5 speeds. Typical uses are in embedded server equipment ruggedized for use in field environments and applications such as autonomous vehicles, cell tower base stations, geophysical field equipment, medical equipment, defense systems, and much more. Server Modules will typically use full-size DIMMs. COM-HPC Server Modules are typically larger than the Client Modules, but Server vendors are free to use any of the five defined COM-HPC module sizes. The sizes are summarized in Section 2.5 below, and a list of Server Module features may be found in Section 2.6 ‘Client and Server Interface Summary and Comparison’ in the full Specification. Up to eight full-size DIMM memories may be implemented on the largest COM-HPC module form factor.

The Server Modules use a fixed 12V input power rail. Server Modules can accept up to 358W of input power (using the connector vendor's recommended 20% current capacity derating) over the 28 Module VCC pins at the low end of the 12V input tolerance range. This allows CPUs with up to about 150W dissipation to be deployed on Server Modules. The limit is subject to considerations such as the connector pin derating used, the number of memory sockets used, and other power consumers on the Module.
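
The same arithmetic reproduces the 358W Server figure if the "low end" of the 12V rail is taken to mean a nominal -5% tolerance (about 11.4V); as before, both the per-pin rating and the tolerance are assumptions rather than values quoted in the specification.

    # Worked estimate of the Server Module input-power budget, reusing the
    # assumed ~1.12 A derated per-pin current and an assumed -5% 12 V rail.
    VCC_PINS = 28
    derated_amps_per_pin = 1.4 * 0.8      # assumed 1.4 A nominal, 20% derating
    vin_low = 12.0 * 0.95                 # assumed -5% tolerance -> ~11.4 V

    print(f"Server input power budget: {VCC_PINS * derated_amps_per_pin * vin_low:.0f} W")  # ~358 W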

Module Size Overview

Five COM-HPC Module PCB sizes are defined:

  • 95 mm x 120 mm Size A (Recommended for Client use)
  • 120 mm x 120 mm Size B (Recommended for Client use)
  • 160 mm x 120 mm Size C (Recommended for Client use)
  • 160 mm x 160 mm Size D (Recommended for Server use)
  • 200 mm x 160 mm Size E (Recommended for Server use)

Note that the mounting holes adjacent to Module connectors J1 and J2 are offset from the connector long axis center-lines.  This is done deliberately to provide a visual cue as to the proper mounting orientation of the Modules onto the Carrier boards.

Module Connector

COM-HPC uses a pair of 400-pin high-performance connectors, for a total of 800 pins. The connector system allows signaling rates up to 32 GT/s, suitable for PCIe Gen 5. The connector system allows for 10 mm or 5 mm Carrier-to-Module stack heights.

Non-x86 Implementations

COM-HPC Client and Server Modules are not restricted to traditional x86 CPU implementations.  Modules that host PCIe targets such as FPGAs or GPUs are allowed.  PCIe signal details on this matter are provided in the full Specification.

COM-HPC Modules may host traditional x86 systems, alternative ARM or RISC CPUs, or PCIe targets such as module-based FPGAs or GPUs.

Module and Carrier Board Out-of-Band Management Control

COM-HPC Module and Carrier boards may support out-of-band (OOB) management features, which may be implemented on either COM-HPC Server or Client systems. Traditionally OOB management is more of a server-class feature, but the option is there for both COM-HPC Clients and Servers. A separate PICMG document will describe the COM-HPC OOB management features in detail.

COM-HPC® Products From Our Members
