Demystifying automata processing: GPUs, FPGAs or Micron's AP?

Marziyeh Nourian, Xiang Wang, Xiaodong Yu, Wu Chun Feng, Michela Becchi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

33 Scopus citations

Abstract

Many established and emerging applications perform at their core some form of pattern matching, a computation that maps naturally onto finite automata abstractions. As a consequence, in recent years there has been a substantial amount of work on high-speed automata processing, which has led to a number of implementations targeting a variety of parallel platforms: CPUs, GPUs, FPGAs, ASICs, and Network Processors. More recently, Micron has announced its Automata Processor (AP), a DRAM-based accelerator of non-deterministic finite automata (NFAs). Despite the abundance of work in this domain, the advantages and disadvantages of different automata processing accelerators and the innovation space in this area are still unclear. In this work we target this problem and propose a toolchain to allow an apples-to-apples comparison of NFA acceleration engines on three platforms: GPUs, FPGAs, and Micron's AP. We discuss the automata optimizations that are applicable to these three platforms. We perform an evaluation on large-scale datasets: to this end, we propose an NFA partitioning algorithm that minimizes the number of state replications required to maintain functional equivalence with the unpartitioned NFA, and we evaluate the scalability of each implementation to both large NFAs and large numbers of input streams.
Our experimental evaluation covers resource utilization, traversal throughput, and preprocessing overhead and shows that the FPGA provides the best traversal throughputs (on the order of Gbps) at the cost of significant preprocessing times (on the order of hours); GPUs deliver modest traversal throughputs (on the order of Mbps), but offer low preprocessing times (on the order of seconds or minutes) and good pattern densities (they can accommodate large datasets on a single device); Micron's AP delivers throughputs, pattern densities, and preprocessing times that are intermediate between those of FPGAs and GPUs, and it is most suited for applications that use datasets consisting of many small NFAs with a topology that is fixed and known a priori.
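The computation that all three platforms accelerate is NFA traversal: tracking the set of active states while consuming an input stream symbol by symbol and reporting whenever an accepting state becomes active. A minimal sketch of that abstraction (illustrative only; the transition table, states, and pattern below are not from the paper):

```python
# Sketch of sequential NFA simulation for pattern search.
# Non-determinism is handled by tracking a *set* of active states;
# the accelerators in the paper evaluate these transitions in parallel.
def nfa_match(transitions, start, accepting, stream):
    """Return the positions in `stream` where an accepting state fires."""
    active = {start}
    reports = []
    for i, symbol in enumerate(stream):
        # Follow every outgoing transition from every active state.
        active = {nxt
                  for state in active
                  for nxt in transitions.get((state, symbol), ())}
        # Re-activate the start state so matching can begin at any offset.
        active.add(start)
        if active & accepting:
            reports.append(i)
    return reports

# Toy NFA for the pattern "ab": state 0 -a-> 1 -b-> 2 (accepting).
t = {(0, 'a'): {1}, (1, 'b'): {2}}
print(nfa_match(t, 0, {2}, "xabyab"))  # -> [2, 5]
```

The state-set representation is what makes NFAs attractive for hardware: each state's activity is one bit, and all transitions for a symbol can be applied in parallel, which is essentially what the FPGA and AP implementations exploit.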

Original language: English
Title of host publication: ICS 2017
Subtitle of host publication: International Conference on Supercomputing
ISBN (Electronic): 9781450350204
DOIs
State: Published - 14 Jun 2017
Event: 31st ACM International Conference on Supercomputing, ICS 2017 - Chicago, United States
Duration: 13 Jun 2017 → 16 Jun 2017

Publication series

Name: Proceedings of the International Conference on Supercomputing
Volume: Part F128411

Conference

Conference: 31st ACM International Conference on Supercomputing, ICS 2017
Country/Territory: United States
City: Chicago
Period: 13/06/17 → 16/06/17

Keywords

  • Automata Processing
  • FPGA
  • GPU
  • Micron's AP
  • Pattern Matching
  • Reconfigurable Computing
