Quantum Computing Diagnostic Benchmarks

SeniorTechInfo

by Amara Graps

Benchmarks for quantum computers are blooming. From the (not serious) Weird Al benchmark to the (very serious) Quantum Benchmarking Initiative by DARPA, the quantum community has come a long way in the last decade. I now have almost forty quantum computing benchmarking papers saved in my personal subdirectory.

Let’s put a frame around these benchmarks with the help of Amico et al., 2023, published as an IEEE blog highlight. (See also their paper on arXiv.)

The Difference between Standardized and Diagnostic Benchmarks

The authors point out that many benchmarks we’re familiar with do not measure a device’s overall performance in an average sense. Instead, they emphasize the functionality of specific algorithms on particular hardware, making them highly sensitive to individual error sources or device components.

Diagnostic benchmarks, on the other hand, are found within application-oriented circuit libraries and aim to gather different algorithms with various quantum circuit configurations. This approach helps capture the overall functionality of quantum technology by averaging across many circuits, turning them into effective benchmarks when used collectively.
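To make the idea of "averaging across many circuits" concrete, here is a minimal sketch of how a library-level score might be built from per-circuit results. It uses the classical Hellinger fidelity between ideal and measured outcome distributions as the per-circuit metric; the actual application-oriented suites (e.g. Lubinski et al.) use a normalized variant, so the function names, toy distributions, and the bare averaging step below are illustrative assumptions, not the published method.

```python
import math

def hellinger_fidelity(ideal, measured):
    """Classical (Hellinger) fidelity between two outcome distributions,
    each given as a dict mapping bitstrings to probabilities."""
    keys = set(ideal) | set(measured)
    bc = sum(math.sqrt(ideal.get(k, 0.0) * measured.get(k, 0.0)) for k in keys)
    return bc ** 2

def aggregate_benchmark_score(results):
    """Average per-circuit fidelities into a single library-level score.

    `results` is a list of (ideal_dist, measured_dist) pairs, one per
    circuit in the application-oriented library.
    """
    scores = [hellinger_fidelity(i, m) for i, m in results]
    return sum(scores) / len(scores)

# Toy example: a two-circuit "library", one near-ideal, one noisier.
library_results = [
    ({"00": 0.5, "11": 0.5}, {"00": 0.48, "11": 0.47, "01": 0.05}),
    ({"0": 1.0},             {"0": 0.9, "1": 0.1}),
]
print(round(aggregate_benchmark_score(library_results), 3))  # → 0.925
```

Individually, each circuit's score is a diagnostic of that circuit on that device; only the aggregate over a diverse library starts to behave like a benchmark.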

Meanwhile, standardized benchmarks highlight key characteristics like randomness, clarity in specifications, holistic evaluation of device performance, and applicability across different technologies to ensure inclusivity and fairness.

Valuable Features of Diagnostic Benchmarks

Let’s dive deeper into Diagnostic Benchmarks. According to Amico et al., 2023, their valuable features are:

  • Definition and Sensitivity: Diagnostic benchmarks are designed to be highly sensitive to specific types of errors, providing a clear evaluation of performance in specific settings.
  • Predictive Power: These benchmarks can predict outcomes for similarly structured problems, making them useful for specific tasks, though not suitable for general benchmarking.
  • Utility in Application-Oriented Methods: These methods are particularly useful in application-oriented circuit libraries and can be elevated to true benchmarks when aggregated.
  • Compilation and Mitigation Techniques: Diagnostic methods can integrate compilation and mitigation techniques to maximize performance for specific applications, contrasting with benchmarking methods that focus on average performance across tasks.
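The sensitivity point in the list above can be illustrated with a deliberately crude error model: assume each one-qubit gate succeeds with probability p1 and each two-qubit gate with probability p2, independently. A diagnostic circuit dominated by two-qubit gates reacts strongly when p2 degrades, while a score averaged over a mixed library moves much less. The gate counts, error rates, and the independent-error model itself are my toy assumptions, not figures from Amico et al.

```python
def circuit_success(p1, p2, n1, n2):
    """Success probability of a circuit with n1 one-qubit and n2 two-qubit
    gates, assuming every gate fails independently."""
    return (p1 ** n1) * (p2 ** n2)

# Diagnostic circuit: dominated by two-qubit gates, so highly
# sensitive to the two-qubit error rate.
diag_before = circuit_success(0.999, 0.99, n1=5, n2=50)
diag_after  = circuit_success(0.999, 0.98, n1=5, n2=50)

# A mixed library of (n1, n2) gate counts, averaged benchmark-style.
library = [(40, 5), (30, 10), (20, 15)]
avg_before = sum(circuit_success(0.999, 0.99, a, b) for a, b in library) / 3
avg_after  = sum(circuit_success(0.999, 0.98, a, b) for a, b in library) / 3

# The diagnostic score drops far more than the averaged score when
# the two-qubit error rate doubles.
print(round((diag_before - diag_after) / diag_before, 2))
print(round((avg_before - avg_after) / avg_before, 2))
```

That asymmetry is exactly what makes a diagnostic useful for isolating one error source, and exactly what makes it unsuitable, on its own, as a holistic benchmark.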

GQI’s Quantum Tech Stack Approach

Now that Diagnostic Benchmarks have a firmer definition, we can consider to which part of the Quantum Tech Stack they apply. As seen in the next figure, many of these benchmarks apply to the Top Stack.

GQI’s quantum technology stack consists of seven layers, from the Quantum Plane at the bottom to Applications at the top.

The Bottom Stack is focused on the physical components of quantum computing, like qubits and interconnections, while the Top Stack caters to users, featuring advanced computational workflows and quantum algorithms.

Figure. Quantum Tech Stack to determine the application of a Diagnostic Benchmark. (*)

Examples of Diagnostic Benchmarks

Here are some examples of diagnostic benchmarks highlighted by the Amico et al., 2023 article:

  • Finžgar, Jernej Rudi, Philipp Ross, Leonhard Hölscher, Johannes Klepsch, and Andre Luckow
  • Kordzanganeh, Mohammad, Markus Buchberger, Basil Kyriacou, Maxim Povolotskii, Wilhelm Fischer, Andrii Kurkin, Wilfrid Somogyi, Asel Sagingalieva, Markus Pflitsch, and Alexey Melnikov
  • Kurlej, Arthur, Sam Alterman, and Kevin M. Obenland
  • Li, Ang, Samuel Stein, Sriram Krishnamoorthy, and James Ang
  • Lubinski, Thomas, Sonika Johri, Paul Varosy, Jeremiah Coleman, Luning Zhao, Jason Necaise, Charles H. Baldwin, Karl Mayer, and Timothy Proctor
  • Lubinski, Thomas, Carleton Coffrin, Catherine McGeoch, Pratik Sathe, Joshua Apanavicius, and David E. Bernal Neira
  • Mesman, Koen, Zaid Al-Ars, and Matthias Möller
  • Mundada, Pranav S., Aaron Barbosa, Smarak Maity, Yulun Wang, T. M. Stace, Thomas Merkh, Felicity Nielson, et al.
  • Tomesh, Teague, Pranav Gokhale, Victory Omole, Gokul Subramanian Ravi, Kaitlin N. Smith, Joshua Viszlai, Xin-Chuan Wu, Nikos Hardavellas, Margaret R. Martonosi, and Frederic T. Chong
  • Zhang, Victoria, and Paul D. Nation

We’ll delve into Lubinski et al.’s Application-Oriented Benchmarks in the next article. Stay tuned!

(*) The Quantum Tech Stack concept is integral to analyzing quantum technology developments at GQI. Contact info@global-qi.com for more information on GQI’s Quantum Hardware State of Play and other State of Plays.

October 18, 2024
