Logic Synthesis Software School

July 23, 2021 · Virtual Experience · co-organized with IWLS

About the event

The Logic Synthesis Software School (LSSS) is an informal meeting on software tools for logic synthesis and verification. Its goal is to bring together researchers and tool developers in electronic design automation, logic synthesis, and verification to foster research on open-science infrastructure and tools. LSSS is a joint event with IWLS.


The meeting was held on Zoom.


07:00 am PT
16:00 CET
23:00 JST
07:00 am - 07:40 am PT
16:00 - 16:40 CET
23:00 - 23:40 JST
The EPFL Flagship Libraries in Action: A Development Snapshot of mockturtle & tweedledum (Heinz Riener & Bruno Schmitt) [PDF]

mockturtle is a C++17 logic network library. It provides several logic network implementations (such as and-inverter graphs, majority-inverter graphs, and k-LUT networks) and generic algorithms for logic synthesis and logic optimization. mockturtle is part of the EPFL logic synthesis libraries, a collection of modular, open software libraries for tackling problems in the field of logic synthesis.
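To make the and-inverter-graph idea concrete, here is a minimal Python sketch of such a logic network: nodes are two-input ANDs, inversion is a complement bit on edges, and structural hashing shares identical gates. The class and method names are invented for illustration and do not match mockturtle's actual C++ API.

```python
# Minimal and-inverter graph (AIG) sketch. Signals are encoded as
# 2*node_id + complement_bit, so inversion is free (a bit flip).
class Aig:
    def __init__(self):
        self.nodes = []    # per node: (fanin0, fanin1) literals, or None for a PI
        self.inputs = []
        self.strash = {}   # structural hashing: share identical AND gates

    def create_pi(self):
        nid = len(self.nodes)
        self.nodes.append(None)
        self.inputs.append(nid)
        return 2 * nid

    def NOT(self, sig):
        return sig ^ 1     # invert by flipping the complement bit

    def create_and(self, a, b):
        key = (min(a, b), max(a, b))
        if key not in self.strash:
            nid = len(self.nodes)
            self.nodes.append(key)
            self.strash[key] = 2 * nid
        return self.strash[key]

    def simulate(self, sig, values):
        nid, neg = sig >> 1, sig & 1
        if self.nodes[nid] is None:
            v = values[self.inputs.index(nid)]
        else:
            a, b = self.nodes[nid]
            v = self.simulate(a, values) & self.simulate(b, values)
        return v ^ neg

aig = Aig()
x, y = aig.create_pi(), aig.create_pi()
# XOR from ANDs and inverters: (x AND NOT y) OR (NOT x AND y)
t0 = aig.create_and(x, aig.NOT(y))
t1 = aig.create_and(aig.NOT(x), y)
xor = aig.NOT(aig.create_and(aig.NOT(t0), aig.NOT(t1)))
print([aig.simulate(xor, [a, b]) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The complemented-edge encoding is the key trick: inverters cost nothing, so optimization algorithms only ever reason about AND nodes.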

tweedledum is an extensible and efficient open-source library for quantum circuit synthesis and compilation, with state-of-the-art performance in many compilation tasks relevant to near-term applications. In this work, we introduce its design principles and concrete implementation. In particular, we describe the library’s core: an intuitive and flexible intermediate representation (IR) that supports different abstraction levels across the same circuit structure. We demonstrate the power of this flexibility through a use case in which we use tweedledum to compile an oracle defined by a classical Boolean function into a quantum circuit. This oracle is part of a circuit implementing Grover’s search to solve an instance of vertex coloring. We compare this result against hand-optimized implementations and show that our automated flow empowers users to implement oracles at higher abstraction levels while delivering highly optimized results. We also present a use case in which we employ tweedledum to map technology-independent circuits to constrained quantum hardware architectures. Our results show that tweedledum is significantly faster than established frameworks while delivering competitive quality of results.
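What "compiling a classical oracle" for Grover's search boils down to, in miniature: a Boolean function f on n bits defines a phase oracle that flips the sign of exactly those basis states where f is true. The sketch below is conceptual only and is not tweedledum's API, which emits actual gate-level circuits rather than diagonals.

```python
def phase_oracle_diagonal(f, n):
    """Diagonal of the 2^n x 2^n phase-oracle unitary for f."""
    return [(-1) ** f(x) for x in range(2 ** n)]

# Toy flavor of the vertex-coloring use case: accept assignments where
# two 1-bit "colors" differ, i.e. a 2-variable XOR.
f = lambda x: (x & 1) ^ ((x >> 1) & 1)
print(phase_oracle_diagonal(f, 2))  # [1, -1, -1, 1]
```

The compiler's job is then to realize this sign pattern with elementary quantum gates as cheaply as possible, which is where logic synthesis techniques enter.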

GitHub: mockturtle · tweedledum

Photo of Heinz Riener

Heinz Riener is a researcher at the Integrated Systems Laboratory of École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland, in the group of Prof. Giovanni De Micheli. He holds a Ph.D. degree (Dr.-Ing.) in Computer Science from the University of Bremen, Germany, and received his B.Sc. and M.Sc. degrees from Graz University of Technology (TU Graz), Graz, Austria. From 2015 to 2017, he worked at the German Aerospace Center (DLR), Bremen, Germany, in the Avionics Systems group. From 2011 to 2015, he worked at the University of Bremen, Germany, in the Reliable Embedded Systems group of Prof. Görschwin Fey. His research interests are logic synthesis, formal methods, and computer-aided verification of hardware and software systems.

Photo of Bruno Schmitt

Bruno Schmitt received a computer engineering degree from the Federal University of Rio Grande do Sul (UFRGS) in 2017. He has worked as a visiting researcher in the logic synthesis and verification group of Prof. Robert K. Brayton at UC Berkeley, and as a visiting researcher in Prof. Giovanni De Micheli's logic synthesis group at EPFL, where he is now a fourth-year Ph.D. student. Bruno has also worked on both Microsoft's and IBM's quantum research teams. As part of his research, he is the primary developer and maintainer of tweedledum, an open-source full-stack library for quantum compilation, and an active contributor to the EPFL logic synthesis libraries. His primary research interests include quantum compilation, logic synthesis, formal verification, design automation (CAD) tools, and SAT solvers.

07:40 am - 07:45 am PT
16:40 - 16:45 CET
23:40 - 23:45 JST
Q&A Break
07:45 am - 08:25 am PT
16:45 - 17:25 CET
23:45 - 00:25 JST
Scalable Functional Synthesis: The Child of Perfect Marriage between Machine Learning and Formal Methods (Kuldeep Meel) [PDF]

Don't we all dream of the perfect assistant whom we can simply tell what to do and who can figure out how to accomplish the task? Formally, given a specification F(X, Y) over input variables X and output variables Y, we want the assistant, a.k.a. the functional synthesis engine, to design a function G such that (X, Y=G(X)) satisfies F. Functional synthesis has been studied for over 150 years, dating back to Boole in the 1850s, and yet scalability remains a core challenge. Motivated by progress in machine learning, we design a new algorithmic framework, Manthan, which views functional synthesis as a classification problem, relying on advances in constrained sampling for data generation and on advances in automated reasoning for a novel proof-guided refinement and provable verification. In an extensive and rigorous evaluation over 609 benchmarks, we demonstrate that Manthan significantly improves upon the current state of the art. The significant performance improvements, along with our detailed analysis, highlight several interesting avenues of future work at the intersection of machine learning, constrained sampling, and automated reasoning.
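The functional-synthesis problem in miniature can be sketched as follows: given a relational spec F(X, Y), produce a function G with F(X, G(X)) for every X that has some satisfying Y. This brute-force version simply tabulates one witness per input; it is an illustration of the problem statement, not Manthan's algorithm, which replaces enumeration with learned candidate functions plus proof-guided repair to scale to hundreds of variables.

```python
from itertools import product

def synthesize(F, n_x, n_y):
    """Tabulate, for each input assignment x, one output y with F(x, y)."""
    table = {}
    for x in product([0, 1], repeat=n_x):
        for y in product([0, 1], repeat=n_y):
            if F(x, y):
                table[x] = y   # first witness found
                break
    return lambda x: table[x]  # G, defined on realizable inputs

# Spec: the single output y0 must equal x0 XOR x1.
F = lambda x, y: y[0] == x[0] ^ x[1]
G = synthesize(F, 2, 1)
print([G((a, b))[0] for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The exponential loop over X is exactly what makes naive synthesis intractable and motivates the learning-plus-reasoning approach of the talk.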

GitHub: Manthan

Photo of Kuldeep Meel

Kuldeep Meel is the Sung Kah Kay Assistant Professor in the Computer Science Department of the School of Computing at the National University of Singapore, where he also holds the President's Young Professorship. His research interests lie at the intersection of formal methods and artificial intelligence. He is a recipient of the 2019 NRF Fellowship for AI and was named one of AI's 10 to Watch by IEEE Intelligent Systems in 2020. His work received the 2018 Ralph Budd Award for Best PhD Thesis in Engineering, the 2014 Outstanding Masters Thesis Award from the Vienna Center for Logic and Algorithms, and the Best Student Paper Award at CP 2015. He received his Ph.D. (2017) and M.S. (2014) degrees from Rice University, and a B.Tech. (with Honors) degree (2012) in Computer Science and Engineering from the Indian Institute of Technology Bombay.

08:25 am - 08:30 am PT
17:25 - 17:30 CET
00:25 - 00:30 JST
Q&A Break
08:30 am - 09:10 am PT
17:30 - 18:10 CET
00:30 - 01:10 JST
Under the Hood of LSOracle (Pierre-Emmanuel Gaillardon)

In this talk, we will introduce our recent effort in developing a logic synthesis platform, named LSOracle, whose goal is to achieve or approach global optimality by partitioning and classifying portions of a complete logic design in order to apply ad hoc logic optimizations (MIG-based and others). Instead of relying on a single data structure and class of heuristics (like ABC), LSOracle can represent and optimize logic using several Boolean representations (AIGs, MIGs, XMGs) and move independent partitions back and forth between those representations to select the best Power-Performance-Area (PPA) improvement per partition. The structure of LSOracle also lends itself naturally to massively parallel optimization and cloud-based deployments.
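The partition-then-select flow described above can be sketched abstractly: split a design into partitions, optimize each one independently under several representations, and keep whichever result is best per partition. Everything here, the function names, the "optimizers", and the cost model, is invented for illustration and does not reflect LSOracle's actual implementation.

```python
def optimize_design(partitions, optimizers, cost):
    """Per partition, run every candidate optimizer and keep the cheapest result."""
    result = []
    for part in partitions:
        candidates = [opt(part) for opt in optimizers]
        result.append(min(candidates, key=cost))  # best PPA proxy wins
    return result

# Toy stand-ins: a "partition" is just a node count; each "optimizer"
# shrinks it more or less well depending on structure (parity here).
aig_opt = lambda n: n - 2 if n % 2 == 0 else n - 1
mig_opt = lambda n: n - 3 if n % 2 == 1 else n
cost = lambda n: n

print(optimize_design([10, 11, 12], [aig_opt, mig_opt], cost))  # [8, 8, 10]
```

Because each partition is processed independently, the loop parallelizes trivially, which is the property behind the cloud-deployment claim.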

GitHub: LSOracle

Photo of Pierre-Emmanuel Gaillardon

Pierre-Emmanuel Gaillardon is an associate professor in the Electrical and Computer Engineering (ECE) department and an adjunct assistant professor in the School of Computing at the University of Utah, Salt Lake City, UT, where he leads the Laboratory for NanoIntegrated Systems (LNIS). He holds an Electrical Engineer degree from CPE Lyon, France (2008), an M.Sc. degree in Electrical Engineering from INSA Lyon, France (2008), and a Ph.D. degree in Electrical Engineering from CEA-LETI, Grenoble, France, and the University of Lyon, France (2011). Prior to joining the University of Utah, he was a research associate at the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, within the Laboratory of Integrated Systems (Prof. De Micheli), and a visiting research associate at Stanford University, Palo Alto, CA, USA. Previously, he was a research assistant at CEA-LETI, Grenoble, France. Prof. Gaillardon is the recipient of the C-Innov 2011 best thesis award, the Nanoarch 2012 best paper award, the BSF 2017 Prof. Pazy Memorial Research Award, the 2017 NSF CAREER award, the 2018 IEEE CEDA Pederson Award, the 2018 ChemE Education William H. Corcoran best paper award, the 2019 DARPA Young Faculty Award, the 2019 IEEE CEDA Ernest S. Kuh Early Career Award, and the 2020 ACM SIGDA Outstanding New Faculty Award. He has served as a TPC member for many conferences, including DATE, DAC, ICCAD, and Nanoarch. He is an associate editor of IEEE TNANO and a reviewer for several journals and funding agencies. He served as topic co-chair for "Emerging Technologies for Future Memories" at DATE'17-19. He is a senior member of the IEEE. His research activities and interests are currently focused on the development of novel computing systems exploiting emerging device technologies and novel EDA techniques.

09:10 am - 09:15 am PT
18:10 - 18:15 CET
01:10 - 01:15 JST
Q&A Break
09:15 am - 09:55 am PT
18:15 - 18:55 CET
01:15 - 01:55 JST
Using Logic to Understand Learning (Satrajit Chatterjee) [PDF]

A fundamental question in deep learning today is the following: why do neural networks generalize when they have sufficient capacity to memorize their training set? In this talk, I will describe how ideas from logic synthesis can help answer this question. In particular, using the idea of small lookup tables, such as those used in FPGAs, we will see whether memorization alone can lead to generalization; then, using ideas from logic simulation, we will see whether neural networks do in fact behave like lookup tables. Finally, I'll present a brief introduction to a new theory of generalization for deep learning that has emerged from this line of work.
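A hedged sketch of "memorization with small lookup tables": each k-input LUT memorizes the majority label of the training samples that share its k-bit input pattern, and an ensemble of LUTs over random bit subsets then votes. This is an illustrative reconstruction of the general idea, not the talk's exact construction.

```python
import random

def train_lut(data, bits):
    """Memorize the majority label per k-bit pattern over the chosen bits."""
    counts = {}
    for x, label in data:
        key = tuple(x[i] for i in bits)
        counts.setdefault(key, [0, 0])[label] += 1
    return {k: int(c[1] >= c[0]) for k, c in counts.items()}

def predict(luts, x):
    """Majority vote of the ensemble; unseen patterns default to 0."""
    votes = [lut.get(tuple(x[i] for i in bits), 0) for bits, lut in luts]
    return int(sum(votes) * 2 >= len(votes))

random.seed(0)
n, k = 8, 3
target = lambda x: x[0] ^ x[3]               # hidden function to "learn"
data = [(x, target(x)) for x in
        [[random.randint(0, 1) for _ in range(n)] for _ in range(200)]]
luts = []
for _ in range(25):
    bits = tuple(sorted(random.sample(range(n), k)))
    luts.append((bits, train_lut(data, bits)))
print(predict(luts, [1, 0, 0, 0, 1, 1, 0, 1]))
```

Note that a LUT whose bit subset happens to include both relevant inputs memorizes the target perfectly, while the others memorize noise; whether the ensemble generalizes is exactly the question the talk examines.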

GitHub: ABC

Photo of Satrajit Chatterjee

Satrajit Chatterjee is an engineering leader and machine learning researcher at Google AI. His current research focuses on fundamental questions in deep learning (such as understanding why neural networks generalize at all) as well as various applications of ML (such as hardware design and verification). Before Google, he was a Senior Vice President at Two Sigma, a leading quantitative investment manager, where he founded one of the first successful deep learning-based alpha research groups on Wall Street and led a team that built one of the earliest end-to-end FPGA-based trading systems for general-purpose ultra-low-latency trading. Prior to that, he was a Research Scientist at Intel, where he worked on microarchitectural performance analysis and formal verification for on-chip networks. He did his undergraduate studies at IIT Bombay, has a PhD in Computer Science from UC Berkeley, and has published in the top machine learning, design automation, and formal verification conferences.

09:55 am - 10:00 am PT
18:55 - 19:00 CET
01:55 - 02:00 JST
Q&A Break


Links to Previous Events

LSSS 2019: June 20, 2019, Lausanne, Switzerland