Directory of Experts

Research project title

Understanding and enhancing the logic of neural networks

Education level

Master's or doctorate

Director/co-director

Director: Jean-Pierre David

Co-director(s): Professor Langlois, if applicable

End of display

April 15, 2026

Areas of expertise

Electronic circuits and devices

Integrated circuits

Digital signal processing

Artificial intelligence

Knowledge representation

Learning and inference theories

Primary sphere of excellence in research

Modeling and Artificial Intelligence

Secondary sphere(s) of excellence in research

New Frontiers in Information and Communication Technologies

Unit(s) and department(s)

Department of Electrical Engineering

Conditions

The desired skills include:

  • Good command of mathematics
  • Good command of programming, particularly in Python
  • Good understanding of logic circuits
  • A strong desire to understand what happens inside neural networks in order to improve them
  • Scientific curiosity

The following are additional assets:

  • Experience with neural networks (particularly PyTorch)
  • Experience in digital circuit design

We welcome all talented candidates. If your presence would also contribute to the diversity of the team, this will count in your favor.

To apply, contact Professor Jean-Pierre David by email: jean-pierre.david@polymtl.ca


Detailed description

Artificial neural networks (ANNs) are evolving rapidly and are finding applications in various sectors, including medical imaging, autonomous driving, and drug discovery. However, trust in AI’s decision-making capabilities, especially when these models lack transparency, remains a significant concern. In 2020, Google Health demonstrated that AI outperformed human radiologists in diagnosing breast cancer through mammogram analysis, reducing both false positives and false negatives. While this showcases AI's potential, the inability of ANNs to explain their reasoning raises questions about relying on such systems in critical domains like healthcare, where human oversight remains essential. There is a growing need to make AI decisions explainable and verify that certain rules are consistently followed, especially in life-or-death situations.

The proposed research program is founded on the observation that ANNs are similar to combinational logic circuits, which are intrinsically explainable. Each can be implemented in terms of the other, especially when neural activations are binarized. However, while logic circuits are highly specialized, ANNs often use inefficient, regular topologies with many void (non-contributing) weights, making optimization a key research focus. By refining ANNs, this research program aims to reduce their size and energy consumption, improve generalization, and streamline learning processes.
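
To make this circuit analogy concrete, here is a minimal illustrative sketch in Python (the binary_neuron helper is hypothetical, not part of the project): a single neuron with binary inputs and a hard-threshold activation realizes a Boolean gate, and with unit weights a threshold of 1.5 yields AND while 0.5 yields OR.

    import itertools

    def binary_neuron(inputs, weights, threshold):
        # Hard-threshold activation: fire (1) iff the weighted sum reaches the threshold.
        s = sum(x * w for x, w in zip(inputs, weights))
        return 1 if s >= threshold else 0

    # The same two-input neuron realizes different Boolean gates as its threshold varies.
    for gate, threshold in [("AND", 1.5), ("OR", 0.5)]:
        table = {bits: binary_neuron(bits, (1.0, 1.0), threshold)
                 for bits in itertools.product((0, 1), repeat=2)}
        print(gate, table)

Binarizing activations turns every neuron into such a gate, which is what makes the translation between networks and circuits possible in both directions.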

The current research program seeks to enhance ANNs by focusing on their internal logical structures. We also aim to explore how trained ANNs behave during inference by analyzing the logical links between inputs, activations, and outputs. Binarized versions of signals will help reveal these links, enabling the identification of necessary and sufficient conditions for outputs, much like logic circuits do. Ultimately, we hope to create a "meta-network" that can scrutinize and refine the decisions of ANNs, making them explainable and more reliable. In addition to improving ANN efficiency, this research will lead to optimized hardware architectures for both logic circuits and ANNs.
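
As a hedged illustration of the kind of inference-time analysis described above (the BinarizedMLP model below is a hypothetical stand-in, and a randomly initialized network substitutes for a trained one), a network with binarized activations can be read as a Boolean function: enumerating its truth table over binary inputs exposes the input patterns that are sufficient to force each output, exactly as one would analyze a logic circuit.

    import itertools
    import torch
    import torch.nn as nn

    class BinarizedMLP(nn.Module):
        # A tiny MLP whose hidden and output activations are binarized at zero.
        def __init__(self):
            super().__init__()
            self.hidden = nn.Linear(2, 3)
            self.out = nn.Linear(3, 1)

        def forward(self, x):
            h = (self.hidden(x) > 0).float()  # binarized hidden activations
            return (self.out(h) > 0).float()  # binarized output

    torch.manual_seed(0)
    net = BinarizedMLP()  # stands in for a trained network

    # Enumerate the Boolean function the network realizes over binary inputs.
    truth_table = {}
    with torch.no_grad():
        for bits in itertools.product((0.0, 1.0), repeat=2):
            truth_table[bits] = int(net(torch.tensor([bits])).item())

    print(truth_table)
    # An input pattern mapped to 1 is a sufficient condition for the output;
    # a literal shared by every such pattern is a necessary one.

Exhaustive enumeration is of course intractable for realistic input widths, which is why the program targets the logical links between inputs, activations, and outputs rather than brute force.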

Financing possibility

The project is funded by NSERC.

Jean-Pierre David

Full Professor
