Projects
Computer Vision
Detection of violence acts in psychiatric hospitals from security camera videos

Violence in hospitals has become more widespread in recent years. Mental health centers deal with violent acts by patients toward staff on a daily basis, especially in closed wards. It is estimated that over 90% of the staff have been exposed to violent acts at least once during their work. These acts disrupt the regular activity of the center, cause financial losses, and foster negative feelings between patients and staff.

In a recent project at the lab, a violence detection system was completed successfully. We would like to extend this activity to mental health centers as well, where distinguishing between violent acts and normal treatment techniques is much more challenging.

Another challenge in this project is that the videos obtained from security cameras do not contain sound, and have also gone through a process of anonymization to protect the privacy of the patients and the staff in the center.

We would like to use technological advances to monitor, detect, and even prevent such violence. The goal is to build a Deep Learning-based system for classifying these videos. Such a tool can help detect violent acts and allow quick intervention by the security staff, helping to keep the environment secure for all the staff and the patients.

This project is in collaboration with Mazor mental health center.

Supervisor(s): Ori Bryt
Requirements: Knowledge in Machine Learning and Image Processing, good Python programming skills
Enhancing piano music note sheets to include fingering

In this project we will try to create piano sheet music that includes the correct fingers to use for playing the piece appropriately. Current note sheets include only the notes, with proper instructions on how to play them. We will use Image Processing, Audio processing, and Deep Learning to add the correct fingering positions to the note sheets.
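As a rough illustration of the fingering-assignment part, the problem can be posed as sequence labeling and solved with dynamic programming. The sketch below is a toy model built on invented assumptions (a monophonic right-hand melody and a hand-span cost of roughly two semitones per finger step); a real system would learn such costs from annotated fingering data.

```python
# Toy sketch: assigning fingers (1-5) to a monophonic note sequence with
# Viterbi-style dynamic programming. The cost model is purely illustrative;
# a real system would learn transition costs from annotated data.

def transition_cost(prev_pitch, prev_finger, pitch, finger):
    """Penalize mismatch between keyboard interval and finger span."""
    interval = pitch - prev_pitch        # semitones moved on the keyboard
    span = finger - prev_finger          # fingers moved on the hand
    return abs(interval - 2 * span)      # invented proxy: ~2 semitones/finger

def assign_fingers(pitches, fingers=(1, 2, 3, 4, 5)):
    # best[f] = (cost, path) over fingerings ending with finger f
    best = {f: (0.0, [f]) for f in fingers}
    for prev, cur in zip(pitches, pitches[1:]):
        best = {
            f: min(
                (c + transition_cost(prev, pf, cur, f), path + [f])
                for pf, (c, path) in best.items()
            )
            for f in fingers
        }
    return min(best.values())[1]

# Ascending C-major fragment (MIDI numbers): C4 D4 E4 F4 G4
print(assign_fingers([60, 62, 64, 65, 67]))
```

The dynamic program keeps one best fingering per ending finger, so its cost is linear in the number of notes.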

Supervisor(s): Ori Bryt, Erez Shalev
Requirements: Knowledge in Machine Learning, good Python programming skills
3D Vision & Numerical Geometry
Post-Training Quantization for Lidar-Camera fusion framework

Real-time sensor fusion is critical for autonomous vehicles, which rely on a complex interplay of LiDAR, camera images, and radar data. To make critical navigation decisions in real-time, on-board deep learning models must efficiently process and integrate this sensor data directly within the vehicle, all while operating under limited hardware resources.

Quantization techniques offer a promising approach. By reducing the precision of the model’s calculations, quantization can significantly decrease inference time without sacrificing accuracy. This enables faster decision-making for autonomous vehicles, a crucial factor for safe and reliable navigation.

In this project, you will use Post-Training Quantization techniques for both the image encoder [1] and the point-cloud encoder [2] to improve the inference time of multi-modality models for autonomous driving [3].
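For background, post-training quantization in its simplest uniform affine form maps floating-point tensors to low-bit integers using a scale and zero-point calibrated from the observed value range. The NumPy sketch below is illustrative only and is not the method of [1] or [2]:

```python
import numpy as np

# Minimal sketch of uniform affine post-training quantization: map float
# weights to uint8 using a scale and zero-point calibrated from the tensor's
# observed range, then dequantize for simulated ("fake") quantized inference.

def calibrate(x, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)  # range must include 0
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, num_bits=8):
    q = np.round(x / scale + zero_point)
    return np.clip(q, 0, 2 ** num_bits - 1).astype(np.uint8)

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)
scale, zp = calibrate(w)
w_hat = dequantize(quantize(w, scale, zp), scale, zp)
print("max abs error:", np.abs(w - w_hat).max())
```

Per-tensor min/max calibration as above is the simplest choice; practical PTQ pipelines often use per-channel scales and percentile or MSE-based range estimation to reduce the effect of outliers.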

Supervisor(s): Moshe Kimhi and Dr. Chaim Baskin
Requirements: Must: knowledge in Python and a Deep Learning framework (either from a course or from previous projects). Advantage: knowledge in LiDAR, point clouds, or other 3D data formats.
Deep Learning Algorithms & Hardware
Improve contrastive learning for supervised learning

Inspired by the idea that deep neural networks learn features hierarchically, we aim to improve SupContrast [1] by encouraging the network to learn close features of super-classes in its early stages (low-level features will be closer for dogs and cats, but farther apart for dogs and aircraft). SupContrast improvements have been shown to be effective in class-imbalance settings [3].
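For reference, the supervised contrastive (SupCon) loss of [1] can be sketched in a few lines of NumPy; a hierarchical variant along the lines proposed above could apply the same loss at earlier layers using super-class labels instead of fine labels:

```python
import numpy as np

# Minimal NumPy sketch of the SupCon loss: embeddings with the same label
# are pulled together, all other embeddings in the batch are pushed apart.
# Assumes every anchor has at least one positive in the batch.

def supcon_loss(z, labels, tau=0.1):
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / tau
    n = len(labels)
    self_mask = ~np.eye(n, dtype=bool)                 # exclude i == a
    sim = sim - sim.max(axis=1, keepdims=True)         # numerical stability
    exp_sim = np.exp(sim) * self_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & self_mask
    # mean log-probability over positives, one loss value per anchor
    return -(log_prob * pos_mask).sum(1) / pos_mask.sum(1)

rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1])
z = rng.normal(size=(4, 8))
print(supcon_loss(z, labels).mean())
```

When same-class embeddings coincide and different-class embeddings are orthogonal, the loss approaches zero, which is the behavior the hierarchical variant would encourage at each stage.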

Supervisor(s): Moshe Kimhi, Dr. Chaim Baskin
Requirements: Knowledge in Deep Learning
Diffusion-based Quantization

Quantizing neural networks is a common technique to reduce computation complexity [1].

In this project, we follow [2] and try to mimic the quantization process in order to produce a set of quantized NNs based on classifier-free guidance [3].
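For background, classifier-free guidance [3] trains a single conditional model with the condition randomly dropped, and at sampling time combines the conditional and unconditional predictions. The sketch below is schematic; the denoiser is a dummy stand-in, not the network from [2]:

```python
import numpy as np

# Classifier-free guidance combination: eps_uncond + w * (eps_cond - eps_uncond).
# `eps_model` here is an invented stand-in; in the project it would be the
# diffusion network producing quantized weights.

def guided_eps(eps_model, x_t, t, cond, w=2.0):
    eps_uncond = eps_model(x_t, t, None)    # condition dropped
    eps_cond = eps_model(x_t, t, cond)      # condition provided
    return eps_uncond + w * (eps_cond - eps_uncond)

# Dummy denoiser for illustration only.
def eps_model(x_t, t, cond):
    shift = 0.0 if cond is None else float(cond)
    return x_t * 0.1 + shift

x_t = np.zeros(4)
print(guided_eps(eps_model, x_t, t=10, cond=1.0, w=2.0))
```

The guidance weight w trades off sample diversity against adherence to the condition; w = 1 recovers the plain conditional prediction.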

Supervisor(s): Moshe Kimhi, Dr. Chaim Baskin
Requirements: Proficiency in Deep Learning (e.g., after cs236781); knowledge of diffusion models preferred.
Diffusion-based Neural architecture search

Neural architecture search (NAS) is a meta-process of finding an optimal network architecture within a discrete search space of possible basic blocks [1]. Viewing the model as a directed graph, we can search over adjacency matrices alone. In this project, we aim to use conditional diffusion models [2] to generate architectures from an existing search space (such as NAS-Bench-101 [3]).

Note that using diffusion models to generate architectures was already explored in [4]. Other ideas in this field would also be valid for a project; please contact the supervisor.
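To make the adjacency-matrix view concrete, here is a hedged sketch of a NAS-Bench-101-style cell encoding: an upper-triangular adjacency matrix over a handful of nodes plus per-node operation labels. The validity check is simplified; the actual benchmark imposes further constraints (e.g., limits on the number of edges), so generated samples would need filtering or projection:

```python
import numpy as np

# A cell is a DAG over a few nodes: an adjacency matrix plus an operation
# label per node. A diffusion model would generate such matrices, and
# invalid samples must be rejected or projected back onto the search space.

OPS = ["input", "conv1x1", "conv3x3", "maxpool3x3", "output"]

def is_valid_cell(adj):
    n = adj.shape[0]
    if not np.array_equal(adj, np.triu(adj, k=1)):
        return False  # strictly upper-triangular => acyclic, topologically ordered
    reaches_out = bool(adj[:, -1].any()) or n == 1  # something feeds the output
    fed_by_in = bool(adj[0].any())                  # the input feeds something
    return reaches_out and fed_by_in

adj = np.zeros((5, 5), dtype=int)
edges = [(0, 1), (1, 2), (0, 3), (2, 4), (3, 4)]
for i, j in edges:
    adj[i, j] = 1
print(is_valid_cell(adj))  # a well-formed cell: True
```

Fixing a topological node order and requiring strict upper-triangularity is a cheap way to guarantee acyclicity, which is convenient when a generative model emits raw matrices.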

Supervisor(s): Moshe Kimhi, Dr. Chaim Baskin
Requirements: Proficiency in Deep Learning (e.g., after cs236781); knowledge of diffusion models preferred.
Predicting injuries among combat soldiers

Predicting injuries among combat soldiers is critical for a variety of reasons [1]. First and foremost, it protects soldiers’ health and safety while reducing the psychological and physical effects of combat. Second, injury prediction contributes to mission success by facilitating improved resource allocation and planning, which sustains operational effectiveness and unit readiness. It also helps with cost-cutting, effective resource management, and long-term health issues, which benefits society and the military by lowering healthcare costs and the proportion of disabled veterans. In the end, this proactive strategy promotes soldier morale while simultaneously ensuring the success and continuity of military operations.
We are looking for students with some experience writing DL algorithms for an influential project that is a joint venture between the physical therapy department of Haifa University and the VISTA lab, funded by the IDF, on Wearables and Injuries in Combat Soldiers.
The study seeks to forecast injuries and identify the critical factors influencing them. Our goal is to achieve this by utilizing data collected from wearables worn by Golani troops for a duration of six months.
In this project, you will analyze the data and create machine learning models that identify the critical elements influencing soldiers’ well-being, as well as forecasting models that anticipate injuries.
Contact Barak Gahtan if you are driven to work on a project involving real and big data to improve combat soldiers’ readiness.

[1] Papadakis N, Havenetidis K, Papadopoulos D, et al. Employing body-fixed sensors and machine learning to predict physical activity in military personnel. BMJ Mil Health 2023;169:152–156.

Supervisor(s): Prof. Alex Bronstein, Barak Gahtan
Requirements: A basic course in Deep Learning / Machine Learning and a passion for effective AI.
Threat-model agnostic adversarial training

Adversarial training is one of the most common ways to create a robust model: the training perturbs the input images while keeping the labels, and the model is trained on the perturbed examples. However, this kind of training targets only a specific attack. In this work, we aim to create a novel training method that is threat-model agnostic, defending against any threat model.
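To illustrate the standard, threat-model-specific baseline that this project aims to generalize, here is a toy NumPy sketch of adversarial training with an FGSM-style L-infinity attack on logistic regression. The model, data, and attack are illustrative stand-ins, not the proposed method:

```python
import numpy as np

# Toy adversarial training loop: at each step, perturb the inputs with a
# one-step sign-gradient (FGSM-style, L-infinity) attack against the current
# model, then take a gradient step on the perturbed batch. A threat-model
# agnostic method would not commit to this single fixed perturbation set.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    p = sigmoid(x @ w)
    grad_x = (p - y)[:, None] * w[None, :]   # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)         # worst-case L-infinity step

rng = np.random.default_rng(0)
n, d = 200, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)

w = np.zeros(d)
for _ in range(300):
    X_adv = fgsm(X, y, w, eps=0.1)           # attack the current model
    p = sigmoid(X_adv @ w)
    w -= 0.1 * X_adv.T @ (p - y) / n         # train on the adversarial batch

acc = ((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean()
print("clean accuracy:", acc)
```

The inner attack is what ties the defense to one threat model; swapping it for perturbations drawn from several threat models is one natural direction for this project.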

Supervisor(s): Tsachi Blau
Requirements: Knowledge in Machine Learning, preferably Deep Learning; good Python programming skills
AI in Quantum Optics

The adoption of advanced machine learning (ML) methods in physics has led to far-reaching advances in both theoretical predictions and experiments. Nevertheless, there are still physical phenomena, particularly in quantum physics, that have not yet benefited from this progress.  One important branch of quantum physics that might benefit significantly from the adoption of ML algorithms is quantum optics. Quantum optics has proven to be an invaluable resource for the realization of many quantum technologies, such as quantum cryptography, sensing, and computing. 

If we wish to employ learning-style optimization methods (ML/DL) for problems in quantum physics, it is crucial to have a good physical model of the quantum process in question and to integrate it into the algorithm itself; such processes can be difficult to model for problems in quantum mechanics. In a recent paper [1], we showed how to employ machine learning algorithms for inverse design problems in quantum optics. Specifically, we developed an algorithm for generating high-dimensional spatially entangled photon pairs by tailoring the nonlinear interactions of light. The work has generated a lot of interest [a, b, c] due to its potential to advance many areas in quantum optics. For example, the high dimensionality of these generated states increases the bandwidth of quantum information and can improve the security of quantum key distribution protocols.

In this project, we will substantially refine and extend the algorithm to solve new and important problems in quantum optics, such as applications in metamaterials [2], improving the fidelity of quantum communication, or designing optical interactions that generate a wide range of maximally entangled high-dimensional states. We will do this by using deep learning tools that replace internal modules based on optimization or numerical methods, and by applying advanced learning techniques such as Neural ODEs [3] and Implicit Neural Representations [4].

Supervisor(s): Prof. Alex Bronstein
Requirements: We are looking for someone with high mathematical skills and practical experience in writing ML/DL algorithms (and/or advanced optimization methods). Advantages: interest/experience in the field of physics, Elementary course in Deep Learning (e.g. 236781), experience with PyTorch/Jax.
Bioinformatics & Computational Chemistry
Twists in the protein folding dogma

Fifty years ago, Christian Anfinsen conducted a series of remarkable experiments on ribonuclease proteins — enzymes that “cut” RNA molecules in all living cells and are essential to many biological processes. Anfinsen showed that when the enzyme was “unfolded” by a denaturing agent it lost its function, and when the agent was removed, the protein regained its function. He concluded that the function of a protein is entirely determined by its 3D structure, which, in turn, is entirely determined by the electrostatic forces and thermodynamics of the sequence of amino acids composing the protein. This work, which earned Anfinsen his Nobel Prize in 1972, led to the “one sequence, one structure” principle that remains one of the central dogmas of molecular biology. However, within the cell, protein chains are not formed in isolation and left to fold on their own once produced. Rather, they are translated from genetic coding instructions (for which many synonymous versions exist to encode a single amino acid sequence) and begin to fold before the chain has fully formed, through a process known as co-translational folding. The effects of coding and co-translational folding mechanisms on the final protein structure are not well understood, and there are no studies showing side-by-side structural analysis of protein pairs having alternative synonymous coding.

In our previous works [1,2] we used the wealth of high-resolution protein structures available in the Protein Data Bank (PDB) to computationally explore the association between genetic coding and local protein structure. We observed a surprising statistical dependence between the two that is not readily explainable by known effects. In order to establish the causal direction (i.e., to unambiguously demonstrate that a synonymous mutation may lead to structural changes), we looked for suitable experimental targets. An ideal target would be a protein that has more than one stable conformation; by changing the coding, the protein might get kinetically trapped in a different, experimentally measurable conformation. To our surprise, an attentive study of the PDB data indicated that a very considerable fraction of experimentally resolved protein structures exist as an ensemble of several stable conformations thermodynamically isolated from each other — in clear violation of Anfinsen’s dogma.

We believe that this line of dogma-shattering works may change the way we conceive of protein structure, with deep impacts on how folding prediction models like AlphaFold are designed. This work is the result of a fruitful collaboration of the AAA trio — Dr. Ailie Marx (a structural biologist), Dr. Aviv Rosenberg and Prof. Alex Bronstein (computer scientists and engineers). We invite fearless students interested in the applications of human and artificial intelligence in life science to join us on this journey.


[1] A. Rosenberg, A. Marx, A. M. Bronstein, Codon-specific Ramachandran plots show amino acid backbone conformation depends on identity of the translated codon, Nature Communications, 2022

[2] L. Ackerman-Schraier, A. A. Rosenberg, A. Marx, A. M. Bronstein, Machine learning approaches demonstrate that protein structures carry information about their genetic coding, Nature Scientific Reports, 2022

[3] A. A. Rosenberg, N. Yehishalom, A. Marx, A. M. Bronstein, An amino-domino model described by a cross-peptide-bond Ramachandran plot defines amino acid pairs as local structural units, Proc. US National Academy of Sciences (PNAS), 2023

Supervisor(s): Prof. Alex Bronstein, Dr. Ailie Marx
Requirements: Basic statistical and machine learning tools
Molecular design and property prediction

Modern society is built on molecules and materials. Every technological advance – from drug therapies to sustainable fuels, from light-weight composites to wearable electronics – is possible thanks to the functional molecules at its core. How can we find the next generation of molecules that could potentially improve existing capabilities and unlock new ones? In principle, they should be somewhere among the collection of all possible molecules, also known as “chemical space”. We only need to find them. The only problem is that this space is practically infinite – so, to use the common saying, it is like looking for a needle in a haystack. We would never be able to make every possible molecule and test it to find out its properties and functionalities. High-throughput computational chemistry allows us to virtually screen millions of molecules without having to synthesize them in the lab beforehand. This is usually performed by calculating certain molecular properties given the molecular structure of interest. However, some properties, like the band gap, oxidation potential, or the molecule’s nuclear magnetic resonance chemical shifts, require expensive quantum simulations. The understanding of structure-property relations and the fact that they exhibit regular patterns call for data-driven forward modeling. In our previous work [1], we used SO(3)-invariant neural networks to accurately predict various molecular properties from the molecular structure, orders of magnitude faster than the existing simulation algorithms. Moreover, making the models interpretable allows us to rationalize structural motifs and their chemical effects, potentially discovering unknown chemistry.

However, despite the availability of faster data-driven forward prediction of molecular properties, it is still prohibitively expensive to apply it to the entire chemical space. The solution is to invert the process. Rather than sifting through millions of molecules and testing each one to determine its properties, we should aim to engineer structures that will have the desired properties. This is known as “inverse design”, and it is often considered the holy grail of chemistry and materials science. In our recent study [2] published in Nature Computational Science, we demonstrated a new approach for automatically designing new and better-performing molecules. Our method combined a diffusion model for structure generation that is guided towards desired molecular properties by a neural network for property prediction. This so-called guided diffusion model was whimsically (and appropriately) named GaUDI – after the famous Catalan designer and architect, Antoni Gaudi.
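Schematically, the guidance idea can be illustrated as nudging a sample along the gradient of a property predictor toward a target property value. The sketch below uses invented stand-in functions and toy dynamics; it is not the GaUDI implementation, which couples this guidance to a full diffusion sampler:

```python
import numpy as np

# Property-guided update: move a candidate structure x so that a predictor's
# output approaches a target value. Both functions are illustrative stand-ins.

def property_predictor(x):
    return (x ** 2).sum()          # placeholder property, e.g. a band-gap proxy

def property_grad(x, eps=1e-4):
    # central-difference gradient (a neural predictor would use autodiff)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (property_predictor(x + e) - property_predictor(x - e)) / (2 * eps)
    return g

def guided_step(x, target, step=0.05):
    # descend the squared property error: d/dx (f(x) - target)^2 / 2
    err = property_predictor(x) - target
    return x - step * err * property_grad(x)

x = np.array([1.0, -2.0, 0.5])
for _ in range(100):
    x = guided_step(x, target=1.0)
print(property_predictor(x))  # approaches the target property value
```

In a guided diffusion model, an analogous gradient term is added at each denoising step, so generation is steered toward the desired property rather than searched for after the fact.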

Among the variety of chemical spaces, we focus on polycyclic aromatic systems – molecules that are organic semiconductors and can be used in various organic electronic devices, such as OLEDs, OFETs, and OPVs. We use traditional computational simulators to generate carefully curated large-scale databases of structure-property pairs that can be used for training and evaluation. Among the future directions of interest to us are the prediction of a molecule’s “synthesizability” and the forward and inverse modeling of local scalar, vector, and tensor properties, such as the current densities determining the molecule’s magnetic behavior.

This line of work is the result of a fruitful duet between Prof. Renana Poranne from the Department of Chemistry and Prof. Alex Bronstein from the Department of Computer Science. We invite fearless students interested in the development of new “AI” tools for scientific applications to contact us for additional information.


[1] T. Weiss, A. Wahab, A. M. Bronstein, R. Gershoni-Poranne, Interpretable deep learning unveils structure-property relationships in polybenzenoid hydrocarbons, Journal of Organic Chemistry, 2023.

[2] T. Weiss, L. Cosmo, E. Mayo Yanes, S. Chakraborty, A. M. Bronstein, R. Gershoni-Poranne, Guided diffusion for inverse molecular design, Nature Computational Science 3(10), 873–882, 2023.

Supervisor(s): Prof. Alex Bronstein, Prof. Renana Poranne
Requirements: Basic statistical and machine learning tools