ONGOING PROJECTS

Signal & Image Processing
Acoustic Drone Navigation

Localization is the task of establishing an agent's location in a known environment given some location-dependent readout sampled by the agent's sensors.
In acoustic localization, the sound propagating through the environment serves as the signal for localization. Specifically, our group researches the localization of drones based on the sound already emitted by their propulsion systems (i.e., their rotors).
The sound emitted by the drone propagates through the environment, which determines the reflections and decay it undergoes before reaching the sensor array (in our case, an array of 8 microphones mounted on the drone). In [1] we developed a deep-learning-based solution that regresses the drone's location from these sampled sound waveforms.

In this project we want to explore whether cross-sensor aggregation can improve various aspects of localization. We wish to see whether imposing "agreement" between the predictions coming separately from each sensor can help the regressor achieve desired properties. We will measure this "agreement" via the gradients of the location prediction w.r.t. the input from each sensor, and test whether aligning those gradients can yield increased localization certainty, adversarial robustness, enhanced accuracy, and more.
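As a toy illustration of what such gradient "agreement" could look like (using a hypothetical linear regressor, not the model from [1]), one can compare the gradients of the predicted location w.r.t. each sensor's input via their pairwise cosine similarity:

```python
import numpy as np

def per_sensor_gradients(W, x):
    """Gradients of a toy linear location regressor y = sum_s W[s] @ x[s]
    w.r.t. each sensor's input x[s]; for this model they are simply W[s].
    W, x: (n_sensors, d) arrays (one waveform window per microphone)."""
    return np.broadcast_to(W, x.shape).copy()

def agreement(grads):
    """Mean pairwise cosine similarity between per-sensor gradients."""
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    sim = g @ g.T
    n = len(g)
    return (sim.sum() - n) / (n * (n - 1))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))                      # 8 microphones, 16 samples each
W_aligned = np.tile(rng.standard_normal(16), (8, 1))  # identical per-sensor weights
W_random = rng.standard_normal((8, 16))               # unrelated per-sensor weights

print(agreement(per_sensor_gradients(W_aligned, x)))  # close to 1: full agreement
print(agreement(per_sensor_gradients(W_random, x)))   # close to 0: no agreement
```

A regularizer penalizing low agreement could then be added to the training loss; whether this actually improves certainty or robustness is precisely the question the project asks.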

Supervisor(s): Tamir Shor
Requirements: Deep Learning course
Deep Learning Algorithms & Hardware
Diffusion-based Quantization

Quantizing neural networks is a common technique to reduce computation complexity [1].

In this project, we follow [2] and try to mimic the quantization process in order to produce a set of quantized NNs based on classifier-free guidance [3].

[1] https://arxiv.org/pdf/2103.13630.pdf 

[2] https://arxiv.org/pdf/2205.15437.pdf 

[3] https://arxiv.org/pdf/2207.12598.pdf 
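For reference, the core of classifier-free guidance [3] is a linear combination of the conditional and unconditional predictions of the denoiser; a minimal numpy sketch of just that combination step (the inputs here are stand-ins, not outputs of an actual diffusion model):

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate from the unconditional
    denoiser output towards the conditional one with guidance weight w
    (w = 0: unconditional, w = 1: conditional, w > 1: over-emphasis)."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Stand-in denoiser outputs (not from a real diffusion model):
eps_u, eps_c = np.zeros(4), np.ones(4)
print(cfg_combine(eps_u, eps_c, 0.0))  # [0. 0. 0. 0.]
print(cfg_combine(eps_u, eps_c, 1.0))  # [1. 1. 1. 1.]
print(cfg_combine(eps_u, eps_c, 2.0))  # [2. 2. 2. 2.]
```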

Supervisor(s): Moshe Kimhi, Dr. Chaim Baskin
Requirements: Proficiency in Deep Learning (e.g. after cs236781); knowledge of diffusion models preferred.
Diffusion-based Neural architecture search

Neural architecture search (NAS) is a meta-process of finding an optimal network architecture in a discrete search space of possible basic blocks [1]. Viewing the model as a directed graph, the search reduces to finding an adjacency matrix. In this project, we aim to use conditional diffusion models [2] to generate architectures from an existing search space (such as NAS-Bench-101 [3]).
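To make the adjacency-matrix view concrete, here is a generic sketch (not tied to any particular benchmark's encoding) of sampling a candidate architecture as a strictly upper-triangular binary matrix, a representation that is acyclic by construction:

```python
import numpy as np

def random_dag_adjacency(n_nodes, edge_prob, rng):
    """Sample a random DAG over n_nodes ordered operation nodes.
    Keeping only the strictly upper triangle guarantees acyclicity,
    since every edge goes from a lower-indexed node to a higher one."""
    dense = rng.random((n_nodes, n_nodes)) < edge_prob
    return np.triu(dense, k=1).astype(int)

def is_dag(adj):
    """A strictly upper-triangular adjacency matrix has no cycles."""
    return np.array_equal(adj, np.triu(adj, k=1))

rng = np.random.default_rng(42)
adj = random_dag_adjacency(7, 0.5, rng)  # 7 operation nodes (NAS-Bench-101 cells use up to 7)
print(adj)
print(is_dag(adj))  # True
```

A generative model over such matrices (plus per-node operation labels) is one natural parameterization for diffusion-based NAS.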

Note that the use of diffusion models for generating architectures has already been published in [4]. Other ideas in this field would also be valid for a project; please contact the supervisor.

[1] https://arxiv.org/pdf/2301.08727.pdf 

[2] https://arxiv.org/pdf/2207.12598.pdf 

[3] https://arxiv.org/pdf/1902.09635.pdf 

[4] https://arxiv.org/pdf/2305.16943.pdf 

Supervisor(s): Moshe Kimhi, Dr. Chaim Baskin
Requirements: Proficiency in Deep Learning (e.g. after cs236781); knowledge of diffusion models preferred.
Evaluating the point-wise significance of sparse adversarial attacks

Background: Adversarial attacks are small bounded-norm perturbations of a network’s input that aim to alter the network’s output and are known to mislead and undermine the performance of deep neural networks (DNNs). Adversarial defenses then aim to mitigate the effect of such attacks. Relevant papers: “Explaining and harnessing adversarial examples”, “Can audio-visual integration strengthen robustness under multimodal attacks”.
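The canonical example from "Explaining and harnessing adversarial examples" is the fast gradient sign method (FGSM); a minimal sketch on a toy logistic model, with the input gradient written out by hand rather than obtained by autodiff:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method for binary logistic regression.
    The gradient of the cross-entropy loss w.r.t. the input x is
    (p - y) * w, so the attack takes an eps-sized step in the
    direction of its sign (an L_inf bounded perturbation)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w = rng.standard_normal(10)
b = 0.0
x = w.copy()  # a point the model classifies confidently as class 1
x_adv = fgsm(x, 1.0, w, b, eps=0.5)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # confidence drops
```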

Project Description: In this project, we discuss adversarial attacks on multimodal models and aim to utilize the various input channels for improved adversarial robustness.

Supervisor(s): Yaniv Nemcovsky
Requirements: Deep Learning course
Adversarial attacks on non-differentiable statistical models

Background: Adversarial attacks were first discovered in the context of deep neural networks (DNNs), where the networks' gradients were used to produce small bounded-norm perturbations of the input that significantly altered the output. Such attacks aim to increase the model's loss or decrease its accuracy, and have been shown to undermine the impressive performance of DNNs in multiple fields. Relevant papers: "Explaining and harnessing adversarial examples".

Project Description: In this project, we aim to produce adversarial attacks on non-differentiable models such as statistical algorithms.
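One generic route for attacking a model that exposes no gradients (a sketch of a standard zeroth-order approach, not necessarily the method this project will settle on) is to estimate the input gradient from finite differences of the model's scalar output:

```python
import numpy as np

def zo_gradient(f, x, delta=1e-3):
    """Coordinate-wise finite-difference estimate of grad f(x),
    using only black-box evaluations of f (no differentiability needed)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = delta
        g[i] = (f(x + e) - f(x - e)) / (2 * delta)
    return g

def zo_attack(f, x, eps, steps=10):
    """Greedily increase the black-box objective f within an L_inf
    ball of radius eps around x, using sign steps on the estimate."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + (eps / steps) * np.sign(zo_gradient(f, x_adv))
    return np.clip(x_adv, x - eps, x + eps)

# Toy non-differentiable objective: the median of the input.
x0 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
loss = lambda z: float(np.median(z))
x_adv = zo_attack(loss, x0, eps=0.3)
print(loss(x0), loss(x_adv))  # the attack pushes the median up by eps
```

The same template applies to any statistical algorithm that returns a score, at the cost of many function evaluations per step.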

Supervisor(s): Yaniv Nemcovsky
Requirements: Deep Learning course
Multimodal-based adversarial defense

Background: Adversarial attacks are small bounded-norm perturbations of a network’s input that aim to alter the network’s output and are known to mislead and undermine the performance of deep neural networks (DNNs). Adversarial defenses then aim to mitigate the effect of such attacks. Relevant papers: “Explaining and harnessing adversarial examples”, “Can audio-visual integration strengthen robustness under multimodal attacks”.

Project Description: In this project, we discuss adversarial attacks on multimodal models and aim to utilize the various input channels for improved adversarial robustness.

Supervisor(s): Yaniv Nemcovsky
Requirements: Deep Learning course
AI in Quantum Optics

The adoption of advanced machine learning (ML) methods in physics has led to far-reaching advances in both theoretical predictions and experiments. Nevertheless, there are still physical phenomena, particularly in quantum physics, that have not yet benefited from this progress.  One important branch of quantum physics that might benefit significantly from the adoption of ML algorithms is quantum optics. Quantum optics has proven to be an invaluable resource for the realization of many quantum technologies, such as quantum cryptography, sensing, and computing. 

If we wish to employ learning-style optimization methods (ML/DL) for problems in quantum physics, it is crucial to have a good physical model of the quantum process in question and to integrate it into the algorithm itself; such processes are often difficult to model. In a recent paper [1], we showed how to employ machine learning algorithms for inverse design problems in quantum optics. Specifically, we developed an algorithm for generating high-dimensional spatially entangled photon pairs by tailoring the nonlinear interactions of light. The work has generated considerable interest [a, b, c] due to its potential to advance many areas in quantum optics. For example, the high dimensionality of these generated states increases the bandwidth of quantum information and can improve the security of quantum key distribution protocols.

In this project, we will substantially refine and extend the algorithm to solve new important problems in quantum optics, such as applications in metamaterials [2], improving the fidelity of quantum communication, or designing optical interactions that generate a wide range of maximally entangled high-dimensional states. We will do this by replacing internal modules based on optimization or numerical methods with deep learning tools, and by applying advanced learning techniques such as Neural ODEs [3] and Implicit Neural Representations [4].
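To give a flavor of the Neural ODE ingredient [3]: instead of a stack of discrete layers, a learned vector field dz/dt = f(z) is numerically integrated over a continuous depth. A minimal sketch with a fixed-step Euler solver and a hand-written linear field standing in for the learned network:

```python
import numpy as np

def odeint_euler(f, z0, t0, t1, n_steps):
    """Fixed-step Euler integration of dz/dt = f(z): the simplest
    stand-in for the adaptive solvers used in Neural ODEs."""
    z = z0.astype(float)
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        z = z + h * f(z)
    return z

# Linear dynamics dz/dt = -z plays the role of a learned network f(z, theta);
# the exact solution is z(t) = z0 * exp(-t).
z0 = np.array([1.0, 2.0])
z1 = odeint_euler(lambda z: -z, z0, 0.0, 1.0, 1000)
print(z1, z0 * np.exp(-1.0))  # Euler output approaches the exact solution
```

In a real Neural ODE, f would be a trainable network and gradients would flow through (or around) the solver.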

[1] https://doi.org/10.1364/OPTICA.451115

[2] https://www.science.org/doi/10.1126/science.abq8684

[3] http://papers.neurips.cc/paper/7892-neural-ordinary-differential-equations.pdf

[4] https://www.vincentsitzmann.com/siren/

 

[a] https://www.ynet.co.il/environment-science/article/skxiy2r0q

[b] https://www.technion.ac.il/en/2022/10/entangled-photons-computational-learning/

[c] https://quantum.technion.ac.il/Quantum-Entanglement-In-Crystal

 

Supervisor(s): Prof. Alex Bronstein
Requirements: We are looking for someone with strong mathematical skills and practical experience in writing ML/DL algorithms (and/or advanced optimization methods). Advantages: interest/experience in physics, an elementary course in deep learning (e.g. 236781), experience with PyTorch/JAX.
Bioinformatics & Computational Chemistry
Twists in the protein folding dogma

Fifty years ago, Christian Anfinsen conducted a series of remarkable experiments on ribonuclease proteins, enzymes that "cut" RNA molecules in all living cells and are essential to many biological processes. Anfinsen showed that when the enzyme was "unfolded" by a denaturing agent it lost its function, and when the agent was removed, the protein regained its function. He concluded that the function of a protein is entirely determined by its 3D structure, and the latter, in turn, is entirely determined by the electrostatic forces and thermodynamics of the sequence of amino acids composing the protein. This work, which earned Anfinsen his Nobel Prize in 1972, led to the "one sequence, one structure" principle that remains one of the central dogmas of molecular biology. However, within the cell, protein chains are not formed in isolation and left to fold alone once produced. Rather, they are translated from genetic coding instructions (of which many synonymous versions exist for a single amino acid sequence) and begin to fold before the chain has fully formed, through a process known as co-translational folding. The effects of coding and co-translational folding mechanisms on the final protein structure are not well understood, and there are no studies showing a side-by-side structural analysis of protein pairs with alternative synonymous coding.

In our previous works [1,2] we used the wealth of high-resolution protein structures available in the Protein Data Bank (PDB) to computationally explore the association between genetic coding and local protein structure. We observed a surprising statistical dependence between the two that is not readily explainable by known effects. To establish the causal direction (i.e., to unambiguously demonstrate that a synonymous mutation may lead to structural changes), we looked for suitable experimental targets. An ideal target would be a protein that has more than one stable conformation; by changing the coding, the protein might get kinetically trapped in a different, experimentally measurable conformation. To our surprise, an attentive study of the PDB data indicated that a very considerable fraction of experimentally resolved protein structures exist as an ensemble of several stable conformations thermodynamically isolated from each other, in clear violation of Anfinsen's dogma.

We believe that this line of dogma-shattering work may change the way we conceive of protein structure, with deep impact on how folding-prediction models like AlphaFold are designed. This work is the result of a fruitful collaboration of the AAA trio: Dr. Ailie Marx (a structural biologist), and Dr. Aviv Rosenberg and Prof. Alex Bronstein (computer scientists and engineers). We invite fearless students interested in the applications of human and artificial intelligence in the life sciences to join us on this journey.

References:

[1] A. Rosenberg, A. Marx, A. M. Bronstein, Codon-specific Ramachandran plots show amino acid backbone conformation depends on identity of the translated codon, Nature Communications, 2022

[2] L. Ackerman-Schraier, A. A. Rosenberg, A. Marx, A. M. Bronstein, Machine learning approaches demonstrate that protein structures carry information about their genetic coding, Nature Scientific Reports, 2022

[3] A. A. Rosenberg, N. Yehishalom, A. Marx, A. M. Bronstein, An amino-domino model described by a cross-peptide-bond Ramachandran plot defines amino acid pairs as local structural units, Proc. US National Academy of Sciences (PNAS), 2023

Supervisor(s): Prof. Alex Bronstein, Dr. Ailie Marx
Requirements: Basic statistical and machine learning tools
Molecular design and property prediction

Modern society is built on molecules and materials. Every technological advance, from drug therapies to sustainable fuels, from light-weight composites to wearable electronics, is possible thanks to the functional molecules at its core. How can we find the next generation of molecules that could potentially improve existing capabilities and unlock new ones? In principle, they should be somewhere in the collection of all possible molecules, also known as "chemical space". We only need to find them. The problem is that this space is practically infinite, so, to use the common saying, it is like looking for a needle in a haystack. We would never be able to make every possible molecule and test it to find out its properties and functionalities.

High-throughput computational chemistry allows us to virtually screen millions of molecules without having to synthesize them in the lab beforehand. This is usually performed by calculating certain molecular properties given the molecular structure of interest. However, some properties, like the band gap, the oxidation potential or the molecule's nuclear magnetic resonance chemical shifts, require expensive quantum simulations. The understanding of structure-property relations and the fact that they exhibit regular patterns call for data-driven forward modeling. In our previous work [1], we used SO(3)-invariant neural networks to accurately predict various molecular properties from the molecular structure, orders of magnitude faster than the existing simulation algorithms. Moreover, making the models interpretable allows us to rationalize structural motifs and their chemical effects, potentially discovering unknown chemistry.

However, despite the availability of faster data-driven forward prediction of molecular properties, it is still prohibitively expensive to apply it to the entire chemical space. The solution is to invert the process. Rather than sifting through millions of molecules and testing each one to determine its properties, we should aim to engineer structures that will have the desired properties. This is known as “inverse design”, and it is often considered the holy grail of chemistry and materials science. In our recent study [2] published in Nature Computational Science, we demonstrated a new approach for automatically designing new and better-performing molecules. Our method combined a diffusion model for structure generation that is guided towards desired molecular properties by a neural network for property prediction. This so-called guided diffusion model was whimsically (and appropriately) named GaUDI – after the famous Catalan designer and architect, Antoni Gaudi.
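The guidance mechanism of [2] can be illustrated in isolation (a deliberately toy sketch, not GaUDI itself): each denoising step is followed by a step down the gradient of a property-prediction loss, steering generation towards the target property:

```python
import numpy as np

def guided_denoise(x, denoise_step, prop_grad, scale, n_steps):
    """Each iteration applies one (placeholder) denoising step, then
    a guidance step down the gradient of the property loss."""
    for _ in range(n_steps):
        x = denoise_step(x)
        x = x - scale * prop_grad(x)
    return x

# Placeholder denoiser: mild shrinkage towards the data mean (here 0).
denoise_step = lambda x: 0.99 * x
# Toy property loss L(x) = 0.5 * (mean(x) - target)^2 and its gradient:
target = 1.0
prop_grad = lambda x: (x.mean() - target) * np.full_like(x, 1.0 / len(x))

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)
x1 = guided_denoise(x0, denoise_step, prop_grad, scale=0.5, n_steps=200)
print(x0.mean(), x1.mean())  # guidance pulls mean(x) towards the target
```

Here the placeholder "denoiser" merely shrinks the sample and the "property" is its mean; in GaUDI both roles are played by neural networks operating on molecular structures.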

Among the variety of chemical spaces, we focus on polycyclic aromatic systems: organic semiconductor molecules that can be used in various organic electronic devices, such as OLEDs, OFETs and OPVs. We use traditional computational simulators to generate carefully curated large-scale databases of structure-property pairs for training and evaluation. Among the future directions of interest to us are the prediction of a molecule's "synthesizability", and the forward and inverse modeling of local scalar, vector and tensor properties, such as the current densities determining the molecule's magnetic behavior.

This line of work is the result of a fruitful duet of Prof. Renana Poranne from the Department of Chemistry and Prof. Alex Bronstein from the Department of Computer Science. We invite fearless students interested in the development of new "AI" tools for scientific applications to contact us for additional information.

References:

[1] T. Weiss, A. Wahab, A. M. Bronstein, R. Gershoni-Poranne, Interpretable deep learning unveils structure-property relationships in polybenzenoid hydrocarbons, Journal of Organic Chemistry, 2023.

[2] T. Weiss, L. Cosmo, E. Mayo Yanes, S. Chakraborty, A. M. Bronstein, R. Gershoni-Poranne, Guided diffusion for inverse molecular design, Nature Computational Science 3(10), 873–882, 2023.

Supervisor(s): Prof. Alex Bronstein, Prof. Renana Poranne
Requirements: Basic statistical and machine learning tools