Previous Projects

Medical Imaging & Data Analysis
Deep learning for efficient MRI
Supervisor(s): Prof. Alex Bronstein, Dr. Michael Zibulevsky, Sanketh Vedula, Ortal Senouf

Magnetic Resonance Imaging (MRI) is a leading modality in medical imaging, since it is non-invasive and produces excellent contrast. However, the long acquisition time of MRI currently prohibits its use in many applications, such as cardiac imaging and emergency-room settings. During the past few years, compressed sensing and deep learning have been at the forefront of MR image reconstruction, leading to great improvements in image quality with reduced scan times. In this project, we will work towards building novel techniques to push the current benchmarks in deep-learning-based MRI.
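To make the setting concrete, here is a small illustrative sketch (not the project's code) of the accelerated-MRI baseline that deep-learning methods aim to improve upon: k-space is retrospectively undersampled and reconstructed by zero-filling. The phantom, mask, and sampling scheme are toy choices.

```python
# Toy sketch: zero-filled reconstruction from undersampled k-space.
# Image, acceleration factor, and sampling pattern are illustrative only.
import numpy as np

def undersample_kspace(image, acceleration=4, seed=0):
    """Simulate accelerated MRI: keep a random subset of k-space rows."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    n = image.shape[0]
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, n // acceleration, replace=False)] = True
    mask[n // 2 - 4 : n // 2 + 4] = True  # always keep low frequencies
    return kspace * mask[:, None], mask

def zero_filled_recon(kspace):
    """Baseline reconstruction: inverse FFT of the zero-filled k-space."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0  # toy "anatomy"
kspace, mask = undersample_kspace(phantom)
recon = zero_filled_recon(kspace)
```

A learned reconstruction network would replace (or refine) the zero-filling step, mapping the aliased image back to the fully sampled one.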

Deep Learning Algorithms & Hardware
Predicting injuries among combat soldiers
Year: 2024
Student(s): Nadav Galper
Supervisor(s): Prof. Alex Bronstein, Barak Gahtan

Predicting injuries among combat soldiers is critical for a variety of reasons [1]. First and foremost, it protects soldiers’ health and safety while reducing the psychological and physical effects of combat. Second, injury prediction contributes to mission success by facilitating improved resource allocation and planning, which sustains operational effectiveness and unit readiness. It also helps with cost-cutting, effective resource management, and long-term health issues, which benefits society and the military by lowering healthcare costs and the proportion of disabled veterans. In the end, this proactive strategy promotes soldier morale while simultaneously ensuring the success and continuity of military operations.
We are looking for students with some experience writing DL algorithms for an influential project on wearables and injuries in combat soldiers. The project is a joint venture between the physical therapy department of Haifa University and the VISTA lab, funded by the IDF.
The study seeks to forecast injuries and identify the critical factors influencing soldier injuries. Our goal is to achieve this by utilizing data collected from wearables worn by Golani troops over a six-month period.
In this project, you will analyze the data and create machine learning models that identify the critical factors influencing soldiers' well-being, as well as forecasting models that anticipate injuries.
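As a generic illustration of the kind of pipeline such a project might start from (the features and labels below are synthetic stand-ins, not the actual wearable data), one can fit a simple risk model and read its weights as crude factor importances:

```python
# Synthetic stand-in data: logistic regression relating daily load metrics
# to a binary injury label. Feature names and effects are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Toy standardized features: [step count, carried load, sleep hours]
X = rng.normal(size=(n, 3))
true_w = np.array([0.5, 1.5, -1.0])  # load raises risk, sleep lowers it (toy)
y = (X @ true_w + rng.normal(0, 0.5, n) > 0).astype(float)

w = np.zeros(3)
for _ in range(2000):  # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / n

print("fitted weights:", w)
```

The sign pattern of the fitted weights recovers the toy generative model; the real study would of course use richer models and the actual sensor streams.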
Contact Barak Gahtan if you are driven to work on a project involving real and big data to improve combat soldiers’ readiness.

[1] Papadakis N, Havenetidis K, Papadopoulos D, et al. Employing body-fixed sensors and machine learning to predict physical activity in military personnel. BMJ Mil Health 2023;169:152-156.

How does quantization noise affect accuracy?
Supervisor(s): Chaim Baskin

When quantizing a neural network, it is often desired to set a different bit-width for each layer. To that end, we need to derive a method to measure the effect of quantization errors in individual layers on the overall model prediction accuracy. Then, by combining the effects of all layers, the optimal bit-width can be decided for each layer. Without such a measure, an exhaustive search for the optimal bit-width of each layer is required, which makes the quantization process less efficient.
The cosine similarity, mean squared error (MSE), and signal-to-noise ratio (SNR) have all been proposed as metrics for measuring the sensitivity of DNN layers to quantization. We have shown that the cosine-similarity measure has significant benefits compared to the MSE measure. Yet there is no theoretical analysis showing how these measures relate to the accuracy of the DNN model.
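The kind of per-layer measurement in question can be sketched in a few lines; this is a minimal illustration (toy shapes and data, symmetric uniform quantization assumed), not the lab's measurement code:

```python
# Quantize one weight tensor to b bits, then compare the layer's output
# against the full-precision output with MSE and cosine similarity.
import numpy as np

def quantize(w, bits):
    """Symmetric uniform quantization to the given bit-width."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 64))   # a batch of layer inputs
w = rng.normal(size=(64, 32))    # full-precision weights

ref = x @ w
for bits in (8, 4, 2):
    out = x @ quantize(w, bits)
    mse = np.mean((ref - out) ** 2)
    cos = np.sum(ref * out) / (np.linalg.norm(ref) * np.linalg.norm(out))
    print(f"{bits}-bit: MSE={mse:.4f}, cosine={cos:.6f}")
```

Ranking layers by such scores is what replaces the exhaustive per-layer bit-width search described above.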

In this project, we would like to conduct a theoretical and empirical investigation to find out how quantization in the layer domain affects noise in the feature domain. Considering classification tasks first, there should be a minimal noise level that causes misclassification at the last layer (softmax). This error level can then be propagated backwards to set the noise tolerance of earlier layers. We may be able to borrow insights and models from communication systems, where noise accumulation has been studied extensively.
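The "minimal noise level that causes misclassification" is easy to pin down at the logits: the smallest perturbation that flips the argmax is governed by the margin between the top two logits. A toy illustration (the logit values are invented):

```python
# The margin between the top-two logits bounds the noise a classifier's
# final layer can tolerate: perturbing each of the two by half the margin
# (in opposite directions) is enough to flip the prediction.
import numpy as np

def top2_margin(logits):
    """Gap between the largest and second-largest logit."""
    top2 = np.sort(logits)[-2:]
    return top2[1] - top2[0]

logits = np.array([2.0, 1.2, -0.5])
m = top2_margin(logits)  # 0.8 for this toy vector

noise = np.zeros_like(logits)
noise[0] = -m / 2 - 1e-6  # push the winner down ...
noise[1] = +m / 2 + 1e-6  # ... and the runner-up up
assert np.argmax(logits + noise) != np.argmax(logits)
```

Propagating this tolerance backwards through the layers, as proposed above, is where the communication-systems analogy would enter.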

Exploring the expressiveness of quantized neural networks
Supervisor(s): Chaim Baskin

It has been shown that it is possible to significantly quantize both the activations and weights of neural networks during propagation, while preserving near-state-of-the-art performance on standard benchmarks. Much effort is being invested in leveraging these observations, with vendors such as Intel and NVIDIA proposing low-precision hardware. Parallel efforts are devoted to designing efficient models that can run on a CPU, or even on a mobile phone. The idea is to use extremely computation-efficient architectures (i.e., architectures with far fewer parameters than traditional ones) that maintain comparable accuracy while achieving significant speedups.
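The setting described above, quantizing both weights and activations during the forward pass, can be sketched on a tiny random MLP (toy shapes and data; agreement with the full-precision prediction stands in for benchmark accuracy):

```python
# Fake-quantization of weights AND activations in a 2-layer MLP forward
# pass, compared against the full-precision forward. Illustrative only.
import numpy as np

def q(t, bits):
    """Symmetric uniform quantization of a tensor to the given bit-width."""
    scale = np.abs(t).max() / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(t / scale) * scale

rng = np.random.default_rng(1)
x = rng.normal(size=(256, 32))
w1, w2 = rng.normal(size=(32, 64)), rng.normal(size=(64, 10))

def forward(x, bits=None):
    maybe_q = (lambda t: q(t, bits)) if bits else (lambda t: t)
    h = np.maximum(maybe_q(x) @ maybe_q(w1), 0)  # ReLU
    return maybe_q(h) @ maybe_q(w2)

ref = forward(x).argmax(1)
for bits in (8, 4, 2):
    agree = (forward(x, bits).argmax(1) == ref).mean()
    print(f"{bits}-bit weights+activations: {agree:.0%} agreement with FP32")
```

Even on this random net, 8-bit quantization barely perturbs the predictions while 2-bit does, which is the trade-off the project would study through the lens of over-parameterization.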

In this project we would like to study the trade-offs between quantization and over-parameterization of neural networks from a theoretical perspective. At a higher level, we would like to understand how these efforts to optimize the number of operations interact with the parallel efforts in network quantization. Will future models be harder to quantize? Can hardware support for non-uniform quantization be helpful here?

Not just ReLU: Evaluating Activation Functions on FPGA
Supervisor(s): Prof. Avi Mendelson and Moshe Kimhi

Activation functions are a crucial part of deep neural networks, providing the nonlinear transformation of the features.
In this project we aim to implement an FPGA accelerator and measure the performance of different activation functions, including some developed here in the lab, in order to help algorithm developers and offer another perspective on choosing an architecture.
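To illustrate why activation choice matters for hardware (this is a software-side sketch, not the lab's FPGA flow): piecewise-linear "hard" variants are popular on FPGAs because they need only a multiply, add, and clamp in fixed-point logic, avoiding exponentials and division, at the cost of a bounded approximation error.

```python
# Compare a hardware-friendly piecewise-linear hard sigmoid against the
# exact sigmoid. Slope 0.25 matches sigmoid's derivative at zero.
import numpy as np

def hard_sigmoid(x):
    """PWL approximation: scale, shift, clamp - cheap in fixed point."""
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-6, 6, 1001)
err = np.max(np.abs(hard_sigmoid(x) - sigmoid(x)))
print(f"max |hard_sigmoid - sigmoid| on [-6, 6]: {err:.3f}")
```

Measuring whether such approximation errors actually cost accuracy, against the cycles and LUTs they save, is exactly the kind of trade-off an FPGA evaluation of activations would quantify.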