Jhair Gallardo Personal Website

Jhair Gallardo

About Me

Hello! I am a PhD candidate working under the supervision of Dr. Christopher Kanan in the Chester F. Carlson Center for Imaging Science at Rochester Institute of Technology (RIT). My research focuses on self-supervised learning, continual learning, and computer vision. I am mainly interested in efficient continual representation learning systems that require minimal supervision. Previously, I interned at Siemens Healthineers, where I worked on self-supervised learning techniques for medical imaging.


News

Sep 2024: New! Our paper "What Variables Affect Out-Of-Distribution Generalization in Pretrained Models?" was accepted at NeurIPS 2024!

May 2024: Our TMLR paper "SIESTA: Efficient Online Continual Learning with Sleep" was accepted to the journal track of CoLLAs 2024!

May 2024: Our paper "Human Emotion Estimation through Physiological Data with Neural Networks" was accepted at SoSE 2024!

Apr 2024: Our paper "GRASP: A Rehearsal Policy for Efficient Online Continual Learning" was accepted at CoLLAs 2024!

Oct 2023: Our paper "SIESTA: Efficient Online Continual Learning with Sleep" was accepted to Transactions on Machine Learning Research (TMLR)!

Aug 2023: Joined Siemens Healthineers as an Image Analytics Intern working on self-supervised learning for medical imaging.

Apr 2023: Our paper "How Efficient Are Today's Continual Learning Algorithms?" was accepted at the CLVISION Workshop at CVPR 2023!

Oct 2022: Gave an invited talk on "Classifying Images By Combining Self-Supervised and Continual Learning" as part of the Center for Human-aware AI (CHAI) seminars.

Jan 2022: Was accepted into the AWARE-AI NRT program at RIT as a Trainee!

Oct 2021: Our paper "Self-Supervised Training Enhances Online Continual Learning" was accepted for poster presentation at BMVC 2021! (36.21% acceptance rate)

Apr 2021: Gave an invited talk at the Continual AI Reading Group about our paper "Self-Supervised Training Enhances Online Continual Learning".

Apr 2020: Joined the Machine and Neuromorphic Perception Laboratory as a PhD student working on self-supervised learning and lifelong machine learning.

Apr 2019: Was admitted to the Rochester Institute of Technology Imaging Science PhD program!

May 2018: Joined Everis as a Machine Learning Engineer working on recommendation systems, image classification, object detection, and tracking.

Apr 2017: Joined Siemens Healthineers as a Research Intern working on lung cancer detection.

Dec 2015: Obtained my BS in Mechatronics Engineering from Universidad Nacional de Ingeniería in Lima, Peru.

Research

NeurIPS 2024: What Variables Affect Out-Of-Distribution Generalization in Pretrained Models?

Md Yousuf Harun, Kyungbok Lee, Jhair Gallardo, Giri Krishnan, Christopher Kanan


Embeddings produced by pre-trained deep neural networks (DNNs) are widely used; however, their efficacy for downstream tasks can vary greatly. We study the factors influencing out-of-distribution (OOD) generalization of pre-trained DNN embeddings through the lens of the tunnel effect hypothesis, which suggests deeper DNN layers compress representations and hinder OOD performance. Contrary to earlier work, we find the tunnel effect is not universal. Based on 10,584 linear probes, we study the conditions that mitigate the tunnel effect by varying DNN architecture, training dataset, image resolution, and augmentations. We quantify each variable's impact using a novel SHAP analysis. Our results emphasize the danger of generalizing findings from toy datasets to broader contexts.
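As a rough illustration of the probing protocol (a sketch, not code from the paper), the snippet below fits a single linear probe on embeddings from a frozen, pre-trained backbone and reports its accuracy on a target dataset; the study repeats this across layers, architectures, datasets, resolutions, and augmentations to obtain its 10,584 probes. All function names and hyperparameters here are placeholders.

```python
import torch
import torch.nn as nn

def extract_embeddings(backbone, loader, device="cpu"):
    """Run a frozen backbone over a DataLoader and collect (embedding, label) pairs."""
    backbone.eval().to(device)
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            feats.append(backbone(x.to(device)).flatten(1).cpu())
            labels.append(y)
    return torch.cat(feats), torch.cat(labels)

def linear_probe_accuracy(backbone, train_loader, test_loader, num_classes,
                          epochs=10, lr=1e-2, batch=256, device="cpu"):
    """Fit a linear classifier on frozen embeddings and return test accuracy."""
    x_tr, y_tr = extract_embeddings(backbone, train_loader, device)
    probe = nn.Linear(x_tr.shape[1], num_classes).to(device)
    opt = torch.optim.SGD(probe.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        perm = torch.randperm(len(x_tr))
        for i in range(0, len(x_tr), batch):
            idx = perm[i:i + batch]
            opt.zero_grad()
            loss_fn(probe(x_tr[idx].to(device)), y_tr[idx].to(device)).backward()
            opt.step()
    x_te, y_te = extract_embeddings(backbone, test_loader, device)
    preds = probe(x_te.to(device)).argmax(dim=1).cpu()
    return (preds == y_te).float().mean().item()
```

Running this with a target dataset that matches the pre-training distribution versus one that does not gives an ID-vs-OOD pair of numbers (with the backbone truncated at different depths to probe intermediate layers); the tunnel-effect question is how that gap changes as the probed layer moves deeper into the network.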

SoSE 2024: Human Emotion Estimation through Physiological Data with Neural Networks

Jhair Gallardo, Celal Savur, Ferat Sahin, Christopher Kanan


Effective collaboration between humans and robots requires that the robotic partner can perceive, learn from, and respond to the human's psycho-physiological conditions. This involves understanding the emotional states of the human collaborator. To explore this, we collected subjective assessments (specifically, feelings of surprise, anxiety, boredom, calmness, and comfort) as well as physiological signals during a dynamic human-robot interaction experiment. The experiment manipulated the robot's behavior to observe these responses. We gathered data from this non-stationary setting and trained an artificial neural network model to predict human emotion from physiological data. We found that training a general model on several subjects' data and then fine-tuning it on the subject of interest performs better than training a model using only that subject's data.
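As an illustrative sketch of the train-general-then-fine-tune idea (architecture, feature dimensionality, loss, and training schedule are all assumptions, not the paper's configuration), the snippet below fits a small network on physiological features pooled across subjects and then continues training on the subject of interest with a smaller learning rate.

```python
import torch
import torch.nn as nn

def make_regressor(n_features: int, n_emotions: int) -> nn.Module:
    """Small MLP mapping physiological features to emotion ratings (assumed regression setup)."""
    return nn.Sequential(
        nn.Linear(n_features, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, n_emotions),
    )

def fit(model, x, y, epochs, lr):
    """Full-batch training loop; x is a float feature tensor, y the target ratings."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

def general_then_finetune(x_pool, y_pool, x_subj, y_subj, n_emotions=5, epochs=200):
    """Train on data pooled across subjects, then fine-tune on the target subject."""
    model = make_regressor(x_pool.shape[1], n_emotions)
    fit(model, x_pool, y_pool, epochs=epochs, lr=1e-3)        # general model
    fit(model, x_subj, y_subj, epochs=epochs // 2, lr=1e-4)   # subject-specific fine-tuning
    return model
```

The five outputs correspond to the five reported feelings (surprise, anxiety, boredom, calmness, and comfort); the baseline in the comparison is the same network trained from scratch on the target subject's data alone.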

CoLLAs 2024: GRASP: A Rehearsal Policy for Efficient Online Continual Learning

Md Yousuf Harun, Jhair Gallardo, Christopher Kanan


Continual learning (CL) in deep neural networks (DNNs) involves incrementally accumulating knowledge in a DNN from a growing data stream. A major challenge in CL is that non-stationary data streams cause catastrophic forgetting of previously learned abilities. Rehearsal, which stores past observations in a buffer and mixes them with new observations during learning, is a popular and effective way to mitigate this problem. This leads to a question: which stored samples should be selected for rehearsal? Choosing samples that are best for learning, rather than simply selecting them at random, could lead to significantly faster learning. For class-incremental learning, prior work has shown that a simple class-balanced random selection policy outperforms more sophisticated methods. Here, we revisit this question by exploring a new sample selection policy called GRASP. GRASP selects the most prototypical (class-representative) samples first and then gradually selects less prototypical (harder) examples to update the DNN. GRASP has little additional compute or memory overhead compared to uniform selection, enabling it to scale to large datasets. We evaluate GRASP and other policies by conducting CL experiments on the large-scale ImageNet-1K and Places-LT image classification datasets. GRASP outperforms all other rehearsal policies. Beyond vision, we also demonstrate that GRASP is effective for CL on five text classification datasets.
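As a simplified sketch of this ordering (a paraphrase of the abstract, not the authors' implementation), the snippet below ranks buffered samples by distance to their class mean in feature space and interleaves classes so that rehearsal visits class-representative samples first, harder ones later, and stays class-balanced throughout.

```python
import numpy as np

def prototypical_order(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Return buffer indices ordered from most to least prototypical, round-robin over classes."""
    per_class = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        center = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - center, axis=1)
        per_class.append(idx[np.argsort(dists)])   # closest to the class mean first

    order = []
    longest = max(len(q) for q in per_class)
    for rank in range(longest):                    # 1st pick of each class, then 2nd, ...
        for q in per_class:
            if rank < len(q):
                order.append(q[rank])
    return np.asarray(order)

# Rehearsal minibatches are then drawn by walking this ordering, so early updates
# see representative samples and later updates see progressively harder ones.
```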

TMLR 2023: SIESTA: Efficient Online Continual Learning with Sleep

Md Yousuf Harun*, Jhair Gallardo*, Tyler L. Hayes, Ronald Kemker, Christopher Kanan

* denotes equal contribution.


In supervised continual learning, a deep neural network (DNN) is updated with an ever-growing data stream. Unlike the offline setting, where data is shuffled, we cannot make any distributional assumptions about the data stream. Ideally, only one pass through the dataset is needed for computational efficiency. However, existing methods are inadequate and make many assumptions that cannot be made for real-world applications, while simultaneously failing to improve computational efficiency. In this paper, we propose SIESTA, a novel online continual learning method based on a wake/sleep training framework that is well aligned with the needs of on-device learning. The major goal of SIESTA is to advance compute-efficient continual learning so that DNNs can be updated using far less time and energy. The principal innovations of SIESTA are: 1) rapid online updates using a rehearsal-free, backpropagation-free, and data-driven network update rule during its wake phase, and 2) expedited memory consolidation using a compute-restricted rehearsal policy during its sleep phase. For memory efficiency, SIESTA adapts latent rehearsal using memory indexing from REMIND. Compared to REMIND and prior art, SIESTA is far more computationally efficient, enabling continual learning on ImageNet-1K in under 2.4 hours on a single GPU; moreover, in the augmentation-free setting, it matches the performance of the offline learner, a milestone critical to driving adoption of continual learning in real-world applications.
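The following is only a cartoon of the wake/sleep split, assuming a frozen feature extractor and a plain linear output layer; it is not SIESTA's actual update rule and it omits the REMIND-style latent compression and memory indexing.

```python
import torch
import torch.nn as nn

class WakeSleepLearner:
    """Toy wake/sleep learner: backprop-free wake updates, budgeted rehearsal during sleep."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        self.backbone = backbone.eval()           # frozen feature extractor (assumption)
        self.head = nn.Linear(feat_dim, num_classes)
        self.means = torch.zeros(num_classes, feat_dim)
        self.counts = torch.zeros(num_classes)
        self.buffer = []                          # stored (latent, label) pairs

    @torch.no_grad()
    def wake_step(self, x, y: int):
        """Online, backpropagation-free update from a single labeled example."""
        z = self.backbone(x.unsqueeze(0)).flatten(1).squeeze(0)
        self.counts[y] += 1
        self.means[y] += (z - self.means[y]) / self.counts[y]   # running class mean
        self.head.weight.data[y] = self.means[y]                # class-prototype row
        self.buffer.append((z, y))

    def sleep(self, steps=500, batch_size=64, lr=1e-3):
        """Compute-restricted consolidation: a fixed budget of SGD steps over buffered latents."""
        opt = torch.optim.SGD(self.head.parameters(), lr=lr, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            idx = torch.randint(len(self.buffer), (batch_size,)).tolist()
            z = torch.stack([self.buffer[i][0] for i in idx])
            y = torch.tensor([self.buffer[i][1] for i in idx])
            opt.zero_grad()
            loss_fn(self.head(z), y).backward()
            opt.step()
```

The point of the split is that the expensive, gradient-based work is confined to sleep, where it can be budgeted, while wake-time learning stays cheap enough to run online.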

CVPRW 2023: How Efficient Are Today's Continual Learning Algorithms?

Md Yousuf Harun, Jhair Gallardo, Tyler L. Hayes, Christopher Kanan


Supervised continual learning involves updating a deep neural network (DNN) from an ever-growing stream of labeled data. While most work has focused on overcoming catastrophic forgetting, one of the major motivations behind continual learning is the ability to efficiently update a network with new information rather than retraining from scratch on the training dataset as it grows over time. Although recent continual learning methods largely solve the catastrophic forgetting problem, little attention has been paid to the efficiency of these algorithms. Here, we study recent methods for incremental class learning and illustrate that many are highly inefficient in terms of compute, memory, and storage. Some methods even require more compute than training from scratch! We argue that for continual learning to have real-world applicability, the research community cannot ignore the resources used by these algorithms. There is more to continual learning than mitigating catastrophic forgetting.

BMVC 2021: Self-Supervised Training Enhances Online Continual Learning

Jhair Gallardo, Tyler L. Hayes, Christopher Kanan


In continual learning, a system must incrementally learn from a non-stationary data stream without catastrophic forgetting. Recently, multiple methods have been devised for incrementally learning classes on large-scale image classification tasks, such as ImageNet. State-of-the-art continual learning methods use an initial supervised pre-training phase, in which the first 10% - 50% of the classes in a dataset are used to learn representations in an offline manner before continual learning of new classes begins. We hypothesize that self-supervised pre-training could yield features that generalize better than supervised learning, especially when the number of samples used for pre-training is small. We test this hypothesis using the self-supervised MoCo-V2 and SwAV algorithms. On ImageNet, we find that both outperform supervised pre-training considerably for online continual learning, and the gains are larger when fewer samples are available. Our findings are consistent across three continual learning algorithms. Our best system achieves a 14.95% relative increase in top-1 accuracy on class incremental ImageNet over the prior state of the art for online continual learning.
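A minimal sketch of the comparison is below. The online learner here is a placeholder (a linear head updated with one SGD step per streamed example), not one of the three continual learning algorithms evaluated in the paper; the only change between conditions is whether the frozen backbone was pre-trained with self-supervision (e.g. MoCo-V2 or SwAV) or with supervised labels on the base classes.

```python
import torch
import torch.nn as nn

class OnlineLinearLearner:
    """Linear classifier over frozen features, updated with one gradient step per sample."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int, lr: float = 0.01):
        self.backbone = backbone.eval()           # frozen, pre-trained feature extractor
        self.head = nn.Linear(feat_dim, num_classes)
        self.opt = torch.optim.SGD(self.head.parameters(), lr=lr)
        self.loss_fn = nn.CrossEntropyLoss()

    def observe(self, x, y: int):
        """Single online update from one labeled image in the class-incremental stream."""
        with torch.no_grad():
            z = self.backbone(x.unsqueeze(0)).flatten(1)
        self.opt.zero_grad()
        self.loss_fn(self.head(z), torch.tensor([y])).backward()
        self.opt.step()

    @torch.no_grad()
    def predict(self, x) -> int:
        z = self.backbone(x.unsqueeze(0)).flatten(1)
        return self.head(z).argmax(dim=1).item()

# Swapping the backbone is the only difference between conditions, e.g.:
#   ssl_learner = OnlineLinearLearner(moco_v2_backbone, feat_dim=2048, num_classes=1000)
#   sup_learner = OnlineLinearLearner(supervised_backbone, feat_dim=2048, num_classes=1000)
# (moco_v2_backbone / supervised_backbone are hypothetical names for the two pre-trained encoders.)
```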

Publications

Peer-Reviewed Papers