Evan Shelhamer

I am a research scientist at DeepMind in London. I earned my PhD in computer science from UC Berkeley in 2019, where I was advised by Trevor Darrell as part of BAIR. Previously, I spent a wonderful year in Cambridge, MA, as a research scientist at Adobe.

I believe in DIY science and open tooling for research and engineering.
I was the lead developer of the Caffe deep learning framework from version 0.1 to 1.0, and I still engage in open sourcery when I can.

Before Berkeley, I earned dual degrees in computer science (artificial intelligence concentration) and psychology at UMass Amherst, advised by Erik Learned-Miller.

I take my coffee black.

shelhamer@cs.berkeley.edu  /  Google Scholar  /  GitHub  /  CV

Research

I'm interested in computer vision and machine learning, in particular reconciling visual structure with end-to-end learning, and making inference dynamic through adaptive model complexity and computation.

See my scholar page for a full list of projects.

Selected Projects

Fully Convolutional Networks for Semantic Segmentation
Evan Shelhamer*, Jon Long*, Trevor Darrell   (*equal contribution)
PAMI, 2017
CVPR, 2015   (Best Paper Honorable Mention)
PAMI arxiv / CVPR arxiv / code & models / slides / bib

Fully convolutional networks are machines for image-to-image learning and inference.
These local models alone, trained end-to-end and pixels-to-pixels, improved semantic segmentation accuracy by 30% relative and efficiency by 300x on PASCAL VOC.
Skip connections across layers help resolve what and where.
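
The core recipe is simple to sketch. Here is a minimal, illustrative PyTorch version (not the released Caffe reference models): a small convolutional backbone scores every pixel, a transposed convolution upsamples the coarse scores, and a skip connection fuses them with finer-resolution scores. The TinyFCN module and its layer sizes are made up for this sketch.

    import torch
    import torch.nn as nn

    class TinyFCN(nn.Module):
        """Toy fully convolutional net: per-pixel scores with one skip fusion."""
        def __init__(self, num_classes=21):
            super().__init__()
            self.stage1 = nn.Sequential(  # stride-2 features: finer "where"
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
            self.stage2 = nn.Sequential(  # stride-4 features: coarser "what"
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
            self.score1 = nn.Conv2d(16, num_classes, 1)  # skip scores at stride 2
            self.score2 = nn.Conv2d(32, num_classes, 1)  # coarse scores at stride 4
            self.up2x = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
            self.up_out = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)

        def forward(self, x):
            f1 = self.stage1(x)
            f2 = self.stage2(f1)
            fused = self.up2x(self.score2(f2)) + self.score1(f1)  # skip fusion
            return self.up_out(fused)  # per-pixel class scores at input resolution

    scores = TinyFCN()(torch.randn(1, 3, 64, 64))  # -> (1, 21, 64, 64)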

Caffe Deep Learning Framework
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, T. Darrell, and our community contributors!
BVLC + BAIR, 2013–2017
ACM MM, 2014   (Winner of the Open Source Software Competition)
project / code / ACM MM'14 arxiv / slides / bib

Caffe is a deep learning framework made with expression, speed, and modularity in mind. The shift to deep learning was in part a sea change carried by the wave of open science and toolkits, including Caffe and its Model Zoo.

Tent: Fully Test-time Adaptation by Entropy Minimization
Dequan Wang*, Evan Shelhamer*, Shaoteng Liu, Bruno Olshausen, Trevor Darrell
ICLR, 2021   (Spotlight)
arxiv / slides / poster / code / bib

Tent ⛺️ helps a model adapt itself to changing conditions ☀️ 🌧 ❄️ by updating on new and different data during testing without altering training or requiring more supervision. Tent adapts by test entropy minimization: optimizing the model for confidence as measured by the entropy of its predictions.
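
A rough PyTorch sketch of the objective, assuming a network with batch normalization: collect the normalization affine parameters and update only those to lower the entropy of the softmax predictions on each test batch. The function names, optimizer, and learning rate here are placeholders, not the reference implementation.

    import torch
    import torch.nn as nn

    def collect_norm_params(model):
        """Switch norm layers to batch statistics and gather their affine params."""
        params = []
        for m in model.modules():
            if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
                m.train()  # normalize by test-batch statistics
                if m.affine:
                    params += [m.weight, m.bias]  # adapt only scale and shift
        return params

    def tent_step(model, x, optimizer):
        """One adaptation step: minimize prediction entropy on an unlabeled batch."""
        probs = model(x).softmax(dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        optimizer.zero_grad()
        entropy.backward()  # lower entropy = more confident predictions
        optimizer.step()
        return entropy.item()

    # usage: adapt on each test batch as it arrives, with no labels and no source data
    # optimizer = torch.optim.SGD(collect_norm_params(model), lr=1e-3)
    # for x in test_loader:
    #     tent_step(model, x, optimizer)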

Blurring the Line between Structure and Learning to Optimize and Adapt Receptive Fields
Evan Shelhamer, Dequan Wang, Trevor Darrell
ICLRW, 2019
arxiv / slides / bib

Composing structured Gaussian filters with free-form filters, and learning both, optimizes over filter size and shape alongside content. In effect this controls the degree of locality:
changes that are simple parameter updates in our networks would require architectural changes in standard networks. Dynamic inference adapts the receptive field size to cope with scale variation.
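
A rough PyTorch sketch of one instance of the idea, restricted to an isotropic Gaussian with a learned scale: build the blur kernel from a learnable log-sigma, smooth the features with it, then apply a small free-form convolution, so gradients reach both the shape (sigma) and the content (free-form weights). The module and sizes are illustrative; the paper also covers richer covariances and dynamic inference.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GaussianComposedConv(nn.Module):
        """Compose a structured Gaussian filter (learned scale) with a free-form filter."""
        def __init__(self, in_channels, out_channels, ksize=7):
            super().__init__()
            self.log_sigma = nn.Parameter(torch.zeros(1))  # learned blur scale
            self.free = nn.Conv2d(in_channels, out_channels, 3, padding=1)  # free-form content
            r = ksize // 2
            coords = torch.arange(-r, r + 1, dtype=torch.float32)
            yy, xx = torch.meshgrid(coords, coords, indexing="ij")
            self.register_buffer("r2", xx ** 2 + yy ** 2)  # squared radius grid
            self.in_channels, self.ksize = in_channels, ksize

        def forward(self, x):
            sigma = self.log_sigma.exp()
            kernel = torch.exp(-self.r2 / (2 * sigma ** 2))  # Gaussian envelope
            kernel = kernel / kernel.sum()
            weight = kernel.expand(self.in_channels, 1, self.ksize, self.ksize).contiguous()
            blurred = F.conv2d(x, weight, padding=self.ksize // 2, groups=self.in_channels)
            return self.free(blurred)  # structured smoothing, then free-form filtering

    out = GaussianComposedConv(16, 32)(torch.randn(1, 16, 32, 32))  # -> (1, 32, 32, 32)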

More Projects

Back to the Source: Diffusion-Driven Adaptation to Test-Time Corruption
Jin Gao*, Jialing Zhang*, Xihui Liu, Trevor Darrell, Evan Shelhamer†, Dequan Wang†
(* equal contribution, † equal advising)
CVPR, 2023
arxiv / code / bib

Most methods for test-time adaptation update the source model by (re-)training on each target domain. We update the target data instead, and project all test inputs toward the source domain with a generative diffusion model. Our input updates help on small batches, on data in dependent orders, and on data with multiple corruptions.

Seasoning Model Soups for Robustness to Adversarial and Natural Distribution Shifts
Francesco Croce, Sylvestre-Alvise Rebuffi, Evan Shelhamer, Sven Gowal
CVPR, 2023
arxiv / bib

Models trained for different types of robustness can be merged by taking linear combinations of their parameters, so that the mix sets the trade-off between adversarial and natural robustness at test time.
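
A minimal sketch of the merging step, assuming models that share one architecture: the soup is a weighted sum of parameters. The checkpoints and mixing weights below are placeholders; the paper's contribution is how to choose ("season") the mix for adversarial and natural shifts.

    import torch

    def soup(state_dicts, weights):
        """Mix parameters as sum_i w_i * theta_i across models of one architecture."""
        mixed = {}
        for name, ref in state_dicts[0].items():
            if ref.is_floating_point():
                mixed[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
            else:
                mixed[name] = ref.clone()  # copy integer buffers (e.g. counters) as-is
        return mixed

    # usage (hypothetical checkpoints): blend a robust model with a nominal one
    # model.load_state_dict(soup([robust_sd, nominal_sd], weights=[0.7, 0.3]))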

Evaluating the Adversarial Robustness of Adaptive Test-time Defenses
Francesco Croce*, Sven Gowal*, Thomas Brunner*, Evan Shelhamer*, Matthias Hein, Taylan Cemgil
ICML, 2022
arxiv / slides / bib

Adaptive test-time defenses alter inference by iteratively updating the input x or parameters 𝜃 of the model to improve robustness to adversarial attack. Or do they? Our careful case study shows that more updates are needed to improve on the robustness of adversarial training.

Infinite Mixture Prototypes for Few-Shot Learning
Kelsey R. Allen, Evan Shelhamer*, Hanul Shin*, Joshua B. Tenenbaum
ICML, 2019
arxiv / bib

Infinite mixture prototypes adaptively adjust model capacity by representing classes as sets of clusters and inferring their number. This handles both simple and complex few-shot tasks, and improves alphabet recognition accuracy by 25% absolute over uni-modal prototypes.
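
A simplified sketch of the multi-modal prototype idea, substituting an assumed DP-means-like rule (a distance threshold lam) for the paper's infinite mixture inference: each class keeps a set of cluster means, a new cluster is created when a support embedding is far from all of its class's clusters, and queries take the label of the nearest cluster.

    import torch

    def build_prototypes(embeddings, labels, lam=1.0):
        """Cluster support embeddings per class, adding clusters as needed."""
        protos, proto_labels, counts = [], [], []
        for z, y in zip(embeddings, labels):
            same = [i for i, l in enumerate(proto_labels) if l == int(y)]
            dists = [torch.dist(z, protos[i]).item() for i in same]
            if not same or min(dists) > lam:
                protos.append(z.clone()); proto_labels.append(int(y)); counts.append(1)  # new cluster
            else:
                i = same[dists.index(min(dists))]
                counts[i] += 1
                protos[i] += (z - protos[i]) / counts[i]  # running mean of the cluster
        return torch.stack(protos), torch.tensor(proto_labels)

    def classify(queries, protos, proto_labels):
        d = torch.cdist(queries, protos)       # distances to every cluster mean
        return proto_labels[d.argmin(dim=1)]   # label of the nearest cluster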

Few-shot Segmentation Propagation with Guided Networks
Kate Rakelly*, Evan Shelhamer*, Trevor Darrell, Alexei A. Efros, Sergey Levine
arXiv, 2018
arxiv / code / bib

Extracting a latent task representation from local supervision allows for non-local propagation within and across images with quick updates for real-time interaction.

(Note: this subsumes our ICLRW'18 paper on conditional networks).

Deep Layer Aggregation
Fisher Yu, Dequan Wang, Evan Shelhamer, Trevor Darrell
CVPR, 2018   (Oral)
arxiv / code / bib

Deepening aggregation, the iterative and hierarchical merging of features across layers, improves recognition and resolution.

Loss Is Its Own Reward: Self-Supervision for Reinforcement Learning
Evan Shelhamer, Parsa Mahmoudieh, Max Argus, Trevor Darrell
ICLRW, 2017
arxiv / slides / bib

Loss is where you find it. With self-supervision for representation learning, experience without reward need not be so unrewarding for reinforcement learning.

Clockwork Convnets for Video Semantic Segmentation
Evan Shelhamer*, Kate Rakelly*, Judy Hoffman*, Trevor Darrell
ECCVW, 2016
arxiv / code / slides / bib

Adaptively computing layers according to their rate of change improves the efficiency of video processing without sacrificing accuracy.
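
A minimal PyTorch sketch of the scheduling idea: shallow features run on every frame, while deeper features are recomputed only when the shallow features have changed enough, and are otherwise reused from a cache. The stages and threshold are placeholders, not the paper's clock rates.

    import torch
    import torch.nn as nn

    class ClockworkNet(nn.Module):
        """Run shallow layers every frame; fire the deep 'clock' only on change."""
        def __init__(self, shallow, deep, head, threshold=0.1):
            super().__init__()
            self.shallow, self.deep, self.head = shallow, deep, head
            self.threshold = threshold
            self.prev_shallow, self.cached_deep = None, None

        def forward(self, frame):
            s = self.shallow(frame)  # fast clock: recomputed for every frame
            changed = (self.prev_shallow is None or
                       (s - self.prev_shallow).abs().mean() > self.threshold)
            if changed:              # slow clock: recompute deep features
                self.cached_deep = self.deep(s)
                self.prev_shallow = s
            return self.head(self.cached_deep)  # otherwise reuse cached deep features

    # usage: net = ClockworkNet(shallow, deep, head); outputs = [net(f) for f in frames]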

Service

Area Chair: CVPR (2021, 2023), ICCV (2021), NeurIPS (2023).
Reviewer: CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR, PAMI, JMLR, TMLR.
Tutorial Organizer: DIY Deep Learning with Caffe at CVPR 2015 and ECCV 2014.

Teaching

Graduate Student Instructor, CS188 Fall 2013

Graduate Student Instructor, DIY Deep Learning Fall 2014