Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity

NeurIPS 2023 (Oral Presentation)

1Carnegie Mellon University, 2Northwestern University

Shape bias of our sparse CNNs versus standard CNNs and SOTA transformer-based networks, compared with the shape bias of human subjects, evaluated on the cue-conflict dataset. The red dotted line shows the frontier of the transformer-based networks with the best shape bias. The green dotted line shows that sparse CNNs push the shape-bias frontier toward humans.

Abstract

Current deep-learning models for object recognition are known to be heavily biased toward texture. In contrast, human visual systems are known to be biased toward shape and structure. What could be the design principles in human visual systems that led to this difference? How could we introduce more shape bias into deep learning models? In this paper, we report that sparse coding, a ubiquitous principle in the brain, can in itself introduce shape bias into a network. We found that enforcing the sparse coding constraint using a non-differentiable Top-K operation can lead to the emergence of structural encoding in the neurons of convolutional neural networks, resulting in a smooth decomposition of objects into parts and subparts and endowing the networks with shape bias. We demonstrated this emergence of shape bias and its functional benefits for different network architectures on various datasets. For convolutional neural networks trained for object recognition, the shape bias leads to greater robustness against distracting changes in style and pattern. For generative adversarial networks used for image synthesis, the emergent shape bias leads to more coherent and decomposable structures in the synthesized images. Ablation studies suggest that sparse codes tend to encode structure, whereas more distributed codes tend to favor texture.

Video

Visualizing the information encoded by Top-K & Non-Top-K neurons

Visualizing the Top-K/Non-Top-K neurons via Texture Synthesis


Visualizing the Top-K/Non-Top-K neurons via reconstruction

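The split into Top-K and non-Top-K responses that these visualizations probe can be sketched as follows. This is a hypothetical helper, not the authors' code: in particular, taking K over all channel and spatial positions of a single feature map is an assumption about where K applies.

```python
import numpy as np

def split_topk(feature_map, k):
    """Split a (C, H, W) feature map into its Top-K and non-Top-K parts.

    Hypothetical helper for illustration: the axis over which the K
    largest activations are selected (here, all positions flattened
    together) is an assumption, not necessarily the authors' choice.
    """
    flat = feature_map.ravel()
    idx = np.argsort(flat)[-k:]            # indices of the k largest activations
    mask = np.zeros_like(flat, dtype=bool)
    mask[idx] = True
    topk = np.where(mask, flat, 0.0).reshape(feature_map.shape)
    rest = np.where(mask, 0.0, flat).reshape(feature_map.shape)
    return topk, rest
```

Feeding `topk` and `rest` separately into a reconstruction or texture-synthesis procedure then shows what each population of neurons encodes; the two parts sum back to the original feature map.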

CNN inference with Top-K to improve shape bias

Classification results on the Shape Bias Benchmark. This plot shows the shape bias of sparse CNNs, standard CNNs, and humans on each class of the texture-shape cue-conflict dataset. It also shows the shape bias at different sparsity levels; e.g., 5% means that only the top 5% of activation values are passed to the next layer. Vertical lines indicate average values.

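The Top-K inference step described above can be sketched as a simple activation filter. This is a minimal illustration under stated assumptions, not the authors' implementation: `topk_activation` is a hypothetical name, and K is taken per sample over all flattened activations.

```python
import numpy as np

def topk_activation(x, sparsity=0.05):
    """Zero out all but the largest `sparsity` fraction of activations
    in each sample, so that e.g. sparsity=0.05 passes only the top 5%
    of activation values to the next layer.

    Minimal sketch (assumed implementation): K is computed per sample
    over the flattened (C, H, W) activations.
    """
    flat = x.reshape(x.shape[0], -1)
    k = max(1, int(round(sparsity * flat.shape[1])))
    # The k-th largest value in each row serves as the pass-through threshold.
    thresh = np.partition(flat, -k, axis=1)[:, -k][:, None]
    return np.where(flat >= thresh, flat, 0.0).reshape(x.shape)
```

Because the operation only masks activations, it can be inserted after existing layers of a pretrained CNN at inference time, with the sparsity level swept as in the plot above.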

Shape-Biased Few-Shot Image Synthesis


BibTeX

@inproceedings{li2023emergence,
  author    = {Li, Tianqin and Wen, Ziqi and Li, Yangfan and Lee, Tai Sing},
  title     = {Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2023},
}