DeepVis Toolbox 


This is the code required to run the Deep Visualization Toolbox, as well as to generate the neuron-by-neuron visualizations using regularized optimization. The toolbox and methods are described informally on the project website and more formally in the following paper:

Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding Neural Networks Through Deep Visualization. Presented at the Deep Learning Workshop, International Conference on Machine Learning (ICML), 2015.

Features

Forward/backward prop: Images can be run forward through the network to visualize activations, and the derivative of any unit with respect to any other unit can be computed via backprop. In addition to standard backprop, the deconv method of Zeiler and Fergus (2014) is supported as an alternative way of flowing information backward through the network.
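
As a rough sketch of how the forward and backward passes look in pycaffe (not the toolbox's own code): the model file names, the random stand-in image, and the layer/blob name 'conv5' and unit index are assumptions chosen for illustration.

```python
import numpy as np
import caffe

# Load a trained network in test mode (file paths are placeholders).
net = caffe.Net('deploy.prototxt', 'caffenet.caffemodel', caffe.TEST)

# Forward pass: feed an image and read out the activations of any layer.
# A real, preprocessed image would replace this random stand-in.
net.blobs['data'].data[...] = np.random.randn(*net.blobs['data'].data.shape)
net.forward()
conv5_activations = net.blobs['conv5'].data.copy()

# Backward pass: derivative of one conv5 unit with respect to the input.
net.blobs['conv5'].diff[...] = 0
net.blobs['conv5'].diff[0, 42] = 1     # arbitrary unit: channel 42 of conv5
net.backward(start='conv5')            # propagate from that layer down to the input
gradient_at_input = net.blobs['data'].diff.copy()
```

The deconv variant replaces how gradients pass through ReLU layers during this backward step; the call pattern above only illustrates plain backprop.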

Per-unit visualizations: Three types of per-unit visualizations can be shown for each unit: the maximally activating input images, deconv of those max images, and images synthesized by activation maximization via regularized optimization. These visualizations must be computed offline, outside the toolbox, and saved as JPEG files that the toolbox loads at runtime.
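
A minimal sketch of the regularized-optimization idea, assuming a pycaffe CaffeNet: gradient ascent on the input image toward a chosen unit's activation, regularized here by L2 decay and an occasional Gaussian blur. The layer name, unit index, and hyperparameter values are illustrative, not the toolbox's defaults, and the full method in the paper also clips pixels with small norm or small contribution, which is omitted below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
import caffe

net = caffe.Net('deploy.prototxt', 'caffenet.caffemodel', caffe.TEST)

layer, unit = 'fc8', 130                       # illustrative layer and unit
step_size, n_steps = 100.0, 200
decay, blur_every, blur_sigma = 0.01, 4, 0.5

# Start from a random image and repeatedly push it toward higher activation.
x = np.random.randn(*net.blobs['data'].data.shape).astype(np.float32)
for step in range(n_steps):
    # Gradient of the chosen unit's activation with respect to the input.
    net.blobs['data'].data[...] = x
    net.forward()
    net.blobs[layer].diff[...] = 0
    net.blobs[layer].diff[0, unit] = 1
    net.backward(start=layer)
    grad = net.blobs['data'].diff[0]

    # Ascent step plus two simple regularizers: L2 decay and periodic
    # Gaussian blur, both of which discourage high-frequency noise.
    x[0] += step_size * grad
    x[0] *= (1.0 - decay)
    if step % blur_every == 0:
        x[0] = gaussian_filter(x[0], sigma=(0, blur_sigma, blur_sigma))
```

The resulting image (after undoing any mean subtraction) is what would be saved as a JPEG and loaded by the toolbox alongside the max-image and deconv visualizations.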

Official website

Tutorial and documentation
