The increasing use of convolutional neural networks for face recognition in science, governance, and broader society has created an acute need for methods that can show how these 'black box' decisions are made. We applied the decompositional pixel-wise attribution method of layer-wise relevance propagation (LRP) to resolve the decisions of several classes of VGG-16 models trained for face recognition. We find that ImageNet- and VGGFace-trained models sample face information differently, even as they achieve comparably high classification performance. We also evaluated model decision weighting against human measures of similarity, providing a novel framework for interpreting face recognition decisions across humans and machines.
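For context, LRP redistributes a network's output score backward through the layers so that each input pixel receives a share of the decision. The sketch below illustrates the widely used epsilon propagation rule for a single dense layer; the function name, variable names, and epsilon value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lrp_epsilon_dense(a, W, R_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer (illustrative sketch).

    R_i = a_i * sum_j W_ij * R_j / (z_j + eps * sign(z_j)),
    where z_j = sum_i a_i * W_ij are the layer's pre-activations.
    """
    z = a @ W                                  # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilize near-zero denominators
    s = R_out / z                              # relevance per unit of pre-activation
    c = s @ W.T                                # back-project onto the inputs
    return a * c                               # input relevances

# Toy usage: relevance is approximately conserved across the layer.
rng = np.random.default_rng(0)
a = rng.random(4)                  # input activations
W = rng.standard_normal((4, 3))    # layer weights
R_out = rng.random(3)              # relevance arriving at the layer output
R_in = lrp_epsilon_dense(a, W, R_out)
print(R_in.sum(), R_out.sum())     # sums match up to the eps stabilizer
```

Applied layer by layer from the classifier output back to the image, this yields the pixel-wise relevance maps the abstract refers to.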