Machine learning with deep neural networks (DNNs) has become ubiquitous in data-driven predictive modeling. However, the complex architecture of a DNN often obscures what it has learned from the data. This information is crucial for validating such models when their predictions can affect human life, for example, in biological and clinical applications. We design SensX, a model-agnostic explainable AI (XAI) framework that explains what a trained DNN has learned. We introduce the notion of justifiable perturbations to conduct systematic global sensitivity analysis. Benchmarks on synthetic data sets show that SensX outperformed current state-of-the-art XAI methods in accuracy (up to 50% higher) and computation time (up to 158 times faster), with higher consistency in all cases. Moreover, only SensX scaled to explain vision transformer (ViT) models defined for input images with more than 150,000 features. SensX validated the ViT models by showing that the features they learn as important for different facial attributes are intuitively accurate. Further, SensX revealed that there may be biases inherent to the model architecture, an observation that is possible only when the model is explained at the full resolution of the input image. Finally, we use SensX to explain a DNN trained to annotate biological cell types using single-cell RNA-seq data sets with more than a million cells and more than 56,000 genes measured per cell. SensX determines the distinct sets of genes that the DNN has learned to be important for different cell types.
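
As a conceptual illustration only, and not the SensX algorithm itself, the following minimal Python sketch shows the general idea behind perturbation-based global sensitivity analysis: each input feature is scored by how strongly random perturbations of that feature alone change the model's output. The function name, the Gaussian perturbation scale, and the sample count are hypothetical placeholders, not parameters of SensX.

```python
import numpy as np

def perturbation_sensitivity(model, x, n_samples=256, scale=0.1, seed=0):
    """Score each feature of input `x` by the mean absolute change in the
    scalar output of `model` when that feature alone is randomly perturbed.

    Generic Monte Carlo sketch for illustration; `scale` and `n_samples`
    are hypothetical placeholders, not SensX parameters.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    baseline = model(x)
    scores = np.empty(x.size)
    for j in range(x.size):
        deltas = rng.normal(0.0, scale, size=n_samples)
        changes = []
        for d in deltas:
            x_pert = x.copy()
            x_pert[j] += d                     # perturb feature j only
            changes.append(abs(model(x_pert) - baseline))
        scores[j] = np.mean(changes)
    return scores                              # higher score => more sensitive feature

# Example: a toy model that depends on features 0 and 2 but not on feature 1
toy_model = lambda v: 3.0 * v[0] + 0.0 * v[1] + v[2] ** 2
print(perturbation_sensitivity(toy_model, np.array([1.0, 1.0, 1.0])))
```

In this toy example the score for feature 1 is near zero while features 0 and 2 receive nonzero scores, which is the kind of feature-importance ranking that perturbation-based sensitivity analysis produces; SensX builds on this idea by restricting attention to justifiable perturbations, as described in the paper.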