Hearing loss is a pervasive global health challenge with profound impacts on communication, cognitive function, and quality of life. Recent studies have established age-related hearing loss as a significant risk factor for dementia, underscoring the importance of hearing loss research. Auditory brainstem responses (ABRs), electrophysiological recordings of synchronized neural activity from the auditory nerve and brainstem, serve as in vivo readouts of sensory hair cell function, synaptic integrity, hearing sensitivity, and other key features of auditory pathway function, making them highly valuable for both basic neuroscience research and clinical diagnostics. Despite their utility, traditional ABR analyses rely heavily on subjective manual interpretation, leading to considerable variability and limiting reproducibility across studies. Here, we introduce Auditory Brainstem Response Analyzer (ABRA), a novel open-source graphical user interface powered by deep learning that automates and standardizes ABR waveform analysis. ABRA employs convolutional neural networks trained on diverse datasets collected across multiple experimental settings, achieving rapid and unbiased extraction of key ABR metrics, including peak amplitude, latency, and auditory threshold estimates. We demonstrate that ABRA's deep learning models perform comparably to expert human annotators while dramatically reducing analysis time and enhancing reproducibility across datasets from different laboratories. By bridging hearing research, sensory neuroscience, and advanced computational techniques, ABRA facilitates broader interdisciplinary insights into auditory function. An online version of the tool is available for use at no cost at https://abra.ucsd.edu.