Image-based machine learning tools have emerged as powerful resources for analyzing medical images, with deep learning-based semantic segmentation commonly used to enable spatial quantification of structures in images. However, customization and training of segmentation algorithms require advanced programming skills and intricate workflows, limiting their accessibility for many investigators. Here, we present a protocol and software for automatic segmentation of medical images using the CODAvision algorithm, guided by a graphical user interface (GUI). This workflow simplifies semantic segmentation of microanatomical structures by enabling users to train highly customizable deep learning models without extensive coding expertise. The protocol outlines best practices for creating robust training datasets, configuring model parameters, and optimizing performance across diverse biomedical imaging modalities. CODAvision enhances the usability of the CODA algorithm (Nature Methods, 2022) by streamlining parameter configuration, model training, and performance evaluation, and by automatically generating quantitative results and comprehensive reports. We expand beyond the original implementation of CODA for serial histology by demonstrating robust performance across numerous medical imaging modalities and diverse biological questions. We provide sample results for data types including histology, magnetic resonance imaging (MRI), and computed tomography (CT), and demonstrate the diverse uses of this tool in applications including quantification of metastatic burden in in vivo models and deconvolution of spot-based spatial transcriptomics datasets. This protocol is designed for researchers interested in the rapid design of highly customizable semantic segmentation algorithms who have a basic understanding of programming and anatomy.
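To make the underlying training step concrete, the sketch below shows what supervised semantic segmentation training looks like in code. This is a minimal, generic illustration and not the CODAvision API: it assumes PyTorch and torchvision are available, and the class count, tile size, and random stand-in tensors are hypothetical placeholders for a real annotated dataset.

```python
# Minimal sketch of training a semantic segmentation network on per-pixel labels.
# Illustrative only; not the CODAvision implementation. Assumes PyTorch/torchvision.
import torch
from torch import nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 4  # hypothetical: background plus three microanatomical classes

# Untrained network with a NUM_CLASSES-channel output head
# (weights_backbone=None avoids downloading pretrained weights).
model = deeplabv3_resnet50(weights=None, weights_backbone=None,
                           num_classes=NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-in batch: 2 RGB image tiles with a class index at every pixel.
images = torch.rand(2, 3, 256, 256)
labels = torch.randint(0, NUM_CLASSES, (2, 256, 256))

model.train()
for epoch in range(2):                    # a real run would use many epochs
    optimizer.zero_grad()
    logits = model(images)["out"]         # shape (N, NUM_CLASSES, H, W)
    loss = criterion(logits, labels)      # per-pixel cross-entropy
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

In CODAvision, the analogous choices (class definitions, training data, and model parameters) are configured through the GUI rather than written as code, which is what removes the need for the boilerplate above.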