Multiplexed immunofluorescence microscopy provides detailed insight into the spatial architecture of cancer tissue by labeling samples with multiple fluorescent markers. Classical analysis approaches focus on single-cell data but are limited by segmentation accuracy and the representational power of hand-extracted features, potentially overlooking crucial spatial interrelationships among proteins or cells. We developed a hierarchical self-supervised deep learning approach that learns feature representations from multiplexed microscopy images without expert annotations, encoding tissue samples at both the local (cellular) level and the global (tissue architecture) level. We applied the method to lung, prostate, and renal cancer tissue microarray cohorts to investigate whether self-supervised learning can recognize clinically meaningful marker patterns in multiplexed microscopy images. The local features captured distinct marker intensity patterns, while the global features separated tissue microarray samples by their location within the whole tissue section. The learned features identified prognostically distinct patient groups with significantly different survival outcomes, and attention maps extracted from the models highlighted tissue regions that correlate with specific combinations of proteins. Overall, the approach effectively profiles complex multiplexed microscopy images, offering potential for improved biomarker discovery and better-informed cancer treatment decisions.
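The hierarchical idea described above, pooling local (cell-level) features into a global (tissue-level) representation while producing an attention map over regions, can be sketched minimally as follows. This is an illustrative toy in NumPy, not the paper's actual architecture: the encoder is omitted, and the feature dimension, cell count, and attention parameterization are all assumptions made up for the example.

```python
import numpy as np


def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()


rng = np.random.default_rng(0)

# Hypothetical local features: 50 cells, each a 16-dim embedding.
# In the described approach these would come from a self-supervised
# encoder applied to multiplexed image crops around each cell.
cell_features = rng.normal(size=(50, 16))

# Attention-based pooling: a scoring vector (standing in for learned
# parameters) assigns each cell a weight; the global tissue feature is
# the attention-weighted sum of the local cell features.
attention_vector = rng.normal(size=16)
scores = cell_features @ attention_vector
weights = softmax(scores)                 # attention map over cells
global_feature = weights @ cell_features  # tissue-level representation

print(global_feature.shape)  # (16,)
```

The attention weights double as an interpretability signal: high-weight cells mark the tissue regions the pooled representation relies on, which is the mechanism behind the attention maps mentioned above.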