Accurate analysis of electron microscopy (EM) images is essential for exploring nanoscale biological structures, yet data heterogeneity and fragmented workflows hinder scalable insights. Pretrained on large, diverse datasets, image foundation models provide a robust framework for learning transferable representations across tasks. Here, we introduce EM-DINO, the first image foundation model pretrained on EM-5M, the largest standardized EM corpus (5 million images), encompassing multiple species, tissues, protocols, and resolutions. EM-DINO's multi-scale embeddings capture rich image features that support multiple applications, including organ-specific pattern recognition, image deduplication, and high-quality image restoration. Building on these representations, we developed OmniEM, a U-shaped architecture for unified dense prediction that surpasses task-specific models in both image restoration and segmentation. In restoration benchmarks, OmniEM matches the performance of an EM-specific diffusion model while producing fewer hallucinations that could mislead EM interpretation. It also outperforms previous methods in both generalized mitochondrial segmentation and multi-class organelle segmentation. Furthermore, we demonstrate OmniEM's integrated capability to generate high-resolution segmentations from low-resolution inputs, offering the potential to enable fine-scale subcellular analysis in legacy and high-throughput EM datasets. Together, EM-5M, EM-DINO, OmniEM, and an integrated Napari plugin constitute a comprehensive end-to-end toolkit for standardized EM analysis, advancing cellular and subcellular understanding and accelerating the discovery of novel organelle morphologies and disease-related alterations.