The accurate prediction of ADMET (Absorption, Distribution, Metabolism, Excretion, and Toxicity) properties is essential for early-stage drug development, helping to reduce late-stage attrition and guide compound prioritization. In recent years, machine learning models have emerged as powerful tools for ADMET prediction, leveraging diverse molecular representations ranging from handcrafted descriptors to graph neural networks and language model embeddings. Despite these advances, balancing predictive performance with computational efficiency remains a key challenge, particularly in high-throughput screening scenarios. Among unsupervised embedding methods, Mol2Vec has shown promise by capturing chemical substructure context analogously to word embeddings in natural language processing. However, its performance on comprehensive ADMET benchmarks has not been systematically assessed. In this work, we reimplement Mol2Vec with an expanded training corpus and higher embedding dimensionality, and evaluate its utility across 16 ADMET prediction tasks from the Therapeutics Data Commons (TDC). We show that while Mol2Vec embeddings alone are competitive, combining them with classical molecular descriptors and applying feature selection significantly improves performance. Our final MLP models with enhanced Mol2Vec embeddings achieved top-1 results on 10 of 16 benchmarks, more than any previously reported model on the TDC leaderboard, demonstrating that descriptor-enriched representations, even when paired with relatively simple MLPs, can rival or exceed the performance of more complex models.
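
The modeling recipe summarized above — concatenating learned embeddings with classical descriptors, applying feature selection, and fitting an MLP — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the array shapes, the `SelectKBest` selector, and the MLP architecture are assumptions chosen for the sketch, and random data stands in for real Mol2Vec embeddings and molecular descriptors.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_mols = 200
# Stand-ins for real features: Mol2Vec-style embeddings (dimension 300
# is assumed) and classical molecular descriptors (50 is assumed).
embeddings = rng.normal(size=(n_mols, 300))
descriptors = rng.normal(size=(n_mols, 50))
X = np.hstack([embeddings, descriptors])  # descriptor-enriched representation
y = (X[:, 0] + X[:, 300] > 0).astype(int)  # synthetic binary ADMET label

# Feature selection followed by a relatively simple MLP, mirroring the
# pipeline described in the abstract (hyperparameters are illustrative).
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=64),
    MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0),
)
model.fit(X, y)
train_acc = model.score(X, y)
```

On real data one would of course evaluate with the TDC benchmark splits and metrics rather than training accuracy; the sketch only shows how the pieces compose.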