Thirty to seventy percent of proteins in any given genome have no assigned function; these proteins have been labeled the protein "unknome." This large knowledge shortfall is one of the final frontiers of biology. Machine-learning (ML) approaches are enticing, with early successes demonstrating the ability to propagate functional knowledge from experimentally characterized proteins. An open question is whether ML approaches can predict enzymatic functions unseen in their training sets. Using a set of Escherichia coli unknowns, we evaluated state-of-the-art ML approaches and found that these methods currently lack the ability to integrate scientific reasoning into their predictions. While human annotators can leverage the wealth of genomic data to make plausible predictions about unknowns, current ML methods not only fail to make novel predictions but also commit basic logic errors. This underscores the need to include assessments of prediction uncertainty in model output and to test for hallucinations (logic failures) as part of model evaluation. Explainable AI (XAI) analysis can identify indicators of prediction errors, potentially revealing the most relevant data to include in the next generation of computational models.