Language is a hierarchically structured system that enables humans to communicate complex meanings. Despite recent advances, the neurocomputational mechanism underlying the composition of natural language remains unclear. Building on neural population theory, we investigated how neural trajectories in latent spaces underpin the composition of natural language, which integrates diverse lexical content and syntactic relations. We found that neural trajectories derived from human neocortical responses reveal an orchestration of distinct coding strategies during naturalistic story comprehension: the neural latent geometry is primarily associated with syntactic relations and compresses syntactic information more efficiently than lexical content. We further demonstrate that these trajectories can be simulated by brain-inspired computing systems with near-critical dynamics and a preference for historical information. Overall, by positioning structure-based integration as a key computation in natural language comprehension, our findings provide a novel perspective on the mechanism underlying real-world language use and underscore the importance of contextual information for the development of brain-inspired intelligent systems.
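To make the compression claim concrete, the sketch below shows one common way to quantify how efficiently neural trajectories are compressed in a latent space: the participation ratio of the trajectory covariance spectrum. The abstract does not specify the metric actually used in the study, so the function `participation_ratio` and the PCA-style eigenvalue computation are illustrative assumptions, not the authors' method.

```python
import numpy as np

def participation_ratio(trajectories: np.ndarray) -> float:
    """Effective dimensionality of a set of latent neural trajectories.

    trajectories: (n_timepoints, n_latent_dims) array of latent states.
    Returns PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are
    eigenvalues of the trajectory covariance. A lower PR relative to the
    ambient dimensionality indicates more efficient compression.
    """
    X = trajectories - trajectories.mean(axis=0, keepdims=True)
    cov = X.T @ X / (len(X) - 1)
    eig = np.linalg.eigvalsh(cov)        # real, non-negative up to noise
    eig = np.clip(eig, 0.0, None)
    return float(eig.sum() ** 2 / (eig ** 2).sum())

# Hypothetical usage: compare syntax- vs. lexicon-related latent trajectories.
rng = np.random.default_rng(0)
syntax_traj = rng.normal(size=(500, 20)) @ np.diag(np.linspace(3.0, 0.1, 20))
lexical_traj = rng.normal(size=(500, 20))
print(participation_ratio(syntax_traj), participation_ratio(lexical_traj))
```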
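The "brain-inspired computing systems with near-critical dynamics and a preference for historical information" can be illustrated, under assumptions, by a leaky echo state network: rescaling the recurrent weights to a spectral radius near 1 places the autonomous dynamics close to criticality, and a small leak rate makes the state retain most of its past, so the trajectory preferentially carries historical (contextual) information. This is a generic reservoir-computing stand-in, not the specific architecture used in the study; all names and parameter values are illustrative.

```python
import numpy as np

def make_reservoir(n: int, spectral_radius: float = 0.99,
                   density: float = 0.1, seed: int = 0) -> np.ndarray:
    """Sparse random recurrent weights rescaled to a target spectral radius.

    spectral_radius close to 1.0 places the dynamics near criticality,
    the boundary between decaying and exploding activity."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n)) * (rng.random((n, n)) < density)
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

def run_reservoir(W: np.ndarray, W_in: np.ndarray,
                  inputs: np.ndarray, leak: float = 0.2) -> np.ndarray:
    """Leaky-integrator updates: x_t = (1 - leak) * x_{t-1} + leak * tanh(...).

    A small `leak` keeps most of the previous state at every step, i.e. the
    network preferentially weights historical information over new input."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Hypothetical usage: drive the reservoir with a stream of word embeddings
# and treat the resulting states as simulated latent neural trajectories.
n_units, emb_dim = 200, 50
W = make_reservoir(n_units, spectral_radius=0.99)
W_in = np.random.default_rng(1).normal(size=(n_units, emb_dim)) * 0.1
word_embeddings = np.random.default_rng(2).normal(size=(300, emb_dim))
trajectory = run_reservoir(W, W_in, word_embeddings, leak=0.2)
```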