The brain seamlessly transforms sensory information into precisely timed movements, enabling us to type familiar words, play musical instruments, or perform complex motor routines with millisecond precision. This process often involves organizing actions into stereotyped "chunks". Intriguingly, brain regions that are critical for action chunking, such as the dorsolateral striatum (DLS), also exhibit neural dynamics that encode the passage of time. How such brain regions support both task-specific motor habits and task-invariant internal timing, two seemingly distinct functions, remains a fundamental question. Here we show, using recurrent neural network models, that these two functions emerge from a single computational principle: sensory compression, the functional compression of high-dimensional sensory information into a low-dimensional representation. We find that a sensory bottleneck forces the network to develop stable internal dynamics that implicitly encode time, which in turn serve as a scaffold upon which the brain learns action chunks in response to predictable environmental regularities. This mechanism unifies task-invariant time coding and sensory-guided motor timing as two outcomes of the same process of sensory compression, providing a general principle for how the brain mirrors environmental regularities in both internal stable neural trajectories and external consistent motor habits.
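The core architectural idea, a recurrent network whose high-dimensional sensory input is funneled through a low-dimensional bottleneck before reaching the recurrent units, can be sketched as follows. This is a minimal illustration, not the paper's actual model: all dimensions, weight initializations, and the untrained vanilla-RNN dynamics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper)
n_sensory, n_bottleneck, n_hidden = 100, 3, 50
T = 20  # number of simulated time steps

# Sensory compression: a low-rank linear map squeezes the sensory
# stream into n_bottleneck dimensions before it drives the RNN.
W_compress = rng.normal(0, 1 / np.sqrt(n_sensory), (n_bottleneck, n_sensory))
W_in = rng.normal(0, 1 / np.sqrt(n_bottleneck), (n_hidden, n_bottleneck))
W_rec = rng.normal(0, 1 / np.sqrt(n_hidden), (n_hidden, n_hidden))

def step(h, x):
    """One vanilla-RNN update; x is compressed before entering the loop."""
    z = W_compress @ x                      # low-dimensional sensory code
    return np.tanh(W_rec @ h + W_in @ z)    # recurrent dynamics

h = np.zeros(n_hidden)
trajectory = []
for t in range(T):
    x = rng.normal(size=n_sensory)          # high-dimensional sensory input
    h = step(h, x)
    trajectory.append(h.copy())
trajectory = np.stack(trajectory)           # shape (T, n_hidden)

# The bottleneck caps the rank of the effective input pathway, so the
# input-driven component of the dynamics lives in at most n_bottleneck
# dimensions; the recurrent dynamics must supply the rest internally.
effective_input = W_in @ W_compress
print(np.linalg.matrix_rank(effective_input))  # at most n_bottleneck
```

Because the effective sensory pathway has rank at most `n_bottleneck`, the network cannot simply read out rich sensory detail; under training (not shown here) this pressure is what, per the abstract, drives the emergence of stable internal trajectories that implicitly encode elapsed time.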