In addition to supervised motor learning, the cerebellum supports nonmotor forms of learning, including reinforcement learning (RL). Recent studies in animal models have identified core RL signals related to reward processing, reward prediction, and prediction errors in specific regions of the cerebellar cortex. However, the computational constraints on these signals remain poorly understood, particularly in humans. Here, we investigated cerebellar RL signals in a computationally driven fMRI study. Human participants performed an RL task with no low-level sensorimotor contingencies. We observed robust RL signals related to reward processing and reward prediction errors in cognitive regions of the cerebellum (Crus I and II). These signals were not explained by oculomotor or physiological confounds. By manipulating the delay between choices and reward outcomes, we discovered that cerebellar RL signals are temporally sensitive, becoming undetectable at supra-second feedback delays. Similar delay effects were not found in other areas implicated in reward processing, including the ventral striatum and hippocampus. Connectivity analyses revealed that, during RL feedback, cognitive areas of the cerebellum coactivate with a network that includes the medial and lateral prefrontal cortex and the caudate nucleus. Together, these results highlight a temporally constrained contribution of the human cerebellum to a cognitive learning task.
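
The abstract refers to trial-wise reward prediction errors from a computationally driven design but does not specify the underlying model. As a minimal illustrative sketch only, not the authors' implementation, a standard Rescorla-Wagner-style update is one common way such prediction errors are computed and then entered as parametric regressors at feedback; the function name, learning rate `alpha`, and binary reward coding below are assumptions made for illustration.

```python
import numpy as np

def rescorla_wagner_rpe(rewards, alpha=0.3, v0=0.0):
    """Illustrative Rescorla-Wagner-style update (not the paper's exact model).

    rewards : 1-D array of trial outcomes (e.g., 1 = reward, 0 = no reward)
    alpha   : assumed learning rate
    v0      : initial expected value
    Returns the trial-wise reward prediction errors (RPEs), the quantities
    typically used as parametric fMRI regressors at feedback onset.
    """
    v = v0
    rpes = np.empty(len(rewards))
    for t, r in enumerate(rewards):
        rpes[t] = r - v          # prediction error: outcome minus current expectation
        v = v + alpha * rpes[t]  # update the expected value toward the outcome
    return rpes

# Example: RPEs for a short sequence of binary outcomes
print(rescorla_wagner_rpe(np.array([1, 1, 0, 1, 0, 0, 1])))
```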