The simple linear threshold units used in many artificial neural networks have limited computational capacity. Famously, a single unit cannot handle non-linearly separable problems such as XOR. In contrast, real neurons exhibit complex morphologies and active dendritic integration, suggesting that their computational capacities exceed those of simple linear units. Considering specific families of Boolean functions, we empirically examine the computational limits of single units that incorporate more complex dendritic structures. For random Boolean functions, we show that there is a phase transition in learnability as a function of the input dimension: most random functions below a certain critical dimension are learnable, while those above it are not. This critical dimension is best predicted by the overall size of the dendritic arbor. This demonstrates that real neurons have a far higher computational complexity than is usually assumed in neural models, whether in machine learning or computational neuroscience. Furthermore, using architectures that are, respectively, more 'apical' or 'basal', we show that the sets of functions learnable by each type of neuron are non-trivially disjoint. Importantly, these two types of architectures differ in the robustness and generality of the computations they can perform: the basal-like architecture shows a higher probability of function realization, while the apical-like architecture shows an advantage in fast retraining on different functions. Given the cell-type specificity of morphological characteristics, these results suggest that both different components of the dendritic arbor and distinct cell types may have distinct computational roles. Our analysis offers new directions for neuron-level inductive biases in NeuroAI, using scalable models of cell-type-specific neuronal computation.
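The XOR claim above can be verified by exhaustive search: a minimal sketch (not from the paper, and independent of its dendritic models) that enumerates two-input linear threshold units over a weight grid and shows that the realizable truth tables never include XOR. The grid and threshold convention are illustrative assumptions; any sufficiently fine grid recovers exactly the 14 linearly separable two-input Boolean functions.

```python
import itertools

# The four two-input Boolean patterns and the XOR truth table over them.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor = (0, 1, 1, 0)

def threshold_unit(w1, w2, b):
    """Truth table of a single linear threshold unit: fires iff w.x + b > 0."""
    return tuple(int(w1 * x1 + w2 * x2 + b > 0) for x1, x2 in inputs)

# Sweep weights and bias over a small grid (an assumption for illustration).
grid = [v / 2 for v in range(-4, 5)]  # -2.0 to 2.0 in steps of 0.5
realizable = {
    threshold_unit(w1, w2, b)
    for w1, w2, b in itertools.product(grid, repeat=3)
}

# Only 14 of the 16 two-input Boolean functions are linearly separable;
# XOR (and XNOR) never appear, no matter how the grid is refined.
print(len(realizable))        # 14
print(xor in realizable)      # False
```

The brute-force sweep recovers every linearly separable function (AND, OR, NAND, projections, implications, constants) but never XOR, which is the separability limit the abstract contrasts with dendrite-equipped units.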