Transformer-based architectures currently achieve state-of-the-art performance across a wide range of domains, including biological sequence modeling. Motivated by the recent introduction of the xLSTM architecture, we investigate its effectiveness for RNA sequence modeling by comparing a 33.7M-parameter RNA-xLSTM model against two leading RNA language models, RNA-FM and RiNALMo-33M. We pretrain RNA-xLSTM on the RNAcentral database and evaluate it on two downstream tasks: RNA secondary structure prediction and splice site prediction. Our results show that while RNA-xLSTM underperforms the similarly sized RiNALMo-33M, it outperforms the larger RNA-FM model on certain tasks. However, its overall performance remains inconsistent, and its advantages over transformer-based models are unclear, suggesting that further work is needed to assess its potential for RNA modeling.