Transformer-based large language models (LLMs) have significantly advanced our understanding of meaning representation in the human brain. However, the validity of increasingly large LLMs as cognitive models has been questioned because of their extensive training data and their ability to access context spanning hundreds of words. In this study, we investigated whether instruction tuning, another core technique in recent LLMs beyond mere scaling, can enhance models' ability to capture linguistic information in the human brain. We evaluated the self-attention of base and fine-tuned LLMs of different sizes against human eye-movement and functional magnetic resonance imaging (fMRI) activity patterns during naturalistic reading. We show that scaling has a greater impact than instruction tuning on model-brain alignment, reinforcing the scaling law in brain encoding performance. These findings have significant implications for understanding the cognitive plausibility of LLMs and their role in studying naturalistic language comprehension.