Ask Not "Why" but "Why Not": Harnessing the Power of Reasoning LLMs

By Leo Kee Chye

With a wave of advanced reasoning Large Language Models (LLMs) entering the market, including OpenAI's o3-mini, DeepSeek R1, Google's experimental Gemini 2.0 Flash, xAI's Grok 3, Perplexity's reasoning and deep research models, and Alibaba's QwQ, many of us may feel overwhelmed, especially as we are still exploring the full potential of earlier models. This article argues that the rise of these sophisticated reasoning LLMs will dramatically enhance researchers' capabilities. Rather than simply seeking answers from these models, researchers should focus on refining the questions they ask. Furthermore, instead of limiting our inquiries to "What," "How," and "Why," we should also ask, "Why not?"

Understanding Reasoning LLMs

Core Features and Evolution

Reasoning LLMs are designed to think step by step, deconstructing complex problems into components and solving them either sequentially...