Additionally, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) https://illusion-of-kundun-mu-onl86159.pointblog.net/illusion-of-kundun-mu-online-an-overview-81049205