On the limits of LLMs (Large Language Models) and LRMs (Large Reasoning Models). The TL;DR: "Our findings reveal fundamental limitations in current models: despite sophisticated self-reflection mechanisms, these models fail to develop generalizable reasoning capabilities beyond certain complexity thresholds." In other words: past a certain problem complexity, accuracy collapses.
Interesting paper from Apple. https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
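To make "accuracy collapse beyond a complexity threshold" concrete, here's a rough sketch of the kind of measurement involved: score model-proposed solutions to a puzzle (Tower of Hanoi here) as the puzzle gets harder, and track the fraction that are valid. This is my own illustration, not the paper's harness; `ask_model`, the trial count, and the exact setup are assumptions.

```python
# Sketch: measure solution accuracy as puzzle complexity grows.
# `ask_model(n)` is a hypothetical stand-in for querying an LLM/LRM
# and parsing its answer into a list of (src_peg, dst_peg) moves.

def is_valid_hanoi_solution(n_disks, moves):
    """Check that moves legally transfer all disks from peg 0 to peg 2,
    never placing a larger disk on a smaller one."""
    pegs = [list(range(n_disks, 0, -1)), [], []]  # peg 0 holds disks n..1
    for src, dst in moves:
        if not pegs[src]:
            return False  # moving from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False  # larger disk placed on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n_disks, 0, -1))  # all disks on target peg


def accuracy_by_complexity(ask_model, disk_counts, trials=20):
    """Fraction of valid solutions at each complexity level (disk count)."""
    return {
        n: sum(is_valid_hanoi_solution(n, ask_model(n)) for _ in range(trials)) / trials
        for n in disk_counts
    }
```

The reported pattern is that curves like this stay high at low complexity and then drop sharply toward zero, rather than degrading gradually.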