[Paper Digest] Week 44 of 2025 (Oct 26 – Nov 1) (Robotics / Embodied AI / LLM)

Note: The Chinese text was translated with googletrans; where the translation is inaccurate, the English original takes precedence.

Modern LLMs are trained to "think" primarily via explicit text generation, such as chain-of-thought (CoT), which defers reasoning to post-training and under-leverages pre-training data. We present and open-source Ouro, named …