Albina Crypto @albinafoxcrypto ·
MIT Tech Review's #1 breakthrough tech of 2026: reasoning models — AIs that think through problems step by step. Not just faster answers. Smarter ones. This changes everything. #AI #ReasoningModels #MITTechReview
18
Anton Biletskyi-Volokh @abv_creative ·
LLM “reasoning” isn’t a diary. It’s chemistry: bonds between steps. This paper argues distillation fails when you copy words instead of the topology—and MOLE-SYN tries to transfer the bond pattern go.abvx.xyz/ewbg62 #LongCoT #ReasoningModels #ModelDistillation #SyntheticData
The Molecular Structure of Thought: Why Long Chain-of-Thought Isn’t “Text” — It’s Topology

Why distillation fails, why “reasoning traces” are a moat, and how MOLE-SYN tries to copy the shape of thought — not the words.

From abvcreative.medium.com
51
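The post above frames long chain-of-thought as a dependency topology between steps rather than a sequence of words. As a purely illustrative sketch of that idea (MOLE-SYN's actual method is in the linked paper; the step format and the "cites step N" extraction rule here are simplified assumptions), one could compare two reasoning traces by the graph of which steps depend on which, ignoring their surface wording:

```python
# Toy sketch: compare two chains of thought by the dependency topology of
# their steps rather than their surface text. This is NOT MOLE-SYN's
# algorithm; the step format and extraction rule are assumptions made up
# for illustration.
import re

def step_graph(steps):
    """Edges (i, j) where step j explicitly cites an earlier step i."""
    edges = set()
    for j, text in enumerate(steps, start=1):
        for m in re.finditer(r"step (\d+)", text, flags=re.IGNORECASE):
            i = int(m.group(1))
            if i < j:
                edges.add((i, j))
    return edges

def topology_overlap(a, b):
    """Jaccard similarity of the two edge sets: 1.0 = identical bond pattern."""
    ga, gb = step_graph(a), step_graph(b)
    return len(ga & gb) / len(ga | gb) if (ga | gb) else 1.0

teacher = [
    "Compute 3 + 4 = 7",
    "Using step 1, square it: 49",
    "Using step 2, subtract 9: 40",
]
# Different wording, same bond pattern (2 depends on 1, 3 depends on 2):
student = [
    "First find the sum 3 + 4 = 7",
    "Take the result of step 1 and square it: 49",
    "From step 2, 49 - 9 = 40",
]
print(topology_overlap(teacher, student))  # → 1.0
```

On this view, a distilled student that paraphrases every step but preserves the edge set would score 1.0, while one that copies the words but breaks a dependency would score lower.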
RoxsRoss @RoxsRoss ·
🧠 Reasoning models don't control their chains of thought, and that's a good thing. OpenAI presents CoT-Control, a key finding for AI safety. openai.com/index/reasonin… #ReasoningModels #AISafety #CoT #RoxsRoss
Reasoning models struggle to control their chains of thought, and that’s good

OpenAI introduces CoT-Control and finds reasoning models struggle to control their chains of thought, reinforcing monitorability as an AI safety safeguard.

From openai.com
1
92
Xingyu Zhu @XingyuZhu_ ·
Presenting CONTEXTUAL DRAG 🚨 Humans can benefit from incorrect previous attempts and avoid repeating the same mistakes, but reasoning models are dragged toward similar errors (even when they explicitly know the attempts are incorrect!) Work@PrincetonPLI #LLM #ReasoningModels
Yun (Catherine) Cheng @chengyun01 ·
Humans anchor on the first piece of information they receive. Do reasoning models escape this bias? We uncover Contextual Drag: errors in context bias subsequent reasoning toward similar mistakes. It persists even if the error has been recognized via reasoning.
226
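The Contextual Drag claim above can be made concrete with a toy probe: run a model on each question twice, once with a prior incorrect attempt left in context (explicitly flagged as wrong) and once without, then count how often only the "dragged" run repeats the error. This is a minimal sketch of that experimental shape, not the paper's protocol; the prompt wording and the metric below are assumptions.

```python
# Toy probe for "Contextual Drag": paired prompts with and without a
# flagged-incorrect prior attempt, plus a simple drag metric. The prompt
# text and metric are illustrative assumptions, not the paper's setup.
def build_prompts(question, wrong_attempt):
    """Return (clean, dragged) prompts for the same question."""
    clean = f"Question: {question}\nAnswer step by step."
    dragged = (
        f"Question: {question}\n"
        f"A previous attempt (known to be INCORRECT):\n{wrong_attempt}\n"
        "Ignore the incorrect attempt and answer step by step."
    )
    return clean, dragged

def drag_rate(answers_clean, answers_dragged, wrong_answer):
    """Fraction of items where only the 'dragged' run repeats the error."""
    repeats = sum(
        1 for c, d in zip(answers_clean, answers_dragged)
        if d == wrong_answer and c != wrong_answer
    )
    return repeats / len(answers_clean)

clean, dragged = build_prompts("What is 17 * 24?", "17 * 24 = 398")
# Hypothetical model outputs over two trials (no real model is called here):
print(drag_rate(["408", "408"], ["398", "408"], "398"))  # → 0.5
```

A positive drag rate despite the explicit "known to be INCORRECT" flag is exactly the persistence-after-recognition effect the thread describes.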