
AI can now REASON?! tl;dr: no, it can't!


Another day, another AI model drop! This time, it’s the OpenAI o1 series, and wow, the hype is all over my feed 🙄 Is this the breakthrough in reasoning we’ve all been waiting for? 🤔

OpenAI claims these models are designed for coding, math, and science. Supposedly, they’re better at reasoning through tough problems. But wait, what?! They aren’t actually reasoning. What they are doing is mimicking reasoning with a technique called “chain-of-thought” processing. How does this work? Instead of jumping straight to an answer, these models break a problem down step by step, processing more data (tokens) and taking longer to respond. This mimics how humans reason through problems, but at the end of the day, it’s still advanced pattern recognition – the model is just generating text based on its training, not “thinking” like we do 💡
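To make the idea concrete, here’s a minimal sketch of what chain-of-thought prompting looks like compared to asking for an answer directly. The prompt wording is purely illustrative – o1’s internal reasoning format is not public, and this is just the general technique, not OpenAI’s implementation:

```python
# Minimal sketch of chain-of-thought prompting vs. direct prompting.
# The prompt wording is illustrative; o1's internal format is not public.

def direct_prompt(question: str) -> str:
    """Ask for the answer straight away."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """Nudge the model to spell out intermediate steps before answering."""
    return (
        f"Q: {question}\n"
        "Think through the problem step by step, "
        "then give the final answer.\n"
        "A: Let's think step by step."
    )

print(direct_prompt("What is 17 * 24?"))
print(chain_of_thought_prompt("What is 17 * 24?"))
```

Same question, but the second prompt pushes the model to generate (and burn tokens on) intermediate steps – which is exactly where the extra accuracy, latency, and cost come from.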

In theory, this should lead to more accurate outputs in areas like coding and math, where getting things wrong is a big issue. But there are trade-offs:

  • Reduced context windows due to the model using tokens for “reasoning steps”
  • Longer wait times for responses (anywhere from a few seconds to a few minutes)
  • And yes, more money – these models are far more expensive to run

For now, they’re only in preview (with very strict usage limits) and limited to text-based tasks.

To me, the bigger takeaway is the shift toward specialized models. Instead of trying to make one model do everything, OpenAI is focusing on building models for specific, high-demand tasks like coding, math, and science. Maybe this specialization is the real innovation here.

What do you think?

Luise Freese: Consultant & MVP