May 11

I've Covered Robots for Years. This One Is Different
From sorting chicken nuggets to screwing in lightbulbs, Eka’s robotic claw feels like we're approaching a ChatGPT moment for the physical world.

Dendi Suhubdy offers deep analysis:

I have put some thought into that:

The Perception–Planning Gap: What's Actually Hard About Visual AI in 2026

The thesis I want to defend in this piece is structural: perception in pixels is largely solved at the representation level; perception for action is not, and the gap between the two is the central unsolved problem of visual AI in 2026. Frontier vision-language models can pass medical-board questions and explain radiographs at attending-physician level. Frontier robots, after a decade of foundation-model progress, still cannot reliably load an arbitrary dishwasher. Moravec’s paradox is not a quaint historical observation; it is the daily lived experience of every embodied-AI lab.

Big Al responds:

Thanks for sharing. There's much I don't understand, but when I scroll down to the summary, I think I get the general idea. I hadn't thought about the challenge of having effective testing/evaluation standards.

Asindu Drileba writes:

I thought this was going to be about humanoid robots (like Figure & Optimus). Those still have a great many problems. Industrial robotics, however, has always been making mind-blowing but quiet progress.

Good case studies are Amazon (robots were operating in its warehouses about 10 years ago, and it's far better now) and Tesla, also going back about 10 years.

The reason is that these industrial robots face very few edge cases, and those can be anticipated comprehensively. I think this robot ChatGPT moment already occurred in industry & manufacturing.

