Mentee Robotics unveils humanoid robot that ‘integrates AI across all operational layers’


Mentee Robotics is developing an end-to-end humanoid robot with sufficient dexterity for a wide spectrum of activities in both households and industrial warehouses.

Rapid progress in AI is enabling a new level of automation, capable of autonomously fulfilling complex tasks in both home and industrial environments.

Mentee Robotics was founded in 2022 by Prof. Amnon Shashua, the company's chairman and a world-renowned expert in AI, computer vision, natural language processing and related fields; Prof. Lior Wolf, its CEO and formerly a research scientist and director at Facebook AI Research; and Prof. Shai Shalev-Shwartz, a world-renowned computer scientist and machine learning researcher. The company has operated in stealth mode for the past two years while developing the robot.

The Menteebot prototype unveiled today integrates AI across all operational layers.

Locomotion is based on a novel Simulator to Reality (Sim2Real) machine learning approach, wherein reinforcement learning is performed on a simulated version of the robot, giving access to effectively unlimited training data, followed by adaptation to the real world with minimal additional data.
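
As a rough illustration of that pattern (not Mentee's actual pipeline), the sketch below "trains" a one-parameter controller for a toy 1D point mass across simulated dynamics with randomized mass (domain randomization), then fine-tunes it with a handful of rollouts on the true system. The toy dynamics, the random-search training loop, and the REAL_MASS value are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(gain, mass, steps=120, dt=0.02):
    """Drive a 1D point mass toward the origin with a PD-style policy;
    return negative accumulated position error (higher is better)."""
    pos, vel, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        force = -gain * pos - 0.5 * gain * vel
        vel += (force / mass) * dt
        pos += vel * dt
        cost += pos ** 2 * dt
    return -cost

def sim_score(gain, episodes=16):
    """Train-time objective: average return over episodes whose mass is
    randomized, so the policy must work across plausible dynamics."""
    masses = rng.uniform(0.5, 2.0, size=episodes)
    return np.mean([rollout(gain, m) for m in masses])

# 1) "Train" in simulation only, using effectively unlimited cheap data.
candidates = np.linspace(0.5, 15.0, 60)
gain = max(candidates, key=sim_score)

# 2) Adapt to the real robot with minimal data: a few real rollouts
#    fine-tune the policy locally around the sim-trained solution.
REAL_MASS = 1.7  # unknown at train time; assumed here for illustration
gain = max(gain + np.linspace(-2.0, 2.0, 9),
           key=lambda g: rollout(g, REAL_MASS))

print(f"gain after sim training + real-world fine-tuning: {gain:.2f}")
```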

The mapping of the environment is performed on the fly using NeRF-based algorithms (Neural Radiance Fields, a neural network-based technique for representing 3D scenes).

These cognitive maps store semantic information, allowing the robot to query them to find items and places. The robot localizes itself in the 3D map and automatically plans dynamic paths that avoid obstacles.
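
A minimal sketch of what querying such a cognitive map could look like, assuming the 3D scene has already been collapsed into a 2D occupancy grid with a semantic label layer. The grid, the labels, and the breadth-first planner are illustrative stand-ins for the NeRF representation and whatever planner the company actually uses.

```python
from collections import deque

# Toy occupancy grid: 0 = free, 1 = obstacle.
GRID = [
    [0, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
]

# "Cognitive map": semantic labels attached to grid cells, so the robot
# can resolve a query like "the mug" to a location it can plan toward.
SEMANTIC = {"charging_dock": (0, 0), "mug": (3, 4), "table": (2, 3)}

def plan(start, goal):
    """Breadth-first search over free cells; returns a cell path or None."""
    rows, cols = len(GRID), len(GRID[0])
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and GRID[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

# "Find the mug": query the semantic layer, then plan an obstacle-free path.
robot_at = SEMANTIC["charging_dock"]
print(plan(robot_at, SEMANTIC["mug"]))
```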

Transformer-based Large Language Models (LLMs) are used for interpreting commands and “thinking through” the required steps for completing the task.
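
A sketch of that command-to-plan loop, with the model call replaced by a hardcoded stub so the example runs offline; the skill set, prompt, and JSON format are assumptions for illustration rather than Mentee's interface.

```python
import json

SYSTEM_PROMPT = """You control a humanoid robot with these primitives:
navigate(place), locate(object), grasp(object), place(object, place).
Given a user command, reply with a JSON list of primitive calls."""

def ask_llm(system: str, user: str) -> str:
    """Stand-in for a transformer LLM call (e.g. a chat-completion API).
    Hardcoded here so the sketch runs offline."""
    return json.dumps([
        {"skill": "navigate", "args": ["kitchen"]},
        {"skill": "locate",   "args": ["mug"]},
        {"skill": "grasp",    "args": ["mug"]},
        {"skill": "navigate", "args": ["table"]},
        {"skill": "place",    "args": ["mug", "table"]},
    ])

def plan_from_command(command: str):
    """'Think through' a verbal command by asking the model to decompose
    it into executable steps, then validate the structure before acting."""
    steps = json.loads(ask_llm(SYSTEM_PROMPT, command))
    allowed = {"navigate", "locate", "grasp", "place"}
    assert all(step["skill"] in allowed for step in steps), "unknown skill"
    return steps

for step in plan_from_command("Bring the mug to the table"):
    print(step["skill"], step["args"])
```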

An emphasis is placed on integrating locomotion with dexterity, such as dynamically balancing the robot while it carries weight or reaches out with its hands.
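
As a back-of-envelope illustration of why carried weight matters for balance, the sketch below computes the combined center of mass of a robot plus payload and checks that its ground projection stays over the feet. Real humanoid balancing is a dynamic whole-body control problem (for example ZMP- or model-predictive-control-based); the masses and support span here are made-up numbers.

```python
def combined_com_x(robot_mass, robot_com_x, payload_mass, payload_com_x):
    """Horizontal position of the combined center of mass."""
    total = robot_mass + payload_mass
    return (robot_mass * robot_com_x + payload_mass * payload_com_x) / total

def statically_balanced(com_x, support_min_x, support_max_x):
    """Static stability: the CoM ground projection must stay over the
    support polygon (reduced here to the 1D span of the feet)."""
    return support_min_x <= com_x <= support_max_x

# A 60 kg robot holding a 5 kg box 0.4 m in front of its body:
com = combined_com_x(60.0, 0.0, 5.0, 0.4)
print(f"combined CoM: {com:.3f} m -> balanced over feet [-0.1, 0.1]? "
      f"{statically_balanced(com, -0.1, 0.1)}")
```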

The prototype unveiled today, though not the final version ready for deployment, demonstrates for the first time a complete end-to-end cycle: from a verbal command to complex task completion, including navigation, locomotion, scene understanding, object detection and localization, grasping, and natural language understanding.

The production-ready prototype, expected to be deployed by Q1 2025, will be powered by camera-only sensing, proprietary electric motors that support unprecedented dexterity, and fully integrated AI.

This AI enables complex reasoning for task completion, conversation, and on-the-fly learning of new tasks.

Prof. Amnon Shashua, chairman of Mentee Robotics, says: “We are on the cusp of a convergence of computer vision, natural language understanding, strong and detailed simulators, and methodologies on and for transferring from simulation to the real world.

“At Mentee Robotics we see this convergence as the starting point for designing the future general-purpose bi-pedal robot that can move everywhere (as a human) with the brains to perform household tasks and learn, through imitation, tasks it was not previously trained for.”


