Top robotics names discuss humanoids, generative AI and more


    Last month, I took an extended break. In a bid to keep my robotics newsletter Actuator (subscribe here) up and running, however, I reached out to some of the biggest names in the industry. I asked people from CMU, UC Berkeley, Meta, NVIDIA, Boston Dynamics and the Toyota Research Institute the same six questions, covering topics like generative AI, the humanoid form factor, home robots and more. You’ll find all of the answers, broken up by question, below. You would be hard-pressed to find a more comprehensive breakdown of robotics in 2023 and the path it’s blazing for future technologies.

    What role(s) will generative AI play in the future of robotics?

    Image Credits: Getty Images

    Matthew Johnson-Roberson, CMU: Generative AI, through its ability to generate novel data and solutions, will significantly bolster the capabilities of robots. It could enable them to better generalize across a wide range of tasks, enhance their adaptability to new environments and improve their ability to autonomously learn and evolve.

    Dhruv Batra, Meta: I see generative AI playing two distinct roles in embodied AI and robotics research:

    • Data/experience generators
      Generating 2D images, video, 3D scenes, or 4D (3D + time) simulated experiences (particularly action/language conditioned experiences) for training robots because real-world experience is so scarce in robotics. Basically, think of these as “learned simulators.” And I believe robotics research simply cannot scale without training and testing in simulation.
    • Architectures for self-supervised learning
      Generating sensory observations that an agent will observe in the future, to be compared against actual observations and used as an annotation-free signal for learning. See Yann LeCun’s position paper on autonomous machine intelligence (AMI) for more details. A minimal sketch of this idea follows this list.
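
    To make that second role concrete, here is a minimal predict-and-compare training loop in PyTorch. Every name, shape and design choice in it (including the stop-gradient latent target) is an illustrative assumption rather than a reconstruction of any specific published architecture.

        import torch
        import torch.nn as nn

        class PredictiveModel(nn.Module):
            """Toy world model: encode the current observation, then predict
            the latent encoding of the next observation from that latent and
            the action taken."""
            def __init__(self, obs_dim=64, action_dim=8, latent_dim=32):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU())
                self.predictor = nn.Sequential(
                    nn.Linear(latent_dim + action_dim, latent_dim), nn.ReLU(),
                    nn.Linear(latent_dim, latent_dim),
                )

            def forward(self, obs_t, action_t):
                z_t = self.encoder(obs_t)
                return self.predictor(torch.cat([z_t, action_t], dim=-1))

        model = PredictiveModel()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        # One training step on a random batch. The "label" is simply the
        # encoding of what the agent actually observed next, so the learning
        # signal requires no human annotation.
        obs_t, action_t = torch.randn(16, 64), torch.randn(16, 8)
        obs_next = torch.randn(16, 64)
        z_pred = model(obs_t, action_t)
        with torch.no_grad():
            z_target = model.encoder(obs_next)  # stop-gradient target
        loss = nn.functional.mse_loss(z_pred, z_target)
        opt.zero_grad(); loss.backward(); opt.step()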

    Aaron Saunders, Boston Dynamics: The current rate of change makes it hard to predict very far into the future. Foundation models represent a major shift in how the best machine learning models are created, and we are already seeing some impressive near-term accelerations in natural language interfaces. They offer opportunities to create conversational interfaces to our robots, improve the quality of existing computer vision functions and potentially enable new customer-facing capabilities such as visual question answering. Ultimately we feel these more scalable architectures and training strategies are likely to extend past language and vision into robotic planning and control. Being able to interpret the world around a robot will lead to a much richer understanding on how to interact with it. It’s a really exciting time to be a roboticist!

    Russ Tedrake, TRI: Generative AI has the potential to bring revolutionary new capabilities to robotics. Not only are we able to communicate with robots in natural language, but connecting to internet-scale language and image data is giving robots a much more robust understanding and reasoning about the world. But we are still in the early days; more work is needed to understand how to ground image and language knowledge in the types of physical intelligence required to make robots truly useful.

    Ken Goldberg, UC Berkeley: Although the rumblings started a bit earlier, 2023 will be remembered as the year when generative AI transformed robotics. Large language models like ChatGPT allow robots and humans to communicate in natural language. Words evolved over time to represent useful concepts from “chair” to “chocolate” to “charisma.” Roboticists also discovered that large Vision-Language-Action models can be trained to facilitate robot perception and to control the motions of robot arms and legs. Training requires vast amounts of data, so labs around the world are now collaborating to share data. Results are pouring in, and although there are still open questions about generalization, the impact will be profound.

    Another exciting topic is “Multi-Modal models” in two senses of multi-modal:

    • Multi-Modal in combining different input modes, e.g. Vision and Language. This is now being extended to include Tactile and Depth sensing, and Robot Actions.
    • Multi-Modal in terms of allowing different actions in response to the same input state. This is surprisingly common in robotics; for example, there are many ways to grasp an object. Standard deep models will “average” these grasp actions, which can produce very poor grasps (see the toy example after this list). One very exciting way to preserve multi-modal actions is Diffusion Policies, developed by Shuran Song, now at Stanford.
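
    As a toy illustration of the averaging failure: suppose an object can be grasped at either 0 or 180 degrees, and both appear as labels in the training data. A model fit with a mean-squared-error objective converges toward the mean of its labels, roughly 90 degrees, which is a valid grasp for neither mode, while a diffusion policy learns to sample from the label distribution and commits to one mode per draw. The numbers below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        # Bimodal grasp-angle labels for the same object pose:
        # half the demonstrations grasp at ~0 degrees, half at ~180.
        labels = np.concatenate([rng.normal(0.0, 2.0, 500),
                                 rng.normal(180.0, 2.0, 500)])

        # The MSE-optimal constant prediction is the mean of the labels,
        # which lands between the two modes and is a poor grasp for both.
        print(f"MSE-optimal prediction: {labels.mean():.1f} degrees")  # ~90.0

        # A policy that samples from the distribution (as a diffusion policy
        # learns to do) instead commits to one valid mode per draw.
        samples = rng.choice(labels, size=5)
        print("Sampled grasps:", np.round(samples, 1))  # each near 0 or 180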

    Deepu Talla, NVIDIA: We’re already seeing productivity improvements with generative AI across industries. Clearly, GenAI’s impact will be transformative across robotics from simulation to design and more.

    • Simulation: Models will be able to accelerate simulation development, bridging the gaps between 3D technical artists and developers, by building scenes, constructing environments and generating assets. These GenAI assets will see increased use for synthetic data generation, robot skills training and software testing.
    • Multimodal AI: Transformer-based models will improve the ability of robots to better understand the world around them, allowing them to work in more environments and complete complex tasks.
    • Robot (re)programming: Greater ability to define tasks and functions in simple language to make robots more general/multipurpose (see the sketch after this list).
    • Design: Novel mechanical designs for better efficiency — for example, end effectors.
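
    To give a sense of what language-driven (re)programming can look like, here is a hedged sketch in Python that maps a plain-English request onto a fixed library of skill primitives. The skill names, the request and the hard-coded plan are hypothetical stand-ins for a real language-model call; none of this is an actual NVIDIA or vendor API.

        from typing import Callable

        # Hypothetical skill library exposed to a language model.
        SKILLS: dict[str, Callable[..., None]] = {
            "move":  lambda loc: print(f"moving to {loc}"),
            "pick":  lambda obj: print(f"picking up {obj}"),
            "place": lambda obj, loc: print(f"placing {obj} on {loc}"),
        }

        def plan_from_language(request: str) -> list[tuple[str, tuple]]:
            """Stand-in for an LLM call: a real system would prompt a language
            model with the skill library and the request, then parse its
            output. Here the returned plan is hard-coded for illustration."""
            assert request == "put the red block on the shelf"
            return [("move", ("table",)),
                    ("pick", ("red block",)),
                    ("place", ("red block", "shelf"))]

        for name, args in plan_from_language("put the red block on the shelf"):
            SKILLS[name](*args)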

    What are your thoughts on the humanoid form factor?

    3D illustration of a humanoid robot reading a book, in a concept of future artificial intelligence and the fourth industrial revolution.

    Image Credits: NanoStockk / Getty Images

    Ken Goldberg, UC Berkeley: I’ve always been skeptical about humanoids and legged robots, as they can be overly sensational and inefficient, but I’m reconsidering after seeing the latest humanoids and quadrupeds from Boston Dynamics, Agility and Unitree. Tesla has the engineering skills to develop low-cost motors and gearing systems at scale. Legged robots have many advantages over wheels in homes and factories to traverse steps, debris and rugs. Bimanual (two-armed) robots are essential for many tasks, but I still believe that simple grippers will continue to be more reliable and cost-effective than five-fingered robot hands.

    Deepu Talla, NVIDIA: Designing autonomous robots is hard. Humanoids are even harder. Unlike most AMRs (autonomous mobile robots), which mainly understand floor-level obstacles, humanoids are mobile manipulators that will need multimodal AI to understand more of the environment around them. An incredible amount of sensor processing, advanced control and skills execution is required.

    Breakthroughs in generative AI capabilities to build foundational models are making the robot skills needed for humanoids more generalizable. In parallel, we’re seeing advances in simulations that can train the AI-based control systems as well as the perception systems.

    Matthew Johnson-Roberson, CMU: The humanoid form factor is a really complex engineering and design challenge. The desire to mimic human movement and interaction sets a high bar for actuators and control systems, and it presents unique challenges in balance and coordination. Despite these challenges, the humanoid form has the potential to be extremely versatile and intuitively usable in a variety of social and practical contexts, because it mirrors the natural human interface and interaction. But we will probably see other platforms succeed before these.

    Max Bajracharya, TRI: Places where robots might assist people tend to be designed for people, so these robots will likely need to fit and work in those environments. However, that does not mean they need to take a humanoid (two arms, five-fingered hands, two legs and a head) form factor; they simply need to be compact, safe and capable of human-like tasks.

    Dhruv Batra, Meta: I’m bullish on it. Fundamentally, human environments are designed for the humanoid form factor. If we really want general-purpose robots operating in environments designed for humans, the form factor will have to be at least somewhat humanoid (the robot will likely have more sensors than humans and may have more appendages, as well).

    Aaron Saunders, Boston Dynamics: Humanoids aren’t necessarily the best form factor for all tasks. Take Stretch, for example — we originally generated interest in a box-moving robot from a video we shared of Atlas moving boxes. Just because humans can move boxes doesn’t mean we’re the best form factor to complete that task, and we ultimately designed a custom robot in Stretch that can move boxes more efficiently and effectively than a human. With that said, we see great potential in the long-term pursuit of general-purpose robotics, and the humanoid form factor is the most obvious match to a world built around our form. We have always been excited about the potential of humanoids and are working hard to close the technology gap.

    Following manufacturing and warehouses, what is the next major category for robotics?

    Overview of a large industrial distribution warehouse storing products in cardboard boxes on conveyor belts and racks.

    Image Credits: Getty Images

    Max Bajracharya, TRI: I see a lot of potential and needs in agriculture, but the outdoor and unstructured nature of many of the tasks is very challenging. Toyota Ventures has invested in a couple of companies like Burro and Agtonomy, which are making good progress in bringing autonomy to some initial agricultural applications.

    Matthew Johnson-Roberson, CMU: Beyond manufacturing and warehousing, the agricultural sector presents a huge opportunity for robotics to tackle challenges of labor shortage, efficiency and sustainability. Transportation and last-mile delivery are other arenas where robotics can drive efficiency, reduce costs and improve service levels. These domains will likely see accelerated adoption of robotic solutions as the technologies mature and as regulatory frameworks evolve to support wider deployment.

    Aaron Saunders, Boston Dynamics: Those two industries still stand out when you look at matching up customer needs with the state of the art in technology. As we fan out, I think we will move slowly from deterministic environments to those with higher levels of uncertainty. Once we see broad adoption in automation-friendly industries like manufacturing and logistics, the next wave probably happens in areas like construction and healthcare. Sectors like these are compelling opportunities because they have large workforces and high demand for skilled labor, but the supply is not meeting the need. Combine that with work environments that sit between the highly structured industrial setting and the totally unstructured consumer market, and they could represent a natural next step along the path to general purpose.

    Deepu Talla, NVIDIA: Markets where businesses are feeling the effects of labor shortages and demographic shifts will continue to align with corresponding robotics opportunities. This spans robotics companies working across diverse industries, from agriculture to last-mile delivery to retail and more.

    A key challenge in building autonomous robots for different categories is to build the 3D virtual worlds required to simulate and test the stacks. Again, generative AI will help by allowing developers to more quickly build realistic simulation environments. The integration of AI into robotics will allow increased automation in more active and less “robot-friendly” environments.

    Ken Goldberg, UC Berkeley: After the recent union wage settlements, I think we’ll see many more robots in manufacturing and warehouses than we have today. Recent progress in self-driving taxis has been impressive, especially in San Francisco, where driving conditions are more complex than in Phoenix. But I’m not convinced that they can be cost-effective. For robot-assisted surgery, researchers are exploring “Augmented Dexterity,” where robots can enhance surgical skills by performing low-level subtasks such as suturing.

    How far out are true general-purpose robots?

    Illustration of a robot arm pointing at a stock chart.

    Image Credits: Yuichiro Chino / Getty Images

    Dhruv Batra, Meta: Thirty years. So effectively outside the window where any meaningful forecasting is possible. In fact, I believe we should be deeply skeptical and suspicious of people making “AGI is around the corner” claims.

    Deepu Talla, NVIDIA: We continue to see robots becoming more intelligent and capable of performing multiple tasks in a given environment. We expect to see continued focus on mission-specific problems while making them more generalizable. True general-purpose embodied autonomy is further out.

    Matthew Johnson-Roberson, CMU: The advent of true general-purpose robots, capable of performing a wide range of tasks across different environments, may still be a distant reality. It requires breakthroughs in multiple fields, including AI, machine learning, materials science and control systems. The journey toward achieving such versatility is a step-by-step process where robots will gradually evolve from being task-specific to being more multi-functional and eventually general purpose.

    Russ Tedrake, TRI: I am optimistic that the field can make steady progress from the relatively niche robots we have today towards more general-purpose robots. It’s not clear how long it will take, but flexible automation, high-mix manufacturing, agricultural robots, point-of-service robots and likely new industries we haven’t imagined yet will benefit from increasing levels of autonomy and more and more general capabilities.

    Ken Goldberg, UC Berkeley: I don’t expect to see true AGI and general-purpose robots in the near future. Not a single roboticist I know worries about robots stealing jobs or becoming our overlords.

    Aaron Saunders, Boston Dynamics: There are many hard problems standing between today and truly general-purpose robots. Purpose-built robots have become a commodity in the industrial automation world, but we are just now seeing the emergence of multi-purpose robots. To be truly general purpose, robots will need to navigate unstructured environments and tackle problems they have not encountered. They will need to do this in a way that builds trust and delights the user. And they will have to deliver this value at a competitive price point. The good news is that we are seeing an exciting increase in critical mass and interest in the field. Our children are exposed to robotics early, and recent graduates are helping us drive a massive acceleration of technology. Today’s challenge of delivering value to industrial customers is paving the way toward tomorrow’s consumer opportunity and the general-purpose future we all dream of.

    Will home robots (beyond vacuums) take off in the next decade?

    LEGO Home Alone

    Image Credits: Lego

    Deepu Talla, NVIDIA: We’ll have useful personal assistants, lawn mowers and elder-care robots in common use.

    The trade-off that has hindered home robots to date is between how much someone is willing to pay for a robot and the value the robot delivers. Robot vacuums have long delivered value at their price point, hence their popularity.

    Also, as robots become smarter, intuitive user interfaces will be key to increased adoption. Robots that can map their own environment and receive instructions via speech will be easier for home consumers to use than robots that require programming.

    The next category to take off will likely be focused outdoors first — for example, autonomous lawn care. Other home robots, like personal/healthcare assistants, show promise but need to address some of the challenges of dynamic, unstructured indoor home environments.

    Max Bajracharya, TRI: Homes remain a difficult challenge for robots because they are so diverse and unstructured, and consumers are price-sensitive. The future is difficult to predict, but the field of robotics is advancing very quickly.

    Aaron Saunders, Boston Dynamics: We may see additional robots introduced into the home in the next decade, but for very limited and specific tasks; as with the Roomba, we will find other clear value cases in our daily lives. We’re still more than a decade away from multifunctional in-home robots that deliver value to the broad consumer market. When would you pay as much for a robot as you would for a car? When it achieves the same level of dependability and value you have come to take for granted in the amazing machines that transport us around the world.

    Ken Goldberg, UC Berkeley: I predict that within the next decade we will have affordable home robots that can declutter — pick up things like clothes, toys and trash from the floor and place them into appropriate bins. Like today’s vacuum cleaners, these robots will occasionally make mistakes, but the benefits for parents and senior citizens will outweigh the risks.

    Dhruv Batra, Meta: No, I don’t believe the core technology is ready.

    What important robotics story/trend isn’t getting enough coverage?

    Illustration of a robot holding a wrench and repairing a circuit on a laptop screen.

    Image Credits: Yurii Karvatskyi / Getty Images

    Aaron Saunders, Boston Dynamics: There is a lot of enthusiasm around AI and its potential to change all industries, including robotics. Although it has a clear role and may unlock domains that have been relatively static for decades, there is a lot more to a good robotic product than 1s and 0s. For AI to achieve the physical embodiment needed to interact with the world around us, we need to track progress in key technologies like computers, perception sensors, power sources and all the other bits that make up a full robotic system. The recent pivot in automotive toward electrification and Advanced Driver Assistance Systems (ADAS) is quickly transforming a massive supply chain. Progress in graphics cards, computers and increasingly sophisticated AI-enabled consumer electronics continues to drive value into adjacent supply chains. This massive snowball of technology, rarely in the spotlight, is one of the most exciting trends in robotics because it enables small, innovative companies to stand on the backs of giants to create new and exciting products.
