Robotics

AI is making robots smarter. They’ll need boundaries


The most advanced general-purpose robots will be shaped like humans because they fit well into a world built for people. Several companies, including Boston Dynamics and Agility Robotics, have designed humanoid robots, and some are being tested in warehouses. Clearly, the plan is to mass-produce these bipedal machines, and that's why Hyundai Motor Co. bought Boston Dynamics in a deal that closed in 2021.

For those worried about intelligent humanoid robots taking everybody's jobs, there's no reason to panic. The unmatched dexterity of human hands will keep people useful at least through this century, and these early humanoid robots have safety kinks and limited battery life, and they remain slow and clunky. That shouldn't stop the debate on how to handle an artificially intelligent machine that can move. The military has shown keen interest in all kinds of ground, sea and air robots. It would be fine to have a ground robot carry supplies or help evacuate wounded soldiers, but should it be allowed to be lethal as well? Is it even possible to ban such machines from killing humans?

It may be too late for that debate. Fully autonomous aerial drones are already killing people in Ukraine. An attempt by a small group of activists and diplomats to ban killer robots failed because the machines are too useful, Peter Singer, a senior fellow at New America, said in an April 14 article.

The US must build these humanoid (and canine) robots at home, with a supply chain rooted in the Americas. After inventing the industrial robot, the US failed to mass-produce it; Japan now dominates the industry with about 46% of industrial robot output, and Germany has 6%. The International Federation of Robotics doesn't break out China's production level, but it's large (probably more than 100,000 units a year) and growing rapidly. If China fields an army of robots, the US had better be able to match it.

On the civilian side, it makes sense to require that a human always be responsible for the actions of a robot, or of several robots. If there's a glitch in the code or something goes awry, the person in charge should be able to recognize the problem immediately and take corrective action, including disabling the machine. That's straightforward in a factory or warehouse. It becomes more complicated as service robots mature and begin interacting with the public. Who's in charge? Should there be a kill switch? Can just anyone pull it?

These questions should be addressed now because robots are dangerous if not handled properly. A Bell Inc. employee was killed by a robot in 2022 while lubricating the rollers on a conveyor belt that formed part of an automated palletizer. The motion of the can of WD-40 over the conveyor triggered the system's camera and activated a robotic arm, which crushed the 54-year-old worker, according to a report from the Occupational Safety and Health Administration.

Although it may take decades for robots to become sophisticated enough to routinely interact with people, now is the time to lay the groundwork for so-called embodied AI to work safely around humans. It’s a delicate dance because the rules shouldn’t stifle innovation.

No doubt, the marriage of robots and AI will benefit mankind. Robots are tools, and they will take on even more of the dull, dirty and dangerous jobs now performed by humans, freeing people from repetitive and physically strenuous work. But machines can go haywire, and the risks compound as they become more enmeshed in daily life. There must be rules in place to make sure humans always have the last say.


