Why handing over total control to AI agents would be a big mistake

And when such systems can control multiple sources of information simultaneously, the potential for damage explodes. For example, an agent with access to both private communications and public platforms could share personal information on social media. That information might not even be true, but it would fly under the radar of traditional fact-checking mechanisms and could be amplified through further sharing to cause serious reputational harm. We imagine that "It wasn't me, it was my agent!" will soon be a common refrain for excusing bad outcomes.

Keep the human in the loop

Historical precedent demonstrates why maintaining human oversight is critical. In 1980, computer systems falsely indicated that more than 2,000 Soviet missiles were headed toward North America. This error triggered emergency procedures that brought us dangerously close to catastrophe. What averted disaster was human cross-verification across different warning systems. Had decision-making been fully delegated to autonomous systems prioritizing speed over certainty, the outcome could have been catastrophic.

Some will counter that the benefits are worth the risks, but we argue that realizing those benefits does not require surrendering complete human control. Instead, the development of AI agents must go hand in hand with the development of guaranteed human oversight that limits the scope of what AI agents can do.

Open-source agent systems are one way to address these risks, since they allow greater human oversight of what systems can and cannot do. At Hugging Face we are developing smolagents, a framework that provides sandboxed secure environments and lets developers build agents with transparency at their core, so that any independent group can verify whether appropriate human control is in place.
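To illustrate how that kind of scoping can look in practice, here is a minimal sketch using the public smolagents API. The particular model wrapper, tool list, and restricted-imports setting are our own illustrative assumptions (API names may vary between library versions), not the authors' recommended configuration.

```python
# Minimal sketch: an agent whose capabilities are explicitly scoped by the developer.
# Assumes smolagents is installed (pip install smolagents) and a Hugging Face token is configured.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],              # only the tools we explicitly grant
    model=HfApiModel(),                          # hosted model via the Hugging Face Inference API
    additional_authorized_imports=["datetime"],  # generated code may import only what we allow
)

# The agent proposes and runs code inside a restricted executor; anything outside
# the authorized tools and imports is blocked, keeping a human-defined boundary in place.
result = agent.run("Summarize today's top headline about AI agents.")
print(result)
```

Because the boundary is declared in open code rather than hidden in a closed system, an independent reviewer can inspect exactly which tools and imports the agent is permitted to use.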

This approach stands in contrast to the prevailing trend toward ever more complex AI systems that obscure their decision-making processes behind layers of proprietary technology, making it impossible to guarantee safety.

As we navigate the development of increasingly sophisticated agents, we must recognize that the most important feature of any technology is not greater efficiency but the fostering of human well-being.

This means building systems that remain tools rather than decision-makers, assistants rather than replacements. Human judgment, with all its imperfections, remains the essential component in ensuring that these systems serve our interests rather than subvert them.

Margaret Mitchell, Avijit Ghosh, Sasha Luccioni, and Giada Pistilli work at Hugging Face, a global startup in responsible open-source AI.
