
João Freitas is general manager and vice president of AI and automation engineering at PagerDuty.
As AI use matures in large organizations, leaders are increasingly looking for the next development that will deliver a higher return on investment. The latest wave of that trend is the adoption of AI agents. As with any new technology, however, organizations must adopt AI agents responsibly, in a way that delivers both speed and security.
More than half of organizations have already implemented AI agents to some extent, and more expect to do the same in the next two years. But many early adopters are now reevaluating their approach. Four in 10 technology leaders regret not having established a stronger governance foundation from the beginning, suggesting that they adopted AI quickly but left room to improve the policies, rules, and best practices that ensure the responsible, ethical, and legal development and use of AI.
As AI adoption accelerates, organizations must balance their risk exposure against the safeguards needed to keep AI use safe.
Where do AI agents create potential risks?
There are three main areas to consider for safer AI adoption.
The first is shadow AI: employees using unauthorized AI tools without express permission, bypassing approved tools and processes. Shadow AI has been around for as long as AI tools themselves, but the autonomy of AI agents makes it easier for unauthorized tools to operate outside of IT’s purview, which can introduce new security risks. IT should therefore create sanctioned processes for experimentation and innovation, so employees have an approved path to more efficient ways of working with AI.
Second, organizations must close gaps in AI ownership and liability to prepare for incidents or processes that go wrong. The strength of AI agents lies in their autonomy. However, if agents act unexpectedly, teams must be able to determine who is responsible for addressing any issues.
The third risk arises when the actions taken by the AI agents cannot be explained. AI agents are goal-oriented, but it may be unclear how they achieve their goals. AI agents must have explainable logic underlying their actions so that engineers can track and, if necessary, reverse actions that could cause problems with existing systems.
None of these risks should delay adoption, but accounting for them up front will help organizations adopt AI agents more securely.
Three guidelines for the responsible adoption of AI agents
Once organizations have identified the risks that AI agents may pose, they should implement guidelines and safeguards to ensure safe use. By following these three steps, organizations can minimize these risks.
1: Make human supervision the default option
Agentic AI continues to evolve at a rapid pace. However, human supervision is still needed when AI agents are given the ability to act, make decisions, and pursue goals that may affect key systems. A human should be in the loop by default, especially for business-critical use cases and systems. Teams using AI need to understand the actions it can take and where they may need to intervene. Start conservatively and increase the level of agency given to AI agents over time.
Together, operations teams, engineers, and security professionals must understand the role they play in overseeing AI agent workflows. Each agent should be assigned a specific human owner for clearly defined oversight and responsibility. Organizations should also allow any human to flag or override the behavior of an AI agent when an action has a negative outcome.
When considering tasks for AI agents, organizations should understand that while traditional automation is good at handling repetitive, rule-based processes with structured data inputs, AI agents can handle much more complex tasks and adapt to new information in a more autonomous way. This makes them an attractive solution for a wide range of tasks. But as AI agents are deployed, organizations must control what actions the agents can take, particularly in the early stages of a project. Teams working with AI agents should therefore have approval paths for high-impact actions, as sketched below, to ensure the agent’s scope does not extend beyond the expected use cases, minimizing risk to the overall system.
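As a concrete illustration, here is a minimal sketch of such an approval path. It is a hypothetical example, not any particular platform’s API: the `ApprovalGate`, `Impact` levels, and owner name are all assumptions. Low-impact actions run autonomously, while high-impact ones pause until a named human owner approves.

```python
from dataclasses import dataclass
from enum import Enum


class Impact(Enum):
    LOW = "low"    # read-only or easily reversible
    HIGH = "high"  # touches business-critical systems


@dataclass
class AgentAction:
    description: str
    impact: Impact


class ApprovalGate:
    """Routes high-impact agent actions to a named human owner."""

    def __init__(self, owner: str):
        self.owner = owner  # every agent has one accountable human

    def authorize(self, action: AgentAction) -> bool:
        if action.impact is Impact.LOW:
            return True  # low-impact actions run autonomously
        # High-impact actions pause until the owner approves.
        answer = input(f"[{self.owner}] Approve '{action.description}'? (y/n): ")
        return answer.strip().lower() == "y"


gate = ApprovalGate(owner="sre-oncall")
action = AgentAction("restart payment-service pods", Impact.HIGH)
if gate.authorize(action):
    print("executing:", action.description)
else:
    print("blocked pending human review:", action.description)
```

Starting with a gate like this and gradually promoting well-understood action types from HIGH to LOW is one practical way to "start conservatively and increase agency over time."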
2: Bake in security
The introduction of new tools should not expose a system to new security risks.
Organizations should consider agent platforms that meet high security standards and are validated by enterprise-grade certifications such as SOC 2, FedRAMP, or equivalent. Additionally, AI agents should not be given free rein in an organization’s systems. At a minimum, an agent’s permissions and security scope should be aligned with its human owner’s scope, and any tools added to the agent should not expand its permissions. Limiting agents’ access to a system based on their role will also help deployments go smoothly. Keeping complete records of every action an AI agent takes helps engineers understand what happened in the event of an incident and trace the problem.
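To make those two mechanisms concrete, the sketch below derives an agent’s effective permissions as the intersection of what it requests and what its human owner already holds (so adding tools can never expand its reach), and writes a structured audit record for every action. The scope names and `OWNER_SCOPE` set are hypothetical, not a specific product’s permission model.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# The owner's existing scope is the ceiling for the agent's permissions.
OWNER_SCOPE = {"tickets:read", "tickets:write", "runbooks:read"}


def scoped_permissions(requested: set[str]) -> set[str]:
    """Grant only permissions the human owner already holds."""
    return requested & OWNER_SCOPE


def record_action(agent: str, action: str, params: dict) -> None:
    """Append a structured, replayable record of every agent action."""
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "params": params,
    }))


perms = scoped_permissions({"tickets:write", "deploys:create"})
assert "deploys:create" not in perms  # denied: the owner lacks it
record_action("triage-agent", "tickets:write", {"id": 1234, "status": "ack"})
```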
3: Make the results explainable
The use of AI in an organization should never be a black box. The reasoning behind any action must be surfaced so that an engineer investigating it can understand the context the agent used to make a decision and can access the traces that led to those actions.
The inputs and outputs of each action must be recorded and accessible. This gives organizations a firm view of the logic underlying an AI agent’s actions, which is invaluable should something go wrong.
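One lightweight way to do this, sketched below under the assumption of a simple in-memory trace store (a real deployment would write to a durable tracing backend), is to wrap each agent step in a decorator that records its inputs and outputs under a shared trace id.

```python
import functools
import json
import uuid
from datetime import datetime, timezone

TRACE_STORE: list[dict] = []  # stand-in for a real trace backend


def traced(step_name: str):
    """Record the inputs and outputs of each agent step under a trace id."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, trace_id: str, **kwargs):
            record = {
                "trace_id": trace_id,
                "step": step_name,
                "ts": datetime.now(timezone.utc).isoformat(),
                "inputs": {"args": args, "kwargs": kwargs},
            }
            result = fn(*args, **kwargs)
            record["output"] = result
            TRACE_STORE.append(record)
            return result
        return wrapper
    return decorator


@traced("summarize_incident")
def summarize_incident(alerts: list[str]) -> str:
    # Placeholder for a model call; the trace captures context either way.
    return f"{len(alerts)} related alerts, probable cause: deploy"


trace_id = str(uuid.uuid4())
summarize_incident(["cpu_high", "latency_p99"], trace_id=trace_id)
print(json.dumps(TRACE_STORE, indent=2))
```

Because every record shares a trace id, an engineer can reconstruct the full chain of steps that led to any action, which is exactly the audit trail needed when something goes wrong.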
Security underpins the success of AI agents
AI agents offer a great opportunity for organizations to accelerate and improve their existing processes. However, if they do not prioritize security and strong governance, they could expose themselves to new risks.
As AI agents become more common, organizations must ensure they have systems in place to measure their performance and the ability to take action when they create problems.