Agentic AI systems – AI systems capable of pursuing complex goals with limited direct supervision – are likely to be broadly useful if we can responsibly integrate them into our society. Although such systems have considerable potential to help individuals achieve their own goals more effectively and efficiently, they also create risks of harm. In this white paper, we provide a definition of agentic AI systems and of the stakeholders in the agentic AI system lifecycle, and highlight the importance of agreeing on a set of core responsibilities and safety best practices for each of these stakeholders. As our main contribution, we propose an initial set of practices to ensure the safety and accountability of agent operations, which we hope can serve as a basis for the development of agreed baseline best practices. We list questions and uncertainties around the implementation of each of these practices that must be addressed before such practices can be codified. We then highlight categories of indirect impacts related to the large-scale adoption of agentic AI systems, which will likely require additional governance frameworks.
Practices for Governing Agentic AI Systems