
Responsible AI Takes Two: Developers and Deployers Must Partner

As artificial intelligence (AI) continues to revolutionize industries, it is crucial to ensure that its development and deployment positively impact society. That responsibility is shared between the developers of AI technology and the organizations that deploy it in their operations. Responsible AI is a collective effort in which every actor in the AI value chain plays a vital role in creating the future we envision.

Best Practices for Developing Responsible AI

1. Consistent AI Risk Evaluation: AI technology developers, like Workday, should perform consistent AI risk evaluations for every new AI product. This involves analyzing the risk level of new use cases based on their context, technical design, potential impacts on individuals’ economic opportunities, and surveillance concerns (see the illustrative sketch after this list). High-risk use cases necessitate additional guidelines and safeguards, whereas disallowed use cases, such as intrusive monitoring and biometric surveillance, should be avoided altogether.

2. Adherence to Responsible AI Guidelines: For permissible use cases, developers should adhere to responsible AI guidelines that cover transparency, fairness, explainability, human-in-the-loop oversight, data quality, and robustness. This ensures the development of trustworthy AI technologies while mitigating unintended consequences, such as bias. Mature developers should maintain dedicated teams focused on responsible AI governance.

3. Transparency to Customers: Providing transparency is essential. Developers should offer AI fact sheets that explain how AI tools are built, tested, and trained, including their known limitations and risk mitigations. This transparency helps customers understand the AI technologies they are integrating into their operations.
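To make the risk-evaluation step in item 1 more concrete, here is a minimal Python sketch of how a developer might tier new use cases before approval. The tier names, the factors recorded, and the classify_use_case helper are hypothetical illustrations for this post, not Workday’s actual evaluation process.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Hypothetical risk tiers for new AI use cases."""
    DISALLOWED = "disallowed"   # e.g., intrusive monitoring, biometric surveillance
    HIGH = "high"               # affects economic opportunity; needs extra safeguards
    STANDARD = "standard"       # covered by baseline responsible AI guidelines


@dataclass
class UseCase:
    """Context a reviewer records before an AI feature is approved."""
    name: str
    affects_economic_opportunity: bool  # e.g., hiring, pay, or promotion decisions
    involves_surveillance: bool         # monitoring or biometric identification


def classify_use_case(use_case: UseCase) -> RiskTier:
    """Assign a risk tier using simple, illustrative rules."""
    if use_case.involves_surveillance:
        return RiskTier.DISALLOWED
    if use_case.affects_economic_opportunity:
        return RiskTier.HIGH
    return RiskTier.STANDARD


if __name__ == "__main__":
    screening = UseCase("resume screening assistant", True, False)
    print(classify_use_case(screening))  # RiskTier.HIGH -> apply added safeguards
```

In practice the review would weigh many more factors than two boolean flags, but the point stands: the evaluation should be a repeatable, documented step, not an ad hoc judgment.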

Best Practices for Deploying AI Responsibly

1. Understanding Roles and Responsibilities: Deployers of AI systems must recognize their roles and responsibilities in the AI value chain. Regulations like Article 26 of the EU AI Act specify obligations for high-risk AI systems, guiding deployers in managing AI risks. Resources like the Future of Privacy Forum’s “Best Practices for AI and Workplace Assessment Technologies” provide valuable insights.

2. Working with Trustworthy AI Developers: Selecting AI developers who are familiar with existing and evolving regulations is crucial. Trustworthy developers proactively build responsible AI-by-design and risk mitigation frameworks that align with dynamic regulatory environments. Engage developers who understand and align with frameworks like the EU AI Act and the NIST AI Risk Management Framework.

3. Ensuring Responsible Use and Effective Oversight: Deployers must determine whether the AI technology effectively addresses their business challenges. They should conduct fairness testing on their local data (a minimal sketch follows this list) and provide effective human oversight. This involves configuring the AI system to optimize business processes and monitoring its operation within those processes to ensure responsible use.
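As a rough illustration of the local fairness testing mentioned in item 3, the sketch below computes per-group selection rates and the largest gap between them (a demographic parity difference) over a deployer’s own records. The group and outcome field names are placeholders for whatever the local schema uses; the threshold for flagging a gap would be set by the deployer’s own governance process.

```python
from collections import defaultdict


def demographic_parity_gap(records, group_key="group", outcome_key="selected"):
    """Return per-group selection rates and the maximum rate gap across groups.

    `records` is a list of dicts drawn from the deployer's own data;
    the field names here are hypothetical placeholders.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        positives[group] += 1 if row[outcome_key] else 0

    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap


if __name__ == "__main__":
    sample = [
        {"group": "A", "selected": True},
        {"group": "A", "selected": False},
        {"group": "B", "selected": True},
        {"group": "B", "selected": True},
    ]
    rates, gap = demographic_parity_gap(sample)
    print(rates, gap)  # {'A': 0.5, 'B': 1.0} 0.5 -> a large gap warrants review
```

A single metric like this is not a complete fairness audit, but running it on local data, rather than relying only on the developer’s benchmarks, is what gives the deployer evidence that the system behaves acceptably in its own context.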

Moving Forward, Together

At Workday, we understand the importance of developing trustworthy AI systems and respect our role in the larger AI value chain. However, the responsibility does not end with the developers. Only through collaboration between developers and deployers can we ensure that AI technologies are used responsibly to amplify human potential and positively impact society.

For more details on our responsible AI governance program, read our “Responsible AI: Empowering Innovation with Integrity” whitepaper. It outlines the principles, practices, people, and public policy positions that drive our approach to responsible AI.

By committing to these best practices, both AI developers and deployers can create a future where AI technologies enhance human capabilities and contribute to societal well-being.

Gayatri Gupta