Ethics of Using AI
#Microsoft, #Google, #OpenAI, and #Anthropic announced today (7/26/23) that they are forming a new industry body – the Frontier Model Forum – to ensure the safe and responsible development of frontier AI models.
This is a significant move by these industry AI leaders, and newcomers are invited to join the effort. Whenever a technology takes off, an industry body forms to align efforts and collaborate with policymakers, academics, organizations, and application developers to create best practices and standards and ensure responsible development.
While the industry body is still forming, leaders who want their organizations to produce responsible AI products should consider the following tips.
- Mandate that AI tools be trained on diverse data sets
- Require that chatbots identify themselves as AI
- Create regular reporting mechanisms on AI practices
- Conduct regular ethics and regulatory audits
- Hire outsiders to audit for accountability
- Identify specific metrics to ensure responsible AI practices
- Have policies and procedures in place to address ethical concerns
- Assess ethical risk and design a mitigation plan
- Seek qualified guidance from the board
- Listen to customers and incorporate affected communities into the design
- Share privacy practices with customers
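To make one of the tips above concrete, the requirement that chatbots identify themselves as AI can be enforced in code by disclosing AI status at the start of every conversation. The sketch below is illustrative only; the `wrap_reply` helper and the disclosure wording are hypothetical, not taken from any particular framework or standard.

```python
# Minimal sketch: prepend an AI disclosure to the first message of a
# conversation so users always know they are talking to an AI.
AI_DISCLOSURE = "Just so you know, I'm an AI assistant."

def wrap_reply(reply: str, first_turn: bool) -> str:
    """Return the chatbot reply, with an AI disclosure on the first turn."""
    if first_turn:
        return f"{AI_DISCLOSURE} {reply}"
    return reply

# First turn carries the disclosure; later turns pass through unchanged.
print(wrap_reply("How can I help you today?", first_turn=True))
print(wrap_reply("Here is that summary.", first_turn=False))
```

A similar pass-through wrapper could log each disclosure, which would also support the regular-reporting and audit tips above.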
These tips are summarized from the course “Ethics in the Age of Generative AI” by Vilas Dhar.