Development of Responsible AI practices

In recent work, 10 telco statements of intent around Responsible AI and ethical AI practices were analyzed. The content of these statements proved notably consistent – most likely because government and international bodies have built consensus on the areas that need to be considered.


The category areas, listed from most to least common, are:

1. responsible use of AI and risk management – the telco promises to act responsibly and to manage risks such as safety, bias, and toxicity, using risk management approaches such as assessments and the timely handling of issues as they arise

2. governance, security, and privacy – statements articulated the telco’s intent to comply with regulations, undertake holistic governance, maintain privacy, and ensure the explainability of models

3. human-focussed – ensuring human oversight of models, building models that empower and support humans, and promising to respond to feedback from internal and external parties

4. ethical practices – deploying AI that is trustworthy, respects international human rights, is free from bias, and is non-discriminatory and fair

5. other top-level goals – three sub-categories were distinguished: collaborating with external/industry organizations and regulatory bodies to promote innovation; applying AI principles across the whole organization; and ensuring that any AI used contributes to social well-being within wider society, beyond just benefiting the organization