Navigating the intersection of Artificial Intelligence (AI) and data privacy poses considerable challenges. While the potential opportunities are vast, it is crucial to acknowledge the potential for data misuse and the associated privacy risks. In the midst of the 4th Industrial Revolution, there is growing scrutiny of AI’s promises and dangers, along with the measures that companies must adopt to fully capitalize on its potential.

For engineers and developers, the concept of “ethics” in technological products may seem abstract. Although some technology companies have initiated efforts to achieve ethical objectives tangibly, it is imperative to break down barriers and share best practices. Collaborating and learning from each other will enable the industry to establish higher standards. To embark on this path, a focus on building trust is paramount.

At Workday, trust is ingrained in our culture. Our customers are aware that data privacy and security are fundamental concerns we have prioritized for years. Our commitment to data protection is deeply intertwined with Workday services, through our privacy principles, GDPR approach, and robust privacy program. As we move towards a future centered around human-machine learning integration, we emphasize incorporating privacy and security into our ethical approach to the design and utilization of machine learning.

While many companies boast high ethical principles in their AI products, these principles are only meaningful if put into practice. Workday has recently unveiled its Commitments to Ethical AI, showcasing how we implement principles that align with our core values of customer service, integrity, and innovation. In light of this experience, companies aspiring to follow these principles should consider the following eight lessons:

  1. Clearly define and agree upon the meaning of “ethical AI.” This definition must be specific and encompass all stakeholders within the company. At Workday, this translates to machine learning systems that align with our commitments to ethical AI, emphasizing prioritizing people, valuing society, complying with the law, ensuring transparency and accountability, safeguarding data, and tailoring machine learning systems to meet business needs.
  2. Integrate ethical AI into product development and launch. This integration should avoid isolating developer and product design teams. Workday adheres to these principles by implementing strict processes throughout product development, incorporating new machine learning oversight procedures into management control, and subjecting products that utilize machine learning to ethical reviews. Privacy protection has long been a core part of our processes, verified through third-party audits, and we continuously review and evolve our practices to meet industry best practices and regulatory guidelines.
  3. Establish cross-functional expert groups to guide decisions related to machine learning and responsible AI. Workday established a Machine Learning Task Force comprising experts from various departments, including Products & Engineering, Legal, Public Policy & Privacy, and Ethics & Compliance. This multidisciplinary approach allows us to comprehensively examine present and future machine learning use in our products, identifying potential issues early in the product lifecycle.
  4. Foster customer collaboration during the design, development, and deployment phases of responsible AI products. Workday actively engages representative customer advisory boards and early adopter programs to seek input on AI and machine learning matters. This collaboration enables us to understand customer ideas and concerns regarding AI and machine learning.
  5. Adopt a “lifecycle approach” to address machine learning biases. Workday is committed to mitigating negative biases in machine learning tools throughout their lifecycle. We implement different stages of control, from product design to after-sales support, to assess and correct any potential biases.
  6. Emphasize transparency in the use of data for machine learning purposes. Companies must clearly explain the data used, its purposes, and the algorithms involved. At Workday, we provide transparency by explaining how our machine learning technologies operate, the data they require, and how they benefit our solutions, ensuring accountability to our customers.
  7. Encourage employees to design responsible products. Workday equips its employees with toolkits and offers training, seminars, and workshops on ethical principles and their application in AI. For instance, workshops on human-centered design help participants understand the ethical design of machine learning technologies.
  8. Share knowledge and learn from others in the sector. Workday actively participates in industry groups and peer-to-peer meetings to collaborate on ethical frameworks for the technology sector. We advocate for ethical AI with elected officials and government bodies and support the development of ethical AI tools through various initiatives.

As the framework for ethical AI matures, it is crucial to share best practices and lessons learned. Workday’s collaboration with the World Economic Forum aims to encourage stakeholders to share their approaches, making the technology sector more responsible and ethical. Building trust and promoting responsible and ethical AI and technology is a collective responsibility that extends beyond individual enterprises.

Through collaborative efforts, we must shape the common good and foster trust to unlock the full potential and benefits of these emerging technologies.
