As more companies adopt generative artificial intelligence models, AI ethics is becoming increasingly important. Ethical guidelines to ensure the transparent, fair, and safe use of AI are evolving across industries, albeit slowly when compared to the fast-moving technology.
But thorny questions about equity and ethics may force companies to tap the brakes on development if they want to maintain consumer trust and buy-in.
KPMG's 2024 Generative AI Consumer Trust Survey found that about half of consumers think there is not sufficient regulation of generative AI right now. The lack of oversight tracks with limited trust that institutions—particularly tech companies and the federal government—will ethically develop and implement AI, according to KPMG.
Within the tech industry, ethical initiatives have been set back by a lack of resources and leadership support, according to a paper presented at the 2023 ACM Conference on Fairness, Accountability, and Transparency. Layoffs at major corporations, including Amazon's streaming platform Twitch, Microsoft, Google, and X, hit employees focused on ethical AI especially hard, leaving a vacuum.
While nearly 3 in 4 consumers say they trust organizations using GenAI in daily operations, confidence in AI varies across industries and functions. Just over half of consumers trust AI to deliver educational resources and personalized recommendations, compared with less than a third who trust it for investment advice or self-driving cars. Consumers are open to AI-driven restaurant recommendations, but not, it seems, with their money or their lives.
Clear concerns persist around the broader use of a technology that has elevated scams and deepfakes to a new level. The KPMG survey found that the biggest consumer concerns are the spread of misinformation, fake news, and biased content, as well as the proliferation of more sophisticated phishing scams and cybersecurity breaches. As AI grows more sophisticated, these concerns are likely to intensify as more people are affected, making ethical frameworks for approaching AI all the more essential.
That puts the onus to set ethical guardrails on companies and lawmakers. In May 2024, Colorado became the first state to enact comprehensive AI legislation, with provisions for consumer protection and accountability for companies and developers deploying AI systems used in education, financial services, and other critical, high-risk industries.
As other states weigh similar legislation for consumer and employee protections, companies possess the in-the-weeds insight needed to address the high-risk situations specific to their businesses. While consumers have set a high bar for companies' responsible use of AI, the KPMG report also found that organizations can take concrete steps to earn and maintain public trust: education, clear communication, and human oversight to catch errors, biases, or ethical concerns.
The tension between proceeding cautiously to address ethical concerns and moving full speed ahead to capitalize on the competitive advantages of AI will continue to play out in the coming years.
Drata analyzed current events to identify five ways companies are ethically incorporating artificial intelligence in the workplace.
Media Photos // Shutterstock
Actively supporting a culture of ethical decision-making
AI initiatives in the financial services industry can speed up innovation, but companies must take care to protect the financial system and customer information from criminals. To that end, JPMorgan Chase has a 200-person AI research group that includes a dedicated ethics team working on the company's AI initiatives. The company ranks first overall on the Evident AI Index, which assesses banks' AI readiness, and also holds the top spot for transparency in the responsible use of AI.
Gorodenkoff // Shutterstock
Development of risk assessment frameworks
The National Institute of Standards and Technology has developed an AI Risk Management Framework that helps companies better plan and grow their AI initiatives. The approach supports organizations in identifying the risks posed by AI, defining and measuring ethical activity, and implementing AI systems with fairness, reliability, and transparency. The Vatican is even getting in on the action: it collaborated with the Markkula Center for Applied Ethics at Santa Clara University, a Jesuit university in Silicon Valley, to recommend concrete steps companies can take to navigate AI technologies ethically.
Matej Kastelic // Shutterstock
Specialized training in responsible AI usage
Amazon Web Services has developed many tools and guides to help its employees think and act ethically as they develop AI applications. Responsible AI, a YouTube series produced by AWS Machine Learning University, serves as an introduction to fairness criteria and methods for mitigating bias. Amazon's SageMaker Clarify tool helps developers detect bias in training data and in AI model predictions.
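For teams curious what such a bias check looks like in practice, here is a minimal sketch of a pre-training bias audit using the Clarify module of the open-source sagemaker Python SDK. The IAM role, S3 paths, dataset columns, and metric choices are illustrative placeholders, not AWS recommendations; Clarify can also audit a trained model's predictions in a separate post-training job.

from sagemaker import Session, clarify

# Illustrative placeholders: a real run needs a valid IAM role,
# S3 locations, and a CSV dataset with these columns.
session = Session()
processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::111122223333:role/ClarifyExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://example-bucket/loans/train.csv",
    s3_output_path="s3://example-bucket/loans/bias-report",
    label="approved",  # outcome column in the dataset
    headers=["approved", "gender", "income", "age"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # value treated as the favorable outcome
    facet_name="gender",            # sensitive attribute to audit
)

# Computes pre-training metrics such as class imbalance (CI) and
# difference in positive proportions of labels (DPL), then writes
# a bias report to the S3 output path.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],
)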
Rawpixel.com // Shutterstock
Communication of AI mission and values
Companies that develop a mission statement around their AI practices clearly communicate their values and priorities to employees, customers, and other stakeholders. Examples include Dell Technologies' Principles for Ethical Artificial Intelligence and IBM's AI ethics principles, which clarify each company's approach to AI application development and implementation, publicly setting guiding principles such as "respecting cultural norms, furthering social equality, and ensuring environmental sustainability."
Ground Picture // Shutterstock
Implementing an AI ethics board
Companies can create AI ethics advisory boards to help them find and fix the ethical risks around AI tools, particularly systems that produce biased output because they were trained on biased or discriminatory data. SAP has had an AI Ethics Advisory Panel since 2018; it works on current ethical issues and looks ahead to identify potential future problems and solutions. Northeastern University has established an independent AI ethics advisory board to serve companies that prefer not to create their own.
Story editing by Jeff Inglis. Additional editing by Alizah Salario. Copy editing by Paris Close. Photo selection by Clarese Moller.