As AI becomes more ingrained in our everyday work and personal lives, we are left to wonder how we can create a more ethical and equitable future with technology. Today, tech adoption is raising a slew of ethical dilemmas and questions. Is there a way to use facial recognition software fairly? Are hiring algorithms helping evaluate applications, or are they unintentionally favoring specific ethnic groups? Which geographic datasets are more susceptible to implicit bias?
It has become the responsibility of companies and their employees to think critically about the ethics behind the technology. Every organization has the opportunity to benefit from AI’s potential to transform business processes and the workforce. But beyond its technological capabilities, we are responsible for ensuring that AI is human-centered, socially beneficial, fair, safe, and inclusive for all.
How Bias Impacts Society
When harmful, and often unintentional, bias creeps into AI models, there are real consequences for individuals.
According to a study by the National Institute of Standards and Technology (NIST), most facial recognition algorithms exhibit demographic bias. The study found that facial recognition technologies falsely matched Black and Asian faces 10 to 100 times more often than white faces. Consider the implications if such a system were used as an identification tool at airports or checkpoints, where a false match could lead to intense encounters, interrogations, or traumatic experiences with authorities.
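One concrete way to engage with findings like NIST's is to measure how a model's error rates break down by demographic group. The Python sketch below is a minimal, hypothetical audit, not how NIST or any vendor evaluates systems: the data is synthetic, and the group labels, score distributions, and decision threshold are all assumptions chosen purely for illustration. It computes the false match rate (the share of impostor comparisons wrongly accepted) per group, then the disparity between the best- and worst-served groups.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Hypothetical impostor-pair similarity scores for three demographic
# groups. The group-level mean shift is synthetic, purely to show what
# a biased system could look like in an audit.
means = {"A": 0.28, "B": 0.30, "C": 0.36}
df = pd.concat(
    [
        pd.DataFrame({
            "group": g,
            "score": rng.normal(loc=mu, scale=0.10, size=5_000),
        })
        for g, mu in means.items()
    ],
    ignore_index=True,
)

THRESHOLD = 0.55  # hypothetical operating point for declaring a match

# Every row compares two *different* people, so any score at or above
# the threshold is a false match.
fmr = (
    df.assign(false_match=df["score"] >= THRESHOLD)
      .groupby("group")["false_match"]
      .mean()
)
print(fmr)

# Disparity ratio: how much more often the worst-served group is
# falsely matched compared with the best-served group.
print(f"FMR disparity: {fmr.max() / fmr.min():.1f}x")
```

Even a toy audit like this makes the abstract "10 to 100 times more often" concrete: bias shows up as a measurable gap in error rates, which is exactly the kind of metric an ethics committee can track over time.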
We can mitigate these ethical risks by building proper education around AI ethics and technology’s impact on businesses and society. Because AI is still relatively new, there are no established guidelines for properly regulating its use. These subconscious forms of bias operate in a ‘Wild West’ environment, with racial, gender, and other biases creeping into AI systems unchecked by laws or structure. Companies such as Sony Group and IBM have been strong advocates for developing AI ethics guidelines and boards for the betterment of their organizations and society, taking the steps necessary to establish initiatives that mitigate the risks of these technological advancements.
Considerations When Developing an AI Ethics Committee
How can we build and implement software and machine learning models that invent new ideas and approaches to benefit society without reinforcing bias or inequity? This question is the foundation that drives an AI ethics committee forward.
From my hands-on experience working with machine learning and AI models, and as one of the leaders building an AI ethics committee at my company, here are some key steps for creating an ethics committee of your own.
1. Build with Intention, but Don’t Boil the Ocean
It can be easy to lose sight of the scope and primary function of an AI ethics committee. If goals, objectives, and timelines are not established early on, the committee can drift far from your original intention. Sit down with your committee and map out what you want to achieve. Research what is happening in the field and find examples of situations that could have been prevented. When starting your committee, intentionality is critical to avoid taking on too much or defining too broad a scope for what you hope to achieve.
2. Welcome Diverse Perspectives
Don’t close the door on team members who lack a deep technical background. Casting a wider net across departments such as legal and creative, and including engineers at every level from the C-suite down, can broaden the perspectives shared and encourage outside-the-box thinking. This will help represent more walks of life, open the dialogue, and increase the likelihood of uncovering areas where ethical dilemmas may arise.
3. Take a Proactive Approach
Once your committee is established, it will require continuous input and thought. Be prepared to advocate for its continued growth and involvement across your organization’s efforts.
4. Define Clear Objectives
When establishing an AI ethics committee, consider the objectives you hope to achieve and how they relate to various stakeholders. Specifically, how can your committee educate employees and customers? How can you engage your community and government regulators? And how can you leverage the committee’s efforts to support the product roadmap? Taking all of these questions into consideration will set you apart from other organizations developing ethical frameworks by providing more structure to your output.
AI May Be Unregulated, but Not for Long
AI’s pace of innovation has created an exciting foundation for its future. With this growth and attention, however, comes scrutiny. Global attention to AI regulation is increasing, and though these efforts are still in their infancy, it won’t be long before they become widespread. Government organizations are taking the necessary steps toward a more regulated AI future: just recently, the White House issued a Blueprint for an AI Bill of Rights, but this is only the beginning of a long journey. Organizations can expect that creating, monitoring, and auditing for more transparent technology will become the status quo, and those who prioritize ethical technology creation and adoption will come out on top.
When creating your AI ethics committee, you may hit some bumps along the way, as with any new endeavor. Stay true to the primary goals you set out to achieve and ensure your company’s technological journey is undertaken the right way: fairly and ethically.
At Hyperscience, we believe in an ethical approach to AI and have created core principles to guide us forward. Our goal is to develop technology grounded in how work flows through an organization, technology that makes humans an integral part of the long-term solution to growing data processing needs, needs that are currently beyond human capacity.