Note: This post has been in the works for a while and was not inspired by recent events at OpenAI. However, seeing all of the hot takes about what those events mean for the future of responsible AI certainly encouraged me to hit publish.
Building a successful responsible AI team requires walking a tightrope while juggling knives. You’re trying to address complex, sociotechnical problems that require cooperation from multiple stakeholders with conflicting incentives, all while combating misconceptions about what it is that you do around here. Every collaboration with other teams feels like an audition, and the worth of the entire endeavor of “responsible AI” feels like it’s constantly under evaluation.
Let’s start with a stark reality: in recent history, responsible AI (RAI) teams have been among the first on the list when layoffs are executed or when strategies change. Many industry observers view teams that focus on questions of “ethics” and “societal benefit” as luxuries, artifacts of the zero-interest-rate era.1 Facing headwinds, they argue, firms need to re-focus on core business problems and execute with efficiency. Candidly, they aren’t wrong! Without a successful business, there won’t be a home for any team, much less a responsible AI team. The implication, then, is that successful RAI teams must position themselves as solvers of core business problems and as efficiency multipliers. This isn’t mere semantics; you can’t just say these things and have them be true. And it’s definitely not advocacy for rolling over and engaging in “ethics-washing” either. RAI teams need to directly address concerns with their firm’s AI systems and lead the way in developing solutions.
Mission alignment: helping the business helps you
A team’s alignment with the company’s mission can greatly ease its path to success. When the objectives of the RAI team resonate with the broader goals of the company, ethical AI practices are correctly perceived as integral to the business’s overall success, and a shared commitment to responsible AI takes hold across the organization. Obviously, if your company is building the Torment Nexus, you may find it difficult to find common ground. But, given that most of us are not, this alignment takes intentional action. Many people become interested in responsible AI as a form of tech criticism and seek out roles that let them practice it as a full-time job. This is, in my view, fundamentally misguided.

RAI teams should want their companies to succeed and should seek out ways to help them achieve that success responsibly. Creating or reinforcing “us vs. them” dynamics won’t end well. Don’t put leadership in the position of having to choose between the business and RAI; there’s only going to be one winner.
In practice, this means identifying the core problems that the business is solving, understanding the potential pitfalls and risks of various approaches to solving those problems, and enabling product teams to deploy those solutions responsibly. Seek the highest potential impact, not problems that only your team finds intellectually interesting. A business that doesn’t do much NLP will have little use for debiased word embeddings, for example.
You’ll also need to develop strong relationships with the legal, privacy, and security teams. They often have the organizational authority to mandate action (or inaction) that a new RAI team will likely lack.2 That said, it’s important to recognize that each of those teams has goals and incentives that differ from an RAI team’s. Often they will overlap; importantly, they sometimes do not. Maintaining your independence increases the likelihood that you’ll be able to voice your unrestrained viewpoint and develop your own organizational identity. More pragmatically, being seen solely as a “compliance team” will likely put a fairly firm ceiling on your team’s growth potential. Being aligned with business metrics (aka “up and to the right” metrics) makes this much less likely.
Building your team: tools not decks, expertise, and diversity
Hiring builders – those who create and implement actual tools – is crucial. These team members will help scale the impact of the team superlinearly with respect to team size. Many RAI teams take a “framework-first” approach, where their outputs are primarily frameworks, processes, and guidelines. While those artifacts are absolutely a part of any holistic RAI practice, many new RAI teams struggle with their adoption and become frustrated with a lack of impact or change. Fundamentally, most engineering teams rely on tools and infrastructure to encode decision-making and best practices. You can’t just put PowerPoint decks out into the world and expect that teams will scramble to adopt your amazing 37-point checklist.
The same is true for RAI development and deployment. Anything that can be converted from a manual, voluntary checklist of best practices to a set of automated tests and outputs will be much more successful. This is not to say that RAI can be reduced purely to a set of automated tests, but you can have a lot of impact with a small team this way. Importantly, this also demonstrates technical competence and subject matter expertise while reassuring the engineering teams whose partnership you’ll rely on in the future that you’re not just going to add more tasks to remember to their already cognitively complex workflows.
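As a concrete illustration, here is a minimal sketch of what converting one such best practice – “check model outputs for disparities across user groups” – into an automated test might look like. The function, the test name, and the 0.1 threshold are all hypothetical choices for this example, not a standard:

```python
# Hypothetical sketch: turning a manual "check for group disparities"
# best practice into an automated release test. The threshold below is
# illustrative; a real one would be set with domain and legal input.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + (1 if pred else 0))
    positive_rates = [p / t for t, p in rates.values()]
    return max(positive_rates) - min(positive_rates)

def test_release_candidate_fairness():
    # In practice these would come from a held-out evaluation set.
    predictions = [1, 0, 1, 0, 1, 0, 0, 1]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_gap(predictions, groups)
    assert gap <= 0.1, f"demographic parity gap {gap:.2f} exceeds threshold"

# A test runner like pytest would discover this automatically;
# called directly here for illustration.
test_release_candidate_fairness()
```

A check like this runs on every release candidate in CI with no meetings, no checklists, and no reliance on anyone remembering to do it.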
Of course, AI expertise is non-negotiable for successful RAI teams. Team members must not only be knowledgeable about the latest AI architectures and methods, but also understand the broader context in which these technologies operate, including social, ethical, and legal implications. This deep understanding is critical to foresee potential issues and to develop AI solutions that are both innovative and responsible. I’ve seen teams index far too heavily on non-technical dimensions, only to find themselves unable to engage credibly with the engineering teams whose partnership they need.
A diverse team, rich in different perspectives, is essential. As I argued above, successful RAI teams have to move beyond only having tech critics; they need builders, visionaries, and pragmatists. Teams should consist of members who can constructively challenge ideas, propose innovative solutions, and foresee potential impacts of AI applications. A team that is diverse in skills, backgrounds, and viewpoints is more likely to identify and address the multifaceted challenges AI presents.
Live in the empirical world of risk, not the theoretical one
It is critical for RAI teams to avoid becoming the ‘Department of No’. While it’s their role to identify potential risks and ethical considerations, they should also be enablers of responsible innovation. This involves working proactively with other departments to find solutions that align with ethical standards while still allowing for creative and effective use of AI.
You can’t simply exist as a team of critics that offers objections to every new feature on the grounds that it could theoretically go poorly. Always describe risks empirically — quantitatively if possible — using rough expected-value-style reasoning. Describe the nature of the risk posed, how likely it is to affect users, and the cost if it does. Is it reputational cost due to an embarrassing model error? Regulatory fines due to compliance risk? A violation of the core values of your firm? Be specific, be truthful, and be pragmatic.
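That expected-value framing can be sketched in a few lines. The risk scenarios, probabilities, and dollar figures below are invented purely for demonstration:

```python
# Illustrative only: the scenarios, probabilities, and impacts here are
# made-up numbers, not estimates from any real system.

def expected_cost(probability: float, impact: float) -> float:
    """Rough expected cost of a risk: likelihood times estimated impact."""
    return probability * impact

risks = {
    "embarrassing model error (reputational)": expected_cost(0.20, 50_000),
    "compliance gap (regulatory fine)": expected_cost(0.01, 2_000_000),
    "core-values violation (hard to price; flag for discussion)": None,
}

for name, cost in risks.items():
    label = f"${cost:,.0f}" if cost is not None else "unquantified"
    print(f"{name}: expected cost {label}")
```

Even when precise numbers are unavailable, forcing the comparison into this shape makes trade-offs concrete and surfaces which risks resist quantification and deserve a principled discussion instead.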
Every action a firm can take carries risk — but so does inaction. Recognize when the potential benefits may outweigh potential costs and communicate this. I’m not saying you should be strictly utilitarian and that any action is simply a matter of tallying up pros and cons, but I am saying that unprincipled obstructionism can end up costing a lot more in the long run. Importantly, being clear-eyed and pragmatic about managing risk also creates reputational and organizational capital that will allow you to make more principled stands when they truly matter. (In retrospect, this entire section could just be called “don’t let the perfect be the enemy of the good.”)
I have no inside information about how any of these teams operated or why they were part of layoffs or reorgs, and I am in no way implying that these teams would have been retained had they followed these guidelines. I am merely speaking from my own experience. ↩
As an aside, one contrarian opinion that I have is that most RAI teams should not seek to have “blocking” power similar to that of legal teams. Not only are RAI questions less cut-and-dried than legal questions, but being a blocking team also comes with an organizational identity that is likely to limit your influence in the long run. Instead, prefer influence and partnership, calling in regulatory teams to block releases when necessary. ↩