Loading...

Armchair Architects: Are AI Charters Valuable?

Image

In this blog, our host, David Blank-Edelman and our armchair architects Uli Homann and Eric Charran will be delving into the significance of establishing an AI charter for organizations venturing into AI. They will talk about the importance of an AI charter, and what should be included. Spoiler alert: the short answer is “yes” AI charters are valuable. The conversation starting an AI charter was included in a previous blog Armchair Architects: Considerations for Ethical and Responsible Use of AI in Applications. Uli brought up the need for an AI charter in the previous blog and he will provide some insights into why it is needed.

 

Transparency and Ethics

One of the common fears associated with AI are the dystopian scenarios depicted in movies like "Terminator.” People think Skynet, people think Terminator, people think these evil things that will take over the world and enslave humanity scenarios. From Uli’s perspective, since this is the picture that humanity has of AI as a larger society, he thinks it's our obligation to be proactive and transparent to say “no”  we are using AI for good purposes and what good means is XYZ. Uli recommends that companies be upfront, visible, and transparent with their AI charter on how they will or will not use AI.


Once the AI charter has been created, employees working on AI projects can refer to the charter. One example was, an employee or an internal group could build Skynet, well that could be fun, but we won’t do this because I know what our charter is for AI.

 

Real-world Example where an AI Charter Helped
If you are in a business like Microsoft or a technology provider, you will get asked to do things with your AI technology. An example Uli shared, was a couple of years ago, before the large language model world, a request Microsoft received from a country with an authoritarian leadership model, had sought assistance in managing its population through face recognition technology. The sales team was very excited, big opportunity, lots of money, but the delivery team raised some questions, one of them being “what is this going to be used for?” This request was escalated internally, and Microsoft ultimately declined the opportunity, as it did not align with the company's charter that AI should support and supplement human capabilities. Ultimately that was a very clear decision based upon Microsoft’s charter, based upon what we do and don't do with AI.

 

Write it Down

An established and transparent AI charter can help guide your company when faced with questionable or uncomfortable requests on the use of your AI technology. Eric mentioned that if you do not write down your principles you will not know what they are, especially in a morally ambiguous situation. He remembers being involved with those conversations in the example Uli provided and that important scenario largely led to Microsoft codifying a lot of the responsible AI principles that we have now.


It's almost like it's 2015 and 2016 talking about these same things, these same considerations, but they're even more salient it now than they were back then when we were just doing statistical, algorithmic, probabilistic modelling.

 

Weapons of Math Destruction
Uli recommended a good book to read, called “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”. In the book they didn’t talk about LLMs yet, which are even more powerful than what statistics models can do. One of the topics the book dove into was what statistics models can do in terms of how wrong they can be with the best intentions and the impact of “small things” which is covered later in this blog.

 

AI charters are not just about avoiding catastrophic outcomes but also about preventing smaller harms that could negatively impact specific groups of people. It stresses the importance of having clear principles and an escalation path for employees to follow, ensuring that AI is used responsibly and ethically.

 

Small Things

On a less catastrophic scenario like Skynet from Terminator, Uli talked about it on a smaller scale. Let's assume you're applying for a credit card. Most likely an algorithm would evaluate you before it even gets to a human being.

  • How is that algorithm being built?
  • How is that algorithm being trained?
  • What is that algorithm looking at?

And if the algorithm then says “no”, you're not going to get a credit card, that's obviously a very bad decision for you because you were looking forward to that credit line. That is a “small decision” with a very large impact on you as an individual.

 

The book mentioned earlier goes into greater detail about “small things”. There are many decisions that are being made using algorithms and data. And if the data is skewed a certain way, the algorithm will work in certain way.

 

Tips on Creating or Updating your AI Charter

Your company may need to develop an AI charter or on the other hand have one that is outdated as AI technology such as LLMs advance rapidly.

 

Some things to consider are:

  • What is the structure?
  • What does it look like?
  • How do I adjust it?
  • Do I need to make any adjustments?

Eric is taking what his company currently has and then looking at it through the lens of what LLMs give us today. One area which is hard about implementing LLMs, especially in the hosted foundational model space today is, you want to make sure that you have a purpose and specific to LLMs to clearly:

  • Define the intended purpose.
  • What are they going to be used for?
  • What do we absolutely not trust this technology yet to be used for and set those boundaries and revisit them?

In his case there are also things to consider such as data and ethics, emphasizing the importance of high quality bias, free tokenized data with no PII, and making sure that they match regulatory compliance.

 

Even if you're creating your own RAG (retrieval-augmented generation) libraries for posted foundational models. Think of RAG as a librarian of a store of knowledge that an LLM can actually access and look up information as it generates responses. If you're providing that library of expertise to a hosted foundational model, you need to have the same vigilance in terms of high quality, bias free data that you're providing to that model.


And before you productionize it, then the difficult topics like we discussed last time around transparency and explainability; you can control what you feed it, you can control and observe what comes out of it. However, what happens in the middle during that process is anybody's guess. You have to get more purposeful and predictive, so you're not surprised with the output.


Managing what goes in and what comes out and analyzing it and doing prompt engineering and looking at how people are interacting with it judiciously are all going to be important. Then there's elements that bleeds into security and control, Which is robust cyber security measures to protect events against unauthorized access to the LLMs, to defining clear processes for detecting adversarial prompts (if you're exposing props to regular users) and then outlining accountability mechanisms for LLM outputs and actions.

 

These outputs could be an LLM making something up, if it hallucinates and you're not able to detect it, what happens? You don’t want it to make crazy promises on your behalf as an organization, what do you do? How do you deal with that? And then we have human in a loop integration, however there are certain things that are going to be automated and no human in a loop once we have that degree of confidence. What does that look like for LLM usage? How soon can you eliminate human in a loop and trust that the prompts and the outputs are actually going to be accurate and low risk for the organization?

 

To create a foundation for any AI usage in any company:

  • Create an AI Charter.
  • Create an escalation path.
  • Make it a living document: things change all the time and don't just put it up as a trophy on your website. It's something that has to lived and be reviewed constantly.

Learn more about Microsoft AI Principles and Approaches and see how the company is committed to making sure AI systems are developed responsibly and in ways that warrant people’s trust.

 

Watch the episode below or watch more episodes in the Armchair Architects Series.

 

AriyaKhamvongsa_0-1712279649589.jpeg

 

 

 

Learn more
Author image

Azure Architecture Blog articles

Azure Architecture Blog articles

Share post:

Related

Stay up to date with latest Microsoft Dynamics 365 and Power Platform news!

* Yes, I agree to the privacy policy