
Armchair Architects: What Is Responsible AI?


In this blog, our host, David Blank-Edelman, and our armchair architects, Uli Homann and Eric Charran, discuss the question: what is responsible AI? This builds on their previous discussion in Armchair Architects: Considerations for Ethical and Responsible Use of AI in Applications.

 

They will dive deeper into what "responsible AI" means and what it entails. It sounds like a cool concept, but let's have the armchair architects share their views on it, as it may not necessarily be well understood.

 

Genesis of Responsible AI

From Uli’s perspective, responsible AI came out of the initial usage of algorithms to make decisions based mostly on statistics and machine learning capabilities. He gave a credit card decision example using AI and data in Armchair Architects: Are AI Charters Valuable? In that blog, he noted the impact of an AI “small decision” that could affect someone who was denied a line of credit. If you make credit decisions using AI and data, you need to make sure that the data is correct, that the data isn't biased, and that the applicant gets a fair evaluation.

 

That is how responsible AI started as a movement, and its concepts mix together transparency, fairness, and bias. Now, with large language models (LLMs), it becomes even more important because they are used far more broadly, whereas the statistical models lived in a specific domain and were managed by specialists.


What LLMs and applications such as ChatGPT and Bard have done is democratize the usage of AI to pretty much every person on the planet, and now everybody's using it. The notion of using AI responsibly, as both a provider and a consumer, is becoming more important than when we just looked at statistical models.

 

Beware of Unintended Consequences
Eric 100% agrees with Uli’s explanation, and from his perspective he thinks about it in terms of unintended consequences. Nobody sets out to build a harmful, biased, or compromised LLM implementation. Assuming accountable parties don’t want to harm people, Eric’s main point is to ensure that the unintended consequences become known to you before you do anything meaningful with the system.


Examples of unintended harms and consequences might be bias against a subpopulation of individuals, or usage patterns you had not anticipated in your use case definition. Once the application is implemented, live, and growing in usage and popularity across the world, there may be cultural norms and behaviors that you haven't taken into account for, say, the English-trained LLM at the core of your app.


In the previous blog there was discussion around explainable AI. When you're looking at large language models with thousands and millions of variables, you can't go for explainable AI anymore. What you can do is go for observability rather than explainability. Although you know what the inputs and outputs are (observability), you may not be able to fully explain how the LLM processes the input to create the output. This is one of the reasons why the living AI charter discussed earlier is valuable.

 

Continuously Update Your Set of Practices and Procedures as LLMs Evolve

LLMs are ever evolving, which keeps moving the goal posts on what they can do. So there has to be a living, continuous set of vigilant practices, procedures, and reviews that every organization and every team must implement. The goal posts will keep moving as LLMs get better, more widely utilized, and implemented in more places.


You're going to have to keep increasing your vigilance around what's going into the model: what am I giving it from my libraries of expertise, and what's coming out of it? That means studying how people interact with it, keeping a log of prompts and responses, and continually and vigilantly analyzing that log, as in the sketch below.
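As a rough illustration of that logging discipline, here is a minimal audit-logging sketch in TypeScript. The model client, log file name, and record fields (callModel, AuditRecord, llm-audit.log) are hypothetical placeholders, not part of any specific SDK.

```typescript
// Minimal prompt/response audit-logging sketch; names here are illustrative assumptions.
import { randomUUID } from "node:crypto";
import { appendFile } from "node:fs/promises";

interface AuditRecord {
  id: string;
  timestamp: string;
  userId: string;
  prompt: string;
  response: string;
}

// Wrap whatever function actually calls your LLM so every exchange is recorded.
async function loggedCompletion(
  userId: string,
  prompt: string,
  callModel: (prompt: string) => Promise<string>
): Promise<string> {
  const response = await callModel(prompt);
  const record: AuditRecord = {
    id: randomUUID(),
    timestamp: new Date().toISOString(),
    userId,
    prompt,
    response,
  };
  // Append as JSON Lines so the log can later be analyzed for bias, drift, and misuse.
  await appendFile("llm-audit.log", JSON.stringify(record) + "\n");
  return response;
}
```

Keeping the log in a simple, append-only format makes the "continually and vigilantly analyzing it" step something you can actually automate later.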


From Uli’s perspective, being a responsible user of AI is not only possible but mandatory. You have to be, because, as Eric said, you are ultimately responsible and accountable for what this model does or doesn't do, because you are presenting it to the world. Whether that world is internal or external doesn't really matter, and therefore it's up to you to figure out how to drive this responsible AI capability.

 

The Old World Versus the New World of LLMs

The LLM world has made this a bit more complicated. In the old world, you were completely in charge of how the model works, what the model is, what data the model works on, and how the application uses it. You had all these elements under your control, even if you were buying the capability.

 

In the new world you may buy an LLM where you don't know what data the system was trained on, and you don't know the algorithm. You are buying a black box, and the black box is super powerful. OpenAI, Gemini, and other models have shown they can answer questions in an amazing way. They have lots of knowledge and you want to use them, but you are still a responsible AI user.


How do you go about being a Responsible AI User?

To be a responsible AI user, what can you control? For example, some people say they get mad when the sun goes up and down. Sorry, you can't control whether the sun rises or sets. What you can control, though, is which model you use.

 

Now that you've picked a black box model, let's assume you cannot control the data set it was trained on or how the algorithm was trained; that's out of your control, like the sun rising and setting. What you can control is how you ask the model questions, which is the prompt. The prompt you engineer is in your control; you could argue that because the prompt is structured, you control it. The text that the human types into the prompt is not under your control, but you can do certain things with it, such as checking whether it is acceptable language or abusive language, and so on.


Now you can say: I can prove to my users that the prompt I engineered follows responsible guidelines. That's something you can demonstrate. The prompt then goes into the black box and gets processed, and on the return path you can structure the response rather than just passing along whatever free-form text the LLM puts out.


Ways to use Prompt Engineering

There are many ways to use prompt engineering, for example to force the model to respond in JSON. You give it a JSON schema, and everything that doesn't fit the schema gets automatically rejected. You can then reason over that JSON programmatically to make sure that the content that got returned is the right one and is not biased, and then feed it into your application, as in the sketch below.
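Here is a minimal sketch of that "force JSON and reject what doesn't fit" pattern in TypeScript. The schema, the CreditDecision type, and the callModel function are illustrative assumptions, not a specific product API.

```typescript
// Sketch: instruct the model to answer in JSON, then validate and reject anything else.
interface CreditDecision {
  approved: boolean;
  reason: string;
}

const systemPrompt = `Respond ONLY with JSON matching this schema:
{ "approved": boolean, "reason": string }`;

// A narrow type guard acts as the schema check: anything else is rejected.
function isCreditDecision(value: unknown): value is CreditDecision {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as CreditDecision).approved === "boolean" &&
    typeof (value as CreditDecision).reason === "string"
  );
}

async function getDecision(
  userText: string,
  callModel: (prompt: string) => Promise<string>
): Promise<CreditDecision> {
  const raw = await callModel(`${systemPrompt}\n\n${userText}`);
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error("Model did not return valid JSON; response rejected.");
  }
  if (!isCreditDecision(parsed)) {
    throw new Error("Response did not match the expected schema; response rejected.");
  }
  return parsed; // Now safe for the application to consume and reason over.
}
```

Because the response is validated before the application ever sees it, the downstream code only has to reason over a known, typed structure.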

 

From Explainable AI to Observable AI
So, it's a different way of thinking about responsible AI: you go from explainable AI, meaning transparent algorithms and transparent datasets, to observable AI, where you observe the inputs and the outputs rather than understanding the machinery.


Architects’ Influence on Responsible AI

The way Uli thinks about it is to pick a responsible AI provider, such as Microsoft with Azure OpenAI or another model provider, and once you have picked one, proceed to do the following.

 

Use the techniques and technologies out there, such as filtering the prompt for specific words, violence, and so forth, and applying content safety to the response; a simple word-level prompt filter is sketched below. The last piece is open-source technology like TypeChat, introduced in July 2023, which helps you build the prompt and the response in a structured way using JSON. It gives you a schematized response, an early binding of the response type, which means your applications can easily consume it and you can easily validate it. Some of the LLMs are starting to take the same route, where inside the prompt you can make inline requests for the LLM to respond in JSON, for example.
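As a rough sketch of the prompt-filtering idea, here is a minimal TypeScript pre-filter. The blocklist terms and the return shape are illustrative assumptions; a production system would more likely rely on a managed content-safety service than a hand-rolled word list.

```typescript
// Minimal prompt pre-filter sketch; the blocked terms are example values only.
const blockedTerms = ["violence", "self-harm", "hate speech"];

function prefilterPrompt(userText: string): { allowed: boolean; reason?: string } {
  const lowered = userText.toLowerCase();
  for (const term of blockedTerms) {
    if (lowered.includes(term)) {
      return { allowed: false, reason: `Prompt contains blocked term: "${term}"` };
    }
  }
  return { allowed: true };
}

// Usage: only forward the prompt to the model when the filter allows it.
const check = prefilterPrompt("Tell me a story about space travel");
if (!check.allowed) {
  console.warn(check.reason);
}
```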


In short, Uli recommends picking the right model and partner, thinking about content safety, and then thinking about your application layer through capabilities like TypeChat. There are other ways to do it, but that's a very concrete way of doing it.

 

Prefiltering and Moderation

Eric thinks the implementation steps Uli outlined are great and wanted to add some of his own insights. For him, the big rocks are pre-filtering and moderation: utilize your provider's built-in content moderation tools if it has them, and/or determine whether you have to build your own content moderation tools alongside what your hosted foundation model provider offers, to filter and flag potentially harmful or invalid outputs. A sketch of wrapping the model call with output moderation follows below.
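Here is a hedged sketch of that wrapping pattern in TypeScript. The moderateText function is a hypothetical stand-in for whatever moderation capability your provider exposes, or a classifier you build yourself; it is not a real SDK call.

```typescript
// Output-moderation wrapper sketch; moderateText is a hypothetical placeholder.
interface ModerationResult {
  flagged: boolean;
  categories: string[];
}

async function moderateText(text: string): Promise<ModerationResult> {
  // Placeholder: call your provider's moderation endpoint or your own classifier here.
  return { flagged: false, categories: [] };
}

async function safeCompletion(
  prompt: string,
  callModel: (prompt: string) => Promise<string>
): Promise<string> {
  const response = await callModel(prompt);
  const verdict = await moderateText(response);
  if (verdict.flagged) {
    // Flag, log, or replace harmful output instead of returning it to the user.
    throw new Error(`Response blocked by moderation: ${verdict.categories.join(", ")}`);
  }
  return response;
}
```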


The second one is API access and control. From an architecture perspective, you don't want people going behind the scenes and talking to your models without you knowing about it. So make sure to programmatically define specific constraints and parameters, for example through a thin gateway like the sketch below.
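This is a minimal sketch of such a gateway in TypeScript, assuming a simple API-key allowlist and pinned model parameters; the key names and parameter values are illustrative assumptions.

```typescript
// Gateway sketch that pins model parameters and requires a known caller.
const allowedApiKeys = new Set(["app-frontend-key", "batch-job-key"]); // example keys

interface PinnedParameters {
  temperature: number;
  maxTokens: number;
}

// All callers go through this function; nobody talks to the model "behind the scenes".
async function gatewayCompletion(
  apiKey: string,
  prompt: string,
  callModel: (prompt: string, params: PinnedParameters) => Promise<string>
): Promise<string> {
  if (!allowedApiKeys.has(apiKey)) {
    throw new Error("Unknown caller: access to the model is denied.");
  }
  // Constraints are defined here, in code, and cannot be overridden by the caller.
  const params: PinnedParameters = { temperature: 0.2, maxTokens: 512 };
  return callModel(prompt, params);
}
```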


He mentioned that lots of models now actually specify, in their return payload, confidence intervals associated with the responses, and where that's possible you can start to error-track for that in your code.


For example, if a response comes back at 30% confidence, even the model is telling you it's not terrific. That's something you can track as an exception from an implementation point of view, as sketched below. Then there is fine-tuning and customization: explore how to fine-tune the model based on your data sets.
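Here is a small sketch of that confidence-threshold check in TypeScript. The response shape (text plus a 0 to 1 confidence score) and the threshold value are illustrative assumptions; not every model returns such a score.

```typescript
// Confidence-threshold sketch; the ScoredResponse shape is an assumed payload format.
interface ScoredResponse {
  text: string;
  confidence: number; // assumed range: 0.0 to 1.0
}

const MIN_CONFIDENCE = 0.6; // example threshold; tune for your use case

function checkConfidence(response: ScoredResponse): string {
  if (response.confidence < MIN_CONFIDENCE) {
    // Treat low confidence as an exception: log it, fall back, or route to a human.
    throw new Error(
      `Low-confidence response (${(response.confidence * 100).toFixed(0)}%); routing to review.`
    );
  }
  return response.text;
}
```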


What is the data that you're providing to it from a library point of view? When the model consults that data, is it accessing biased information, tokenized or untokenized information, PII, and things like that? A simple redaction sketch for obvious PII follows below.
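As a rough illustration, here is a minimal PII-redaction pass in TypeScript using regular expressions. Real PII detection needs far more than this; the email and US-phone patterns below are illustrative assumptions only.

```typescript
// PII-redaction sketch; the patterns are simple examples, not a complete PII detector.
const emailPattern = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;
const usPhonePattern = /\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g;

function redactPii(text: string): string {
  return text
    .replace(emailPattern, "[REDACTED EMAIL]")
    .replace(usPhonePattern, "[REDACTED PHONE]");
}

// Usage: scrub documents before adding them to the library the model consults.
console.log(redactPii("Contact Jane at jane.doe@example.com or 555-123-4567."));
// -> "Contact Jane at [REDACTED EMAIL] or [REDACTED PHONE]."
```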

 

Resources

Watch the episode below or watch more episodes in the Armchair Architects Series.

 
