
Armchair Architects: Considerations for Ethical and Responsible Use of AI in Applications


In this blog, our host, David Blank-Edelman, and our armchair architects, Uli Homann and Eric Charran, discuss the ethical and responsible-use considerations architects should weigh when using large language models (LLMs) in their applications, including the risks of leaking confidential information, societal biases, and increasing regulation.

Thinking about Confidentiality

As an architect, Eric worries about a couple of main areas in the AI space. First, there are ethical and responsible-use considerations for architects integrating LLMs into any type of design, collaboration, or application experience.


There are real-world examples of LLMs being unintentionally trained on confidential information that later leaked, or of sensitive information unintentionally present in the corpus of data used to train the model. In addition, societal biases and discrimination may be embedded in that training corpus. Architects need to think about platform-level strategies for data anonymization, access control, and algorithmic transparency, about how to demonstrate those controls to regulatory bodies, and about the specter of increasing regulation in this space.


Transparency in Algorithms and Responsible AI

Transparency of algorithms is a nice idea if you're doing a little bit of data science and have some statistical models. However, if you have an LLM with trillions of parameters, and the model is a closed model such as OpenAI's, Gemini, Anthropic's models, and so forth, how are you going to get transparency in the algorithms?


Uli believes that if you can't get transparency in algorithms, folks in the responsible AI space will start to request explainable AI instead. But you can't explain an LLM with trillions of parameters; it just doesn't work. Part of what LLMs expose is the need to rethink what responsible AI means and to answer some of these questions before choosing an LLM.

 

Picking a Model and Partner
There needs to be a very clear picture of what you are going to choose as the basis of your LLM. There are really two choices: closed models such as OpenAI, Gemini, or whatever it may be, or open-source LLMs such as Llama and many other models. While there is a little more transparency with open source because the source is available, the code is still very complex. You might not be the expert who can really understand what that model is, and you also might not know who is behind it.

 

Some considerations for the open-source LLMs are:

  • Is this a viable organization that has the right intention?
  • Where did this model come from?
  • Do they adhere to the specific promises they give you, for example that your data won't be misappropriated to train their LLM?
  • How do you know that the model doesn't have code embedded that you just can't find?

Effectively, it starts with which model you are going to pick, which really means which partner you are going to pick. Then, when you pick a partner, you have to ask, “What are the promises that the partner is actually giving me?”


With OpenAI, there's a version called Azure OpenAI, and Azure OpenAI makes a couple of promises, for example:

  • Microsoft will never use your data to train the foundational model. That's a guarantee Microsoft gives you and stands behind. Other competitors such as Google and AWS may be doing something similar.
  • Microsoft also guarantees that all data that you feed into the Azure OpenAI model is isolated from any other data that is being hosted in the same multi-tenant service.

After you have picked the right partnership, either open source or closed source, think through what you want to achieve with that capability. Then you have to take into consideration your company's stance on ethics and responsible AI usage. When Uli talks about AI on stage, either publicly or in conversations with specific customers or partners, people get very excited. However, he recommends that one of the first things you need to do is create an AI charter.

 

One consideration is, “What is your company going to do with AI and what is it not going to do with AI?” This is done so that every employee and every partner that you work with understands what the job of AI in your world is.


Microsoft has been public since 2015 about what it thinks AI will and will not do. Internally, every Microsoft employee also has a path to escalate if a proposed usage of AI is something they, as an individual, don't feel comfortable with. That path sits outside the employee's organizational management chain and lets them ask, “Is this really in line with our values?”


In summary, you first pick the model and the partner(s). The second thing is deciding what you are going to do with AI and what you are not going to do with AI. The last piece that Uli thinks about is that when you're looking at large language models, you can't go for explainable AI; he believes that, unfortunately, it just doesn't work.


However, what you can do is go for observability rather than explainability. You can observe what's going on. When you interact with an LLM, you use a prompt. You could argue that a prompt is like a model itself: you know exactly what the structure of the prompt is going to be, what you are going to add to the prompt before it enters the LLM, and so on.

 

Create a Model for the Output Using JSON

You can observe what goes into the prompt, and you also know what's coming out of the LLM. You can observe what the output is, and you can also create a model for the output, using JSON for example. OpenAI has its own mechanism for this, but there are also open-source capabilities such as TypeChat that force the LLM to respond with a JSON schema.


This means you can now reason over that schema as a model and say, “Only data that fits that schema will effectively end up in my response.” It is a good way of acknowledging, “I don't know how the LLM gets to the result. We can verify that the output is right against the schema, but we don't know exactly how the code path works. We do know what we fed in, we know what came out, and we can enforce rules against the input and the output.” That's what Uli means by observability.
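
To make this concrete, here is a minimal sketch of that output gate in Python, using the open-source jsonschema package. The schema and the call_llm helper are illustrative placeholders for your own contract and model call, not a specific product API.

```python
# Minimal sketch: enforce a JSON schema on LLM output before it reaches the caller.
# ANSWER_SCHEMA and call_llm() are illustrative placeholders, not a real product API.
import json
from jsonschema import validate, ValidationError  # pip install jsonschema

ANSWER_SCHEMA = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "sources": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["answer"],
    "additionalProperties": False,
}

def call_llm(prompt: str) -> str:
    """Placeholder for your model call (for example, an Azure OpenAI chat completion)."""
    raise NotImplementedError

def observed_completion(prompt: str) -> dict:
    # We can't explain how the model produced its output, but we can observe and
    # constrain it: only data that fits the schema is returned to the application.
    raw = call_llm(prompt)
    try:
        parsed = json.loads(raw)
        validate(instance=parsed, schema=ANSWER_SCHEMA)
    except (json.JSONDecodeError, ValidationError) as err:
        raise ValueError(f"LLM response rejected by output gate: {err}") from err
    return parsed
```

The point of the sketch is the shape, not the specifics: the prompt going in and the response coming out both pass through checkpoints you control, even though the model in between remains opaque.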


These are the three elements that Uli generally talks through with customers and partners: start with the right partnership, define your AI perspective, and then work out how to observe, rather than explain, AI.


Practical Tips to get Started

Some of the ways to get started, from Eric's perspective, are to focus on model reporting analytics and to investigate visualization and explanation tools, then to aggressively engage in studied user interaction and feedback while staying within the guidelines of your ethical considerations and the company AI charter that Uli mentioned earlier.

 

Model reporting analytics means clearly documenting the model architecture. You might not be able to dive into the layers, but you can also focus on the training data and its sources: understanding the dimensionality and cardinality of that data, what you're obfuscating and tokenizing, what you're removing from the data set, and what biases might exist in the underlying training data. Eric believes this helps developers understand how the model works and potentially pre-empt issues even before you begin the foundational model development process or the fine-tuning process.
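
One lightweight way to start on this model reporting is a structured “model card” record that you fill in and version alongside the application. The fields below are illustrative examples rather than a prescribed standard; adapt them to your own governance requirements.

```python
# Illustrative "model card" record for model reporting analytics.
# Field names are examples; extend them to match your governance requirements.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    provider: str                       # e.g., closed API, open-source download, in-house
    training_data_sources: list[str]    # what is known or disclosed about the corpus
    obfuscated_or_tokenized: list[str]  # fields anonymized before fine-tuning
    known_bias_risks: list[str]         # biases suspected in the underlying training data
    evaluation_notes: str = ""

card = ModelCard(
    model_name="example-llm",
    provider="hosted-foundational-model",
    training_data_sources=["vendor documentation only; full corpus not disclosed"],
    obfuscated_or_tokenized=["customer_id", "account_number"],
    known_bias_risks=["geographic skew in customer support transcripts"],
)
print(json.dumps(asdict(card), indent=2))  # version this record alongside the app
```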

 

Uli has a differing opinion: what Eric is describing is awesome if you're in a traditional model world, but LLMs are not traditional models. You don't control the data set that OpenAI uses, that Gemini uses, that Anthropic uses. You don't even know what they're training on.

 

Financial Services Approach
Eric agreed with Uli's point that LLMs are not traditional models and that you do not know what they are training on. In the financial services industry, there's a combination of approaches that Eric is taking.

 

The approaches include consuming foundational models from the likes of Hugging Face, or subscribing to cloud-based models in a hosted foundational model exercise. There are also cases in which, based on the proprietary information and data sets these organizations have, he is going to create his own foundational model.


In the circumstances in which you are curating and creating a data platform and building your own foundational model, you will have purview and visibility into the dimensionality of the data that you're going to feed your LLM to create your own hosted version. In that circumstance, documenting training data sources helps you get that visibility. It also means that if Eric is downloading an LLM from GitHub and utilizing it, he wants to start asking himself, “How much documentation is associated with that foundational model, and how do I know how it was made and trained?”


The owners or creators of that LLM might not provide that in great detail, but that doesn't mean you shouldn't be asking yourself these questions before you just start utilizing it.


Commoditization of Generative AI

Uli is hearing a very large conversation going on about whether to build your own foundation models and what the return on investment on that capability is. Never in Eric's career has he seen such a rapid commoditization of a computing workload as he has seen with generative AI.

 

Organizations need to ask themselves some questions such as:

  • Why would I create my own foundational model?
  • Why would I accrue a data platform with petabytes of training data and then try to create my own custom infrastructure, build my own pipelines, and host this giant model on my own infrastructure?

Especially since the hyperscalers have yet to come up with their own concrete version of a custom LLM hosting environment, why would I do all of that versus just subscribing to a model that already exists out there, one that I can configure and utilize, where my data transmission is secure, where my data isn't going to be used for training, and where I don't have to worry about betraying trade secrets in prompt interactions?

 

Feedback for Prompt Tuning
Let's assume that an LLM is still a black box; that makes user interaction and feedback very important. As you engage in prompt tuning, you want to see if you can develop confidence scores and human-in-the-loop systems that allow you to ask, “What happens if I ask this thing these sets of questions, or provide these prompts?” as you are tuning it.

 

Some questions for consideration are:

  • How do I understand what the scope and spectrum of the model's outputs are?
  • How can I root out issues prior to going to production, based on the prompts that people are asking, or maybe the unexpected prompts that people are asking?
  • How can I avoid, or at least minimize, surprise at the model's outputs, whether in terms of veracity and accuracy, whether it's hallucinating, whether it's actually doing what was asked, or whether it's giving up too early in the process?

These are all things that you're going to want to try to root out through aggressive interaction and feedback mechanisms prior to actually putting this thing in your app or releasing it to the world.
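
As a rough illustration of what such an interaction and feedback mechanism might look like, here is a sketch of a pre-production evaluation loop that runs a catalogue of expected and adversarial prompts, attaches a confidence score to each response, and flags low-confidence outputs for human review. The call_llm and score_response helpers and the threshold value are hypothetical placeholders.

```python
# Sketch of a pre-production prompt evaluation loop with a human-in-the-loop gate.
# call_llm() and score_response() are hypothetical placeholders for your own stack.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.7  # illustrative; tune to your tolerance for surprise

@dataclass
class EvalResult:
    prompt: str
    response: str
    confidence: float
    needs_human_review: bool

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # your model call

def score_response(prompt: str, response: str) -> float:
    # Placeholder: could combine groundedness checks, heuristics, or a judge model.
    raise NotImplementedError

def evaluate_prompts(prompt_catalogue: list[str]) -> list[EvalResult]:
    results = []
    for prompt in prompt_catalogue:
        response = call_llm(prompt)
        confidence = score_response(prompt, response)
        results.append(EvalResult(
            prompt=prompt,
            response=response,
            confidence=confidence,
            # Low-confidence or unexpected outputs go to a reviewer before release.
            needs_human_review=confidence < REVIEW_THRESHOLD,
        ))
    return results
```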


Then finally, for Eric, there are the underpinning ethical considerations. As Uli mentioned earlier, if you're in a foundational model world, you don't know what the data sets are: the providers can tell you what they are and you can believe them, or they might not tell you at all. The idea is that through that exploration, that prompt tuning, and a very specific and stringent study of interactions, you can root out bias and mitigate it.


Some considerations are:

  • How can you make sure that it's respectful of the governance and privacy implementation considerations for your particular vertical?
  • When it goes wrong, how can you take accountability for that?
  • How can you understand the prompts and even create your own kind of catalogue of what the types of the spectrum of responses are?
  • If you're using retrieval-augmented generation (RAG), how can RAG synergistically provide quality gates during the prompt and response process? (A minimal sketch follows below.)
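
As referenced in the last consideration above, here is a minimal sketch of a RAG quality gate that only returns an answer when the retrieved context appears to support it. The retrieval and model calls are placeholders, and the lexical-overlap check is a deliberately crude stand-in for a proper groundedness check.

```python
# Sketch of a RAG quality gate: only answer when retrieved context appears to
# support the response. retrieve(), call_llm(), and the overlap heuristic are
# illustrative placeholders, not a production groundedness check.
def retrieve(question: str) -> list[str]:
    raise NotImplementedError  # your vector or keyword retrieval

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # your model call

def grounded_answer(question: str, min_overlap: float = 0.3) -> str:
    passages = retrieve(question)
    if not passages:
        return "I don't have enough information to answer that."
    prompt = (
        "Answer only from the context below.\n\n"
        + "\n".join(passages)
        + f"\n\nQuestion: {question}"
    )
    answer = call_llm(prompt)
    # Crude gate: require some lexical overlap between the answer and the context.
    context_terms = set(" ".join(passages).lower().split())
    answer_terms = set(answer.lower().split())
    overlap = len(answer_terms & context_terms) / max(len(answer_terms), 1)
    if overlap < min_overlap:
        return "I don't have enough information to answer that."
    return answer
```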

Content safety

Uli discusses the foundational-model-as-a-service capability that the hyperscalers are providing, which includes the ability to host your own model. He focuses the discussion on closed models such as OpenAI, Gemini, and so forth. These are just LLMs that provide you with answers depending on what you ask, whether through direct prompts, RAG, or other patterns you're using. Now there are complementary services that companies like Microsoft have built, which are called content safety.


They allow you to take your prompt and feed it into another model that goes through the prompt and checks whether there's anything offensive in it, and that also looks for things like jailbreaks, where someone tries to trick the model into doing something you don't want it to do.


The content safety capability is a service from Microsoft, and there are other capabilities out there that you can utilize to effectively safeguard your prompt input and your output. When you are using Azure OpenAI, it is actually built in; you can't avoid it.


For example, the prompt coming in is filtered, and the result coming out is reviewed and filtered, according to policies that you can set up. There's a default set of policies, but you can tune them based on your requirements and how you think about them. These content safety capabilities draw on everything learned from responsible AI journeys so far.
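
The general shape of that input and output filtering can be sketched as a thin wrapper around the model call. The is_safe helper below is a hypothetical stand-in for whichever content safety or moderation service you use; it is not an actual Azure API call.

```python
# Sketch of safeguarding both the prompt (input) and the completion (output).
# is_safe() stands in for a call to a content safety / moderation service;
# call_llm() is a placeholder for your model call.
def is_safe(text: str) -> bool:
    """Placeholder: call your content safety or moderation service here."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    raise NotImplementedError

def safeguarded_completion(prompt: str,
                           refusal: str = "Request blocked by content policy.") -> str:
    if not is_safe(prompt):        # filter the prompt coming in
        return refusal
    response = call_llm(prompt)
    if not is_safe(response):      # review and filter the result coming out
        return refusal
    return response
```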


This is not new; it's something Microsoft, and the industry, has been doing for a while, and that capability set is now being incorporated into a single service that filters for bias and related issues based on what you configure. The goal is not to say, “I know exactly how this thing got trained; I know exactly, and control, what it got fed.” There are certain areas where you just can't know. So you focus on what you can control, which is the input to and the output from the system. Uli thinks that's a different philosophy for responsible AI than we had before.

 

Thank you for reading the insights provided by Uli and Eric. You can also view the video version below.

 

