
Navigating Ethics and AI in Business

Peter van der Putten, Director of Pega's AI Lab, sheds light on how companies can balance the interests of all stakeholders: the company, customers, and citizens.

Highlighting the fundamental principles for deploying AI ethically at an enterprise level, we explore how to inject a human element into the AI process. Balancing value and responsibility is a tricky path to tread, and we discuss how companies can set up processes to scale AI usage responsibly. Join us to expand your understanding of the ethics of AI and how it’s shaping the future of business.

This episode of Customerland is sponsored by Allant.


Full transcript below

Mike Giambattista 

So today I am honored again to be talking to Peter van der Putten, who's director of Pega's AI Lab. If you were part of our podcast audience several months ago, Peter and I were talking about some of what I'd just call the media hype-cycle issues AI was generating at the moment. But AI is moving fast in our marketplaces: the perceptions of it are growing, the utilities of it are growing, the worries of it are growing, and Peter sits right in the middle of all that, running Pega's AI Lab. So first let me just thank you for joining me, because this is going to be a lot of fun.

Peter van der Putten 

Yeah, it’s great to be here. 

Mike Giambattista 

So I would like to focus at least part of today's conversation on how companies can, or should, be thinking about ethics and AI, because the utility of AI is exploding: not only what you can do with it, but how many companies are deploying it in various ways. And there are a handful of people I know, yourself among them, who are concerned with and working out issues related to ethics and AI. So tell us, if you would, what that looks like on a day-to-day basis at Pega's AI Lab.

Peter van der Putten 

Yeah, great, indeed. Sometimes people think about ethics as almost a scary thing, you know, the department that says no, right? Whereas I think for us, ethics is ultimately more about how to lead a good life. Even as a company: how to behave well, and that, ultimately, is also something that pays off. So I'm not a firm believer in a trade-off between ethics and having a profitable business, for example; I see it the other way around. I think the only sustainable route towards long-term profit, and towards long-term use of AI, is to use it in an ethical and trustworthy manner. So that makes it a much more interesting and exciting topic. We're not the no police; it's more about the yes police, in a way.

Mike Giambattista 

Interesting. I know that in some conversations I've had on this topic, there seems to be a kind of sliding scale between ethical AI and profitable use of AI, as if it's an either/or proposition, and I agree with you that it is not. You can have, and should have, both, worked out effectively. But you're probably the only person I know for whom at least a component of the job is to work that out.

Peter van der Putten 

Yeah, yeah, absolutely. 

Mike Giambattista 

Yeah. So that to me means that you're part evangelist, but you're also baking ethics into the work that you're doing at Pega.

Peter van der Putten 

Yeah, because it shouldn't just be talk, right? You should be able to operationalize that ethics, for us, into real working software, because ultimately we are an enterprise software provider, and for our clients, into experiences and interactions with their customers that are valuable to both. I think that's the easiest, most high-level way to boil down ethical use of AI: simply, don't do to others what you don't want done to yourself. You need to make sure that you balance the benefits for all stakeholders: the company, or if you're the government, your interests as a government, as well as the customer or the citizen or the partner, whoever you're dealing with. People talk a lot about ethical principles like bias and fairness and transparency and robustness, and for sure we're going to come back to those topics here in the podcast. But I think the more fundamental issue is making sure that you serve the purposes of all stakeholders in these interactions and processes where you're using AI. So take a marketing example.

Let's say you have a large library of recommendations you could give to customers, and there's a particular interaction: the customer opens up the mobile app and you want to make some recommendations. First off, you want to apply some hard rules that encode your ethical policy. It's awesome that your AI predicts this person will click on the "give me a two and a half million dollar mortgage" message with the picture of the nice house with the pool, and if you're a 14 year old, it makes sense that that's something that gets you excited, right? But we're not selling those mortgages to 14 year olds. So it starts with applying some of those simple ethical rules.
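To make that step concrete, here is a minimal sketch, assuming a simple Python rules layer and invented offer fields, of how hard eligibility rules like the mortgage example might filter the library before any model scoring happens. This is illustrative, not Pega's actual engine:

```python
# Minimal sketch of hard eligibility rules applied before any model scoring.
# Offer fields, rule logic, and data are illustrative.
OFFERS = [
    {"name": "2.5M mortgage", "type": "mortgage", "min_age": 18},
    {"name": "Gold credit card", "type": "card", "min_age": 21},
    {"name": "Budgeting tips", "type": "service", "min_age": 0},
]

def eligible(customer: dict, offer: dict) -> bool:
    """Hard policy rules run first, so a high predicted
    click-through can never override them."""
    if customer["age"] < offer["min_age"]:
        return False  # e.g. no mortgages for 14-year-olds
    return True

customer = {"age": 14}
shortlist = [o for o in OFFERS if eligible(customer, o)]
print([o["name"] for o in shortlist])  # -> ['Budgeting tips']
```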

That narrows down the universe of messages you could talk about. But then the next step: you're left with a lot of things that might still be relevant, and the key question becomes how do you prioritize? Are you going to prioritize on profit, because we make a lot more money on mortgages and credit cards, or are you going to look at what is most relevant for this customer right now, what they're most likely to be interested in? And this is something you can actually operationalize. When you rank those messages, is it purely on the likelihood that this is interesting to the customer, or are you also taking factors like the margin we make on these offers into account? That's where you can actually strike a balance.
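The ranking step might, as a rough sketch, blend a model's predicted propensity with the margin on each offer. The blending weight below is an editorial invention, not Pega's arbitration formula, but it shows how the relevance-versus-profit balance becomes an explicit, inspectable parameter rather than something hidden inside a model:

```python
def score(offer: dict, propensity: float, relevance_weight: float = 0.7) -> float:
    """Blend customer relevance (propensity) with company value (margin).
    relevance_weight = 1.0 ranks purely on customer interest;
    relevance_weight = 0.0 ranks purely on profit."""
    margin = offer["margin"] / 1000.0  # normalize to 0..1 (illustrative)
    return relevance_weight * propensity + (1 - relevance_weight) * margin

offers = [
    {"name": "Mortgage", "margin": 1000.0},
    {"name": "Credit card", "margin": 400.0},
    {"name": "Savings tip", "margin": 0.0},
]
propensities = {"Mortgage": 0.02, "Credit card": 0.10, "Savings tip": 0.60}

ranked = sorted(offers, key=lambda o: score(o, propensities[o["name"]]), reverse=True)
print([o["name"] for o in ranked])  # -> ['Savings tip', 'Mortgage', 'Credit card']
```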

Also, when you think about what our library of messages could potentially include, you could say, well, we just want to focus on hardcore sales, but you can also say, no, we also need to have some of these softer service messages in there.

Or we also need to focus not on how we can get more value out of the customer, but on how the customer can get more value out of the relationship: making sure that they use the products and services that they're entitled to, or even making recommendations that are not your products and services but that do fit your mission. One of our banking clients says: we want to ensure and enhance and secure our customers' financial well-being. So some of the recommendations they make have to do with all kinds of government benefits these banking customers could qualify for. You're not selling a banking product, but you are doing what your mission is telling you to do, which is enhancing and securing the financial well-being of your customers, right?

Mike Giambattista 

In a sense it reminds me a little bit of the early days of programmatic advertising on the web, when somebody figured out that, algorithmically, we could serve enough of these kinds of ads to make enormous amounts of money, without any consideration for what that felt like on the customer side.

And I’m sure there are those actors right now who are leveraging AI for those purposes. 

But I think with people like yourself, and I'm just going to call you an evangelist because it seems appropriate in this context, who are actively talking about, and I love the way you just put this, not just extracting value from the customer but delivering value to the customer on behalf of the company, it's not going to take too long for this ship to right itself and start balancing out. And I know that reputable players, which is almost everybody I see out there, are working in that same direction. But there are always profit motives behind everything; let's face it, a lot of the stakeholders are profit-minded people. I think there's going to be a maturation of this, and hopefully the fear-mongering that we've seen in the media, the "AI stole my baby" craziness, will start to fade away and we'll get down to a level of utility where AI is an expected component of any enterprise's toolbox. It's not just this wild thing that's going to rob people of their jobs and make us millions upon millions of dollars.

Peter van der Putten 

Yeah, absolutely. We also did some research on a panel of 5,000 customers globally, and two thirds of them said they expect that most major departments in a company, not just customer-facing departments like customer service and marketing, but virtually any department in an organization, will be run using AI and automation in the foreseeable future. So I think, in a way, customers already have high expectations in that sense, and of course they expect it to be used in their best interest. As a customer, I don't mind if my bank makes a profit, as long as there's a bit of a win-win. That's why this particular marketing example is a good one, because this idea of win-win is not just talk. Let's say you lean towards: I'll just go for whatever maximizes the likelihood of the click and the margin on that offer, or, even more extreme, I don't care about the likelihood, I only care about the margin.

Then customers are going to check out, and you're training them to expect that whatever you feed them is good for you but not good for them, so your marketing messages won't be successful. That's even exaggerated, or extrapolated, if some of those messages are being delivered indirectly, maybe through agents in the contact center, because they're constantly talking to customers. If they see that whatever is being recommended is not relevant, they'll check out, they'll stop using the system, and they might type in some phony responses and say the customer didn't want it. So this is why it's going to pay off to "do the right thing", to quote Spike Lee, my favorite filmmaker. It does pay off; it ultimately earns you more money as well, and it's the right thing to do.

Mike Giambattista 

It layers in and leverages a longer-term customer value equation, for sure. On the one hand, that's the right thing to do, but the fact that you're focused on what you're focused on at a company like Pega doesn't really surprise me. My first interactions with Pega were speaking with one of your colleagues several years before you and I were connected, and we were talking about empathy at scale. So even, I don't know, maybe five years ago, here was a company focused on building technologies that could deliver customer experiences embedded with empathy, real, genuine human empathy, and that's a complicated topic; it's difficult to figure out in all the different contexts. I thought it was impressive at the time. But fast-forward those four or five years, however long it's been, and you're in the middle of Pega's AI Lab, focused on things like ethics in AI and how to deliver technology solutions that have ethics at the core of them.

Peter van der Putten 

Exactly. You have to really embed it into those capabilities, to make it real. And not just real; you also need to make it easy. Let's face it, we're all lazy human beings to some degree, so you really need to make it easy and attractive to actually work with AI in that particular way, and then the rest will follow. About this idea of empathy at scale, it might have been Rob Walker you've been talking to?

Mike Giambattista

It was Robin Collier.

Peter van der Putten

Oh, Robin Collier, there you go. But empathy at scale, that's a great concept, and I think people sometimes assume that empathy is all about emotional intelligence, and of course emotional intelligence is important beyond IQ. But I think there's an even deeper expectation behind empathy, and that's the moral expectation: not the emotional, empathic expectation, but the moral expectation that whenever we engage, we have joint interests. You do what's right, not just for you but also for me. And given the marketing example I just gave, that's something you can really operationalize in these large-scale automated, autonomous, AI-driven decisioning systems. I gave a marketing example, but the same applies to customer service, or intelligent automation, or customer operations, any of those processes. And that's one of the developments of the last five years: we branched out from using this primarily in the one-to-one customer engagement space to making sure you can do it across all of your customer interactions, across all of your business processes.

Allant is an audience orchestration engine leveraging data analytics, experience management, and MarTech integration to power the solutions behind successful customer journeys. 

Everything they do is based on the proven principle that customers want a more relevant, less intrusive, and privacy-centered relationship with brands, and they want to interact on their terms, with their needs in mind, in the channels they choose. Allant gives some of the largest and best-known brands in their categories a competitive advantage to deliver better lifecycle experiences, and to individualize, nurture, and energize the relationship between the brand and their customers.

Check them out at allantgroup.com. That’s A-L-L-A-N-T group.com.


Mike Giambattista 

That gives me two ideas. One is that the next time we do this, we should invite Spike Lee into the conversation to hear what he has to say about it. It'd be great fun. But two, and a hair more practically, is to talk a little bit about how Pega is actually doing that. What products are there, how is Pega deploying LLMs and ethical AI, and how does that really get worked out? Because it's one thing to talk about it and what a great idea it is, but it's a complex undertaking.

Peter van der Putten 

Yeah, absolutely. I kind of hinted at the ultimate principle, which is do what's right, "do the right thing", to quote Spike Lee again, and the right thing is not just what's right for you, but also for the other. One of the examples I gave was, in this marketing example, how to prioritize what to talk about: you can lean towards what's right for the customer, what is good for the customer or what they're interested in. But there are also other elements to making those automated decisions. Maybe the cool kids at the AI startups make you think, oh, we have this big super-duper language model or black-box reinforcement learning engine and it will take care of all of it, but that's not how it works.

You probably will want to embed your particular strategies, your particular policies, your particular business rules on top of those models. An automated decision on what to recommend to a customer, for example, involves a lot more than just predicting the likelihood to accept those recommendations. So you need the ability to layer these kinds of classical AI systems, the good old boring business rules, on top, to translate all those predictions, combined with your business knowledge and strategies and policies, into a decision on what to do. That's another element of how to operationalize all of this.

And then you probably want to make trade-offs around transparency. Certain types of AI come with a trade-off: machine learning models that are highly accurate but less transparent, where it's harder to understand how the predictions are being made. That might be fine for some forms of marketing decisions, but it's not fine for, let's say, a model that decides whether someone can get a loan or not. So you need to be able to make those trade-off decisions between transparency and accuracy when you decide what types of AI to use. You also want to make sure that you test for things like bias, to keep your automated decisions fair. Can you run a simulation on some historical data? In this example, where I want to decide automatically who gets a loan or not, it's nice because you don't have to constantly talk to some bank employee to know whether you can borrow something; it can be a quick interaction in your mobile app. Customers in principle love that. But we have logic and models and data driving it, so how can you make sure that no bias has crept into those decisions, be it in the training data of the models or in some of the hard-coded rules that you apply? You need to be able to test for that: is my bias within limits before I launch that new logic?
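As a sketch of that kind of pre-launch test, assuming a toy decision rule and invented applicant data, one could replay the decisioning logic over historical applicants and compare approval rates across groups before releasing it. Real fairness testing uses richer metrics; the threshold here is invented:

```python
from collections import defaultdict

def decide(applicant: dict) -> bool:
    """Stand-in for the decisioning logic (models plus rules) under test."""
    return applicant["income"] >= 30000

def approval_rates(applicants: list, group_key: str) -> dict:
    approved, total = defaultdict(int), defaultdict(int)
    for a in applicants:
        group = a[group_key]
        total[group] += 1
        approved[group] += decide(a)
    return {g: approved[g] / total[g] for g in total}

history = [
    {"income": 45000, "region": "north"},
    {"income": 28000, "region": "north"},
    {"income": 52000, "region": "south"},
    {"income": 25000, "region": "south"},
    {"income": 31000, "region": "south"},
]

rates = approval_rates(history, "region")
overall = sum(decide(a) for a in history) / len(history)
# Flag any group whose rate strays too far from the overall rate (threshold invented).
flagged = {g: round(r, 2) for g, r in rates.items() if abs(r - overall) > 0.05}
print(rates, "review before launch:", flagged)
```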

And then you want to keep monitoring and tracking that as and when these AI decisions are being made. That's important for you as a company, to monitor what's going on, but also at a lower level: when I pass those marketing recommendations on to a customer service agent, it's nice if I can give some automated explanation of how I got to that decision, or maybe even give it directly to a customer. So monitoring is not just for your own employees; it can go all the way down to providing automated explanations back to end users like agents or customers.
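A small sketch of what such an automated explanation could look like, using generic per-factor score contributions. The attribution values and factor names are invented, and this is not a specific Pega feature:

```python
def explain(offer: str, contributions: dict, top_n: int = 2) -> str:
    """Turn per-factor score contributions into a one-line, agent-facing reason."""
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    reasons = ", ".join(f"{name} ({weight:+.2f})" for name, weight in top)
    return f"Recommended '{offer}' mainly because of: {reasons}"

# Hypothetical attribution scores for one customer and one offer.
print(explain("Savings account", {
    "recent salary increase": 0.42,
    "low current savings balance": 0.31,
    "age band": 0.05,
}))
```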

And then generative AI adds some additional challenges into the mix that you need to take into account. For example, the use of proprietary data: for a particular use case, can we send the data on to central servers like OpenAI's, or is it too proprietary? Maybe we first want to filter out any of the proprietary data, either in the prompts or in the output we get returned, or we don't want to send it to a public service at all and want to use a private language model instead, because this is information that should never, ever leave the company. Those are some of the particular challenges and solutions, more in the generative AI space.
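Here's a toy sketch of the "filter before you send" option: a redaction pass over a prompt before it leaves the company. The regex patterns are stand-ins; a real deployment would use proper PII and entity detection:

```python
import re

# Illustrative patterns; production systems use real PII/entity detection.
REDACTIONS = [
    (re.compile(r"\b\d{9}\b"), "[ACCOUNT]"),                   # account numbers
    (re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"), "[EMAIL]"),        # email addresses
    (re.compile(r"Project \w+", re.IGNORECASE), "[PROJECT]"),  # internal codenames
]

def redact(prompt: str) -> str:
    """Strip proprietary data from a prompt before it goes to a public LLM."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

prompt = "Summarize account 123456789 for jane@corp.com regarding Project Falcon."
print(redact(prompt))
# -> "Summarize account [ACCOUNT] for [EMAIL] regarding [PROJECT]."
```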

Mike Giambattista 

You just made me think of something. You're talking about building all of the nuances of ethics and morality, if you will, into the large language models. And for everything I read about AI, all the fascinating shiny objects of the moment, there are millions of them coming out every day, I've yet to hear anybody really talk about tuning your large language model to reflect your brand values.

I mean, it's one thing to have a large language model with some AI capabilities that solves problems, question and answer, in the customer service world, and to do that ethically, or to produce advertising and segmentation models that are built ethically. And I see a lot of people talking about the wonders of generative AI for the marketing, advertising, and customer experience world, because of all the great tools that will save us loads of time. But nobody's really talking about tuning their large language models to reflect values that are important to the company and important to the customer. I would really like to see that, and you touched on it: you have to have the values baked in, and the language there to reflect them too.

Peter van der Putten 

Yeah, absolutely I agree. 

And you can do that at multiple levels. You can take the output, whether it's from these large language models or more traditional machine learning models; automated decisions or interactions are never just a model, and that's why I spoke about good old-fashioned AI, the classical business rules that you stick on top of the models, where you can control them.

But particularly in the area of generative AI, there's a lot you can do not just by training those models, which is very costly, but even with clever prompt engineering, right?

So let's say I want to generate some creatives using generative AI. You can be quite specific in the prompt, and if you want to codify that, you don't want your marketers to go off and just hammer stuff into ChatGPT manually. You want to build it into a workflow where you can make sure that certain standard elements of the prompts are always there, so that whatever is being generated reflects the brand values you want to communicate, in addition to other constructs you may want to put into the prompt. An example we use a lot is different persuasion styles based on Cialdini: social proof, authority, and so on. So there are a lot of things you can steer in the prompt. Some of them are geared towards making sure your message resonates, so you get better outcomes and results, but others are about making sure it aligns with your brand values, which is the other example I gave.
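As a rough sketch of codifying those standard prompt elements, assuming invented wording and an invented style list, a template builder might inject brand-value language and a Cialdini-style cue automatically, so marketers never have to type them by hand:

```python
# Standard prompt elements, maintained centrally rather than typed by each marketer.
BRAND_VALUES = (
    "Write in a warm, plain-spoken tone. Never pressure the customer. "
    "Always mention that help with financial well-being is available."
)

PERSUASION_STYLES = {  # loosely after Cialdini's principles (illustrative)
    "social_proof": "Mention that many similar customers chose this option.",
    "authority": "Reference the bank's accredited financial advisors.",
}

def build_prompt(task: str, style: str) -> str:
    """Assemble the full prompt: free-form task plus fixed governance elements."""
    return "\n".join([task, BRAND_VALUES, PERSUASION_STYLES[style]])

print(build_prompt(
    "Draft a 40-word in-app message recommending our budgeting tool.",
    style="social_proof",
))
```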

Mike Giambattista 

So you and Pega have come out with three basic, high-level ground rules for deploying AI ethically at an enterprise level, and I'm just going to read them off here, but I'd love to spend a few minutes unpacking them. Number one: leverage AI as a starting point. In other words, it's not the be-all and end-all; it is the starting point. Second: read the results. And third: balance value with responsibility. We've talked about some of this, but I think breaking it down for companies and individuals who are considering scaling up their AI usage, or building it out from scratch, these are great ways to evaluate, or at least set up, your own processes. Is this how you do this internally?

Peter van der Putten 

Yeah, we constantly look at what good principles there are to follow. On "AI is a starting point": it depends a little bit on your definition of AI; in a very narrow definition, people think machine learning, etc. And it's the realization that to get to a decision, even an automated decision, there's a lot more that goes into the mix. Like I said, maybe business rules that encode your ethical policies, or maybe parts of the prompts that define your brand values. But when you think about AI as just a starting point, it's also thinking about: is there a human in the loop? Is there a human before the loop? Is the human somewhere? So let me go back to the marketing example, since we're using it as a running example.

At least in the short term, I don't believe we want to have generative AI getting into an interaction and, completely on the fly, coming up with some kind of offer or marketing recommendation, because I think it's important to be in control. What we can't have is: you go to the website and we have to wait and see what the model decides to show. So, essentially, you want to have a human almost before the loop.

So use a lot of the creative powers of generative AI, maybe to come up with interesting new propositions and treatments, or to refresh those treatments and messages that don't really resonate, but there will always be a sign-off from the marketer: hey, these are great new creatives to use. That's basically the right brain that helped out.

Then the human comes into the loop and says: I approve this message. To put it in terms of what we hear in all those commercials: "this is Peter and I approve this message." Then it goes into the library, and maybe the automated left brain kicks in and prioritizes those messages, etc. So you make good use of AI and automation at scale, and empathy at scale, with a human at the critical point: when do we decide to release these new treatments into our library of possible recommendations? That's the critical point where Peter says: Peter approves of this message.
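A minimal sketch of that "human before the loop" gate, with invented statuses and workflow: generated creatives land in a pending state, and only explicitly approved ones ever reach the library that the automated ranking draws from:

```python
from dataclasses import dataclass, field

@dataclass
class TreatmentLibrary:
    pending: list = field(default_factory=list)
    approved: list = field(default_factory=list)

    def submit(self, creative: str) -> None:
        """Generative AI output goes to review, never straight to customers."""
        self.pending.append(creative)

    def approve(self, creative: str, reviewer: str) -> None:
        """The human sign-off: 'this is Peter and I approve this message.'"""
        self.pending.remove(creative)
        self.approved.append(creative)
        print(f"{reviewer} approved: {creative!r}")

library = TreatmentLibrary()
library.submit("New savings offer headline drafted by gen AI")
library.approve("New savings offer headline drafted by gen AI", reviewer="Peter")
# Only library.approved is visible to the automated ranking step.
print(library.approved)
```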

Mike Giambattista 

The second and third points: read the results, that's fairly self-explanatory, watch what's happening, evaluate it, retool as needed. And then, thirdly, balance value with responsibility, which goes back to almost everything we've been talking about thus far: value not just for the company, the profit motive, but also value delivered from the company to the customer.

Peter van der Putten 

Yeah, and I mean it's a high-level principle, but I think it's a very healthy principle to keep in mind, because you can apply it to virtually any use case: think about that principle and make sure you're delivering on it. In the marketing example, I explained it in terms of how you decide to rank those messages, and how you make sure you have interesting messages to talk about, not just hardcore sales. In the loan application example, it could be making sure you provide the right level of transparency about how you got to a decision, even if it's a no: no, we can't give you this loan. Or, if it's a no and you're thinking about the customer's best interest, is there a way to come up with alternatives, to still provide maybe a smaller loan to this customer to help them with their problems?

Mike Giambattista 

A plan B or some way to preserve the relationship. 

Peter van der Putten 

Yeah, exactly. Or if I can see: well, I'm making a lot of money on this over-indebted customer, but if I consolidate these credit card loans into something with a lower interest rate, I might be losing money in the short term as a bank. But it's the right thing to do for my customer, right? Right now this customer is maybe cash-strapped: maybe I just got out of college and just got my new job, I had to make a lot of expenses and it simply became too much, and I maxed out my three credit cards. But five years from now, I'm a wealthy individual, right?

Mike Giambattista 

So if you help me through these periods, you've just signed up a customer for life. Well, honestly, this always happens when we talk about these kinds of things, but I've got over 30 questions and another 15 topics I'd love to just sit here and chat about. Maybe we'll make that a third installment of this conversation, when we can get Spike Lee onto the chat here. But for now, Peter, I can't thank you enough for the time and the conversation and, even more broadly, for being one of that handful of people who have dedicated themselves to the ethics of AI and to making sure it has a place in the corporate conversation.
