GenAI is reshaping the Finance user experience with hyper-personalization, simplifying complex workflows and summarizing mountains of data in seconds to inform faster decision-making. While GenAI has the potential for significant productivity gains, those gains can be diminished by a poor user experience - Human Usability still Matters! How will we ensure next-gen experiences are easy to use, and how do we weigh the accuracy of an AI-driven decision against expert users' decades of experience? This webinar delivers practical GenAI UX strategies to improve usability and earn user trust, and explores some of the unique challenges posed by AI-driven experiences for Finance Professionals and Consumers.
What You'll Learn:
- What’s better for Usability & Productivity? Let’s talk about AI-Native vs AI Augmentation. What is AI-Native and when is it worth the investment to improve productivity, usability and trust in decision making?
- Moving beyond the GenAI Conversational Format: It's easy to get mired in technology and give less attention to GenAI-driven user experiences beyond the canonical conversational format. Understand how to improve productivity and value via a new user-driven approach to GenAI.
- Easing GenAI User Cognitive Load: Effectively leveraging GenAI requires a significant shift in the user's mental model of workflows and tasks in order to produce desired outcomes. Learn new UI paradigms for guiding users to desired outcomes and decreasing cognitive load, and identify which outcomes are worth the extra cognitive load and investment.
- Increasing User Trust with AI: Learn how to better address critical issues like data veracity, exploration efficiency and user skepticism by including Humans in the Loop. We’ll discuss several methods such as Trust Triggers, Human Evaluations, CRAG and Guided Prompt Paradigms.
- Overcoming GenAI User Adoption Hurdles: Expert users have complex mental models, reasoning and instincts that drive decision making. Learn how to augment the way they think with GenAI.
- Key UI Paradigms for GenAI: GenAI-driven experiences need to evolve with new paradigms that augment conversational formats. Discover a structured UI approach and paradigms to decrease cognitive load, improve usability and enable better decision making.
1
00:00:13.760 --> 00:00:38.690
Laura Smith: Alright. Everyone, thanks so much for attending our webinar today, GenAI and Finance: Human Usability Matters. Just wanted to kick off by letting you all know we're recording today's session, so we will send out a follow-up email with a copy of the recording. If you have any questions during the webinar, feel free to drop them in the Q&A box at the bottom of your Zoom meeting, or you can chat me, the host. I'll be happy to help as well. With that, I know we've got a full
2
00:00:39.083 --> 00:00:46.560
Laura Smith: schedule today, so I'll get started with our speakers. We have Lynn Pausic, Tim Baker, and Karim Jamal.
3
00:00:47.910 --> 00:00:51.995
Tim Baker: Great thanks, Laura. So we are going to get straight into it.
4
00:00:53.240 --> 00:00:56.913
Tim Baker: So let me see if I can make my there we go?
5
00:00:57.960 --> 00:01:03.986
Tim Baker: so we'll do intros in a minute. But this is what we're going to talk about. Lots and lots of words.
6
00:01:04.980 --> 00:01:10.590
Tim Baker: today. We're going to explain a few words, and we're going to introduce some new ones, like generative UX.
7
00:01:11.346 --> 00:01:16.409
Tim Baker: But generative AI is already creating a seismic shift
8
00:01:16.903 --> 00:01:21.289
Tim Baker: in mental models. And Lynn's going to talk about what a mental model is.
9
00:01:21.380 --> 00:01:28.480
Tim Baker: you know. But what we're talking about is how this technology is changing how humans interact with data with content.
10
00:01:28.520 --> 00:01:30.190
Tim Baker: how they create data.
11
00:01:30.290 --> 00:01:33.028
Tim Baker: But most importantly, how they make decisions.
12
00:01:33.740 --> 00:01:46.390
Tim Baker: The reality is that agents and add-ins and co-pilots aren't the panacea for a poor user experience, especially as, with most of our client engagements, there is an expert user in the mix.
13
00:01:46.560 --> 00:01:53.650
Tim Baker: So today's discussion will delve into why building a UX for expert users
14
00:01:54.421 --> 00:01:57.849
Tim Baker: differs from the typical consumer interface.
15
00:01:58.040 --> 00:02:02.550
Tim Baker: and we'll talk about how AI and ML are being sprinkled in, and
16
00:02:02.660 --> 00:02:11.840
Tim Baker: you know how maintaining trust in that process, and the integrity of the of the interaction is so important. And we're going to give some live examples.
17
00:02:11.970 --> 00:02:17.990
Tim Baker: and we'll show a couple of video clips at the end to kind of help. Bring this this all to life.
18
00:02:18.710 --> 00:02:19.750
Tim Baker: So
19
00:02:20.660 --> 00:02:21.799
Tim Baker: on the
20
00:02:23.210 --> 00:02:25.640
Tim Baker: on the panel today. I've got
21
00:02:25.710 --> 00:02:31.409
Tim Baker: 2 experts from Xpero to help kind of unravel this kind of word
22
00:02:31.908 --> 00:02:39.270
Tim Baker: mix. We have Lynn, the co-founder of Xpero, who runs our user experience and product design practice.
23
00:02:39.480 --> 00:02:47.729
Tim Baker: And Karim is our kind of proxy for our clients. He works a lot with the clients in the trenches as a senior tech lead and architect.
24
00:02:48.244 --> 00:02:58.219
Tim Baker: But both are actively involved in working with clients on these problems. And I'm Tim Baker, and I lead the financial services practice here at Xpero.
25
00:02:59.210 --> 00:03:12.119
Tim Baker: So before we kick off, the quick infomercial for those of you that don't know Xpero: we have over 20 years designing and deploying digital solutions
26
00:03:12.130 --> 00:03:18.580
Tim Baker: with a relentless focus on creating productive and powerful experiences, especially for expert users.
27
00:03:18.670 --> 00:03:24.829
Tim Baker: We found that trained users, experts in their fields have particular wants and needs
28
00:03:24.930 --> 00:03:27.760
Tim Baker: that transcend the industries they work in.
29
00:03:28.140 --> 00:03:36.500
Tim Baker: whether managing electric grids for safety, prospecting for hydrocarbon deposits, managing pharmaceutical supply chains.
30
00:03:36.540 --> 00:04:01.959
Tim Baker: or, in the case of finance, maybe managing client assets. They all work differently than most users. They explore their data. They have high levels of expertise and require high performance from their software. And they really work with high stakes involved, with more complex data, and success and failure at their day-to-day job is mission critical to them and their firms.
31
00:04:02.050 --> 00:04:08.579
Tim Baker: so they need solutions that are deeply useful and not just intuitive or pretty.
32
00:04:09.134 --> 00:04:15.070
Tim Baker: So we've built solutions across many industries. With these demanding requirements in mind
33
00:04:15.210 --> 00:04:21.459
Tim Baker: and our digital solutions practice that Lynn leads has designed and built products,
34
00:04:21.993 --> 00:04:25.269
Tim Baker: as well as working with our full stack team.
35
00:04:25.682 --> 00:04:38.959
Tim Baker: which ranges from product strategy, user experience, architecture and development. And then we work across a variety of different types of engagements, from modernization to kind of new build
36
00:04:38.990 --> 00:04:40.990
Tim Baker: or building POCs.
37
00:04:43.180 --> 00:04:50.809
Tim Baker: and then quickly in financial services. We work across all of the range of financial services, from wealth tech to banking and risk
38
00:04:51.371 --> 00:05:05.468
Tim Baker: and as with other industries, we tend to favor complex domains like fraud detection, AML, trading systems, and advanced analytics. So if you'd like to know more about Xpero, hopefully, by the end of
39
00:05:05.900 --> 00:05:11.900
Tim Baker: this video, if you're watching, or the webinar, if you're live, then please do reach out to us.
40
00:05:12.350 --> 00:05:14.909
Tim Baker: Okay without further ado.
41
00:05:15.413 --> 00:05:26.626
Tim Baker: Let's start by outlining some of the most common pain points that we see in finance. And Karim, you and I deal a lot with clients,
42
00:05:27.020 --> 00:05:44.629
Tim Baker: and we've had a few projects recently where, you know, the interface has been, let's say, somewhat lacking. So what are some of the issues that you've seen over the years, and what have been some of the resulting inefficiencies that these challenges tend to lead to?
43
00:05:45.240 --> 00:06:13.810
Karim Jamal: Yeah, so yeah, before we even get into the the Gen. AI, and how that can alleviate some of these problems. Let's really talk through what they are. So we were focused on. You know what what we're trying to solve here. You know, there's there's the notion of disparate data, right? So there's data sources that get subscribed to. There's different places where that data is either coming in and being ingested scraped. What have you right. And they're all coming in in different forms.
44
00:06:13.810 --> 00:06:15.300
Karim Jamal: different factors.
45
00:06:15.594 --> 00:06:39.999
Karim Jamal: and there's a very real challenge of finding the real estate on already crowded desktops right to make room for that one other piece of data that's coming in right? And one way to do it is adding another screen like you you see in the image here, right? But that only scales so much. At some point they're probably gonna start tipping over, or you're the next cubicle is gonna start complaining that, hey? You know, you're
46
00:06:40.000 --> 00:06:47.269
Karim Jamal: crowding into my space. So you know, how how do we? How do we help sort of consolidate and reverse a little bit of that.
47
00:06:47.950 --> 00:07:03.729
Karim Jamal: And with these disparate data sources many of them have their own. You know, applications that are used to render them either proprietary or open. But they're not fitting in existing applications. And so for that, you need yet another application.
48
00:07:03.730 --> 00:07:20.630
Karim Jamal: each one having its own workflow user experience, keyboard shortcuts and the like. Right? So you're you're having to balance a lot of different information. And it, it just breaks your flow and slows you down as you switch between apps because of all that context switching
49
00:07:21.395 --> 00:07:45.490
Karim Jamal: and with that actually comes a lot of high cognitive load, too. Right. So you're having to keep for for each application you're having to keep all those you know specifics and and quirks of each app in your mind you're at. You're also having a lot of different types of data being thrown at you in different ways. Cause you know, they're all exposing it differently different color schemes, different flashing
50
00:07:45.850 --> 00:08:03.410
Karim Jamal: things like that. And the problem here is that you're being loaded up. But then you're also being expected to make make quick inference of that data coming in. So you can make timely decisions right? Because it's all about timing. Right? It's you have to react quickly. Take it in and react quickly.
51
00:08:03.850 --> 00:08:21.229
Karim Jamal: and then a lot of it is just repetitive actions, right? Doing the same thing over and over and over again, either to generate some reports. So you can make those decisions, or, you know, just collating the data sources. So you can figure out some some trends over. You know how you what actions you want to take.
52
00:08:21.726 --> 00:08:23.789
Karim Jamal: You know, for the rest of the day type thing.
53
00:08:24.570 --> 00:08:30.890
Tim Baker: Yeah. And I see these patterns across. You know, asset managers on the buy side, but also banking
54
00:08:30.960 --> 00:08:35.209
Tim Baker: and trading. You know, lots of applications, information overload
55
00:08:35.659 --> 00:08:44.040
Tim Baker: and lots of what our friends at OpenFin call swivel chair. Lots of switching between dozens of different applications.
56
00:08:47.510 --> 00:08:56.319
Tim Baker: So what kind of regulatory kind of requirements like do we have to think about in in these spaces? There's a few listed here.
57
00:08:56.960 --> 00:09:10.990
Karim Jamal: Yeah, I mean, I think what makes finance a little different is is that it is highly highly regulated. As you mentioned, right? There's like so explainability, accuracy, confidentiality, are very high high needs here.
58
00:09:11.190 --> 00:09:30.870
Karim Jamal: but also the need need for humans to be able to sort of make the call. Right. So here's the data presented at you. Now you're the expert in the room. You need to take that and make a human call out of it right. And of course, institutions are always looking for more and more output. So you know, profits, trades
59
00:09:31.773 --> 00:09:33.339
Karim Jamal: higher returns
60
00:09:34.024 --> 00:09:56.279
Karim Jamal: and we'll discuss all of these in a bit, but it's just the you have more and more information being thrown at you. The business wants more and more output. Yet you still need to somehow make all these decisions quickly, with high confidence, right? Because of the highly regulated market. So it's it's a lot of sort of conflicting things that have to strike the right balance to make this flow work.
61
00:09:57.050 --> 00:10:14.680
Tim Baker: Yeah, a lot of our clients are going through transformation right now. We do a lot of design work where we'll work, say, with OpenFin, and we'll figure out how these applications can, you know, work together more effectively. But I think this whole notion of generative AI,
62
00:10:14.880 --> 00:10:21.109
Tim Baker: I think, may also, you know, provide some other opportunities to to improve productivity at the user end.
63
00:10:21.260 --> 00:10:29.680
Tim Baker: So, Lynn, let's talk more about the user experience side of things. At Xpero, we tend to work with those experts, with
64
00:10:29.730 --> 00:10:32.440
Tim Baker: traders and investors and engineers.
65
00:10:32.670 --> 00:10:40.569
Tim Baker: How do experts think differently about their tasks against kind of regular, you know, consumers and and individuals.
66
00:10:42.050 --> 00:10:43.809
Lynn Pausic: Yeah, no, that thanks, Tim.
67
00:10:43.870 --> 00:10:50.500
Lynn Pausic: Yeah. So as was mentioned, you know, in the financial sector and through many,
68
00:10:50.500 --> 00:11:14.120
Lynn Pausic: many industries, where they're complicated and do mission critical things, what you really have, the lion's share of the audience, is what we would call domain expert users. So there's some B2C in there, but it's a lot of B2B kind of users, where they are bringing tons of expertise to bear. And they have a tremendous
69
00:11:15.180 --> 00:11:40.179
Lynn Pausic: set of experiences, and these built up instinct factors over time. And you know what we're seeing out there, not just in finance, but across a variety of these complex domains is this seems to be getting glossed over. There's a lot of emphasis on productivity which yeah, that's something that can be an outcome of generative AI, and there's some you know, attention to. Okay, everybody wants to hopefully have
70
00:11:40.180 --> 00:12:05.079
Lynn Pausic: have meaningful outcomes, trustworthy outcomes. But when you're talking about domain experts. Who they have a good spidey sense for what an outcome should look like or what range it should be in. Or, you know, there's a whole litany of potential factors there. And if they smell something that is off, you're gonna lose their trust right away. And and really in finance. And again.
71
00:12:05.080 --> 00:12:13.809
Lynn Pausic: across these other industries, we're really just scratching the surface of that. It's starting to get into, okay, moving beyond just, wow, there's this really
72
00:12:13.810 --> 00:12:37.670
Lynn Pausic: natural language, conversational format, lots of different ways to access information and customize it, and all the things but making sure that it meets the needs of these kinds of expert users, and ideally, whether it's a conversational format or some other format of generative AI. What these folks need is augmentation.
73
00:12:37.670 --> 00:13:00.689
Lynn Pausic: They need to have the technology feel more like a trusted colleague that they can tap on the shoulder and ask ask questions or generate. You know, new data generate visuals, say all kinds of different ways of further exploring and understanding. To get to an answer. They're not typically just expecting an answer to pop out from a question that's asked. Right?
74
00:13:00.690 --> 00:13:05.710
Lynn Pausic: These users. They're exploring, they're investigating. They are
75
00:13:05.720 --> 00:13:32.700
Lynn Pausic: using guideposts along the way to get from one end of a problem to another, and it often involves many steps. Generative AI may be one of them, and there'll be other tools in there. So done right, you know, we can augment all of this knowledge that they're bringing to the table. And we have some examples today where we're going to talk to you about some new paradigms and some ways of helping to shore up veracity a bit.
76
00:13:33.275 --> 00:13:56.744
Lynn Pausic: But done wrong. You will lose these users if they, if it doesn't meet their expectations for trustworthiness or helping them along the way to be able to continue to explore and get to the outcomes that they seek and excuse me. So that that's some of what we want to talk about today. Now, I'm gonna get into here.
77
00:13:57.500 --> 00:14:22.210
Lynn Pausic: Next thing, a little bit of explaining. Tim, if you get the next slide. Yeah, explaining a little bit more about, you know, kind of domain expert users and kind of their mindset. So on the left here we have B2C users, and on the right, we'll say domain expert users. So in finance, these are traders, these are compliance experts, fraud investigators, wealth managers, right,
78
00:14:22.210 --> 00:14:47.180
Lynn Pausic: etc, etc, front office, back office. These are people that bring a tremendous amount of experience to the table. So just some to highlight some comparisons here, if something is simple and it's like more consumer focused. Or maybe it's a simpler domain. Often there is a very clear start and a very clear end to the task or or things that the user is trying to accomplish like with a piece of software
79
00:14:47.494 --> 00:15:14.839
Lynn Pausic: on the domain expert case. It's a lot more exploratory. Typically like they know, they need to arrive at a decision, or they're looking for a certain answer, a certain set of insights and answers to eventually come to fruition. But it's a journey, and it can be a very explorative. And there's not a very clear point, A to Point B, and how to get there. Which gives us a bit more of a challenge. Whether it's generative, AI or just something
80
00:15:14.840 --> 00:15:39.665
Lynn Pausic: more deterministic. That's always been a challenge in creating these types of experiences: how do you keep users on the rails so they don't end up down a rabbit hole? And that's more true than ever now with generative AI. But how do you let them have the capability to explore, both in the data and the navigation, and maintain that trust? These domain expert users have
81
00:15:40.010 --> 00:15:51.570
Lynn Pausic: a high degree of expertise and experience under their belts that they bring to bear on how they're interpreting what they see, interpreting what generative AI is presenting to them,
82
00:15:52.046 --> 00:16:15.199
Lynn Pausic: versus someone that is a non-expert user who, you know, just might not question the integrity of what they're seeing, right? They just accept it kind of carte blanche. It's like, oh, this is what GenAI recommended, I should do that, right? Expert users aren't going to do that. They're gonna question integrity along the way. And this is not new, they've been doing this for decades, right? So in software, pre-
83
00:16:15.200 --> 00:16:25.330
Lynn Pausic: AI. It might be a deterministic piece of information, a prediction, a, you know, recommendation whatever that's done in a deterministic way, where
84
00:16:25.340 --> 00:16:40.330
Lynn Pausic: for that to feel trustworthy. The expert has to be able to see, perhaps, how a calculation was derived. You know, and so on, and so forth. And giving that that explanation of like. Well, why is this, you know, recommended? etc.
85
00:16:40.640 --> 00:17:09.197
Lynn Pausic: And then, you know, a couple of other things real quick. Overall, whenever you are an expert, you're typically involved in sort of a mission critical business, right? It could be mission critical: I'm a fraud investigator, and I have one minute to determine whether this transaction, or KYC, do I know this counterparty, should be allowed to pass through or not. And you have to make those decisions real quick.
86
00:17:09.569 --> 00:17:35.780
Lynn Pausic: Prior to all the wonderful AI driven recommendations that can move very quickly. Those users are running on instinct, and they just kind of get a feel for it right over the years. If things are less critical, right whenever you're in a simpler domain or non expert users. It's less likely to have a big impact. Like it will when something is critical, like keeping the lights on in the electric grid, or being able to determine if a transaction should go through.
87
00:17:36.333 --> 00:17:51.200
Lynn Pausic: Now, Tim, I want to talk about that study that was done. So as we think about expert and non-expert users, there's this interesting study that was done by BCG, and what they were looking at is
88
00:17:51.350 --> 00:18:09.304
Lynn Pausic: doing tasks utilizing AI and then non utilizing AI, and they were tracking how the AI performed, and in some cases you could see kind of the that line there that looks pretty squiggly. That's AI, and did it perform well on a certain task, or did it perform poorly.
89
00:18:10.130 --> 00:18:24.110
Lynn Pausic: In terms of the context of the conversation today, the key takeaway from this is that what this study was after, comparing using AI and not using AI, was AI performance, but also productivity gains.
90
00:18:24.120 --> 00:18:33.359
Lynn Pausic: But what also came out of this study is that our expert users could also tell whenever AI
91
00:18:33.360 --> 00:19:02.210
Lynn Pausic: was not working properly, like it just felt it it. The outcomes were off in some cases dramatically off and so that was another outcome of this, like, yeah, productivity. But at what cost? So we might be more productive perceived productivity. But there's a lot of inaccuracies in there. So it was an interesting study, and you know there's there's a link there. If you get the chance. Go ahead, Tim.
92
00:19:02.210 --> 00:19:12.640
Tim Baker: Yeah. And I think this study came out, you know, a while ago. But what struck me was that that blue curve where the you know, the
93
00:19:12.700 --> 00:19:14.200
Tim Baker: really, the the
94
00:19:14.290 --> 00:19:18.949
Tim Baker: that just shows the shift in productivity. And it was quite material. And this was when
95
00:19:19.130 --> 00:19:36.639
Tim Baker: the tools were not as reliable, and it was earlier in the kind of adoption curve. So I'd imagine that you know tools have got better already, and that curve is even further shifted to the right. But the need for a human in the loop is definitely still there, I think. And we'll talk about some of those those issues.
96
00:19:37.148 --> 00:19:46.700
Tim Baker: We did do a little bit of an inventory. Of some of the most common use cases that that we that we see see and hear about, and I think
97
00:19:47.300 --> 00:19:53.499
Tim Baker: some of the banks have famously come out and said there are hundreds of use cases that they've identified, that they're all very kind of point.
98
00:19:53.570 --> 00:19:58.260
Tim Baker: you know, point solutions in a way. So, Lynn, how do you think about
99
00:19:58.830 --> 00:20:06.970
Tim Baker: how we need to kind of think about AI in the context of the user experience. Is it enough to just deliver these point solutions for each task?
100
00:20:07.090 --> 00:20:14.220
Tim Baker: Or, just strap a chat bot on the front of, you know software, and that solves all the problems.
101
00:20:15.450 --> 00:20:18.209
Tim Baker: you know. Where? Where do you kind of come out on this stuff.
102
00:20:18.840 --> 00:20:32.020
Lynn Pausic: Yeah. Well, 1st and foremost, and we'll talk about this at near the end of our discussion today. Understanding the the problems in this case, in finance, the user pain points
103
00:20:32.380 --> 00:20:56.890
Lynn Pausic: the potential use cases, and juxtaposing that against what AI is good at, right, what generative AI specifically is good at and not good at, is huge. And from that, let's say, in the front office and the back office, you see things there like summarization and recommendations, and being able to enable exploration. And next we have a few examples, so we want to
104
00:20:56.890 --> 00:21:15.787
Lynn Pausic: touch on real world concrete examples of how this can shake out in the front office and the back office sometimes, and and some of the the paradigms that we're using with domain expert users specifically, and some non expert users. In the experience. So
105
00:21:16.700 --> 00:21:27.996
Lynn Pausic: so this 1st one here, when it's we're gonna press on today, a bit sort of moving beyond that canonical conversational format. Great great format. Very inviting.
106
00:21:28.400 --> 00:21:53.109
Lynn Pausic: However, there are more things we can augment. The first thing we want to talk about here today is not really having a conversational format at all, but thinking about generative AI agents and being able to leverage those as a way of having personalization, mass customization. And these GenAI agents are
107
00:21:53.110 --> 00:22:06.079
Lynn Pausic: intelligent agents that are always kind of running in the background. Right? You could have many of them. To be able to produce an experience. Produce the the data and the content
108
00:22:06.700 --> 00:22:26.700
Lynn Pausic: and but they're always running in the background in this case, like an example. So what you're looking at here is a generative experience for wealth advisors. One of the things that the agents are doing all the time. Is considering a lot of different factors. They're generating. Call lists.
109
00:22:27.005 --> 00:22:50.800
Lynn Pausic: of the wealth manager's clients they should call on, for various reasons. It could be because of a shift in strategy, it could be something that happens in the news, it could be client sentiment has changed, it could be they just haven't touched base with them in a while. But these agents are continuously looking for: who are those clients that really require a touch point in the next 24 hours? Who
110
00:22:50.800 --> 00:23:11.409
Lynn Pausic: who can we have that next best conversation with right? And then, in addition to the agent going through and being able to prioritize those clients, we also are layering on summarization aggregation capabilities where pairing the client with. All right. Here are the top things that
111
00:23:11.830 --> 00:23:29.930
Lynn Pausic: The wealth manager would want to know about their portfolio. Here are considerations in the news here. Considerations from a number of different data sources. To be able to have that that conversation in a in a much smarter way, right there at their fingertips. So huge productivity gain for the user.
112
00:23:29.930 --> 00:23:42.930
Lynn Pausic: Actually, in this case, there's no conversational UI involved. It's a lot of agents and a lot of AI doing what it does really well on the generative side, summarizing and aggregating. I don't know, Tim, if you had anything that you want to add.
113
00:23:42.930 --> 00:23:43.720
Tim Baker: No, we I'm.
114
00:23:43.720 --> 00:23:44.130
Lynn Pausic: Shut up!
115
00:23:44.130 --> 00:23:56.811
Tim Baker: At the end, you know, we're working with a client specifically on this, and the client was actually on the last webinar we did on this topic. It's a company called IPC. So I'll show you some of
116
00:23:57.150 --> 00:24:01.649
Tim Baker: the initial designs that we've done as a video, and you'll get a sense for this. But I think
117
00:24:01.700 --> 00:24:04.080
Tim Baker: this project in particular
118
00:24:04.420 --> 00:24:13.360
Tim Baker: just brings so many different disciplines together in the background. You're right. There's there's there's, you know, there's AI built in, but it's also
119
00:24:13.520 --> 00:24:28.650
Tim Baker: integrating with Salesforce, it's integrating with your CRM, so it's moving from those disparate, separated pieces of software and actually putting them all together in a super intuitive way. And one of the issues with,
120
00:24:28.760 --> 00:24:33.640
Tim Baker: you know, certainly with traders, is they're not very good at filling out that CRM interaction.
121
00:24:34.210 --> 00:24:39.280
Tim Baker: So with a telephony solution you can be listening in. And the summarization
122
00:24:39.340 --> 00:24:42.009
Tim Baker: and that call report can be written for you.
123
00:24:42.400 --> 00:24:48.069
Tim Baker: So that's a huge gain, you know, for the institution as well. So it's really improving
124
00:24:48.150 --> 00:24:56.360
Tim Baker: the performance of the individuals. The clients like it because they're getting much more targeted calls. They're not just getting the same call as everyone else.
125
00:24:56.560 --> 00:25:06.269
Tim Baker: and the CRM interaction is filled out. So by just joining up all of those workflows, there are huge benefits for the individual and the enterprise.
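As a rough illustration of the background-agent pattern Lynn and Tim describe here, the sketch below scores a wealth manager's clients on a few signals and pairs each high-priority client with a short generated briefing. All client names, signal weights and the summarize() stub are hypothetical; a real system would call an LLM and live data sources instead.

```python
# Minimal sketch of a background agent producing a prioritized call list.
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    days_since_contact: int
    sentiment_shift: float      # negative = sentiment worsened
    portfolio_drift: float      # deviation from target allocation, 0..1
    in_the_news: bool

def priority_score(c: Client) -> float:
    # Blend of the signals mentioned above: stale touch points, sentiment,
    # strategy/portfolio drift, and news events. Weights are illustrative.
    return (0.02 * c.days_since_contact
            + 0.5 * max(0.0, -c.sentiment_shift)
            + 0.3 * c.portfolio_drift
            + (0.4 if c.in_the_news else 0.0))

def summarize(c: Client) -> str:
    # Placeholder for a GenAI summarization call over portfolio, news, CRM notes.
    return (f"{c.name}: last contact {c.days_since_contact}d ago, "
            f"portfolio drift {c.portfolio_drift:.0%}, "
            f"{'in the news' if c.in_the_news else 'no recent news'}.")

def build_call_list(clients: list[Client], top_n: int = 3) -> list[str]:
    ranked = sorted(clients, key=priority_score, reverse=True)
    return [summarize(c) for c in ranked[:top_n]]

if __name__ == "__main__":
    book = [
        Client("A. Rivera", 45, -0.3, 0.12, True),
        Client("B. Chen", 10, 0.1, 0.02, False),
        Client("C. Osei", 90, 0.0, 0.25, False),
    ]
    for line in build_call_list(book):
        print(line)
```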
126
00:25:06.270 --> 00:25:11.829
Karim Jamal: This is actually a great example of fighting the urge to just throw a chat bot at it.
127
00:25:11.830 --> 00:25:12.230
Tim Baker: Yes.
128
00:25:12.460 --> 00:25:13.850
Lynn Pausic: Yes, exactly.
129
00:25:14.165 --> 00:25:40.379
Karim Jamal: One thing we've noticed is that, you know, regular users or B2C users are okay using chat, right, like ChatGPT, or talking to a chat bot. But more and more, the expert users don't want that interaction model as much. They may in some cases, but a lot of the time they want it as a supplemental tool that sort of automatically surfaces the information they need, right, so they can continue on their workflow without hindrance.
130
00:25:40.683 --> 00:25:50.089
Karim Jamal: To take the co-pilot term, right, you've probably heard that a lot, but taking it quite literally, you know, think of regular B2C users as
131
00:25:50.090 --> 00:26:18.743
Karim Jamal: passengers in a plane, right? They just want, take me from here to there, right? So just answer this question for me, or find out that information. Versus a true co-pilot, who might say, hey, there's some patchy weather coming up ahead, do you wanna turn the seat belt sign on? Right? So think of it in that sort of a mindset, where it's automatically gathering the information and giving you some recommendations that you may or may not want to act on. And then there's different business models, right?
132
00:26:19.030 --> 00:26:25.310
Karim Jamal: If you're a passenger and you want some peanuts, then you can upcharge for those peanuts, right? And so,
133
00:26:25.310 --> 00:26:25.860
Lynn Pausic: Like.
134
00:26:25.860 --> 00:26:41.940
Karim Jamal: ChatGPT, you can subscribe to their monthly thing and, like, you know, pay for the tokens and stuff. But you probably don't want to do that to the pilots, right? Because, yeah, they're flying the plane, so you know, don't try to mess with their flow or try to upcharge them at the last minute,
135
00:26:41.940 --> 00:26:57.219
Karim Jamal: and then if your card is not on file, then it's only contactless delivery, and then you're sort of stuck, right? So those types of decisions, and the type of expert users, actually influence a lot of things, including UX, and then business model considerations and all that as well.
136
00:26:57.980 --> 00:27:06.327
Lynn Pausic: Yeah, no, well said, Karim. And this is a great segue; you were talking about recommendations, right?
137
00:27:06.690 --> 00:27:30.314
Lynn Pausic: So the context here for this particular example and set of paradigms happens to be a combination of anti-money laundering and cyber, and actually linking those two together to help fraud investigators better understand the full picture of how and where fraud is happening.
138
00:27:30.760 --> 00:27:52.850
Lynn Pausic: but what we're again moving beyond just that conversational experience. In this case we have some agents running and what they're doing on a routine basis is summarizing information. So you're getting a readout now to arrive at this moment a user would probably have received, like an alert or something where they're they're getting into.
139
00:27:53.230 --> 00:28:15.169
Lynn Pausic: And the specific moment here, where we can say, Here is here is a summary over there on the right. I know it's small of events. That have occurred in a little bit of of rationale, and then below that, based on setting that context for the user. The generative AI has come up with some recommendations. Now.
140
00:28:15.260 --> 00:28:18.397
Lynn Pausic: we talked earlier about the workflows.
141
00:28:19.190 --> 00:28:28.780
Lynn Pausic: and finance particularly can become very complex. The way the recommendations are being leveraged here is not to, as Karim said, just
142
00:28:28.780 --> 00:28:51.440
Lynn Pausic: go from point A to point B in one flight. It is a stepping stone to start the investigation of what has happened. That is what GenAI is recommending in this case. It's saying, here are some interesting things, anomalies that are, you know, somehow maybe tied to some other, you know, nefarious-looking individuals.
143
00:28:51.769 --> 00:29:16.180
Lynn Pausic: And so we think these are the AI thinks this is some interesting stepping stones right? User can go in and we don't show it depicted. Here we have some examples later. To be able to say, okay, you gave me these recommendations. Why should I trust them? What about the provenance? What about the source? What about you know? The rationale, etc? So combining those 2 methods?
144
00:29:16.180 --> 00:29:22.389
Lynn Pausic: And it starts the journey. It's not a finite journey asking a prompt, you know one question.
145
00:29:22.830 --> 00:29:50.859
Lynn Pausic: And then I think we can expand on that a little bit, Tim, if you want to go on. Yeah, so this is the same context, still know-your-counterparty, still AML fraud, and here we're getting into looking at counterparties. So we've had that stepping stone come in, and this might be an example of how a next step would be created. Now, what generative AI is doing for us here is continuing to summarize. At the very top of the screen there, you can see it highlighted,
146
00:29:51.228 --> 00:30:10.781
Lynn Pausic: some information, you know, setting the context for what the user is seeing. This is what they might see when they click into one of those recommendations, and we have here broken out additional detail that would have been behind one of those recommendations. Well, these different
147
00:30:11.150 --> 00:30:23.619
Lynn Pausic: IP addresses, these accounts. These entities or individuals are being flagged for a reason. There's a little bit on why they're being flagged. Some risk level there
148
00:30:24.000 --> 00:30:34.770
Lynn Pausic: and then over on the right we have sources. The user could drill into those sources to understand, you know, to get to, using RAG architecture, some of those original
149
00:30:35.530 --> 00:30:44.605
Lynn Pausic: pieces of content. That live within those sources. So we're building trust. But now also something else that's being done here is
150
00:30:45.040 --> 00:30:52.170
Lynn Pausic: based on those findings. AI is also generating a visualization for us. And it's an interactive visualization.
151
00:30:52.170 --> 00:30:54.309
Tim Baker: So that's the generative UX.
152
00:30:54.350 --> 00:30:56.000
Tim Baker: That that's that's what we call.
153
00:30:56.000 --> 00:30:56.640
Lynn Pausic: Over on the right.
154
00:30:56.640 --> 00:31:08.229
Tim Baker: So the AI is going. Show me now, because we've we've led to this point. Now show me those relationships. And that's controlling the interface for the user. That's very cool.
155
00:31:08.230 --> 00:31:37.489
Lynn Pausic: Yeah. And so on the left. You have the tabular version of that, and on the right. Now we can see it visually, which you know, if you want to be able to understand in terms of fraud, and know your counterparty, who is connected to what and who may be, you know higher risk. That's what the color represents on the left, in the tabular data as well as in the visualization. Now, you can really start to understand if you're a fraud investigator, how this comes together? Well, we have some cyber issues going on. We have some individuals that may.
156
00:31:37.490 --> 00:31:52.924
Lynn Pausic: We have previous fraud. They're higher risk. And we can can combine the 2. And then further, we have that that layer of starting to build trust with the expert user saying, hey? We're revealing the source. Go check it out for yourself. Right?
157
00:31:53.900 --> 00:31:54.770
Lynn Pausic: So what's.
158
00:31:54.770 --> 00:32:03.150
Tim Baker: Moving away from that kind of rigid, you know, text box interaction and actually pulling in visualizations and explaining the data.
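As a minimal sketch of the source-attribution idea Lynn describes (an answer object that carries the retrieved passages it was grounded on, so the UI can render drill-down sources), here is an illustration with retrieval and generation stubbed out. In practice these would be a vector store and an LLM call; the document IDs and scoring are made up.

```python
# RAG-style answer that keeps its sources attached for the trust-building UI.
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str
    text: str
    score: float

@dataclass
class GroundedAnswer:
    answer: str
    sources: list[Passage]   # surfaced alongside the answer so the user can verify

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[Passage]:
    # Toy lexical retrieval: score by shared words with the query.
    q = set(query.lower().split())
    scored = [Passage(doc_id, text, float(len(q & set(text.lower().split()))))
              for doc_id, text in corpus.items()]
    return sorted(scored, key=lambda p: p.score, reverse=True)[:k]

def answer_with_sources(query: str, corpus: dict[str, str]) -> GroundedAnswer:
    passages = retrieve(query, corpus)
    # Placeholder for the generation step, constrained to the retrieved text.
    summary = " ".join(p.text for p in passages)
    return GroundedAnswer(answer=f"Based on {len(passages)} sources: {summary}",
                          sources=passages)

if __name__ == "__main__":
    corpus = {
        "sar-0042": "Account 881 flagged for structuring across three counterparties.",
        "cyber-17": "IP 10.2.3.4 linked to credential stuffing on account 881.",
        "news-9": "Regional bank reports rise in wire fraud attempts.",
    }
    result = answer_with_sources("why was account 881 flagged", corpus)
    print(result.answer)
    for p in result.sources:
        print("source:", p.doc_id)
```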
159
00:32:03.270 --> 00:32:08.590
Tim Baker: So let let's talk a little bit about this notion of a mental model.
160
00:32:09.290 --> 00:32:17.040
Tim Baker: which you I'd never heard that before, so you had to explain it to me yesterday. But everyone's talking about these conversational experiences. But
161
00:32:17.350 --> 00:32:26.039
Tim Baker: but with all these great experiences and technology, we're actually kind of glossing over, you know this, what you call a seismic shift in the mental model.
162
00:32:26.170 --> 00:32:27.140
Tim Baker: So
163
00:32:27.680 --> 00:32:29.280
Tim Baker: we're placing a bet.
164
00:32:29.932 --> 00:32:37.709
Tim Baker: that the additional cognitive load on the users is worth the productivity trade-off.
165
00:32:39.240 --> 00:32:40.490
Tim Baker: can you kind of
166
00:32:40.510 --> 00:32:47.760
Tim Baker: double click on what you mean by that? And and this seismic shift in the mental model? What is the mental model.
167
00:32:48.230 --> 00:32:51.469
Lynn Pausic: Yeah, yeah, yeah. So
168
00:32:51.760 --> 00:33:15.829
Lynn Pausic: First, this is one of a couple of things we're gonna talk about here that's highly applicable to both expert users and non-expert users, because the world is shifting, you know; as soon as you're in the conversation, at the very least, the world is shifting, right, and the way that you did tasks before. So just to kind of level the playing field here, to get everybody on the same page: from a psychology standpoint, a user mental model
169
00:33:15.830 --> 00:33:22.729
Lynn Pausic: is is an internal kind of representation that they hold in their head about a process or a task.
170
00:33:23.012 --> 00:33:45.330
Lynn Pausic: And and how they think about it in in their head, versus how it exists in the real world. Their mental model may really mirror the real world, or it may kind of have its own kind of hops and steps. That's things that they think about, that they need to be doing while they're completing a task that other people wouldn't know, just looking at the software or just looking at some other tool that they're using right.
171
00:33:45.330 --> 00:33:45.710
Tim Baker: And we just.
172
00:33:45.710 --> 00:33:46.759
Lynn Pausic: Kind of what a mental model.
173
00:33:46.760 --> 00:33:49.300
Tim Baker: Okay, we kind of do that without thinking. In a way.
174
00:33:49.300 --> 00:34:13.589
Lynn Pausic: We do. Yes, we build mental models about everything. It could be how you get groceries right? What's your mental model if you're ordering from Instacart. What's your mental model of how you approach grocery shopping when you're in the store? Do you start off at home, going through your cupboards and having a list and then going to the store. And I right. Everybody has their own mental model, and there'll be some similarities, and there would be a lot of differences. If you had everybody kind of
175
00:34:13.590 --> 00:34:25.010
Lynn Pausic: draw out their mental model of grocery shopping, right? And mental models develop over time, right? So the more you engage with reality, we update the models in our head whenever reality changes,
176
00:34:25.010 --> 00:34:48.579
Lynn Pausic: Like you introduce new technology. For example, we have to change the mental model in our head with generative AI. What's happening? And as Tim pointed out, what's being glossed over is that it turns out it's a pretty big leap. It's not a little increment. It's a big leap. We've done it before, and we'll do it again. But this this one is is a little bit.
177
00:34:48.580 --> 00:34:50.500
Tim Baker: Feels big. It definitely.
178
00:34:50.500 --> 00:34:50.840
Lynn Pausic: It is.
179
00:34:50.840 --> 00:34:52.880
Tim Baker: Everything. It's touching everything.
180
00:34:53.159 --> 00:34:53.840
Tim Baker: It is.
181
00:34:53.840 --> 00:35:08.369
Lynn Pausic: And so with that you get increased cognitive load which can decrease productivity and also make users a little, maybe uncomfortable. But so when we talk about taking the leap here, here's a simple example. Right? So
182
00:35:08.370 --> 00:35:29.489
Lynn Pausic: a while ago, as some of you may remember, Yahoo, before there was Google. There was Yahoo and Lycos and other search engines where it was more about browsing. The Internet was a lot smaller, right? We would browse down a series of links and a taxonomy to get to a list of sites or products or things we were interested in. Right?
183
00:35:30.466 --> 00:35:33.550
Lynn Pausic: Then came Google, and Google
184
00:35:33.810 --> 00:35:35.880
Lynn Pausic: put out an empty field.
185
00:35:36.070 --> 00:36:00.030
Lynn Pausic: Right? Text field. Yeah, what what were people supposed to do with that? They were used to being prompted. They were used to Yahoo and Lycos feeding them the next step right now. Good news world is your oyster, but we have a blank slate. How do I begin to get to what I want, right mental model blown up on how you're going to find the products and sites and things that you're interested in. So. But guess what
186
00:36:00.030 --> 00:36:17.050
Lynn Pausic: we got used to it. Most people did took time. And my mom still struggled with it a bit. She's not a techie but you know, we got used to it, and it made us all more productive. And God knows we needed it because the Internet was about to really take off and scale right and
187
00:36:17.050 --> 00:36:23.343
Lynn Pausic: kind of a chicken-and-egg problem, Google enabled it to scale even further. So let's put this now in the context of
188
00:36:24.160 --> 00:36:26.890
Lynn Pausic: generative AI, so you can go to the next slide, Tim.
189
00:36:27.510 --> 00:36:42.749
Lynn Pausic: So if we think about user mental model for a simple, how did we do it before versus AI. So on the top there, let's say the goal was to create new content. If I'm a wealth manager, I want to create some new content for my clients. Well.
190
00:36:42.810 --> 00:37:06.599
Lynn Pausic: okay, that's lovely that I want to be able to do that; personalizing all that stuff takes an incredible amount of time. So you either get generic content, maybe based on a big cohort, or you spend a lot of time, you and your staff, personalizing things, right? But at any rate, it was something like: you sat down, you draft, you iterate, and then eventually you put it out there to your client, right? Enter generative AI:
191
00:37:06.896 --> 00:37:24.393
Lynn Pausic: the goal post has moved, and how we're gonna get there is pretty different. We're not necessarily sitting down and writing very much, right? So maybe the first thing is, maybe I do write a little bit, or maybe I copy and paste from something I've written before, to set the context and kind of train the LLM, right?
192
00:37:25.210 --> 00:37:43.285
Lynn Pausic: and then we get into probably a series of it, returning right? What it thinks you want. You're prompting it, and then it's returning, and you're now having to work through it with the automation to get to that outcome you want. So that's very different than
193
00:37:43.700 --> 00:38:09.160
Lynn Pausic: the mental model previously, if I'm gonna sit down and write this thing. And most users don't think like data engineers. And so that's really the seismic shift in the mental model, right? We're asking users to start to be able to think a little bit more like data engineers, individuals that understand how to get the LLM to work for them. We'll get there, but it's gonna take a while for people to get good at it, right?
194
00:38:09.160 --> 00:38:16.019
Tim Baker: And and I think one of the biggest new job postings has been around these people called prompt engineers.
195
00:38:16.080 --> 00:38:19.949
Tim Baker: And as I've kind of researched that that's kind of the
196
00:38:20.070 --> 00:38:22.049
Tim Baker: the behind the scenes.
197
00:38:22.100 --> 00:38:24.330
Tim Baker: prompting that you have to do
198
00:38:24.840 --> 00:38:35.470
Tim Baker: from an engineering standpoint, using, I guess, the OpenAI API to kind of coach this thing, without an end user having to do all of that manually.
199
00:38:35.928 --> 00:38:44.260
Tim Baker: And that's a you know. That's certainly something that our engineers have been experimenting with as one of the components of of our solutions that we're building.
200
00:38:44.550 --> 00:38:45.770
Lynn Pausic: Yeah, and then.
201
00:38:45.770 --> 00:39:07.429
Karim Jamal: It's like Google, you know, where you can specify sort of keywords, like a label or the site, to restrict your search. With this type of prompt engineering, you can, you know, start putting in some guardrails and stuff, like, hey, don't lie, or don't hallucinate, right, at the end of each prompt, which will change how it interprets the results and gives you different results based on that. A little more conservative.
202
00:39:07.430 --> 00:39:08.550
Tim Baker: Exactly. Yeah.
203
00:39:08.550 --> 00:39:12.869
Karim Jamal: And so that's just the process of learning. And loading up. You know, the new mental model.
204
00:39:12.870 --> 00:39:23.780
Tim Baker: Yeah, and being very specific, those guardrails are key in finance: don't hallucinate, point me at real data, or pull data from a trusted source.
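A rough sketch of the behind-the-scenes prompt engineering Tim and Karim describe: guardrails are injected as a system message so the end user never has to type them, and the resulting messages would then be passed to a chat-completion endpoint such as the OpenAI API mentioned above. The guardrail wording and helper names are illustrative, not a specific product's API.

```python
# Guardrails baked into the prompt on behalf of the user.
GUARDRAILS = (
    "Answer only from the provided context. "
    "If the context does not contain the answer, say 'not found' instead of guessing. "
    "Cite the document id for every claim."
)

def build_messages(user_question: str, context: str) -> list[dict]:
    # System message carries the guardrails; the user's question is wrapped with context.
    return [
        {"role": "system", "content": GUARDRAILS},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {user_question}"},
    ]

if __name__ == "__main__":
    msgs = build_messages(
        "What is the counterparty's risk rating?",
        "[doc kyc-12] Counterparty Acme Ltd rated medium risk, last reviewed 2024-03.",
    )
    for m in msgs:
        print(m["role"], "->", m["content"][:80])
```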
205
00:39:25.600 --> 00:39:34.000
Lynn Pausic: Yeah. And, Tim, I promise to keep it brief if you just go back a slide. Just to highlight kind of what? What you all are saying. So
206
00:39:34.210 --> 00:39:56.560
Lynn Pausic: across these three things, these are different UI paradigms that you can utilize in your user experience to keep the users on the rails. So on the far left we're starting off, it isn't a conversational format this time, but we're starting off with some options for the user, so they could get in there and just start with an empty prompt, or we're saying, hey,
207
00:39:56.560 --> 00:40:07.989
Lynn Pausic: you know, these are the things that you probably should be focused on so kind of pick a context and same thing with the second example here in the middle. If
208
00:40:08.140 --> 00:40:18.770
Lynn Pausic: in this case, just in this, excuse me, particular piece of software, you're you're actually asking the user to choose a context because we know
209
00:40:19.067 --> 00:40:41.392
Lynn Pausic: you know what those models contain and where it's kind of safe. Right? You're gonna get the veracity you're looking for. Hopefully, you don't have any hallucinations, and so we're putting keeping the user on the rails in a safe space by saying, Choose one of these 4 kind of areas to go after, and then you can get into the conversation right, and then the one on the far right, slightly different paradigm.
210
00:40:42.190 --> 00:41:07.129
Lynn Pausic: once you're on the rails, you could start to bake workflows into that conversational format. So what we have here at the end, the prompt coming back that the user can choose to take, is saying, okay, you know, we've gone through all these things, and the user can say, all right, I now want to go on to the next step, I want to generate something. So given some progress that has been made, the LLM
211
00:41:07.130 --> 00:41:17.939
Lynn Pausic: is coming back with some next steps appearing that might lead to additional conversation, or might generate a whole new kind of workflow, or,
212
00:41:17.940 --> 00:41:30.420
Lynn Pausic: you know, series of tasks for the user to do. And you can embed that as well. We're we're doing that that type of you know how to facilitate workflows within a conversational experience. Kind of a thing.
213
00:41:30.420 --> 00:41:34.829
Tim Baker: An example at the end of of a wealth example, a screener example
214
00:41:35.140 --> 00:41:47.020
Tim Baker: which I think is super powerful, especially for novice users who you're trying to deliver quite a complex result for. So it's sitting in between, you know, complex data and the user.
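As a minimal sketch of the "choose a context first" paradigm Lynn walks through, the example below has the user pick one of a few vetted areas, and that choice selects both the data scope and the prompt template before any free-form conversation starts. Context names and templates are hypothetical.

```python
# Guided-prompt paradigm: context selection constrains the conversation up front.
CONTEXTS = {
    "portfolio_review": {
        "scope": ["holdings", "performance"],
        "template": "You are assisting with a portfolio review. Use only {scope} data.",
    },
    "client_outreach": {
        "scope": ["crm_notes", "news"],
        "template": "You are drafting client outreach. Use only {scope} data.",
    },
}

def start_conversation(context_key: str, first_question: str) -> dict:
    if context_key not in CONTEXTS:
        raise ValueError(f"Unknown context: {context_key}")  # keep users on the rails
    ctx = CONTEXTS[context_key]
    system = ctx["template"].format(scope=", ".join(ctx["scope"]))
    return {"system": system, "messages": [{"role": "user", "content": first_question}]}

if __name__ == "__main__":
    convo = start_conversation("client_outreach", "Who should I contact this week?")
    print(convo["system"])
```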
215
00:41:47.340 --> 00:42:15.500
Lynn Pausic: Yeah. And so the next one here just touch on this real quick. And and it really, we already looked at a, you know, an example of visualization. So this is one where you might be having a conversation, and it spawns a visualization that then you can dive into. And we're, you know, keeping the visualization and the conversation connected to each other. Right? So that visualization is going to change when you're having the conversation and and back and forth you go, and you can further explore from here.
216
00:42:17.870 --> 00:42:41.779
Lynn Pausic: And then finally, we've been mentioning this, you know, generative UIs. They are starting to be more real than people think, I would say. What we're seeing is, you know, I think the goal is to have it be kind of fully non-deterministic. Right now, the ones that we're involved with, there's still quite a bit of, you know, within the
217
00:42:42.130 --> 00:43:05.320
Lynn Pausic: confines of something that is more of a, let's say, deterministic workflow, we're doing non-deterministic things, right? So there are some guardrails on it. But we can create these very hyper-personalized experiences within a set of boundaries, and that's where the generative bit comes in: the code is generated.
218
00:43:06.270 --> 00:43:25.330
Lynn Pausic: And being able to translate from, you know, a kind of a series of key things that the user wants to be able to accomplish and some different attributes. We can generate an experience for certain more kind of simplistic workflows on the fly.
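One way to read the "generative UI within guardrails" idea is that the model is only allowed to emit a small declarative spec, which is validated against a whitelist before anything is rendered. The sketch below is an assumption about how such guardrails might look; the spec format and component names are made up for illustration.

```python
# Validate a model-generated UI spec against a whitelist before rendering.
import json

ALLOWED_COMPONENTS = {"table", "bar_chart", "summary_card"}

def validate_ui_spec(raw_spec: str) -> list[dict]:
    spec = json.loads(raw_spec)                  # raw model output, e.g. from an LLM
    widgets = []
    for widget in spec.get("widgets", []):
        if widget.get("type") not in ALLOWED_COMPONENTS:
            raise ValueError(f"Component not allowed: {widget.get('type')}")
        widgets.append({"type": widget["type"], "title": str(widget.get("title", ""))})
    return widgets

if __name__ == "__main__":
    model_output = '{"widgets": [{"type": "bar_chart", "title": "Exposure by sector"}]}'
    print(validate_ui_spec(model_output))
```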
219
00:43:28.040 --> 00:43:32.091
Tim Baker: So, Karim, I wanted to kind of come back to this,
220
00:43:33.080 --> 00:43:44.399
Tim Baker: a little bit more around some of these trust issues that we've touched on. You know, in financial services, hallucinations just aren't an option.
221
00:43:45.125 --> 00:43:50.760
Tim Baker: How does the user experience need to be designed to consider this, and more generally.
222
00:43:52.450 --> 00:44:02.369
Tim Baker: you know, what are the trust-related approaches that we've taken to address some of the things that are on the screen here?
223
00:44:03.700 --> 00:44:29.337
Karim Jamal: Yeah. So I mean, really, transparency is key here, right, so you start building that trust. And what I'd say is, AI or GenAI is no different than, you know, human intelligence: building trust takes time, right? And so that's just something that we have to get used to, where it's not gonna be an overnight thing. You sort of have to design your workflow so that users start trusting it more and more.
224
00:44:29.690 --> 00:44:45.259
Karim Jamal: Of course we all have that sort of conspiracy-theory uncle that will never trust AI, like they'd never trust anything else, right? So you have to be able to deal with that as well. And the reason I mention that is because the way we design our next
225
00:44:45.984 --> 00:44:57.959
Karim Jamal: paradigm of experiences here has to keep in mind that there will be people that just don't trust AI. And so, if GenAI is disabled,
226
00:44:57.960 --> 00:45:18.210
Karim Jamal: they should still be able to, you know, go with their workflow, and the UX should not break and stuff, right? So that's another thing to keep in mind, sort of graceful degradation in that regard. And so how do we build this trust over time? Well, there's the explainability aspect. So,
227
00:45:18.250 --> 00:45:48.199
Karim Jamal: show your work is something you may or may not have heard, but as you're going through, keep a log of the things that you are doing underneath the covers, whether it is sort of SQL queries you're making or, you know, graphs that you're exploring. Keep a log of that, and then be able to surface that to the user if they want to see how you came about that answer, because they may disagree with it, or they may actually want to run those
228
00:45:48.200 --> 00:45:53.370
Karim Jamal: same commands directly on the database or the system that they have.
229
00:45:53.550 --> 00:45:58.100
Karim Jamal: and be able to verify the results. Right? Those are some key ways to sort of
230
00:45:58.140 --> 00:46:02.353
Karim Jamal: reinforce that you are getting the right answers,
231
00:46:03.200 --> 00:46:24.549
Karim Jamal: and that also gives you a way to, if it is wrong, they can see that you've at least shown your work, and the intent was clear and sort of genuine. And so that's when you can start tuning and improving, and that's where the human in the loop really comes in, to sort of help validate, tune and improve as we go on, right?
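A minimal sketch of the "show your work" idea Karim describes: every step the assistant takes (a SQL query, a graph expansion, a summarization) is appended to an audit trail that can be surfaced next to the answer, or re-run by the user directly against their own systems. The step names and queries are illustrative.

```python
# Audit trail the assistant keeps so the expert user can inspect or replay its steps.
from dataclasses import dataclass, field

@dataclass
class WorkLog:
    steps: list[dict] = field(default_factory=list)

    def record(self, kind: str, detail: str) -> None:
        self.steps.append({"kind": kind, "detail": detail})

    def show(self) -> str:
        return "\n".join(f"{i + 1}. [{s['kind']}] {s['detail']}"
                         for i, s in enumerate(self.steps))

if __name__ == "__main__":
    log = WorkLog()
    log.record("sql", "SELECT * FROM transactions WHERE account_id = 881 AND amount > 9000")
    log.record("graph", "expanded counterparties of account 881, depth 2")
    log.record("summarize", "condensed 37 matching transactions into 3 patterns")
    print("Answer: account 881 shows possible structuring.")
    print("How we got here:\n" + log.show())
```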
232
00:46:24.850 --> 00:46:44.043
Karim Jamal: a a another sort of analog to real life, if you remember, on school assignments. You're often asked to show your work instead of just, you know, showing the answer. Well, one reason for that is so that yeah, you're not cheating but another thing is to just see, were you on the right track? Right?
233
00:46:44.700 --> 00:46:46.590
Tim Baker: Yeah, do you understand the concepts.
234
00:46:46.890 --> 00:47:04.939
Karim Jamal: Yeah, like. Oftentimes, though, even if your answer is wrong, you'll often get partial credit if your your line of reasoning was correct. Right? I I got saved a lot of times, cause I was. I was on the right track and then goofed up at some point but you know that. That's sort of how you do it. I think the early stages there's gonna be a lot of that
235
00:47:04.970 --> 00:47:22.059
Karim Jamal: partial-credit type thing, where the GenAI answer is not correct, but if we can show it, then the user still gets that trust. It's like, oh, I see where it went off the rails, and then this is where you come back and add some more guardrails in, right, to get them onto the right path,
236
00:47:22.715 --> 00:47:23.725
Karim Jamal: Type thing.
237
00:47:24.240 --> 00:47:34.299
Tim Baker: Trust is something that is earned, but very quickly lost, I think, you know, in finance. If you get the wrong recommendation, or if you're told to call the wrong client about,
238
00:47:34.530 --> 00:47:43.570
Tim Baker: yeah, it's your wife's birthday tomorrow, oh, actually, it's not, you know, I'm not married, you know. So as soon as you have a goof like that,
239
00:47:44.123 --> 00:47:50.320
Tim Baker: or a calculation is off, or there's some hallucination. So it's so important, I think, to,
240
00:47:50.530 --> 00:48:06.990
Tim Baker: you know, to put these guardrails in place and build that trust, and use a lot of the visualizations. I think the next slide, Lynn, I love this slide because it really shows that users just have a very low tolerance for mistakes.
241
00:48:07.900 --> 00:48:26.422
Lynn Pausic: Yeah, they really do, especially these expert users we've been talking about today. Like Tim was saying, you kind of get one shot at it with these guys, two if you're lucky, and then that's it. And then you really have to work hard to earn their trust back, if you can even do that, right?
242
00:48:27.260 --> 00:48:30.415
Lynn Pausic: and so you know, being able to
243
00:48:31.130 --> 00:48:49.237
Lynn Pausic: make sure that you are giving them what they need to be able to validate against their instincts, right? Because that's what they're going on, right? They're saying, this outcome should never have been within the realm of possibilities according to my 30 years of experience, kind of a thing.
244
00:48:49.550 --> 00:48:49.950
Tim Baker: Thank you.
245
00:48:49.950 --> 00:48:51.970
Lynn Pausic: That, right? Yeah, right?
246
00:48:51.970 --> 00:48:57.959
Tim Baker: And I think that transparency helps. And I think when you're building, you know, a trustworthy interface,
247
00:48:58.030 --> 00:49:07.269
Tim Baker: that explainability, you know, being able to show what's the rationale behind this recommendation, or who to call next, and giving that visibility.
248
00:49:07.460 --> 00:49:08.080
Tim Baker: Yeah.
249
00:49:08.080 --> 00:49:31.490
Lynn Pausic: And I think, to touch on it just briefly: if you are someone building products with generative AI, you need to understand from your users what constitutes trustworthiness, because what's trustworthy from a data science standpoint is often different from what the user needs to see to be able to trust it, right? You need to get your feature engineering right, all those things; that could be a whole other webinar.
250
00:49:31.490 --> 00:49:38.770
Lynn Pausic: But it's important to make sure you get out there with your users and understand
251
00:49:38.770 --> 00:49:40.790
Lynn Pausic: what's going to make it trustworthy for them.
252
00:49:41.190 --> 00:49:43.499
Tim Baker: Yeah. And Karim, you talked about
253
00:49:43.550 --> 00:49:45.320
Tim Baker: failing gracefully.
254
00:49:47.310 --> 00:49:52.980
Tim Baker: I don't know who wants to take this one, but how do you fail gracefully when the AI screws up?
255
00:49:54.340 --> 00:50:22.090
Karim Jamal: You sort of have to give other alternative paths. It may not be the recommended path that was designed for, but alternate paths to manually go through that. So, for example, if you had a way where the Gen AI was doing some automatic report generation or template generation for you, to allow you to skip 3 steps, then those 3 steps that were skipped you should still be able to do manually, should the Gen AI fail, right?
256
00:50:22.090 --> 00:50:37.349
Karim Jamal: Or, almost like function or text expanders, right? They're a very simple example where you can type a couple of letters and it'll expand into a long sentence for you, so you save on typing. Well, if that fails, you should still be able to type out the whole sentence yourself.
257
00:50:37.871 --> 00:50:43.769
Karim Jamal: and so different alternative paths that still let you reach the end successfully.
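A minimal sketch of the graceful-degradation pattern Karim outlines: try the Gen AI shortcut, and if it fails or is disabled, fall back to the manual path so the workflow still completes. The function names are illustrative assumptions, not an actual product API.

from typing import Callable, Optional

def generate_report(ai_draft: Optional[Callable[[], str]], manual_steps: Callable[[], str]) -> str:
    """Prefer the AI-generated draft, but always keep the manual path available."""
    if ai_draft is not None:          # Gen AI may be disabled entirely for users who don't trust it
        try:
            return ai_draft()         # the shortcut, e.g. an auto-generated report or template
        except Exception:
            pass                      # fall through instead of breaking the workflow
    return manual_steps()             # the same steps the user could always do by hand

# Usage: the UX keeps working whether the AI succeeds, fails, or is switched off
report = generate_report(ai_draft=None, manual_steps=lambda: "Report built step by step by the user")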
258
00:50:43.990 --> 00:51:04.396
Tim Baker: I find GPT is quite good: if you get it to do an image and the image is completely wrong, it'll give you the option to try again, or ask, did you prefer this one? So, being able to actually learn and get that feedback in the loop as well. I think we've covered a lot on models,
259
00:51:04.930 --> 00:51:10.889
Tim Baker: you know. But just talk us quickly through this kind of adoption cycle that you've got here.
260
00:51:12.000 --> 00:51:25.896
Lynn Pausic: Yeah, I mean, in short, the main hurdle to adoption with expert users is 100% trust, right? The other things are making sure that you're identifying
261
00:51:26.360 --> 00:51:40.570
Lynn Pausic: high-value use cases for users. Like, you know, don't just do it because it's sexy to have a conversational UI, right? Are you really moving the needle for those users? It might be productivity or something else. And then make sure that the use cases that you pick,
262
00:51:40.570 --> 00:52:00.259
Lynn Pausic: Gen AI is good at solving, right? I know everybody's excited about Gen AI, but there are lots of other forms of AI and algorithms and other things, other weapons in the arsenal that we can throw at this, right? And if you get all that lined up, you stand a chance of getting the user adoption that you're looking for.
263
00:52:02.020 --> 00:52:03.950
Tim Baker: And, you know,
264
00:52:04.510 --> 00:52:17.390
Tim Baker: we have several engagements with clients around this topic. You know, how do you create that kind of structure to create the amazing, seamless experience for the client engagement?
265
00:52:18.490 --> 00:52:35.720
Lynn Pausic: Yeah. And again, what I was just mentioning previously: sticking to, you know, understanding the problems that the client, or anybody, is trying to solve, and those use cases. And then, do they actually map? You know, what weapon do we need in the arsenal? Do they actually map
266
00:52:36.040 --> 00:52:58.868
Lynn Pausic: to generative AI, or some other form of AI, or something deterministic, right? So, knowing what it's good at and avoiding what it's not good at. Like, LLMs typically aren't really good at math or very logical reasoning. They only know the data that they have learned, right? They're not sentient yet, anyway. So logical reasoning is not necessarily their forte.
267
00:52:59.410 --> 00:53:03.130
Lynn Pausic: when you go to the next one, that's next week. Yeah. Next episode, we'll talk about.
268
00:53:03.130 --> 00:53:03.650
Karim Jamal: Yes.
269
00:53:03.650 --> 00:53:28.499
Lynn Pausic: And so this is kind of a mapping of a little bit of how, you know, we approach things. On the left, honoring what generative AI is good at, we try to very clearly identify problems where you have things like limited human capacity, you're looking for productivity gains, maybe you're looking for personalization. Once you identify
270
00:53:28.500 --> 00:53:46.929
Lynn Pausic: those kinds of problems, then you start to have your use cases, and you can start to map: okay, how can we handle each use case with, you know, either generative AI or something else? And then you also start to look at, well, what data do we have? You know, all the things to be able to carry that to fruition.
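As a loose illustration of the mapping Lynn describes, a team might keep a simple table of problem traits, the use cases they produce, the approach each one maps to (generative AI, another form of AI, or something deterministic), and the data it depends on. The entries below are made-up examples, not the actual framework shown on the slide.

# Hypothetical use-case map: problem trait -> use case -> approach and required data
use_case_map = [
    {"problem": "limited human capacity", "use_case": "draft client meeting summaries",
     "approach": "generative AI", "data_needed": ["meeting notes", "CRM history"]},
    {"problem": "productivity gains", "use_case": "screen funds by plain-language criteria",
     "approach": "generative AI plus deterministic screener", "data_needed": ["fund database"]},
    {"problem": "precise calculation", "use_case": "portfolio performance attribution",
     "approach": "deterministic analytics (LLMs are weak at math)", "data_needed": ["positions", "prices"]},
]

# Pick an approach per use case instead of assuming everything is a Gen AI problem
for row in use_case_map:
    print(f"{row['use_case']}: {row['approach']}")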
271
00:53:46.930 --> 00:53:59.140
Tim Baker: I love that disciplined approach, and it stops you from just saying, oh, this is an AI problem, throw that tool at it. And this is one thing that I think Expero is really good at.
272
00:53:59.210 --> 00:54:08.530
Tim Baker: So just very quickly to wrap up, Karim, you know, you and I have been working with our AI solution, which we call Jetpack.
273
00:54:09.245 --> 00:54:15.100
Tim Baker: I thought it'd be kind of cool to just quickly run through the philosophy that we've
274
00:54:15.110 --> 00:54:27.420
Tim Baker: developed now around Jetpack, and what we think, practically, are the simple things to do and the things to avoid.
275
00:54:28.210 --> 00:54:33.950
Karim Jamal: Yeah, I'll run through this quickly, and of course we're available to discuss each in depth further, if people want,
276
00:54:34.632 --> 00:54:58.820
Karim Jamal: but we do dogfood the information that we just spoke about, right? So we don't trust the LLM as sort of the end-all, be-all. It is a tool, not an expert user. And then we have layers on top of that, which then give more non-hallucinated, you know, flows to the user, to help them, as a tool in their arsenal.
277
00:54:58.920 --> 00:55:03.350
Karim Jamal: And then it's, you know, relying on it as,
278
00:55:04.149 --> 00:55:26.319
Karim Jamal: basically, an LLM consulting real data for authoritative answers through queries. But then it all gets baked into our semantic model, where it's almost like a canonical format with context awareness, right? And so that allows us to bring all the data together, so we have as much as we can to work with,
279
00:55:26.320 --> 00:55:39.510
Karim Jamal: but then it is used, again, as a tool that you, you know, trust but verify, in a sense. So it's a recommendation, but you shouldn't just take all your answers off of it,
280
00:55:39.690 --> 00:55:43.579
Karim Jamal: as in the examples we saw earlier, and as we'll see here in a bit.
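A rough sketch, under stated assumptions, of the pattern Karim describes: the LLM proposes a query against real data, the query runs against an authoritative source, and the grounded result is returned as a recommendation along with the query itself for verification. The llm_propose_query and run_query helpers are hypothetical placeholders, not Jetpack's actual interfaces.

def answer_with_real_data(question: str, llm_propose_query, run_query) -> dict:
    """Use the LLM as a tool: it drafts the query, but the authoritative answer comes from the data."""
    sql = llm_propose_query(question)   # LLM translates the question into a query (hypothetical helper)
    rows = run_query(sql)                # executed against the governed, semantic data model
    return {
        "recommendation": rows,          # grounded in the actual query result, not free-form generation
        "query": sql,                    # surfaced so the user can trust-but-verify or rerun it themselves
    }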
281
00:55:44.040 --> 00:55:54.019
Tim Baker: Yeah. And what I've seen, as I've observed our engineers build these solutions for clients: obviously, we start with the design work that Lynn's team does.
282
00:55:54.230 --> 00:56:00.979
Tim Baker: But now I see this AI space as a set of ingredients.
283
00:56:01.040 --> 00:56:03.620
Tim Baker: And so you kind of pull in
284
00:56:04.107 --> 00:56:16.189
Tim Baker: a little bit of prompt engineering, and maybe you build a RAG pipeline, maybe you do this, and, you know, we're like master chefs: we then create that amazing, you know, kind of dish.
285
00:56:16.787 --> 00:56:20.530
Tim Baker: But those ingredients are constantly changing. So,
286
00:56:20.840 --> 00:56:21.400
Tim Baker: yeah.
287
00:56:21.400 --> 00:56:41.280
Karim Jamal: The recipes may need to call for more salt or less salt, right? So, depending on the industry and the client, if they're very risk-averse, and you can get in trouble legally or monetarily, then maybe you want a more conservative approach to, you know, how you introduce Gen AI, versus a more radical approach.
288
00:56:42.130 --> 00:56:53.960
Tim Baker: So, we've only got a couple of minutes, so I wanted to start these videos playing. These are the 2 examples of projects we're working on. The first one is a screener. So think about a screener as
289
00:56:54.488 --> 00:57:02.229
Tim Baker: a fairly complex tool to navigate quite large amounts of data, and we're working with our Morningstar partners to build,
290
00:57:02.380 --> 00:57:07.820
Tim Baker: you know, a new take on mutual fund screening. But what we found is,
291
00:57:08.310 --> 00:57:13.689
Tim Baker: for a novice user using a screener, they quite often don't get to the end of the process,
292
00:57:14.064 --> 00:57:26.799
Tim Baker: because they get confused about terminology. They don't know what a Sharpe ratio is. They don't really know what they're looking at. So this is an example of a conversation with the AI
293
00:57:26.920 --> 00:57:36.759
Tim Baker: about the needs of the end user, and then it translates that into the inputs into the screener, the filters, if you like.
294
00:57:36.830 --> 00:57:42.850
Tim Baker: So I think this is a great example of where we've taken a very common problem, navigating data,
295
00:57:44.300 --> 00:57:57.329
Tim Baker: and then helping the client get to a set of answers from a trusted source. This data is coming from Morningstar, so there's no chance of a resulting hallucination.
296
00:57:57.894 --> 00:58:15.189
Tim Baker: And you can see the filter outputs are there at the top, and they've been generated by the AI. The AI's gone, oh, that's what the client's actually looking for. I'm going to now take control of the AI and then present the results.
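As a hedged illustration of the screener example, the sketch below maps a novice user's conversational request onto structured screener filters, which is roughly what Tim describes the AI doing before the user takes over. The filter fields and the ask_llm_for_filters helper are assumptions for illustration, not the Morningstar integration itself.

import json

# Hypothetical filter schema the screener UI would accept
FILTER_SCHEMA = {
    "category": "string, e.g. 'Large Blend'",
    "max_expense_ratio": "number, annual fees as a percent",
    "min_sharpe_ratio": "number, risk-adjusted return",
}

def needs_to_filters(user_need: str, ask_llm_for_filters) -> dict:
    """Turn a plain-language need into screener filter values the user can inspect and adjust."""
    prompt = (
        "Translate this investor's need into screener filters as JSON matching this schema:\n"
        f"{json.dumps(FILTER_SCHEMA)}\nNeed: {user_need}"
    )
    filters = json.loads(ask_llm_for_filters(prompt))  # hypothetical LLM call returning JSON
    # The filters drive the screener, so the results come from trusted fund data, not the model's memory
    return filters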
297
00:58:15.796 --> 00:58:19.720
Tim Baker: Let me show you the cool use case
298
00:58:20.103 --> 00:58:35.890
Tim Baker: And this is a proof of concept, but we're working with the client right now on actually building this. I'm super excited, because this one is very close to my background. And this is really showing you how, and it's not just generative AI, there's a lot of other
299
00:58:36.355 --> 00:58:45.489
Tim Baker: technology and simulation of data that goes into building a granular profile of clients. And then, as Lynn mentioned,
300
00:58:45.660 --> 00:58:51.720
Tim Baker: on a daily, continuous basis, marrying that with new information and new content,
301
00:58:51.820 --> 00:59:15.549
Tim Baker: and helping, in this case, the sales trader put together a mass-customized call list. When the call's over, that pink box there is the CRM that's capturing the output. And of course, that data then goes back into the input, so if the client says, actually, I'm not interested in trading this, I'm, you know, currently looking at this, that then goes back
302
00:59:15.610 --> 00:59:17.569
Tim Baker: into the CRM.
303
00:59:17.810 --> 00:59:37.889
Tim Baker: So the next time there's an event on that stock, it's built in. So it just makes the client experience so much richer, but obviously also the business outcomes: the likelihood of that client then going back to the bank or the broker and saying, okay, let's put that trade on. So it really has very direct benefits.
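A minimal sketch of the feedback loop Tim describes: the client's response on the call goes back into the CRM-backed profile, so the next event on that stock produces a better call list. The profile structure and field names are hypothetical, for illustration only.

# Hypothetical CRM-backed interest profiles keyed by client id
client_profiles = {
    "client_42": {"interested": {"AAPL"}, "not_interested": set()},
}

def record_call_outcome(client_id: str, ticker: str, interested: bool) -> None:
    """Feed the call outcome back into the profile so future call lists reflect it."""
    profile = client_profiles[client_id]
    (profile["interested"] if interested else profile["not_interested"]).add(ticker)

def build_call_list(event_ticker: str) -> list:
    """Next time there's an event on a stock, only clients still interested make the list."""
    return [cid for cid, p in client_profiles.items()
            if event_ticker in p["interested"] and event_ticker not in p["not_interested"]]

record_call_outcome("client_42", "AAPL", interested=False)  # client said they're not trading this now
print(build_call_list("AAPL"))                               # prints [] until interest changes again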
304
00:59:38.170 --> 00:59:43.870
Tim Baker: So with one minute to spare, we just brought it home.
305
00:59:45.130 --> 01:00:03.389
Tim Baker: You know, doing these things live is always a little bit stressful, but there's a lot of content there. We won't do Q&A this time, but I know there's a bunch of people dialed in, so please do reach out with questions. I almost feel like we'll probably do another one of these in 2 or 3 months, when the whole world has changed again.
306
01:00:03.749 --> 01:00:13.640
Tim Baker: But I just want to thank Lynn and Karim for, you know, managing through all this content and fielding hard questions, and I look forward to the next one.
307
01:00:16.550 --> 01:00:17.279
Tim Baker: Thank you.
308
01:00:17.280 --> 01:00:18.040
Lynn Pausic: Everybody.
309
01:00:18.830 --> 01:00:20.149
Tim Baker: Thanks, bye, for now.