The Coalition Against Insurance Fraud estimates that insurance fraud costs businesses and consumers $308.6 billion annually, a figure also cited by the National Association of Insurance Commissioners (NAIC) (source: https://insurancefraud.org/). These claims predominantly affect the property & casualty, automotive, and business insurance sectors. Adding to the complexity, each of the 50 US states has its own regulatory body, each with distinct rules and regulations. Some states have enacted legislation targeting specific types of insurance fraud schemes, such as agent and broker schemes, underwriting irregularities, vehicle insurance schemes, property schemes, doctor and personal injury schemes, up-charging and inflating damage values, and salvage fraud, among others. Special Investigations teams are exploring emerging technologies that leverage deeply connected patterns with Neo4j graph algorithms to reduce settlement times, optimize fraud detection, and improve claim adjustment efficiency.
Michael Moore, Ph.D., Senior Director of Strategy & Innovation at Neo4j, will join Scott Heath, VP of Fraud at Expero, to demonstrate how to use GenAI with Neo4j graph technology, human-in-the-loop review, and visualization techniques to help Special Investigations units keep their organizations compliant.
This webinar pinpoints areas where graph databases, graph machine learning, geospatial and time-series analysis, AI/ML/LLMs, and visualization can improve claim fraud identification accuracy by over 45%, and shows how involving the ‘human in the loop’ can keep you ahead of your state's fraud prevention legislation.
Throughout this online meetup, you'll glean insights from our experts on how Neo4j and Expero can unleash your organization's potential.
Key Focus Areas:
Identifying Fraudulent Claims: Discover how Graph Databases (Graph DB), Graph Machine Learning (GML), and AI/ML/Large Language Models (LLMs) can boost claim fraud identification accuracy by over 45%.
Staying Ahead of Regulations: Learn how "Human-in-the-Loop" techniques ensure compliance with evolving state-level fraud prevention legislation.
Optimizing Investigations: Explore how Neo4j graph technology, combined with visualization tools and machine learning from Expero, empowers investigators with faster, more efficient claim processing.
Key Learning Objectives:
Challenges in Claim Fraud Investigations: Delve into emerging threats, audit & compliance concerns, and investigate best practices for combating claim abuse.
Unlocking Technological Innovation: Understand why fraud investigators should leverage Neo4j with advanced AI, ML, graph algorithms, and LLMs to minimize false positives and enhance accuracy.
Empowering Investigators with Next-Gen Tools: Discover how visualization technologies and human-centric processes streamline workflows for fraud management, investigators, and data analysis teams.
Harnessing the Power of Explainable AI: Learn practical approaches to utilize AI, time-series data, spatial analytics, and ML/Graph algorithms (including LLMs) to empower non-technical investigators as "humans-in-the-loop" for improved accuracy and streamlined processes.
What You'll Gain:
Master the Complexity of Insurance Fraud: Explore how government regulations and graph link analysis techniques are shaping the future of claim investigations.
Harness Neo4j Graph Analytics: Learn how to implement practical methods for claim fraud identification, complex dependency management, and "human-in-the-loop" collaboration.
The Art of the Possible: Witness live demonstrations showcasing Expero's Connected Platform, a powerful combination of visualization matching, graph analytics, and machine learning, designed to reduce false positives and enhance accuracy.
1
00:00:12.240 --> 00:00:22.676
Laura Smith: Everyone, good morning. Thank you for attending today's webinar, Combating Claim Fraud: Reduce False Positives with AI, LLMs, and Graph Databases.
2
00:00:23.933 --> 00:00:33.030
Laura Smith: Our speakers today are Scott Heath, VP of Fraud and Analytics with Expero, and Michael Moore, Senior Director of Strategy and Innovation with Neo4j.
3
00:00:33.050 --> 00:00:47.689
Laura Smith: Before we get started, I wanted to go over a few housekeeping items. There's a Q&A icon at the bottom of your Zoom meeting. If you have any questions during the webinar, please leave them in the Q&A box that'll pop up when you click the button.
4
00:00:47.750 --> 00:00:51.639
Laura Smith: and then we can go over that at the end of the session. We'll have some Q&A.
5
00:00:51.920 --> 00:01:01.370
Laura Smith: we are recording today's session. I'll be sending a follow-up email this week with the recording. And if you have any questions or issues during the webinar, please message the host. Thank you.
6
00:01:02.040 --> 00:01:05.103
Laura Smith: Thanks, everybody. My name is Scott. And I have
7
00:01:05.680 --> 00:01:16.879
Laura Smith: really exciting information that Michael and I are going to share with you today. What I'm gonna do is probably gonna drop out my video so that the presentation that I show will be more clear. But, Mike, do you want to say, Hi.
8
00:01:17.500 --> 00:01:22.609
Laura Smith: hey, folks, thank you for taking the time to join us today? We're really excited to be
9
00:01:23.010 --> 00:01:33.250
Laura Smith: We're presenting with one of our leading partners at Neo4j, Expero. They have a ton of really great solutions and accelerators to help you get started with graph and graph based solutions
10
00:01:33.960 --> 00:01:36.810
Laura Smith: super. So without further ado, we'll go ahead and jump in.
11
00:01:37.296 --> 00:01:59.959
Laura Smith: We're gonna walk through kind of the high level. And then I'm gonna talk a little bit about some of the concepts. And then Michael's gonna bring us home. What we're gonna see today is a couple of vignettes of what life would look like with this kind of technology. And then we'll talk specifically about why Neo4j is so much different and is such a game changer. So before we get started
12
00:02:00.818 --> 00:02:08.310
Laura Smith: let's talk a little bit about sort of the backdrop of of what's going on. So in a lot of the situations, nobody has to tell us that
13
00:02:08.509 --> 00:02:27.690
Laura Smith: claim fraud is in many different flavors right? It's out there and the numbers can be big. The other thing that we want to keep in mind is not only is it auto health life er it could be anything right. But the takeaway is as payers and and providers and claim
14
00:02:28.574 --> 00:02:44.859
Laura Smith: we sort of, in investigations, or, quite frankly, just claims in general, the process is that if you pay and you pay incorrectly, it is extremely difficult to get that money back. And so, as such, we really really need to keep our net promoter
15
00:02:45.169 --> 00:03:12.439
Laura Smith: scores high. We need to be as as good as we can in the claim process, and quite frankly, after a couple of claim conventions that we've been through this year. Speed is of the essence. However, we now need to look out for where this fraud occurs, we need to find it fast. We have to look for non trivial, complex networks. And then, more importantly, we still have to do more with less. Well, most most of the folks that are out there today
16
00:03:12.590 --> 00:03:14.419
Laura Smith: departments are under
17
00:03:14.726 --> 00:03:30.503
Laura Smith: sort of stress to make sure that you do it faster. And you do all these things well, if you do it wrong, there's almost no way to get that money back. And some people just sort of call that part of doing business. Well, we think today we can do better. And so part of that, what we're gonna talk today is
18
00:03:30.750 --> 00:03:49.160
Laura Smith: looking at, regardless of what line of business you're in, or or, again, where you are. It is part of this claim process that's so incredibly important. And how do we suss it out. How can we look at things like upcharging and up coding, disaster or storm chasing? We'll see a little bit of that today.
19
00:03:49.400 --> 00:04:03.839
Laura Smith: Fraud rings or collusion in some of these other elements, those are all real right. And then there are probably 4 more lists of these depending on what line of business again. Those those things that come at us in different ways.
20
00:04:04.070 --> 00:04:18.349
Laura Smith: The good news that we we are bringing to you is is now this premise of what sort of technology can I bolt on to? And why? Why is this so hard to do right? When you look at the complexity behind the scenes.
21
00:04:18.684 --> 00:04:48.100
Laura Smith: You know, we have lots of data. But this paying first and the connection or the disconnection of these dependencies is very difficult. Right? If I have 12 SQL databases, and they're they're separated from each other. There's no way to sort of identify when I paid if I paid properly. And then this idea of going back in time. It's very hard to look at things the way they were, or to go back and triage how those things happen. But again.
22
00:04:48.220 --> 00:05:16.899
Laura Smith: it's ultimately leading to this scoring. If I can simply share with the human things in in near time or real time, it makes an enormous difference, and then being able to tie back time and that data dependency of the of the connections of the information. And then, finally, when I see that in history and then in real time. Obviously, this is an enormous problem. And so what we hope to do today is demystify some of that, and bring to you some some good news. Now.
23
00:05:17.190 --> 00:05:32.250
Laura Smith: back in the day we've all seen the movies with the the red lines. And this case was that. And this is over here. Well, we can do better. And and that today, in in what we call graph analytics. Now, that's not a pie chart graph. That is a style of data.
24
00:05:32.690 --> 00:05:54.450
Laura Smith: That Neo4j is quite frankly the leader. And and again, Michael is gonna walk us through more details, but at a at a macro level for business people, you don't have to rip out what you're doing today. This is a bolt on relationship. But what graphs do that are intrinsically different is instead of saving the the again, if you're not tech technology based
25
00:05:54.450 --> 00:06:18.990
Laura Smith: the one to many big tables. Table walking lots of outer joins. Lots of of complicated words means expensive. And sometimes for what we're looking at in these connections can be very slow. But that doesn't mean we have to throw that out. What we're now saying is with Neo4j, which now saves those relationships and at scale allows me to think in more of a logical
26
00:06:18.990 --> 00:06:21.089
Laura Smith: way. But inside of
27
00:06:21.090 --> 00:06:35.190
Laura Smith: what we'll see inside of Neo4j, then, is this ability to do logical becomes physical. That helps us both understand it. And where we're going here in a minute. Is this sort of concept that whether you're in
28
00:06:35.520 --> 00:06:37.380
Laura Smith: parts of insurance.
29
00:06:37.750 --> 00:06:47.130
Laura Smith: underwriting or whatever it is, everything is actually connected to everything else. It's about how the data is connected. And this is really where a graph
30
00:06:47.522 --> 00:06:56.780
Laura Smith: will stand tall. This can make an enormous amount of difference both in understanding the complexity of a claim and a dispute and multiple claims and rings.
31
00:06:56.780 --> 00:07:19.740
Laura Smith: or all the way over on the right. Now. Where those things came from as they work their way to the left. So if I have a claim, where was it? What were the deductibles? What were the risk? Factors, what was underwriting? And where was all of that? Over on the left. All of that can now very concisely be held inside of a graph database and at our fingertips to do lots of different kinds
32
00:07:19.740 --> 00:07:23.309
Laura Smith: of use cases not just loss, waste, and abuse
33
00:07:23.310 --> 00:07:32.619
Laura Smith: or potentially fraud. But I can actually start to look at positive use cases, too. And that's really sort of the other exciting thing about this. And so when we see this.
34
00:07:32.810 --> 00:07:56.959
Laura Smith: we see now over on the left, I can do things like customer journey upsell, cross-sell, chargebacks. But what we really came to talk about today is perhaps fraud, waste, and abuse. And where does a claim and dispute turn into fraud? Or how can I start to identify in very simplistic ways? Well, over on the right now, what we're getting from Neo4j is. Imagine now, if I had a smart system
35
00:07:57.150 --> 00:08:20.280
Laura Smith: that could indeed look for patterns and look for what if, or predictions, or giving the human a scorecard that we'll see here in a minute for a live demonstration, where they could be prompted to say why and what, and then I can see it very concisely. And I can plug into my existing claim system, or I could bolt on to
36
00:08:20.280 --> 00:08:28.200
Laura Smith: any of the the systems that I may have today. And that's really the power of this is number one, be able to find it, and to be able to do it.
37
00:08:28.550 --> 00:08:32.410
Laura Smith: In in in a much faster and more concise way. Right.
38
00:08:32.490 --> 00:08:55.079
Laura Smith: So again, some people like things from top to bottom, some things like things from left to right. But what we see now is the entire sort of capability of a graph database. But we're really segmenting it over here, and maybe we want to use it for other use cases. But today, we'll focus on that. So that's kind of setting the table right? We're trying to mitigate our risk and decrease our cost.
39
00:08:55.395 --> 00:09:04.240
Laura Smith: For what that is. Now, where does this fit in the overall landscape? In one of the conferences that we were recently at connected claims.
40
00:09:04.240 --> 00:09:28.108
Laura Smith: They've very clearly called it out. That claim fraud is simply a bolt on to our claim process or in a technology stack. It is part of that early warning system that radar that we can give either investigators or just rank and file claim. Administrators. Right? Where are those items? And can we put this to work? Is what we're gonna see today.
41
00:09:28.470 --> 00:09:42.789
Laura Smith: But the ecosystem has a place for that right? And and that's really what what we want to show today is, why is this so powerful? And how can it work in line to save us that time? Save us that money decrease our false positives, etc?
42
00:09:43.010 --> 00:09:56.950
Laura Smith: Now, one of the products that we integrate with is something called Guidewire. Well, they do a full claim management process, and, as you can see here is that there really isn't a call out. And what we want to show today, then, is during that claim intake
43
00:09:56.950 --> 00:10:25.550
Laura Smith: in the assigning. And the evaluation process is really where we're looking at today. Focusing is that if we could automatically notify the teams that are working on those and simply in English, point out what the anomalies are, and where there may either be a potential issue, or if there's a problem, or even if it's all the way over to fraud, that's really sort of how we do this. But again, a good backdrop on sort of where this fits in the overall process.
44
00:10:25.790 --> 00:10:41.429
Laura Smith: Now, what is this graph thing right again. Michael is going to talk to us more about this specifically, but what we've seen over time, then, is, is back in the way back machine. Here in the 1990s, SQL was very, very powerful. We were able to do things we had not been able to do.
45
00:10:41.430 --> 00:11:11.360
Laura Smith: But then came the 2000s, these 2-dimensional BI tools. Well, I can look in my rear view mirror, and I can build really interesting connections. But looking in the future was really hard. And that's really sort of the age of this graph database that's come into line is now I can look backwards. I can look at current, and I can start to look at those patterns very simply and very easily inside of our Neo4j instance. And now I can start to go with this projection of things like LLMs.
46
00:11:11.380 --> 00:11:39.020
Laura Smith: That that ability to sort of move up talk to the human the investigator or the claim manager. And now I can start to find those patterns with a much higher accuracy, and I can decrease those false positives. So now I can start to cash that check of saving that money or denying the claim before it actually gets paid. If there are enough anomalies, or quite frankly turning it over as potentially fraud to folks that may need it.
47
00:11:39.260 --> 00:12:04.159
Laura Smith: Now, what does that really mean? That that means that if you simply throw some good machine learning at it. We've had a customer tell us that was that look. Machine learning is great. The problem is that it's difficult to do it at scale. But now, if I can couple that with a graph database, and I can start to look at, how do I run? Supervised and unsupervised? It gets even richer, and that's where we start to see that boost. Then if I couple it with a human
48
00:12:04.160 --> 00:12:15.499
Laura Smith: right, the different team members that are out there and get the human in the loop. This is an enormous boost. And, by the way, we've done some empirical things, we'll see here in a minute. Where it can be even
49
00:12:15.500 --> 00:12:45.429
Laura Smith: higher, it can be up to 83% better than more of a linear or simple if-then kind of logic, and we don't lose any risk. And I think this is a really really important sort of note here is by using the graph database. And again, we'll see some of that. What Gartner says here in a minute. But the takeaway, the real empirical data is this can make a really demonstrable effect to your top and your bottom line as well as making a lot of folks lives easier. So
50
00:12:45.700 --> 00:12:55.770
Laura Smith: how did we do this? So what I'm going to do now is I'm going to step into a little bit of what, as practitioners we might look at, and then I'll hand it over after this section to to Michael.
51
00:12:56.200 --> 00:12:57.680
Laura Smith: So on this slide.
52
00:12:58.035 --> 00:13:25.819
Laura Smith: What is this sort of process? Well, we ingest data, we match it. We do that today, right? Most of the folks in technology do a lot of that today. But what if now, I were to augment that with this graph data structure, what would that look like? Very simply? Graphs are very good at connecting disparate data in that sort of mental map that we saw previously, and where we start to do that now is, we can incorporate some of the tools from Neo4j.
53
00:13:25.820 --> 00:13:54.949
Laura Smith: And now what I'd like to show you today is the connected suite of tools that would work for claim managers where we can now see those algorithms. I can see how mere mortals, if you will, can wield this incredibly powerful new technology. And that it is fairly straightforward to use, and it doesn't upset what our flow or what our processing of those claims are. And so the beauty of this now is sort of plugging it in as we're as we're perhaps a system that we already have.
54
00:13:55.400 --> 00:14:23.870
Laura Smith: Now, how does all this fit together so logically, even though you may use Guidewire today. Or you may use one of these other tools. That's okay. These tools can plug into those. Or if you don't have one of those. Obviously, we have, we have tools for claim management as well. But again, we're using that data from others. We're connecting the dots, perhaps on a Verisk or an OpenCorporates against Guidewire and claims and data. And what we're now able to do is make smart connections.
55
00:14:23.870 --> 00:14:39.613
Laura Smith: and we can now here in the middle, start to use that power of the Neo4j Graph Data Science Library, which we'll hear about here in a minute, in addition to machine learning. And what that does now is that gives us this very simplistic way for for different kinds of roles
56
00:14:40.197 --> 00:14:54.469
Laura Smith: to start to go and and investigate that and right increase that accuracy and the velocity of our claim processing many times. That's up to 50 or 60% on the speed or the velocity of that. So it's now helping us in a couple of ways.
57
00:14:54.870 --> 00:15:19.790
Laura Smith: Now, when we get into sort of what is behind the scenes. And again, Michael will talk a little bit about this. But these different kinds of algorithms are extremely powerful. So to be able to find connections or similarities of people that have been trying to either defraud us as a ring or as an individual looking for recommendations or shortest paths to other
58
00:15:19.790 --> 00:15:40.229
Laura Smith: kinds of of connections, etc. That's really where Neo4j is gonna help us here to go do that. And what do these things look like. Well, what we're gonna see here in a minute is there is something that we call a logic or an alert builder that a super user can do they can simply use these similarity of patterns. And we're gonna see how we do that for a couple of different objects
59
00:15:40.491 --> 00:15:49.899
Laura Smith: to be able to do that. But this is really now what Neo4j is bringing to us underneath. And so a quick version. And again, this is something no one would ever see.
60
00:15:49.900 --> 00:15:57.650
Laura Smith: But it simply looks at that claim data over on the right. And it actually starts to connect those dots. And what we may see is things in blue
61
00:15:57.650 --> 00:16:25.189
Laura Smith: or green are good, and those are the ones that sail right through the process, and maybe the ones that are in red or magenta. At the bottom are ones where addresses were reused by witnesses, or addresses are incongruent for witnesses and doctors, and those kinds of things that what we see then is, it's very easy for that pattern detection and then for new claims coming in, we can find it. And then what it looks like on a screen is simply reject.
62
00:16:25.230 --> 00:16:39.729
Laura Smith: You don't have to worry about all the complexity of what just went on, but we're able to surface that in a very simplistic way. And so what we see then are simple kinds of of user interfaces. These addresses are not correct. Show me what that looks like. Right.
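To make the kind of check being described here concrete, a minimal Cypher sketch like the one below flags claims where a claimant and a witness resolve to the same address. The labels, relationship types, and properties (Person, Claim, Address, FILED, WITNESSED, HAS_ADDRESS) are illustrative assumptions, not the actual schema from the Expero demo.

```
// Assumed placeholder schema; not the Expero/Neo4j demo model.
MATCH (claimant:Person)-[:FILED]->(c:Claim)<-[:WITNESSED]-(witness:Person),
      (claimant)-[:HAS_ADDRESS]->(a:Address)<-[:HAS_ADDRESS]-(witness)
WHERE claimant <> witness
RETURN c.claimId AS claim,
       a.line1   AS sharedAddress,
       claimant.name AS claimant,
       witness.name  AS witness;
```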
63
00:16:40.020 --> 00:17:04.980
Laura Smith: this connection of claims and not just this claim. Maybe there were multiple claims through several carriers, something that we can pull in from Verisk, where we can make those connections and realize well, this is not just a singular event, and it might be happening over time in other areas. So again, this power is really getting us to that next level. So that's really sort of what we're trying to bring to you today is talking about how to increase that
64
00:17:04.980 --> 00:17:21.309
Laura Smith: speed. Use this power. To go do that, and without further ado, then that's what I'm gonna show here in a minute. Right? Is this top layer. Is simply that we now have claim persons. Maybe you're in a customer 360
65
00:17:21.649 --> 00:17:49.580
Laura Smith: and and you have other kinds of products and services. And then you can see that in there. And that's really what we're trying to show now is the normal business of insurance and claim operations. Stay the way they are. And we're now bolting on. And then, if you're a graph data science person or you're in sort of more of that IT user side, the Neo4j products have a wonderful set of user interfaces, so that now you can build and test those kind of logic elements. Now that you can start to incorporate into production.
66
00:17:49.895 --> 00:18:10.810
Laura Smith: So this is kind of that logical layer for the technology folks that are in there. Neo4j is basically connected to the Expero piece. We call it a semantic layer, so that you're able to sort of use it out of the box, and I think that's the other sort of big component here is. There's not a huge sort of ramp or or time curve. This can actually be done pretty quickly.
67
00:18:11.280 --> 00:18:30.249
Laura Smith: Couple of different modules. And again, your mileage may vary on these, but but really, what we're saying is, they are flexible, and they are connectable to the existing claim management solutions that are out there today. But again, they're there for ones that you need, or if you don't need all of these you can. You can pick and choose.
68
00:18:30.370 --> 00:18:32.850
Laura Smith: Now, what I'd like to do is switch over to the demo
69
00:18:33.290 --> 00:18:37.110
Laura Smith: as I'm switching over here. Let's see if I can do this
70
00:18:39.680 --> 00:18:52.769
Laura Smith: alright. So in our first demonstration, what I want to do now is sort of share. What this would look like. Now, I'm going to show 2 different roles in my demonstration. The first role is more of an administrator
71
00:18:53.142 --> 00:19:06.777
Laura Smith: or a super user. In this instance. What I've seen then is, I am an insurance company that is looking at property and casualty, and auto and home. And so in this vignette. What I see then, is a dashboard.
72
00:19:07.250 --> 00:19:20.829
Laura Smith: The dashboard is showing me a couple of different elements. Number one is. It's showing me geography, and across the bottom. I'm looking at time now during that weather event. In this case it was a series of violent storms. I see.
73
00:19:21.222 --> 00:19:47.677
Laura Smith: That ingest of different data. I may have loaded Verisk data. I may have loaded national weather data in this case. That's what I've done here, and I can start to see an overlay and I'm seeing a risk probability that I have already run inside of Neo4j. So it's already risk scoring for me here in a minute. I'm gonna show you how I built that with sort of a drag and drop window. But in this sort of dashboard I'm now seeing those different elements. What I'd like to do
74
00:19:48.244 --> 00:20:07.070
Laura Smith: is actually start to dig in. So now I see where there were in red where there were events where there should be claims. And then things in yellow where there might be lesser claims and definitely things in in blue. There shouldn't be any claims. So I'm now using again that graph capability. But maybe I want to zoom in here.
75
00:20:07.070 --> 00:20:35.590
Laura Smith: What I can start to do now is zoom in and start to say graph database, hey? Where are there likely predictions of potential fraud? So, for instance, somebody is claiming their car was totaled, and there was no hail in that area. Well, that is something that we're gonna look for. We'll see how we build those algorithms. So in this case, this super user is starting to see sort of where those are. And what I can do then is I can zoom in. And now what I want to do is show me specifically
76
00:20:35.590 --> 00:20:50.510
Laura Smith: where I am looking at hotspots. And I'm actually now going to potentially go deploy claim adjusters out in the field. And so what I want to do then is, where should I send them? Number one? So I can use the graph to optimize my deployment
77
00:20:50.870 --> 00:21:01.410
Laura Smith: of those different claims persons right? And now what I can see then is, well, that is the density of what's going on. The second thing that I've done is, I see now the power of an LLM
78
00:21:01.450 --> 00:21:05.179
Laura Smith: Or a generative AI. What it's saying, then, is.
79
00:21:05.230 --> 00:21:23.480
Laura Smith: I wanna see where these are. A high high probability of what those are, and what I've done now is, I popped up and said, Would you like me to go build an alert right? Would you like for me to go find that other data to go do that. And so what I wanna do now is, actually I wanna pop over
80
00:21:23.826 --> 00:21:35.244
Laura Smith: here. And I can see now that I can go build those different kinds of alerts. In this case I perhaps wanna go look for maybe there's in this particular instance.
81
00:21:35.957 --> 00:21:43.450
Laura Smith: I want to go see what's going on. Maybe there, I'm looking for a a in this case, a syndicate of persons. But what I can simply do
82
00:21:43.450 --> 00:22:10.879
Laura Smith: is drag the data from my Neo4j data source. I can say, well, in this case I've grabbed an insurance agency. You can see over here. I could grab a block over here for a customer or any other kind of data. But what I'm doing is, I'm building this logic tree. And I'm saying that if there was an auto and maybe there was a connection, and there were previous claim damage in that same visual region. What I can start to do then is, is, see that
83
00:22:11.483 --> 00:22:18.820
Laura Smith: the other element that I can start to do then is is is go through the different process. So if I wanted to test this.
84
00:22:18.820 --> 00:22:42.420
Laura Smith: I could go do that. But down here at the bottom is really where, again these network kinds of things are are showing. What we're doing. What I'm able to do then, is I can go down and look at these network algorithms from Neo4j, so things like clustering in this case, I'm looking for things strongly connected to a fraudulent claim in the previous data. I could even do fuzzy matching or cycle detection.
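As a rough illustration of the kind of network algorithm being referenced, the sketch below projects an in-memory graph and runs weakly connected components with the Neo4j Graph Data Science library, so that claims landing in the same component as a known fraudulent claim can be routed into the alert queue. It assumes GDS 2.x procedure names and a placeholder schema; the alert builder shown in the demo generates this kind of logic without hand-written queries.

```
// Assumes the Graph Data Science plugin (GDS 2.x syntax) and placeholder labels/relationships.
CALL gds.graph.project(
  'claims-graph',
  ['Person', 'Claim', 'Address'],
  ['FILED', 'WITNESSED', 'HAS_ADDRESS']
);

// Weakly connected components: claims that land in the same component as a
// known fraudulent claim can be scored higher for human review.
CALL gds.wcc.stream('claims-graph')
YIELD nodeId, componentId
WITH gds.util.asNode(nodeId) AS n, componentId
WHERE n:Claim
RETURN componentId, collect(n.claimId) AS claimsInCluster
ORDER BY size(claimsInCluster) DESC;
```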
85
00:22:42.430 --> 00:22:53.520
Laura Smith: Now Michael's gonna talk about sort of what's going on behind that. But these are those elements that are now allowing this alert logic to be so much more productive. Now I can sort of skip ahead here.
86
00:22:53.640 --> 00:22:55.250
Laura Smith: and I can say, Test this
87
00:22:55.533 --> 00:23:13.020
Laura Smith: and I can go see what my accuracy is. And again I'm I'm running that in a test environment. But now what I've done is I've run that test and then I can either install it. But in this case, now, what we're gonna see here in a minute is I wanna run that I want to automatically, perhaps suspend that. And I wanna pop up a queue
88
00:23:13.020 --> 00:23:29.760
Laura Smith: to my claim administrator and say, these are the 3 or 4 things that I would like for you to do, and then you can make the judgment. But I have brought this now to the human's attention, that if there was an auto damage. And again, this is this screen that I'm showing here is typically for a super user.
89
00:23:30.040 --> 00:23:37.539
Laura Smith: Now, I want to come back to my other screen over here. And what we can see then is, I'm basically going to sort of fast
90
00:23:38.157 --> 00:24:02.820
Laura Smith: forward through some of those elements. But now what I can do is now that I've got this short list here. I can go in and say, well, somebody filed a claim in an area here. That is not correct right? What is that? That perhaps I want to do? Here. And what it's done then is that alert that I had showed you previously has now popped up a quick list of things for me to do to say, well.
91
00:24:02.820 --> 00:24:09.520
Laura Smith: as I'm looking at the claim, obviously I have my regular claim elements. I can see that. Well, you're outside of the region
92
00:24:09.520 --> 00:24:33.710
Laura Smith: right? You shouldn't be having that kind of damage, and it looks like now. The recommendation again, back from Neo4j has told me that. Well, you should walk through these, and I can see over here that I've got a little chatbot that's saying. Well, I found these anomalies here. I found that this homeowner that basically has done some things previously where he was trying to get away with some things. I see that there was some geographic
93
00:24:33.710 --> 00:25:02.600
Laura Smith: location that's outside of the band of what's going on. And again, my Neo4j risk algorithm has told me what that is. But what I'd like to do is, let's just step in to go see that. And now what it can do is again the graph database is bringing me that recommendation to say, I think you should reassess this, and it looks like there's a few elements in here. That are indeed problematic. And so I'm gonna walk you through sort of what those are. And the next one, then, is this geographic anomaly.
94
00:25:02.770 --> 00:25:08.190
Laura Smith: Now, what you can see over here on the right is that that property in question is indeed
95
00:25:08.210 --> 00:25:19.739
Laura Smith: very wide of where all the heavy activity occurred. And so when we get our adjuster online, we now have effectively sort of 2 strikes for why this this may be
96
00:25:20.006 --> 00:25:40.553
Laura Smith: a problem. And then we see some other detail down here on, on sort of what their claim is. And then, finally, what I can do is I can go use that graph database and time, and say, this person has indeed done some other kinds of elements that are down here as I as I'm stepping through this, and I think that's sort of the the key here is that I can say, well.
97
00:25:40.820 --> 00:25:51.147
Laura Smith: this one over here in Red says, well, sure enough, he tried to previously claim something in his home. That was clearly already existing right? So we had denied that claim
98
00:25:51.470 --> 00:26:19.319
Laura Smith: right, and we can see that there is an auto one down here, where, indeed, they did something very similar, where they were trying to claim something where it was next to a neighbor. And so what we start to see now is the power of the graph is not sort of in your face, but what it's doing is it's behind the scenes. And now you can start to see sort of where and why? This is so incredibly powerful with with kind of what and how this fits into the the scenery. So
99
00:26:19.678 --> 00:26:22.901
Laura Smith: what I'm gonna do now is actually come back
100
00:26:24.430 --> 00:26:25.510
Laura Smith: over here
101
00:26:27.170 --> 00:26:30.210
Laura Smith: and share what that application is
102
00:26:30.793 --> 00:26:41.479
Laura Smith: here, and then we'll go back to our previously listed Powerpoint. So as we're shifting over here. Now, what we're gonna do is we're gonna look sort of behind the scenes.
103
00:26:42.092 --> 00:26:45.117
Laura Smith: And I'm going to pass the baton over to you.
104
00:26:45.760 --> 00:26:46.570
Laura Smith: Michael.
105
00:26:47.820 --> 00:26:49.959
Laura Smith: Thanks. Scott. Yeah, that was terrific.
106
00:26:50.000 --> 00:27:10.720
Laura Smith: Yeah. So so what I thought we would do in this next section is you know, Scott gave a really good rundown of the kinds of inferences and the kinds of experiences. As part of an analytical investigative flow. That the Expero Connected solution provides. And so I thought, what we would do is
107
00:27:11.074 --> 00:27:31.970
Laura Smith: take a peek a little bit under the hood and get an understanding of exactly how the graph is creating these insights. So graphs are really coming into their own. We're seeing the analyst community responding very well. To to advances in the space. And we're and you're seeing statements, like 80%
108
00:27:32.000 --> 00:27:32.910
Laura Smith: of
109
00:27:32.930 --> 00:27:42.249
Laura Smith: all data and analytic innovations are going to be powered by graph just within the next few years. And we certainly are seeing that to be true across our customer base.
110
00:27:43.127 --> 00:28:07.729
Laura Smith: Another interesting quote here is that finding relationships and combinations of diverse data using graph techniques at scale will form the foundation of modern data analytics. And we're we don't have time to touch on it in this particular webinar. But one of the things we're absolutely seeing is that all of the interest in GenAI and combining those conversational experiences with
111
00:28:07.730 --> 00:28:14.930
Laura Smith: knowledge graphs in order to make very trustworthy and reliable applications.
112
00:28:15.894 --> 00:28:28.285
Laura Smith: For enriched customer experiences. And so I think that that's going to be a a big area of growth particularly into the insurance industry. So let's go to the next slide, Scott. So
113
00:28:28.630 --> 00:28:48.450
Laura Smith: let's just level set. So when we talk about a graph, you know, we're not talking about charts. We're talking about this this interesting data structure. And the data structure is composed essentially of 2 types of data. We have nodes, so you can think of those as records. And so here I have a collection of records. I have employee records, company records and city records.
114
00:28:48.490 --> 00:28:58.769
Laura Smith: and then those records are then connected by relationships. And one of the differentiating features of graphs is that those relationships are stored in
115
00:28:58.830 --> 00:29:19.209
Laura Smith: memory and on disk. And you can put data on them. They have directionality, and they also have a semantic type. And so you can see, for example, here that this company has a CEO. And so there's a relationship there. And so we know that this employee is, in fact, the CEO, because that
116
00:29:19.210 --> 00:29:40.349
Laura Smith: that is the relationship that's being described here. We also see that this company is located in a particular city, and you could see that there's a relationship there. That specifies this. Now, this is a really toy example. But when you load millions of records into a graph, you end up with a fully connected network
117
00:29:40.400 --> 00:30:04.800
Laura Smith: of data. So a fabric of data that you can then query, and you can go as deep and wide as you would like with the with your queries in a graph, because all of the logical possible relationships are being constructed and stored. And it's computationally super cheap to discover all of the connected data from a given starting point.
118
00:30:04.830 --> 00:30:20.449
Laura Smith: And so you can. You know, you can go across multiple hops, you know, 10, 20, 30, as deep as you want and do the kinds of operations that would be physically impossible to do in legacy SQL or NoSQL environments.
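A minimal Cypher sketch of the toy example described here, with invented property values: it creates the employee, company, and city nodes with typed relationships, then traverses several hops out from a starting node. The labels and relationship types are assumptions for illustration only.

```
// Invented names and properties, purely to mirror the toy example on the slide.
CREATE (e:Employee {name: 'Alice'}),
       (co:Company  {name: 'Acme Insurance'}),
       (ci:City     {name: 'Hartford'}),
       (co)-[:HAS_CEO]->(e),
       (co)-[:LOCATED_IN]->(ci),
       (e)-[:LIVES_IN]->(ci);

// Multi-hop traversal: everything within 4 hops of the company, in any direction.
MATCH (c:Company {name: 'Acme Insurance'})-[*1..4]-(connected)
RETURN DISTINCT labels(connected) AS nodeType, connected;
```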
119
00:30:21.200 --> 00:30:47.200
Laura Smith: So that's a little bit about graphs. They naturally handle complex data. And so you, you know, data shapes where you might have wide data so think of like a 360 view. You know. Maybe I wanna understand something around a particular object. All of the kind of data shapes where you've got complex, many to many linkages, hierarchies, recursion deep paths. All of those kinds of analyses are easily performed in a graph.
120
00:30:47.200 --> 00:30:59.642
Laura Smith: and then and then in terms of how it fits into the it portfolio. As as Scott mentioned early on, we typically see customers put in graphs on top of, you know, legacy data silos or
121
00:31:00.260 --> 00:31:24.030
Laura Smith: or data warehouses and data lakes where data has been, I say, centralized but not necessarily mobilized. And and then, of course, there's a whole world of analytics of whole graph analytics where you can look at the topology of the graph run algorithms and then and then infer additional relationships. And so we see this happening for things like link prediction or vector similarity.
122
00:31:24.940 --> 00:31:48.659
Laura Smith: For semantic search. For example, Neo4j is the leader in the graph database world. We started this category about a dozen years ago. We have a huge community of developers. Over 250,000 now. And and we are the, and we are the top offering and have over 50% market share.
123
00:31:48.950 --> 00:31:50.740
Laura Smith: Let's go to the next slide.
124
00:31:52.785 --> 00:32:20.580
Laura Smith: we're widely adopted. Across a wide, a wide range of verticals. So all of the North American banks are using us. All the top aircraft manufacturers, all of the top auto makers, most of the top retailers, most of the telcos most of the pharmaceuticals and and in insurance, 8 out of 10 insurance companies on the Fortune 500 are already using us
125
00:32:21.030 --> 00:32:23.049
Laura Smith: to the next slide. And so
126
00:32:23.420 --> 00:32:51.420
Laura Smith: what what is the value that we're providing. So when you begin to connect your data and you create these knowledge graphs, where you're where you're connecting and mobilizing data from disparate domains. All kinds of value is released. And so you can do. We can support use cases around data, driven discovery and innovation. Hyper personalization for retailers or or healthcare providers decision making and you know which is the
127
00:32:51.420 --> 00:33:03.870
Laura Smith: topic of of this discussion around fraud prevention we've got. We've got banks using us for anti-money laundering we have intelligence agencies using us to, you know, identify bad actors.
128
00:33:04.252 --> 00:33:21.500
Laura Smith: data integration. And then, of course, a ton of work around data science where graphs are actually being used to either predict and solve problems inside the graph or or engineer new features that could be then exported into traditional deep learning architectures. Let's go to the next slide.
129
00:33:23.490 --> 00:33:48.860
Laura Smith: Neo4j is a really capable system. It it is able to support 3 major workloads associated with data management. So yeah, you can run transactional workloads, which are sometimes known as OLTP-type workloads. You can run large scale analytics for reporting and analysis. So that would be called an OLAP workload. And we can also do
130
00:33:48.860 --> 00:34:05.349
Laura Smith: machine learning workloads with our large port portfolio of of graph algorithms. And so it's a really unique system in a sense that it could support all 3 major analytics workloads all in the same environment with no movement or export of data.
131
00:34:06.070 --> 00:34:07.189
Laura Smith: Let's go the next slide.
132
00:34:07.420 --> 00:34:25.039
Laura Smith: And so some of the core components of Neo4j is, we use what's known as a native graph architecture. And so that means that the data is stored on disk in graph format, which means that there's no mathematical limit to the size of the graph. And indeed, we have customers that are
133
00:34:25.463 --> 00:34:41.820
Laura Smith: running graphs that have billions of nodes and hundreds of billions of relationships. We have, we have the ability to ingest data from a variety of different data types. We can, we can handle these hybrid workloads that are that have both
134
00:34:42.295 --> 00:34:55.609
Laura Smith: transactional and analytical demands. And we have the largest set of integrations with in terms of tooling and drivers to support a full enterprise ecosystem.
135
00:34:56.120 --> 00:35:07.510
Laura Smith: We're unique in that. We have this very large library of algorithms that are that run in a in a dedicated analytical
136
00:35:07.831 --> 00:35:27.120
Laura Smith: memory space that doesn't interfere with your ongoing read/write operations. And of course, we have a giant community of developers, which means, if you want to go down the path of building out a graph COE in your in your organization, it's it's not difficult to find highly capable and trained Neo4j developers
137
00:35:27.170 --> 00:35:28.219
Laura Smith: for the next slide.
138
00:35:29.744 --> 00:35:34.715
Laura Smith: So so let's dig into a little bit of
139
00:35:35.450 --> 00:35:36.090
Laura Smith: of
140
00:35:36.920 --> 00:35:49.610
Laura Smith: of some examples around fraud and fraud detection. So I think everybody on the phone has probably heard of the Panama Papers right? So this was a big leak of
141
00:35:50.420 --> 00:35:57.959
Laura Smith: of the information pertaining to offshore entities that have been stood up.
142
00:35:58.220 --> 00:36:03.999
Laura Smith: and many of those offshore entities were stood up for the purposes of tax evasion.
143
00:36:04.110 --> 00:36:05.240
Laura Smith: And so
144
00:36:06.076 --> 00:36:19.280
Laura Smith: one of the things that the organization, the International Consortium of Investigative Journalists, did is they took all of all of those documents, and they loaded them into a Neo4j graph. And they basically
145
00:36:19.860 --> 00:36:29.040
Laura Smith: constructed a graph that had this relatively simple design of entities with addresses and officers and intermediaries, and
146
00:36:29.100 --> 00:36:44.210
Laura Smith: with this they were able to identify some very significant, politically exposed individuals who had, in fact, been stashing cash offshore. In these in these special entities.
147
00:36:44.240 --> 00:37:06.320
Laura Smith: and so and very difficult to unravel. All of the different layers of these of these corporations. But you put it in a graph, and it becomes very clear and you have the ability to essentially walk across all of that data and discover who is the actual ultimate owner of a set of accounts. So that's one example. So the next slide.
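A hedged sketch of that kind of ownership walk: starting from officers and following chains of officer relationships out to offshore entities. The ICIJ dataset has its own label and relationship-type names, so the ones used here (Officer, Entity, OFFICER_OF, and the jurisdiction property) should be read as placeholders.

```
// Placeholder schema loosely modeled on the ICIJ data: officers linked to
// offshore entities, possibly through several layers of intermediate entities.
MATCH path = (o:Officer)-[:OFFICER_OF*1..6]->(e:Entity)
WHERE e.jurisdiction IN ['PAN', 'BVI']   // hypothetical property and values
RETURN o.name  AS possibleOwner,
       e.name  AS offshoreEntity,
       length(path) AS layersOfIndirection
ORDER BY layersOfIndirection DESC
LIMIT 25;
```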
148
00:37:07.700 --> 00:37:36.449
Laura Smith: if you think about the insurance industry graphs are actually applicable across a a wide range of the data of business processes in insurance. All you know, beginning with things like personalized recommendations. You know, doing things like bundling around. You know, marketing and advertising sales lead generation. The actual process of underwriting risk assessment. We have customers that are using graphs to improve
149
00:37:37.485 --> 00:37:49.912
Laura Smith: their risk assessment capabilities. Managing policies. Determining, for example. What does a customer actually own across across my different
150
00:37:51.035 --> 00:38:17.310
Laura Smith: product lines and doing things like looking for. You know, major life events. Right? So if you know, if you have a kid that turns 15, hey? Guess what you know. You're gonna have a new driver in the family very shortly. Things like that claims processing fraud detection. We'll we'll touch on that in just a bit. And then, of course, things like customer support and renewals and retention. And so graphs are very good and can provide deep insights across all of these areas
151
00:38:17.790 --> 00:38:18.920
Laura Smith: for the next slide.
152
00:38:20.110 --> 00:38:28.179
Laura Smith: So, for example, we have a customer that is using Neo4j to understand agent efficacy.
153
00:38:28.290 --> 00:38:33.519
Laura Smith: And this is the design, roughly, of what that graph looks like. And
154
00:38:33.870 --> 00:38:46.010
Laura Smith: and this particular and insurance company has a really large independent agency. And so it's very important to them that they have the ability to actually
155
00:38:46.547 --> 00:38:57.659
Laura Smith: send information to agents about what their next action should be relative to data that they understand about that policy holder on their side. And so
156
00:38:58.476 --> 00:39:10.620
Laura Smith: and they're able to do things like householding. They're able to look across all of our product lines. And they're able to basically work with the agent and say, here's what we think this particular customer might be very interested in.
157
00:39:11.490 --> 00:39:13.880
Laura Smith: And so that's one example.
158
00:39:13.930 --> 00:39:16.439
Laura Smith: let's go look at some fraud examples.
159
00:39:16.730 --> 00:39:20.459
Laura Smith: And so in the world of claims, fraud, particularly in auto
160
00:39:22.570 --> 00:39:26.339
Laura Smith: we see some really common patterns, and
161
00:39:26.390 --> 00:39:54.139
Laura Smith: and the power of the graph is its ability to actually connect the data in such a way that details that might not be obvious from an individual claim when they're actually connected in in groups what we would call a subgraph. Certain. Those details jump out very quickly, and you can see that this is a really anomalous pattern. And so one of the one of the types of fraud that we see a lot
162
00:39:54.992 --> 00:40:00.629
Laura Smith: being explored with, perhaps, is this business of fraud rings where you have individuals who are
163
00:40:00.700 --> 00:40:01.790
Laura Smith: occupying
164
00:40:02.520 --> 00:40:03.930
Laura Smith: different roles
165
00:40:04.070 --> 00:40:20.999
Laura Smith: across a set of seemingly independent claims. And so in this model you might say, you know we have a say, 2 accidents, and so there's a node for accidents, and you see that there's a there's a node representing different cars, and then different persons.
166
00:40:21.680 --> 00:40:24.700
Laura Smith: and you can have relationships that describe
167
00:40:24.720 --> 00:40:36.309
Laura Smith: what was the role of that person relative to that car in that accident. And so you see here, I have several people, and this person, number one was, was both a driver
168
00:40:36.340 --> 00:40:45.209
Laura Smith: in one accident and a witness in a second accident, and similarly, person 2. Here was a witness in the first accident, but was a passenger
169
00:40:45.530 --> 00:40:46.989
Laura Smith: in the second accident.
170
00:40:47.320 --> 00:40:58.719
Laura Smith: And then, for example, you might find that you know they're getting legal representation from the same attorney, right? Or maybe they're going to the same physical therapy provider.
171
00:40:59.270 --> 00:41:27.649
Laura Smith: And so this ability to basically understand, hey, this is a really unusual situation that there's this individual that was involved in so many different claims, but seemingly you know, performing different roles in those claims. So graphs can very, very quickly identify these kinds of patterns. Because you just write a query and say, Show me, you know, show me all the people who have who have participated in accidents and across multiple roles say, within the last 2 years.
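The query being described might look roughly like this in Cypher, assuming hypothetical Person and Accident labels with role-typed relationships and a date property on the accident; it returns anyone who appears in more than one accident under more than one role within the last two years.

```
// Hypothetical model: (:Person)-[:DRIVER|PASSENGER|WITNESS]->(:Accident {date: ...})
MATCH (p:Person)-[r:DRIVER|PASSENGER|WITNESS]->(a:Accident)
WHERE a.date >= date() - duration({years: 2})
WITH p,
     collect(DISTINCT type(r)) AS roles,
     collect(DISTINCT a)       AS accidents
WHERE size(roles) > 1 AND size(accidents) > 1
RETURN p.name AS person, roles, size(accidents) AS accidentCount
ORDER BY accidentCount DESC;
```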
172
00:41:27.760 --> 00:41:28.980
Laura Smith: and
173
00:41:29.120 --> 00:41:30.820
Laura Smith: and you know, and so
174
00:41:32.124 --> 00:41:41.269
Laura Smith: you know, you'll get an answer in just a second or 2 from the Neo4j graph. Let's look at another claims fraud example. So this one is a little bit more detailed.
175
00:41:41.320 --> 00:41:46.929
Laura Smith: but this shows you really the kind of inference that we can do. So. Here, let's assume that you have
176
00:41:47.400 --> 00:42:01.229
Laura Smith: a similar structure to this graph. But we've actually done some other things we've we've actually exploded out. Say, for example, the driver's license ids and the phone numbers and the addresses of all of the participants. And then we've joined the data.
177
00:42:01.555 --> 00:42:13.820
Laura Smith: Using those common elements. And out of that we might get a graph that looks like this. And so here you can see, we have a set of cars. We have a set of individuals. And we have a couple of accidents.
178
00:42:14.430 --> 00:42:23.120
Laura Smith: Now, what's interesting right off the bat is, you can see that person, one and person 4 actually share the same driver's license. Id.
179
00:42:23.460 --> 00:42:30.169
Laura Smith: even though they're located at different addresses. You can see that right up at the top there, and they also share a phone number. So
180
00:42:30.460 --> 00:42:31.929
Laura Smith: go to the next slide, Scott.
181
00:42:32.290 --> 00:42:52.809
Laura Smith: So one of the things that we can do in a graph is when we see an interesting and anomalous pattern is, we can actually set a whole new relationship. And so that relationship might be this shared IDs relationship, and it calls into question which one of these individuals actually has a true identity, and which one is pretending to be somebody that they're not.
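Materializing that inferred relationship could look like the following sketch, where SHARED_ID, HAS_ID, and HAS_ADDRESS are assumed names rather than the schema on the slide: two people presenting the same licence node but no common address get linked, so downstream queries and algorithms can treat them as one suspicious identity.

```
// Assumed model: (:Person)-[:HAS_ID]->(:DriversLicense), (:Person)-[:HAS_ADDRESS]->(:Address)
MATCH (p1:Person)-[:HAS_ID]->(lic:DriversLicense)<-[:HAS_ID]-(p2:Person)
WHERE elementId(p1) < elementId(p2)                       // avoid creating the pair twice
  AND NOT (p1)-[:HAS_ADDRESS]->()<-[:HAS_ADDRESS]-(p2)    // same licence, different addresses
MERGE (p1)-[:SHARED_ID {basis: 'drivers_license'}]->(p2);
```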
182
00:42:53.440 --> 00:42:59.160
Laura Smith: Similarly, we, we can run some analytics on this graph and go to the next slides.
183
00:42:59.810 --> 00:43:06.010
Laura Smith: And we can. We can deduce that because there's multiple accidents here.
184
00:43:07.530 --> 00:43:08.760
Laura Smith: that
185
00:43:08.860 --> 00:43:10.120
Laura Smith: that
186
00:43:11.080 --> 00:43:26.389
Laura Smith: alright. And and we, you know, we see, for example, the fact that this person, with a sketchy identity, was involved in an accident, one on the on the left side, and then was also involved in an accident.
187
00:43:26.590 --> 00:43:30.580
Laura Smith: Number 2 over on the right side of the slide, flagged there.
188
00:43:30.930 --> 00:43:33.730
Laura Smith: that also calls into question those accidents.
189
00:43:33.900 --> 00:43:35.737
Laura Smith: and then, similarly
190
00:43:36.370 --> 00:43:42.279
Laura Smith: we might see additional linkages with other individuals who are present at one or the other accident.
191
00:43:42.300 --> 00:43:46.400
Laura Smith: and we'll call into question. Say, accidents, you know accident number 3
192
00:43:46.480 --> 00:43:47.920
Laura Smith: down at the bottom.
193
00:43:48.090 --> 00:43:53.270
Laura Smith: and this is an example of the kinds of inferences that you can do. I think if you go to the next slide
194
00:43:54.040 --> 00:43:58.199
Laura Smith: right? And so then you and then the other interesting thing is is that
195
00:43:58.617 --> 00:44:04.499
Laura Smith: while we identified person one and synthetic person, number 4 on the basis of their id.
196
00:44:04.590 --> 00:44:10.480
Laura Smith: the linkages and the involvements in these other accidents are also revealing to us
197
00:44:10.810 --> 00:44:19.059
Laura Smith: person number 2, and Person number 5, who also are suspicious participants in this group of accidents.
198
00:44:19.160 --> 00:44:25.760
Laura Smith: and this would be an actual indication of a fraud ring. And so now we know that, you know, persons one, four,
199
00:44:25.930 --> 00:44:28.659
Laura Smith: 2 and 5
200
00:44:28.710 --> 00:44:30.820
Laura Smith: are potentially colluding
201
00:44:30.830 --> 00:44:34.399
Laura Smith: to create false claims. Let's go to the next slide.
202
00:44:35.340 --> 00:44:50.040
Laura Smith: And so it's exactly this kind of logic. That we've got a you know. We have a number of customers that are implementing this at scale. And I'll tell you an interesting story. One of our customers. We did a Poc for them. We turned the graph on.
203
00:44:50.510 --> 00:44:57.089
Laura Smith: and within just a couple hours of the graph going into production, they immediately identified 2 auto claims rings.
204
00:44:57.210 --> 00:45:04.559
Laura Smith: and and one of them was a garbage collection company, and the other one was a landscaping company.
205
00:45:04.570 --> 00:45:16.340
Laura Smith: and just within a couple of hours, they estimated that those rings were each costing them something like $500,000 a year.
206
00:45:16.990 --> 00:45:43.399
Laura Smith: And so right there basically paid for their Neo4j license. So pretty cool and so some other benefits is, you know, when you have this kind of analytical capability, this ability to actually look at how things are connected. This really does help cut down on things like false positives that are being that are being generated by more simple rules based systems. It allows your special investigations units to to operate more effectively.
207
00:45:44.270 --> 00:45:51.960
Laura Smith: And you'll end up with a significant drop in fraud payout. And and you'll see the Roi around that. So the next slide?
208
00:45:52.958 --> 00:45:55.349
Laura Smith: Zurich Insurance uses us as well.
209
00:45:56.010 --> 00:45:57.200
Laura Smith: and
210
00:45:57.430 --> 00:46:13.099
Laura Smith: and without going into the details, specific details of their implementation, I've got a few things that we can share with you on that. So they claim that they've they've saved themselves over 50,000 investigator hours.
211
00:46:13.485 --> 00:46:35.229
Laura Smith: They? And these are some of the some of the statements that they're making. You know about this implementation. You know, that previously they had a lot of data that didn't have a lot of context, but their ability to look at things like bank accounts and addresses and customer data together was seen as a major benefit
212
00:46:35.450 --> 00:46:37.989
Laura Smith: because they can link them holistically in the graph.
213
00:46:38.130 --> 00:46:44.580
Laura Smith: And they can also update that in real time and see the shifts in the graph
214
00:46:45.190 --> 00:46:48.230
Laura Smith: which helps them with their with their reconciliations.
215
00:46:48.270 --> 00:46:49.319
Laura Smith: The next slide.
216
00:46:51.180 --> 00:46:52.280
Laura Smith: And so
217
00:46:52.930 --> 00:47:17.230
Laura Smith: you know, they're able to basically be bring together. You know the same kinds of information that we've been showing and they're also enriching this interestingly with data from external sources like national databases, blacklists and other economic data like credit scores. To provide additional context, around the individuals in their databases.
218
00:47:17.290 --> 00:47:32.729
Laura Smith: And so you know, they love the ability to rapidly identify issues. They can see the context and they can drill down to a specific claim. See what else it's linked to. They can compare it to past behavior, get a full understanding of everybody that's involved in the claim.
219
00:47:32.760 --> 00:47:34.990
Laura Smith: And then, I,
220
00:47:35.520 --> 00:47:42.190
Laura Smith: their investigators like this solution so much that our stakeholder there shared that, you know. If we were to
221
00:47:42.330 --> 00:48:06.069
Laura Smith: take it away there would be a huge outcry. So anyway, the point is is that this kind of capability really helps you find, you know, those needles in the haystack, do it at scale, and be able to really improve the productivity of your investigators, as well as actually manage down your false positives and your payouts
222
00:48:06.680 --> 00:48:09.879
Laura Smith: the next slide. And I think we are done
223
00:48:11.960 --> 00:48:20.449
Laura Smith: super. Well, Michael, thank you so much for that. And and I think what what we're hoping the audience has seen today is a couple of different things, right? Which is
224
00:48:20.500 --> 00:48:22.440
Laura Smith: this really can be
225
00:48:22.550 --> 00:48:34.209
Laura Smith: fast. It can be incredibly accurate, and it can really move the needle. What you've seen today is is like, literally one of thousands of kinds of use cases in a couple of different ways
226
00:48:34.260 --> 00:48:56.059
Laura Smith: and and and kind of where we're going here is that with Expero and Neo4j, we can do these kinds of things at a different level. We can make them fast. Obviously, Neo4j can scale the performance. The machine learning those kinds of aspects that bring that to it. And then, hopefully, what you saw today was Expero and the ability to make it easy
227
00:48:56.060 --> 00:49:04.829
Laura Smith: whether it's a chat capability. It's bringing up for analysts or other kinds of sort of more technical side folks or
228
00:49:04.960 --> 00:49:23.329
Laura Smith: claim adjusters and very simplistic kinds of customer-focused team members. It doesn't have to be overwhelming, and it really should be easy and fast to sort of bring all that together. And then here at the end, look us up online. Neo4j, obviously, has an enormous
229
00:49:23.628 --> 00:49:36.470
Laura Smith: sort of self-learning capability out there, there's code. If you're interested, lots of great things that are going on out there. So certainly go check out all these links. And there's even more online with Neo4j.
230
00:49:36.620 --> 00:49:40.920
Laura Smith: And then for Expero, we're in most of the major clouds
231
00:49:40.970 --> 00:49:45.329
Laura Smith: we can install either on-premise or the cloud. But certainly look us up
232
00:49:45.712 --> 00:49:56.590
Laura Smith: as well, and as we're getting to our next slide here, we'll open up the floor for questions. If you have any questions, please enter them in the box.
233
00:49:56.973 --> 00:50:00.420
Laura Smith: And we will be sending out the recording of the video
234
00:50:00.968 --> 00:50:11.720
Laura Smith: at the end of this week, and we'll be able to do that from there. So we'll take a minute here and basically identify questions if you want to send those in Michael looks like one
235
00:50:15.430 --> 00:50:35.520
Laura Smith: super. Thank you all so much for your time today. This was a pre-recorded session, but we actually had a few questions come through so wanted to go ahead and address those. There was one, Scott, that came through that says we use Guidewire for claims, and I saw that this might work with it.
236
00:50:36.640 --> 00:50:45.613
Scott Heath: Yeah. So we actually have some adapters. And it kind of depends on how your Guidewire is set up. If you have all of the modules of Guidewire we can plug in, either individually
237
00:50:46.177 --> 00:51:09.829
Scott Heath: or at an enterprise level, pretty straightforward. The other thing that you saw is we can actually do some of the fraud that will actually sit inside of the Guidewire front end. So it's all integrated. Or if you have a Tableau kind of a third party style dashboard where Guidewire, and perhaps you even have other kinds of solutions. So kind of the answer is, yes, plugs right into Guidewire and then
238
00:51:09.830 --> 00:51:12.990
Scott Heath: and it actually has different ways that we can do it.
239
00:51:13.430 --> 00:51:14.160
Scott Heath: Okay.
240
00:51:14.408 --> 00:51:18.130
Laura Smith: Great. And then also, if we wanted to do a Poc, how might that work.
241
00:51:18.839 --> 00:51:31.399
Scott Heath: So, Pocs are really pretty straightforward. Really, what we do is we look for a focused area of functionality. So what we showed in the some of several of the Demos, and and Michael did a good job of that as well, which is
242
00:51:31.400 --> 00:51:54.409
Scott Heath: whether you want to do sort of the surfacing, and you want to score a claim, or you want to be more elaborate in part of the demonstration that I showed. We simply scope that out, and we like to keep it in a 4 to 8 week kind of a timeframe. We use a Neo4j database on the back end. We use either a synthetic data set or real data, kind of depends on your mileage, and we can do that on premise
243
00:51:54.720 --> 00:52:10.409
Scott Heath: or in the cloud and then we basically run it. We look at those algorithms. Or, again, what your use case or focus is and then we wrap it up. And we we do those usually fixed fee and they're usually very, very successful.
244
00:52:11.950 --> 00:52:34.489
Laura Smith: Okay, great. That was all the questions. I don't see any others. So just as a reminder, we did record today's session again. So we will send out an email with a copy of the recording. If you have any questions or need anything feel free to respond to the email, we'll be happy to assist. With that we'll go ahead and end today's session. And we look forward to the next one. Thanks. Everybody.