News From the World Wide Web, Not the Regular Blog

Manish Garg on being the right amount of paranoid about AI by Sarah Wheeler for HousingWire

HousingWireHousingWire

Editor in Chief Sarah Wheeler sat down with Manish Garg, senior vice president of product and technology at EarnUp, an autonomous financial wellness platform, to talk about how his company is using gen AI to deliver a personalized experience for customers at scale. Garg has a deep background in building enterprise software and has spent the last decade working with fintechs in the mortgage lending space.

This conversation has been edited for length and clarity.

Sarah Wheeler: What differentiates your tech?

Manish Garg: We focused on borrower, financial health, compliance and risk data protection as the guiding principles while building our tech stack. We always work from the desired outcome backward, focusing on consumer financial health. We bake in aspects where we serve our tech to servicers or credit unions or banks to help them reduce default risks by doing a lot of data analysis behind the scenes to help them identify how they can reduce risk of non-payments, reduce any default risk, and keep their books healthy but also keep the consumers in a healthy place. So we have a lot of tech in place for predictive analytics.

SW: How are you leveraging AI?

For a very long time, we were mostly doing traditional AI, which is building forecasting models, predictive analytics and being able to classify risk into different categories and providing all of that back to the enterprises. In the last 18 months or so, things have changed dramatically.

We were in a fortunate place to have some visibility to that very early on. And we started investing in that right at the beginning and we’ve now built core capabilities in our platform to be able to do things that we’ve only talked about in the industry for years. It’s like pipe dreams finally coming true — being able to generate compelling, hyper-personalized content for consumers to help the loan officers, back office, underwriter or processor to do their jobs in even more efficient ways. These are capabilities that we all hoped someday would be real, but it seemed like science fiction, and suddenly it’s not. Suddenly, it’s here.

SW: How have AI capabilities changed in just the last six months?

MG: We’ve had AI around for a while. Most of the people on the tech side understood it and appreciated it, and the data scientists, but for a lot of business users, the value was not clear. But for the first time, it’s something everyone can touch and feel. So that is something that is fundamentally shifted and why there is so much adoption and why there is so much optimism around that. The second part of it is doing things which seem almost magical or very, very difficult to do  — that has become very easy to do because of large language models (LLMs) and AI.

For example, creating hyper-personalized content for a consumer. We do a lot of that with our customers where we are able to ingest a lot of personal finance information about consumers, from their banks, from credit bureaus, from many other sources, and then the consumers can interact with their personal information to understand more about it. That was not possible before — you would have to build a full application for it before, but now I can talk to my own data.

For the enterprises, for the loan officers, it’s about competing on rates. As the refi market hopefully starts to come back with falling rates, everyone’s going to be competing for the same set of borrowers. They’ll be flooded with very similar looking offers like lower your rate. But now someone can actually leverage gen AI, and if they work with us, they can create a very personalized offer for a consumer. ‘Hey, it looks like you’ve got these types of debt. You seem to have enough equity in your home that if you took $62,000 of cash out, you can pay some of this debt off, and financially, you’re going to be so much better.’ I’m much more likely to go to a lender like that.

SW: How do you think about security?

MG: I think security is a really big and serious topic. There have always been security risks, and new security risks will keep coming up — it’s an arms race. AI has enabled us to address security in ways that was not possible earlier, by helping us identify security threat patterns that we may not have modeled. If you have to build a predictive model, it has to be able to predict certain things, which means you are assuming certain things. But it’s very hard to assume new security risks that will come a year from now. Like nobody knows that, but with gen AI, you don’t have to know everything. It can identify new patterns on its own without you having to tell it to do that.

So that’s made it really powerful tool and an ally to be able to identify and address new threats, but it’s also brought new security threats. For example, there’s a new type of security threat called prompt injection, where you can put in malicious prompts and get AI to do things that it is not supposed to do and return you responses that it should not be responding to.

Other things that we are seeing with generative AI is that the output of the AI is not always something you can accurately predict, because that’s the nature of it. It is generating brand new content that has never existed before so you can’t really predict what it’s going to generate. So how do you test that it is secured, it is compliant? We’ve been looking at many new technologies around this.

For example, something called generative network and discriminative networks, which is where one AI model tests the work of another AI model based on probabilities — like things like these are becoming real. So even the way you build and test new applications is going to completely change.

And there’s the whole topic of generative adversarial network, or GAN, which is basically a network where AI models test each other’s work. And there’s a whole framework to that, because we need to do that in a methodical way and not just do it randomly. So we have to really be at the cutting edge to make sure that we are ahead of what’s happening in the industry today. This is what it means to make AI applications enterprise-ready. It’s not just building sexy new interfaces and great demos, but really digging very deep into what goes into building compliance and security and making it safe to use.

SW: What keeps you up at night?

It’s part excitement, part fear that keeps me up at night. And as exciting as it is, like you have to really be paranoid about certain things. I feel very excited that AI is finally starting to take off. That’s really exciting, but the pace of innovation is also very, very fast, and accelerates like nothing we’ve seen before. We are measuring something known as AI years, where a week or two of AI is like a human year compressed down to a few weeks.

But as all of that happens, companies will have to run very, very fast just to keep in the same place and the ones who are going to innovate are going to far exceed the ones which will not be able to innovate. We’ve seen that with general tech, but it’s going to be even more pronounced now.

SW: How do you build a tech team that can handle the scope and pace of AI innovation?

MG: I think our team is one of our core differentiators. Our core team is comprised of very specialized engineers who can build business-critical fintech applications where we can move hundreds of millions of dollars and reconcile and account for all of that, and that’s a huge undertaking that we do all day, every day. It takes very specialized types of engineers to be able to work on such business-critical applications. It’s mostly our developers, security, compliance — people who are very proficient with cloud and building things which are cloud native data platform APIs.

And then we have a dedicated AI division where we continuously keep evaluating our core strengths. As the world of AI changes, we have to reshape our team and bring in expertise as required. We very quickly moved from what we now call traditional AI to what we are doing with LLMs in generative AI, and the kind of expertise that I need from team is very different.

We have to think a lot about the end-user experience, because what does an end user experience actually mean in this case? It cannot just be a conversational interface, because conversational interface is like a room within infinite doors, like you can keep going from one place to another — but you have to also confine it. So how do you combine a conversational interface with a point-and-click traditional application, so that you provide enough flexibility, but you also provide structure to the consumers to be able to use your application and be productive. We have very specialized design and development teams that think about these problems all the time and test it in the market, beyond just our core engineers who are very proficient with LLMs.

FromAround TheWWW

A curated News Feed from Around the Web dedicated to Real Estate and New Hampshire. This is an automated feed, and the opinions expressed in this feed do not necessarily reflect those of stevebargdill.com.

stevebargdill.com does not offer financial or legal guidance. Opinions expressed by individual authors do not necessarily reflect those of stevebargdill.com. All content, including opinions and services, is informational only, does not guarantee results, and does not constitute an agreement for services. Always seek the guidance of a licensed and reputable financial professional who understands your unique situation before making any financial or legal decisons. Your finacial and legal well-being is important, and professional advince can provide the support and epertise needed to make informed and responsible choices. Any financial decisons or actions taken based on the content of this post are at the sole discretion and risk of the reader.

Leave a Reply