Insights

AI Talks: Applications in Healthcare, Finance, and Cybersecurity

September 26, 2023
by: Cohen Circle Team

Join industry leaders in Healthcare, Financial Services, and Cybersecurity for a discussion of real-world applications of AI at their companies - Oscar Health, Pagaya, and IronNet. From opportunities to transform patient care to optimizing financial operations and fortifying cyber defenses against evolving threats, tune in for a lively discussion.


With a special introduction by Betsy Cohen, Co-Founder, Cohen Circle and The Bancorp.

Below are some highlights from the webinar, followed by a replay.


On the Risks and Threats of AI

Understanding the ethical pieces of technology is extremely important to make sure we're moving in the right direction, and that we're utilizing technology that is actually better for mankind. This would not be the first time this has happened. I think that if we were able to do that with nuclear energy 70 years ago, which is probably the most destructive thing that was ever invented, we will be able to do it with AI.  - Avital Pardo, Pagaya

When you think about AI like this unbelievably brilliant child, how do you bring that child up to act responsibly in our culture, which may be different than the culture of another country who wants this child to be brought up as a lethal weapon or as somebody who spreads disinformation to benefit that country...I think we are going to see this area evolve into a huge part of warfare. If they attack us in that area, are we ready to defend? And the answer, in my opinion, is, no, we are not. - General Keith Alexander, IronNet

On the Opportunity in LLMs and Data Structuring

It's very difficult to get a complete picture of the healthcare data sets. We have these little islands of data captured, and those islands may be multiple versions of the truth. It creates a real hazard when you're working in healthcare: when you try to apply a large language model against that data set, you have to spend an extensive amount of time and energy to get it right before you apply the large language models to the data. - Mark Bertolini, Oscar Health

Being able to get more data is a huge influence on how good your AI becomes, but also standardizing the data, correcting the data, and building a correct framework of data...The tech infrastructure that is required to analyze huge sets of data becomes easier over the years. Ten years ago, when we started, we needed to develop the entire stack ourselves. But now we don't need to do that anymore. I think that the data, the infrastructure, and the usage of better algorithms over time are what will make AI better and better. - Avital Pardo, Pagaya

We have to come up with a solution that gives us the radar picture in cyber that allows us to see the attacks coming so we can respond and defend. You can see the issue even for the upcoming elections. Adversaries are going to put disinformation into the network. They've done it in the past. We see it in Ukraine. We saw it in our previous elections. We're gonna see it in 2024. How do we defend against that? How do we stop that disinformation? - General Alexander, IronNet

On What's Next with AI

Can we make the system more convenient, accessible, and affordable by virtue of reducing the administrative burden on people and the way they have to use the healthcare system?... Can we use it for pharmaceutical research, using the data that's been used in the past to approve drugs and therapies, in a way that we can do that quicker and more affordably than running all these massive patient studies and waiting so long to get these technologies introduced into society? As an investor, I have to say, where do I want to place my bets? Is it on the experience side or is it on the clinical side? And how can I make money on that side of the business? - Mark Bertolini, Oscar Health

I think what's going to be extremely interesting about this revolution is that this is the first time that we're seeing a machine that kind of thinks in a similar way to a human. That has deduction capabilities and reasoning capabilities and generalization capabilities, very human-like thinking. And that can interact with humans. We're still in the very early days of that. And I think we'll see that developing over time to a place where the machines think like humans. - Avital Pardo, Pagaya


Full Webinar Replay: AI Talks

Daniel Cohen, (Moderator): We're super excited about this topic, because for a number of months, everybody has been talking about this – about AI. And we're lucky enough to have 3 people who've been involved in various elements of the applications of AI across 3 different sectors, in very prominent positions: healthcare, finance, and cybersecurity. Mark Bertolini of Oscar Health, General Alexander of IronNet, and Avital Pardo from Pagaya.

You have all been involved in changing technology through AI over the last 6 to 9 months as it's stormed onto the scene, affecting everything from, you know, people's homework to court filings that are completely wrong. How has this changed your lives, or how do you expect it to?

Avital Pardo: For me, as someone who's been working on different types of AI models for the past 10 years, I saw this revolution starting 10 years ago. It was clear that in a specific set of problems, where we can standardize the data, we have enough samples, and we have enough attributes regarding what we're trying to predict, the AI framework would be better than what humans can do. So even 10 years ago, when we started in credit underwriting, it was obvious to us that AI would win over humans.

With the release of ChatGPT, the entire domain of AI is having somewhat of a moon landing moment, where something that was happening for a long time behind the scenes had some kind of an event that made the general public aware of it. And I think this will have a lot of different implications for the technology, and for this domain. We'll see a lot of investment go into this area. We'll see spending within generative AI in different places of the value chain. I also see it in different ways of analyzing complex data, like reconciliation of financial data.

Daniel Cohen, (Moderator): Mark, Oscar Health has some areas where it implements AI. How do you see this revolutionizing what Oscar does?  

Mark Bertolini: Oscar is really a technology company with a health insurance laboratory. We have been very intentional since our inception about owning our tech stack front to back. And as a result, in an industry that's at a 0 NPS, we're at an NPS score of 57. So, a very positive score from the member experience standpoint.

As we look at how we deploy AI, we're very careful about discriminating between things that make it easier to use. On the back end of our business, we deploy AI and very large language models very specifically against things like fraud, waste, and abuse, and risk adjustment. All those sorts of algorithms that we would run in the past with actuaries and teams inside the company, we can now do with large language models. We're in a continuous hackathon.

We've now got 47 large language models that we're applying against our business tech stack front to back, in large part because we have a single-threaded tech stack and a single version of the truth when it comes to data, versus a lot of companies that actually have to take their data and clean it up to use it in their language models in order to make them more effective. But we're very careful. And we have significant review on things that touch clinical care, where patients can be impacted.

Daniel Cohen, (Moderator): Avital, you're involved in lending, which is a highly regulated business. When you apply AI, do you still feel that you're able to be 100 percent bias-free?

Avital Pardo: We work in lending and with major banks, so everything we do in AI needs to be viewed from a compliance standpoint and specifically a model compliance standpoint, which is sometimes difficult for the data scientists. But this is the right thing to do.

Our mission is to promote financial inclusivity. Do I think AI does that better? Actually, much better. I don't know if bias-free is something that exists. But I think machine logic is much more bias-free than human logic. We've run a lot of internal studies on that, to understand different types of decision making, not just related to the way the company builds models for underwriting or makes credit decisions. What creates biases? In multiple verticals of lending, in industries where there's some kind of meeting in the process between the person making the decision and the person asking for the loan, the level of bias that gets introduced into the decision framework is higher. The more we remove people from making the decision and leave them to supervise how the decision is being made, the better we will get. We'll get fewer biases, and we'll promote financial inclusivity.

Daniel Cohen, (Moderator): General Alexander, you really were part of the formation of CYBERCOM. Maybe you can tell us about how you were dealing with AI, both adversarial and the uses that it could have in stopping adversarial threats. 

GEN (Ret) Keith Alexander: When you start thinking about AI, the key thing is the data set that you're using. Is it curated? Do I trust it? And if I trust it, then I'm going to get good responses. We're seeing with the more recent models that a lot has happened, as you saw with ChatGPT. They burst onto the scene because they were able to pass exams where the previous models scored at the bottom percentile. Now they're at the top. So these models with hundreds of billions of parameters, huge models, provide much better responses. So that's a good thing.

The key issue that we face, and that cyber faces, is how accurate, how good is the data that we're using? How do we curate it? And when we start to think about cyberspace, whether we like it or not, war is going to be conducted in this space as well. And you can see that going on in Ukraine today.

You can create AI models that look at vulnerabilities and threats and that bring things together faster than we can as humans. You have all these things that now beat what humans can do. The adversary is going to attack you with AI across a whole series of things: to probe, to detect, to get by. And the only way to respond is with AI models that can see, understand, and defend. This digital space is going to rapidly evolve, and it extends to things like disinformation. We have it in the real world; let's say hypothetically that some people don't agree on the facts. You're gonna have that in the virtual world, and that is one of the biggest issues that we face.

When you think about AI like this unbelievably brilliant child, how do you bring that child up to act responsibly in our culture, which may be different than the culture of another country who wants this child to be brought up as a lethal weapon or as somebody who spreads disinformation to benefit that country? So, we have all these elements coming into cyber. I think we are going to see this area evolve into a huge part of warfare. If they attack us in that area, are we ready to defend? And the answer, in my opinion, is no, we are not.

If you think about missile technology, defending against missiles coming in gives us about a 30 min response time. Now, when you think about cyber, it's in milliseconds. The government cannot easily see what's hitting Oscar Health, what's hitting Pagaya, what's hitting the banks, what's hitting the energy sectors. The consequences? Our nation is at risk in this area. We have to come up with a solution that gives us the radar picture in cyber that allows us to see the attacks coming in so we can respond and defend. You can see the issue even for the upcoming elections. Adversaries are going to put disinformation into the network. They've done it in the past. We see it in Ukraine. We saw it in our previous elections. We're gonna see it in 2024. How do we defend against that? How do we stop that disinformation?

Daniel Cohen, (Moderator): One of the things that both IronNet and Pagaya have pioneered is the insight that the more data we have, the better the models.

Avital Pardo: We've built an AI work plan for 3, 4, or 5 years ahead. Data was a huge part of that. Being able to get more data is a huge influence on how good your AI becomes, but also standardizing the data, correcting the data, and building a correct framework of data. Another thing is the ability to actually use this set of data. The tech infrastructure that is required to analyze huge sets of data becomes easier over the years. 10 years ago, when we started, we needed to develop the entire stack ourselves. But now we don't need to do that anymore. I think that the data, the infrastructure, and the usage of better algorithms over time are what make AI better and better.

GEN (Ret) Keith Alexander: On this point, there's a key thing that Avital is bringing up. And that is the exceptionally large language models, 300 billion parameters up to a trillion, are extremely expensive to build. We have these huge corporations that can now jump in and lead that innovation, because what we're talking about here is an innovation wave like building computers in the sixties and seventies. Now, we're talking about artificial intelligence and the innovation that comes from that. That's hugely important for our nation. This is a great opportunity, and you know a lot of people pick on big companies, but if it weren't for the big companies, we wouldn't be able to afford the big models. So this is something that I think we must come to grips with. How do we do that? How do we make it responsive to our social and cultural requirements?

Daniel Cohen, (Moderator): Mark, so how do you see the data? If you look at AI in terms of the data sets and what you're using today, I mean, where do you think that is? How do you let large language models or AI structure data for itself? 

Mark Bertolini: The healthcare data sets are all sort of captured little islands that are considered proprietary by insurance companies, pharmaceutical companies, hospitals, physicians. And so, it's very difficult to get a complete picture of the data. We have these little islands of data captured, and those islands may be multiple versions of the truth, where Mark Bertolini is in the system in multiple different ways based on the type of technology each uses. So, it creates a real hazard when you're working in healthcare: when you try to apply a large language model against that data set, you have to spend an extensive amount of time and energy to get it right before you apply the large language models to the data.

Daniel Cohen, (Moderator): Right now, all your companies, Pagaya, IronNet, and Oscar Health, are in this island system, where we have strong data protections around sharing data across providers and things like that. Both of your companies also have similarly strong protections. You've been able to take anonymized data and use it in large data sets by partnering among multiple organizations to get the data that you really need. As you look forward, could you see a day where the same models that are analyzing the data are applied at a different level, to structuring the data, taking out the human component of deciding how it's going to be structured? And how do you think that's gonna evolve?

Avital Pardo: It's a very good question. In the past we've done a lot of work on data structuring and trying to automate the structuring. What we found is that what actually works in automation is the ability to correct data, especially in our industry, where credit bureau data is very important for making decisions. Your data is reported by tens of thousands of different organizations, in various ways that are sometimes not correct.

We started by trying to do it in a heuristic way, saying, 'Okay, we see the same person, and we see different types of reporting regarding this same person. So what can we conclude?' So we started heuristically, but when we started building automations, we found that this is a much more difficult domain, because it's difficult to create supervision for the models. When you're trying to make a credit decision, you have a historical data set saying, 'Okay, this is a specific loan, and it was either paid or not paid,' or this is what ended up happening to this loan. But when you're looking at an unstructured data set and saying, 'Okay, I know some of it is mislabeled. Let's try to find what is mislabeled here,' based on a very, very specific domain, it's much more difficult.

Daniel Cohen, (Moderator): If you look at the risk, General Alexander, do you have anything to add to how you see that approach? 

GEN (Ret) Keith Alexander: Yeah, I'll agree with Avital. I think we're on the cusp of this when I look at what we're doing in machine learning, so I'll call that the baby steps to AI. The real issue now is acquiring all the data in logical groupings and how you use that data. So you have these huge foundational models, and on top of those you build the models you're gonna run that are gonna hit specific areas: vulnerability assessments, detection algorithms, threat-related data sets. You can see these models starting to evolve as logical areas that you would have in AI. So, I think the structuring of it will follow: the foundational models, the big ones, just keep growing and becoming available, and then you're gonna see all these other things spring up around them that allow us to take in the data and structure it.

Mark Bertolini: On the healthcare data access, I would not call it a privacy problem. I would call it a policy problem. If we were to have a single patient identifier for everybody in the country, like our Social Security numbers, and we were to have a single provider identifier, you could seek, through node encryption, the data that you need for that patient. And I would argue it's better to have it everywhere versus in one place where it can be attacked. If you could use that, you could actually access the data you need to make decisions about a patient.

But we refuse to make those kinds of decisions as a society and as a political infrastructure around this issue of HIPAA and privacy, and how we are going to use the data. But if you make the patient identifier something that the patient releases to the provider to go seek that data, it allows us access to the data. So, the transparency problem is largely a political policy issue. It's not a technology problem.

Daniel Cohen, (Moderator): Yeah, the problem is we all have that instinct. Most of us grew up at a time when, at 13 years old, you were anonymous. You had to find a pay phone to call your parents, and if you were missing, you were generally missing, even though you might have been right outside your house. Today, I know exactly where my kids are; I look on Find My iPhone and that's where they are. I don't want the world knowing about my medical records or my financial records, or anything else like that. But AI, with its ability to move forward in these large language models, for better or worse, will eventually be able to de-anonymize me.

How do you guys really feel we should deal with that? And isn't it inevitable, General Alexander, that somebody will be able to associate American military personnel data with transactions and other things and get to a mapping of a whole country? And if we can't prevent it, don't we have to jump in?

GEN (Ret) Keith Alexander: Yeah, just look at the breach of security information on the military, hundreds of millions of records stolen. I think that's a great concern. Now, having said that, I think there are more destructive paths that people are going to use cyber for. Just think about all the things in healthcare that you can do to impact patient care, things you can do to limit movement of goods, things you can do to shut down the IT and energy sectors. All of these are issues that I think are going to be the early battles in cyber and cyber defense, because up to this point, you know, this was pre-airplane: nobody could reach us; they'd have to get on a boat and come. Then they had missiles and other things, and now they have cyber.

Daniel Cohen, (Moderator): Do we have to regulate?  

GEN (Ret) Keith Alexander: I think this is an area where, because 90-plus percent of it is in the private sector, the public sector needs to partner with the private sector to make this work. The public sector, the government, the military, has the authority to actually respond to attacks against our nation. The commercial sector does not have that authority, in cyber or elsewhere. If you were allowing an adversary to take an unlimited number of shots at you in the physical world with weapons, you'd say, 'Where's the military that's protecting me?' We protect that. So we have to do that in cyber as well.

Mark Bertolini: We as a society do not respond fast enough to technological threats from other nations. China has been buying 62.5% of all the fiber optic cable in the world to wire for 5G. Why? Because 5G allows you less than a millisecond of latency and computing on the edge of the network; the devices can be dumber. And as a result, they can use it more effectively, both in commerce and in cyber. And yet in the United States, to get fiber optic cable laid at a level appropriate to create the response we would need as a country is going to take years. And it's because we have to get permits for every foot of cable we bury. We have to do it town by town, city by city, state by state. And that's gonna create a huge competitive disadvantage. I think it's a national defense issue in the end.

Avital Pardo: I would also distinguish between regulation and data protection, which I think, as you both said, are different issues. As for defending against the Chinese or the Russians, or any country's effort to steal data: I think, Daniel, most of the scenario you portrayed is already happening. I think some countries out there know your medical records. This is a real threat that is happening. But I'm not sure this will be solved by regulation. And when we look at regulation, especially as LLMs and AI become stronger and stronger, we need to start thinking about it and understanding it.

Understanding the ethical pieces of technology is extremely important to make sure we're moving in the right direction, and that we're utilizing technology that is actually better for mankind. But this would not be the first time this has happened. I think that if we're able to do that with nuclear energy 70 years ago, which is probably the most destructive thing that was ever invented, we will be able to do it with AI as well.   

GEN (Ret) Keith Alexander: What Avital brings up is one of the questions that people are wrestling with. How do you know that you control the AI that you develop? That's gonna be an issue, so having the right guardrails around it matters. Think of this like Skynet in Terminator: how do you make sure that the AI you build actually has the culture and other things it needs not to evolve a step beyond where it was intended? That's the concern I think people have, because if you train it to be like a person and now it can actually reason and interact, what does it reason, and what is its prime directive? And how do you make sure that's all controlled? I think that's an issue that causes a lot of discussion and alarm. It shouldn't slow us down. It's something that we should flip around and say: how do we ensure that happens? Because AI is coming at us. We're not gonna stop it.

Daniel Cohen (Moderator): It's a question of nuclear proliferation, right? It happens, period, no matter what you want, and you can make a big fuss about it and arrest a couple of people. But in the end the proliferation will happen, because it's so desirable that even countries without the resources of the United States will find ways to develop it. The future holds a lot of risks for us. And the regulation, you know, is going to be difficult.

I'm just gonna ask you guys: if somebody were making a venture investment in AI, if you were the head of SoftBank, where would you be looking right now?

GEN (Ret) Keith Alexander: My thought is they're going to have to figure out how they build a large language model, the foundational models that all these other applications are going to build on. That's going to be a competition. I believe that most of them right now are US-based. Everybody's gonna try to go do their own. They're expensive. They're hard to do. But they're gonna develop their own. Hence the evolution. You mentioned the versions of ChatGPT; it broke through with these larger models, with billions of parameters. All of a sudden they're doing things, and everybody goes, 'Wow! This is great.' And now it's out there.

All these other countries are going to say, we need to develop our own foundational model at scale. And so I think others are going to say, how do we compete in that area? And that's going to be a huge area of competition, because all the other things that reside in it are going to be useful. 

Mark Bertolini: It is use cases that demonstrate the capabilities. In our business, healthcare, we would look at two versions. One is, can we make the system more convenient, accessible, and affordable by virtue of reducing the administrative burden on people and the way they have to use the healthcare system? So instead of getting referrals and prior authorizations, can we give them a roadmap to use the system without having to get permission? That's one set of parameters, and those are the parameters we're working on with vigor at Oscar right now.

The other side is the clinical stuff, and a very different level of investment. That's around how accurate we can get on image reading by using large language models, using the data available versus having humans look at it. What can we do to cut the cost of that? The convenience of it? Where do you want to make your bet? Can we use it for pharmaceutical research, using the data that's been used in the past to approve drugs and therapies, in a way that we can do that quicker and more affordably than running all these massive patient studies and waiting so long to get these technologies introduced into society? As an investor, I have to say, where do I want to place my bets? Is it on the experience side or is it on the clinical side? And how can I make money on that side of the business?

Avital Pardo: I think that when you examine technology, and try to think about investment, then there's two main aspects. The first is the use cases, and the second is what we call the defensibility of the technology or the business model. I think what's unique about this technology is that the use cases - everybody can see them. You don't have to be a business expert. You can see them in a lot of places, and it's easy to envision different use cases. What's more difficult, and I think we don't have a good answer yet, is what would be a defensible business model over time and what will not? And I think that this is where I would start thinking. I think this is what will be decided over the next year or two.  

I would bet on everything that has to do with infrastructure. The infrastructure for building and training the models is definitely like that; Nvidia is a good example, but a lot of the tech stack is going to be huge for this. Then, everything that has to do with something that creates unique data you can train the models on. Right now, the models are trained on everything. But if you have a specific domain where you hold unique data that enables you to train the model on it, and I think credit and healthcare are gonna be part of that, then this can create a moat over time. And the third thing is human interaction. If you create something that generates a lot of human interaction, you can improve the models.

Daniel Cohen (Moderator): It will be very interesting to see how this will shape up. If you separate the world into hardware, software, and applications in this AI space, where would you rather be? 

GEN (Ret) Keith Alexander: I think hardware, chips, is an amazing area. And you mentioned some of those, Nvidia, and everybody's coming up with advanced chips. But I think the real thrust of this will be in the software, the tools and the capabilities that companies like Pagaya, Oscar Health, IronNet and others create using that.

Mark Bertolini: I would say more the platform. What is the platform that's being created here? Is the platform valuable? The economics of the platform? Because I think the specific applications, the use cases, are going to get into the public domain. They won't be hard to copy; it's how you use them against the customer base that's going to matter. So it's really the platform capabilities that you can build from it.

GEN (Ret) Keith Alexander: When you say that, Mark, do you mean the platform like the model itself, the foundational models, large language models, or you mean the hardware platforms? 

Mark Bertolini: No, I'm talking about the digital platform that you're using to run the business. Like, you know, Amazon uses cloud technology as a platform that does all sorts of things for its businesses and generates most of its earnings. So, it's that platform capability.

Avital Pardo: I kind of agree with both. I would say, not in apps, not in hardware, but something in the middle. It can be software, can be kind of a software infrastructure, but somewhere there. I think what's more important is what's happening at the application level of the technology, and less important how it was actually built. I think what's going to be extremely interesting about this revolution is that this is the first time we're seeing a machine that kind of thinks in a similar way to a human. That has deduction capabilities and reasoning capabilities and generalization capabilities, very human-like thinking. And that can interact with humans. We're still in the very early days of that. And I think we'll see that developing over time to a place where the machines think like humans.

There will be some kind of machine logic that is different from human logic. It's very similar to how machines started playing chess. They started playing chess like humans, but for the past 5 years they've been playing chess in a very different way. And now we understand that human strategies within chess are just a subset of strategies and do not represent everything. I think it's going to be the same with human logic: we'll understand that the way we think of the world, human logic, is just a subset of potential logic. What we're now excited about is that machines have started thinking like humans, but in a few years we'll understand that they will stop thinking like humans. It will be very scary, but I think a lot of the value will come from there over time.

Daniel Cohen (Moderator): And will we then learn how to think in different logical patterns?  

Mark Bertolini: Humans will never be able to analyze the level of data that these new machines will. So how do we create the connection that allows us to learn? Because our brains are too primitive to do what they ultimately will do. So, when you talk about neural networks, from the human standpoint, we're primitive. They're going to be far more advanced, less emotional.

Daniel Cohen (Moderator): How do you think AI will help patient engagement and members getting in and staying in care? 

Mark Bertolini: I think it'll be very powerful, in what I call the next best thing to do for a patient. So, here's where I am at this moment in time, and here's the next most important thing I can do to improve my quality of life. And that's going to be patient-determined. So, quality of life is determined by me, not by the front cover of men's fitness magazines. You know, what is my version of health? Because most people view their health as a barrier to the life they want to lead, not as some sort of condition. So, if I can solve the limitations in my life around health by taking the next best step, then that's where these machines will help.

Daniel Cohen (Moderator): I hope everybody who's listened to us enjoyed it, and I hope most of all our panelists enjoyed it. I've definitely learned a decent amount. Thank you everybody and I hope to talk to you soon.  
