Soundbites

Newsday: AI Regulations and the Need for Cybersecurity with David Ting

June 26, 2023: David Ting, CTO and Founder of Tausight, joins Bill for the news. How does the EU AI Act aim to restrict and regulate AI use, and what are your thoughts on its limitations? Should AI models be required to disclose every piece of copyrighted material used to train them, and what are the implications of such a requirement? Can AI models unlearn biases and adjust their weighting based on different sources of knowledge? Is the fear surrounding AI similar to past technological advancements, such as nuclear energy or computers, and should regulations be driven by a fear of the unknown or by embracing the potential benefits? What are the potential consequences of stifling AI development and imposing excessive restrictions on its use? How can AI be effectively leveraged in the healthcare industry, and what are the implications for improving operational efficiency, disease management, and patient engagement?

Key Points:

  • EU AI Act
  • Paranoia and misconceptions about AI
  • Control and regulations on AI use
  • Statistical nature of AI and its limitations
  • Ransomware attacks
  • Holistic care
  • Healthcare industry and AI applications

News Articles:

 

Read on thisweekhealth.com.

Video Transcript:

This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

Today on This Week Health.

There is no putting this genie back in the bottle. We're gonna use AI. The fundamentals of it are still statistics. It is the manner in which we can leverage the statistical properties of the knowledge to our advantage. (Intro)

Welcome to Newsday, a This Week Health Newsroom show. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. For five years we've been making podcasts that amplify great thinking to propel healthcare forward.

Special thanks to our Newsday show partners, and we have a lot of 'em this year, which I am really excited about: Cedars-Sinai Accelerator, Clearsense, CrowdStrike, Digital Scientists, Optimum Healthcare IT, Pure Storage, Suretest, Tausight, Lumeon, and VMware. We appreciate them investing in our mission to develop the next generation of health leaders.

Now onto the show.

(Main) All right, it's Newsday, and today we are joined by David Ting, CTO and founder of Tausight. David, welcome back to the show.

Thank you, Bill. It's a pleasure to be here

again. And we were just laughing and commenting that we started our conversation prior to hitting the record button and had some really interesting dialogue around AI.

We've got a couple great stories. We've got health systems' top 10 AI priorities. We've got an Andreessen Horowitz piece; it's too long of a story to cover, but we'll hit some tidbits on it. Outside of AI, I do want to get to the Illinois hospital that links its closure to a ransomware attack.

I think you live in that security space. I wanna talk about the vulnerabilities that you're seeing in the world of security. But let's start with the big news yesterday, so people will know when we're recording. This was the EU AI Act. And I did a today show on it this morning. And my rundown was similar to yours in that I looked at it and I said, wow, these things that they're restricting AI for, they absolutely should restrict AI for.

Did you see some of those things? I'll pull it up. Here it is: AI systems using subliminal, manipulative, and deceptive techniques to distort behavior. I'm like, yeah, you absolutely should limit it. And they had a pretty good list of things: things that do social scoring, things that collect sensitive attributes and characteristics, things that exploit vulnerabilities of specific groups. So this outright said AI cannot be used as a tool against humanity.

Well, I mean, what are your first thoughts on the EU AI Act as you read it?

I think there's a lot of paranoia around AI. AI is not this genie that has intelligence. It's leveraging statistics. It's leveraging data that is aggregated through some means, the data that we feed it.

And I think the Horowitz paper was very good in saying it's really, we feed it, we power it up, we write the code that goes into it. It's not some alien life form that's creating these decisions. We control what goes into it. We also should be able to control how we use it. It's great to say that we wanna put constraints on it.

Statistics has been used to analyze data forever. I mean, this goes back to statistical analysis that people have done in social studies since the sixties and seventies. This is just now on a much larger scale, and we think it's almost like we believe it's sentient, which it isn't.

Yeah, it is. And the Horowitz article really drives that home. It's not thinking, it's not creating, and those kinds of things. However, the EU Act is really about us using it incorrectly.

Correct. That, I believe, is appropriate. I think we need to control how it's being used. No different than how a

retail website uses the statistics of what people buy as a group to tell you and focus your ads to you. And so are we concerned that we might be targeting different ads to you now because it's got, quote, AI? Now, there are definitely things that you should be trying to constrain in terms of protecting people's rights and their privacy.

I think it goes back to privacy. I don't want my preferences to be known. I should have the right to say, hey, keep me out of it and don't use my data. Yeah. So let's go. You've trained AI models, and I'm curious how this is gonna work out, cuz they specifically talk about ChatGPT.

The one, I'm not sure what they're called in the EU, congressperson or whatever. I heard one person talking and they said anything generated by AI, we want it to be marked as generated by AI. So you think about all the stuff that ChatGPT is generating right now, they wanna make sure that it has a mark on it of some kind that says generated by AI, like a watermark on the document.

Yeah, so if Grammarly revised my English in my documents, should I mark it checked by Grammarly? I mean, I think we're carrying this way too far in terms of saying, did AI refine my document? There are all kinds of grammar rules that can be applied. Does a spell checker need to say, hey, this document was verified by a spell checker?

I think this is an overreaction, personally.

Right. So these are the two questions I want to ask you. Let's start with the first one, which is ChatGPT. They're saying essentially that it can't use copyrighted material, and this has been a hot topic, and it has to provide transparency about what has been used to train it.

And I actually saw Sam Altman saying, hey, look, we want to comply, but essentially we might have to develop a different product for the EU at this point.

I believe that's gonna happen if you are going to make those constraints, in terms of saying, just because I trained it using copyrighted data, does that mean when I studied my engineering

textbooks, I used a copyright? Do I have to now declare that I learned my calculus from this book? And this knowledge is different than, did I use specific material that was copyrighted? I mean, I think there's a fine line in terms of having to disclose every piece of copyrighted material that we as humans have leveraged in building our knowledge base.

That’s my perspective anyway.

Can an AI model unlearn something, or do you have to create a new model? You have to create a new model and untrain the biases that were built into it based on the documents. But the idea of training is to have the statistics generated over a very large collection, so you really don't know where the weightings are.

So you may have one source of knowledge that differs from the other ones, and you don't know whether its bias is going to be weighted more or less.

Well, let me ask you this. Let's say you were president of the United States and you're seeing this, cuz this will inform things; the EU will inform. China has said, hey, we're looking at these kinds of regulations.

The US has said, we're looking at these regulations, and this will be the first body that puts out something. So it'll inform things moving forward. I stated this morning that I think what we will see from China and the US will be much less restrictive. I think so, because the potential is so great and the two countries are looking at it going, look, we can't fall behind in this.

This has to be something where we're pushing the envelope, whatever that looks like. And so I think it will be a lot less restrictive from those two countries. I mean, what are your thoughts on that? Oh, I totally believe that. I think the genie's already out of the bottle. I think there are going to be more hearings about the power of AI, and there are going to be more efforts to constrain it.

But at the end of the day, it's a mathematical technology that we just happen to have now uncorked and have been able to leverage to deal with accumulated knowledge from different sources of data. And you cannot put that back in the bottle, or you'll fall behind. It's one of the most powerful things we've invented: having a statistical representation for knowledge.

It's still not sentient. It doesn't have its own intelligence. It's making statistical decisions, and you can ask it the same question twice, and it's gonna come out with two different sets of answers at different times. I mean, that's AI. It is deterministic, but yet it's stochastic at the same time.

Yeah. I sort of like that, and I have done that, by the way. I've asked essentially the same question and gotten different results back, and because I use it as a brainstorming tool, that's helpful.

It is. If you had the ability to have all the content in the Library of Congress at your fingertips, and you can ask it questions, and it can find the answers for you and assimilate multiple

data sources, that's a very powerful tool to unleash for the general public. It accelerates our learning. It accelerates having the ability to have all that information, all that knowledge, be searchable and accessible and presented to you in a format that you can readily assimilate.

That's the power of AI. We're turning it into this monster beast that has horrible consequences. You could say that about almost anything we do.

Well, and that's where I was gonna go. I mean, if you think about nuclear advancements, knowing how it's been used, would you have put massive regulations around it back in the day and said, no, because we know it's going to be used to build a bomb?

Knowing that there's all sorts of nuclear medicine, there's all sorts of nuclear energy, and there's all sorts of advancements that have come out of it, would we have done it differently? The computer is another example. People will say, hey, look, a lot of job destruction and that kind of stuff, and it changed a lot of things.

And now people are addicted to their phones and whatnot. Maybe we should have put regulations around the computer back in the day, and it would've slowed things down. I feel like it is that same conversation. It's really out of a fear of the unknown. Like, what is this gonna lead to?

So, in the late seventies, early eighties, I was part of the CAD/CAM generation, where we replaced drafting tables with computer-aided design and users designed on a computer. The model for what you're building, which you used to sketch out on blueprints and on drafting tables, was now a mathematical model in a computer.

And all the draftsmen would say, you're replacing my job, and the engineers will never be able to do this job on their own with a computer and a mouse. Fast forward: we wouldn't be anywhere close to what we can design today had we not converted that physical model into a mathematical model that could be easily manipulated, simulated, used for all kinds of cost calculations, all inside the computer.

And yet nobody basically said, oh, you're gonna be able to do horrible things with this technology, even as it simplified the design process. We're watching the benefits of all these complex machines that are being built for medicine, for automation of dangerous jobs. That's all the byproduct.

And if we had bottled that up in those days because we were worried about the shift in the types of jobs, then we would be really handicapped, I think, in our ability as a society to advance.

We'll get back to our show in just a minute. Ever wonder how technology can reshape the patient experience? Join us for our next live webinar, The Patient Experience: A Technology Perspective, on July 6th at 1:00 PM Eastern Time. We're bringing together expert speakers to dive into the intersection of technology and healthcare.

We're gonna explore topics on digital health tools and the impact of AI, blockchain, and other things around this. Whether you're a CIO or part of a healthcare IT team, we think you will gain practical insights from this discussion. Don't miss out on this conversation. Register today at thisweekhealth.com.

We hope to see you there. Now, back to our show.


I do want to get to this top 10 AI priorities, but I noticed that you worked at Kodak, and of course we have the overused example of Kodak and the digital camera being presented to the executives and the executives saying, put that back in the box.

Put that back in the drawer. I don't wanna see that ever again. And the same thing could be true here. Like, if I were talking to Congress today, I would say, look, this happened before, it happened with Kodak, and so if you're going to restrict the US, it's not like AI is not gonna develop everywhere else.

It absolutely is. The question is, do you want it to develop here, where we can put some ethics around it, put some constraints around it as it develops and see where it goes, as opposed to saying, no, we stopped it. We stopped this development in 2023, thinking that the world wasn't going to advance.

The world's gonna advance. It's gonna advance through software development. This is mathematics applied to the computing power that we have today: the memory, the distributed computing, the access to digital knowledge. There is no putting this genie back in the bottle. We're gonna use AI, we're gonna leverage it. People say, oh, AI is this whole new branch.

I go, yes, but the fundamentals of it are still statistics. It is the manner in which we can leverage the statistical properties of the knowledge to our advantage. It's not a sentient being that we've trained, not some brain with tons of neurons connected to it that is sitting in some lab.

Well, the other thing that I found interesting is, and I don't know if this ended up in the law, again, I haven't read the law, I just read a bunch of stories about what was passed, but I saw someone recommending that we essentially give a carbon footprint to the answer that was generated from the algorithm.

And I thought, well, that's interesting, because if you're gonna do it there, you've gotta do it everywhere. Right? And that would be an interesting calculation. I do wanna get to this health systems' top 10 AI priorities for the next two years, because it is an interesting list.

So number one is operational optimization, then health disease management, predictions (this is from Becker's), pop health management, value-based care, patient and member engagement, clinical research, imaging. And then some categories that are pretty low on the list. What jumps out at you from this list?

It's clearly the lack of the use of AI for cybersecurity and securing the infrastructure in healthcare. That was the first thing that jumped out. The choice of applying AI to operational efficiency is perfectly valid. That's basically applied to, gee, how can I make things happen more efficiently in my organization?

And the AI uses that we've talked about with some of our customers and prospects have been around, gee, as you accumulate more data around IT ops, how can you leverage that information, and the longitudinal health records for patients, to improve efficiency, to improve the outcomes?

So it's a combination of improving IT efficiency, or where you're spending your dollars, where your doctors are spending their time. So one of the simplest metrics used to be, what's the wait time for a doctor on a machine? Do they wait for machines because they aren't available?

It's the most common question I used to ask CIOs: how do you know where to put the next machine? Do you put it in the hallway? Do you put it in the lounge? Do you put it in the patient's room? And his answer was, based on the number of complaints I get, I just pop in one more computer to give access to the doctor who complains.

And yet he said, when I walk down the hallway, I find tons of empty, unused computers that we put in. So having the statistics allows you to improve the operational efficiency. That's a classic thing that AI can do. Why? Because you now have statistical data on something as simple as that.

In another hospital, they said patient engagement would be another place where we can improve the survey scores. And the survey scores were dictated by how long it took for the patient to get admitted. They spent a lot of dollars doing research and found out it was a delay in finding the wheelchairs to wheel the patient in

from admissions to their room, because the wheelchairs are always left distributed through the hospital. So they ended up hiring somebody to collect the wheelchairs, and all of a sudden the survey scores went up. Now, those are the kinds of things where you could use statistics and AI to figure out what you can optimize for.

It's an optimization problem, and AI is great for that, once you have the data.

Yeah. Oh, well, it's interesting. I think security obviously is a great omission from this, because this was probably done from an executive team perspective. It was a KLAS research survey; UPMC and KLAS did it together.

And it was 49 providers and payers, published on June 8th. And I think the other omission for me is, for lack of a better category, clinician experience, cognitive load on the clinicians. It's just not in here. And I think that's one of the areas where it's going to excel: taking on the mundane through computer vision, through AI models, and constantly improving and learning.

And it's gonna be able to take a camera in a room and tell us about bed turns, tell us about fall risk, and, by the way, document that stuff that the clinician used to have to do. I think there's a significant opportunity. Here's the thing I wanna do, cuz you brought it up

with the wheelchair example. I think there's a whole study of health systems from an efficiency standpoint where you just take the patient journey from one end to the other. And we have these journey maps, and I would love to see how much data we actually have on that journey

to feed into a model and have that model come back out at us. And I think the first response from the model would be, are you kidding me? Like, we could fix 10 of these things tomorrow. But I think we're so busy being tactical and solving that one problem that we never look holistically at the patient journey from beginning to end.

Oh my gosh, it took them eight additional hours to get discharged. Or their transition to a SNF or whatever was an inefficient process. We don't look at the whole flow of that journey, not only the patient but also the clinician experience across it, and fix it, like holistically fix it.

So that's a really interesting perspective. And once upon a time I actually raised that. I presented the scenario where, if healthcare were a manufacturing process and the patient was your raw material, your goal would be to optimize that journey through all the stages of care to get the patient out in a much better state while optimizing their experience and lowering the cost. So it becomes a manufacturing process, and you can apply all the things that you normally do to manufacturing processes to optimize. But that requires you to have a lot

more aggregated data than you could probably analyze within the healthcare system. It covers the clinical side, the operational side, the patient journey, if you will. It has to be mapped out, and then all the data points that feed into it will have to be modeled and then optimized around some dimension.

I think that's gonna happen, because at the end of the journey, you really want the patient to have had a really good experience and to be in a better state. Now, I have heard in talking to several academic institutes that they're trying to take the structured and unstructured notes and patient records and feed them into ChatGPT to come out with an

up-to-date summary with all the information that can be accumulated, not just the data that's in your EMR, but possibly your insurance claims, the notes that your doctors are writing outside of the EMR, their doctor correspondence, feeding all that into ChatGPT to come out with a normalized summary that is up to date, and they find that it is much more accurate.

Several of the research places I've talked to said, the moment we can do that and feed real-time unstructured content about that patient into this ChatGPT model, the summary comes out looking much more accurate, and patient care gets improved because everybody will see the same information in an up-to-date manner.

I think that's where things like generative AI work: you can feed in, hey, this is the reality, here's our suggestion of what could be done based on additional input, as opposed to treating it as a source of truth.

Yeah. Not only that, coding. Right? You feed all that information in. I would trust generative AI to do coding more than I would trust.

Absolutely.

That was the next thing, which is that coding is such an artificial operation, where you count on the user and then the coders to basically summarize the scenario and say, here's what we treated that patient for, here's the code that's appropriate for it. This is wholly an area that generative AI could help with.

We had one of our 229 CIO events, and one of the funnier moments was a doctor who shared, when I said, tell me a little bit about your hospital, he said, we actually cover Gettysburg, and we have a large number of people who dress up in wool outfits in the middle of July and go out and do reenactments.

And he goes, just for the record, the ICD-10 code for a bayonet wound is, and he gave the ICD-10 code. I'm like, how many places is that used? He goes, well, it covers more than just a bayonet wound. But he goes, if somebody does come in with a bayonet wound, we know the ICD-10 code.

Wow, that's pretty funny. That is. They've already pre-cached the list of codes appropriate for that. Let me give you this last one. You live in the world of security. I want to hear what's going on in this Illinois hospital. So a ransomware attack hit SMP Health in 2021 and halted the hospital's ability to submit claims to Medicaid for months,

sending it into a financial spiral, and the hospital is citing that ransomware attack as one of the reasons it's closing. And I don't recall another health system actually citing ransomware. Yes.

In their closure. I think there was another one cited in that same article that talked about the fact that they went out of business due to ransomware locking them out from being able to accept new patients and/or to file claims.

I mean, that is just awful. But a lot of these ransomware attacks have multiple prongs, which is: not only do I lock up your system so you can't do anything with it, I also will ransom your ePHI so that you cannot access it as an organization. And then thirdly, I will extort you with a threat that I will release it to the public if you don't pay me again.

So there's this triple, and then the fourth: sometimes they also will go after the patients themselves using the information that they stole. So the latest generation of ransomware is very sophisticated. We read some of the analysis on some of the newer ones; Black Basta, for example, is an incredibly well-engineered attack that gets in through social engineering. Once they're in, the attacker basically loiters and does reconnaissance for several days to find the data that they want to steal.

Mostly the PHI, the personal data. They steal it, exfiltrate it out to some location, and then they launch the full ransomware attack to lock up your system, lock up your data, and then basically extort you to, first of all, unlock your system and unlock your data, and then basically say, now that we have your data, we will release it if you don't pay us again.

So you're really at their mercy when you have one of these data extortion type ransomware attacks. It's not just the old-fashioned, I encrypted all your system files, you have to go back to a backup to restore your system. That is really what's hitting healthcare these days. And the attackers have done this in a very well-planned way.

I mean, they spend time on your machines. They live off the land. They import tools over their back door that are whitelisted on most systems. What you really have to do, and what we haven't stopped telling people they need to do, and this is also the advice from a lot of legal firms that do incident response:

Get rid of as much PHI data as you can that you're not using on an active basis from your machines, so that when you do get compromised, you don't have this treasure trove of data that they'll just steal and then hold you ransom for. So the analogy I always use is, if your wife has a huge amount of jewelry that she owns but only wears a very small amount, put the rest of it in a safety deposit box and keep the minimum amount in your house, so that when you do get compromised and have an attacker, they only have a very small amount to take away.

The other thing is, make sure your individual clinicians aren't hoarding tons and tons of PHI data in their email accounts or under their personal accounts or on their laptops that they keep walking around with.

Notorious hoarders? Absolutely.

Oh, it is. Just some of the data that we've seen as we've done our analysis on different machines, different organizations: there's a lot of data that should be taken offline and then air-gapped so that it's available,

but it's not gonna be available to an attacker that comes in the front door, phishes you for a credential, and then sits on your network loitering, ready to steal your data. Just reduce that surface area.

Yeah, I tell the story that I challenged Deloitte, which was our internal auditor. I challenged them once my team felt really good about the security posture.

I said, all right, have at it. If you guys are able to get in, I'll pay your fee. If you're not able to get in, you're essentially doing this service for free. And they got in within 48 hours, as you would imagine; they have former NSA people and whatnot. Yeah. And you would think, oh, they did some sophisticated thing.

They didn't. They created a website that looked an awful lot like our website. Right. They sent an email out, and, I think they limited it to one of our hospitals, they ended up with 25 credentials. And so when people think, oh, it's not that big a deal that these hoards of data exist within the environment, I like to tell that story to say, you know what, it's really not that hard for them to get credentials.

It really isn't. And the thing that Deloitte told me after that, I mean, everybody knows this now, is operate as if they're already on your network. Exactly. And if they're already on your network, you shouldn't have stuff lying around.

That's the surprise, actually.

A lot of CISOs and CIOs will say, oh, I didn't realize we had this much data sitting around. I said, well, you keep buying servers. There's gotta be a reason you're buying more storage servers. There's something on them. Take them offline or archive them. And then you look at orphan files, where the owners are no longer there; there's a good percentage of that.

You have duplicate files. You have files that haven't been accessed in years. And what we look at is, gee, if you compare the time of creation with the last time of access in a healthcare setting on large servers, what's the average window? It's 30 days. After 30 days, you're most likely never going to use that file.

So why keep it on your server? Why keep it on spinning platters waiting for somebody to take it? Right, and they're not hard files to find. I'm sorry, they are hard files to find. But if you have an attacker that's in there loitering for four days from a remote location, laterally moving from machine to machine, looking for more credentials, looking for more servers, you're in a heap of trouble.
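That 30-day rule is essentially a timestamp scan: compare each file's creation and last-access times against a threshold and flag anything idle longer than the window. Below is a minimal sketch of that kind of scan in Python; the share path, threshold, and function name are illustrative assumptions, not Tausight's actual tooling, and last-access times are only as trustworthy as the filesystem that records them.

    import time
    from pathlib import Path

    STALE_AFTER_DAYS = 30  # the "30-day window" described in the conversation

    def find_stale_files(root, stale_after_days=STALE_AFTER_DAYS):
        """Yield (path, days_idle) for files whose last access is older than the window."""
        now = time.time()
        cutoff = now - stale_after_days * 86400
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                stat = path.stat()
            except OSError:
                continue  # skip files we cannot read instead of aborting the scan
            if stat.st_atime < cutoff:  # atime must be maintained by the filesystem
                yield path, round((now - stat.st_atime) / 86400)

    if __name__ == "__main__":
        # Hypothetical file share; report candidates for archiving or air-gapped storage.
        for path, days_idle in find_stale_files("/mnt/file-share"):
            print(f"{path}\tlast accessed {days_idle} days ago")

A scan like this only surfaces candidates; the point stands that the flagged data should then be archived or air-gapped rather than left sitting on the share.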

Yeah, it doesn't surprise me. I mean, I feel really bad for those hospitals that fell all the way into bankruptcy over it. Attacks are gonna happen, and it's unfortunate.

Well, I’m looking at a picture of the hospital that went bankrupt. I can guarantee you it wasn’t just the ransomware event.

It's pretty small. It doesn't look like it has a lot of beds, and we know that size of hospital is struggling financially, even without ransomware attacks, across the board. They're just so much on the precipice that a ransomware attack took them over the edge, I'm sure. David, you and I could talk for another 45 minutes, and I'm already looking forward to

our next conversation, but we're out of time for this show. Hey, thanks again for coming on the show. Really appreciate it. Absolutely.

It was a pleasure, and there's a lot more to say on these topics, but thank you for having me. Oh, well, we'll have you back in six weeks and we'll talk some more about it.

Thank you. Thanks.

πŸ“ And that is the news. If I were a CIO today, I think what I would do is I’d have every team member listening to a show just like this one, and trying to have conversations with them after the show about what they’ve learned.

and what we can apply to our health system. If you wanna support This Week Health, one of the ways you can do that is to recommend our channels to a peer or to one of your staff members. We have two channels: This Week Health Newsroom and This Week Health Conference. You can check them out anywhere you listen to podcasts, which is a lot of places: Apple, Google, Overcast, Spotify, you name it, you can find it there. You can also find us on. And of course, you can go to our website, thisweekhealth.com, and we want to thank our Newsday partners again, a lot of 'em, and we appreciate their participation in this show.

Cedars-Sinai Accelerator, Clearsense, CrowdStrike, Digital Scientists, Optimum Healthcare IT, Pure Storage, Suretest, Tausight, Lumeon, and VMware, who have invested in our mission to develop the next generation of health leaders. Thanks for listening. That's all for now.