Newsday: The Progression and Potential of AI with David Ting

May 13, 2024: David Ting, Founder and CTO at Tausight joins Bill for the news. Ting dives into the fascinating world of AI, cybersecurity, and healthcare through an engaging discussion that leaves audiences pondering: what untapped potential lies within our vast reservoirs of healthcare data? How can AI catalyze greater advancements in cybersecurity and make our digital world safer? As we navigate the intricate landscape of AI-driven technology in healthcare, what is the future we’re heading toward? This conversation goes beyond surface-level explanations and delves into the intricacies of AI model building, security concerns, and technological shifts altering the workforce, all the while keeping the focus on improving healthcare outcomes.

Key Points:

  • Role of AI in Cybersecurity
  • AI Influence on Job Market
  • Edge Computing Prospects
  • Improving AI Models

News articles:

Read on thisweekhealth.com.


Video Transcript:

This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

Today on Newsday.

The Ultra-class processor: roughly 10 teraflops, using next to no juice. No angry electrons there. It’s just purpose-built to consume complex models and vectorize the hell out of them so they can run many times faster than we can on a general-purpose machine. That’s just the tip of the iceberg.

My name is Bill Russell. I’m a former CIO for a 16-hospital system and creator of This Week Health, where we are dedicated to transforming healthcare, one connection at a time. Newsday discusses the breaking news in healthcare with industry experts, and we want to give a big thanks to our Newsday partners: Clearsense, SureTest, Tausight, Order, Healthlink Advisors, Cedars-Sinai, Rackspace, CrowdStrike, and Fortified Health.

Now, let’s jump right in.

All right. It is Newsday, and I’m joined by David Ting, chief technology officer and technical guru at Tausight. Do you guys consider yourself a security firm?

We see ourselves as an AI company whose technology can be leveraged in the cybersecurity space.

You guys are identifying the troves of information that reside all over healthcare, which people don’t know exist and don’t even know are out there, so they’re hard to protect.

Correct. It goes back to the HIPAA Security Rule, which says, hey, you must do a complete and thorough assessment of all your risks and vulnerabilities to ePHI. I think I wake up in the morning and recite that.

It’s so fundamental to what we need to do. And as we were talking about, AI is such a critical piece of taking advantage of the newer technologies to solve what used to be labor-intensive problems. If you can define the scope of the problem, and you have a language model you can run effectively and accurately, in a calibrated manner so that you know the error bounds and the performance bounds, then you can apply it at the edge.

That’s what we do: take advantage of AI to find your data. I think you’ll get to the point where risks, threats, and vulnerabilities are all detected at the edge using AI. We’re going to have enough entropy in those models to let us find the things we don’t know about.

As opposed to: gee, I have all these heuristic rules and you can program them in, which is how we do a lot of pattern recognition today. And I apologize, because I’m getting off on a tangent about the differences in AI approaches.

Let me ask you this, because people may not know: you were around in the early days of Imprivata. You’re one of the architects of that platform. I’m going to ask you a maybe unfair question, because I watched this video on social media that cracked me up, about the use of AI in products.

It had the bottom bun of a burger, and they put some sauce on it; that was the AI. Then they put the burger patty down and put some more sauce on it. Then they put the lettuce on, more sauce. They put the bun on, more sauce. They put the fries in, more sauce. They closed the lid and put sauce on the top, and they said, look, it’s AI.

Anyway, I think everything can be AI.

I know, but that’s what’s going on in our industry. You’re one of the few people I trust on this topic. In fact, we’re going to talk AI and cybersecurity today. But if you were building out the Imprivata product back in the day and all of these AI models were available, how different would the architecture be? How differently would you approach that problem set?

So it’s interesting. One of the key capabilities the Imprivata agent had was being able to recognize different screens, to know, hey, this is a logon screen, this is a password-change screen. You could program the rules as a decision tree if you wanted, but it doesn’t give you robustness.

It doesn’t give you the latitude to handle an application that works just a little differently than you predefined in your code. Those used to be called heuristic programs: you took advantage of what you knew about the problem, but if you wandered outside that space, your program probably wouldn’t work.

Your decision logic would be wrong. I actually embedded a fair amount of fuzzy reasoning into the recognition logic, even at Imprivata, to give it more flexibility to accommodate changes. And that’s where AI is really good: it can hold a lot of knowledge that you didn’t program in, because you’re getting away from a state machine that only goes through the states you define. AI can have multiple branches and vastly more states it can reach, depending on the input and on the randomness you built into the model. That’s where AI’s flexibility comes from: it can learn things, and it has huge latitude in how it responds.
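To make that contrast concrete, here is a minimal sketch of fuzzy screen recognition in the spirit of what Ting describes. The window titles, labels, and threshold are invented for illustration; this is not Imprivata’s actual logic.

```python
from difflib import SequenceMatcher

# Screen "prototypes" an agent might be trained to recognize.
KNOWN_SCREENS = {
    "logon": ["Log On to Windows", "Sign in to your account"],
    "password_change": ["Change Password", "Update your password"],
}

def classify_screen(window_title: str, threshold: float = 0.6) -> str:
    """Fuzzy-match a window title against known screen prototypes.

    An exact-match rule breaks the moment a vendor reworks a dialog;
    a similarity score degrades gracefully instead.
    """
    best_label, best_score = "unknown", 0.0
    for label, prototypes in KNOWN_SCREENS.items():
        for proto in prototypes:
            score = SequenceMatcher(None, window_title.lower(), proto.lower()).ratio()
            if score > best_score:
                best_label, best_score = label, score
    return best_label if best_score >= threshold else "unknown"

# A title the rules never anticipated still lands on the right branch.
print(classify_screen("Sign in to your work account"))  # "logon"
```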

So we use AI, for example, to recognize sensitive data in unstructured content. It doesn’t require us to say, hey, there’s a keyword here, there’s another keyword here. That would be heuristic programming: a combination of these three words must mean this; a combination of those four words must mean that.

You’ll never get to all the rules that way; you’ll never get the variability you need. In our model, for example, we have 40 million parameters that can be set to recognize this stuff. You would never, in years of programming, get that much logic into your decision tree to recognize it accurately. And that’s condensed down from a model that’s way larger.

And that’s what these large models can do. You can learn; you can have these parameters; you can have the architecture of the neural net, the weights associated with it, the way they’re connected, the randomness that occurs in it. It all gives you more entropy, more knowledge, if you will, and more latitude in responding to noise in the data, where the data doesn’t look exactly like what you trained on. But guess what? If there’s some statistical similarity, it will find it. That’s why AI is so powerful.
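The difference between the two approaches, reduced to a toy: a brittle keyword rule next to a small learned classifier. The training data here is invented, and a bag-of-words model is a stand-in; Tausight’s production model is far larger, per the 40-million-parameter figure above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Heuristic approach: brittle, hand-enumerated keyword rules.
def heuristic_is_phi(text: str) -> bool:
    keywords = {"mrn", "diagnosis", "date of birth"}
    return sum(k in text.lower() for k in keywords) >= 2

# Learned approach: a tiny stand-in trained on invented examples.
train_texts = [
    "Patient MRN 10482, diagnosis: type 2 diabetes",
    "DOB 04/12/1961, discharge summary attached",
    "Q3 facilities budget review and vendor invoices",
    "Cafeteria menu for the week of June 3rd",
]
train_labels = [1, 1, 0, 0]  # 1 = contains PHI-like content

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# A phrasing the keyword rule never enumerated: the rule misses it,
# while the classifier scores it by statistical similarity instead.
note = "Pt seen 4/12, hx of hypertension, med list updated"
print(heuristic_is_phi(note))             # False: fewer than two keywords hit
print(model.predict_proba([note])[0][1])  # learned probability of PHI
```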

If you were sitting down with a department chair at a college or university today, and they were talking about training the future programmers of our world, how would you change that curriculum?

Do they still need to understand the basics? People laugh at me; I’m like, I know the OSI model. They’re like, well, how important is that today? I’m like, still pretty important.

I think that’s a great question. The way we teach programming is still going to be relevant, in the sense that everything above the hardware level still matters. There’s still assembler code in there, but nobody knows what assembler looks like. There’s still low-level code: how you get from programming steps to a built program, how a process splits its work into subroutines or methods or functions. Those are good concepts.

I don’t think we need to know it at that level of detail. The logic of translating a design into executable code, we’ll be able to do that with AI within the next three or four years. Now, you go back to what we talked about, which is prompting. The first time, you can ask the system: hey, write me a piece of code that looks like this.

Even if you gave it the full specification, it probably won’t come back with exactly what you asked for. But if you hint and say, hey, I’m trying to write a multi-threaded solution where each thread does some work, the sequence of prompts will get you to a better solution. And then you break it down. It’s the same way we would design anything.

Start at the top, break it into pieces, and keep breaking it down. With those levels of hinting you give your GPT model, it’s pretty simple for it to improve what it gives you on the next iteration. That’s where we’re going to end up. So I think schools will get much better at teaching design and teaching how to ask the right questions of a system that is basically trying to learn what you want.
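As a concrete example, here is the kind of code that sequence of prompts might converge on, a minimal sketch of the multi-threaded pattern mentioned above; the work function is a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def do_work(item: int) -> int:
    # Placeholder for the per-thread unit of work.
    return item * item

def run_parallel(items: list[int], max_workers: int = 4) -> list[int]:
    """Fan the items out across a thread pool and collect the results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(do_work, i) for i in items]
        return [f.result() for f in as_completed(futures)]

if __name__ == "__main__":
    print(sorted(run_parallel(list(range(10)))))
```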

But won’t that be a basic course? I mean, they keep talking about democratizing this stuff. I think so. Yeah, this will be something like: how do you get the most out of the tools that are available to you now? They’re already starting to show up.

Copilot, for example, is found in almost all the Microsoft Visual Studio products. There are products out there that will glom onto your development environment to help you write test code, to help you summarize, to help you comment. This is already starting to happen. The next leap comes when we see enough code samples to say, hey, I’ve already seen all the code that people are writing.

Let’s say, in Git: I can help you write better code based on the fact that I’ve reviewed other people’s code already. It’s the same way we teach people how to improve their programs: look at other people’s software, find out what you like and what you don’t like. It’s the same thing AI is going to be able to do, but on a much larger scale, because there are patterns in software design.

Yeah, you know what’s interesting to me? I’ve watched my own team democratize this stuff. My chief operating officer is using Airtable, and she’s creating this pretty sophisticated thing, because we’re tired of using Google Sheets.

She wants to move it over there. So today was the first time I looked at it, and I’m sorry I’m sharing this story, I know she’ll listen to this, but I looked at it and thought: that’s bad database design. That is going to lead you into a blind alley at some point.

You need to design this differently, or it’s going to become a problem down the road. So there’s that knowledge that I have. We’re democratizing this stuff, but we still need to understand the core principles and fundamentals in order to take these things into production, or we’re going to end up with a lot of problems.

So I think where AI can help is with those smaller, discrete steps, once you break them down and say, hey, I need this piece done, I need that piece done. For now, at least, until it learns the patterns for the larger problems you’re trying to solve.

So you think that’s where we are today, but over the next couple of years the AI is just going to continue to improve, to the point where that democratization becomes possible?

Yes. The combination of the feedback that humans provide and the prompting allows it to learn from the things it didn’t do well on. You’re punishing it and you’re rewarding it, and so the AI model is going to improve based on crowdsourced feedback. How you ask the questions, and whether you like the answer or not, tells it: hey, I need to improve here, I need to improve there.

And so over time, it’s going to learn the essence of what we know from our years of experience. You can look at a design and go, hey, it’s deficient here. I can look at a system architecture, or look through a body of code, and go, this is bad, this is good, this can improve. If I can give that level of input to a software system, then over time the statistics of my input, and of other people’s input as they read code, all get bundled together, because there’s statistical correlation between all of them.

And boom, it will have that pattern built into its model: this makes for good design, this is not so good. It’s the same thing I do every time I read through people’s code; I use my own patterns to analyze it and go, here’s what I would see as a problem.

The same thing will happen with AI-based systems. And they will get better.
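A toy illustration of that crowdsourced signal: aggregating reviewers’ good/bad ratings per code pattern until a consensus emerges. The reviewers, patterns, and votes are all invented for the example.

```python
from collections import defaultdict

# (reviewer, code_pattern, rating): +1 = good design, -1 = a problem.
feedback = [
    ("david", "god_object", -1), ("ana", "god_object", -1),
    ("wei", "god_object", -1), ("david", "small_functions", +1),
    ("ana", "small_functions", +1), ("wei", "magic_numbers", -1),
]

scores: dict[str, list[int]] = defaultdict(list)
for _reviewer, pattern, rating in feedback:
    scores[pattern].append(rating)

# Where independent reviewers correlate, the average converges on a
# verdict a model could internalize as "good" vs. "needs improvement".
for pattern, ratings in sorted(scores.items()):
    print(f"{pattern}: {sum(ratings) / len(ratings):+.2f}")
```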

Sam Altman, OpenAI’s CEO, his new shtick is to come out on stage and say: I apologize for ChatGPT 4, this is the worst AI model you will ever use in your life. And he’s probably right.

It improves every day. So that’s his new shtick. He’s not quite Steve Jobs yet, but between NVIDIA and OpenAI, I do think their keynotes will be more watched than the Apple keynote next year. I think so. He’s training to be the next Steve Jobs in his presentations.

Hi, I’m Drex DeFord, and I am thrilled to host Unhack the News on This Week Health. Alongside our established Newsday show, we’re diving deep into cybersecurity and healthcare. So whether it’s Unhack the News with me or Newsday with Bill Russell, you’ll find the latest insights and commentary in This Week Health’s newsroom.

Expand your knowledge, stay ahead of industry trends, and most of all, don’t miss out. Join us every Monday on This Week Health for all the latest stories.

One of the things we were talking about earlier is AI at the edge, and he talked about that as well. In a talk I saw just this week, he was saying, look, we’re not going to be able to just take massive amounts of information and keep building these massive models. He was talking about specialization.

He was talking about agents, and he was talking about the edge. He pointed at the phone and said, that’s archaic. What we’re looking at today with the phone will be archaic in just a couple of years; we will interact with our technology very differently.

Being an electrical engineer, I’ve built hardware, and I always follow the latest advances in chips.

The advances we’re making today in next-gen processing chips, not only the GPU advances that NVIDIA and, sorry, Intel and AMD have made; all these vendors are now building specialized chips that will allow you to take models and run them at the edge. Same as in the early days.

So are we going to put the models on the chips, or are the chips just going to be able to run them?

They’re going to be able to run the models on those chips effectively. We work closely with the Intel group, and for the NPU they came out with, there’s a paper out there that I just read.

It’s 9.4 teraflops of computing power that they can put in this one little chip, added onto the rest of the silicon for, what do they call it, the Ultra-class processor. Roughly 10 teraflops, using next to no juice. No angry electrons there. It’s just purpose-built to consume complex models and vectorize the hell out of them so they can run many times faster than we can on a general-purpose machine. That’s just the tip of the iceberg. This is where we’re going to end up: smart machines at the edge that can do language translation, content recognition, generative AI for creative work, all the stuff we’re doing today.

And I was telling you about the MIT Media Lab event that was co-sponsored by the Intel folks, where two professors got up and said exactly the same thing: we’ll never have enough computing power in the cloud to process all of the requests for AI. You have to be able to take these super large language models, with their trillions of parameters, and condense them down into task-specific models that you can run at the edge. You can see a world where those models become the IP, if you will, that gets shared and updated. You take a large model and say, hey, I need a model for refining code, and boom, a much smaller model comes down, task-purposed for doing what you need to do.

It gets downloaded, runs on your NPU, runs on your AI at the edge, and helps you do the tasks you need to do. That’s where I think the architecture is going. And their bottom line to the audience in that Media Lab was: you, the next generation of students and business leaders, need to think about how distributed AI can work, because that’s where the future is going to be.
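For a sense of what “download a task-specific model and run it at the edge” can look like in practice, here is a hedged sketch using Intel’s OpenVINO runtime. The model file path is a placeholder, and whether an NPU device appears depends on your hardware and drivers.

```python
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on a Core Ultra box

# Load a condensed, task-specific model (placeholder path) and compile it
# for the NPU; falling back to CPU keeps the sketch runnable elsewhere.
model = core.read_model("distilled_task_model.xml")
device = "NPU" if "NPU" in core.available_devices else "CPU"
compiled = core.compile_model(model, device_name=device)

# One inference entirely on-device: no round trip to the cloud.
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled(input_tensor)
print(next(iter(result.values())).shape)
```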

And I thought, that’s right. Let me take you to the past. I think we’re about the same age, I’m not really sure, but do you remember the PC coming into the world? Yeah. All right, so the PC comes into the world. Because right now, what’s happening at all the health systems and everywhere else is, what’s the future of my job?

What does my job look like? People are starting to see glimpses of it, and they’re starting to feel angst. You see articles pushing back, you saw nurses actually go on strike and say, no AI, and you saw screenwriters say this needs to be part of our contract, that kind of stuff.

So let’s talk about the advent of new technologies, how they change the nature of work and eventually generate new jobs we didn’t even know were needed. So the PC comes in, and I remember the first PC at M&M Mars. I got one on my desk as an intern; they gave it to the intern because nobody else knew what to do with it.

It was like, he doesn’t have any gray hair, he’ll know what to do with it. So they gave it to me, and I did know what to do with it. I created spreadsheets, and I remember showing them the spreadsheet for the first time, and they were like, oh my gosh, that’s amazing. Isn’t that crazy? Yeah. And I remember the conversations back then were:

man, I’ve got to get one of those, I can’t believe how quickly you could do that; or, wow, you could print those out on a printer and carry that with you. I can’t believe how much stuff we used to do by hand. I talked to bookkeepers and accountants, and they said, yeah, we used to write on these sheets.

Everything we now write in a spreadsheet, they used to write out on paper. So when they first saw the spreadsheet, they immediately said, that’s going to make my life better. But that’s not necessarily happening here with AI. Some people are stepping back going, wait, that’s going to fundamentally change my job, and some people are worried it’s going to take their job.

Based on your knowledge of how different technologies have come in and changed the nature of work, what are we saying to nurses today? What are we saying to IT staff, to security staff?

I think that’s a great point. Any time you go through a digitization transition, you have the same angst.

I remember draftsmen who said, engineers are never going to translate their own designs onto paper, because we do that job. We draw the lines, we do the blueprints, if people remember blueprints. We erase the lines when they make changes. Engineers will never be able to do this; they will never enter their designs into a computer.

We saw drafting tables go out as we built CAD/CAM systems for them. We saw draftsmen who either made the transition, if they were young enough, or didn’t. Those who did became excellent at using the CAD systems. They got into it: gee, you can actually simulate the whole thing; you can simulate from the CAD design how the part is going to be machined, versus having to do all this extra work. You can do costing. All of a sudden, the knowledge and the capability to manage that information became really powerful. I think the same thing is going to happen today.

I mean, we have EMRs that represent the summation of the knowledge about what was done to a patient. The process of entering that data is still tedious, still laborious. You still write notes, you still write stuff and edit. That’s going to be simplified. That’s the first step, which is really augmentation.

And then, what’s the additional insight you can get using AI? We have decision-support systems today; those are all codified rules that are well known. I think AI technology will improve that: better summaries, more real-time summaries, more integration into the learning, and, gee, did you know about these things?

Or the ability to correlate even more sources of knowledge built into the AI model. Today, it’s all codified in rules and decisions that the engine runs. In the future, it will be an AI model that has the cardiology specialty, or whatever else you might want it to know about.

And certainly in cybersecurity, nobody can track the amount of data that comes into a system as it’s being generated.

It’s impossible. It’s real time, right?

Right. You can’t even build rules, because the environment changes so quickly. Adaptation is one of the things AI systems are good at, through either supervised or guided learning.

It can learn those patterns. It can know, gee, this is really bad. When new applications come onto the system that it didn’t see before, those are patterns it can pick up and respond to faster than, gee, we should get a report, review it, and say, hey, these are new programs.

Why are they running across our system? Who brought these things in? All these security patterns should be built into the AI. On the UI front, we should be asking more human, more intelligent questions: what are my concerns? You should have a much better level of interaction with the data you have, to find the patterns that today would require you to formulate queries in some obtuse language that demands you understand all the nuances, and then wonder whether the query worked or went wrong because you passed the wrong parameter. AI will help you do that.
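One plausible shape for that kind of edge-side detection, a sketch using an off-the-shelf anomaly detector over process-launch features; the feature set and numbers are illustrative, not Tausight’s implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# One feature vector per process launch: [hour_of_day, parent_rarity,
# binary_age_days, outbound_connections]. Values are synthetic.
baseline = np.array([
    [9, 0.01, 400, 2], [10, 0.02, 380, 1], [14, 0.01, 500, 3],
    [11, 0.03, 420, 2], [15, 0.02, 450, 1], [13, 0.01, 410, 2],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# A brand-new binary spawning at 3 a.m. with heavy egress stands out
# statistically, even though no hand-written rule anticipated it.
suspect = np.array([[3, 0.9, 1, 40]])
print(detector.predict(suspect))  # -1 marks it as anomalous
```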

All right, closing question. You and I could talk forever, and I love our conversations; we didn’t even get to the news, since we focused on AI. We’ll close it out with this: success in AI. We were talking earlier, and I’d love for you to share some of where people in healthcare are finding success with AI today, where it’s being introduced and where it’s been successful.

So I heard, at a meeting the other day, from some new startup companies that were successful in AI, in terms of their ability to do a couple of things.

And I thought these were really important. One, they could demonstrate value to the community, to the provider organization that brought in the AI. They could show value either in improved efficiency for their clinical staff or in real, demonstrable claims processing. I don’t think anybody has really said, gee, outcomes have improved yet, but it has to be demonstrable. It has to be valued. It has to be something well characterized in terms of a workflow. You say, here are the five steps I used to do by hand to get the data into the system or to make a decision; now I only need two steps, because the AI in between those workflow steps has helped me make a better decision that is faster and more accurate. And most importantly, it’s real; you can rationalize it. It’s not inscrutable. It made a decision, and you can go and see, yeah, I can understand why it made it. A lot of this is also in the presidential executive order on AI, and a lot of it is brought up in that paper, which I thought was really well done.

It really did lay out the things you need: obviously things around hallucination, the non-inscrutability of the results, the fact that it has to have value, and the security and privacy controls you need. That’s a really good basis, and I’m looking forward to the NIST guide.

While NIST has come out with the guideline, I’m looking for the reviews that say, hey, what are the guardrails we need to make this technology really secure, in a manner that benefits us. So there’s always going to be that tension, right? Unfettered research and freedom versus guardrails.

It’ll be interesting to see how that works. Yeah, definitely. I love that idea. Finding return is important, obviously, for the administration, but it’s also return for the individual. I talked earlier about the spreadsheet and people looking at it and just going, oh my gosh, this is going to make my life easier.

When we reduce workflows from five steps to two, and we have that transparency into the model and are able to delve into it, that becomes very valuable to the end user. And one of the other things you were saying earlier is that we can bring expertise into the workflow that we used to have to go find. The AI models are good at going out, finding this stuff, and bringing it in based on their reasoning patterns. It’s a reasoning engine, right? It’s able to say, hey, I think what you need right now is this information, and boom, here it is. Whereas that would be a step for us; we’d potentially have to go outside of this system, find it in another system, and bring it back.

And adding that expertise is one of the things we’ve been talking about in healthcare for a long time: the right information at the right time to deliver the best care, without overwhelming your workflow, right? I always come back to this: the best technology is the technology that has just faded into the background.

We don’t even see it anymore. We don’t experience it; it’s just helping us. I mean, here’s my keyboard. When this goes away, that will be the culmination of the PC. When it gets to the point where I’m just interacting with something, then it will truly be a personal computer.

You’ll just have to point it at my neck.

Why the neck? That’s where all the neurons are. Oh, is that a Matrix reference? I’m just going to plug into the thing.

Oh man, I’m not getting that. I’m old enough; I’m not getting that surgery. When my grandkids come to me and go, oh my gosh, you can’t even plug in? I’m like, no, I don’t plug in. Jack sizes; I have the old jack. Exactly. Yeah, mine’s RJ11. What are we at now, RJ what? They’re going to say, aren’t you wireless yet?

Wireless? Oh my gosh. Oh man, David, always great to catch up. Thanks for coming on.

Thanks for listening to Newsday. There’s a lot happening in our industry, and while Newsday covers interesting stuff, another way to stay informed is by subscribing to our daily insights email, which delivers expertly curated health IT news straight to your inbox. Sign up at thisweekhealth.com/news.

Big thanks to our Newsday sponsors and partners: Clearsense, SureTest, Tausight, Order, Healthlink Advisors, Cedars-Sinai, Rackspace, CrowdStrike, and Fortified Health. You can learn more about these great partners at thisweekhealth.com/partners. Thanks for listening. That’s all for now.