How Cybersecurity Tools Can Address the Needs of Diverse Health Systems (Video Transcript)

Bill Russell (Interviewer): We talk about healthcare like it's one homogeneous thing, but the reality is… I just interviewed a health system that has 150 beds. I interview health systems that have a hundred hospitals. And so you have critical access hospitals, you have rural versus urban, you have differences in access to cyber resources. The budgets are different. The talent's different. Just implementing a framework is so different. I would think from the government standpoint, you're just looking at it going: can we just get agreement on a framework to approach this and get adoption across healthcare?

David Ting (Tausight CTO and Founder): Well, that was the conclusion of the first Cybersecurity Task Force, which adopted the NIST Cybersecurity Framework as a model for how to approach securing your system. That starts with inventorying all the assets you believe are critical to your workflow: the patient records, the applications, the endpoints you need to deliver that workflow. The second piece is figuring out how they're protected. How do you secure the data? Is it encrypted? Does it have proper access control? Are your applications secured? Who has ownership and access rights to modify them?
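The identify-and-protect steps described here, inventory the critical assets, then check how each one is secured, can be sketched in miniature. This is a hypothetical illustration, not how any particular product works; all names and fields are invented:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    # One inventoried item: a record store, application, or endpoint.
    name: str
    kind: str                # e.g. "patient_records", "application", "endpoint"
    encrypted: bool          # is the data encrypted?
    access_controlled: bool  # does it have proper access control?

def unprotected(inventory):
    # The "protect" check: flag critical assets missing either safeguard.
    return [a.name for a in inventory
            if not (a.encrypted and a.access_controlled)]

inventory = [
    Asset("ehr-db", "patient_records", encrypted=True, access_controlled=True),
    Asset("lab-app", "application", encrypted=False, access_controlled=True),
    Asset("nurse-ws-12", "endpoint", encrypted=True, access_controlled=False),
]
print(unprotected(inventory))  # → ['lab-app', 'nurse-ws-12']
```

The point of the sketch is the order of operations: you cannot run the protection check until the inventory exists, which is why the framework puts identify first.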

The third aspect is tracking, or detecting, changes in your system. Well, how are you going to do that? You don't even know if there's a piece of rogue software sitting on your endpoint, and it would take you 60 days to find it. Is that going to be sufficient, or do you need something that's closer to real time? If you don't have that, how do you even begin to respond to the challenges of data that gets exfiltrated, applications that get compromised, rogue software that worms its way across multiple endpoints? You need visibility into what's going on across your system in order to respond. And then, when you do get compromised and you do have an incident, how do you know you've closed down all the avenues for secondary attacks? How do you know the first one wasn't just a ploy to deploy other, deeper agents? The reason it takes so long to recover after an incident is that you have to scrub every system and ask: are there things I didn't know about that were planted by that first wave of the attack, deeper agents implanted, more software modified on my system without my knowledge?
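At its core, the detect step he describes, knowing when something on an endpoint changed without waiting 60 days, is a baseline-and-diff. A minimal file-based sketch under assumed simplifications (real endpoint sensors watch far more than files, and do it continuously rather than by rescanning):

```python
import hashlib
import os

def snapshot(root):
    # Baseline: hash every file under a directory tree.
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for fn in filenames:
            path = os.path.join(dirpath, fn)
            with open(path, "rb") as f:
                hashes[path] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def detect_changes(baseline, current):
    # Detect: new files (possible rogue software) and modified files.
    added = [p for p in current if p not in baseline]
    changed = [p for p in current
               if p in baseline and current[p] != baseline[p]]
    return added, changed
```

Run `snapshot` periodically and diff against the baseline; how often you run it is exactly the "60 days versus near real time" question raised above.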

How do I know there wasn't something lingering? So the whole model for the framework you're talking about is: how do you approach this in a systematic way? I think the guidelines are rigorous, and they require a whole rethinking of how we do this, as opposed to just trying to secure the perimeter, just trying to secure the network. That, to me, is a good technological approach for securing the hardware and the systems. But what you really need to do is secure, and have visibility into, what's changing on your system: where's the data, and what are the things that impact my workflow? In healthcare, that goes all the way back to the clinician. What did they touch? What did they interact with? Where did they do this across all the endpoints? Do I have visibility from that perspective? Can I do this across time and across all the endpoints, in an integrated fashion, not by trying to pull logs from multiple machines and attempting this ginormous blend? And can I do it, frankly, in pseudo-real time? None of this: gee, I had an incident, pull the logs and see if I can sift them together.