But this need for visuals makes things computationally harder, right?
Yes, a big pain point for us right now is the sheer amount of data. We’re running simulations with up to 580 million red blood cells. There are interactions between the fluid and the red blood cells, between the cells and each other, and between the cells and the walls, and you’re trying to capture all of that. For each model, a single time point might be half a terabyte, and there are millions of time steps in each heartbeat. It’s really computationally intense.
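A rough back-of-envelope calculation, using only the figures quoted above, gives a sense of that scale. The numbers below are illustrative order-of-magnitude estimates, not values from the study.

```python
# Back-of-envelope estimate of raw output for one simulated heartbeat,
# using the rough figures quoted above (illustrative only).
bytes_per_timestep = 0.5e12   # ~half a terabyte per saved time point
timesteps_per_beat = 1e6      # "millions of time steps in each heartbeat"

total_bytes = bytes_per_timestep * timesteps_per_beat
print(f"~{total_bytes / 1e15:.0f} petabytes per heartbeat if every step were saved")
# Presumably only a fraction of time steps is actually written to disk.
```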
Your team has tried to reduce the computational demands using machine learning tools. How has that gone?
We recently published a paper about a system that takes about 10 minutes to train a new model, using flow simulations, for each patient. You can then use machine learning to predict, for example, what the overall blood pressure would become if you changed the degree of the stenosis (how much an artery is narrowed). The current FDA-approved tools take about 24 hours for that patient. Here, you can get real-time interaction while the patient might still be in the clinic.
What’s behind this big leap?
It’s the combination of machine learning with a smaller physics-based model. We figured out how many surgical treatment options we need to simulate for each patient to train a model that provides real-time predictions. And we’re using one-dimensional models, instead of full 3D models, for the machine learning training. They just calculate along the blood vessel’s centerline, which captures the 3D structure but not the full x-, y-, z-coordinates of the flow. It’s smaller data, so it’s faster to train, and you can use different machine learning algorithms, so it’s faster to run.
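A minimal sketch of that idea, assuming a toy Poiseuille-style resistance model along the centerline and an off-the-shelf regression surrogate: the function names, parameters and physics here are illustrative stand-ins, not the group’s actual reduced-order model or code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# --- Toy 1D "physics" model: pressure drop along a vessel centerline ---
# Segment-wise Poiseuille resistance; a stand-in for the real reduced-order
# solver, which also handles pulsatile flow and vessel compliance.
MU = 0.0035          # blood viscosity (Pa*s)
FLOW = 5e-6          # volumetric flow rate (m^3/s), held fixed here

def pressure_drop(radii, seg_length=0.01):
    """Sum segment-wise Poiseuille resistances along the centerline."""
    resistance = 8 * MU * seg_length / (np.pi * radii**4)
    return FLOW * np.sum(resistance)   # Pa

def stenosed_vessel(severity, n_segments=50, base_radius=2e-3):
    """Narrow the middle of the vessel by `severity` (0 = healthy, 0.8 = severe)."""
    radii = np.full(n_segments, base_radius)
    mid = slice(n_segments // 2 - 5, n_segments // 2 + 5)
    radii[mid] *= (1.0 - severity)
    return radii

# --- "Training": run the cheap 1D model for a handful of treatment options ---
severities = np.linspace(0.0, 0.8, 9)
drops = np.array([pressure_drop(stenosed_vessel(s)) for s in severities])

# --- Surrogate: fit pressure drop as a function of stenosis severity ---
# (Polynomial features, since the response is strongly nonlinear in severity.)
X = np.vander(severities, N=4, increasing=True)
surrogate = LinearRegression().fit(X, drops)

# Real-time query: what happens if the stenosis is reduced to 30% narrowing?
query = np.vander(np.array([0.3]), N=4, increasing=True)
print(f"Predicted pressure drop at 30% narrowing: {surrogate.predict(query)[0]:.0f} Pa")
```

Once the handful of 1D simulations is run, queries against the fitted surrogate are essentially instantaneous, which is what makes real-time interaction in the clinic plausible.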
That improvement must come at a cost. What do you give up with machine learning?
We always want things to be interpretable, especially when it’s going into the clinic. We want to ensure that doctors know why they’re making a decision and can interpret what factors influenced that prediction. You lose some of that when it turns into a black box. Right now, a lot of our work tries to understand uncertainty. How accurate does a sensor have to be to lead to a change in your blood-flow simulation?
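One way to make that sensor question concrete is a simple sensitivity check: perturb a sensor-derived input by its error margin, rerun the model, and see whether the predicted quantity crosses a clinically meaningful threshold. The toy model, variable names and threshold below are assumptions for illustration, not the group’s method.

```python
import numpy as np

def simulated_pressure_gradient(flow_rate):
    """Stand-in for a blood-flow simulation: returns a pressure gradient (mmHg)
    as a function of a sensor-derived inlet flow rate (mL/s)."""
    return 0.5 * flow_rate + 0.1 * flow_rate**2

def sensitivity_to_sensor_error(flow_rate, relative_error, threshold_mmHg=2.0):
    """Check whether a given sensor error could change the clinical picture."""
    baseline = simulated_pressure_gradient(flow_rate)
    low = simulated_pressure_gradient(flow_rate * (1 - relative_error))
    high = simulated_pressure_gradient(flow_rate * (1 + relative_error))
    spread = max(abs(high - baseline), abs(low - baseline))
    return spread, spread > threshold_mmHg

for err in (0.02, 0.05, 0.10):          # 2%, 5%, 10% sensor error
    spread, matters = sensitivity_to_sensor_error(flow_rate=10.0, relative_error=err)
    print(f"{err:>4.0%} error -> ±{spread:.1f} mmHg "
          f"({'could change the call' if matters else 'within tolerance'})")
```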
Then there’s the potential for bias, which you have to be aware of, especially with wearable sensors. A lot of the data may come from more affluent areas. If you’re only training on Apple Watch data, are you getting the right population mix? It’d be great to have a large population data set spanning different ages, genders, activity levels and comorbidities.
What could you do with all that data?
With up-to-date medical images for the 3D model and continuous, dynamic, high-resolution sensor data, we could figure out, for example: If you had a heart attack when you were 65, did something happen at 63? Are there ways we could have been more proactive and identified some nuance in the blood flow?
Geometry varies a lot from person to person. You need a ton of data to be able to figure out what a small difference in the blood flow is and why it would matter.
What are the limits of simulating health like this?
I don’t think there are necessarily limits, just a lot of challenges. There’s so much that we could tie in, like the nervous system and lymphatics. We want to integrate data from many different systems, but having them talk to each other in feedback loops is really complicated. I think we’ll get there. It will just be about adding one system at a time.
Copyright for syndicated content belongs to the linked source: Quanta Magazine – https://www.quantamagazine.org/with-digital-twins-the-doctor-will-see-you-now-20240726/