Gun.io
February 7, 2019 · 12 min read

The Intersection of AI and Medicine with Robert Fratila

One of the most promising areas for artificial intelligence research rests at the intersection of biology and medicine. That’s where we found Robert Fratila, CTO and Co-founder of Aifred Health. He and his team won an XPRIZE at the Annual Conference on Neural Information Processing Systems. He’s worked on brain-state classifiers, computer vision packages for autonomous underwater vehicles, and predictive models for cancer patients, just to name a few. In this episode we dig into deep learning, neural networks, and hype-busting truths about the current limits of AI.

Robert Fratila

CTO and Co-founder of Aifred Health

Robert Fratila is a Co-Founder and CTO at Aifred Health. Trained in computer science and biology, his passion for combining medicine and AI throughout his work and research has grown extraordinarily. As a software developer at the Montreal Neurological Institute, he worked on integrating state-of-the-art deep learning techniques in the healthcare industry, specifically brain imaging. This has given him valuable experience in finding efficient solutions to complex problems. As CTO of Aifred Health, he manages machine learning research and product engineering teams, producing world-leading software and extremely promising results for mental health care.

Robert has also given many invited talks and workshops about his work and research at a variety of events, including the Canadian University Software Engineering Conference (CUSEC) and the Sixth Biennial Conference on Resting State and Brain Connectivity. He is committed to spreading knowledge and demystifying the wonderful field of machine learning.


Ledge: Robert, thanks for joining us today. It’s great to have you on. How about if you just give a little intro about yourself?

Robert: It’s a pleasure to be here. I’m Robert. I’m the CTO of Aifred Health. My background is in computer science and biology so I love working right at the intersection of these two fields.

I’ve done a lot of deep learning for Magnetic Resonance Imaging ─ these are the brain scans I worked on at the Neurological Institute here in Montreal. A lot of the talks and outreach events that I do are to help inform the public about the work that we do in AI, or artificial intelligence ─ what are the limits?

Ledge: You’ve done brain scanning and you’re working in AI. Do those things connect? Can we, singularity-style, model the brain and start to make deep learning networks and neural networks? Where is the hype and science fiction versus the real stuff now?

Robert: That’s a great question. In this current day and age, AI is sort of spread out in this ability to look at unstructured data ─ you mentioned brain imaging or text or audio or some sort of signals ─ and our ability to look through thousands of data points and be able to find these correlations and be able to use them to predict some sort of variables ─ is this cell tumorous or not? ─ or very specialized tasks.

We’re not at the stage where we can sort of transfer all this knowledge the same way that we can as humans ─ what we call common sense is actually a very difficult thing to model.

For now, the focus has really been in leveraging these complex algorithms for very specific tasks. So whether it is to analyze brain images or an essay or listening to audio and recognizing voices, they’re often very specific tasks.

Ledge: So that would lead me to believe that you’re thinking about AI and maybe all of us could think about AI as sort of human augmentation and not sort of this replacement paradigm. In fact, you cannot artificially replace the common sense. There’s no algorithm for that and I imagine that, obviously, you’re working in health care and medicine. There’s got to be all kinds of manufacturing and business processes and implications.

What’s the state of the art there now?

Robert: Right now ─ what you said about AI not essentially replacing but augmenting ─ the way we think about AI is that it’s always a tool to help you get better at your job.

For our scenario ─ let’s say, a physician ─ a patient comes in and they’re able to diagnose them like “I know this person has depression.” Then, the issue is “What is the best treatment?”

You’d have to read hundreds of pages and hundreds of papers in the literature to be able to stay up to date. Whereas, we can just have this tool right next to me where I can sift through hundreds of data points for this one patient and be able to find non-linear correlations and say, “My suggestion is Treatment A, Treatment B, Treatment C. These would have the highest impact for this patient.”

It’s always a tool. It can never replace the doctor’s professional decision making. It’s mostly just to augment to help sift through all that data for them so that they can spend more time with the patient and less time looking at charts.

Ledge: Let’s just focus on that particular application. How did it learn how to be close to correct on the things that it’s telling you? It’s some kind of a learning process. Literally, what is that? How does that get done?

Robert: Right now, if you look at all of the retrospective data that was collected through studies ─ these professionally coordinated studies of hundreds or potentially even thousands of patients where they would all go through the same protocol ─ they essentially collect as much about the patient as possible.

And so, at the end of two weeks or four weeks or six weeks, everyone was given, let’s say, the same treatment and they would see, at the end of that certain period, whether it worked or not.

And so, what we can do is go back and look at all the data that was collected ─ that baseline, timestep zero, let’s say ─ and then, at the next timestep where they were checking to see if the treatment worked or didn’t.

Essentially, the AI’s job is to take into account all of these features that were collected and be able to find ways that they can interact with one another so, at the end, at the six-week period, we can confidently see sort of a probability of whether or not this treatment will work.
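The retrospective setup Robert describes can be pictured as supervised learning pairs: baseline measurements in, observed six-week outcome out. A minimal sketch ─ the feature names and values here are entirely invented for illustration, not Aifred Health’s actual data:

```python
# Each record: baseline measurements (timestep zero) plus the observed
# outcome at the six-week follow-up (1 = treatment worked, 0 = it didn't).
# All feature names and numbers are made up for illustration.
records = [
    {"age": 34, "baseline_severity": 21, "prior_treatments": 2, "responded": 1},
    {"age": 52, "baseline_severity": 28, "prior_treatments": 0, "responded": 0},
    {"age": 41, "baseline_severity": 17, "prior_treatments": 1, "responded": 1},
]

# Supervised learning frames this as (features X, label y) pairs:
# a model's job is to map the baseline features to a probability
# that the treatment will work by the end of the study period.
X = [[r["age"], r["baseline_severity"], r["prior_treatments"]] for r in records]
y = [r["responded"] for r in records]
```

Framing the data this way is what makes the “will this treatment work?” question answerable as a probability.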

And so, oftentimes, it’s a matter of how you frame the question. Are you asking, “What is the best treatment?” or “Will this treatment work?”

All these different questions affect how you design your system. Again, this goes back to how specialized an AI is at this point.

There’s a lot of research going into this multi-label classification. For instance, we can predict what treatment would be good but, then, we can also predict a continuous variable like the dosage of that treatment, so we can start to broaden the kind of prediction you’re trying to make. And it’s been really fantastic.

Ledge: Quite literally, at some point in AI, it’s cool to refer to it as sort of like a being. But what is it? When we say we have an AI, what kind of computer system is that on? How is it stored? How does its memory grow? What’s the actual nature of learning in a computer environment?

Robert: AI is kind of an umbrella term for a lot of tools. For instance, we had expert systems, where you would sort of specify the decision tree that you would need to make. And then we got into linear regression and logistic regression, where it’s simply attaching a correlation ─ these coefficient weights ─ to each feature and then applying a mathematical function like a sigmoid function.

I don’t want to get too technical. But we have machine learning, and this is where you come into contact with support vector machines, random forests (not literal trees) and, as I’ve mentioned, logistic regression.
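The logistic-regression idea Robert mentions ─ a coefficient weight per feature, passed through a sigmoid ─ fits in a few lines. A toy sketch with made-up weights:

```python
import math

def sigmoid(z):
    # Squashes any real number into the (0, 1) range,
    # so the result can be read as a probability.
    return 1.0 / (1.0 + math.exp(-z))

def logistic_predict(features, weights, bias):
    # Weighted sum of the features (the "coefficient weights" per feature)...
    z = sum(w * x for w, x in zip(weights, features)) + bias
    # ...passed through the sigmoid to get a probability.
    return sigmoid(z)

# z = 2.0*1.0 + (-1.0)*0.5 = 1.5, and sigmoid(1.5) is roughly 0.82
p = logistic_predict([1.0, 0.5], weights=[2.0, -1.0], bias=0.0)
```

Training is then just the search for the weights that make these probabilities match the observed labels.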

And then, we’d get into what has sprouted out more recently which we refer to as “deep learning.” It takes a lot of inspiration from some of the earlier works but it’s simplifying it down to what we call “artificial neuron” or a node that just applies a very simple function. It could be like addition or multiplication or a very basic operation.

And then, when you tie these together ─ let’s say, into one layer with a one-to-one mapping to each of your features ─ each feature gets some operation applied to it. You can essentially approximate any function with that; and the more layers you add, the more you get this notion of abstraction.

So when you look at a picture, you can’t just look at the pixels; you have to look at the context around it. You can look at one row of pixels, but it’s important to look at the rows of pixels above and below it too, and then sort of go from a very close-up view to understand that this set of pixels is an edge; this set of pixels is now a corner. And so, you get increasingly abstract concepts. Eventually, you’ll go, “Oh, I’m actually looking at a cat” ─ but it’s a series of all these little operations that you try to pick out.

That’s why it’s very interesting where, essentially, each one of these layers of artificial neurons is responsible for finding these sort of patterns.

I’ll go back to the analogy of analyzing pictures. The first layer is sort of a glorified edge detector. It’s looking for basic patterns and edges, and then the next layer takes what it learned in the previous layer ─ those edges ─ and starts to combine them.

Now, we have curves and a little bit more complicated edges. And then, we go to the next layer where we have “This here is an eye” or “This here is an eyebrow.” And so, it’s always building up from what it previously learned the deeper it goes into these layers of neurons.

And once you get to the end, you have what we call a “latent space.” It takes all the data that you inputted and then turns it into something that understands ─

This tool will sort of take all this data at various levels of operations and, eventually, come down to a series of numbers. Those numbers are not interpretable by us but, at the end, it’s going to take that and transform it into, let’s say, trying to predict what it is that you’re looking at. It’s going to take that number and then just turn it into a probability.
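Robert’s whole pipeline ─ layers of simple operations ending in a series of numbers that get turned into a probability ─ can be sketched in miniature. All the weights below are arbitrary, chosen only to show the shape of the computation:

```python
import math

def layer(inputs, weights, biases):
    # One layer of "artificial neurons": each neuron is a weighted sum
    # plus a very simple nonlinearity (here ReLU, i.e. max(0, x)).
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def softmax(scores):
    # Turns the final, humanly uninterpretable numbers (the latent
    # representation) into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

x = [0.5, -1.2, 3.0]                                             # input features (made up)
h = layer(x, [[0.1, 0.4, -0.2], [0.3, -0.1, 0.5]], [0.0, 0.1])   # one hidden layer
probs = softmax(h)                                               # output probabilities
```

Stacking more `layer` calls between the input and the softmax is exactly the “deeper” in deep learning: each extra layer builds on the patterns the one before it found.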

Does that make sense?

Ledge: Absolutely! I imagine that it takes on the learning mantle because that’s, essentially, modeled after the way that our brains work from a young age. So it learns to categorize and bucket particular things and build upon those abstractions. The whole model is based upon that neurological development.

Very cool!

Along the lines of abstraction, I wonder, in the tooling space, as a developer, one conversation that I’ve had repeatedly about blockchain, for example, is that there’s this missing sort of toolset or building-block set in the middleware that will allow someone who is a competent, let’s say, JavaScript developer, a competent backend developer, to start to access blockchain in a way that it just is. It’s just a protocol; it just does the things it’s supposed to do and I can rely upon that.

Where do you see that in machine learning and the AI space? At what point does it become abstracted enough that anyone can take advantage of that as a service or as a consumable asset?

Robert: That’s a great question. Recently, there’s been a lot of push from a lot of big corporations in trying to make it easier to ─ you have, let’s say, images and labels; it doesn’t matter what you use; just get me a model that will do that.

There are a lot of these web interfaces that would let you do something like that. But for developers, you’ll probably hear about some of the major deep learning frameworks like PyTorch or TensorFlow. They’re heavily engineered to be easy to use ─ they abstract away all of those complicated operations I was mentioning earlier and just let you think about your problem, what you’re trying to predict, and then focus on improving the performance of the model.

But, say, you would like to abstract even further about “What if I don’t know what the best model is?” or “I don’t want to have to read hundreds of papers that are coming out every month about the new state of the art because there is sort of this race to the bottom.”

And so, there’s been a lot of work in automating even that procedure as well. Recent papers came out where you essentially put in your data and your labels, and it will automatically find the best network ─ the best architecture ─ to optimize your loss.

One of the most important things is for a developer to understand what they are trying to maximize or minimize and what it is that they’re trying to predict.

And then, once you have the problem structured like that, you’d be able to use these frameworks. They’re very well documented so anyone can pick it up and go from there.
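Robert’s point ─ know what you’re trying to maximize or minimize ─ is the heart of every framework. A toy sketch of the loop underneath them, using an invented one-parameter loss in place of a real model’s loss:

```python
def loss(w):
    # Toy loss with its minimum at w = 3. In a real framework (PyTorch,
    # TensorFlow) this would be, e.g., cross-entropy over your data.
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the loss. Frameworks compute this for you
    # automatically via backpropagation.
    return 2.0 * (w - 3.0)

# Gradient descent: repeatedly step downhill on the loss surface.
w = 0.0
for _ in range(200):
    w -= 0.1 * grad(w)

# w has converged toward 3.0, the minimizer of the loss.
```

Once you can state your problem as “minimize this loss over these parameters,” the frameworks handle the rest ─ which is why framing the question well matters more than the mechanics.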

Ledge: Fantastic! Thanks for doing that. A question I always ask everyone as we wrap up is ─ obviously, we’re in the business of very senior remote engineers. And I always like to ask the tech leaders we talk to, “If you were tasked with hiring remote talent in a very senior, unicorn kind of way for engineers, what are the factors that you would look at and how would you discern that? What are the heuristics?”

Robert: The person that I’d look for is someone who sort of understands what they do very well. I would, oftentimes, just look over a CV. I have a bit of a template but I’d like to just dig really deep into some of the projects that they have done.

One of my favorite things to probe is how they work and how passionate they are about what they’re working on. I really appreciate when somebody can admit whether they know something or they don’t, because that really goes a long way.

So being blunt with someone is definitely a skill I look for. It’s a difficult one to develop ─ to admit when you don’t know something ─ but just being truthful to one another definitely helps with the whole team’s growth.

Another question is “What makes you passionate about a project?”

It doesn’t matter which project they’re working on. They just sort of pick one and I just like to listen to how they describe the problem ─ how they solved it, the greatest challenge that they encountered, and how they overcame that challenge.

What I’m looking for is mostly just how passionate they are about what they’re working on, and seeing how they handle the complications they run into. I’m just looking for someone who is truthful about their capabilities ─ what is it that they know, what is it that they don’t know ─ because it really goes a long way with building a solid team if everyone is on the same page. It’s really just to take advantage of everyone’s strengths.

Ledge: Fantastic! I love that. Robert, thanks so much. I really appreciated the thoughts today and we’re looking forward to promoting them to the audience.

Robert: My pleasure! Thank you very much.