In episode 18, The Frontier investigates a topic dominating technology development across multiple industries: Artificial Intelligence, or AI. One of the most promising areas for AI research rests at the intersection of biology and medicine. In this episode, Robert Fratila, CTO and co-founder of Aifred Health, digs into deep learning, neural networks, and hype-busting truths about the current limits of AI.
Fratila has a background in computer science and biology. He’s worked on brain-state classifiers, computer vision packages for autonomous underwater vehicles, and predictive models for cancer patients, just to name a few.
Where Is the Line Between AI and Sci-Fi?
While sci-fi fans may dream of a future where robots replace all jobs, the reality of AI is much different: AI is best thought of as augmenting humans, not replacing them. Importantly, there is one thing humans are good at that AI can't model: common sense.
Fratila: “AI sprouted out of this ability to look at unstructured data – brain scans, speech, text, etc. – find correlations, and use them to predict some sort of variable. It essentially leverages complex algorithms for very specific tasks, such as analyzing brain images or assays, or listening to audio and recognizing voices.”
AI is a tool to help you get better at your job. For example, a physician who needs to diagnose and treat a patient would have to read hundreds of papers to stay up to date, whereas an AI tool can sift through hundreds of thousands of data points, find nonlinear correlations, and make treatment recommendations. AI is a tool; it was never intended to replace doctors' professional decision-making.
What is AI?
For all the hype around AI, many don’t understand how it works and how it learns.
Fratila: “AI is a broad term for a lot of tools. Without getting too technical, it can be thought of in terms of how one sees a picture. One sees the pixels, but that’s not the complete picture. It’s important to also see the context around it and go from close-up to bigger picture. Sight goes from pixel to edge to corner to eye to cat, getting increasingly abstract at each layer.”
Similarly, machine learning and deep learning start by simplifying everything down to one artificial neuron, or node, which performs a very basic operation. From this basic level, each layer of artificial neurons finds patterns; the network learns those patterns and combines them with the patterns detected in the next layer. This continuous process of abstraction ends in a latent space: a compact representation that turns the input data into something the model understands. The model then produces a series of numbers, which are converted into a probability for whatever one is trying to predict.
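The layered process above can be sketched in a few lines of plain Python. This is a toy, hand-wired network, not a trained model: the weights are arbitrary illustrative values, and the layer sizes are chosen only to show how raw inputs pass through neurons to become a probability.

```python
import math

def neuron(inputs, weights, bias):
    # One node: a weighted sum of its inputs plus a bias,
    # passed through a ReLU nonlinearity (negative sums become 0).
    return max(0.0, sum(i * w for i, w in zip(inputs, weights)) + bias)

def sigmoid(z):
    # Squash the final score into a probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

x = [0.5, -1.2, 3.0]                            # raw input features ("pixels")
hidden = [neuron(x, [0.4, 0.1, -0.2], 0.1),     # first layer of patterns
          neuron(x, [-0.3, 0.8, 0.5], 0.0)]
score = sum(h * w for h, w in zip(hidden, [1.5, -0.7])) + 0.2
prob = sigmoid(score)                            # probability for the prediction
```

In a real network the weights are not hand-picked; they are learned from data, and there are many more neurons per layer.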
How Can the Average Developer Leverage AI?
Fratila: “For developers, there are deep learning frameworks like PyTorch and TensorFlow. These AI engineering platforms make all these complicated operations easily accessible and let developers think about the problem, what they are trying to predict, and then focus on improving the performance of the model.”
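As a concrete illustration of that, here is a minimal PyTorch sketch of the kind of model described earlier: a stack of layers that turns raw features into a probability. The layer sizes here are hypothetical, chosen only to show the structure.

```python
import torch
import torch.nn as nn

# A minimal binary classifier: each Linear + ReLU layer learns an
# increasingly abstract representation, and the final Sigmoid turns
# the score into a probability. Layer sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(16, 8),   # raw features -> first layer of patterns
    nn.ReLU(),
    nn.Linear(8, 4),    # combine patterns into higher-level features
    nn.ReLU(),
    nn.Linear(4, 1),    # latent features -> single score
    nn.Sigmoid(),       # score -> probability
)

x = torch.randn(1, 16)       # one example with 16 input features
prob = model(x).item()       # forward pass yields a probability
```

With the framework handling the layer math and the training loop machinery, the developer's job reduces to choosing the architecture, the data, and what to optimize.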
If developers don't know the best model, tools like AdaNet can help them identify what is state of the art. With AdaNet, developers supply data and labels, and it automatically searches for the best network architecture to optimize. One of the most important things for developers to understand is what they want to maximize or minimize: what they want to predict.
How Can my Organization Leverage AI?
Gun.io works with top AI and machine learning experts from across the US and introduces them to clients who are interested in using AI or ML to spur business growth. As freelancers, these experts can consult with your organization for 20 hours and then make a set of recommendations for high-return AI implementation. Interested in learning more?