Improving UX with Automated Business Solutions with Tyler Foster of Sentient Technologies
Tyler Foster is the Vice President of Engineering at Sentient Technologies. As both a senior individual contributor and an executive, Tyler has spent more than 18 years delivering technical solutions to the world's hardest problems.
Tyler's past experience includes leading firmware and control-system development for subglacial lake exploration ROVs deployed in Antarctica with the MSLED / Wissard project; front-end platform architecture and service design at Apollo Group, one of the world's largest private education companies; and, at Cloudera, distributed systems deployed by many Fortune 500 companies to solve their most complex data problems. Most recently, Tyler led a cloud infrastructure startup with operations in the US, UK, and Asia.
Ledge: Tyler, thanks for joining us. It’s really cool to have you.
Tyler: Thanks, Ledge.
Ledge: Can you give your two- or three-minute background, story, and history about you and your work just for the listeners?
Tyler: Sure. My background is primarily large-scale distributed systems. I’ve built platforms at a few different companies including one of the largest education companies in the world as well as Cloudera, a Hadoop distribution company.
Lately, I’ve been working on evolutionary algorithms and evolutionary computation with Sentient which is the company that I’ll be talking to you about today.
We do optimization of stochastic problems and online learning ─
Ledge: Talk to me about stochastic problems and online learning. What is the particular domain there? I mean, you’re talking about some pretty serious technology. Break it down for the business guy.
Tyler: Basically, what our platform does is take a set of ideas, potentially millions of permutations of different possible solutions to a problem, and then optimize over time to find early wins and early performance gains while also identifying the global maximum of the problem space.
What that really means in practical terms is that we're focused on optimizing the user experience of websites. People can put in tens of ideas for how they want to change their website, different content to try, et cetera.
And then, we will collect data about those and identify the optimal combination of those changes at any given time.
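To give a sense of why this needs an optimization engine rather than brute force: even a handful of independent page changes multiplies into a large search space. A quick illustrative calculation (the element counts here are hypothetical, not Sentient's numbers):

```python
from math import prod

# Hypothetical experiment: 8 page elements, each with a few variants
# (headline copy, hero image, button color, layout, and so on).
variants_per_element = [3, 4, 2, 5, 3, 2, 4, 3]

# Every full page design picks one variant per element, so the
# size of the search space is the product of the per-element counts.
total_combinations = prod(variants_per_element)
print(total_combinations)  # 8640 distinct page designs from 8 small ideas
```

Testing each combination exhaustively with a classical A/B split would need traffic proportional to that count, which is what motivates a search-based approach.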
Ledge: And so, business-benefit-wise, is it really a predictive engine for user experience and engagement?
Tyler: Yes, basically, it’s an optimization engine. So you turn us on; you put a bunch of ideas in; and if you’re an e-commerce site your revenue goes up, basically. It continually improves throughout the time that our system is running. It’s a faster automated optimization compared to an A/B or a standard multivariate testing approach.
Ledge: How does that actually work? What’s the stack of technologies and how does that integration work?
Tyler: We are primarily in AWS, so we use Lambda@Edge quite significantly, as well as containers in ECS, and then different data-handling systems: Athena, Kinesis, the usual suspects.
So, basically, the algorithms are based around the concept of genetic algorithms. We introduce a lot of the ideas, create possible solutions, and test fitness on those possible solutions; then we continue to produce variations on the tested solutions to find the local maximum, but also to keep searching until we find the global maximum as well.
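The loop Tyler describes (seed a population of candidate designs, score their fitness, then breed variations of the fittest) can be sketched in a few lines. This is a generic toy genetic algorithm for illustration only, not Sentient's implementation; the fitness function and all parameters are made up:

```python
import random

def genetic_search(num_elements, variants, fitness, pop_size=20, generations=50):
    """Toy genetic algorithm over page designs.

    A design is a tuple choosing one variant per page element.
    `fitness` scores a design (in production this would be an
    observed metric such as conversion rate).
    """
    # Seed a random initial population of candidate designs.
    population = [tuple(random.randrange(variants) for _ in range(num_elements))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank candidates by fitness and keep the top half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            # Crossover: take each element from either parent.
            child = tuple(random.choice(pair) for pair in zip(a, b))
            # Mutation: occasionally swap in an untested variant.
            child = tuple(random.randrange(variants) if random.random() < 0.1 else g
                          for g in child)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Toy fitness: designs closer to an (unknown) ideal convert better.
ideal = (2, 0, 1, 2, 0, 1)
best = genetic_search(6, 3, lambda d: -sum(x != y for x, y in zip(d, ideal)))
print(best)  # a 6-element design tuple; with enough generations, often equal to `ideal`
```

The keep-the-elite-then-breed structure is what lets the search surface early wins (good parents survive every generation) while mutation keeps probing for a better global optimum.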
Ledge: And how do you know when you’ve reached the optimal solution? In any given case, that’s a broader algorithmic question there. The universe keeps changing.
Tyler: Yes. We kind of adhere to the idea that there isn't a permanent global maximum. We're constantly trying to find the best combination for now. So as your user interaction changes, as the market changes, as you run different campaigns, your user behavior will change.
And so, what we’re really looking for is the best combination of changes at any given time.
So, in some cases, our customers will choose to find the best combination for now, take it, and implement it based on our Bayesian statistics telling them, “This is the most likely combination to improve the goal achievement of your users.” But, in the end, the system can continue to search in the long term.
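That "most likely combination to improve goal achievement" call is the kind of decision a standard Bayesian comparison of conversion rates supports. A generic Beta-Binomial sketch with made-up numbers, not the production statistics:

```python
import random

# Hypothetical observed results per candidate design: (conversions, visitors).
results = {"design_A": (48, 1000), "design_B": (62, 1000), "design_C": (55, 1000)}

def prob_best(results, draws=20_000):
    """Monte Carlo estimate of P(each design has the highest conversion rate).

    Each design's rate gets a Beta(conversions + 1, misses + 1) posterior
    (uniform prior); we sample all posteriors jointly and count wins.
    """
    wins = {name: 0 for name in results}
    for _ in range(draws):
        samples = {name: random.betavariate(c + 1, n - c + 1)
                   for name, (c, n) in results.items()}
        wins[max(samples, key=samples.get)] += 1
    return {name: w / draws for name, w in wins.items()}

print(prob_best(results))  # design_B wins most draws on this hypothetical data
```

A customer can lock in the leader once its win probability clears a threshold they're comfortable with, or let the system keep searching as Tyler describes.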
So the end product is that applied to web and user experience.
Ledge: I have a question. I've talked to other technology leaders, particularly at companies where the end product is very ML- or AI-based, about organizing an engineering team, and a group of teams, in that context. Because the work is so scientific and experimental, it may not, in fact, fit along the lines of normal Scrum and Agile.
I'm curious what you guys have run into there. Do you treat engineering teams and scientific teams differently? How do you handle the organizational flow?
Tyler: As far as that goes, we do find that it works sort of well enough but we have the team structured in such a way that we have a couple of data scientists who are permanently exploring new ideas and new improvements to algorithms. Sometimes, that’s a simulation task; sometimes, it’s productizing an algorithm task.
But that work is available to everyone. And so, we try to democratize it rather than just isolate a team of smart people in a room who spend all of their time researching without propagating that data and experience to the rest of the engineering team. We try to give all of our engineers the opportunity to do research and to find the right answers, because I think that makes the team better overall.
It makes everyone value the output of the research more significantly if they're involved in producing it, if they understand the justifications for it, and if they help to productize those solutions.
Ledge: Do you have cross-functional product managers, people who are thinking particularly about the product angle there? It seems like it would be an engineering, science, and product kind of triangle.
Tyler: Yes. We have an engineer with a math background as our product manager within the company, and so he's quite cross-functional and can speak to both sides. And I tend to bridge the gap as well; I'm much more technical in my background. We work together to identify the best path to take the technology.
Ledge: Fantastic! I ask all the guests this because we’re in the business of evaluating and vetting and finding the very best engineers and it ends up being less than one percent getting through the net.
So we have a pretty strong sort of proprietary process, if you will, for doing that. And yet, I like to continue to improve that process so I ask all our guests, “How do you find and hire ─ and more on the hire and sort of identify ─ the very best unicorn senior A-plus engineers? What are the heuristics that are most important for these roles in your company?”
Tyler: I think that it really depends on the team. At any given time, a team has its strengths and weaknesses. And we’re always trying to be aware of where we’re strong and where we’re weak.
I’m looking for people who fill the gap. So much about good engineering is context. So does that person have the background to solve the problems we have right now?
As far as the process goes, I find that engineers we've worked with previously within our network, people we have long track records of delivering great products with, always end up being the best hires.
And good engineers know good engineers. So it’s a good sort of practice to get people who are both good engineers and good people because they generally have a strong network that we can continue to expand the team through.
And then, as far as the evaluation of individuals goes, we try to do blind panels where each interviewer tests the candidate on what matters to them and reviews them in that context. Then we can see where their strengths and weaknesses are and determine that maybe they're great technically, in general, but not filling the gaps that we actually have.
Or vice versa: they aren't the most technical candidate, but maybe they have exactly the skill set that we need, the skill set where we're weak.
So, overall, I think that teams will evolve. You'll lose certain people who have strengths, and you'll need to backfill those.
The problem space changes. And so, we adapt our teams relative to the current context.
Ledge: That’s great. I love that it’s not just about any particular profile or test or anything; it really depends on the organizational and business problem that you’re trying to solve with that collection of skills in context.
Ledge: Of course, now I want you to develop the AI to battle-test that and figure out how to fit the exact right person at the right time. People have been at that problem for a while.
Tyler: Yes. I think some problems are good for AI and some problems are good for traditional intelligence.
Ledge: We used to joke that we use AI, actual intelligence.
Tyler: I think a lot of it is empathy as well.
Ledge: Absolutely! And it’s just necessary. Good to have you today, man! I appreciate it. Best of luck with the growth of the company!
Tyler: Great! Thank you very much.