We’re all targets: Preparing for inevitable security breaches with Dan Tienes of Corporate Cost Control
Dan Tienes, CTO of Corporate Cost Control, has a unique perspective on cybersecurity: he has taken a personal interest in the well-known Equifax breach, having spent nine years working at the company after an acquisition.
With deep insight into Equifax's corporate culture, born of his time at the company, Dan has thoughtful perspectives on the failures of both technology and process that likely led to the famous breach.
It’s a complicated story with no clear answers. What we can learn is that the complexity of the modern systems we all rely upon is so great that challenges will inevitably arise. The key is in our preparedness.
Ledge: Dan, thank you for joining us. Really cool to have you on today.
Dan: Well, thank you very much for having me. I appreciate the opportunity to talk with you.
Ledge: Could you give just a couple of minute intro of yourself and your work so the audience can get to know you?
Dan: Sure. I’m the Chief Technology Officer for a company called Corporate Cost Control. We’re a small company – about 150 employees. We’re based out of Boston but we’re spread all over the country – I work out of St. Louis.
I’ve been there for about 10 years, and I’ve been in the IT industry for about 30. Prior to coming to Corporate Cost Control, I was employed at a company called TALX, which was acquired by Equifax. So I’ve worked for Equifax and I’ve worked for TALX. Obviously, when I saw the Equifax breach, like a lot of people I was shocked and troubled in terms of what that meant for all of us.
What my company does is very similar to what the Equifax workforce solutions division does, which is we deal in personally identifiable information. Needless to say, the intensity with which people look at that data, and at the processes around it, has really increased in the year since the Equifax breach.
I want to clarify right up front that, even though I worked there and I’m going to talk about that breach, I don’t have any particular insight – I never worked in the division that was hacked or anything like that. Although I did work there, I don’t have any unique knowledge of it beyond having done what a lot of other folks have done and read about what happened.
Reading the Congressional Report and the summaries, and taking a look at all of it, is pretty sobering. Obviously it was a failure of technology but, larger than that, I think it was a failure of process. That, to me, is the biggest issue here.
I was discussing this with a very good friend of mine. We’ve worked together for years and he’s a very bright guy – smarter than I am. We were talking about the Equifax breach, and we both work in similar fields. His reaction to it was, listen, there really wasn’t anything they could have done about that. They have thousands of servers and, if 5% of those servers have an issue, you’re going to have a problem.
I didn’t really argue with him at the time but, when I went home and thought about that conversation more and more, I started to realize that I really disagree with him – I think he was wrong. What happened at Equifax happened on a server. We’re not talking about cattle roaming the plains; we’re talking about pieces of hardware with IP addresses bound to them, and particularly public-facing web servers.
When I heard this guy – someone I have immense respect for, and still do, a very bright guy – say, well, there was really nothing they could do, I started to think: is this entire profession in danger of giving up?
I can tell you that I’m not, but I see that attitude developing of, there are so many data breaches – have we simply become desensitized to them? That really got me thinking.
I can tell you that, from having looked… I’ll take a little bit of a step back.
When I worked for TALX… In my career, I’ve been a part of a lot of acquisitions where we’ve acquired other companies. I’ve watched a lot of IT organizations – different groups, different cultures – try to get together. While the failure at Equifax was a result of a flaw in the Apache Struts framework, I think what it really was was a failure of process. I think that’s very controllable. Contrary to what some people may be thinking and giving up, I do think it’s very controllable.
Ledge: Tactically, as an organization, that’s huge. I resonate with that – I tend to be on your side of the fence that we shouldn’t be building things so complex and unmanageable that we can’t figure out how to manage them. I certainly quibble with your friend’s argument. I agree that you can make a system so complex and so poorly managed that these things happen, but I don’t agree that it’s inevitable. It ought not to be inevitable – and if it truly is inevitable, you shouldn’t build it in the first place.
Just talk from the ground up. How do you mitigate the risk and opportunity of these things happening?
Dan: If you look at the Congressional Report, you can tell. All of this started when someone was responsible for applying the Apache Struts fix on one of the servers running that software and, according to the report, they applied it from the root directory and not the application directory.
If you look at any big disaster there’s always one or two little things that happen. It’s never one thing that goes wrong, it’s all a bunch of things.
As a practical matter, the first thing you can do is make sure that the people applying patches to crucial servers know how to apply them. Again, it’s very easy for anyone in this field to Monday-morning-quarterback what somebody else did – and there but for the grace of God go all of us – but I do think that when you’re dealing with something like this…
Understand the scope of the issue. The Apache Struts vulnerability was a critical vulnerability. This wasn’t one of 500 things that appear on a list of potential issues. This was a major issue. I think one of the things you could do is, if you’ve got something like that that’s that crucial, that’s that critical, have a frontline person install the patch but then have some sort of an authentication process behind it.
So, Engineer A does the job, and then it’s Engineer B’s responsibility, as quality assurance, to go back in and audit the work. Now, that’s expensive and it takes time, but look at the cost of not doing it – the potentially billions of dollars in fines that Equifax may be facing.
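That second-engineer audit can be partly automated as well. A minimal sketch, assuming a hypothetical minimum patched Struts version and the common convention of embedding versions in jar filenames – this is illustrative, not Equifax’s actual process:

```python
# Minimal sketch of an automated post-patch audit step.
# Assumes library jars embed their version in the filename
# (e.g. struts2-core-2.3.32.jar) -- common, but not universal.

import re

# Hypothetical minimum version containing the fix (illustrative only).
PATCHED_VERSION = (2, 3, 32)

def jar_version(filename):
    """Extract a (major, minor, patch) tuple from a jar filename, or None."""
    m = re.search(r"struts2-core-(\d+)\.(\d+)\.(\d+)\.jar$", filename)
    if not m:
        return None
    return tuple(int(x) for x in m.groups())

def unpatched_jars(filenames):
    """Return the jars older than the minimum patched version."""
    return [f for f in filenames
            if (v := jar_version(f)) is not None and v < PATCHED_VERSION]

# Engineer B runs this against an inventory of deployed artifacts
# gathered independently of Engineer A's patch work.
deployed = ["struts2-core-2.3.31.jar", "struts2-core-2.3.32.jar",
            "commons-io-2.5.jar"]
print(unpatched_jars(deployed))  # -> ['struts2-core-2.3.31.jar']
```

The point of the sketch is the independence: the audit reads what is actually deployed, not what the patch ticket says was done.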
I think there’s an ethical issue here. Yes, it’s expensive to keep this data safe, but as IT professionals we have a responsibility to do it.
So that’s one thing you can do: yes, it is worth it to continue to invest like that.
Make sure that you have a complete inventory. In every acquisition that I’ve been a part of, the very first thing we do is we collect an inventory of what all these systems are. What they’re running. Did they have something like that at Equifax? Maybe. It’s possible. I would imagine they did. Was it being kept updated? Who knows. You have to know what’s in your environment first. That sounds obvious, but I think we’ve learned it doesn’t always happen the way it’s supposed to.
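An inventory is only useful if it stays current. One way to make that checkable is to record when each entry was last verified and flag anything stale – a minimal sketch with hypothetical fields and an arbitrary review interval:

```python
# Sketch of a staleness check for a server inventory.
# Field names (host, owner, last_verified) are illustrative assumptions.

from datetime import date

MAX_AGE_DAYS = 90  # arbitrary review interval for this example

inventory = [
    {"host": "web-01", "owner": "Engineer A", "last_verified": date(2018, 1, 15)},
    {"host": "db-01",  "owner": "Engineer B", "last_verified": date(2018, 6, 1)},
]

def stale_entries(inventory, today, max_age_days=MAX_AGE_DAYS):
    """Return entries whose last verification is older than max_age_days."""
    return [e for e in inventory
            if (today - e["last_verified"]).days > max_age_days]

for entry in stale_entries(inventory, today=date(2018, 6, 15)):
    print(f'{entry["host"]} last verified {entry["last_verified"]} '
          f'-- reassign to {entry["owner"]}')
```

Even a spreadsheet can do this; what matters is that every entry names an owner and a date, so "is the inventory up to date?" is a question with a mechanical answer.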
Ledge: I’m curious, you’ve been in this space a long time and you have your firm that’s sort of right in there, right in the mix. Have any of your assumptions, behaviors or other paradigms changed as a result of this environment? What have you done differently and learned from the massive failings of others?
Dan: Well, I can tell you that when we first found out about it I didn’t sleep a lot. You would have thought they would have had the gold standard. Honestly, my first thought was: as a smaller company, if a monstrous Fortune 500 company like that, with their resources, can’t stay safe, what can we do?
Well, what I’ve come to realize is that the fact that my organization – and I think this is true for a lot of companies – is smaller, and that our technology stack is newer, means there are tremendous advantages if you can do it. I know not a lot of people can. I know if you inherit legacy systems that’s difficult. But in my experience, if there’s any way to build something from the ground up, and to really focus on those systems that you know are ancient and try to get those things upgraded sooner rather than later…
Again, it all sounds very practical and very obvious but, if it really were, a lot of these things wouldn’t be happening. The system that was breached at Equifax reportedly dated from the ’70s. Clearly, that had to have been on somebody’s project list somewhere, at some point.
What I’m saying is, those legacy systems are much more dangerous than you think they are. When you’re prioritizing from a security perspective, at least having those systems identified – as something to replace, or to make absolutely sure is patched – is really important.
Ledge: We’ve all been in an environment where – let’s be honest – a cost-conscious organization does not, from the top, fund technical debt remediation.
Certainly you’re in a place where you can control that, but everybody has a boss and we’re all under cost pressure. You can certainly see how the wrong cultural and KPI implementation could be pushed down the chain, where it’s just, well, we’re never going to fix that.
Dan: Right. Again, I realize my organization is smaller, so it’s very easy for me to do some of these things because I don’t have as much to deal with. But I think what’s important there is that there needs to be one person accountable.
One of the advantages that I have is that I don’t have five people above me and ten people below me that are responsible for this stuff. Within every organization, there needs to be that one person who is ultimately responsible. It’s so easy to have silos. It’s so easy to have your legal department have one group of responsibility, and your IT department and your networking, because there’s all these different departments. Identifying that, and making sure that there’s one person that all of this stuff rolls up to is crucial.
Here’s my theory. My theory is that there were probably two or three people at Equifax that knew that they had a potential issue with the server, and everybody probably thought that the other department or the other group was going to take care of it. That’s been my experience with watching organizations grow, especially through acquisition. It’s very, very easy to lose that thread of responsibility. It’s very easy to not know who’s responsible for something.
One of the things you could do, and one of the things I’ve done is, as soon as you find out about a major… Imagine you’re a Microsoft shop and you find out about a major issue in Windows Server 2008 tomorrow, or 2016 tomorrow. Think to yourself, who’s responsible for that? Don’t just ask what department – what’s the name of the person who’s responsible for that in my organization? Or, what’s the name of that person’s manager? Who manages that group? Really walk through it. Okay, how am I going to make sure that they do their job?
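That walk-through – advisory to system to named owner to manager – can even be kept as data, so "who fixes this?" is answerable in seconds. A toy sketch, where every name and platform is a hypothetical placeholder:

```python
# Toy ownership map: for each platform, the named person responsible
# and their manager. All names here are hypothetical placeholders.

owners = {
    "Windows Server 2008": {"owner": "J. Smith", "manager": "K. Jones"},
    "Windows Server 2016": {"owner": "A. Lee",   "manager": "K. Jones"},
}

def who_fixes(platform):
    """Answer 'who is responsible?' for an advisory against a platform."""
    entry = owners.get(platform)
    if entry is None:
        # The dangerous case: nobody owns it. Surface that loudly.
        return f"NO NAMED OWNER for {platform} -- assign one today"
    return f'{entry["owner"]} (manager: {entry["manager"]})'

print(who_fixes("Windows Server 2008"))  # -> J. Smith (manager: K. Jones)
print(who_fixes("Apache Struts"))        # -> NO NAMED OWNER ...
```

The interesting output is the second one: the gap where a responsibility thread has been lost is exactly what Dan describes happening through acquisitions.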
We’ve got all this defense in depth, and you were talking before about companies that are cost-conscious. I think a lot of times we do spend a lot of money on defense in depth, but we don’t fully utilize it.
One of the things they found at Equifax was that the system that was supposed to be monitoring network traffic had an expired digital certificate. Once they realized that and installed a new certificate, the lights went on and everybody saw what was happening. Expired certificates are a tangible thing you can identify and fix.
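Certificate expiry can also be watched for mechanically. A minimal sketch using Python’s standard library to compute days until a certificate’s notAfter date – the date string here is made up for illustration:

```python
# Sketch: compute days until a certificate expires, given its notAfter
# field in the format the Python ssl module reports (e.g. from getpeercert).

import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Days from `now` until the cert's notAfter date (negative = expired)."""
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                     tz=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# Hypothetical notAfter string for illustration.
print(days_until_expiry("Jun  1 12:00:00 2019 GMT",
                        now=datetime(2019, 5, 1, tzinfo=timezone.utc)))  # -> 31
```

Run against an inventory of certificates on a schedule, anything below some threshold becomes a ticket with a named owner rather than a silent monitoring outage.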
These are incredibly complex systems, and the nature of any complex system is that the more you add to it, the worse it gets. Again, that’s no excuse for us to say, “Oh well, that’s the way it is.” We can’t do that.
Ledge: I wonder if there wasn’t even a place where people said, you know, there are a hundred thousand things I’m responsible for, my management doesn’t listen to me, and even if I brought this up it wouldn’t get priority. So, in fact, someone does know – it’s potentially poor leadership that’s de-incentivizing or demotivating the people who are on the ground there in a data center.
How do you avoid that? The unknown unknown of the cultural detritus is really where that damage happens. The bigger you grow, the more possibility there is of management just being out of touch.
Dan: Yeah, it’s difficult. It really is. Like you say, ultimately it is a management problem. It’s a problem of making sure that the folks in your organization…
I know your folks are probably used to talking about technology and I’m dealing more with this kind of nebulous notion of management and making folks feel like a team, but it is possible to do that. It is possible to convey to even the frontline person sitting in the data center that what they do matters, that what they’re doing is important, and that they should feel comfortable speaking up.
Again, that’s why I said I have no internal knowledge of what happened, but I have observed dysfunctional organizations in past experiences at past places. Ultimately, I think it’s a failure of communication. Yes, the person installed the patch wrong and the software failed, but the breach still could have been caught if they’d had a good certificate inventory.
It’s not just a technology solution. I think we get hung up on that. We get hung up on using technology to solve everything. It is ultimately about the people involved. Again, that sounds like a very nebulous platitude, but I wouldn’t be surprised if a lot of people have forgotten about that.
Ledge: Well, let’s finish up with an action-item mandate from Dan: CTOs, get out there today with your frontline people and do what?
Dan: Let them know that what they’re doing matters. Let them know that having a data breach in your organization is not acceptable. Let them know that you want to know if there are things that they know aren’t getting done, and have them tell you about that.
That happens all the time: “We want to maintain open communication.” Don’t just say you want to maintain open communication – let there be open communication. My guess is that the folks working in the data center probably have a better picture of everything that’s going on in your organization than anyone else. They’re the ones seeing the entire process from the beginning. So don’t just pay lip service to being an open organization; be one. When someone comes to you with a concern, take action on it, take ownership of it, and make sure they know it’s been resolved.
That’s not tech. A lot of us in the tech field are comfortable with the tech. This is more of a soft-skills, people issue, but I think it’s as important as everything else.
There’s a whole other item on this that I hadn’t really talked about, and you may have some other questions, but one of the other questions that I have about this that concerns me is, where were the auditors in all of this?
We know that Equifax had an ISO certification and we know that they passed it, and clearly there were issues there. Veering off here a little bit, I think that raises the larger question: we have these systems that we want to make sure are secure, and we have independent third-party auditors that we pay to go in and provide us with assurances that they’re secure. How did this happen? How did 145 million records get breached under an International Organization for Standardization audit?
That’s probably a whole other topic for a whole other conversation, but I think it’s too easy to rely on, “Oh, the auditors will catch it. We go through this every year.” You might, but that doesn’t mean they’re catching everything.
Ledge: Yeah. That’s sort of the doomsday scenario, but you hear a lot of that in regulations in general. This isn’t a failsafe system and we really ought to be personally responsible for each thing that we touch. If you can instill that culture up and down the chain, at least you have a chance to keep it in-house.
Dan: Right. Exactly. It can be done, it’s just a difficult thing. Like I said, I think particularly in the technology field, it is not in most technologists’ nature to take that kind of hands-on management approach. Maybe you don’t want to do it but maybe find somebody that you trust that can do it and have them go out as your spokesperson.
It’s folks that are actually doing the work that are going to know where your dangers lie. If you can talk to them now, you can avoid having to read about their testimony in front of Congress – is my take on it.
Ledge: Let me take a quick pivot for a last question here. You work in a remote organization – you said that you’re fully distributed. Obviously, we’re heavily embedded in thinking about that: engineering and IT as a distributed organization.
Best practices that you’ve seen to make that successful?
Dan: Make sure that your security policy spells out everything that that remote employee is allowed to do. That’s the approach that we think we have. The folks that are responsible for provisioning our machines, getting them out into our environment, have a very, very thorough checklist of the things that they go through.
Make sure that, if you’ve got home workers, you really look at the systems those home workers have access to – that’s one of the approaches we’ve taken. Really, it applies to anyone who’s not within the organization, within a corporate office where you can control things a little bit more.
I look at it as, does this user have access to data through a fire hose or a faucet? I think if you look at your organization, there’s probably some people who have the ability to access massive amounts of data and turn on that fire hose and spray a lot of information into a file. You want to try to avoid that wherever you can and instead focus on getting people faucets so they can just get to the information that they need.
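The fire-hose/faucet distinction can be enforced at the access layer as a per-request record cap. A toy sketch – the roles, cap, and function names are illustrative assumptions, not any particular product’s API:

```python
# Toy access-layer check: ordinary users get a "faucet" (small result
# caps); bulk "fire hose" pulls require an explicit entitlement.
# Roles, cap size, and names here are illustrative assumptions.

FAUCET_CAP = 100  # max records per request for ordinary users

def enforce_cap(requested_records, user_roles):
    """Return the number of records the caller may actually receive."""
    if "bulk-export" in user_roles:
        return requested_records               # fire hose, explicitly granted
    return min(requested_records, FAUCET_CAP)  # faucet for everyone else

print(enforce_cap(1_000_000, {"analyst"}))      # -> 100
print(enforce_cap(1_000_000, {"bulk-export"}))  # -> 1000000
```

The design point is that the fire hose is a deliberate, auditable grant rather than the default, so a compromised ordinary account can only leak a trickle.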
It comes down to having a very tight security policy. It comes down to doing regular security training and, quite frankly, scaring people a little bit in terms of letting them know what hackers are capable of and the dangers that are inherent in their thermostat – their Internet of Things thermostat. “What’s the Wi-Fi password on that one? Have you changed it?”
Just continue to inform people and, like I said in the beginning, don’t give up. We have to keep fighting this fight. We have to make sure that everybody in the organization, regardless of what department they’re in, is interested in fighting and, quite honestly, believes that they can win. I think you can win. I know you can win. You just have to be diligent.
Ledge: Great insights, Dan. I appreciate the finish on the call to arms there. Great to have you on.
Dan: Alright. Thanks a lot. Thank you very much. Bye-bye.