Proactive Security with Eric Higgins

Resources:

Welcome to the podcast. Our show is a conversation with experienced software engineers where we discuss new technology, share career advice, and help you be amazing at work.

I’m Nate Murray, I’m Amelia Wattenberger, and today we're talking with ex-Google engineer Eric Higgins, who is the founder of Brindle Consulting and co-author of the book Security from Zero.

https://www.brindleco.com/

In this episode we talk about how to think about security as a developer and how to take the responsibility we have seriously. We talk about how to take a preventative and proactive approach to your security, and that means we cover:

  • How to deal with extortion threats by having a bug bounty program
  • How to think about automation tools when it comes to security
  • What resources you should read if you want to get better at security
  • How much does a web developer need to know about security, really?

Eric has worked in security for a long time and he does a great job at being pragmatic to make sure the security goals are in line with the business goals. Amelia and I really enjoyed our conversation with Eric and I'm sure you will, too. Let's get started.

Eric Higgins Podcast

Nate: [00:00:00] All right. So Eric, welcome to the show.

Eric: Thanks for having me, Nate.

Nate: Your company is Brindle Consulting, so tell us about it.

Eric: [00:00:07] Brindle Consulting. I basically help my clients who work in the tech sector and have customers. They're profitable, but they've avoided working on security for a little bit too long, and now they are finally starting to realize that they have some problems that they need to address, and it's becoming overwhelming. So I help them create a very practical security program so they can start to address these things, so that they stop feeling like they're reacting to all this stuff and start taking some proactive approaches.

Nate: [00:00:37] What kind of stage of company are we talking about here?

On Bug Bounties

Eric: [00:00:39] The types and stages of clients that I thought I would get are very different from what I've actually had to work with.

Here's the common denominator in all these cases.

Usually they'll have a security@mycompany.com email address set up where people can report security issues, and they'll inevitably start to receive these emails from "security researchers." I'm quoting here, "security researchers," and it's usually people who are running scripts that look for common vulnerabilities against somebody's website.

They're basically trying to extort these companies for money because the companies don't have a bug bounty program in place. And what that really means is that they don't have a policy in place that says: for these types of vulnerabilities that you report to us responsibly, this is how much we pay, right? And these are the rules by which this game is played.

So they start to get overwhelmed because they constantly get hit by all these emails from these researchers. It gets to the point where the individuals who are responding to all these security related issues start to realize that they can't get any of their normal work done because they're just buried in all these security related requests. And like any other position at any other company, you need somebody to be doing this stuff so that you're not the one doing it. So then they come to me and they say: how do we avoid this problem? Maybe we're not at the stage yet where we can hire somebody to work on security full time, for a variety of reasons, but maybe we can do some things to make sure that we don't feel like we're buried in this work and we're not constantly getting distracted from working on our product, while still making sure that we maintain a certain level of security and know how to respond when these things come up.
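One lightweight way to publish the kind of policy Eric is describing is a security.txt file served at /.well-known/security.txt on your site (this is the RFC 9116 convention, not something named in the episode). A minimal sketch, with placeholder URLs and dates:

    Contact: mailto:security@mycompany.com
    Policy: https://mycompany.com/responsible-disclosure
    Expires: 2026-12-31T23:59:59.000Z
    Preferred-Languages: en

The Policy URL is where you would spell out what's in scope, how reports should be written, and what you pay for each class of issue.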

Nate: [00:02:26]

Yeah. I want to talk about the bug bounty programs a little bit. So going back, you're saying you used air quotes around security researchers.

The implication is they're maybe not really researchers. What's the idea: that they're using automated scripts or something to find these vulnerabilities and they're just trying to collect bounties? Or are they actually saying, we found a security hole and we're going to exploit it if you don't pay us a ransom? What are you implying here?

Eric: [00:02:49] So it's a little bit more of the former, though I guess there's a hint of the latter in there. Here's what I really mean by this. Not to admonish anyone, because I know that there are a lot of real security researchers out there who play by the rules, but there's a certain class of individuals, and there seems to be a network of them, who tend to come from, like, third world countries where they have internet access and they're just looking for some way to make money. Right. So, you know, it's a noble cause, I suppose, but they specifically seem to target companies that don't have a published responsible disclosure policy.

Responsible disclosure is really the umbrella term for what a bug bounty policy, or a bug bounty program, is. It's a way to report security issues to a company in a responsible way, where the opposite would be that they just publish about it on Reddit or Hacker News or a blog or something and make it public to the world without telling you first.

Right? So the old school mentality or approach that a lot of companies used to take was: if you reported security issues to us, we would assume that you were a hacker and we would start to litigate against you, right? We would take you to court and sue you because you're hacking us. That approach doesn't really work, it's stupid, and nobody should do it, and I have a firm position on that.

Instead, the way that the landscape has shifted is that now there are actually companies in existence that will help you create and run a bug bounty program, which is an incentivized responsible disclosure program. That's what a bug bounty is. And you basically say: for this class of security issues, we'll pay you X amount of dollars.

So just to give you some examples: Google, I think, has different classes of bounties that they'll pay out, and I think the highest is something like $100,000, and that's if you can find a major security issue in, like, the Chrome browser or the Android operating system for their phones.

Right? So there's a very high level of payout for a very deeply technical and widely exploitable type of security issue. More commonly, for the class of companies that I work with, they'll have some kind of web application, and it will be vulnerable to SQL injection or something else. It's relatively common that these, I would say, lower tier security researchers are looking for all this low hanging fruit that they can run some kind of software against, scan for these things, and find them pretty quickly.
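Since SQL injection comes up here as the classic low hanging fruit that these scanners look for, here is a minimal sketch of the difference between an injectable query and a parameterized one, using Python's built-in sqlite3 module; the table and the malicious input are made up for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (email TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice@example.com', 1)")

    user_input = "' OR '1'='1"  # the kind of payload a scanner submits

    # Vulnerable: the input is spliced straight into the SQL string,
    # so the OR '1'='1' clause matches every row in the table.
    vulnerable = f"SELECT * FROM users WHERE email = '{user_input}'"
    print(conn.execute(vulnerable).fetchall())  # leaks the row

    # Safer: a parameterized query treats the input as a literal value,
    # not as SQL, so the injection attempt matches nothing.
    safe = "SELECT * FROM users WHERE email = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # []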

Then they contact the companies by email and say: hey, I found all these issues, I would love to get paid for my work. So there are, I guess, a few problems with this. The first is that they're not really doing a lot of work, right? They're bringing it to your attention, which is great.

But, and this has happened to a number of my clients, as soon as you pay out to one of them, they tell all of their friends and their network that this company pays out. Then you start to get inundated, you get piled on with all these security reports, and they may have run the scan once and are sharing all these different security issues with their friends so they can all kind of get paid.

So it's a little problematic, and it's problematic because the companies haven't said: these are the rules and here's what we're willing to pay. So when it comes time to reward these researchers who are reporting these issues, they don't have any guidelines to follow to say: this is how much we're going to pay for this type of vulnerability, or this type of vulnerability is out of scope, or you can't stalk our employees on Facebook or LinkedIn and try to extort us for a higher payment because you disagree with the reward. There's no written policy to say these are what the rules are and these are what the payments are. That's kind of where they get stuck, right? Not having the policy in place is really the key driver of this, and these researchers, the air quote "researchers," are starting to target those kinds of companies because they know that they can get payment and kind of extort them for a little bit more.

Nate: [00:06:21] What are the classes or tiers for the types of bugs that people typically pay out for? And also, who gets to decide? Does the company just get to decide somewhat arbitrarily and say, like we said, if you find SQL injection we'll pay out, you know, $1,000? Or are there many cases where it's ambiguous what the actual vulnerability was?

Eric: [00:06:40] I would say it used to be more ambiguous than it is now, because bug bounty programs are much more prolific than they used to be. It's become almost standardized to say: for this class of vulnerability, this is the payment tier we're going to pay out. So here's the common case. It is set by the companies.

To answer your direct question, the payment tiers are set by the company and usually go along with what stage they're at and what their financials look like. So they'll set some kind of a budget for the year to say: this is the max we want to pay for security issues through this bug bounty program for the year.

So let's say it's, I don't know, $10,000 or $30,000, whatever it happens to be; it's usually pretty low, around that ballpark. So then they can say, well, based on our priors, however many reports we've had from these researchers, we expect in the first year to get maybe twice that many, because now we're publishing that this thing's available.

So we'll expect to see more. For a specific type of vulnerability, let's say it's low hanging fruit: you're using an older version of some JavaScript library that maybe has some kind of weird vulnerability in it, or a vulnerability in one of its dependencies, or something like that, but the effects of that aren't very great.

Like, the impact to your web application isn't great. It just happens to be, oh, this isn't a best practice, the threat level is pretty low, so thanks for reporting it, here's like $100. So the lower end is usually maybe a hundred bucks, something like that, maybe $50; it all depends on what the company decides to set.

At the higher end, the types of security vulnerabilities that the companies are looking for are things like remote code execution: you can fill out some form on our web application and somehow run code on our server that we didn't expect you to run. Or you can somehow access everything that's in our database that you're not supposed to be able to access.

The classes of security issues are fairly well documented. There are, you know, five or six general categories they fall into, but it's really the level of impact that the security vulnerability being reported has, and whether or not it can be reproduced, and whether it's well documented, and all of these things kind of play into whether or not it's actually granted as a true vulnerability or a valid report.

So the level of impact that the security issue being reported has usually ties directly to the level of payment. So, you know, for a company that's first starting off, I usually recommend a couple of things. First, set your payment sizes pretty low, especially for the first year, because you're going to get a ton of low hanging fruit and you're not going to want to pay like $10,000 per whatever weird JavaScript vulnerability that is relatively low impact.

So keep it pretty low for the first couple of years. And then the second piece of advice I usually give, that they don't always follow, is to use a managed bug bounty program. And what that means is you pay these companies who provide the software. I'll use GitHub as an analogy: in this scenario they're like GitHub, they're offering the software that hosts this bug bounty program. So that's where the security reports go, and are listed, and are managed by your teams. A managed bug bounty program is where that company also provides their own employees to review and triage the tickets and make sure that they're written in the proper format, they're reproducible, and all these things, before they actually come to your teams.

That really helps to reduce the amount of noise, because especially at the very beginning, when you go public with your bug bounty program, you tend to get a really, really poor signal to noise ratio, and you want to try and improve that. So I usually say: set the caps pretty low, make sure it's managed for the first year because you're going to have to manage all the noise, and then as time goes on you start to increase your budget, you increase the tiers, you can increase the scope, and if you hire people who can manage this thing, then maybe you don't have to pay that company whatever they charge to manage it for you.
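To make the tier idea concrete, here is a minimal sketch of what a starting payout table might look like as data; the severity labels follow the common low/medium/high/critical buckets, but every dollar figure and example here is an illustrative assumption, not a recommendation from the episode:

    # Hypothetical first-year bounty table for a small company.
    BOUNTY_TIERS = {
        "low": (50, 100),             # e.g. outdated JS dependency with minimal impact
        "medium": (250, 500),         # e.g. a minor issue on a low-traffic page
        "high": (1_000, 2_500),       # e.g. SQL injection exposing customer data
        "critical": (5_000, 10_000),  # e.g. remote code execution
    }

    ANNUAL_BUDGET = 30_000  # cap total spending for the year, as Eric suggests

    def payout(severity: str, spent_so_far: int) -> int:
        """Pick the low end of the tier and never exceed the annual budget."""
        low, _high = BOUNTY_TIERS[severity]
        return min(low, max(ANNUAL_BUDGET - spent_so_far, 0))

    print(payout("high", spent_so_far=29_500))  # -> 500, budget nearly exhausted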

Managed Bug Bounty Programs

Nate: [00:10:19] What are the major players in that space? Who are the companies that are maybe the defaults to go to?

Eric: [00:10:25] The two main ones right now, at the time of recording, are HackerOne, and the other is called Bugcrowd. For all intents and purposes, they offer nearly the exact same services. Their marketing material and their sales teams will tell you that there are slight differences between them, and there are some differences.

There are differences in the types of integrations their software provides. They'll tell you that there's a different number of security researchers on their platform, and in a lot of ways it's very similar to Uber versus Lyft. HackerOne was first, in the same way that Uber was first and Lyft came later.

The same is true here: Bugcrowd came later. And also in that same way, I would say that based on my experience, HackerOne is a little bit more aggressive with their sales and marketing techniques, in the same way that Uber is a little bit more aggressive with its sales and marketing techniques. That being said, I've worked successfully with both of these companies.

I'm not trying to bash any of them by making a negative correlation between any of these companies, based on, you know, whatever your predilections happen to be about Uber and Lyft. So those are the main players. Now, interestingly, in my previous role at Optimizely, we used a company called Cobalt, who also offered a bug bounty program as a software package, like as a service.

And recently, when I reached out to them to see if they're still doing this, they have transitioned away from that model and more towards almost an automated model where they scan your systems from the inside and try to look for these vulnerabilities. At least that's the way I'm remembering my understanding of it.

It seemed kind of complicated and expensive when I talked to them. Maybe it's a great product, but it was interesting that they had completely pivoted away from the previous model, where they were kind of competing with HackerOne and Bugcrowd, to something that's completely new.

The Role of Automation in the Future of Cyber Security

Amelia: [00:12:01] How much of this space do you think is going to be automated in the next five to ten years?

Eric: [00:12:06] So my background, I should clarify, is as a software developer, so I try to think of the question of automation in terms of a software developer: what's possible to automate, and what should be automated? This is actually a really interesting question, because I've started to see in the last couple of years a lot more tools that offer automation for all these kinds of problems. The security space is just one aspect of this, and I'm sure that by next year we're going to have all kinds of crazy blockchain, distributed, Kubernetes, AI driven security tools out there trying to sell us products.

Whether or not they work, I think, is a different question. If you think about the last few years, there was this huge push of: oh man, machine learning is going to solve all these problems for us, AI is going to solve all these problems for us. And then a couple of years later people start to realize: oh, machine learning is a cool tool for a specific set of problems, like finding patterns and making sure that you're grouping things into the same kind of pattern.

The same is true for AI. There are certain things it's really good at, but it is not general AI. It doesn't do all of the things that a human being can do very simply. So you have to kind of back away from that, and we've started to see people backing away from these very grandiose claims about what those things can do or what they're capable of.

So I think, to answer your question, the same is really true as it currently stands for security software. There are a lot of companies offering crazy AI driven, automated tools to do all these things, but whether or not they actually do the things they say is, I think, a different question.

And it's really up to the companies buying them whether or not they want to go through a pilot program and see if it works for them, and what they're willing to pay for it. I think, fundamentally, the question for any kind of software as a service comes down to: what am I paying for this, and how does that correlate to the number of employees I would normally have to hire to do that job?

Right? Are they automating something that is easy to automate, that we could do ourselves? Is it, you know, just trying to match patterns that we know, where we could just add a filter to Splunk or whatever logging software we have? Or are they doing something more advanced, where we would have to build out a huge, crazy, complex platform to ingest all this data and then run a bunch of code against it to find weird patterns that we would not normally see?

How much time does that save us, not only initially but also over the long term? And I think the same answer is true for a lot of software as a service. If you're going to charge a company $30,000 a year for software, but it would cost them an engineer per year to do that same job, an engineer is going to cost them $100,000 or more.

So they're saving a ton of money by using the software instead of hiring an engineer to do that job. Maybe that's a roundabout way of answering your question, but that's the way that I think about these things. I don't have a lot of firsthand experience with a lot of the newer automation tools that are coming out.

Maybe they're great, maybe they're junk. I haven't seen evidence in either direction yet, but my gut reaction, and maybe I've just worked in security too long, is to be a little bit skeptical. I'm usually pretty skeptical about what they're offering, whether or not it's worth the price they're asking.

How You Detect Hacks

Nate: [00:15:03] You mentioned using Splunk to track logs and to find abnormal behavior. One of the things that I've noticed when I've seen blog posts about security incidents: they might say, you know, we had an employee whose admin panel password was hacked, and the attacker had access to all these accounts for, like, three days, and we were able to track them down and shut them down. What tools would you use to actually detect that? Because for pretty much every company I've worked for, if a hacker got access to an admin's password, no one would ever know, like, ever. We would never find out that that had happened.

So like what tools and processes and monitoring do you put in place to catch something like that?

Eric: [00:15:46] You've opened a really interesting can of worms, and here's why. The question that you asked is: how do you detect this? In the question itself, you're already telling me that it's too late, because it's already happened.

So this is really the type of thing that I focus on with my clients: how do you prevent this from happening? How do you make sure this doesn't happen? Because if you can make sure that it never happens, or is nearly impossible, or is such a great burden for an attacker to go after that approach, they're going to do something else. They're going to do something that's easier instead. So then maybe you don't need a crazy monitoring solution for this kind of hard problem in place, because even if you had it, by the time you know, it's too late. They already have it. Right? How long would it take them, if they had admin access to your systems, to copy all that data?

Right? Even if you can shut them out, maybe it took them 30 seconds, and it took 30 seconds just for you to get that email and read it, and they already have your data, so it's too late. So I would rephrase the thought process to: how do you prevent these things from happening in the first place, so that you don't have to worry about, oh my God, what are we going to do if this happens?

Because that's a much harder problem. So I focus on the easy thing, and the easy thing is: how do you keep people out of the admin panels of your systems that have access to everything? And just as a preface, I want to point out that a lot of my clients and a lot of people I talk to tend to think that attackers are going to try to go through your web application or your mobile application to hack your company.

But that's a pretty limited view, and I think threat actors in the space have already started to realize this. So the targets they're choosing instead are developer machines or developer systems. If you have Jenkins running all your CI/CD, your continuous integration and continuous deployment, that system probably has keys for all of your servers and keys for all your source code.

It probably has read/write access to your database. It probably has admin level access to everything so that it isn't blocked. So that's a really ripe target. And usually when people set up these systems, they just set them up to the point where they're working, but not necessarily secure. Right? They're just left with admin access to everything.

It's very common. And that's the realization I had when I started consulting: everybody has the same problems, everybody's making the same mistakes. So there's a pattern that's pretty easy to solve for, and it's really just a matter of education.

So that's kind of what I focus on. Getting back to the question of how you prevent this from happening: there are a variety of ways. The easy one that I usually recommend is, for anything that requires admin level access, I review who has access to it with my clients and ask: do all of these people actually need admin level access on a day to day basis for their jobs?

Often the answer's no. Right? There might be a couple of people who day to day need admin level access to do their jobs in whatever that system happens to be. Everyone else can get a lower level of permission, not admin; they don't need read/write access to literally everything.

So that's the first thing I focus on: who has access to this? This model is called least privilege: you're offering the least privilege to most people by default. That one comes up a lot. And then the second thing is, for the people who do have admin access, or any access at all, can you enable some type of multi-factor auth, like two factor auth using, you know, Google Authenticator on your phone, or a YubiKey, or some other kind of system, to make sure that even if your admin password was published on the internet, nobody could really do anything with it?

There's something else preventing them from logging in. Just those two things, limiting who has admin level access to systems and enabling multi-factor auth, get you most of the way there. You're almost to the point where it is a really hard target now to get into those systems. Now, I could kind of fearmonger you and say, well, you know, it's an admin level system and the person who's the admin is kind of sloppy, and they set up SMS for their two factor auth instead of, you know, an authenticator app, and maybe their phone number gets spoofed, and now it's possible it's compromised. There are all these weird ways to kind of work around these things, but it's a much higher bar than it was before, where maybe a laptop got stolen, right? Or somebody just looked over their shoulder and saw them log in, or, you know, maybe they sniffed their cookie or something like that, and now they have access to this system. So it raises the bar for that kind of thing just by putting these preventative measures in place.
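As a concrete sketch of the kind of access review Eric describes, here is roughly what flagging users without MFA might look like on AWS with the boto3 library; AWS is an illustrative assumption here, not something prescribed in the episode, and the same audit applies to whatever identity system you actually use:

    import boto3  # assumes AWS credentials are already configured

    iam = boto3.client("iam")

    # Walk every IAM user and flag the ones with no MFA device attached.
    users_without_mfa = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            mfa_devices = iam.list_mfa_devices(UserName=name)["MFADevices"]
            if not mfa_devices:
                users_without_mfa.append(name)

    # These are the accounts to fix first (or downgrade to least privilege).
    for name in users_without_mfa:
        print(f"no MFA: {name}")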

But to answer your question more directly, which was how do you know about these things after the fact: normally, any system that has an admin panel, not always, but often, will offer some kind of auditing system. Any time an administrator logs in, it will keep a separate log of all the actions of that person.

So: who logged in, where did they log in from, like where in the world, what was their IP address, what actions did they perform? If you're talking about the AWS console or the Google Cloud console, they usually offer this kind of system. I think Splunk does as well. So this gives you a couple of things: you've raised the bar for getting access to these admin systems, you've made it much harder to get in, and you also have some kind of system in place to say, for anybody that's in there, what are they doing?

Do we have some kind of record of what actions have been performed, so that we know that if somehow one of those logins were compromised, we have some record of what actually took place after that happened? So hopefully that helps to answer your question and gives you a little bit more insight into the kinds of things I'd be looking for.
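For the AWS case Eric mentions, the audit trail he's describing is CloudTrail, and pulling recent console sign-ins out of it with boto3 looks roughly like this; a minimal sketch, assuming CloudTrail is enabled and credentials are configured:

    import json
    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Look up recent AWS Management Console sign-in events.
    response = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeName": "EventName", "AttributeValue": "ConsoleLogin"}
        ],
        MaxResults=50,
    )

    for event in response["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        who = (detail.get("userIdentity") or {}).get("arn", "unknown")
        source_ip = detail.get("sourceIPAddress", "unknown")
        outcome = (detail.get("responseElements") or {}).get("ConsoleLogin", "unknown")
        print(event["EventTime"], who, source_ip, outcome)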

Amelia: [00:21:03] It kinda sounds like there is no incentive for actually monitoring the logs. Like you can only get bad news that you can't act on. So if I were a little bit more nefarious, I might just never look at them.

Eric: [00:21:16] That's a good point to bring up. I would say that it depends on the logs you're talking about. If we're talking about the audit logs, then for any action that's captured in there, we assume that it's somebody internal to our company, somebody who should have access, except for the worst case, where maybe somebody who shouldn't have access does have access and they're doing something weird. Right? So the audit logs are definitely going to be reactive in that case.

But if you think about logging more generally, like your server logs for your website, that actually can be a good leading indicator that an attack might be happening or that somebody is probing your systems. You can look for all kinds of interesting things in your logs. You can look at the login page, all the login URLs, and see: is one IP address trying a bunch of different logins and failing? Are they trying one login and it's constantly failing, like they're trying to brute force somebody's password?

So there are a lot of things you can do to get a signal that an attack is happening, and understand what they're trying to attack in your systems, just by looking at the logs, without actually having been compromised. In that more general case, you're getting a leading indicator instead of a trailing indicator.

So I think both of the things you said are true, but I think it depends on the system that you're talking about.
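Here is a minimal sketch of the kind of leading-indicator check Eric describes, scanning a web server access log for IPs with many failed login attempts; the log format, file name, and threshold are assumptions, so adjust them to whatever your server actually emits:

    from collections import Counter

    FAILED_STATUSES = {"401", "403"}
    THRESHOLD = 20  # flag an IP after this many failed login attempts

    failures = Counter()

    # Assumes a common log format: ip - - [time] "METHOD /path HTTP/1.1" status size
    with open("access.log") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 9:
                continue
            ip, path, status = parts[0], parts[6], parts[8]
            if path.startswith("/login") and status in FAILED_STATUSES:
                failures[ip] += 1

    # IPs hammering the login page are worth a closer look (or a rate limit).
    for ip, count in failures.most_common():
        if count >= THRESHOLD:
            print(f"possible brute force from {ip}: {count} failed logins")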

Nate: [00:22:32] Looking at the logs after an attack is sort of reactive. Taking more proactive steps, like making sure that you review who has admin access and you review that everyone has two factor auth, that's more of a proactive approach. Would that fall under the more general umbrella of threat modeling? We're looking at this and saying, okay, how could we get attacked? We could be attacked by one of our employees losing their credentials somehow, or leaking their credentials. What are some of the other things we might look at to have a process to prevent things before they happen?

Eric: [00:23:02] Oh, this is a great question, and I'm glad that you brought up threat modeling. Threat modeling is an exercise, and it's by far one of the best values I do with my clients for getting the most information in the least amount of time. So I want to try to explain threat modeling to help you wrap your head around it and give you some examples of what it is and what it means.

It'll help you answer this question for yourself, and shape the way you think about the general problem, which is: we have this system in place, what could possibly go wrong with it? That's the shortest version of what threat modeling is. So here's the simplest real world example of threat modeling, and it's something that people do on a day to day basis.

You want to go out from work and meet up with a friend for lunch. There are a lot of considerations that your mind processes before you actually go out to lunch. Is it raining? Do I need to take an umbrella? The thing that could go wrong is: am I going to get rained on?

The preventative measures: do I need a rain jacket, do I need an umbrella? Or another thing that comes up is: are there any dietary restrictions I have to keep in mind, for myself or the person I'm going out to lunch with? If there are, how are we going to resolve that? Where are we going to choose to go to lunch?

Do I have to be back at exactly one o'clock for a meeting with my boss where I can't be late, so we can't go somewhere that's too far away? So this is the process by which you think about problems, think about all the different things that could possibly go wrong, and then come up with different ways of solving them so that you avoid as many of those problems as possible.

Now, one of the risks is that you don't want to get caught in analysis paralysis, where you think: oh my God, whatever we're trying to do, there are so many things that could go wrong, maybe we should just not do it.

My advice is that the approach you take is usually the one that has the most pros and the fewest cons, rather than no cons, because otherwise you'll just never get anything done. For threat modeling as it applies to security and software, let's think about something that's a little bit more practical than going out to lunch.

So let's say we have, I don't know, a basic software system. There's some kind of web application running in production, it has a database somewhere, and we want to ask: what could possibly go wrong with this? How could it be compromised or abused?

The approach for this exercise is: you get a bunch of people who work at your company or work with you on this project in the same room together, and you just have that conversation. I usually start with a different question, though. I ask: what are the biggest risks for our company? What could really just ruin us or put us in a bad situation?

And it might be: we'd get a lot of really bad PR. At that point we would really struggle to recover, because we'd lose a lot of customer trust, we'd end up on the front page of the New York Times with all this negative press, it would really hurt our brand, and customers would go to our competitors instead. So that might be the worst case scenario.

So then: how would an attacker compromise you in such a way that it would make for really bad PR? Or it could be: we have a lot of really sensitive data in the database, maybe it's personally identifiable information, PII, like people's credit card numbers, or their addresses along with their names and email addresses.

We cannot allow this data to get leaked, because, again, our customers wouldn't trust us; it would be a huge problem for our company going forward, and that's a hard problem to recover from. Once the data's out on the internet, you can't un-leak information. That's a hard one to recover from.

So you really have to think about how you can prevent it in the first place. Similar to the problem of admin credentials: after it's done, it's too late, right? You have to solve these problems before it's too late. So just to give you a third one, maybe the worst case scenario for our company is financial. What would happen if these attackers got access to our bank accounts? Or, you know, maybe we deal in Bitcoin; what if people could somehow compromise our system and steal everybody's money? So that would also be problematic. So there are all these different scenarios you could think about that would be worst case scenarios, or maybe not necessarily worst case, total disaster, but hard-to-recover-from problems.

And then you think about what ways an attacker or a malicious actor would achieve that goal, based on everything that we know about how our systems and our software work today. And sometimes this goes beyond, as I said before, just your web application or just how your database works.

It could be something much more sinister. Maybe you have a bunch of laptops, and people work from home and work from cafes, like, you know, Starbucks, and a laptop gets stolen, and that laptop belonged to a cofounder, and the drive wasn't encrypted. Right? So now an attacker who stole this laptop, maybe they didn't know what they stole, but they find out, and now they have access to the bank account information and the login for your company's bank. It's a bad scenario. So how do you prevent that kind of thing from happening? Or maybe something else: how would they get admin access to one of our systems? Can we prevent that in some way?

So it's this way of thinking about the problem and preventing it in advance. And then once you leave, you should have a list: here are all the things that an attacker could do. And it normally boils down to a handful of very common things. All these different threads of attack have a few things in common, like, you know, we don't have two-factor auth enabled on all these systems, or too many people have access to them, or drives aren't encrypted, or whatever it happens to be. It's usually a handful of things that are common amongst all those attacks, and those are the things that you focus on fixing first, because you fix one thing and it solves three possible attacks, right?

So you're really getting great value for the time that you're spending, and you're also focusing on the right problems, instead of the things that might happen but are maybe low likelihood and low impact if they do happen. That's not really where you want to spend your time.

You want to focus on the things that are potentially big problems for your company and take the least amount of effort to address. So hopefully that helps to answer the question. The one other thing about threat modeling: I can give you a recent example from the news where this kind of thing went horribly wrong.

So I think this was maybe last year. There's a company called Strava who does fitness tracking. The way that Strava works is people attach a tracker, like a Fitbit, and they run around, and it maps out everywhere they run. So then the people who were running can see where they ran, what their route was, and they can keep track of their miles, which is great.

But the other thing that Strava did was publish, on a public map, everywhere that you were running. You know, privacy concerns start to bubble up with this, but people start having fun with it, right? They start drawing the Nike swoosh with their running patterns, or they draw, you know, spaceships or Darth Vader or TIE fighters and all this stuff.

People start to do this more and more, and the feature gets really popular. But the other thing that happens is that the U.S. military has soldiers who are using these fitness trackers while they're exercising, but they're doing it at secret bases around the world. Now you look at the Strava map and you have all these little hotspots that show up in the middle of Africa, or somewhere where there's nothing else, and there's this little tiny hotspot. The effect is that Strava has just leaked the specific locations of all these secret U.S. military bases around the world. Huge problem, right? How did they not think about this in advance? This ends up on the front page of the New York Times: Strava leaks the locations of all these secret U.S. military bases.

If you're Garmin, a competitor of Strava who offers nearly the identical product and has the same problem, you might think: we really dodged a bullet, because we're not on the front page of the New York Times. But three days later, or whatever it happened to be, they also were, because they had the exact same issue.

It's kind of interesting to me that they didn't immediately go: we need to fix this now, delete this from the internet so that we're not also caught in the same place. But it just goes to show, and I don't know what the root cause was, that they both ended up having the exact same problem, where they didn't think about what the consequences of their actions were.

A more lighthearted example of threat modeling is the Boaty McBoatface example. There was this online voting system in the UK where they were going to name some military ship, or whatever it happened to be, and the top voted ship name, by far, was Boaty McBoatface, right? And really, that's kind of an abuse of the platform.

Those weren't the answers that they were hoping to get, but it is the answer that they got. What were the mitigations for preventing that? Maybe the consequences weren't that great for Boaty McBoatface, but the consequences for leaking the secret locations of U.S. military bases are pretty high by comparison.

So you have to think about these abuse patterns in addition to how you could actually be hacked. Strava wasn't hacked. They leaked this information because the system was working as designed, but it was a use case they hadn't thought about in advance, and it was published on by default, I assume.

So anyway, those are just some simple examples of threat modeling and the ways to think about these things from a larger perspective. And the last thing I would say about threat modeling is that it depends who you invite to the meeting where you conduct the threat modeling exercise.

Because if I were to ask a database engineer, what's the worst thing that could happen to your company, a database engineer is going to tell you all about the worst things that can happen to their database, because that's their world. So the best person to ask this question is usually somebody in executive leadership, because they're going to have the best perspective on what the company, with a broader vision, is doing, and what the real business risks are. They don't necessarily have to attend the meeting and hear all the nitty gritty details about how the database works, or how the web application works, or two factor auth, but they should provide those initial answers to the question of: what's the worst case scenario for our company?

And then everyone else who's more technical can think about their own systems, whether it's IT managing all the laptops of the company, or the database engineer managing all the data storage systems, or the web application engineer running all the Node.js or Python code. Whatever it happens to be, all those people should have one representative in the room to think about their own systems and how they can contribute to the threat modeling exercise.
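As a rough sketch of how the output of a threat modeling session like this might be captured and prioritized, here is one way to score threats by likelihood, impact, and effort to fix; the scoring scheme and the example entries are illustrative assumptions, not a formal methodology from the episode:

    from dataclasses import dataclass

    @dataclass
    class Threat:
        name: str
        likelihood: int  # 1 (rare) to 5 (expected)
        impact: int      # 1 (minor) to 5 (company-ending)
        effort: int      # 1 (quick fix) to 5 (major project)

        def priority(self) -> float:
            # Bigger risk and cheaper fixes float to the top.
            return (self.likelihood * self.impact) / self.effort

    threats = [
        Threat("Admin accounts without MFA", likelihood=4, impact=5, effort=1),
        Threat("Unencrypted laptop drives", likelihood=3, impact=4, effort=2),
        Threat("Public-by-default sharing feature", likelihood=2, impact=5, effort=3),
    ]

    # Fix the items at the top of this list first.
    for t in sorted(threats, key=Threat.priority, reverse=True):
        print(f"{t.priority():5.1f}  {t.name}")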

Security in Today's Web

Amelia: [00:32:48] I feel like your examples have highlighted something about how the web itself has evolved over, I don't know, the past 30 years, where it used to be this scrappy connection of people in different parts of the world, and we got to do weird things, and it was all fun and lighthearted, and now it feels like we have to grow up, because we can't just have fun anymore. People will use our fun.

Yeah.

Eric: [00:33:14] That's a really interesting way to phrase that. Arguably, fun has always been profitable to some degree, but I think we're not quite as carefree as we once were. It's certainly true that on the old internet, as I remember it, there were still plenty of problems, security problems, but the ability to really make widespread chaos on the old school internet was much harder. And there are a lot of reasons I could speculate about why that's more true now than it used to be. One is, the way that you phrased it, it was a bunch of small little interconnected websites, right?

Like maybe people were hosting on their own servers, and when they turned their computer off at night, that website went down until the next morning when they turned it back on.

Amelia: [00:33:55] The store is closed.

Yeah.

Eric: [00:33:57] The store is closed, exactly. And I've had that experience plenty of times, when I've seen that for a website I was looking at at 3:00 AM. But now, if you think about it, because of the way the industry has grown and evolved, servers run all the time and it's cheap. I mean, it's practically free to run a web service. And a lot of them are consolidated on three major platforms, AWS, Google Cloud, and Azure, and they're on all the time. And you know, if there happens to be a fundamental security flaw in Google Cloud or AWS or Azure, that affects almost everybody, right?

And we've seen this come up a few times, I would say, in the last seven years. When I worked at Optimizely is when we had a number of industry-wide security vulnerabilities come to light. Heartbleed was one of them, Shellshock was another, and if you were working in the industry at the time, you probably remember it was all hands on deck.

We had to patch all of our systems to prevent this, because the fundamental problem in one of these cases was that some versions of bash were insecure and you could compromise them remotely. And the other one was an underlying security vulnerability in OpenSSL, which is the library used by pretty much every Linux server, which is most of the servers on the planet.

This was a huge problem, so everyone had to go out and patch all their servers at the exact same time. For a couple of weeks during these periods, nobody was writing code; everyone was trying to patch their systems to make sure they weren't the ones that got hacked.

And the other thing that has also happened is that the targets have shifted. It used to be: well, I could compromise this computer, but it's off from midnight until 5:00 AM, and it's just one computer. Right? But now you can compromise all these computers. The targets are much bigger because connectivity has improved.

The sharing of information has improved, which by far has more positive effects than negative; there's GitHub and all these ways to share code. But now the things that can also be shared are: here's a tool that allows you to just click a button and run some kind of crazy, massive attack.

Or: here's the source code for the Mirai worm, which shut off most of the internet, and when was this, like 2015? I can't remember exactly. So they can share the nasty code, the dangerous code, as well as the good code that, you know, people write day to day. And I think for the most part people just want to do the right thing, but there are always going to be malicious actors out there.

And it's certainly true that now they have easier access to some of these tools, and it's problematic. But the good news is that everyone's getting smarter about security. They understand what the attacks are. As technology improves, the types of attacks are also going to mature and evolve with technology, but people are more wise to it now.

As has always been true of history, we learn from the mistakes of our past, or at least we should, and hopefully the technology we build tomorrow is better than the technology we built yesterday.

How much should a responsible Web Developer know about security?

Amelia: [00:36:51] I bet your experience of these attacks is very different from the experience I, as a software developer, have.

So when Heartbleed came out, I remember all I knew was: it's a big deal, we're freaking out, and everybody should be upset, and maybe I can spend three hours reading up about it to try to understand it. So, as a software developer who doesn't work in security, how much should I know about security? What are some basic things that I should know? And how does that differ from, say, someone who isn't a software developer? Do they need to know anything?

Eric: [00:37:27] Oh, these are both excellent questions, and thanks for sharing your experience with Heartbleed. I just want to clarify that at that time I had a lot of different roles when I worked at Optimizely, and security was just one small portion of all the things I had to focus my time on.

It was really a group effort of everyone coming together at the company, all the engineers and IT professionals, to make sure that we did the right thing and patched our systems, and communicated to everyone that we patched things as quickly as we could and that, to the best of our knowledge, nothing was compromised.

So I think we did everything we could in that situation; we worked as a team to solve the problem. Just like you said, it was kind of pants-on-fire, everybody knew: oh my God, this is everyone. It isn't just some companies, it's everyone, except for the few people out there who run Microsoft-based servers.

I'm sure they were laughing at us, but that's okay, we get to laugh at them the rest of the time, I would say. So your question was: what should you, as a software developer, think about when it comes to security on a regular basis, and how do you learn more? Am I remembering that correctly?

Amelia: [00:38:26] How much do I need to know?

Like how much should I feel responsible to understand?

Eric: [00:38:31] I would say that my general advice, which is less specific to security, is: take in as much information as you're comfortable with, and read some more diverse sources. I think it's common for engineers, especially those who are just starting out, to really focus on how to write better code.

That's the one thing they focus on: how do I write the best possible code, how can I learn all these interesting design patterns and make my code run faster and have fewer bugs? And I would say that the more diverse sources you can read, the better you'll be at your job on the whole.

So here are some examples. Try to understand the perspective of the product and program managers at your company, or the marketing department. What does their job look like? Or the support team: what does their job look like? What kinds of questions are they getting from your customers? How are they helping the customers? What does that system look like? How do they do their jobs? Do they have to provide technical support? At some companies I've worked at, we were allowed to sit in on sales calls with potential customers and just sit there and listen to the concerns, sometimes security concerns, but sometimes just product concerns, from potential customers.

We could also sit in on calls with existing customers and hear about their problems. It really helps to have that change of perspective, to understand their perspective on the things that actually concern them, because they're going to be different from what you assume they are, which I think really helps.

As far as security goes, the same thing is true if you have the opportunity to participate in your company's security program, if they have one. I would say the right way for a company to run a security program is one that's inclusive instead of exclusive, which is to say you have office hours, you invite people to join and participate.

Instead of saying: security is our world, we're trying to protect you, just stay back and let us do our jobs. Right? I vehemently disagree with that exclusive approach, where they play the new sheriff in town trying to protect everybody and no one else can really play the game, because it has a number of problems. The main two that come to mind right now are, first, nobody likes to be told what to do. They like to understand what they're being asked to do, so they can comprehend: okay, there is a good reason why I have to do this other work, instead of just being told, you have to do this, whatever the security thing is now. That's annoying. Okay, I guess I'll do it, because they wrote a new policy.

And the other thing is that being inclusive helps to spread education and awareness about security. So, for example, if you worked at a company that had an inclusive, you know, anybody-can-contribute security program, you would probably have the opportunity to go in and maybe participate in the threat modeling exercise, and you'd have a better understanding of what threats your company actually faces.

Which might inform you later on if you're creating a new feature or a new product for that company: oh, I know that if I create this web service, these are the kinds of threats it might face, because you've been through that threat modeling exercise before, so I know that I can't use X, Y, and Z type of database. I don't know, just some random, stupid example. So it's really just about getting more information and different perspectives into your mind. All of this stuff will not necessarily be immediately useful; it'll just be one of those things where, later on, it becomes apparent: oh man, I'm so glad I participated in that and I'm so glad I learned that thing, because now it actually makes sense and I finally get it.

So I would say, I could point you to several different security related blogs and newsletters and Twitter accounts and all this stuff, but you're just going to get so inundated with all these technical details that it's going to drag you down mentally. A lot of them are just aggregators for: here's another company that got breached and here's how they got breached, and you're going to think that the world is falling apart. That's not going to bring up your spirits about security and the state of the world.

So instead, I would focus on the things that you can learn in your most local community, your local environment. If your company doesn't have a security program, there might be a local OWASP chapter. OWASP is an open security organization; they're around the world, and most cities have some kind of local chapter.

I know that here in Portland there are monthly meetings you can go and attend. They usually have some kind of guest speaker who will give a talk about something related to security. So I think engaging with those types of communities can be really beneficial as well. The other thing you could do is attend a security conference.

I wouldn't necessarily recommend starting with Black Hat or DEF CON in Las Vegas. Those are very intense and, I would say, deeply technical and culturally heavy. Something a little bit more lightweight would be beneficial: if you went to a JavaScript conference and somebody was talking about JavaScript security, attend that talk and see what you can learn. I think that would probably help you more on a day to day basis than going head first into the deep rabbit hole of security.

Amelia: [00:43:14] Right. Don't start with the black diamond ski slope.

Eric: [00:43:18] Right? Exactly. Exactly. That's a great analogy. I don't ski, but I get the reference.

Amelia: [00:43:23] I also love your answer, because I realize that as a front end developer, I don't have to worry about what other people within my company know. Whereas within security, I feel like you have to worry about your coworkers, whether they open a malicious email, or that security could be attacked through people, which I think I would find terrifying.

Eric: [00:43:46] It's certainly true. As far as the nasty email example that you gave, that's such a great one. I've seen this firsthand, where emails were sent to a company I worked for that were spoofed, so they looked like legitimate emails from one of my coworkers. It looked like any other email that you would get if they were sharing a Google Doc with you, right? It would say: here's the name of the doc, whatever it happens to be, and there's a link in the email, and you click on it, you open it. But there are two clever things about it. One is spoofing their email addresses, which is not technically challenging, it's pretty trivial, and there are a few things you can do to mitigate that. The clever bit was that they make it look like a legitimate email where nobody would really be the wiser on a day to day basis. But you open it, you click on the link, and it takes you to a page that looks exactly like the modern Google login. So now, if you type in your password, they have your password and they know your email address.
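One of the mitigations Eric alludes to for email spoofing is publishing SPF and DMARC records for your domain so receiving servers can reject forged senders. Here is a minimal sketch of checking whether those records exist, using the third-party dnspython package; the domain is a placeholder:

    import dns.resolver  # pip install dnspython

    DOMAIN = "mycompany.com"  # placeholder; use your own domain

    def txt_records(name):
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []
        return [b"".join(r.strings).decode() for r in answers]

    # SPF lives in a TXT record on the domain itself; DMARC under _dmarc.
    spf = [r for r in txt_records(DOMAIN) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records(f"_dmarc.{DOMAIN}") if r.startswith("v=DMARC1")]

    print("SPF record:", spf or "missing")
    print("DMARC record:", dmarc or "missing")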

So it's a pretty clever way to phish people, and they can get a whole bunch of logins and passwords relatively easily. And the other thing they do that's kind of clever is they've gotten wise: they're not running these things from their home machines. All these web servers and scripts that they use to target different companies or servers, they just run them on compromised AWS accounts. So you can't blacklist the IP addresses for AWS, because then all of your own code shuts off. Right? So it's pretty clever the way that they're using the same systems that we use to write normal, white hat code.

As far as your concern about, you know, if you work in security you have to worry about everyone: I think that's true. You're going to be worried a lot, but that's your job. Your job is to be the one worrying so that other people don't have to. But that's kind of the motivating factor. If it's keeping you awake at night, then that should lead you into action, to do something to make sure that that one thing can't happen.

So then you can spend your nights awake worrying about something else that maybe you can't control. Right? And then you think, you know, if I can't control this thing, what can I do?

How Do We Think About Security and New Hardware?

Amelia: [00:45:43] So you mentioned this example before, which is smartwatches, which haven't been around for that long. How much do you have to keep up to date with new technology? Like we have Google Homes in our houses, and there are smartwatches, and there's something new every year.

How much do you worry about new devices that come out or have to keep up to date with new tech?

Eric: [00:46:05] Maybe you have a Google Home in your house, but I don't have one in mine. Well, I guess there's a lot I could say on this, and I'll try and keep it more succinct. At a philosophical level, the same problems always persist, and the security threats simply follow along and mature and evolve with the technological changes that we have. If you had a computer before, and you didn't have an iPhone or a smart home, there was still the possibility that your computer could be compromised remotely, the camera could be taken over, the microphone could be taken over, and that could lead to some kind of disastrous result.

For you and I, day to day, that's not that big of a deal. If our computer gets compromised, okay, what's really the worst case? That they see me? Okay. I don't often sit naked in front of my computer, but even if I did, nobody really is going to want anything to do with that.

Let me give another example, though, where the risks are much higher: the federal government. In the Pentagon, they have a policy where if you go into a conference room, you cannot bring a cell phone, you cannot bring a laptop, you can't bring anything that has the ability to record or transmit information and has a battery in it. That's the policy. What that means is that if you want to give a presentation at the Pentagon, you have to print out all of the slides for your presentation on paper and give a copy to every attendee in the meeting, and then at the end of the meeting those all have to get shredded securely.

I know this because I have a friend who works in the Pentagon. One day this person was very concerned because they had to give a presentation the next day. I think it was right before the government shutdown, and all of the printers at the Pentagon, which is the only place they were allowed to print off these classified documents, were out of ink. So how do they give their presentation, right? These are the weird problems that you take on for the sake of security, where for the sake of national security you can't bring in any of these devices that we take for granted.

You know, if you told somebody in Silicon Valley that they had to print off a paper presentation and couldn't bring their laptops and phones into a meeting, they would think you're crazy and probably fire you, or just tell you that you're paranoid. I used to live in DC and I worked in a similar environment, so for me, having lived on both coasts, in the DC area and also in Silicon Valley, the differences are so stark. It's really crazy. But it also goes to show the vastly different levels of security that people take on based on the level of risk. So I would say that's the fundamental thing to keep in mind: what are the risks that you want to avoid if you're going to enable internet of things devices in your home?

I may not have a Google Home in my house, but I do have a Nest thermostat, and I know that the Nest thermostat doesn't have a microphone in it. Could it be compromised remotely? Probably. What are they going to do, make it too hot or too cold in my house? Big deal, right? But a Nest thermostat is so much better than a non-Nest thermostat that, why would I not have one? They're so great.

Just a couple of days ago it was published that people were taking over Ring doorbells in people's houses and harassing people across the country with the Ring doorbells, right? Which is crazy. So I don't have a Ring doorbell, because I know that their security is pretty low. And that's really the problem with the internet of things stuff: they want to make these things cheap, which means they have to compromise on something, and the one thing they usually compromise on is the security of their product.

And that's actually how the Mirai worm spread. They didn't pay for a bunch of servers that had really high bandwidth. They compromised a bunch of internet of things devices and used them like a swarm to take down internet servers around the world, which is crazy. It was just people's cameras and stuff in their houses, used as zombies to send more traffic at things. Those are the kinds of things I think about. You could have those things, no big deal, but just be aware of what the risks are and whether or not you trust the company behind the device that you're buying, right?
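
Mirai's trick, for what it's worth, was not sophisticated: it logged into cameras and routers over telnet using a built-in list of factory-default usernames and passwords. A rough self-audit sketch, with a placeholder address range you would adjust to your own LAN, is simply to flag any device that still answers on telnet at all:

```python
# Minimal sketch: flag devices on your own network that still expose telnet
# (port 23), the service Mirai abused with factory-default logins.
# The 192.168.1.x range is a placeholder for your own LAN.
import socket

def port_open(host, port, timeout=0.5):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the TCP connect succeeded

for last_octet in range(1, 255):
    host = f"192.168.1.{last_octet}"
    if port_open(host, 23):
        print(f"{host} answers on telnet -- change its default credentials or disable telnet")
```

Only point something like this at a network you own; scanning other people's networks is exactly the behavior the episode is warning about.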

Amelia: [00:50:00] It seems like it's a trade-off between convenience and money and security.

Eric: [00:50:05] I'm glad that you said that. At my first job back in 2001, I remember stating, and I don't know if I was just trying to be clever, but I said that security is inversely proportional to convenience, and I think that is still true today. But going back to the seatbelt example that I gave you earlier:

it is, you could argue, inconvenient to get into a car and have to buckle your seatbelt. It's inconvenient to have a seatbelt on if you want to take off your jacket or put a jacket on because you're too hot or too cold. It's inconvenient to have a seatbelt on if you're in the back seat and you'd like to lie over. But it's a trade-off against the level of security it provides.

It might be inconvenient, you might be a little cold or a little hot or you can't reach across the car or whatever, but it's better than flying out of the windshield if you happen to get in an accident, right? So it's constantly this trade-off of convenience versus security. I think it's still true, but I would say that because technology is improving so much, they are lowering how inconvenient it is.

So here are some good recent examples. I have an Android phone, and it has a fingerprint reader on the back, and it's great. I can unlock that thing in a split second, and boom, there it is. I don't have to type in a password or put in a code. And the same thing is true for the newer iPhones and the new Pixel phones: they have face unlock, so you just look at the thing and it unlocks for you.

Now, if you're James Bond and you are tied up in a chair somewhere and they want to unlock your phone, they just point it at your face. And, you know, if you're under arrest, you can't prevent that from happening. But most of us aren't James Bond, so I don't think that's necessarily the primary concern you want to have. That trade-off is the way I would usually recommend thinking about these things, though.

The Dangers of Wireless Security and Serendipity

Amelia: [00:51:40] Your seatbelt example brings up the point of how dangerous it is to drive versus how dangerous it is to take a flight cross country. Driving is way more dangerous, but I still get really scared every time I take a long flight because it feels so much scarier.

So I bet in the security world there are things that don't feel like big risks but are, and things that are big risks but feel like not a big deal when you think about them.

Eric: [00:52:08] Here are a couple things that come to mind. The first one is WiFi. Wireless internet is something that's so prevalent now that we assume that when we go to the airport, there's going to be free WiFi. We assume that when we go to a restaurant, there's going to be free WiFi. We assume that when we go to work, there'll be WiFi we can connect to so our phones have service. We assume that there's going to be WiFi everywhere we go, and when there isn't, it seems like a huge problem.

Because there's free WiFi everywhere, that means there's a network that you're connecting to, and who knows who else is on that network, right? When you go to the airport, who knows who else is on the network at the airport? Who knows if they're monitoring all the traffic that's going through your computer? Who knows if they've compromised the router at the airport?

That's a bigger problem than I think people realize. Even though you have your crazy long password on your router and you keep it up to date all the time, routers and wireless security are crazy, crazy insecure. You can crack the password on anybody's router today, even if you get the most fancy high-end router off the shelf and bring it home and plug it in. With a laptop, you can probably break the longest password that you can possibly put into that thing within a few minutes; you just need enough traffic going over the wire. So the more people that are on a network, the easier it is to crack the security and get access to the router.

And then you're on the network, and once you're on the network with everybody else, what else can you do? What traffic is going around that isn't encrypted in transit? Let's say you have WiFi at your office and you have some kind of admin panels, like Nate was talking about earlier, admin systems set up so that you can only access them if you're on the network. If you're on the network, maybe you can get access to the internal wiki for all of your company's internal documentation.

And if you can access that internal wiki for all the internal documentation just because you have the right IP address, what kind of information is on that wiki that you don't want anybody outside your company to have? So you've assumed that everyone on your network is trusted just because they have access to the WiFi.
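
The practical takeaway is that network location is not authentication. As a minimal sketch, assuming a Flask app with an illustrative /admin/wiki route and session check (none of which come from the episode), an internal page should still require an authenticated, authorized user rather than trusting the client's IP:

```python
# Minimal sketch: don't let "came from the office network" stand in for auth.
# Assumes Flask; the route, role name, and secret key are placeholders.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "replace-me"

@app.route("/admin/wiki")
def admin_wiki():
    # Anyone on the office WiFi has an internal source IP, so checking
    # request.remote_addr against an internal range proves nothing about who they are.
    # Require an authenticated admin session no matter where the request came from.
    if session.get("user_role") != "admin":
        abort(403)
    return "internal documentation"
```

The same idea is behind zero-trust or BeyondCorp-style setups: every request authenticates the user, and being inside the perimeter earns no extra privileges.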

So not only is the encryption protocol not necessarily secure on modern WiFi, the devices themselves aren't that secure either. That's been a huge target for a lot of the known vulnerabilities out there. Routers are a huge target, because if you can get access to somebody's router, you get access to their whole home network. Then you get access to the Ring devices and everything else that's on their network, and you can do who knows what. So yeah, wireless security is one of the big ones that I think people take for granted.

Interesting bit of trivia: wireless security is one of the ways that I first got involved in security.

And I will definitely date myself with this, so I apologize for sounding old. Back in the 90s, when wireless networks started to become more prevalent and inexpensive and people could start using them, there was this article that came out, I can't remember if it was 2600 magazine or what it was, with an antenna design that you would make using a Pringles can, like the Pringles potato chip can.

You use the empty can and you put a rod in it, some nuts and bolts, and you essentially use it as a directional antenna. It has a wire that attaches to the wireless card in your laptop, and back then the wireless cards were these things you would slide into the side of the laptop. They looked like a big fat credit card, and that's how you would add wireless networking to your laptop: you'd slide this huge thing into the side, and you could add an external antenna and connect it to the Pringles can. The Pringles can gives you longer range, and there was some software you could run that would allow you to do this activity called wardriving.

You'd drive around in the car and point the antenna, this Pringles can, around, and it would help you detect who had wireless networks, and it would just say, boom, I found one. It would keep track of where you were, and if you had a GPS card or something in your laptop, you could map out

where the wireless networks were, and whether they had a password protecting the network, which back then not a lot of people did. So it was a way to map out where all these wireless networks were and sniff all the packets from a distance, without having to sit in a person's driveway or right outside their business. You could be across the street, or just driving around, and pick up all this traffic.

So I built this Pringles can antenna, and we'd drive around my small town in Northwest Ohio. And in the entire town there was one wireless network, outside the local ISP, and that was it. There was nothing else. We drove around for hours and found nothing, but from that experience I kept an eye on wireless security, because I thought it was really, really interesting that you could do this stuff, and also the possibilities of WiFi back then were just awesome compared to running wires everywhere, which obviously sucks.

Also around that time I was working as a bartender at a local country club, and it was a slow day, I mean, it was a Sunday. This older gentleman comes in, one of the guests, and he comes in with his laptop, and back then laptops were these huge clunky monstrosities. So he's just on his laptop, playing around, and I see him slide in, kathunk, this wireless card in the side, because at the time the country club had started offering wireless internet. I think that was a big deal for all the guests.

So, you know, being a nice bartender, I strike up a casual conversation, like, oh, did you know that wireless security is not that good? He's like, oh, interesting. He didn't care about that. He's like, are you into computers? I was going to school at the time, working part time as a bartender. So he shows me this piece of software that the company he owns is working on, and I was like, oh, that's really cool. He's like, if you're interested, I'll put you in touch with my son, who's the president of the company; you know, we might have a place for you. So sure enough, I applied and I got the job there, and I would be one of only two software developers at that company. What they did was wide format printing,

so art reproductions and all this other stuff, signage. If you go into a Home Depot and you see the Pink Panther cardboard cutout things, this company made those. So I got this job, and like two weeks later the other software developer that worked there, the one I'd be learning from, and this was my first job as a software developer, he leaves, he takes another job. I'm on my own at this company, and we could get into plenty of stories about that. But yeah, anyway, wireless security kind of led me down that path and got me interested in security, but it also helped me land my first job just by talking to people, which I thought was kind of interesting.

Hacking for Good

Amelia: [00:58:38] That's an amazing story. How long was it before you came back to security from your initial internet security comment?

Eric: [00:58:45] Yeah, so I would say that security has always been a common thread for me. It's something that I've always paid attention to; at every company I've worked at, I've had a little bit of a focus on security.

Even when I worked at Google for a number of years, I talked to a lot of the security team there. The team that I worked on at Google was very closely aligned with the marketing organization, and one of the last things I did before I left was draft the policy for any contractors the marketing team would want to hire to do one-off websites: all the security and technology requirements those contractors would have to follow in order to make something that would be used by Google. So I always had my fingers in security, and around that time I met a lot of interesting people. Some were interested in security, some were involved in security at Google.

But even at that first job, that was the first time that I got to use my ability to compromise another computer for good. So I'll give you the lay of the land. This is 2001, 2002, so it was the fairly early days of the internet.

The company primarily has one main database, and it's on just a regular workstation. It has a huge, gross-looking beige CRT monitor attached to it and a gross-looking beige keyboard, and that's their database. It's running Windows NT, November Tango, which is this old-school version of Windows that was used for servers, and it has database software on it called SQL Server, which is Microsoft's equivalent of something like MySQL. So it's just a SQL database, and this computer is primarily designed to run it.

This database holds all of the inventory and all the orders that are coming in for all the customers that this company has. We'd get orders every day as an XML file that we'd have to parse and insert into the database, and then other servers would work with the APIs on that SQL Server machine to run internal websites where people could see what the inventory was and manage all the orders.

So there was an IT guy that worked at the company. He did deliveries, and he ran all of the IT stuff. And then it was just me, the only software engineer. That was all the computer stuff; everything else was printing, printing, printing, and business.

One day he wants to log into this computer to do the weekly backup. They do a tape backup of the hard drive of this computer once a week, just to make sure that in case it's compromised, we will at worst lose a week's worth of data. So he goes to log into this computer on a Friday and he can't log in, and there's only one login, the admin login, and its password doesn't work. And he's a competent person; he'd typed this password hundreds of times before. He can't log in, so, you know, I try too, like, oh yeah, maybe you're just fat fingering it. So I tried, and sure enough, I can't log in either.

He calls the software engineer who used to work there, who happened to be in town, like, hey, can you come look at this thing? He came over, and he's like, oh yeah, you guys may have been hacked or something, because the login isn't working anymore. He's like, just restore from backup, it'll be fine. But it was Friday, which means worst case scenario, we would have lost that whole week's worth of data. I was like, that sucks. I don't want to lose a week's worth of data.

There has to be a way that we can get access; it's sitting right in front of us. How can we not get access to this computer? So I was like, well, listen, let me have an hour with it. I know that there are some known vulnerabilities for Windows NT, and I want to see if I can use some of those to hack this computer, our own computer, which is legal for us to get access to, and then change the admin password back to something that we can use to log in. Sure enough, after about 30 minutes I was able to execute this exploit against the computer, get access to it from my laptop, and reset the admin password. So we were able to get back into the machine, and we were no longer locked out.

What was interesting was we don't actually know the root cause of what happened to the machine, but what we suspected had happened was that at the same time, in that same era, there was this internet worm going around called the SA worm. That's the letters S-A, Sierra Alpha. And this will give you a clue as to what the worm did: sa is the default admin login for the SQL Server software. So instead of typing in root to log in as admin, or admin to log in as admin, you typed in sa; that was the default login, and then there was some default password, I can't remember what it was.

So the SA worm would go around and look for machines that had SQL Server on them, infect them by using the default username and password, and then look for other SQL Servers nearby that it could access over the network, to do the same thing and compromise them. I don't think we necessarily had the default password, but I think it had some way to compromise or figure out whatever the credentials were and then log in and take over the machine.

So we think that's what happened: the SA worm, or some modified version of it, compromised this computer and prevented us from logging in. Long story short, I was able to hack our own computer so that we could get access to it again, back up the contents of the database, and then wipe the machine and start over from scratch, so that we no longer had what we assumed was a compromised machine, and then load the data back on.

So because of my interest in security, knowing what vulnerabilities are possible and just kind of understanding what could be done, I was able to save that company a week's worth of data.
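
The worm's whole trick was a well-known default database login, and that failure mode hasn't gone away. As a rough sketch, assuming the third-party pyodbc package, an installed ODBC driver for SQL Server, and placeholder host and password values, you can audit your own instance by confirming the sa account rejects blank or obvious passwords:

```python
# Minimal sketch: confirm a SQL Server you own rejects default/weak 'sa' logins.
# Assumes pyodbc and an ODBC driver; server and passwords are placeholders.
import pyodbc

SERVER = "db.internal.example"
DRIVER = "{ODBC Driver 17 for SQL Server}"

for password in ("", "sa", "password"):  # blank plus a couple of common guesses
    conn_str = f"DRIVER={DRIVER};SERVER={SERVER};UID=sa;PWD={password}"
    try:
        pyodbc.connect(conn_str).close()
        print(f"WARNING: 'sa' login succeeded with password {password!r}")
    except pyodbc.Error:
        print(f"OK: 'sa' login rejected with password {password!r}")
```

Better still, most teams disable or rename the sa account entirely and use per-service logins with least privilege, so there's no well-known account for a worm to guess.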

Amelia: [01:04:14] That's amazing, and I wish we had more time to hear all of your stories because they're all so great. But I did want to talk about the book you're writing on InfoSec.

Tell me a little bit about that.

Eric: [01:04:25] The book that I'm writing is essentially the same template I use when I work with my clients. It's how to take a company that has become successful, they have customers, they're profitable, they're making money, but they haven't done security at all, and go from zero to one. What are the first things you do? When you haven't done security at all and you know that you need to, what's the first thing you do?

So this book talks about the right way to approach creating a security program, a holistic approach: here are all the things that you could do. Here's a way to create a data-driven security program so that you know what you're going to be doing over the next year or two years. You know how to create metrics to track your progress and hold yourselves accountable. You know how to have an inclusive, instead of an exclusive, security team. You know how to

do security awareness. You know the right things to log and monitor so that you're not spending time on the things that don't matter. You want to focus on the things that have the lowest amount of effort and the highest amount of impact, and you need to understand what those things are and what your risks are, so that you can spend your time and your resources wisely to create a security program that works for your company, so that you can sleep at night and aren't constantly worried about all these emails you're getting, or getting hacked or compromised or ending up in the news.
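
That lowest-effort, highest-impact framing can be made concrete with even a crude scoring pass over a risk list. The risks, numbers, and formula below are made up purely to illustrate the idea, not taken from the book:

```python
# Illustrative sketch: rank security work by expected impact relative to effort.
risks = [
    # (name, likelihood 0-1, impact 1-10, effort in person-days) -- made-up values
    ("No 2FA on admin accounts",       0.6, 9, 2),
    ("Outdated JS dependency",         0.3, 4, 1),
    ("Unreviewed contractor websites", 0.4, 6, 5),
]

def score(risk):
    _, likelihood, impact, effort = risk
    return (likelihood * impact) / effort  # rough "bang for the buck"

for name, *_ in sorted(risks, key=score, reverse=True):
    print(name)
```

The exact numbers matter less than forcing the conversation about likelihood, impact, and effort for each item.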

Amelia: [01:05:43] What is the hardest part you're finding about writing a book on this, and how is that different from consulting for a specific company?

Eric: [01:05:53] I think that I've had a relatively easy time, because for most of the clients I work with, I have to document all the things that went wrong with their company, or all the things that I found. I shouldn't say the things that went wrong, but I have to document all my findings and put them into the context of a report. So, having done that a few times, putting all the advice that I give into written form becomes a little bit repetitive, and it kind of lends itself naturally to becoming a book.

But the difference is that it isn't specific to one company anymore. You have to think more broadly. Instead of saying, here's a company that sells whatever widget or widget service on the internet, you have to think about: if you're a medical company, you may have these concerns. If you're a financial company, you might have these other concerns. If you're just a technology company, and I shouldn't say 'just' a technology company, like if you're selling online services, you might have a different set of concerns.

So there's a lot of mental juggling of whatever sector the reader might be in and the level of concerns and risks that they have, which is a little bit strange, because the fundamental problem is that I don't know who the reader is at any given point in time, whereas when I'm working with a client, I know exactly who they are and I know exactly what the risks are. So writing in a more general way can be a little tricky, and it's just a problem of phrasing and wording.

So that's been probably the thing that I've struggled with the most.

Amelia: [01:07:11] Yeah, that makes total sense, and I can't wait to read it. I want to wrap up here, but I did want to ask you: for someone who is already in tech but wants to learn more about security, and maybe wants to start getting into the field, what is the best resource for them to start learning more?

Eric: [01:07:31] Sure. So, the best resource for somebody to start learning about security, if that's what they want to get into: let me preface my answer with, it depends on what aspect of security they want to get into. What I mean by that is there are deep technical careers that they could follow in security, where they're going to focus on being a security researcher. They'll be doing penetration tests against the companies they work for, or focusing purely on security.

But there is another route, which is more aligned with the type of work that I do, which is: how do you manage a security program? How do you run it from a higher level? If you wanted to be a CIO or CTO of a company, or the head of security at a company, you don't necessarily need the deep technical skills of a security researcher, because the day-to-day role is very different. You need good communication skills, you need good soft skills. You have to be able to communicate and present all this information effectively. You have to be able to plan out how you do security and then tell people, this is what we're doing and why, and what kind of help we need from you.

So if you want to take the deeply technical route, there are a lot of resources. As I mentioned before, the local OWASP chapters are a great place to start, because you'll meet people in your local community, you understand where they work, and they're people you can relate to and ask questions of. And there's

almost always going to be somebody who has a similar background to you, like they were a developer who got interested in security. How did they do it? Another great resource, I would say, is security conferences. One of the more low-key ones is called BSides, as in Bravo Sides, and they do them in most of the major cities in the world. I know they're international. And again, it's pretty low key. It isn't a massive conference like the AWS conference or DEF CON or Black Hat, so it's not going to be overwhelming, and there are a lot of diverse backgrounds there. I know there are a lot of software engineers that attend. So I would say starting those conversations and meeting people in security is a great way to just see what's out there and who you can talk to.

If you want to learn the skills, OWASP usually does, I don't know if it's monthly, but they do hands-on exercises that you can use to learn some of these things. There are also, like we talked about before, the bug bounty programs that are out there, so anybody can apply to be a security researcher on these platforms.

So if you think that you can find security vulnerabilities in any of the companies that are registered with these programs or have a bug bounty program, you can register as a researcher, choose a target, look at their website or their application or whatever happens to be in scope, and try to find vulnerabilities. If you can, you report them in the right way and then you'll potentially get paid for it. So that's another great way to learn.

And then, I guess lastly, you could always take a more academic approach. If you're in university, most schools are offering some kind of cybersecurity classes. They tend to be deeply technical and focus a little bit more on the IT side, but it's a great place to start.

Amelia: [01:10:27] Those all sound like great resources, and we'll include links to OWASP, BSides, and the bug bounty programs in our show notes. Thanks so much for coming on. It's been an absolute pleasure to have you.

Thanks a lot.

Eric: [01:10:39] Yeah, no, thank you for having me.

They're basically trying to extort these companies for money to pay out because they don't have a bug bounty program in place. And what that really means is that they don't have a policy in place to say that for these types of vulnerabilities that we're willing to, pay, you'd report to us responsibly.

This is how much we pay, right? And this is the rules by which this game is played. So they start to get overwhelmed because they constantly get hit by all these things or all these emails from these researchers, and they start to feel overwhelmed. And it gets to the point where the individuals who are responding to all these emails or all of these security related issues start to realize that like they can't get any of their normal work done because they're just buried in all these security related requests and they realize like it just like, and any other company for any other position, you need somebody to be doing this stuff so that you're not the one doing it. So then they come to me and they say, how do we avoid this problem? Maybe we're not at the stage yet where we can hire somebody to work on security full time for a variety of reasons, but maybe we can do some things to make sure that we don't feel like we're buried in this work and we're not constantly getting distracted from working on our product, but still making sure that we maintain a certain level of security and know how to respond when these things come up.

Nate: [00:02:26]

Yeah. I want to talk about the bug bounty programs a little bit. So going back, you're saying you used air quotes around security researchers.

The implication is they're maybe not really researchers, but maybe they, what's the idea that they have, they're using automated scripts or something to find these vulnerabilities and they're just trying to. Collect bounties? Are they actually trying to say like, we found the security hole and we're going to exploit it.

You don't pay us a ransom. What are you implying here?

Eric: [00:02:49] So it's a little bit more of the former, I mean, I guess there's a hint of the ladder in there. So here's what I really mean by this. So not to admonish anyone, because I think that there, I mean, I know that there are a lot of real security researchers out there who play by the rules, but there's a certain class of individuals.

And there seems to be a network of them that they tend to come from like third world countries or where they have internet access and like they're just looking for some way to make money. Right. So, you know, it's noble cause I suppose, but they specifically seem to target companies that don't have a published.

Responsible disclosure policy. So responsible disclosure is really like the umbrella term for what a bug bounty policy is, or a bug bounty program. It's a way to report security issues to accompany in a responsible way, which is the opposite, would be like they just publish about it on like Reddit or hacker news or a blog or something and make it public to the world without telling you first.

Right? So the old school mentality. Or approach to this that a lot of companies used to take was if you reported security issues to us, we would assume that you were a hacker and we would start to litigate against you, right? We would take you to court and Sue you cause you're hacking us. So that approach doesn't really work like that stupid and nobody should do it.

And I have a firm position on that instead. The way that the landscape has shifted is now there's actually companies. In existence that will help you create and run above bounty program where it's an incentivized responsible disclosure program. That's what the bug bounty is. And you basically say, like for this class of security issues, we'll pay you X amount of dollars.

So just to give you some examples. So Google I think has different classes of bounties that they'll pay out. And I think the highest is something like $100,000 and that's if you can find a security, like a major security issue in like the Chrome browser or Android operating system for their phones.

Right? So there's a very high level of payout for very like deeply technical and like widely exploitable type of security issue. More commonly for like the class of companies that I work with, they'll have some kind of web application. It will be vulnerable to like SQL injection or something else. It's like relatively common that these, I would say the lower tier of security researchers, they're looking for all these low hanging fruit that they can run some kind of software and scan for these things and find them pretty quickly.

Then they contact the companies by email and say, Hey, I thought all these issues, I would love to get paid for my work. So the problem with this is that there's, I guess a few problems with this. The first is like they're not really. Doing a lot of work, right? They're bringing it to your attention, which is great.

But as soon as those companies, and this has happened to a number of my clients, as soon as you pay out with one of them, they tell all of their friends and their network that this company pays out. Then like you start to get inundated, you get pile on with all these security reports and they may have run the scan once and like are sharing all these different security issues with their friends so they can all kind of get paid.

So it's a little problematic and it's problematic because. The companies haven't said, these are the rules and here's what we're willing to pay. So when it comes time to like reward these researchers who are reporting these issues, they don't have any guidelines to follow to save. This is how much we're going to pay for this type of vulnerability or this type of vulnerabilities out of scope.

You can't stalk our employees on Facebook or LinkedIn and try to extort us for higher payment because you disagree with, because there's no written policy to say, these are what the rules are and we use what the payments are. That's kind of where they get stuck, right? Like they. Not having the policy in place is really like the key driver to this and these researchers, the air quote, researchers are starting to target those kinds of companies because they know that they can get payment and kind of extort them for a little bit higher.

Nate: [00:06:21] What are the types of classes of like the tiers for the types of bugs that the people typically pay out for? And also who gets to decide? Is it just like the company gets to decide somewhat arbitrary early and they say, like we said, that if you find SQL injection, we'll pay out, you know, $1,000 or there are many cases where it's ambiguous what the actual vulnerability was.

Eric: [00:06:40] I would say it used to be more ambiguous than it is now because bug bounty programs are. Much more prolific than they used to be. It's become almost standardized to say, like for this class of vulnerability, this is the payment tier we're going to pay out. So here's the common case. So it is set by the companies.

To answer your direct question, the payment tiers are set by the company and usually goes along with what stage they're at and like what their financials look like. So they'll set some kind of a budget for the year to say, this is the max we want to pay for security issues through this bug bounty program for the year.

So let's say it's. I don't know, $10,000 or $30,000 whatever it happens to be, it's usually pretty low around that ballpark. So then they can say, well, we're going to expect in the first year to get, based on our priors, however many we've had from these researchers, maybe twice that many, because now we're like publishing that this thing's available.

So we'll expect it to see more for a specific. Type of vulnerability, like let's say it's low hanging fruit, you're using an older version of some Java script library. Then maybe has some kind of weird vulnerability in it or the vulnerability one, its dependencies or something like that, but the effects of that aren't very great.

Like the impact isn't great to your web application. It just happens to be like, Oh, this isn't a best practice. The threat level is pretty low. So thanks for reporting it. Here's like $100 like so the lower end is usually like maybe a hundred bucks, something like that, maybe $50 it all depends on the company, what they decide to set.

So at the higher end of the types of security vulnerabilities that the companies are looking for are things like remote code execution. Like you can. Fill out some form on our web application and somehow run code on our server that we didn't expect you to run. Or you can somehow access everything that's in our database so you're not supposed to be able to access.

So the classes for security issues. Are fairly well documented. There's like, you know, five or six general categories they fall into, but it's really the level of impact that that security vulnerability that the reporting has and whether or not it can be reproduced and it's well documents and all of a sudden things kind of play into whether or not it's actually granted as a true vulnerability or a valid report.

So the level of impact that the security issue has is being recorded usually ties directly to the level of payment. So, you know, a company that's first starting off, I usually recommend a couple of things. First. your payment size pretty low, especially for the first year, because you're going to get a ton of low hanging fruit and you're not going to want to pay like $10,000 per whatever, weird JavaScript vulnerability that it's relatively low.

So, so Kevin, pretty low for the first couple of years. And then the second piece of advice I usually give that they don't always follow is to use a managed bug bounty program. And what that means is you pay these companies who provide the software. It's almost like. I'll use get hub as an example, like their get hub.

In this scenario, they're offering the software that hosts this bug bounty program. So that's where the security reports go to and are listed and are managed by teams of manage bug bounty program is where that company also provides. So their employees to review and triage the tickets and make sure that they're like written in the proper format, they're reproducible and all these things before they actually come to your teams.

That really helps to reduce the amount of noise because especially at the very beginning, what you go public with, your bug bounty program. You tend to get a really, really poor signal to noise ratio and you want to try and improve that level. So I usually set the caps pretty low, make sure it's managed for like the first year because you're going to have to manage all the noise and then as time goes on, you start to increase your budget, you increase the tiers, you can increase the scope, and if you hire people who can manage this thing, then maybe you don't have to pay that company, whatever they charge for somebody to manage it for you.

Managed Bug Bounty Programs

Nate: [00:10:19] What are the major players in that space? who are the companies that, or maybe the defaults to go to?

Eric: [00:10:25] The two main ones right now at the time of recording our hacker one and the other is called bug crowd. For all intents and purposes, they offer nearly the exact same services. Their marketing material in their sales team will tell you that there is slight differences between them and there are, there are some differences.

There's differences for the types of integrations their software provides. They'll tell you that there's a different number of. Security researchers in their platform, and in a lot of ways it's very similar to Uber versus Lyft. Hacker one was first in the same way that the Uber was first and Lyft came later.

Same is true. Both crowded came later, and also in that same way, I would say that based on my experience, hacker one is a little bit more aggressive with like their sales and marketing techniques. In the same way that Uber is a little bit more aggressive with their sales and marketing techniques. That being said, it work successfully with both these companies.

I'm not trying to like bash any of them by making a negative correlation between any of these companies based on, you know, whatever your predilections happened to be about Uber and Lyft. So those are the main players. Now, interestingly. At my previous role at Optimizely, we use a company called Cobolt who also did, or also offered a bug bounty program as software package, like as a service.

And recently when I reached out to them to see if they're still doing this, they have transitioned away from that model and more towards almost like an automated model where it's. They scan your systems from the inside and try and look for these vulnerabilities. At least that's the way that I am remembering my understanding of it.

It seemed kind of complicated and expensive when I talked to them. Maybe it's a great product, but it was interesting that they had completely pivoted away from the previous model where they were kind of competing with hacker one and bug crowd to something that's completely new. The Role of Automation in the Future of Cyber Security

Amelia: [00:12:01] How much of this space do you think is going to be automated in the next few 10 years?

Eric: [00:12:06] So my background, I should clarify, is as a software developer, so I tried to think of the question of automation in terms of a software developer, like what's possible to automate. And I'm like, what should be automated? So, so this is actually a really interesting question because I've started to see in the last couple of years a lot more tools that.

Offer automation for all these kinds of problems. Like the security space is just like one aspect of this, and I'm sure that like, you know, by next year we're going to have all kinds of crazy blockchain distributed Kubernetes, AI driven security tools that are out there trying to sell us products.

Whether or not they work, I think is a different question. And if you think about the last few years, like there was this huge push like, Oh man, machine learning is going to solve all these problems for us and is going to solve all these problems for us. And then a couple of years later people start to realize like, Oh, you machine learning is a cool tool for a specific set of problems, like finding patterns and making sure that you're including things in the same kind of pattern.

So more for AI. Like there's certain things that they're really good at, but it is not like general AI. Like it doesn't. Do all of the things that a human being can do very simply. So you have to kind of back away from that. And like we've started to see people sort of backing away from these like very grandiose claims about what those things can do or what they're capable of.

So I think to answer your question, I think the same is really true as it currently stands for security software. There's a lot of companies who are offering crazy AI driven, automated tools to do all these things, but whether or not they actually do the things they say, like I think is a different question.

And. It's really up to the companies buying whether or not they want to go through a pilot program and see if it works for them. What are they willing to pay for it? I think fundamentally, the question for any kind of software as a service comes down to what am I paying for this and how does that correlate to the number of employees that I would normally have to hire to do that job?

Right? Are they automating something that. Is easy to automate that we could do ourselves. Like is it, you know, just trying to match them patterns that we know and like could just add a filter to Splunk or whatever logging software we have, or they're doing something more advanced where we're like, we would have to build out a huge crazy complex platform, two ingress, all this data and then, you know, run a bunch of code against it to find like weird patterns that we would not normally see.

How much time does that save us? Not only like at the initial, but also like over the longterm. And I think the same answer is true for a lot of software as a service. Like if you're going to charge a company $30,000 a year for software, but it would cost them an engineer. Per year to do that same job, like an engineer is going to cost them $100,000 or more.

So they're saving a ton of money by using the software instead of hiring an engineer to do that job. So maybe that's a roundabout way of answering your question, but like that's the way that I think about these things. I don't have a lot of firsthand experience with a lot of the newer automation tools that are coming out.

Maybe they're great. Maybe they're junk. I mean, I haven't seen evidence in either direction yet, but my gut reaction to me, I've just like worked in security too long, so I'm always like a little bit skeptical. I'm usually pretty skeptical about what they're offering, like whether or not it's worth the price that they're asking.

How You Detect Hacks

Nate: [00:15:03] You mentioned using Splunk to track logs and to find abnormal behavior. One of the things that I've noticed when I've seen blog posts about security incidents, they might say, you know, we had an employee who had their admin panel password hacked and the attacker had access to all these accounts for like three days and you know, we were able to track them down and shut them down. What tools would you use to actually detect that? Because for pretty much every company I've worked for, if a hacker got access to an admin's password, no one would ever know, like ever. Like we would never find out that that had happened.

So like what tools and processes and monitoring do you put in place to catch something like that?

Eric: [00:15:46] You opened that really interesting can of worms. And here's why. So the question that you asked is how do you detect this? Which. In the question itself, you're already telling me that it's too late because it's already happened.

So this is really the type of thing that I focus on with my clients is How do you prevent this from happening? How do you make sure this doesn't happen? Because if you can make sure that it never happens or is nearly impossible to happen, or is such a great burden for an attacker. To go after that approach.

They're going to do something else. They're going to do something that's easier instead so then like, maybe you don't need a crazy monitoring solution for this kind of hard problem in place because even if you had it now, you know it's too late. They already have it. Right? So how long would it take them if they had ad admin access to your systems to copy all that data?

Right? Even if you can shut them out, maybe it took them 30 seconds and like it took 30 seconds just for you to get that email and read it. And they already have your data, so it's too late. Right. So I would rephrase the thought process too. How do you prevent these things from happening in the first place so that you don't have to worry about like, Oh my God, like what are we going to do if this happens?

Cause that's a much harder problem. So I focused on the easy thing. So the easy thing is how do you keep people out of the admin. Handle it, of your systems to have access to everything. And just as a preface, I want to point out that I think a lot of my clients and a lot of people I talked to tend to think that attackers are going to try and go through your web application or your mobile application to try and hack your company.

But that's a pretty limited approach. And I think threat actors in the space have already started to realize this. So the targets that they're choosing instead are developer machines or developer systems. So. If you have Jenkins running all your CIC CD systems, your continuous integration, continuous deployment, that system probably has keys for all of your servers as keys for all your source code.

It probably has rewrite access to your database. It probably has admin level access to everything so that it isn't blocked. So that's a really ripe target. And usually when people set up. The systems, like they just set it up just to the point where it's working, but not necessarily secure. Right. And it's just like an admin.

Yeah. Right. It's very common. And that's the thing. And that's kind of the reason, the realization I had when I started consulting is that everybody has the same problems. Everybody's making the same mistakes. So there's a pattern that's pretty easy to solve for in, it's really just a matter of education.

So that's kind of what I focus on. So getting back to the question of how do you prevent this from happening. There's a variety of ways, like the easy one that I usually recommend is for anything that requires admin level access. I review who has access to it with my clients and say, do all of these people actually need admin level access on a day to day basis for their jobs?

Often the answer's no. Right? There might be a couple people who day to day need admin level access to do their jobs in whatever that system happens to be. For everyone else, they can get a lower level. Admin privilege or whatever it happens to be, or lower level permission. But I admin like they don't need rewrite access to literally everything.

So that's the first thing I focused on is who has access to this and this. So this model is called least privilege. So you're offering the least privilege to most people by default. So that one comes up a lot, right? And then the second thing is for the people who have admin access or any access at all, can you enable some type of multifactor off like two factor auth using, you know, the Google authenticator on your phone or like a YubiKey or some other kind of system to make sure that even if your admin password was published on the internet, nobody could really do anything with it.

There's something else preventing them from logging in. Just those two things like limiting who has access to admin levels in systems. Enabling multi-factor off. Get you most of the way there. Like you're almost to the point where like it is a really hard target now to get into those systems. Now I could kind of fearmonger you and say like, well, you know, it's an had been a little system in like the person who's the admin is kind of sloppy and they set up SMS for their two factor auth instead of like a, you know, authentication app and maybe their phone gets spooked and now like it's possible it's a compromise that there's all these weird ways to kind of work around these things, but it's a much higher bar.

Than it was before where maybe laptop got stolen, right? Or like somebody just like look over their shoulder and saw them log in, or you know, maybe they like sniff their cookie or something like that, and then now they have access to this system. So it raises the bar for that kind of thing just by putting these preventative measures in place.

But to answer your question more directly, which was: how do you know about these things after the fact? Normally, any type of system that has admin panels, not always, but often, will offer some kind of auditing system. When any kind of administrator logs in, it will keep a separate log of all the actions of that person.

So: who logged in, where did they log in from, like where in the world, what was their IP address, and what actions did they perform? If you are talking about the AWS console or the Google Cloud console, they usually offer this kind of system. I think Splunk does as well. So this gives you a couple of things. You've prevented, or not prevented, but you've raised the bar for getting access to these admin systems, you've made it much harder to get in, and you also have some kind of system in place to say, for anybody that's in there, what are they doing?

Do we have some kind of record of what actions have been performed, so that if somehow one of those logins were compromised, we have some record of what actually took place after that happened? So hopefully that helps to answer your question. That gives you a little bit more insight into the kind of things I'd be looking for.
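As one hedged example of pulling that kind of audit trail, here is a small sketch that queries AWS CloudTrail through boto3 for the recent actions of a single administrator. CloudTrail is just one of the console audit systems mentioned above, and the username is a placeholder.

```python
# Sketch: list recent actions recorded for one admin user in CloudTrail.
# Assumes AWS + boto3; "alice" is a hypothetical username.
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

def recent_actions(username, days=7):
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    resp = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": username}],
        StartTime=start,
        EndTime=end,
    )
    for event in resp["Events"]:
        # Who did what and when; source IP lives in the raw event JSON.
        print(event["EventTime"], event["EventName"], event.get("Username"))

recent_actions("alice")
```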

Amelia: [00:21:03] It kinda sounds like there is no incentive for actually monitoring the logs.

Like you can only get bad news that you can't act on. So if I were a little bit more nefarious, I might just never look at them.

Eric: [00:21:16] That's a good point to bring up. I would say that it depends on the logs you're talking about. If we're talking about the audit logs, then for any action that's captured in the audit log, we assume that it's somebody internal to our company, somebody who should have access to this, except for the worst case where maybe somebody shouldn't have access but does have access, or they're doing something weird. Right? So the audit logs are definitely going to be reactive in that case.

But if you think about logging more generally, like if you think about your server logs for your website, right? That actually can be a good leading indicator that an attack might be happening, or that somebody is probing your systems. So you can look for all kinds of interesting things in your logs. You can look at the login page, all the login URLs, and see: is one IP address trying a bunch of different logins and failing? Are they trying one login and it's constantly failing, like they're trying to brute-force somebody's password?

So there's a lot of things you can do to get a signal that an attack is happening, and understand what they're trying to attack in your systems, just by looking at the logs, without actually having been compromised. So in that more general case, you're getting a leading indicator instead of a trailing indicator.

So I think both of the things you said are true, but I think it depends on the system that you're talking about.
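To make that kind of leading indicator concrete, here is a minimal sketch that scans a web server access log for repeated failed logins from one IP address. The log format (a common/combined-style access log where failed logins show up as requests to /login returning 401) and the alert threshold are assumptions to adapt to your own systems.

```python
# Sketch: count failed login attempts per IP in an access log and flag
# anything that looks like brute forcing. Assumes the client IP is the
# first field and failed logins are /login requests returning 401.
from collections import Counter

def failed_logins_per_ip(log_path):
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 9:
                continue
            ip, request_path, status = parts[0], parts[6], parts[8]
            if "/login" in request_path and status == "401":
                counts[ip] += 1
    return counts

THRESHOLD = 20  # arbitrary illustrative threshold

for ip, failures in failed_logins_per_ip("access.log").most_common():
    if failures >= THRESHOLD:
        print(f"possible brute force: {ip} had {failures} failed logins")
```

In practice you would wire something like this (or a log aggregator's equivalent alert) into a scheduled job, so the signal arrives before a password actually falls.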

Nate: [00:22:32] Looking at the logs after an attack is sort of reactive. Taking more proactive steps, making sure that you review who has admin access and you review that everyone has two-factor auth.

That's more of a proactive approach. Would that fall under the more general umbrella of threat modeling? We're looking at this and saying, okay, how could we get attacked? We could be attacked by one of our employees losing their credentials somehow, or leaking their credentials. What are some of the other things we might look at to have a process to prevent things before they happen?

Eric: [00:23:02] Oh, this is a great question, and I'm glad that you brought up threat modeling. Threat modeling is an exercise, and it's by far one of the best values, in terms of getting the most information in the least amount of time, of the things that I do with my clients. So I want to try to explain threat modeling to help you wrap your head around it and give you some examples of what it is and what it means.

It'll help you to answer this question for yourself, and it's the way that you would sort of think about the general problem, which is: we have this system in place, what could possibly go wrong with it? That's the shortest version of what threat modeling is. So here's the simplest real-world example of threat modeling, and this is something that people do on a day-to-day basis.

Here's a simple example: you want to go out from work and meet up with a friend for lunch. Now, there's a lot of considerations that your mind processes before you actually go out to lunch. Is it raining? Do I need to take an umbrella? Right? The thing that could go wrong is, am I going to get rained on? The preventative measures are: do I need a rain jacket? Do I need an umbrella?

Or another thing that comes up is, you know, are there any dietary restrictions I have to keep in mind for myself or the person I'm going out to lunch with? If there are, how are we going to resolve that? Where are we going to choose to go to lunch?

Do I have to be back at exactly one o'clock for a meeting with my boss where I can't be late? Then we can't go somewhere that's too far away. So this is the process by which you think about problems, think about all the different things that could possibly go wrong, and then come up with different ways of solving them, so that you avoid as many of those problems as possible.

Now, one of the risks is that you don't want to get caught in analysis paralysis, where you think, oh my God, whatever we're trying to do, there are so many things that could go wrong, maybe we should just not do it.

My advice is that the approach you take is usually the one that has the most pros and the fewest cons, rather than no cons, because you'll just never get anything done if you try to take that approach. For threat modeling as it applies to security and as it applies to software, let's think about

something that's a little bit more practical than going out to lunch. So let's say we have, I dunno, a basic software system. There's some kind of web application running in production, it has a database somewhere, and we want to ask: what could possibly go wrong with this? How could it be compromised or abused?

The approach for this exercise is you get a bunch of people who work at your company, or work with you on this project, in the same room together, and you just have that conversation. I usually start with a different question, though. I ask: what are the biggest risks for our company? What could really just ruin us or put us in a bad situation?

And it might be: we get a lot of really bad PR, and at this point we would really struggle to recover from it because we'd lose a lot of customer trust. We'd end up on the front page of the New York Times with all this negative press, it would really hurt our brand, and customers would go to our competitors instead. So that might be the worst-case scenario.

So then: how would an attacker compromise you in such a way that it would make a really bad PR story? Or it could be, we have a lot of really sensitive data in the database. Maybe it's personally identifiable information, PII, like people's credit card numbers, or their addresses along with their names and email addresses.

All this stuff. We cannot allow this data to get leaked, because our customers, again, they wouldn't trust us. It'd be a huge problem for our company going forward. And that's a hard problem to recover from. Once the data's out on the internet, you can't un-leak information. That's a hard one to recover from.

So you really have to think about how you can prevent it in the first place. Similar to the problem of admin credentials: after it's done, it's too late, right? You have to solve these problems before it's too late. So just to give you a third one, maybe the worst-case scenario for our company is

financial, right? What would happen if these attackers got access to our bank accounts? Or, you know, maybe we deal in Bitcoin. What if people could somehow compromise our system and steal everybody's money? So that would also be problematic. So there's all these different scenarios you could think about that would be worst-case scenarios, or maybe not necessarily worst-case, total-disaster scenarios, but hard-to-recover-from problems.

And then you think about what ways an attacker or a malicious actor would achieve that goal, based on everything that we know about how our systems and our software work today. And sometimes this goes beyond, again, as I said before, it isn't just your web application or just how your database works.

It could be something much more sinister. Maybe you have a bunch of laptops, and people work from home, work from cafes, you know, Starbucks, and a laptop gets stolen, and that laptop belonged to a cofounder, and the drive wasn't encrypted. Right? So now the attacker who stole this laptop,

maybe they didn't know what they stole, but they find out, and now they have access to the bank account information and the login for your company's bank. It's a bad scenario. So how do you prevent that kind of thing from happening? Or maybe something else: how would they get admin access to one of our systems?

Can we prevent that in some way? So it's this way of sort of thinking about the problem and preventing it in advance. And then once you leave, you should have a list: here are all the things that an attacker could do. And it normally boils down to a handful of very common things. All these different threats of attack

have a few things in common, like, you know, we don't have two-factor auth enabled on all these systems, or too many people have access to this, or drives aren't encrypted, or whatever it happens to be. It's usually a handful of things that are common amongst all those attacks. And those are the things that you focus on fixing first, because if you fix one thing, that solves three possible attacks, right?

So you're really getting a great value for the time that you're spending, and you're also focusing on the right problems instead of the things that might happen but are maybe low likelihood and low impact if they do happen. That's not really where you want to spend your time. You want to focus on the things that are potentially big problems for your company and take the least amount of effort to address.
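As a purely illustrative sketch of that prioritization, here is a tiny script that ranks candidate fixes by how many identified threats they address relative to the effort involved. The threats, fixes, and numbers are all hypothetical; the output of a real threat modeling session would supply the actual values.

```python
# Sketch: rank mitigations by (number of threats addressed) / (effort).
# Everything here is hypothetical -- a real threat modeling session would
# supply the actual threats, fixes, and effort estimates.

fixes = {
    "enable MFA on admin systems": {"threats_addressed": 3, "effort_days": 2},
    "encrypt laptop drives": {"threats_addressed": 2, "effort_days": 1},
    "rewrite the billing service": {"threats_addressed": 1, "effort_days": 30},
}

def value(fix):
    data = fixes[fix]
    return data["threats_addressed"] / data["effort_days"]

for fix in sorted(fixes, key=value, reverse=True):
    print(f"{fix}: value score {value(fix):.2f}")
```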

So hopefully that helps to answer the question. I would say the one other thing about threat modeling is, I can give you a recent example from the news where this kind of thing went horribly wrong.

So I think this was maybe last year. There is a company called Strava who does fitness trackers. The way that Strava works is people wear them, like a Fitbit, and they run around, and it maps out everywhere they run. So then the people who were running can see where they ran, what their route was, and they can keep track of their miles, which is great.

But the other thing that Strava did was it would publish, on a public map, everywhere that you were running. Which, you know, privacy concerns start to bubble up with this, but people start having fun with it, right? They start drawing the Nike swoosh with their running patterns, or they draw, you know, spaceships or Darth Vader or TIE fighters and all this stuff.

People start to do this more and more, and this feature gets really popular. But the other thing that happens is that the U.S. military has soldiers who are using these fitness trackers while they're exercising, but they're doing it at secret bases around the world. And now you look at the Strava map and you have all these little hotspots that show up in the middle of Africa, or, you know, somewhere where there's nothing else.

And there's this little tiny hotspot, and the effect is that Strava has just now leaked the specific locations of all these secret U.S. military bases around the world. So, huge problem, right? How did they not think about this in advance? This ends up on the front page of the New York Times, you know, Strava leaks the locations of all these secret U.S. military bases.

Now, if you're Garmin, a competitor of Strava's who offers nearly the identical product and has the same problem, you might think, we really dodged a bullet, because we're not on the front page of the New York Times. But three days later, or whatever it happened to be, they also were, because they had the exact same issue.

It's kind of interesting to me that they didn't immediately say, we need to fix this now, delete this from the internet, so that we're not also caught in the same place. But it just goes to show, I don't know what the root cause was, but they both ended up having the exact same problem, where they didn't think about what the consequences of their actions were.

A more lighthearted example of threat modeling is the Boaty McBoatface example. There was this online voting system in the UK where they were going to name some ship, or whatever it happened to be, and the top-voted ship name, by far, was Boaty McBoatface, right? And really, that's kind of an abuse of the platform.

Those weren't the answers that they were hoping to get, but it is the answer that they got. What were the mitigations for preventing that? Maybe the consequences weren't that great for Boaty McBoatface, but the consequences of leaking the secret locations of all these U.S. military bases are pretty high by comparison.

So you have to think about these abuse patterns in addition to how could we actually be hacked. Strava wasn't hacked. They leaked this information because the system was working as designed, but it was a use case they hadn't thought about in advance, and it was published on by default, I assume.

So anyway, those are just some simple examples of threat modeling and the ways to think about these things from a larger perspective. And I think the last thing I would say about threat modeling is that it depends who you invite to the meeting where you conduct the threat modeling exercise.

Because if I were to ask a database engineer, what's the worst thing that could happen to your company? A database engineer is going to tell you all about the worst things that can happen to their database, because that's their world. So the best person to ask this question is usually somebody in executive leadership, because they're going to have the best perspective

on what the company, with a broader vision, is doing, and what the real business risks are. They don't necessarily have to attend the meeting and hear all the nitty-gritty details about how the database works or how the web application works or two-factor auth, but they should provide those initial answers to the question of, what's the worst-case scenario for our company?

And then everyone else who's more technical can think about their own systems. Whether it's IT, you know, managing all the laptops of the company, or the database engineer managing all the data storage systems, or the web application engineer running all the Node.js or Python code. Whatever it happens to be, all those people should have one representative in the room to think about their own systems and how they can contribute to the threat modeling exercise.

Security in Today's Web

Amelia: [00:32:48] I feel like your examples have highlighted something about how the web itself has evolved over, I don't know, the past 30 years, where it used to be this scrappy connection of people in different parts of the world, and we got to do weird things and it was all fun and lighthearted, and now it feels like we have to grow up because we can't just have fun anymore.

People will use our fun.

Yeah.

Eric: [00:33:14] That's a really interesting way to phrase that. Arguably, fun has always been profitable to some degree, but I think we're not quite as carefree as we once were. It's certainly true that on the old internet, as I remember it, there were still plenty of problems, like security problems, but the ability to really

make widespread chaos on the old-school internet was much harder to come by. And there's a lot of reasons I could speculate about why that's more true now than it used to be. So one is, the way that you phrased it, it was a bunch of small little interconnected websites, right?

Maybe people were hosting on their own servers, and when they turned their computer off at night, that website went down until the next morning when they turned it back on.

Amelia: [00:33:55] The store is closed.

Yeah.

Eric: [00:33:57] The store is closed. Exactly. And I've had that experience plenty of times, when I've seen that for a website I was looking at at 3:00 AM. But now, if you think about it, because of the way the industry has grown and evolved, servers run all the time and it's cheap.

I mean, it's practically free to run a web service. And most of them, a lot of them, are consolidated on three major platforms: AWS, Google Cloud, and Azure, and they're on all the time. And, you know, if there happens to be a fundamental security flaw in Google Cloud or AWS or Azure, that affects almost everybody, right?

And we've seen this come up a few times, I would say in the last seven years. So, you know, when I worked at Optimizely is when we had a number of industry-wide security vulnerabilities come to light. Heartbleed was one of them. Shellshock was another. And if you were working in the industry at the time, you probably remember it was all hands on deck.

We had to patch all of our systems to prevent this, because the fundamental problem in these cases was that some version of bash was insecure and you could compromise it remotely. And then the other one was, there was some kind of underlying security vulnerability in OpenSSL, which is the library used by basically every Linux server, which is most of the servers on the planet.

And this was a huge problem. Everyone had to go out and patch all their servers at the exact same time. So for a couple of weeks during these periods, nobody was writing code. Everyone was trying to patch their systems to make sure that they weren't the ones that got hacked.

And the other thing that also happened is that the targets have sort of shifted from being, well, I could compromise this computer, but it's off from midnight until 5:00 AM and it's just one computer, right? Now I can compromise all these computers. So the targets are much bigger because connectivity has improved.

The sharing of information has improved, which by far has more positive effects than negative, like there's GitHub and all these ways to share code. But now the things that can also be shared are, here's a tool that allows you to just click a button and run some kind of crazy massive attack.

Or, here's the source code for the Mirai worm, which shut off most of the internet. And when was this, like 2015? I can't remember exactly. So they can share the nasty code, the dangerous code, as well as the good code that, you know, people write day to day. And I think for the most part people just want to do the right thing, but there's always going to be malicious actors out there.

And it's certainly true that now they have easier access to some of these tools, and it's problematic. But the good news is that everyone's getting smarter about security. They understand what the attacks are. As technology improves, the types of attacks are going to also mature and evolve with technology, but people are more wise to it now.

As has always been true of history, we learned from the mistakes of our past, or at least we should, and hopefully like the technology we build tomorrow is better than the technology we built yesterday.

How much should a responsible Web Developer know about security?

Amelia: [00:36:51] I bet your experience of these attacks is a very different experience than the one I, as a software developer, have.

So when Heartbleed came out, I remember all I knew was: it's a big deal, we're freaking out, and everybody should be upset, and maybe I can spend three hours reading up on it to try to understand. So as a software developer who doesn't work in security, how much should I know about security? What are some basic things that I should know, and how does that differ from, say, someone who isn't a software developer?

Do they need to know anything?

Eric: [00:37:27] Oh, these are both excellent questions, and thanks for sharing your experience of Heartbleed. I just want to clarify that at that time I had a lot of different roles. When I worked at Optimizely, security was just one small portion of all the things I had to focus my time on.

And it was really a group effort of everyone coming together at the company, all the engineers and IT professionals, to make sure that we did the right thing and patched our systems, and communicated to everyone that we patched things as quickly as we could and that, to the best of our knowledge, nothing was compromised.

So I think we did everything we could in that situation. We worked as a team to kind of solve the problem. Just like you said, it was kind of pants-on-fire, everybody knew, oh my God, this is everyone. It isn't just some companies, it's everyone, except for the few people out there who run Microsoft-based servers.

I'm sure they're laughing at us, but that's okay. We get to laugh at them the rest of the time, I would say. Yeah. So your question was, what should you think about, as a software developer, when it comes to security on maybe a regular basis, or how do you learn more? Am I remembering correctly?

Amelia: [00:38:26] How much do I need to know?

Like how much should I feel responsible to understand?

Eric: [00:38:31] I would say that my general advice, which is less specific to security, is: take in as much information as you're comfortable with, like, you know, read some more diverse sources. I think it's common for engineers, especially those who are just starting out, to really focus on how do I write better code.

That's the one thing they kind of focus on: how do I write the best possible code? How can I learn all these interesting coding design patterns and make my code run faster and have fewer bugs? And I would say that the more diverse sources you can read, the better you'll be at your job on the whole.

So here's some examples. Try to understand the perspective of the product and program managers at your company, or the marketing department. What does their job look like? Or the support team, what does their job look like? What kind of questions are they getting from your customers on the support team?

How are they helping the customers? What does that system look like? How do they do their jobs? Do they have to provide technical support? At some companies I've worked at, we were allowed to sit in on sales calls with potential customers and just sit there and listen to the concerns, sometimes security concerns, but sometimes just product concerns, from potential customers.

We could also sit in on calls with customers, like existing customers, and hear about their problems. And it really helps to have a change of perspective, to understand their perspective about, you know, what are the things that actually concern them, because they're going to be different than what you assume they are.

Which I think really helps. As far as security goes, the same thing is true if you have the opportunity to participate in your company's security program, if they have one. I would say the right way for a company to run a security program is one that's inclusive instead of exclusive, which is to say that you have office hours, you invite people to join and participate.

Instead of saying, security is our world, we're trying to protect you, just stay back and let us do our jobs. Right? I vehemently disagree with that approach, with this exclusive approach where they play the new sheriff in town, like they're trying to protect everybody and no one else can really play the game. Because it has a number of problems, and the main two that come to mind right now are: nobody likes to be told what to do.

They like to understand what they're being asked to do, so they can comprehend, okay, there is a good reason why I have to do this other work, instead of, like, Joe over in IT just saying, you have to do this, whatever the security thing is now. Like, that's annoying. Okay, I guess I'll do it, because they wrote a new policy.

And the other thing is that by being inclusive, it helps to spread education and awareness about security. So for example, if you worked at a company that had an inclusive, you know, anybody-can-contribute security program, you would probably have the opportunity to go in and maybe participate in the threat modeling exercise, and you'd have a better understanding of, you know, what are the threats our company actually faces?

Which might inform you later on if you're creating a new feature or a new product for that company. Oh, I know that if I, you know, create this web service, these are the kinds of threats that it might face, because you've experienced that threat modeling exercise before. So I know that I can't use X, Y, and Z type of database. I don't know, just some random stupid example.

So it's really just about getting more information in your mind, different perspectives in your mind, and all of this stuff will not necessarily be immediately useful. It'll just be one of those things where later in your life it'll become apparent, like, oh man, I'm so glad that I participated in that, and I'm so glad I learned that thing.

Because now it actually makes sense and I finally get it. So I would say I could point you to several different security-related blogs and, you know, newsletters and Twitter accounts and all this stuff, but you're just going to get so inundated with all these technical details, and it's going to drag you down mentally.

Cause a lot of them are just like aggregators for like, here's another company that got breached and here's how they got breached and you're going to think that the world is falling apart. I would say that like that's not going to like bring up your spirits about security and like the state of the world.

So instead, I would focus on the things that you can learn in your most local community, your local environment. If your company doesn't have a security program, there might be a local OWASP chapter. OWASP is an open security organization. They're around the world; most cities have some kind of local chapter.

I know that here in Portland there are monthly meetings you can go and attend. They usually have some kind of guest speaker who will give a talk about something related to security. So I think engaging those types of communities can be really beneficial as well. You know, the other thing you could do is just attend a security conference.

I wouldn't necessarily recommend starting with Black Hat or DEF CON in Las Vegas. Those are very intense and, I would say, deeply technical and culturally heavy. I would say there's something a little bit more lightweight that would be beneficial: if you went to a JavaScript conference and somebody was talking about JavaScript security, attend that talk, see what you can learn.

I think that would probably help you more on a day-to-day basis than going head first into the deep rabbit hole of security.

Amelia: [00:43:14] Right. Don't start with the black diamond ski slope.

Eric: [00:43:18] Right? Exactly. Exactly. That's a great analogy. I don't ski, but I get the reference.

Amelia: [00:43:23] I also love your answer because I realize that as a front end developer, I don't have to worry about what other people within my company know.

Whereas within security, I feel like you have to worry about your coworkers, whether they open a malicious email, or that security could be attacked through people, which I think I would find terrifying.

Eric: [00:43:46] It's certainly true. So as far as like the nasty email example that you gave, that's such a great one.

And I've seen this firsthand, where emails were sent to a company I worked for, and they were spoofed so it looked like a legitimate email from one of my coworkers. It looked like any other email that you would get if they were sharing a Google Doc with you, right? It would say, here's the name of the doc,

whatever it happens to be, and there's a link in the email, and you click on it, you open it. But the clever thing about it, there's two parts: one is spoofing their email addresses, which is not technically challenging, it's pretty trivial, and there's a few things you can do to mitigate that. The clever bit was,

they make it look like a legitimate email where nobody would really be the wiser on a day-to-day basis. But you open it, you click on the link, and it takes you to a page that looks exactly like the modern Google login. So now if you type in your password, they have your password and they know your email address.

So it's a pretty clever way to phish people, and they can get a whole bunch of logins and passwords relatively easily. And I think the other thing that they do that's kind of clever is they've gotten wise; they're not running all these web servers and scripts that they use to target different companies from their home machines.

They just run them on compromised AWS accounts. So you can't blacklist the IP addresses for AWS, because then all of your code shuts off, right? So it's pretty clever the way that they're kind of using the same systems that we use to write normal white-hat code. As far as your concern about, if you work in security, you have to worry about everyone:

I think that's true. You're going to be worried a lot, but that's your job. Your job is to be the one worrying so that other people don't have to. But that's kind of the motivating factor. If it's keeping you awake at night, then that should lead you to action, to do something to make sure that that one thing can't happen.

So you can spend your nights being awake worrying about something else, something maybe you can't control. Right? And then you think, you know, if I can't control this thing, what can I do?
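Picking up the spoofed-email example from a moment ago: Eric notes that spoofing a sender address is trivial and that there are a few things you can do to mitigate it. SPF and DMARC DNS records are among the usual mitigations, and here is a hedged sketch that simply checks whether a domain publishes them. It assumes the dnspython package, and the domain is a placeholder.

```python
# Sketch: check whether a domain publishes SPF and DMARC records,
# two of the standard mitigations against sender-address spoofing.
# Assumes the dnspython package; "example.com" is a placeholder.
import dns.resolver

def txt_records(name):
    try:
        return [r.to_text() for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

def check_domain(domain):
    spf = [r for r in txt_records(domain) if "v=spf1" in r]
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if "v=DMARC1" in r]
    print(f"{domain}: SPF {'present' if spf else 'MISSING'}, "
          f"DMARC {'present' if dmarc else 'MISSING'}")

check_domain("example.com")
```

A missing or permissive DMARC policy is exactly what makes the "looks like a coworker shared a doc" email so easy to pull off.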

How Do We Think About Security and New Hardware?

Amelia: [00:45:43] So you mentioned this example before, which is smartwatches, which haven't been around for that long. How much do you have to keep up to date with new technology? Like, we have Google Homes in our houses and there are smartwatches, and there's something new every year.

How much do you worry about new devices that come out or have to keep up to date with new tech?

Eric: [00:46:05] Maybe you have a Google Home in your house, but I don't have one in mine. Well, I guess there's a lot I could say on this, and I'll try and keep it more succinct. So at a philosophical level, the same problems always persist, and the security threats

simply follow along and mature and evolve with the technological changes that we have. If you had a computer before and you didn't have an iPhone or, you know, a smart home, there was still the possibility that your computer could be compromised remotely and the camera could be taken over, the microphone could be taken over, and that could lead to

some kind of disastrous result for you and I. Day to day, that's not that big of a deal. If our computer gets compromised, okay, what's really the worst case? They see me? Okay. I don't often sit naked in front of my computer, but even if I did, nobody really is going to want anything to do with that.

Let me give another example, though, where the risks are much higher: the federal government, the Pentagon. They have a policy where if you go into a conference room, you cannot bring a cell phone, you cannot bring a laptop, you can't bring anything that has the ability to record or transmit information and has a battery in it.

Like, that's the policy. What that means is that if you want to give a presentation at the Pentagon, you have to print out all of the slides for your presentation on paper and then give a copy of that to every attendee that's in the meeting, and then at the end of the meeting those all have to get shredded securely.

I know this because I have a friend who works in the Pentagon. One day this person was very concerned because they had to give a presentation the next day. I think it was right before the government shutdown, and all of the printers at the Pentagon, which is the only place they were allowed to print off these classified documents,

all the printers at the Pentagon were out of ink. So how do they give their presentation, right? So these are weird problems that you take on for the sake of security, where for the sake of national security you can't take in any of these types of devices that we take for granted. Like, you know, if you told somebody in Silicon Valley that they had to print off a paper presentation,

and they couldn't bring their laptops and phones into a meeting, you would get fired. They would think you're crazy and kick you out of the company, probably, or just tell you that you're paranoid. So, I used to live in DC and I worked in a similar environment there. So for me, having lived on both coasts, in the DC area and also in Silicon Valley, the differences are so stark.

It's really crazy. But it also goes to show the vast differences, the vast levels of security that people take on based on the level of risk. So I would say that's the fundamental thing to keep in mind: what are the risks that you want to avoid if you're going to enable internet-of-things devices in your home?

I may not have a Google Home in my house, but I do have a Nest thermostat. And I know that the Nest thermostat doesn't have a microphone in it, and, you know, could it be compromised remotely? Probably. Is somebody going to make it too hot or too cold in my house? Big deal, right? But a Nest thermostat is so much better than a non-Nest thermostat

that, like, why would I not have a Nest thermostat? They're so great. Just a couple of days ago, with the Ring doorbell company, it was published that these podcasters were taking over people's Ring doorbells in their houses and harassing people across the country with the Ring doorbells.

Right, which is crazy. So I don't have a Ring doorbell, because I know that their security is pretty low. And that's really the problem with the internet-of-things stuff: they want to make these things cheap, which means they have to compromise on something, and the one thing they usually compromise on is the security of their product.

And that's actually how the Mirai worm spread. They didn't pay for a bunch of servers that had really high bandwidth; they compromised a bunch of internet-of-things devices and used them like a swarm to take down internet servers around the world, which is crazy. It was just people's cameras and stuff in their houses.

They used them like zombies to send more traffic at things. Those are the kinds of things I think about. You know, you could have those things, no big deal, but just be aware of what the risks are and whether or not you trust the company behind the device that you're using, right?

Amelia: [00:50:00] It seems like it's a trade-off between convenience and money and security.

Eric: [00:50:05] I'm glad that you said that. At my first job, back in 2001, I remember stating, and I don't know if I was just trying to be clever, but I said security is inversely proportional to convenience, and I think that is still true today. But going back to the seatbelt example that I gave you earlier:

it is, you could argue, inconvenient to get into a car and have to buckle your seatbelt. It's inconvenient to have a seatbelt on if you want to take off your jacket or put a jacket on, you're too hot or too cold. It's inconvenient to have a seatbelt on if you are in the back seat and you'd like to lie down, but it's a trade-off for the level of security it provides.

It might be inconvenient, you might be a little cold or a little hot, or you can't slide across the car or whatever, but it's better than flying out of the windshield if you happen to get in an accident. Right? So it's constantly this trade-off of convenience versus security. I think it's still true. I would say that because technology is improving so much, they are lowering how inconvenient it is.

So here's some good recent examples. I have an Android phone and it has a fingerprint reader on the back, and it's great. I can unlock that thing in a split second, and it just, boom, there it is. I don't have to type in a password or put in a code. And the same thing is true for the newer iPhones and the new Pixel phones: they have face unlock.

So you just look at the thing and it unlocks for you. If you're James Bond and you are tied up in a chair somewhere and they want to unlock your phone, they just point it at your face, and, you know, if you're under arrest, you can't prevent that from happening. But, you know, most of us aren't James Bond.

So I don't think that's necessarily the primary concern that you want to have, but that is the way that I would usually recommend thinking about these things.

The Dangers of Wireless Security and Serendipity

Amelia: [00:51:40] Your seatbelt example brings up this point: I think about how dangerous it is to drive versus how dangerous it is to take a flight cross-country, and driving is way more dangerous, but I still get really scared every time I take a long flight because it feels so much scarier.

So I bet in the security world there are things that don't feel like big risks but are, and things that are big risks but feel like not a big deal when you think about them.

Eric: [00:52:08] Here are a couple things that come to mind. The first one is wifi. Wireless internet is something that's so prevalent now

that we assume that when we go to the airport, there's going to be free wifi. We assume that when we go to a restaurant, there's going to be free wifi. We assume that when we go to work, there'll be wifi we can connect to so our phones have service. We assume that there's going to be wifi everywhere we go, and when there isn't, it seems like a huge problem.

Because there's free wifi everywhere, that means there's a network that you're connecting to, and who knows who else is on that network, right? When you go to the airport, who knows who else is on the network at the airport? Who knows if they're monitoring all the traffic that's going through your computer? Who knows if they've compromised the router at the airport?

That's a bigger problem than I think people realize. Like wifi security: even though you have your crazy long password on your router and you keep it up to date all the time, routers and wireless security are crazy, crazy insecure. Cracking the password on anybody's router today, even if you get the most fancy high-end router off the shelf and bring it home and plug it in,

with a laptop you can probably break the longest password that you can possibly put into that thing within a few minutes. You just need enough traffic going over the wire. So the more people that are on a network, the easier it is to sort of crack the security and get access to the router.

And then you're on the network, and once you're on the network, along with everybody else, what else can you do? What traffic is going around that isn't encrypted in transit? If you're on the network, let's say you have wifi at your office and you have some kind of admin panels like Nate was talking about earlier, admin systems set up so that you can only access them if you're on the network. And if you're on the network, maybe you can get access to the internal wiki for all of your company's internal documentation.

And if you can access that internal wiki for all the internal documentation just because you have the right IP address, well, what kind of information is on that wiki that you don't want anybody outside your company to have? You've assumed that everyone on your network is trusted, right,

just because they have access to the wifi. So not only is the encryption protocol not necessarily secure on modern wifi, the devices themselves aren't that secure. That was a huge target for a lot of the known vulnerabilities in the world. Routers are a huge target, because if you can get access to people's routers, you get access to their whole home network.

Then you get access to the Ring devices and everything else that's on their network, and you can do who knows what. So yeah, wireless security is one of the big ones that I think people take for granted.
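Since the recurring worry on shared wifi is traffic that isn't encrypted in transit, here is a minimal standard-library sketch that opens a TLS connection and verifies the server certificate before anything sensitive is sent. The hostname is a placeholder, and this is only one small habit among the mitigations discussed above.

```python
# Sketch: connect over TLS with certificate verification so traffic on an
# untrusted network (airport wifi, a cafe) is encrypted in transit.
# Standard library only; "example.com" is a placeholder hostname.
import socket
import ssl

def verified_tls_connect(host, port=443):
    context = ssl.create_default_context()  # verifies certs against system CAs
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print(f"connected to {host} using {tls.version()}")
            print("certificate subject:", cert.get("subject"))

verified_tls_connect("example.com")
```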

Interesting bit of trivia: wireless security is one of the ways that I first got involved in security. And I will definitely date myself with this, so I apologize for sounding old. Back in the 90s, when wireless networks started to become more prevalent, like they were inexpensive and people could start using them, there was this article that came out, I can't remember if it was 2600 magazine or what it was, with an antenna design that you would make using a Pringles can, like Pringles potato chips.

So you use the empty can and you put a rod in it, some nuts and bolts, and you essentially use it as a directional antenna. It has a wire that attaches to the wireless card that would be in your laptop, which, back then, wireless cards were these things you would slide into the side of the laptop.

And they looked like a big fat credit card. That's how you would add wireless networking to your laptop: you'd slide this huge thing in the side, and you could add an external antenna and then connect it to the Pringles can. So the Pringles can gives you a longer range, and there was some software you could run that would allow you to

do this activity called wardriving. So you'd drive around in the car and point this Pringles can antenna around, and it would help you detect who had wireless networks, and it would just say, boom, I found one. It would keep track of where you were, if you had a GPS card or something in your laptop, so you could map out

where the wireless networks were, and whether they had a password protecting the wireless network, which back then not a lot of people did. So it was a way to sort of map out where all these wireless networks were and sniff all the packets from a distance, without having to sit in a person's driveway or right outside their business.

You could be across the street, or just driving around, and pick up all this traffic. So I built this Pringles can antenna, and we'd drive around my small town in Northwest Ohio. And in the entire town there was, like, one wireless network, and it was outside the local ISP, and that was it.

There was nothing else. We drove around for hours and found nothing else. But from that experience, I kind of kept an eye on wireless security, because I thought it was really, really interesting that you could do this stuff, and also the possibilities of wifi back then were just awesome, instead of running wires everywhere, which obviously sucks.

Also around that time, I was working as a bartender at a local country club, and it was a slow day, like a Sunday. And this older gentleman comes in, one of the guests, and he comes in with his laptop, and laptops back then were these huge clunky monstrosities. So he's on his laptop, he's playing around, and I see he slides in, ka-thunk, this wireless card in the side, because at the time the country club had started offering wireless internet.

So I think that was a big deal for all the guests. So, you know, being a nice bartender, I have a casual conversation, like, oh, did you know that wireless security is not that good? He's like, oh, interesting. He didn't really care about that. He's like, are you into computers? So, I was going to school at the time, working part-time as a bartender.

So he shows me this piece of software that the company he owns is working on, and I was like, oh, that's really cool. He's like, if you're interested, I'll put you in touch with my son, who's the president of the company. Like, you know, we might have a place for you. So sure enough, I applied, and I got the job there, and I would be one of only two software developers at that company. And basically what they did was wide-format printing.

So it was art reproductions and all this other stuff, signage. If you go into a Home Depot and you see, like, the Pink Panther cardboard cutout, this company made those. So I got this job, and, like, two weeks later, the other software developer that worked there, the one that I'd be learning from, this being my first job as a software developer,

he leaves, he takes another job. So I'm on my own working at this company, which, we could get into plenty of stories about that. But yeah, so anyway, wireless security kind of led me down that path and got me interested in security, but it also helped me land my first job, just by talking to people, which I thought was kind of interesting.

Hacking for Good

Amelia: [00:58:38] That's an amazing story. How long was it before you came back to security from your initial interest in internet security?

Eric: [00:58:45] Yeah, so I would say that security has always been a common thread for me. It's something that I've always paid attention to. At every company I've worked at, I've had a little bit of a focus on security.

Even when I worked at Google for a number of years, I talked to lots of the security team there. And one of the last things I did before I left, so the team that I worked on at Google was very closely aligned with the marketing organization, the marketing team, one of the last things I did was I drafted the policy for

any contractors the marketing team would want to hire to do one-off websites: all the security concerns and technology concerns that those contractors would have to follow in order to make something that would be used by Google. You know, I always had my fingers in security. Around that time

I met a lot of interesting people. Some were interested in security, some were involved in security at Google. But even when I had that first job, that was the first time that I got to use my ability to compromise another computer for good. So I'll give you the lay of the land. This is 2001, 2002, so it was the fairly early days of the internet.

The company primarily has one main database, and it's on just a regular workstation. It has a CRT, a huge, gross-looking beige monitor attached to it, and a gross-looking beige keyboard. And that's their database. It's running Windows NT, N-T, which is this old-school version of Windows that was used for servers, and it has database software on it.

It's called SQL Server, which is Microsoft's version of, like, MySQL. So it's just a SQL database, and this computer is primarily designed to run that. This database holds all of the inventory and all the orders that are coming in for all the customers that this company has. So we'd get orders every day as an XML file that we'd have to parse and insert into the database, and then other servers would work with the APIs on the SQL Server machine to run internal websites that people could use

to see what the inventory was and manage all the orders. So, there was an IT guy that worked at the company. He did deliveries and then he ran all of the IT stuff. And then it was just me, the only software engineer, doing all the computer stuff. Everything else was printing, printing, printing and business.

So one day he wants to log into this computer to do the weekly backup. They do a tape backup of the hard drive of this computer once a week, just to make sure that, in case it gets compromised, we would at worst lose a week's worth of data. So he goes to log into this computer on a Friday and he can't log in. And there's only one login, it's just the admin login, and the password doesn't work.

And he's a competent person; he's typed this password hundreds of times before, and he can't log in. So, you know, I try, like, oh yeah, maybe you're just fat-fingering it. So I tried, and sure enough, I can't log in either. So

he calls the software engineer who used to work there, who happened to be in town, like, hey, can you come look at this thing? He came over, and he's like, oh yeah, you guys may have been hacked or something, because the login isn't working anymore. He's like, just restore from backup, it'll be fine. But it was Friday, which means, worst case scenario, we would have lost that whole week's worth of data. And I was like, that sucks. I don't want to lose a week's worth of data.

There has to be a way that we can get access; it's sitting right in front of us. How can we not get access to this computer? So I was like, well, listen, let me have an hour with it. I know that there are some known vulnerabilities for Windows NT. I want to see if I can use some of those to hack this computer,

our own computer, which is legal, so we could get access to it and then change the admin password back to something that we could use to log in. So sure enough, after about 30 minutes, I was able to execute this exploit against the computer, get access to it from my laptop, and reset the admin password. So we were able to get back into the machine and it was no longer compromised.

What was interesting was that we don't actually know the root cause of what happened to the machine, but what we suspected had happened was, at the same time, that same era, there was this internet worm going around called the sa worm. That's S-A, the letters S and A.

Which will also give you a clue as to what the worm did: sa was the default admin login for the SQL Server software. So instead of typing in root to log in as admin, or admin to log in as admin, you typed in sa; that was the default login. And then the password, there was some default password, I can't remember what it was.

So the sa worm would go around and look for machines that had SQL Server on them, infect them by using the default username and password, and then it would look for other SQL Servers that were nearby that it could access over the network, and do the same thing and compromise those. I don't think that we necessarily had the default password, but I think it had some way to compromise or figure out whatever the credentials were and then log in and take over the machine.

So we think that that's what happened: the sa worm, or some modified version of the sa worm, compromised this computer and prevented us from logging in. So, yeah, long story short, I was able to hack our own computer so that we could get access to it again, back up the contents of the database, and then wipe the machine and start over from scratch, so that we no longer had what we assumed was a compromised machine, and then load the data back on.

So because of my interest in security, and knowing what vulnerabilities are possible and just kind of understanding what could be done, I was able to save that company a week's worth of data.

Amelia: [01:04:14] That's amazing. And I wish we had more time to hear all of your stories because they're all so great, but I did want to talk about, you're writing a book on InfoSec.

Tell me a little bit about that.

Eric: [01:04:25] The book that I'm writing is essentially the same template I use when I work with one of my clients. So it's how to take a company that has become successful, they have customers, they're profitable, they're making money, but they haven't done security at all, and go from zero to one.

What are the first things you do? When you haven't done security at all and you know that you need to, what's the first thing you do? So this book talks about the right way to approach creating a security program, a holistic approach: here are all the things that you could do.

Here's a way to create a data-driven security program so that you know what it is you're going to be doing over the next year or two years. You know how to create metrics to track your progress and hold yourselves accountable. You know how to have an inclusive, instead of an exclusive, security team. You know how to

do security awareness. You know the right things to log and monitor so that you're not spending time on the things that don't matter. You want to focus on the things that have the lowest amount of effort and the highest amount of impact, and you need to understand what those things are and what your risks are, so that you can spend your time and your resources wisely to create a security program that works for your company, so that you can sleep at night,

right, and aren't constantly worried about all these emails you're getting, or getting hacked or compromised or ending up in the news.

Amelia: [01:05:43] What is the hardest part that you're finding in writing a book about this, and how is that different from consulting for a specific company?

Eric: [01:05:53] I think that I've had a relatively easy time, because for most of the clients I work with, I have to document all the things that went wrong with their company, or all the things that I found.

I shouldn't say the things that went wrong, but I have to document all my findings and put them into the context of a report. So I think that, you know, having done that a few times, like putting all the advice that I give into written form, it becomes a little bit repetitive. It kind of lends itself naturally to becoming a book.

But the difference is that it isn't specific to one company anymore. You have to think more broadly. Instead of saying, here's how, you know, a company that sells whatever widgets or widget services on the internet should do it, you have to think about: if you're a medical company, you may have these concerns.

If you're a financial company, you might have these other concerns. If you're just a technology company, and I shouldn't say just a technology company, like if you're selling online services, you might have a different set of concerns. So there's a lot of mental juggling of whatever sector the reader might be in and the level of concerns and risks that they have, which is a little bit

strange, because the fundamental problem is that I don't know who the reader is at any given point in time, versus when I'm working with a client, I know exactly who they are and I know exactly what the risks are. So writing, again, in a more general way can be a little tricky. And it's just a problem of phrasing and wording.

So that's been probably the thing that I've struggled with the most.

Amelia: [01:07:11] Yeah, that makes total sense. And I can't wait to read it. So I want to wrap up here, but I did want to ask you for someone who is already in tech. But once to learn more about security and maybe once they start getting into the fields, what is the best resource for them to start learning more?

Eric: [01:07:31] Sure. So the best resource for somebody to start learning about security if that's what they want to get into. Let me preface my answer to that question with it depends on. What aspect of security they want to get into. And what I mean by that is they're deep technical careers that they could follow in security where you know they're going to focus on being a security researcher.

They'll be doing penetration tests against the companies they work for, or focusing purely on security research. But there is another route, more aligned with the type of work that I do, which is: how do you manage a security program? How do you run it from a higher level? If you wanted to be a CIO or a CTO of a company, or the head of security at a company, you don't necessarily need the deep technical skills of a security researcher.

The day-to-day role is very different. You need good communication skills, you need good soft skills. You have to be able to communicate and present all this information effectively. You have to be able to plan out how you do security and then tell people: this is what we're doing and why, and what kind of help we need from you.

So if you want to take the deeply technical route, there are a lot of resources. As I mentioned before, the local OWASP chapters are a great place to start, because you'll meet the people in your local security community, you'll understand where they work, and they're people you can relate to and ask questions of.

Almost always there'll be somebody with a similar background to you: a developer who got interested in security, and you can ask how they did it. Another great resource is security conferences. One of the more low-key ones is called BSides, as in "B-Sides", and they run them in most of the major cities in the world.

I know they're international. And again, it's pretty low-key. It isn't a massive conference like the AWS conference or DEF CON or Black Hat, so it's not going to be overwhelming, and there's a lot of diversity in people's backgrounds. I know a lot of software engineers attend, so I would say starting those conversations and meeting people in security is a great way to see what's out there and who you can talk to if you want to learn the skills.

OWASP also usually runs hands-on exercises, I don't know if it's monthly, that you can use to learn some of these things. There are also, like we talked about before, the bug bounty programs that are out there, so anybody can apply to be a security researcher on those platforms.

So if you think you can find security vulnerabilities at any of the companies that are registered on these platforms or have a bug bounty program, you can register as a researcher, choose a target, look at their website or their application or whatever happens to be in scope, and try to find vulnerabilities.

If you can, you can report them in the right way and potentially get paid for it, so that's another great way to learn. And then lastly, you could always take a more academic approach: if you're in university, most schools are offering some kind of cybersecurity classes.

They tend to be deeply technical and focus a little bit more on the IT side, but it's a great place to start.

Amelia: [01:10:27] Those all sound like great resources, and we'll include links to OWASP, BSides, and the bug bounty programs in our show notes. Thanks so much for coming on. It's been an absolute pleasure to have you.

Thanks a lot.

Eric: [01:10:39] Yeah, no, thank you for having me.
