by Josh More
Publisher: RJS Smart Security
Number of Pages: 24
Josh More over at RJS Smart Security obviously had some fun putting this together. Lean Security 101 is a neat little infographic that looks an awful lot like a comic book.
Percy the Protection Pangolin
I'll admit it: I had to look up what a pangolin actually is (+1 for originality). The pangolin is Josh's sidekick throughout the story.
The 80x5 Rule
The biggest insight I got out of this comic was the 80x5 Rule. You've probably heard of the "Pareto Principle," commonly referred to as the 80/20 rule. The 80x5 rule builds on this idea using concepts from Lean.

The 80/20 rule is often quoted by business managers and executives as a rallying cry to take action or kick off a new project, justifying quick returns with minimal effort. But hidden within this management standard is an implicit acknowledgment that getting a project to 100% perfection (meeting all of the requirements on time and within budget) becomes increasingly difficult. The law of diminishing returns takes over, and additional effort is needed just to make incremental progress toward the goal.
When applied to Information Security, this concept is just as true. There is no silver bullet for protecting your digital assets, so no single project or technology or defense mechanism is ever going to be 100% effective at keeping your data safe.
The 80x5 rule is designed to help you get the most value from the least amount of effort while maximizing your defensive posture.

The 80x5 rule says that instead of spending all of your effort trying to implement a single defensive measure (which will never reach 100% effectiveness), it is much more productive to add complementary layers of security. After you have spent the first 20% of your effort on that defensive measure (and reached 80% of the results), any further effort on that task could be considered waste (based on Lean). In terms of opportunity cost, if you took the remaining unspent effort (you still have 80% left at this point) and divided it into four more blocks, you could potentially get 80% results from each of four additional projects. This is obviously a much better ROI than spending that remaining 80% of your effort to obtain at most 20% more benefit from your current task.
Assuming each layer is 80% effective (based on the Pareto Principle), eight layers could give you up to 99.999% effective security. Yes, there can and will be various exceptions to this line of reasoning. But why spend all your effort on fixing things that should be considered "good enough" when there are other more productive security measures you could be working on (like building up your incident response team and testing your IR plan)? I see this as an important tool for helping to prioritize competing projects and assessing those final inches toward the goal line.
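The arithmetic behind the layering argument is easy to check. Here's a minimal sketch (treating each layer as independent is a simplifying assumption; real defenses overlap):

```python
def combined_effectiveness(per_layer: float, layers: int) -> float:
    """Chance that at least one of several independent layers stops an
    attack: 1 minus the chance that every layer misses."""
    return 1 - (1 - per_layer) ** layers

# Each layer assumed 80% effective (Pareto-style "good enough")
for n in (1, 5, 8):
    print(f"{n} layer(s): {combined_effectiveness(0.80, n):.5%}")
```

Five layers already gets you to about 99.97%, and eight layers to roughly 99.999%, which is where the comic's numbers come from.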
The book goes into more detail, but hopefully you get the idea. Go download a free copy for yourself, http://www.rjssmartsecurity.com/Lean-Security-101-Comic/, and give them a call about a free Lean Security Assessment.
I keep seeing tweets and blog posts, and hearing talks at various cons, that repeat statements such as:
"[insert unpopular framework/checklist here] has done nothing to improve cyber security, and in fact it has probably made security worse"
And I don't believe it!
I recently wrote about how the InfoSec echo chamber keeps dogging on "outdated best practices", and today I started wondering if these echo repeaters all work for Gartner? So I'm proposing that all framework/checklist bashing should use the hashtag #ChecklistIsDead from now on.
My point is that one of the biggest reasons InfoSec is failing is not because we are using a bad checklist. We are failing because we aren't actually following through with implementing *any* checklist consistently, whether it is the PCI DSS, FFIEC, FISMA, NIST, or the SANS Critical Security Controls. I don't really care which checklist you are being graded on (most of them can be cross-referenced with each other anyway; it's just different wording for the same basic goals), but if you can't produce a list of your key business processes, a list of your critical information assets, and updated diagrams for your network and data flow... then what makes you think you are going to do any better with the newest #RiskManagement flavor of the week?
For example, I hear a lot of people complaining about the PCI DSS in one breath and then calling for the need to replace checklists with a risk based approach to security. That's all fine and good, but if companies can't comply with the intent of PCI DSS v2.0 Requirement 12.1.2 to perform a risk assessment [only] once per year, then how well are they going to do on their own without such a requirement?
Establish, publish, maintain, and disseminate a security policy that "12.1.2 Includes an annual process that identifies threats, and vulnerabilities, and results in a formal risk assessment. (Examples of risk assessment methodologies include but are not limited to OCTAVE, ISO 27005 and NIST SP 800-30.)"
I have read some articles lately (here and here) arguing that security policies and frameworks are too siloed and need to span functional boundaries. I'm sorry, but show me a framework or checklist that specifically calls for its implementation to be contained within silos. These failed implementations are the direct result of bad decisions made at the highest levels of companies whose leaders don't understand the threats and vulnerabilities facing their organizations. Yet these same decision makers are supposed to magically understand the risk derived from those same threats and vulnerabilities in order to invent a better #RiskManagement program that fixes their security failures?
All the while, actually implementing the items on the existing checklists keeps slipping through the cracks or falling down the priority list (and just getting a QSA to submit your RoC to the card brands doesn't mean your company has actually implemented all of the requirements on the checklist).
There were several interesting items listed in a recent paper by James Lewis of the Center for Strategic & International Studies, Raising the Bar for Cybersecurity.
"In the last few years, in 2009 and 2010, Australia’s Defense Signals Directorate (DSD) and the U.S. National Security Agency (NSA) independently surveyed the techniques hackers used to successfully penetrate networks. NSA (in partnership with private experts) and DSD each came up with a list of measures that stop almost all attacks.
"DSD found that four risk reduction measures block most attacks. Agencies and companies implementing these measures saw risk fall by 85 percent and, in some cases, to zero."
<sarcasm>Too bad checklists are dead.</sarcasm>
Great comment in this week's SANS NewsBites (Vol. 15 Num. 029) from Alan Paller, director of research at the SANS Institute.
[Editor's Note (Paller): As organizations discover there is economic liability for lax cybersecurity, and lawyers smell blood in the water, the recognition will dawn on policymakers that their reliance on high level "guidance" was a really bad idea and made government cybersecurity a terrible model for protecting the critical infrastructure and businesses. This week the Australian Attorney General established a legal requirement that all agencies implement a small number of critical security controls. No company can pretend they don't know the basic controls they must implement. The U.S. government will do that, too, but, as Winston Churchill said so long ago, "Americans will always do the right thing - after exhausting all the alternatives." You can get a head start on doing the right thing if you can get to London on May 1-2 (http://www.sans.org/event/critical-security-controls-international-summit) or listen in on the briefing on April 18 (http://www.sans.org/info/128297).]
I found this comment somewhat ironic, given the recent Twitter conversation with @joshcorman.
Maybe "Best Practices" really aren't the absolute "Best" that we can do in every individual situation. And can they really be called "Practices" if they aren't actually practiced? (i.e. repeated performance or systematic exercise for the purpose of acquiring skill or proficiency). Having cursory familiarity with an established checklist of known good security measures, such as the SANS Critical Security Controls, does not qualify as practicing or best. ;)
Also, check out Cindy's article about being Consumers of Security Intelligence here.
In a recent blog post, Is Your Security Harming Someone Else’s Business?, Tripwire CTO Dwayne Melancon talks about mapping out the relationships your business has with other outside entities to "connect security to the businesses we depend on and those who depend on us."
This is a novel concept considering that many organizations have such a hard time mapping out their own internal processes, let alone ones that stretch outside their environment. One of the main points that Dr. Eric Cole discussed this year during his SANS 2013 keynote in Orlando was that when he is called into organizations to do an investigation or analysis, the first thing he asks for is a network diagram and a list of locations of critical data. He then conducts a discovery of critical data on the client's network, maps out its true locations, and finds that they rarely match the client's list.
One of the questions that Melancon asks in his post is: do you know what impact changes in your organization might have on contractual commitments with outside parties? Many times the people engaged in writing, reviewing, and signing contracts within an organization do not have the level of technical understanding to know what good security practices are, let alone whether they are properly included in the contract. In the financial sector, there are regulatory requirements for this sort of thing (see the Interagency Guidelines Establishing Information Security Standards).
Under the Security Guidelines, each financial institution must:
- Develop and maintain an effective information security program tailored to the complexity of its operations, and
- Require, by contract, service providers that have access to its customer information to take appropriate steps to protect the security and confidentiality of this information.
You Get What You Pay For (and Prepare For)
If the people writing and signing these contracts do not understand InfoSec, then this whole process seems a bit like driving toward a cliff without knowing what's up ahead.
I was reviewing some BCP/DR documents for a small financial institution not long ago and found a contract with a technology service provider (TSP) that was storing off-site backups for them. The TSP provided the proof of breach insurance that was requested, and it showed a coverage limit of $1 MM per incident. That sure doesn't sound like much given the amount of money lost by small and medium sized banks recently to phishing and account takeover attacks leading to ACH/wire fraud. But after taking a closer look at the contract, I found this statement in the section titled "Limitation of Liability":
"If VENDOR becomes liable to the CUSTOMER under this Agreement for any other reason, whether arising by negligence, willful misconduct or otherwise, (a) the damages recoverable against VENDOR for all events, acts, delays, or omissions will not exceed in aggregate the compensation payable to VENDOR […] for the lesser of the months that have elapsed since the Operational Date […]"
Say what? [Unnecessary Risk]
Somebody obviously didn't take the time to read this garbage before whipping out their John Hancock. Let's just say the amount paid to this vendor each year is much less than the value of the data the customer was backing up with this vendor.
Managing the security of interconnected systems is not just an IT issue. It is a business issue which means taking the time to read and understand the contracts that your business is agreeing to abide by. Does your company have a process for reviewing contracts? Is your IT/InfoSec team involved in that process? Are contractual requirements communicated to your IT/InfoSec teams? If you are missing these steps, then it will be impossible to do any sort of impact analysis across interconnected outside entities... just say'n.
While working on deploying an update to a software application by one of the vendors I deal with on a regular basis, I was given a document titled "Pre-Install Handout". (Well, I'm not really that crazy about handouts. Most of the time I believe in earning my keep. Anyway...) At the top of page three, there is a big yellow triangle with a bold black exclamation mark next to the following text:
"Domains: If the [server] machine is to be placed on the [customer] domain, [vendor] must have complete administrative rights to that machine.
There should be absolutely no security restrictions on making changes to that machine. If the domain security interferes with the [server] machines operations; it will have to be removed from the domain."
Dear vendor, if you can't get your software to work with the security settings of our environment, then it is probably time for me to start looking for a new vendor.
This is the same vendor that a couple of months earlier, I sent a request asking for a copy of their SSAE-16 reports and financial statements so that I could include the documentation in my vendor management program file. The response I got back:
"Though [Vendor] does have products that fall under the requirements of a SAS70 audit, because both [Product A] and [Product B] are considered in-house products for our customers (no data is stored on our servers here), it does not apply in this instance.
I have attached an FAQ that further addresses your request. As a point of mention, both the [Product A] and [Product B] divisions of [Vendor] share a facility with the [Product C] division and though we are separate products, we are bound by the same facility and information security management as addressed in that product’s SAS70. Let me know if I can provide any additional assistance.
The SAS70 report appears to now be in the new SSAE-16 reports, but again, the [Product A] and [Product B] departments, this doesn’t apply."
A couple of weeks after getting this response, I had a support ticket open with this vendor. The same support technician who sent me the response above proceeded to download a copy of the application database from my server to their workstation to "test some things out". The vendor did not request permission before making a copy of the data I am responsible for, and didn't admit to doing so until I questioned them about it. I then asked again for a copy of their SSAE-16 reports (which they obviously don't have). Finally, the vendor sent me an updated privacy statement that included coverage of customer data within the vendor's environment. While not a perfect solution, it should hold up in court if they have a data breach. But at that point, all of the data I'm responsible for is probably already on pastebin.
So back to the software upgrade deployment I mentioned earlier. When I sent the paperwork in to the vendor for the service request to have them install the upgrade (and no, they don't allow anyone else to do the installation), I explicitly stated that they are not to make any changes to the OS, firewall settings, install third party tools, etc. unless they first got permission to do so. I also restated this to the technician (different than the one above) working on the install and got an affirmative response that they agreed to these rules of engagement. So guess what? I remote into the system they are installing the upgrade to and see the vendor making changes to the host firewall (as in completely turning it off). #FAIL! I then terminated the vendor's connection, turned the firewall back on, and called to see whether this technician suffers from short term memory loss.
Dear vendor, when are you going to wake up and treat your customers with a little bit of decency? It is not my goal in life to catch you doing stupid things. I really don't need the extra stress of worrying about what you might do to compromise the security of my data. I have enough bad guys out there trying to break into my systems without you handing them the keys to your backdoor.
A word of advice to sysadmins and business folks that deal with vendors:
I doubt I'm alone in this experience. I suspect things like this happen every day to systems in your environment, the same as they do in mine. Do you have policies in place to handle these situations? Are the people on your team trained to watch out for this type of negligence on the part of the vendor?
One of the basic concepts of data security is the principle of least privilege, and it should be applied to vendors as much as to anyone. Vendors don't usually have malicious intent, but in general they also have much less skin in the game when it comes to protecting your data. Monitor their activity. Watch what they are doing when they are in your systems. Revoke their access unless you have a real need for them to be there.
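On the "monitor their activity" point, even a simple check helps. As a hedged sketch tied to the firewall story above (the `netsh advfirewall show allprofiles state` command is standard on Windows, but the parsing here is illustrative; adapt it to your own platform and tooling), you could poll the host firewall state during a vendor session and alert if a profile gets switched off:

```python
import subprocess

def firewall_profiles_off(netsh_output: str) -> list:
    """Parse 'netsh advfirewall show allprofiles state' output and
    return the names of any profiles reported as OFF."""
    off, current = [], None
    for line in netsh_output.splitlines():
        line = line.strip()
        if line.endswith("Profile Settings:"):
            current = line.split()[0]  # Domain / Private / Public
        elif line.startswith("State") and line.split()[-1].upper() == "OFF":
            off.append(current)
    return off

if __name__ == "__main__":
    # Poll once; wrap in a loop or scheduled task during vendor sessions.
    out = subprocess.run(
        ["netsh", "advfirewall", "show", "allprofiles", "state"],
        capture_output=True, text=True,
    ).stdout
    disabled = firewall_profiles_off(out)
    if disabled:
        print(f"ALERT: firewall disabled for profiles: {disabled}")
```

Had something like this been watching during that install, the vendor flipping the firewall off would have paged me instead of me happening to remote in at the right moment.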