Showing posts with label Framework. Show all posts

Friday, May 31, 2013

Book Review: Assessing Vendors

Assessing Vendors: A Hands-On Guide to Assessing InfoSec and IT Vendors


by Josh More
Publisher: Syngress
ISBN: 978-0124096073
Number of Pages: 95
Date Published: May 10, 2013 


As I've noted in several previous blog posts, I believe the concept of Vendor Management is one of the weaker links in the security chain at many organizations.  While this book doesn't necessarily show you everything you need to know to fix this problem, it does provide solid advice on proper due diligence for selecting vendors and products that you want to build a relationship with.

Josh More lays out a very practical framework for finding vendors that provide technology (products and/or services) addressing the needs of your situation.  More's Vendor Assessment process contains nine phases to help those responsible for evaluating and recommending solutions in Information Technology and InfoSec.  The process is designed to help these individuals fairly and quickly evaluate vendors, understand how the vendor/sales atmosphere operates, and get more value out of vendor contracts.


One of the biggest lessons I got out of the book was in properly defining the criteria used to assess and compare various solutions.  By selecting specific criteria to measure each vendor against, you ensure a fair and systematic evaluation, so the final decision can be based on a true apples-to-apples comparison and backed up with data.  On page 17, More provides some great advice for deciding how many different criteria should be used in this process:

The limit is going to be the number of dimensions that you can hold in your head at any given time.  This way, as you assess systems, you don't have to bounce between modes of thinking too much.  This process, called "context shift," is a very common source of time loss when doing analyses.  If you are running down a large list for each candidate, you have to constantly change your mode of thinking and every time you do, it will cost you a little bit of time.  If your list is too short, you will be losing time thinking of real-world scenarios that could be concerning but cannot be captured in your limited system.

More provides several examples to address this issue, ranging from the C-I-A triad to the CISSP 10 Domains.  But I really liked the reference to the Parkerian Hexad on page 18, which is a short enough list to easily remember, but comprehensive enough to cover the majority of vendor/product assessments you will run into.
  1. Availability
  2. Possession/Control
  3. Confidentiality
  4. Utility
  5. Integrity
  6. Authenticity
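The idea of scoring every vendor against one fixed criteria list can be sketched in a few lines of code.  This is purely illustrative and not from the book; the vendor names and 1-5 ratings below are invented, and only the six Parkerian Hexad dimensions come from the text:

```python
# Hypothetical sketch: rate every vendor against the same fixed criteria
# list (the Parkerian Hexad) so the comparison stays apples-to-apples.
# Vendor names and ratings are invented for illustration.

CRITERIA = [
    "availability", "possession_control", "confidentiality",
    "utility", "integrity", "authenticity",
]

def score_vendor(ratings):
    """Sum the 1-5 ratings across all criteria; missing criteria score 0."""
    return sum(ratings.get(c, 0) for c in CRITERIA)

vendors = {
    "VendorA": {"availability": 4, "possession_control": 3, "confidentiality": 5,
                "utility": 4, "integrity": 4, "authenticity": 3},
    "VendorB": {"availability": 5, "possession_control": 2, "confidentiality": 3,
                "utility": 5, "integrity": 3, "authenticity": 4},
}

# Rank vendors by total score, highest first.
ranked = sorted(vendors, key=lambda v: score_vendor(vendors[v]), reverse=True)
```

Because every vendor is measured on the same six dimensions, the final ranking is defensible with data rather than gut feel.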
I have to admit, this isn't the most exciting IT book out there, but I'm glad I read through it.  All in all, this one is a quick read weighing in at just under 100 pages, but sheds some light on what can sometimes be a very ad-hoc selection and purchasing process.

Saturday, April 20, 2013

Book Review: The VisibleOps Handbook

The VisibleOps Handbook: Implementing ITIL in 4 Practical and Auditable Steps


by Gene Kim, Kevin Behr, and George Spafford
Publisher: Information Technology Process Institute
ISBN: 0975568612
Number of Pages: 100
Date Published: June 15, 2005



VisibleOps is one of my favorite computer geek books of all time.  This book is a no-nonsense, straightforward guide to running a highly successful IT department.  But VisibleOps is not just some flavor-of-the-week self-help management book.  The lessons and goals presented in VisibleOps are the culmination of years of observation and research by the authors, who happened to notice that successful organizations had IT departments that operated in very similar ways.  This book is a distillation of those observations into a methodology that is easy for anyone in IT to grok.  Loosely based on the ITIL framework, VisibleOps cuts straight to the chase with four basic steps.
 
The Four Steps of Visible Ops

Phase 1. Stabilize the Patient
Phase 2. Catch & Release and Find Fragile Artifacts
Phase 3. Establish Repeatable Build Library
Phase 4. Enable Continuous Improvement

Stabilize the Patient

In the first phase of VisibleOps, the goal is triage.  Can you reduce the number and impact of outages?  Some of the key ways to accomplish this goal are to implement and strengthen Change Management processes, allow only scheduled changes, and define a maintenance window.

Another huge benefit of the Change Management process that often gets overlooked is its ability to act as a communication tool and a way to publish a schedule of changes.  With these processes in place, outage responders will have better visibility into:

  1. What changed? 
  2. How to back out that change
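A minimal change record only needs a few fields to answer those two questions during an outage.  This sketch is not from the book; the field names, systems, and dates are invented for illustration:

```python
# Illustrative sketch: a minimal change record that lets outage responders
# answer "what changed?" and "how do we back it out?"
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChangeRecord:
    system: str
    description: str         # what changed
    rollback_plan: str       # how to back out that change
    scheduled_for: datetime  # must fall inside the maintenance window

def changes_since(log, cutoff):
    """First question after an outage: what changed recently?"""
    return [c for c in log if c.scheduled_for >= cutoff]

# Invented example data.
log = [
    ChangeRecord("web01", "TLS cert renewal", "restore previous cert",
                 datetime(2013, 4, 20, 2, 0)),
    ChangeRecord("db01", "index rebuild", "drop new index",
                 datetime(2013, 4, 21, 2, 0)),
]
recent = changes_since(log, datetime(2013, 4, 21))
```

The point is less the code than the discipline: if every change lands in a log like this, responders start from "what changed last night?" instead of guessing.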

Fragile Artifacts

The second phase is all about using a risk based approach to identifying and cataloging critical systems.  Some of the key indicators include:
  • Systems with the highest Mean Time To Recovery (MTTR)
  • Systems with low change success rates
  • Systems with the highest downtime costs
But being able to understand and identify the cost of downtime requires understanding the business processes that each system supports.  That is why this phase is based on the Configuration Management process and includes implementing a Configuration Management Database (CMDB).  Once these processes are in place, you should see reduced variance, increased conformity across your systems, and easier detection of anomalies within the environment.
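Those three indicators can be combined into a simple triage check.  This is my own sketch, not the book's method; the systems and threshold values are invented for illustration:

```python
# Hypothetical fragility triage based on the three indicators above:
# MTTR, change success rate, and downtime cost. Thresholds are assumptions.

def is_fragile(mttr_hours, change_success_rate, downtime_cost_per_hour,
               max_mttr=4.0, min_success=0.9, max_cost=1000.0):
    """Flag a system if any indicator crosses its (assumed) threshold."""
    return (mttr_hours > max_mttr
            or change_success_rate < min_success
            or downtime_cost_per_hour > max_cost)

# Invented example inventory: (MTTR hours, change success rate, $/hour down)
systems = {
    "billing-db": (8.0, 0.70, 5000.0),   # slow to recover, changes often fail
    "intranet-wiki": (1.0, 0.98, 50.0),  # cheap and stable
}
fragile = [name for name, args in systems.items() if is_fragile(*args)]
```

In practice the hard part is populating numbers like these from the CMDB and the business process owners, not the comparison itself.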

Repeatable Build Library

In order to overcome the limitations imposed by the Fragile Artifacts, you must create a way to commoditize these systems.  Phase three is all about implementing proper Build and Release Management processes to further reduce variance and increase your understanding of what your systems are actually doing.  The thing that makes systems fragile in the first place is your lack of understanding about how those systems operate.

Once you reach that level of understanding, it is much easier to swap out interchangeable components than it is to improvise a resolution from random troubleshooting steps, without being able to explain why those steps "fixed" the issue.
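One way to picture a repeatable build is as a golden definition you can diff a live system against.  This sketch is my own illustration, not from the book; the package names and versions are invented:

```python
# Illustrative sketch: detecting drift from a repeatable build definition.
# A system built from a known definition can be diffed (and rebuilt from
# the library) instead of hand-patched into an unexplainable state.

GOLDEN_BUILD = {"nginx": "1.4.1", "openssl": "1.0.1e", "php": "5.4.15"}

def drift(actual):
    """Return packages that are missing or at the wrong version,
    mapped to (expected, found)."""
    return {pkg: (want, actual.get(pkg))
            for pkg, want in GOLDEN_BUILD.items()
            if actual.get(pkg) != want}

# Invented snapshot of a live server's installed packages.
server = {"nginx": "1.4.1", "openssl": "1.0.1c", "php": "5.4.15"}
print(drift(server))  # {'openssl': ('1.0.1e', '1.0.1c')}
```

When the drift report is empty, the system is a commodity: any instance can be rebuilt from the library rather than nursed back to health.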

Continuous Improvement

You would think that phase four would be self-explanatory.  It is anything but that.  In terms of implementation, I have found that this can be the most difficult phase of all, because it requires a major shift in the culture of most organizations.  The VisibleOps Handbook provides some key indicators and metrics that can help track your progress on this journey.  It does not, however, provide much advice on how to steer your Titanic to avoid icebergs along the way.

Reflection

The thing I love most about the Visible Ops approach to ITIL, and to managing IT in general, is how tangible it is.  The word "visible" in the title obviously wasn't an accident: the steps for implementation, the explanation of the methodology, really everything about it is so clearly evident that [almost] anybody should be able to thumb through this booklet, pick up some ideas they can put to use right away, and see results almost as fast.

Tuesday, April 16, 2013

#ChecklistIsDead

I keep seeing tweets and blog posts and hearing talks at various cons that keep repeating statements such as: 

"[insert unpopular framework/checklist here] has done nothing to improve cyber security, and in fact it has probably made security worse"

And I don't believe it!

I recently wrote about how the InfoSec echo chamber keeps dogging on "outdated best practices", and today I started wondering if these echo repeaters all work for Gartner?  So I'm proposing that all framework/checklist bashing should use the hashtag #ChecklistIsDead from now on.

My point is that one of the biggest reasons InfoSec is failing is not because we are using a bad checklist.  We are failing because we aren't actually following through with implementing *any* checklist consistently, whether it is the PCI DSS, FFIEC, FISMA, NIST, or the SANS Critical Security Controls.  I don't really care which checklist you are being graded on (most of them can be cross-referenced with each other anyway; just different wording for the same basic goals).  But if you can't make a list of your key business processes, a list of your critical information assets, and updated diagrams for your network and data flow... then what makes you think that you are going to do any better with the newest #RiskManagement flavor of the week?

For example, I hear a lot of people complaining about the PCI DSS in one breath and then calling for the need to replace checklists with a risk based approach to security.  That's all fine and good, but if companies can't comply with the intent of PCI DSS v2.0 Requirement 12.1.2 to perform a risk assessment [only] once per year, then how well are they going to do on their own without such a requirement?

Establish, publish, maintain, and disseminate a security policy that "12.1.2 Includes an annual process that identifies threats, and vulnerabilities, and results in a formal risk assessment. (Examples of risk assessment methodologies include but are not limited to OCTAVE, ISO 27005 and NIST SP 800-30.)"

I have read some articles lately (here and here) that talk about how security policies and frameworks are too silo'd and need to span across functional boundaries.  I'm sorry, but show me what framework or checklist specifically calls for its implementation to be contained within silos?  These failed implementations are the direct result of bad decisions made at the highest levels of most companies who don't understand the threats and vulnerabilities facing their organizations.  Yet these same decision makers are supposed to magically understand the risk derived from these same threats and vulnerabilities in order to invent a better #RiskManagement program that fixes their security failures?

All the while, taking time to actually implement the items on the existing checklists keeps slipping through the cracks or falling down the priority list (and just getting a QSA to submit your RoC to the card brands doesn't mean your company has actually implemented all of the requirements on the checklist).

There were several interesting items listed in a recent paper by James Lewis of the Center for Strategic & International Studies, Raising the Bar for Cybersecurity.

"In the last few years, in 2009 and 2010, Australia’s Defense Signals Directorate (DSD) and the U.S. National Security Agency (NSA) independently surveyed the techniques hackers used to successfully penetrate networks. NSA (in partnership with private experts) and DSD each came up with a list of measures that stop almost all attacks.

"DSD found that four risk reduction measures block most attacks. Agencies and companies implementing these measures saw risk fall by 85 percent and, in some cases, to zero."


<sarcasm>Too bad checklists are dead.</sarcasm>

Friday, April 12, 2013

Best Practices

Great comment in this week's SANS NewsBites (Vol. 15 Num. 029) from Alan Paller, director of research at the SANS Institute.

[Editor's Note (Paller): As organizations discover there is economic liability for lax cybersecurity, and lawyers smell blood in the water, the recognition will dawn on policymakers that their reliance on high level "guidance" was a really bad idea and made government cybersecurity a terrible model for protecting the critical infrastructure and businesses.  This week the Australian Attorney General established a legal requirement that all agencies implement a small number of critical security controls. No company can pretend they don't know the basic controls they must implement. The U.S. government will do that, too, but, as Winston Churchill said so long ago, "Americans will always do the right thing - after exhausting all the alternatives." You can get a head start on doing the right thing if you can get to London on May 1-2 (http://www.sans.org/event/critical-security-controls-international-summit) or listen in on the briefing on April 18.  (http://www.sans.org/info/128297)]


I found this comment somewhat ironic, given the recent twitter conversation with @joshcorman:



Maybe "Best Practices" really aren't the absolute "Best" that we can do in every individual situation.  And can they really be called "Practices", if they aren't actually practiced? (i.e. repeated performance or systematic exercise for the purpose of acquiring skill or proficiency). Having cursory familiarity with an established checklist of known good security measures such as the SANS Critical Security Controls, does not qualify as practicing or best.  ;)



Also, check out Cindy's article about being Consumers of Security Intelligence here.