Friday, May 31, 2013

Book Review: Assessing Vendors

Assessing Vendors: A Hands-On Guide to Assessing InfoSec and IT Vendors


by Josh More
Publisher: Syngress
ISBN: 978-0124096073
Number of Pages: 95
Date Published: May 10, 2013 


As I've noted in several previous blog posts, I believe the concept of Vendor Management is one of the weaker links in the security chain at many organizations.  While this book doesn't necessarily show you everything you need to know to fix this problem, it does provide solid advice on proper due diligence for selecting vendors and products that you want to build a relationship with.

Josh More lays out a very practical framework for finding vendors whose technology (products and/or services) addresses the needs of your situation.  More's Vendor Assessment process contains nine phases to help those responsible for evaluating and recommending solutions in Information Technology and InfoSec.  The process is designed to help these individuals fairly and quickly evaluate vendors, understand how the vendor/sales atmosphere operates, and get more value out of vendor contracts.


One of the biggest lessons I took from the book was in properly defining the criteria used to assess and compare various solutions.  By selecting specific criteria to measure each vendor against, you ensure a fair and systematic evaluation, so the final decision can be based on a true apples-to-apples comparison and backed up with data.  On page 17, More provides some great advice for deciding how many criteria should be used in this process:

The limit is going to be the number of dimensions that you can hold in your head at any given time.  This way, as you assess systems, you don't have to bounce between modes of thinking too much.  This process, called "context shift," is a very common source of time loss when doing analyses.  If you are running down a large list for each candidate, you have to constantly change your mode of thinking and every time you do, it will cost you a little bit of time.  If your list is too short, you will be losing time thinking of real-world scenarios that could be concerning but cannot be captured in your limited system.

More provides several examples to address this issue, ranging from the C-I-A triad to the CISSP 10 Domains.  But I really liked the reference to the Parkerian Hexad on page 18, which is a short enough list to easily remember, but comprehensive enough to cover the majority of vendor/product assessments you will run into.
  1. Availability
  2. Possession/Control
  3. Confidentiality
  4. Utility
  5. Integrity
  6. Authenticity
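To make the apples-to-apples comparison concrete, a criteria list like the Hexad can be turned into a simple scoring matrix. This is only a sketch of the idea, not anything from the book; the vendor names and 1-5 scores below are invented for illustration.

```python
# Hypothetical sketch: scoring vendors against the Parkerian Hexad.
# Vendor names and the 1-5 scores are invented for illustration.
CRITERIA = ["Availability", "Possession/Control", "Confidentiality",
            "Utility", "Integrity", "Authenticity"]

scores = {
    "Vendor A": [4, 3, 5, 4, 4, 3],  # one score per criterion, in order
    "Vendor B": [5, 4, 3, 3, 5, 4],
}

def rank(scores):
    """Return (vendor, total) pairs sorted by total score, highest first."""
    totals = {vendor: sum(vals) for vendor, vals in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for vendor, total in rank(scores):
    print(f"{vendor}: {total} / {5 * len(CRITERIA)}")
```

Because every vendor is measured against the same fixed criteria, the totals are directly comparable and the final recommendation can be backed up with data.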
I have to admit, this isn't the most exciting IT book out there, but I'm glad I read through it.  All in all, this one is a quick read weighing in at just under 100 pages, but sheds some light on what can sometimes be a very ad-hoc selection and purchasing process.

Thursday, May 30, 2013

Coursera: A Beginner's Guide to Irrational Behavior

I recently participated in a Behavioral Economics MOOC on Coursera.org taught by Dan Ariely.  I used to love listening to Dan's conversations with Kai Ryssdal on NPR/Marketplace on my drive home several years ago, and I had already been through each of his books: Predictably Irrational, The Upside of Irrationality, and The (Honest) Truth About Dishonesty.  So, I thought the class would be a lot of fun.



A Beginner's Guide to Irrational Behavior

Most of the topics, research, and experiments covered in the class came straight out of the books, but there were also interesting references to the underlying research papers, with more detail about how the experiments were carried out and the exact findings.  It is evident that Dan and his team put a lot of effort into building and presenting a very high-quality MOOC.

Impressions of Coursera

I have to say I'm a fan of Coursera.  I think it is very cool that they have worked out a deal with so many top colleges and universities to offer outstanding content to the masses for free!  The Coursera website is easy to navigate.  Access to course materials, video lectures, and discussions is fantastic.  The quizzes associated with the lectures and reading assignments were straightforward and engaging.

The one area that I find rather disheartening is the peer grading process.  I understand that there is no way for a professor and a couple of graduate students to grade tens of thousands of essays and writing assignments, but there is a rather large flaw in the grading criteria for these non-multiple-choice assignments: the subjectivity of the grader on the written assignment.



Peer Grading

After submitting the written assignment, each student was asked to grade three other assignments based on the following criteria:
  1. Did the student identify and describe a behavioral problem?
  2. Did the student correctly identify and describe research that is relevant to the problem?
  3. Did the student propose a research-based solution?

I find it odd that someone as fluent in measuring the results of research and experiments as Professor Ariely (results which are often dependent upon how questions are phrased, the order in which the questions appear, and the types of choices presented) would purposefully introduce subjectivity into this grading process by allowing the grader a three-point scale for each of these criteria.  Unless I'm mistaken, these appear to be simple "Yes/No" questions, yet the grader was given a range of choices from 1 (didn't meet the criteria) to 3 (met the criteria), thereby adding unnecessary subjectivity.

For the first criterion, either the student described a behavioral problem or they didn't.  Answering this question should not depend on the grader's bias as to whether they perceive the issue to truly be a problem.  If the student articulated a topic in the context of human behavior, how can anyone honestly give them less than full credit for this criterion?


The second criterion could have been worded more precisely to convey the intent of the question, such as: "Did the student utilize material from the course reading assignments that supports the claim of the behavioral problem?"

The third criterion, again, is not asking for the grader's opinion as to whether or not they agree with the proposed solution.  It is simply asking whether the student proposed a solution within the context of the research covered by the course reading assignments.  It is not the grader's responsibility or prerogative to judge the viability or effectiveness of the proposed solution.  I find it hard to believe that anyone would submit a written assignment totally devoid of a proposed solution.

Proposed Solution for Peer Grading
 


Dear Coursera and Professor Ariely: to improve the quality of this course for the students, I believe an effort should be made to remove as much subjectivity from the peer grading process as possible.  If you are going to ask "Yes/No" questions, then grade accordingly with "Yes/No" answers.  If you want the written assignment to be worth a total of 9 points, then ask 9 "Yes/No" questions that are specific to the quality of the written assignment, such as:
  1. Did the student use the correct name for the problem (if the problem has a name we discussed in this course)? 
  2. Did the student give a clear indication of why the behavior is problematic?
  3. Did the student tell us what the scale of the problem was?
  4. Did the student summarize the experiments and findings about this behavior? This should include only relevant experiments and findings. 
  5. Did the student refer to experiments from the assigned readings and/or lectures? 
  6. Did the student cite his or her sources? 
  7. Did the student propose a solution?
  8. Did the student show the solution was based on existing behavioral research? 
  9. Was the solution original? That is, did the student come up with a plan that was not exactly like another we have studied?
By the way, these questions were provided as guidance for answering the original three criteria, but it was left to the subjectivity of the grader to incorporate them within the faulty three-point scale.
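With a purely objective rubric, grading reduces to counting "yes" answers. A minimal sketch of that idea (the abbreviated rubric labels and the sample grader's answers are mine, not Coursera's):

```python
# Sketch: score a written assignment as the count of "yes" answers to the
# nine objective questions. Labels abbreviate the questions above.
RUBRIC = [
    "named the problem", "explained why problematic", "stated the scale",
    "summarized experiments", "used assigned readings", "cited sources",
    "proposed a solution", "solution based on research", "solution original",
]

def score(answers):
    """answers maps rubric item -> True/False; returns points out of 9."""
    return sum(1 for item in RUBRIC if answers.get(item, False))

sample = {item: True for item in RUBRIC}
sample["solution original"] = False  # met everything except originality
print(score(sample), "/", len(RUBRIC))  # 8 / 9
```

No three-point scale, no room for the grader's opinion of the problem or the solution: each point is earned or it isn't.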

I also realize that grading any written assignment will inherently involve some subjectivity, but the need to limit it is only magnified by the fact that the graders in this case have no credibility on which to base their opinions (i.e., the students of a free online introductory class are most likely not experts on the subject of behavioral economics).

Sunday, May 19, 2013

Detecting Evil: Network Security Monitoring

Trying to clear out the backlog of posts in my Drafts folder, and this one is long overdue...

News Headlines

These are just a couple of the news stories that caught my eye [in the hopefully not too distant past].

BofA Confirms Third-Party Breach

"Bank of America systems were not compromised. Our customer data is secure." Mark Pipitone, BofA Spokesman

Evernote Security Notice

"Evernote’s Operations & Security team has discovered and blocked suspicious activity on the Evernote network that appears to have been a coordinated attempt to access secure areas of the Evernote Service."


LA Times Serving Up Malware

"To ensure safety, the Offers & Deals platform has been rebuilt and further secured. The sub-domain generates only advertising content and does not contain any customer information."

So What's Going On?

Whenever I read headlines like this, they almost always come with some disclaimer that "There is no evidence that the intruders gained access to..." and rightly so.  A lot of companies don't have the ability to detect the breach in the first place; then, when they are notified about it by some third party (customers, law enforcement, or the attackers themselves), they most certainly don't have any way to tell what information was stolen.  So they claim that customers need not worry because there isn't any evidence of their information being stolen.

"Is it just me or am I the only one who thinks how can these attackers be so good as to break into the networks of companies yet those companies always seem to stop the attack just before the attackers gain access to sensitive information?" Brian Honan, SANS NewsBites Vol 15 Issue18.
 

Assessing the Damage

Because of all the cover-up attempts by companies to minimize the negative publicity fallout from disclosing a data breach, it can be difficult to find relevant data to use to educate the decision makers in your organization about the importance of good Network Security Monitoring.  Here are some resources that I have found useful in describing the impact of a security breach to executives and senior management:
 
And here are some other interesting statistics regarding the accuracy of claims that companies caught the bad guys "just in the nick of time":
  • The Trustwave 2012 GSR states that attackers spent an average of 173.5 days within the victim's environment before detection occurred. 
  • In the Trustwave 2013 GSR, the breach-to-detection window widened to 210 days. 
  • The Mandiant 2012 Annual Threat Report on Advanced Targeted Attacks cites a much larger window: "The median number of days from the first evidence of compromise to when the attack was identified was 416 days." 
  • The more recent Mandiant APT1 Report states "[attackers] were inside victim systems for avg of 356 days; longest observed: 1764 days".
  • The Verizon DBIR 2012 states, "It saddens us to report that, yet again, breach victims could have circumnavigated the globe in a pirogue (not from the bayou? Look it up.) before discovering they were owned. In over half of the incidents investigated, it took months—sometimes even years—for this realization to dawn. That’s a long time for customer data, IP, and other sensitive information to be at the disposal of criminals without the owners being aware of it."
  • The Verizon DBIR 2013 concludes that it still takes, on average, 221 days from breach to discovery.
These numbers echo a common sentiment that there are two kinds of companies out there: those who know they've been breached, and those who have been breached and just don't know it yet.

Saturday, May 18, 2013

Book Review: Lean Security 101

Lean Security 101: The Comic Book


by Josh More
Publisher: RJS Smart Security
Number of Pages: 24


Josh More over at RJS Smart Security obviously had some fun putting this together. Lean Security 101 is a neat little info-graphic that looks an awful lot like a comic book.  

Percy the Protection Pangolin

I'll admit it; I had to look up what a pangolin actually is (+1 for originality).  Percy the pangolin is Josh's sidekick throughout the story.

The 80x5 Rule

The biggest insight I got out of this comic was the 80x5 Rule.  You've probably heard of the "Pareto Principle", commonly referred to as the 80/20 rule.  Well, the 80x5 rule builds on this idea using concepts from Lean.


The 80/20 rule is often quoted by business managers and executives as a rallying cry to take action or launch a new project, justifying quick returns with minimal effort.  But hidden within this management standard is an implicit acknowledgment that getting a project to 100% perfection (meeting all of the requirements on time and within budget) becomes increasingly difficult.  The law of diminishing returns takes over, and additional effort is needed just to make incremental progress toward the goal.

When applied to Information Security, this concept is just as true.  There is no silver bullet for protecting your digital assets, so no single project or technology or defense mechanism is ever going to be 100% effective at keeping your data safe.

The 80x5 rule is designed to help you get the most value from the least amount of effort while maximizing your defensive posture.




The 80x5 rule says that instead of spending all of your effort trying to implement a single defensive measure (which will never reach 100% effectiveness), it is much more productive to add complementary layers of security.  After you have spent the first 20% of your effort on that defensive measure (and reached 80% of the results), any further effort on that task could be considered waste (in Lean terms).  In terms of opportunity cost, if you took the remaining unspent effort (you still have 80% left at this point) and divided it into four more blocks, you could potentially get 80% results from each of another four projects.  This is obviously a much better ROI than spending that remaining 80% to obtain at most 20% more benefit from your current task.

Assuming each layer is 80% effective (based on the Pareto Principle), eight layers could give you up to 99.999% effective security.  Yes, there can and will be various exceptions to this line of reasoning.  But why spend all your effort on fixing things that should be considered "good enough" when there are other more productive security measures you could be working on (like building up your incident response team and testing your IR plan)?  I see this as an important tool for helping to prioritize competing projects and assessing those final inches toward the goal line.
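The layered-effectiveness figure follows from treating each layer as an independent filter that stops 80% of whatever reaches it, so n layers let through only 0.2^n of attacks. A quick sketch of that arithmetic (the independence of layers is an idealizing assumption, so treat the results as upper bounds):

```python
# Sketch of the 80x5 arithmetic: n independent layers, each stopping 80%
# of what reaches it, let through only 0.2**n of attacks overall.
def combined_effectiveness(layers, per_layer=0.80):
    """Fraction of attacks stopped by `layers` stacked defenses."""
    return 1 - (1 - per_layer) ** layers

for n in (1, 5, 8):
    print(f"{n} layer(s): {combined_effectiveness(n):.5%} effective")
```

Five layers already reach about 99.97%, and eight layers about 99.9997%, in line with the comic's "up to 99.999%" claim.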

The book goes into more detail, but hopefully you get the idea.  Go download a free copy for yourself, http://www.rjssmartsecurity.com/Lean-Security-101-Comic/, and give them a call about a free Lean Security Assessment.

Wednesday, May 8, 2013

Book Review: Predictably Irrational

Predictably Irrational: The Hidden Forces That Shape Our Decisions


by Dan Ariely
Narrated by: Simon Jones
Publisher: Harper Audio
Total Length: 7 Hours, 24 Minutes
Date Published: April 2, 2009



I first heard about Predictably Irrational on NPR while listening to the show Marketplace by American Public Media.  Dan Ariely had a segment each week where he would discuss something from one of his experiments and how the results defy the general assumptions held by most people.  I found Dan to be very entertaining to listen to, especially amid the context of the Great Recession.  So, I decided to download Predictably Irrational to see if I was missing out on any other great insights in the world of Behavioral Economics.

The Decoy Effect

Relativity is all about how we compare things.  The example of the subscription to the Economist shows how most people don't really know what anything is worth, but when comparing two similar items it is easier to see the relative value of each.  I love how it points out that, "Thinking is difficult and sometimes unpleasant."  This is a vulnerability just waiting to be exploited.

The takeaway here is that when you want to persuade someone toward a particular choice, one effective way to do so is by adding a similar but less attractive option.  Given three options, A, B, and -B (a slightly degraded version of B), most people will choose Option B.


"Free" 

Here's a less-than-obvious calculation (well, it wasn't obvious to me anyway).  When given the choice between two products, I should compare the perceived value of each to its stated price, and if the benefit of the higher-priced product is worth the higher price to me, then I should choose that product.  The difference in price should match the difference in value (to me) of the two products.  But when one of the products is "free", that comparison breaks down.  Ariely describes experiments offering a premium chocolate for $0.25 and an average chocolate for $0.01.  If I value the premium chocolate $0.24 more, then I should still be willing to pay for it even when both prices drop by a penny and the average chocolate becomes "free"; in the experiments, though, most people switch to the free one.
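The rational version of that choice is just a surplus comparison: perceived value minus price, pick the larger. A sketch with hypothetical dollar values of my own (the point being that a one-cent drop in both prices should not flip the choice, yet "free" routinely does in Ariely's experiments):

```python
# Sketch: a rational buyer maximizes surplus (perceived value minus price).
# The perceived values below are hypothetical, chosen for illustration.
def best_option(options):
    """options maps name -> (perceived_value, price); returns max-surplus name."""
    return max(options, key=lambda name: options[name][0] - options[name][1])

before = {"premium": (0.60, 0.25), "average": (0.30, 0.01)}  # surplus .35 vs .29
after  = {"premium": (0.60, 0.24), "average": (0.30, 0.00)}  # surplus .36 vs .30

print(best_option(before), best_option(after))  # premium premium
```

Dropping both prices by a cent leaves every surplus difference unchanged, so the rational pick is identical in both cases; the pull of "free" is exactly the irrational part.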

Social Norms vs Market Norms

I found the topic of comparing social norms and market norms very interesting.  It seems to me that there are many untapped solutions to everyday problems that are obfuscated by the fact that we are looking at the problem through only one of the possible lenses (social or market norms).  Based on the research presented in Predictably Irrational, it can often be difficult to make the shift from one point of view to the other, or difficult to return to a particular point of view once that shift has been made.

Price of Placebo

"Before recent times, almost all medicines were placebos.  Eye of the toad, wing of the bat, dried fox lungs, mercury, mineral water, cocaine, an electric current: these were all touted as suitable cures for various aliments.  When Lincoln lay dying across the street from Ford's Theater, it is said that his physician applied a bit of 'mummy paint' to the wounds.  Egyptian mummy, ground to powder, was believed to be a remedy for epilepsy, abscesses, rashes, fractures, paralysis, migraine, ulcers, and many other things.  As late as 1908, 'genuine Egyptian mummy' could be ordered through the E. Merck catalog... We may think we're different now.  But we're not.  Placebos still work their magic on us."

Chapter 10 was one of my favorite chapters.  The Placebo Effect has long been a fascination of mine, and Ariely's research puts some hard data to this question.  The results show that when people pay more, they claim to receive greater benefits.  This bias is extremely unfortunate, given that alternative solutions may actually be more effective and more holistic, but are excluded because they don't fall within popular opinion.

Reflections

Overall, I really enjoyed this audiobook.  It is chock-full of great examples and data from experiments in behavioral economics (many, many more than the ones I mentioned here).  I have gone back and listened to several of the chapters again over the past couple of years, as the book provides interesting alternate viewpoints and topics of debate to insert into other research projects I've been working on.  My only disappointment: I might have enjoyed the audiobook more if it had been read by the author.