Albert Einstein described insanity as “doing the same thing over and over again and expecting different results”. The BCM industry’s reliance on BIA surveys fits that definition. Repeating the pattern of BIA surveying – with its inherent flaws – produces results of little or no value. It is time BCM managers took a long, hard look at the value of their BIA surveys and stopped repeating them – and assuming their results are valid.
Before the torch-carrying, pitchfork-armed mob begins circling my office with garlic and wooden stakes, let me explain why I believe reliance on BIA surveys is flawed:
Surveying is both a Science and an Art
Creating a Survey may be easy (if you have the right tools), but creating a valid Survey requires asking the right questions, in a totally unbiased manner, of the right people. While that may sound simple, it isn’t. Professionals and academics with vast experience in survey creation and analysis argue endlessly about the validity of surveys. What makes a BCM professional assume he or she knows more than these survey professionals?
Surveys are a Snapshot in Time
BIA survey data is a snapshot of business functions and enterprise-wide impacts and dependencies at certain points in time. It may take four to six months to create, distribute, collect and analyze the survey data. By the time the last survey-taker responds, the facts offered by earlier respondents have already changed. By the time the ink is dry on the Senior Management presentation, the organization has certainly changed. Basing BC and DR plans on outdated results compounds the problem.
Who Provides the Answers?
Who responds to a Survey is critical to the validity of its results. But the Survey recipient and the Survey respondent may not be the same person – for many reasons. Who actually responds can be a significant factor in whether the results are valid – yet managing respondents is beyond most BCM professionals’ ability to control.
Subjective Questions don’t yield Objective Results
It should be obvious (but often isn’t) that respondents’ answers to questions will be colored by their own perspective. Allowing respondents to provide subjective answers (fill in the blank, free-form text, create a list) is a path to failure. Without some constraints on the scope of answers, the results may not only be difficult to quantify, but nearly impossible to qualify as ‘facts’.
Questions for which Respondents have no Answers
In survey creation, how questions are posed to the respondents is critical. Why? Because the survey questions determine the type of analysis you can derive from the data they generate.
When you pose a question the recipient is unprepared to answer – either because they don’t have knowledge of, or access to, the underlying data, or because they don’t understand the purpose or intent of the question – what results should you expect to get?
Questions which Force Respondents to Guess
“If your operation is interrupted, how many desktop computers will you require to restore your operation in the 1st 24 hours, on the 2nd day, at the end of the 1st week?”
That seems like a good idea, but the answers to those types of questions are almost certainly guesses. The guesses of multiple respondents aren’t quantifiable data – they’re garbage (garbage in – garbage out).
Similarly, what result does “Which other functions/processes/operations/departments are dependent upon your function/process/operation/department?” anticipate? That it will yield a ‘critical path’ of functional dependencies? Respondents think they know who depends on them downstream – but they have little or no means of verification. Therefore, almost any answer given is a guess.
Enterprise Effort – Limited Value
Months and countless man-hours are spent creating, distributing, completing, collecting and analyzing the results of BIA Surveys. The result: dubious data containing both fact and fiction, and charts & graphs of extremely questionable value.
So what’s the solution or alternative? That’s a topic for another time…