Prepare for benchmarking
Benchmarking is usually aimed at making an industry-wide snapshot of the actual situation. By definition, 80% of clients will not be in the Top-20%. Hence, benchmarking frequently leads to inert and frustrated clients: the Top-20% feel there is no need to change, clients below the 50% mark feel the improvement is too large a task, and everyone below the 25% mark doubts the design of the benchmark in the first place. Asking various segmentation questions, however, gives more angles from which to offer a benchmark. Client ABC may not be among the Top-20% in their industry overall, but may be in the Top-20% given their company size in number of employees. Framed this way, there is a higher chance that clients will be motivated to start improving their performance. If a consultant has a large number of clients from a market segment, the benchmarking information also drives industry research, whitepapers, and conferences.
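The segment-relative framing described above can be sketched in a few lines of Python. The client names, segment labels, and scores below are invented for illustration; the point is only that the same score can rank very differently overall versus within a size segment:

```python
def percentile_rank(scores, value):
    """Percentage of scores that the given value meets or beats (0-100)."""
    return 100.0 * sum(s <= value for s in scores) / len(scores)

# Hypothetical assessment scores; the segment is a company-size bracket.
clients = [
    ("ABC", "100-500 employees", 72),
    ("DEF", "100-500 employees", 64),
    ("GHI", "100-500 employees", 58),
    ("JKL", ">500 employees", 90),
    ("MNO", ">500 employees", 85),
    ("PQR", ">500 employees", 80),
]

all_scores = [score for _, _, score in clients]
segment_scores = [score for _, seg, score in clients if seg == "100-500 employees"]

abc_overall = percentile_rank(all_scores, 72)      # 50.0: not Top-20% overall
abc_segment = percentile_rank(segment_scores, 72)  # 100.0: Top-20% within segment
```

Client ABC sits at the 50th percentile industry-wide but leads its size segment, which is exactly the more motivating angle the segmentation questions make possible.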
Prepare for optimization models
When comparing clients, it is also very interesting to analyze, for example, to what extent the Top-20% performing clients run their business differently from the other 80%. To do so, we need to add segmentation questions that help to define the Top-20% further. Sample performance questions could be:
What has been your revenue growth in the past five years?
More than 10% decline - 5%-10% decline - stable - 5%-10% growth - more than 10% growth
What % of your IT budget went to unplanned spending/escalations, etc.?
Less than 5% - 5%-10% - 10%-25% - 25%-50% - More than 50%
What revenue share do you realize via online channels?
Less than 5% - 5%-10% - 10%-25% - 25%-50% - More than 50%
Note that we suggest not asking for specific numbers; many clients will consider that too sensitive to share. The next step toward optimization is to correlate the responses to the assessment questions with the scores on these performance questions. Applicable statistical techniques are factor analysis, principal component analysis, and, most of all, ANOVA/MANOVA ((multivariate) analysis of variance). These techniques discriminate among the assessment questions based on their net effect on the performance questions.
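A minimal, self-contained sketch of the ANOVA step: compute the one-way F statistic for a single assessment question across two performance groups (Top-20% versus the rest). All scores are made up, and a real analysis would use an established statistics package rather than this hand-rolled version:

```python
def anova_f(groups):
    """One-way ANOVA F statistic over lists of observations per group."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: how far group means sit from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical 1-5 scores on one assessment question, split by performance band.
top20 = [4, 5, 4, 5]
rest = [2, 3, 2, 3]
f_stat = anova_f([top20, rest])  # large F suggests the question discriminates
```

A large F statistic (compared against the F distribution's critical value) indicates that this assessment question separates top performers from the rest, i.e. it is a candidate for the optimization model.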
For example, Project ABC shows that only seven questions significantly influence revenue growth. From Project XYZ, we know that only four questions significantly influence IT budget escalations. We say ‘Project’ because such an optimization model becomes more powerful when multiple assessments (clients) have been compared. The resulting influential questions should be the client's priority (given, in the first example, that revenue growth is what the client is after). Because PRAIORITIZE asks respondents for both the Actual and the Ambition scores, it is possible to put departments whose Ambition focuses on questions other than those inferred from the optimization model on a ‘watch list’, as these departments are the prime targets for preemptive interventions.
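The watch-list logic can be sketched as a simple set comparison. The question IDs, department names, and Ambition focus sets below are invented; in practice the significant set would come from the ANOVA/MANOVA step and the focus sets from the largest Actual-to-Ambition gaps per department:

```python
# Questions the optimization model links to the client's goal (e.g. revenue growth).
significant = {"Q3", "Q7", "Q9"}

# Per department: the questions where Ambition most exceeds Actual (hypothetical).
ambition_focus = {
    "Sales": {"Q3", "Q7"},
    "IT": {"Q1", "Q12"},
    "Operations": {"Q7", "Q9"},
}

# A department lands on the watch list if none of its ambitions
# overlap the questions the model says actually drive performance.
watch_list = [dept for dept, focus in ambition_focus.items()
              if not focus & significant]
# Here only "IT" is flagged: its ambitions target none of the significant questions.
```

Departments on the watch list are investing their ambition where the model predicts little performance payoff, which is why they are the prime targets for preemptive interventions.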