Evaluating the responses from your invitation to tender

The first article in this series discussed writing requirements documents (invitations to tender, or ITTs) in a way that maximizes the chances of getting to the truth about competing systems; this article looks at how companies should evaluate the responses to those documents. The first article pointed out that creating page after page of tick-boxes has never been, and can never be, a successful methodology, because it is too easy for questions to be misinterpreted (accidentally or deliberately). One reason companies persevered with ineffective ITTs, though, was that they were easy to evaluate: the responses with the most 'Yes' ticks became the shortlist. Experience tells us that this does not produce good results.

Take two questions (which came from actual requirements documents):

  • Does the system support batch control?
  • Does the stock file have a field for the buyer's full name, and not just initials?

Yes, we are going to extremes here, but it is clear that these two requirements are far from equal in importance, and it follows that the responses to them cannot be of equal importance either. The only way to compare ERP systems properly is to gauge them against an individual company's needs, and that is one reason the advice was to ask questions that begin with, “How...?”.

That results in better answers, but answers that cannot be totaled the way Y/N answers can, so companies need a way to score documents. There are two options. In the first method, responses to individual questions are rated on a scale of zero to 10, with zero meaning that the software cannot support the requirement and 10 meaning that it is a perfect fit. In the middle is a gray area: maybe the system supports the requirement, but not very well; or maybe it can support it with modifications or via a workaround. Responses that fall into these categories get a score between 1 and 9, depending on the assessed fit. Having scored all requirements, the scores are totaled and the systems with the highest totals are put on the shortlist.
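As a rough sketch of how this totaling works in practice (all system names, requirement names and scores below are invented for illustration):

```python
# Hypothetical fit scores (0 = no fit, 10 = perfect fit) for each
# requirement, as assessed by the selection team. All names and
# numbers are invented for illustration only.
scores = {
    "System A": {"batch control": 9, "multi-currency": 6, "lot traceability": 2},
    "System B": {"batch control": 7, "multi-currency": 7, "lot traceability": 7},
}

# Total the scores and rank the systems, highest total first.
totals = {system: sum(reqs.values()) for system, reqs in scores.items()}
shortlist = sorted(totals, key=totals.get, reverse=True)

for system in shortlist:
    print(f"{system}: {totals[system]}")
```

Note how System A's near-miss on lot traceability (a 2, which might well be mission-critical) is masked by its high scores elsewhere; the total alone cannot distinguish a rounded performer from a system with a fatal gap, which is exactly the weakness discussed next.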


This appears logical, but there is a problem: any score between 1 and 9 is largely subjective. Is something that doesn't fit very well a 3, a 5 or a 7? Some people say that, with six or so people on the selection team, the fluctuations will even out, but this means asking production people to assess fit in financial areas, finance people in sales areas, and salespeople in purchasing areas.

Yes, each team member could take each response back to his or her home department for wider assessment, but is that viable? Probably not. The result can be that systems that are a good fit in most areas, but totally lacking in others, win out over systems that are average throughout. That may seem right, but what if the winning system lacks functionality that is mission-critical and cannot be provided by modifications or workarounds?

The alternative is a traffic-light system in which, after reviewing the bidder's response to each requirement, answers are marked green, amber or red. Green says the requirement is met, amber means it requires modification, customization or a workaround, and red says it cannot be done.

The only way to assess the systems that have been proposed is to go through the ITT response line by line and assess each item as green, amber or red. Don’t, at this stage, spend too much time on the ambers, as the first pass is really to find the reds: i.e. the areas that each package cannot support.

For an item to be marked green, there must be a clear response that the system does what is wanted, how it is wanted, without chargeable modification. Items that the software suppliers advise would require chargeable modification should be flagged amber, as should items that appear workable but cumbersome. Those items that the software cannot address are clearly red. 
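The first pass described above amounts to tallying the reds for each system and putting those with the fewest show-stoppers forward. A minimal sketch of that tally (all system names and ratings below are invented for illustration):

```python
# Hypothetical traffic-light ratings, one per ITT requirement, for each
# bidder. "green" = fits as-is, "amber" = needs modification or a
# workaround, "red" = cannot be supported. All data invented.
ratings = {
    "System A": ["green", "green", "amber", "red", "red"],
    "System B": ["green", "amber", "amber", "green", "red"],
    "System C": ["green", "green", "green", "amber", "amber"],
}

# First pass: count only the reds; the ambers are revisited later.
red_counts = {system: r.count("red") for system, r in ratings.items()}

# Rank the systems by fewest show-stoppers first.
shortlist = sorted(red_counts, key=red_counts.get)

for system in shortlist:
    print(f"{system}: {red_counts[system]} red(s)")
```

Keeping the reds separate from the ambers, rather than folding everything into one numeric total, is the point of the method: a single red against a genuinely non-negotiable requirement should eliminate a system no matter how many greens it has elsewhere.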

It is advantageous for the team to go through this assessment together, as some things will straddle departmental boundaries. Each team member should lead the assessment of his or her prime area and advise the team of their assessment on each point, along with the reasons why. It is the job of the team, and of the project manager, to challenge any assessment that they disagree with; again, with reasons. At the end of the process it is possible, though very unlikely, that one competing system will have no reds at all. In fact, if there are no reds, there should be concerns about the selection criteria and the selection process.

More likely, if reviewing four to six systems, at least some will have fewer reds than others, so it should be possible to get the shortlist down to just a few; if reviewing two large systems, perhaps one will score slightly better than the other. Having scored reds against criteria that are regarded as essential, there will be a temptation to go around again with another selection of potential suppliers. Experience says that this will be a waste of time: if an ideal system existed, everyone would buy it and there would be only one system on the market.

So the next thing to do is to go through the reds, as a team, and challenge whether each of those criteria really is non-negotiable. As Voltaire observed, “the best is the enemy of the good”: if companies keep searching for a perfect system, they will never implement a good one. Hopefully, this second pass will have turned some reds to amber, and it is then time to consider what to do about the ones that remain.

It would be possible to send out another six ITTs to new bidders but that would likely be a waste of time. So, depending on the number of 'reds' remaining, it may be time for a company to make a pragmatic decision or to call in outside experts to help with the decision and to advise on whether the requirements are realistic and in line with the company's budget. Otherwise, it is time to select a shortlist and arrange demos and detailed discussions on system and provider capabilities. That phase will be the focus of the next article.

ERP Focus

About the author…

ERP Focus provides knowledge and evaluation resources to ERP software professionals. Whether you're already using ERP or considering your first implementation, our aim is to give you free access to the latest knowledge, research and tools needed to navigate the ERP market.

