Should we include a neutral point in rating questions?
Often when I work with a client developing a questionnaire, they ask me whether we should include a neutral point in rating questions (for example: Very satisfied, Satisfied, Neither satisfied nor dissatisfied, Dissatisfied, Very dissatisfied). Much research has been done in this area, particularly by psychologists concerned with scale development, but a definitive answer has not been found and the debate continues. Some studies find support for excluding it, while others support including it, depending on the topic, the audience, and the type of question.
Those against a neutral point argue that by including it, we give respondents an easy way out, letting them avoid taking a position on a particular issue. There is also the argument that including a neutral point is equivalent to wasting research dollars, since this information would not be of much value or, in the worst case, would distort the results. This camp advocates avoiding the neutral point and forcing respondents to tell us which side of the issue they are on.
However, as consumers we make decisions all day long, and we often find ourselves sitting in neutral. A neutral point can reflect any of these scenarios:
1. We feel ambivalent about the issue and could go either way.
2. We do not have an opinion on the subject due to lack of knowledge or experience.
3. We never develop an opinion on the subject because we consider it irrelevant.
4. We do not want to give our real opinion if it is not considered socially desirable.
5. We do not recall a particular experience related to the issue being rated.
By forcing respondents to take a position when they do not have a formed opinion about something, we introduce measurement error into the data, because we fail to capture a plausible psychological scenario in which respondents may find themselves. If the goal of the question is to understand the variation in opinion, we should not only use a neutral point but also a “Not sure / Don’t know / Does not apply” option. This would allow respondents in Scenarios 2 and 3 to give an answer that is true to their experience.
For example, the other day I received a customer satisfaction survey from Blackberry after a call I made to their support desk. The survey had a question asking me to rate the representative who took my call on several aspects. One of them was “Timely Updates – Regular status updates were provided regarding your service request.” I wouldn’t have known how to answer this, as the issue I called about did not require regular updates. Fortunately, they had a “Not Applicable” option; otherwise I would have been forced to lie, and one side of the scale would have been just as good as the other.
An increase in non-responses and survey dropouts can also result from respondents not wanting to express an opinion they perceive as socially undesirable. If they are given a “Not sure / Don’t know / Not applicable” option, they are more likely to use it than the neutral point. This is preferable, because they can be excluded from the analysis for that particular question without losing their answers to the other questions. An even better alternative is to provide a “Prefer not to answer” option when the question touches on particularly sensitive topics.
Ultimately, the best antidote to respondents gravitating toward neutral is to make sure you show the questions only to those who can actually answer them. With the help of skip logic, we can design surveys that filter out respondents with no experience, knowledge, or interest in the subject being rated. In my Blackberry example, they could have first asked me whether my service request required regular updates and, if so, then asked me to rate my satisfaction with them. Most likely the researcher who designed the Blackberry survey was trying to keep it short, but he could still have introduced measurement error had I not noticed the “Not Applicable” option at the bottom of the scale, which I almost missed at first.
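The filtering idea above can be sketched as simple skip logic. This is a minimal illustration, not the API of any real survey tool; the question ids (`needed_updates`, `timely_updates_rating`) and the routing function are hypothetical:

```python
def next_question(answers):
    """Return the id of the next question to show, given answers so far.

    Hypothetical skip logic for the Blackberry example: a screening
    question decides whether the respondent ever sees the rating item.
    """
    # Screening question: did the service request involve status updates?
    if "needed_updates" not in answers:
        return "needed_updates"
    # Only respondents whose request involved updates see the rating item;
    # everyone else skips straight ahead, so "Not Applicable" is never needed.
    if answers["needed_updates"] and "timely_updates_rating" not in answers:
        return "timely_updates_rating"
    return "next_topic"

# A respondent whose issue required no updates never sees the rating item:
print(next_question({"needed_updates": False}))  # -> next_topic
# One whose issue did require updates is asked to rate them:
print(next_question({"needed_updates": True}))   # -> timely_updates_rating
```

The design point is that the screener, not the respondent, decides whether the rating question applies, which removes one common reason for defaulting to the neutral point.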
You may have already guessed which side of the debate I’m on. Survey questions should be as close as possible to the way respondents would naturally answer them in real life. Sometimes we have to get there in multiple steps, by filtering out those who can’t respond; but sometimes we simply have to give them the option to be neutral.