The Survey Unit   
 

Surveys: Frequently asked questions

There are a number of decisions to make when designing your survey, most of which will influence how successful it is. The following list covers some of the most frequently asked questions and may help you when designing your survey project. This information focuses primarily on surveys in the form of web-based and paper-based questionnaires.

 


What response rate can I expect?

There is no scientific formula for estimating the response rate; however, there are a number of factors, and things you can do, which might influence it. The following elements of your research design could affect the response rate:

The format of your survey

Whether your survey is web-based or paper-based can influence response rates, often in conjunction with other factors. The most important factor is accessibility: if you are not sure that your potential respondents have access to the internet, a web survey may not be the most sensible strategy. Some commentators (e.g. Solomon, 2001, see: http://cogprints.ecs.soton.ac.uk/archive/00002357/) warn that web surveys generally attract lower response rates than postal surveys; however, this should be viewed in the context of your target population, and in some cases a web-based survey may attract the greater response rate.

The Survey Unit carries out a Student Satisfaction Survey with cohorts of students at the University of Nottingham each year, and response rates to this are variable. On switching from a paper-based survey to a web questionnaire, response rates amongst all three cohorts surveyed during 2003/04 improved on the previous occasion these cohorts were surveyed, although it is not clear whether this was due entirely to the change in format from paper to web.

If you are interested in finding out more about the influence the survey mode can have on your response rate and other elements of your research project, Don Dillman has a website containing links to a number of papers on this subject: http://survey.sesrc.wsu.edu/dillman/papers.htm

Your target population

The nature of the target respondents is likely to affect your response rate. For example: Are they easily accessible? Can you assume they can access your survey in its chosen format? Will they have the time and/or the inclination to complete your survey?

Some researchers suggest that males are more likely than females to respond to web surveys, so consider the characteristics of your respondents when selecting the method by which you will survey them.

Also consider how the subject matter relates to your potential respondents; for example, younger people may be less likely to respond to a survey on pension arrangements simply because for many it is not a priority. Factors such as these can be built into your sampling strategy.

Method of distribution

Have you carefully targeted your potential respondents or are you relying on a 'hit and miss' strategy for your questionnaire distribution? Well targeted questionnaires usually elicit better response rates since there will be fewer recipients for whom the survey is inappropriate.

In some instances it may also be worth spending a little extra time (if practical) to address potential respondents individually by mail-merging their names onto the envelope and covering letter, rather than simply addressing them 'To the householder' or 'Dear colleague'. However, if you are researching a particularly sensitive topic, respondents may prefer the more 'anonymous' approach.

For web surveys, potential respondents should be alerted to the survey. This is usually done by emailing a notification to your sample or population. If this is not possible, it may be useful to place an announcement on a website your potential respondents are likely to view; for example, if surveying students at a given university, they could be alerted by placing an announcement and a link to the survey on the student homepage of the university's website.

If relying on data files from other sources for your alerts or distribution (e.g. a list of email addresses, or names and addresses of potential respondents), ensure they are the most accurate and up-to-date source of information available.

Publicity

In some instances it may be appropriate to publicise your survey beyond simply sending a letter or email to potential respondents. For example, the Student Satisfaction Survey carried out annually by the Survey Unit amongst University of Nottingham students has been publicised in various ways including posters in student areas around the University, an article in 'Grapevine', the Students' Union magazine, and announcements on the students' homepage on the website. This seems to boost response rates slightly compared to years when little or no publicity was carried out.

An incentive to participate

The offer of an incentive to complete the survey can influence response rates. This could be a prize draw, or a promise to send the respondent a summary of your findings, or even just the inducement that they will be providing a valuable contribution to your research project. Make sure you explain clearly the purpose of the research you are carrying out and do not mislead the respondents. See also 'Should I offer potential respondents an incentive to participate?'

Return of the questionnaire

In the case of paper questionnaires, you must provide clear instructions for the return of completed surveys otherwise your response rate may be affected. If this involves a postal return it helps if you provide a pre-paid envelope or a freepost address; respondents are less likely to return their questionnaire if they must pay for the postage.

For web-based surveys, ensure your data is being submitted correctly before making the survey live: carry out test submissions and check that the data for each question is being written to the data file correctly. If your survey extends over multiple web pages, ensure each one has a functional 'submit' button.
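
As a rough illustration of this kind of check, the short Python sketch below reads a file of test submissions and reports any questions for which no data was recorded. The file name and question identifiers are hypothetical; adjust them to match your own data file.

    import csv

    # Hypothetical question identifiers - one column per question is
    # assumed in the data file produced by the test submissions.
    EXPECTED_QUESTIONS = ["q1", "q2", "q3", "q4"]

    with open("test_submissions.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Check every expected question appears as a column in the data file.
    missing = [q for q in EXPECTED_QUESTIONS if rows and q not in rows[0]]
    print("Missing columns:", missing or "none")

    # Check each test submission wrote a value for each question.
    for i, row in enumerate(rows, start=1):
        empty = [q for q in EXPECTED_QUESTIONS if not row.get(q, "").strip()]
        if empty:
            print(f"Test submission {i}: no data recorded for {empty}")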

Time limits

Have you set a realistic (if any!) deadline for the return of questionnaires?

Bear in mind the characteristics of your target population when deciding on a deadline: are they busy people, and do they work to similar timelines as you? For example, a survey of school teachers with a completion period covering the summer holidays is unlikely to achieve a good response rate. Excessively lengthy response periods can result in your questionnaire being constantly put to the 'bottom of the pile', while a completion period which is too short can fall at the first hurdle if respondents assume it is impossible to complete the questionnaire before your deadline.

Failing to set a deadline can adversely affect response rates; people may assume that by the time the questionnaire has reached them it is already too late to send a return. Deadlines are not always necessary for web surveys, since you can keep the survey 'live' for as long as you require, although if you alert potential respondents to the existence of the survey, try to tell them the dates between which it will be accessible.

Reminders

In some instances it may be necessary to send one or more reminders to your target population. Whether you send a reminder is a decision made by balancing the demands of project finances and time resources, the most practical reminder method (e.g. email reminders will be less costly than postal ones, while in most cases telephone reminders are more expensive than postal), and whether you are concerned about appearing to 'harass' people into responding (this doesn't always produce the best data!). Targeted reminders are sometimes a more acceptable method, but these are only possible if you have some way of monitoring who has already made a return and can therefore exclude those respondents from your reminders. Be aware that reminders often create additional workload, as you may need to respond to requests to re-send the questionnaire and to other queries about the survey.
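
By way of illustration, targeted reminders reduce to a simple set difference between your sample and those who have already responded. The Python sketch below assumes two plain-text files, one email address per line; the file names are hypothetical.

    # Hypothetical input files: one email address per line.
    with open("sample_emails.txt") as f:
        sample = {line.strip().lower() for line in f if line.strip()}

    with open("responded_emails.txt") as f:
        responded = {line.strip().lower() for line in f if line.strip()}

    # Exclude anyone who has already made a return from the reminder list.
    to_remind = sorted(sample - responded)
    print(f"{len(to_remind)} of {len(sample)} sample members still to respond")
    for address in to_remind:
        print(address)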

The questions

The questions you ask in your survey can also affect response rates. If there are too many questions or the questions appear daunting potential respondents may be put off. The types of questions and the subject matter of questions can also be influential. Piloting your survey thoroughly should help to identify any issues with the questions in your survey. See also: 'How many questions should I include?' and 'What type of questions should I ask?'

Survey fatigue

If your target population has been surveyed more than once in recent months they may become less likely to respond with each additional survey they receive. Be aware of this when designing your sampling strategy and try to establish whether anyone else has surveyed your population recently so that you can be prepared for the effects of survey fatigue.

Examples of response rates

The Survey Unit carries out a 'Student Satisfaction Survey' each year amongst selected student cohorts at the University of Nottingham. Cohorts of students are surveyed in alternate years: for 2003/04 we surveyed International and UK postgraduates and final year UK undergraduates, and these cohorts will be surveyed again in 2005/06; for 2004/05 we are surveying all International undergraduates and first year UK undergraduates, and these groups will be surveyed again in 2006/07. The response rates achieved for these surveys over the last three occasions varied between 16% and 38%. A variety of factors could have been influential on each survey occasion, for example the amount of publicity, the incentives offered and, more recently, the switch from paper to web questionnaires. Further details on the response rates achieved for these surveys can be viewed via the links below:

Response rates for 2005/06 cohorts
Response rates for 2004/05 cohorts

The Survey Unit also administers a 'Teaching Rooms Standards' (TRS) survey to both teaching staff and student course representatives at the University. For the last two years this survey has been web-based; no incentives are offered other than the opportunity to report issues with teaching rooms, and no publicity other than the initial email alerting respondents to the survey has been carried out. The response rates have been as follows:

TRS response rates                2002/03   2003/04
Teaching staff                      26%       13%
Student course representatives      47%       36%

The dip in the response rate for teaching staff is likely to be due to a change in the way the list of teaching staff was acquired; we suspect this overestimated the actual number of teaching staff, so the base for the percentage calculation is likely to be inflated. The drop in the response rate for student course representatives could be at least partly attributable to the time of year at which the different surveys were conducted: the 2003/04 survey was administered at the end of Semester 2, when many students were completing course work and preparing for exams.


 

How do I calculate my response rate?

The response rate is usually calculated as the number of returns (either returned paper questionnaires or submitted web questionnaires), divided by the total number distributed, multiplied by 100 to give a percentage figure. Any questionnaires returned to sender or email alerts returned as undeliverable are usually deducted from the total distributed; see the example below:

500 questionnaires posted
15 returned undelivered
Total distributed is 500 - 15 = 485
150 completed questionnaires returned
Response rate is 150 divided by 485 = 0.309 multiplied by 100 = 30.9, so the response rate to the nearest whole number is 31%
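
The same calculation can be expressed as a short function; the Python sketch below simply restates the worked example above.

    def response_rate(distributed, undelivered, returned):
        """Percentage of the effective sample returning a questionnaire."""
        effective_sample = distributed - undelivered
        return returned / effective_sample * 100

    # Figures from the worked example above.
    rate = response_rate(distributed=500, undelivered=15, returned=150)
    print(f"Response rate: {rate:.0f}%")  # prints: Response rate: 31%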

If your survey is administered as a web questionnaire the response rate can be calculated using the number of email alerts sent out (if this was your 'distribution' method). If you do not alert potential respondents to the web survey a response rate cannot be calculated in this way.

The above method is the simplest and is usually sufficient for most social surveys. For a more complex method of calculating your response rate, have a look at the following response rate calculator: http://home.clara.net/sisa/casro.htm and its associated explanation at: http://home.clara.net/sisa/resprhlp.htm


 

Should I conduct my survey as a web or paper based questionnaire?

This is dependent on a number of factors but primarily upon:

- your target respondent population
- the project budget
- the expertise available to you

Consider whether the people you wish to survey are best accessed via the web; for example, a survey of employees who work in manual occupations and have no access to a computer at work is unlikely to be successful in a web-based format. Many commentators also warn that web surveys are a good method of researching internet users, but are not always appropriate if you wish to generalise your findings to a broader population. If you are doubtful as to the availability of internet access within your target population, a web survey may not be the best choice, and limited web access could bias your findings. Web-based surveys are often less expensive than postal questionnaires, but note that if you do not have the expertise to produce your own web form you may have to pay for someone else to do this for you.

Whichever format you choose, try to make it accessible to people with disabilities; for example, web surveys should be screen reader compatible for people with visual impairments. At the very least, you should offer to make your survey available in an alternative format on request (e.g. a large-print version of your paper questionnaire). University of Nottingham staff and students requiring further information on making websites accessible should refer to the Information Services Web Accessibility Guide.


 

How many questions should I include?

The best advice to follow here is to bear in mind your respondents and what you can reasonably ask of them (see also incentives).

Generally, avoid making your questionnaire too long, particularly with paper questionnaires where respondents can see the length of the questionnaire before they answer a single question - they may just give up straight away if it looks especially time consuming.

Try to give respondents a realistic (but not off-putting!) estimate of the time it will take to complete the questionnaire either in your covering letter or email or at the beginning of the questionnaire.

Also consider the types of question you are asking: a long series of open-ended questions is quite a big 'ask' of your respondents, whereas questions with multiple-choice responses are quicker and easier to answer, so respondents may tolerate a greater number of these.


 

What type of questions should I ask?

The most appropriate questions to ask will depend on the format and subject matter of your survey, but most importantly, you should think carefully about what you want to get out of the survey. Try to remain focused on the aims and objectives of your research when designing survey questions and wherever possible try to pilot your questionnaire with members of your target population. A brief description of the most common question formats follows:

Closed-choice:

These are questions with a pre-determined set of possible responses from which the respondent must select their answer. Sometimes these include an 'other, please specify' option at the end with space for respondents to write in their own answer. These questions will either invite the respondent to tick one box only or to tick all that apply. For web forms, 'tick all that apply' questions require the use of tick boxes which enable respondents to tick multiple boxes by clicking them, whereas 'tick one only' responses usually employ radio buttons where only one button can be checked within each set. An example of both types of closed-choice questions follows:

Example 1: A 'tick one box only' question:

Do you think you made the right decision regarding your choice of course?

Definitely yes
Probably yes
Uncertain
Probably no
Definitely no

Example 2: A 'tick all that apply' question:

Which of the following make a contribution to your fees and living costs?

Parents
Personal income / savings
Studentship / sponsorship
Employment during degree
Grant
Student loans
Other loans
Other (please specify)
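
To make the distinction between the two web form controls concrete, the Python sketch below generates the HTML for each question type; the field names are illustrative only. Radio buttons share a single 'name', so only one can be selected per set, whereas tick boxes (checkboxes) allow any number of selections.

    def radio_group(name, options):
        # 'Tick one box only' - radio buttons sharing one name.
        return "\n".join(
            f'<label><input type="radio" name="{name}" value="{o}"> {o}</label>'
            for o in options
        )

    def checkbox_group(name, options):
        # 'Tick all that apply' - independent tick boxes.
        return "\n".join(
            f'<label><input type="checkbox" name="{name}" value="{o}"> {o}</label>'
            for o in options
        )

    print(radio_group("right_decision",
                      ["Definitely yes", "Probably yes", "Uncertain",
                       "Probably no", "Definitely no"]))
    print(checkbox_group("funding_sources",
                         ["Parents", "Personal income / savings", "Grant"]))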

Open-ended:

Open-ended questions invite the respondent to provide a response in their own words, for example:

'What would you say you have most liked about studying at the University of Nottingham so far?'

Sometimes respondents will simply be invited to provide further comment on an issue (usually the subject of the preceding question) rather than being asked a specific question, for example:

'If you have any further comments about the facilities mentioned above, please use the space below'

Open-ended questions can help you to identify salient issues which may not have been covered by other questions and can be a useful tool, but they make questionnaire completion, data input (for a paper questionnaire) and data analysis more time consuming, so try to use them sparingly. If you find you are tempted to construct a questionnaire in which the majority of your questions are open-ended, it is possible that a questionnaire is not the ideal instrument for your research; consider using interviews or focus groups instead.

Rating scales:

These are questions which require respondents to indicate their position on a scale which usually runs from low to high. Different types of scale can be used, including the Likert scale, which commonly comprises a series of statements to which the respondent must indicate the extent to which they agree or disagree, for example:

Please indicate the extent to which you agree or disagree with the following statements about the IT helpdesk:

                                                    Strongly   Agree   Neither agree   Disagree   Strongly
                                                    agree              nor disagree               disagree
IT helpdesk answer my call promptly                   [ ]       [ ]        [ ]           [ ]        [ ]
IT helpdesk offer a polite and friendly service       [ ]       [ ]        [ ]           [ ]        [ ]
IT helpdesk always follow up problems                 [ ]       [ ]        [ ]           [ ]        [ ]

Rating scales can also be constructed using elements other than levels of agreement, for example satisfaction, importance, value for money, frequency, adequacy and so on. Another commonly used format is a numeric scale with just the upper and lower ends labelled, where the respondent is asked to circle a number on the scale to represent where their opinion falls, for example:

The University is:

Lively    1    2    3    4    5    Dull

Survey Unit experience suggests that in most cases these numeric scales are impractical, since there is no guarantee that one person's '2' is the same as another's '2'. Wherever possible it is best to employ a verbal scale with each increment labelled, to minimise the possibility of misinterpretation and to provide some degree of standardisation to the responses. Numeric scales are also more difficult to analyse and report on than verbal scales: it is difficult to comment on, say, 80% of your respondents circling '2' on a scale, whereas with a verbal scale the finding is more meaningful and you can put it into words without fear of misinterpretation, e.g. 80% indicated that the University is 'fairly lively'.

As far as possible, try to keep the scales you use evenly balanced, e.g. two agree and two disagree categories rather than three agree and one disagree; an uneven balance of categories can produce bias.

A common issue with such scales is whether to include a 'middle alternative' for respondents, such as 'neither agree nor disagree'. Opinion is split as to whether this is a useful addition to your scale. One argument is that the inclusion of a middle alternative may result in respondents 'sitting on the fence' and ticking this option every time; conversely, others contend that by not including a middle option you force respondents to express an opinion one way or the other, which may ask them to commit to a view they do not necessarily hold and can produce misleading results. Survey Unit experience suggests that whether to include a middle option is best assessed on a case-by-case basis, as it may be less appropriate for certain subject matters; in some cases it may also be necessary to include a 'not applicable' or 'not experienced' option instead of, or as well as, the 'middle alternative'.

Ranking:

Questionnaires often make use of questions which ask the respondent to 'rank' a list in a specific order, for example:

Please indicate how important the following elements of community policing are to you by numbering each of the following aspects 1 - 5, where 1 is the most important and 5 is the least important:

Visible police patrols  
Neighbourhood watch schemes  
Issuing anti-social behaviour orders  
Availability of advice on security matters  
Quick responses to emergency calls  

These ranking questions can be difficult for respondents to answer, particularly if they rate more than one of the listed items as equally important. This can lead to questionnaires being completed incorrectly and can make data input and analysis difficult, so think carefully about how you would analyse and report on such a question before including it in your questionnaire. Sometimes when you feel you need a ranking question, what would actually work better is a simple multiple response (i.e. 'tick all that apply') question, e.g.:

Did any of the following factors influence your decision to apply to the University of Nottingham? (tick all that apply)

Reputation
Location
Course
Grade requirements
University facilities
The campus(es)

Although it may be tempting to make this a ranking question, in this case it is advisable simply to ask respondents to tick all that apply rather than to rank the list. This is because it cannot be assumed that all of the factors listed actually influenced the respondent's decision; the campus, for example, may have played no part at all, so the respondent should not be forced into ranking this element. With a 'tick all that apply' response they can simply leave blank this and any other factors which were not applicable.
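
One reason the 'tick all that apply' version is also easier to analyse is that each factor can simply be counted independently across respondents. A minimal Python sketch, using made-up response data for the question above:

    # Hypothetical responses: each respondent's answer is the set of
    # factors they ticked; anything not applicable was left blank.
    responses = [
        {"Reputation", "Course"},
        {"Location", "Course", "The campus(es)"},
        {"Course"},
        {"Reputation", "Grade requirements", "Course"},
    ]

    factors = ["Reputation", "Location", "Course", "Grade requirements",
               "University facilities", "The campus(es)"]

    # Count, for each factor, how many respondents ticked it.
    for factor in factors:
        n = sum(factor in r for r in responses)
        print(f"{factor}: {n} of {len(responses)} "
              f"({100 * n / len(responses):.0f}%)")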


 

Should I offer potential respondents an incentive to participate?

The offer of an incentive is likely to increase your response rate, but consider first whether it is appropriate and whether you might be compromising quality of data for quantity.

Make your incentive appropriate to your target population. The Student Satisfaction Survey conducted by the Survey Unit offers respondents the opportunity to enter a prize draw for cash prizes, which seems to encourage a reasonable response rate. A great deal of research has been carried out into the influence of incentives; if you would like to find out more, see David De Vaus's website for a comprehensive bibliography on this subject: http://www.social-research.org/ (follow the 'reading' link and then click on 'using incentives').


How can I make my paper questionnaire 'data entry friendly'?

When designing your paper questionnaire, try to incorporate elements which will allow for easy data entry. For example, give each tick box a number, e.g. yes = 1, no = 2, don't know = 3. This means that when the data entry is carried out, all that needs to be entered is the number of the response ticked rather than a whole word or phrase. Decide how you are going to enter the data (e.g. into an Access database or Excel spreadsheet) when designing the questionnaire, as this might affect how you incorporate data entry aids into your design. If you are asking a data entry agency to input the data for you, always allow them to see your questionnaire before distributing it and offer them the chance to make any amendments or additions; for example, some data inputters require column numbers in the margins of your questionnaire. A pdf version of a paper-based questionnaire designed by the Survey Unit, showing the numbers clearly marked next to each tick box for data input purposes, is linked from this page.
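
Once responses have been keyed in as numbers, the codes can be mapped back to their labels at the analysis stage. The Python sketch below assumes the yes = 1, no = 2, don't know = 3 coding described above, with made-up keyed-in data.

    # Coding frame from the example above.
    CODES = {1: "yes", 2: "no", 3: "don't know"}

    # Hypothetical keyed-in data for one question, one value per respondent.
    keyed_responses = [1, 1, 3, 2, 1, 2, 3]

    # Simple frequency count for reporting, with codes decoded to labels.
    for code, label in CODES.items():
        count = keyed_responses.count(code)
        print(f"{label} (code {code}): {count}")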

Web-based questionnaires eliminate the need for data entry, since responses are written directly to a data file; however, do build sufficient time into your questionnaire development to test the data file before making the survey 'live', and then allow time to check and 'clean' the data before any analysis is carried out.


Should I offer respondents assurances of anonymity?

It may seem obvious, but do not assure your respondents that their anonymity will be retained if this really isn't the case. This applies both to directly and to indirectly revealing the identity of a respondent; identifying information can sometimes appear in a report, so ensure that you remove any such detail before reporting any findings. Remember also that sometimes you are offering assurances of confidentiality rather than anonymity: confidentiality suggests that individuals will not be identified to any third party, whereas anonymity suggests that even the researcher will not know the identity of the respondent - and this is often not the case.


Where can I find out more about questionnaire design?

Contact the Survey Unit.

Email: survey-unit@nottingham.ac.uk
Tel: 0115 8466091
Fax: 0115 8466090

Most social research methods books contain some guidance on surveys; however, the following books are particularly useful for information on survey design and analysis:

ALDRIDGE, A., and LEVINE, K., (2001) Surveying the social world: Principles and practice in survey research. Buckingham: Open University Press

DE VAUS, D., (2002) Surveys in social research. 5th ed. London: Routledge

DE VAUS, D., (2002) Analyzing social science data. London: Sage

There is also a wealth of information on survey design on the internet; a selection of useful sites is listed below:

For links to more detailed survey design resources go to:
http://www.social-research.org/

The Centre for Applied Social Surveys (CASS) is a good resource for up-to-date information and newsletters, as well as details of short courses on social surveys:
http://www.socstats.soton.ac.uk/cass/
CASS also maintain a question bank which can be accessed at the following URL:
http://qb.soc.surrey.ac.uk/docs/home.htm

For a simple guide to constructing market research questionnaires:
http://www.webcom.com/ygourven/quest12.html

The website for the British Sociological Association provides useful guidance on the ethical issues in social research, many of which apply to social surveys:
http://www.britsoc.co.uk/bsaweb.php?area=item1

 

Contact details

Survey Unit
University of Nottingham
University Park
Nottingham NG7 2RD

Tel: +44 115 8466091
Fax: +44 115 8466090
