Survey Principles Guide


This guide is written for users who are not overly familiar with the design or management of surveys. It provides an overview of the survey management process, as well as practical advice on planning, designing, and conducting surveys. It can also serve as a quick reference and refresher for those with more survey experience.

Since this manual is designed for newcomers to surveys, it necessarily adopts a simplified approach. We purposely do not try to go into every nuance and detail of the process. There are many fine books available for those who wish to pursue the subject further.

Chapter 1: Overview


Knowledge is the fuel that runs today's business. The success or failure of any organization depends on knowing the attitudes, beliefs, and opinions of its own people and of the people it serves. The best way to determine these is by conducting a survey.

A survey may be called different things, depending on its purpose: a poll, a questionnaire, an opinionnaire, an evaluator, an assessment, or an inventory. Throughout this document, all of these forms are referred to as surveys.

A survey is a systematic, scientific, and impartial way of collecting information. For example, you can survey a group (or sample) of people about their feelings, motivations, plans, beliefs, and personal, educational, and financial background. This information is then used to draw general conclusions about the larger group (or population) from which the sample is drawn. The intent of the survey is not to describe the particular individuals who take part in the sample, but to obtain a statistical profile of the population.
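
The idea of estimating a population from a sample can be illustrated with a short sketch. This is a hypothetical example (the ratings, sample size, and random seed are all invented for illustration); real survey data would come from actual respondents.

```python
import random

# Hypothetical population of 10,000 satisfaction ratings (1-5).
# In a real survey you would not know these values -- that is the
# reason for sampling in the first place.
random.seed(42)
population = [random.randint(1, 5) for _ in range(10_000)]

# Draw a simple random sample of 400 respondents.
sample = random.sample(population, 400)

# The sample mean serves as an estimate of the population mean.
sample_mean = sum(sample) / len(sample)
population_mean = sum(population) / len(population)
print(f"sample mean: {sample_mean:.2f}, population mean: {population_mean:.2f}")
```

With a properly drawn random sample, the sample mean typically falls close to the true population mean, which is what allows a survey to describe a group without questioning every member of it.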

Surveys are all around us. They are used by many different organizations for many different purposes. For example:

  • Organizations use surveys to measure and improve customer satisfaction levels.
  • Organizations often use surveys to discover worker attitudes about issues affecting the work environment, quality, and productivity.
  • Organizations use e-mail surveys to quickly evaluate opinions and attitudes.
  • Management uses surveys to provide data for long-range strategic planning and to enhance customer relations.
  • Training Departments use surveys to determine the specific training needs that exist in their organization.
  • Help Desk Departments use surveys to determine the effectiveness of their customer support.
  • Organizations use surveys to evaluate internal and external supplier quality against standards such as ISO 9000 and the Malcolm Baldrige criteria.
  • Organizations use surveys in Strategic Planning to increase their employees' commitment through involvement in and implementation of a common mission.
  • Organizations use assessments to evaluate their employees' leadership effectiveness.
  • Parents, alumni, students, and teachers use surveys to measure the quality of the education system.
  • Organizations use surveys to evaluate and track their Total Quality team’s effectiveness.
  • Organizations use surveys to ensure communication among all levels.
  • Banks sample customer accounts for audits.
  • News and political organizations survey samples of voters to determine how the public perceives political candidates and issues.
  • TV networks take surveys to determine how many and what types of people watch their programs.
  • Organizations use surveys to determine the effectiveness and attractiveness of their web page.
  • Auto manufacturers survey their customers to determine their level of satisfaction.
  • Restaurants survey their customers for feedback about their food quality, appearance, quantity, and price vs. value. They also ask customers about the level of service provided by their servers.
  • Hospitals survey patients to evaluate the effectiveness of care and facilities.
  • Governmental branches of the armed forces use surveys to identify and measure customer satisfaction.
  • Organizations use surveys to discover new market possibilities.
  • Resorts survey their guests to gauge enjoyment of the facilities and discover opportunities for improvement.
  • Universities survey alumni to gather information on the quality of the educational program.
  • Charities survey donors to discover ways to accomplish their mission more effectively.
  • Churches survey members to determine if their needs are being met and discover new opportunities for service.

Survey Software Tools

Often an organization can gain an advantage in its survey project by relying on software to assist in the management, design, distribution, collection, and reporting of the survey. Advances in computer technology can take what was once a long, tedious task and turn it into a much simpler process. The results tend to be more sophisticated and more professional while requiring far fewer resources.

There’s no need these days to do the entire survey project by hand. Using a software tool can:

  • Help keep the survey project’s budget balanced.
  • Organize, sort, and filter a list of potential respondents.
  • Provide quick import and export of audience lists.
  • Provide instant, automated sampling of the audience list.
  • Speed up the survey design process.
  • Personalize each survey with actual names, addresses, and other information from your audience list.
  • Make distribution of the survey quick with electronic methods such as web page and e-mail.
  • Allow quick manual data entry and instant electronic data collection.
  • Make the analysis of data seamless with instant filters and calculations of various statistics.
  • Create instant, powerful, and colorful reports that can be manipulated on the fly.
  • Provide quick reports, containing as much or as little detail as needed, to various levels of management.

There are numerous survey software tools on the market, many of which can do some of the above tasks. SurveyTracker can do all these tasks and more. Using SurveyTracker will dramatically decrease the resources you would otherwise employ on a manual survey project.

The Project Track Method

SurveyTracker provides a clear, easy track to follow in order to plan, create, administer, and analyze an effective survey project. The Project Track Method helps you easily move from the beginning planning stages through the development of action plans after the survey is over. Many organizations are continuously tracking their improvement efforts by using surveys on a regular basis.

SurveyTracker will help you track many continuous surveys simultaneously over time.

The SurveyTracker Project Track follows the proven path of the Deming Cycle for Continuous Improvement. The Cycle begins with PLAN. Without adequate planning, your survey will be ineffective.

The next step in the cycle is DO. This encompasses the actual design of the survey, selection of the sample, and collection of the data. The next phase of the Cycle is the STUDY phase. This is the point where you analyze the data you have collected, draw conclusions, and report your findings. The last phase of the Cycle is ACT. Here is where you determine the appropriate action steps to be taken in the light of the data. Once the action steps are taken, the Cycle leads you back to PLAN another survey to evaluate the changes made.

The Project Track Method is a project tracking system that helps you stay on track and always know where you are in the survey management process.


For your survey to accomplish its objectives it must be well-planned from the start. The following detailed Checklist will help you in developing an effective plan. (Not all these details need to be carried out on every survey. Adapt the list to your particular project.)

For each of the following items, specify who is responsible and when that item is to be completed if you choose to use it in your survey project.

SurveyTracker makes this planning simple by providing a Checklist function to allow you to easily manage the details of your survey project. This Checklist also follows the Deming Cycle for Continuous Improvement and shows the details you should strongly consider for a successful project.


PLAN

  1. Develop a list of survey objectives (expected outcomes).
  2. Develop a list of potential action steps reflecting the outcomes.
  3. Develop a list of potential contingency steps reflecting changes in the action steps or outcomes.
  4. Secure management’s commitment and support.
  5. Assign or nominate a project administrator/manager/coordinator.
  6. Determine the survey delivery methods available.
  7. Develop a projected budget for all of the survey costs.
  8. Select members for a Survey Project Team.
  9. Select Team members to help design and administer the survey.

DO

  10. Develop a schedule for key dates of the survey project.
  11. Decide what demographics are relevant to your survey.
  12. Decide upon the survey sampling techniques.
  13. Decide what topics are to be addressed in the survey.
  14. Draft a cover letter to accompany the survey.
  15. Review the audience list for integrity and enter it into SurveyTracker.
  16. Determine the survey delivery methods you will employ based on your audience list.
  17. Send letters to survey/interview respondents.
  18. Print surveys and prepare to present them to the respondents.
  19. Coordinate the administering of the surveys.
  20. Retrieve returned surveys from the mail, disk, e-mail, web, etc.
  21. Enter data from the returned surveys into SurveyTracker.

STUDY

  22. Analyze the data and develop a survey report.

ACT

  23. Present the results to management.
  24. Present the feedback to respondents/audience.
  25. Establish an action plan.
  26. Establish a responsibility chart for action implementation and follow-up.


Surveys are systematic methods of gathering the attitudes, beliefs, and opinions of a group of people. The survey shows the attitudes of the group, not the individuals who make up the group. Surveys are used by many organizations for many specific purposes.

SurveyTracker follows the Deming Cycle of Continuous Improvement: Plan – Do – Study – Act. This system will help you know where you are going in your survey project and get there efficiently.


© Training Technologies, Inc.

Chapter 2: Planning a Survey

All successful survey projects begin with planning. The more thorough the planning, the more smoothly the project will run.

Regardless of how small your survey project may be, it needs a plan that clearly defines what is to be accomplished, by whom, and when, along with an estimate of overall costs. This survey plan doesn’t have to be elaborate, but it must define the responsibilities of each survey project member, including the manager, coordinator, administrator, etc. Below are some considerations that should be incorporated into the survey project plan:

  • Provide your customers and/or upper management with a high-level summary of the survey project.
  • Provide survey project management with a start-to-finish plan for monitoring progress, allocating resources, and anticipating potential positive and negative outcomes.
  • Provide a plan for consistently updating survey information and distributing it to the survey team members throughout the project.

In every survey, someone (an individual or a team) is going to have to come up with solid answers for the following concerns:

  1. Why should we do a survey?
  2. What might be some of the anticipated responses?
  3. What actions might we take?
  4. What actions do the respondents anticipate as a result of their participation?
  5. If your actions don’t work, what might be some alternatives?
  6. Where will the survey be distributed?
  7. Will we need to create a multi-lingual survey?
  8. How will the survey be distributed?
  9. Do we use paper, scannable, web, or electronic surveys?
  10. How will we collect the survey data?
  11. How much data should be collected?
  12. When is the best time to collect data?
  13. Where will the data be collected?
  14. Is there any special method for collecting data?
  15. Who should be surveyed?
  16. How do we determine the best delivery method for each person to be surveyed?
  17. Who will manage the survey project?
  18. Who will help coordinate the survey project?
  19. If we do manual data collection, who will enter the survey respondent data?
  20. Who will do the survey analysis?
  21. What kind of analysis and reports should be done with the survey data?
  22. Who will present the survey results to whom?
  23. When will the respondents be told about the results?

Internal Resources Needed

It is important right from the start to make sure you have the support and commitment of your organization’s upper management. Without this kind of backing, your survey is sure to fail. Respondents are more likely to participate fully if they sense positive support from the top.

You must also decide if you will conduct the survey yourselves or hire someone from outside to do it. Hiring a consultant can be very expensive. But can you do it yourself if you are new to surveys?

In most cases, your organizational culture has a medium to high degree of trust for internal organizational, leadership, or Total Quality assessments. Training Technologies, Inc. can provide you with additional information specific to your survey project concerns. SurveyTracker and this manual will help teach you how to design and conduct a survey. In addition, Training Technologies has invested considerable effort testing the surveys in the SurveyTracker Modules. Customizing our Modules to your particular situation can save you time and effort. You can also learn good survey design by studying the surveys included in each Module.

If you are going to conduct your own survey, be certain you have a Project Manager, Coordinator, Administrator, etc. to oversee the survey project. It may be you or someone you delegate the task to, but it is essential to have a single individual who is in touch with all aspects of the project and can make sure all the parts work together to produce an effective whole.

It is also useful to have coordinators whose job is to help administer the survey and provide feedback to the respondents. These are the people who do the actual interviewing and data collection. Make sure they are well trained and know how important their task is.

The active support of your organization’s middle management is also essential to facilitate communication, survey administration, feedback, support, and implementation of the action plan for change. If they are going to be held accountable for action planning and leadership based on the results of the survey, they should definitely be actively involved from the beginning.

You should also give some thought to the respondents. Without them, you won’t have a survey. As you plan the survey, put yourself in their place and try to develop a survey that will be as profitable and enjoyable for them to take as it will be useful to you.

Give yourself plenty of time to plan. Build into your schedule enough time to handle problems that might arise, such as the need to replace some of the people involved if they find they cannot perform their job duties without additional help (manpower, etc.).

Below is a list of some of the common resources you may need for your survey project:

  • Mailing/Audience List
  • Computer and SurveyTracker
  • Telephone Access
  • E-mail, web page, or network
  • Scanner for scannable forms
  • Postal Services
  • Paper and Printing
  • Scannable forms and laser printer
  • Fax Availability
  • Transportation
  • Meeting Rooms, Audio-Visual Aids, etc.


The Survey Project Team

The size of the survey project team depends on the size of the organization and the scope of the survey.

Anyone who has a vested interest in the outcome of the survey, and whose primary responsibility is implementing the strategies and objectives, should be represented on the survey project team. These individuals should be involved from the very beginning. Be sure to keep absentees informed by sending memos and newsletters and by holding meetings and informal team updates. It is critical to address the needs and concerns of all affected levels within your organization. A quick and effective way to accomplish this is to conduct cross-departmental meetings and brainstorm a list of topical areas to be covered in the survey. These topical areas can be combined, prioritized, and developed into objectives for the survey project.

Some of the items to address are listed below for your survey project staffing considerations:

  • Who should be surveyed?
  • How many people do you need to survey to achieve your objectives?
  • When should they be surveyed?
  • What type of method will be used for gathering data? (Mailings, e-mail, web, disk, kiosk, network, scanners, meetings, personal interviews, group interviews, in-person or telephone interviews)
  • How will the data be entered into the SurveyTracker software?
  • Who will do the survey data analysis and interpretation?
  • Who will present the results to whom?

A survey project manager, coordinator, or facilitator should be selected to energize, coach, coordinate, and be the spokesperson for your organization and/or customers. SurveyTracker’s Schedule is a key tool for the project manager, team, and others to accomplish survey objectives.

In many companies, the Quality, Human Resources, or Marketing/Customer Service areas facilitate most internal and external surveys. They oversee the survey project from the beginning through the ongoing improvement tracking of subsequent surveys.

Surveys of smaller scope can be handled by only a few people or even an individual using SurveyTracker and its pre-designed Survey Modules.


Determining the Purpose of the Survey

One of the first things you need to do is determine the purpose of your survey.

Why do you want to conduct a survey?

A clearly expressed answer to this question from the survey planning team will help you plan the survey. The answer provides everyone with a framework for their expectations. Remember: there are many different, equally valid reasons for conducting a survey.

A survey can accomplish many vital functions for an organization. Some of the things a survey can do are:

  • Improve customer relations
  • Determine the quality of customer support
  • Evaluate your current and prospective suppliers
  • Indicate strengths and weaknesses
  • Pinpoint problems with productivity
  • Evaluate the quality of education
  • Track the effects of change
  • Enhance communication
  • Improve attitudes
  • Uncover knowledge and training gaps
  • Track training implementation
  • Evaluate leadership results
  • Determine the quality of a web page
  • Increase commitment through involvement

In preparing to conduct a survey, you must first decide what specific topics you want to cover and what information you want to gather. To select the content of a survey, define your terms and clarify what you need to know.

The reason for this, of course, is to aid you in developing clear and effective questions to help you obtain the information you need. Therefore, at this stage in the planning, write down the topics that you want to gather information on. (For example, product and service performance, value for money spent, strengths, weaknesses, demographics, product size, etc.) It isn’t necessary to worry about the exact questions you will ask. That will come later. For now, define the topics and be sure everyone involved in planning the survey is in agreement on them.

Whenever possible, define the terms used in your survey. “Productivity,” for instance, can mean different things to different people. If you don’t clearly define your terms, you may end up with results that don’t tell you what you want to know.

Make sure you can actually get the information you need. Sometimes, people are reluctant to reveal their views, especially if they are not popular or if they are critical of management.

If you find you cannot get the information you need, you should remove the topic from the survey and find an alternative source of data.

Make sure you can deploy the right survey for the right audience. If you have a highly technical audience, make sure you can provide an equally high-tech method. In contrast, if your audience has no technical background, make sure you use a mailing, personal interview, or similar non-technical method.

Information for Action

It is very important not to ask for information unless you, or someone in authority, can act on it. A survey raises people’s hopes that action will be taken. If you repeatedly ask for their opinions and then ignore them, people will stop taking your surveys seriously. They will soon begin to answer in a hit or miss fashion or fail to answer at all. Soon, they will develop a negative attitude toward all surveys.

This can be avoided if you take the time to plan your methods for problem solving, action planning, communication, measurement, and feedback. Respondents don’t expect miracles. However, they do expect a sincere effort to explain the results of the survey and to use the information gathered to take positive steps. So make specific plans for ways to let the respondents know what is being done with the information they supplied.

It is also important to communicate before the survey with employees and management alike to correct any misconceptions and allay any misgivings. The efforts you put forth in this way before the actual survey will pay off with better response and greater accuracy.

Being open about the entire survey process is another way to build trust with the respondents. This is especially true if you are conducting internal surveys among employees.

Planning of survey objectives

Another important part of the survey planning process is the consideration of possible outcomes and the development of contingency action plans for those outcomes.

Survey research information should always be directly related to decisions and have specific action potential. Of course, survey results do not make decisions or dictate actions, but they should suggest some actions that can be taken.

It cannot be stated too strongly: when conducting a survey, objectives should be developed right away for the project. These objectives provide milestones to measure the success of the overall survey project. It is important to keep your objectives focused and limited in number. Be sure your objectives are also:

  • Out of sight, but not out of reach, of your project team and organization.
  • Contributing to your organization’s and/or customers’ overall mission and objectives.
  • Short and simple (the KISS principle).

Objectives should be written specifically and with time frames. The statement of an objective should follow the following formula:

To + An Action Verb + A Measurable Output + Quantity (Objective) + Time Frame

Example: To increase ABC’s Company’s customer satisfaction level 15% by December 31.

Planning of survey analysis and reports

The survey analysis may be designed in stages to make the best use of the survey team’s time and maximize its impact. SurveyTracker’s Summary Table and Summary Multi-Graph quickly identify organization-wide strengths and weaknesses. A Grand Mean (X double bar) can be benchmarked against individual departments, plants, divisions, etc.

The survey should be planned with the desired results in mind. For example, if you are going to produce Summary Tables and Summary Multi-Graphs, you must design your survey with questions and scales that will yield the data needed for the analyses you want.

Some of the common analyses conducted on general surveys are:

  • Frequency of a response to a particular question.
  • Question responses ranked from the best to the worst.
  • Comparison of an individual area, department, or branch with the Grand Mean of the organization.
  • Frequency, averages, or cross-tabulations of demographic data.
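
These calculations can be sketched with Python's standard library. This is a minimal, hypothetical illustration (the departments and ratings are invented), not SurveyTracker's implementation.

```python
from collections import Counter
from statistics import mean

# Hypothetical 1-5 ratings for one question, grouped by department.
responses = {
    "Sales":   [4, 5, 3, 4, 5],
    "Support": [2, 3, 3, 4, 2],
    "IT":      [5, 4, 4, 5, 4],
}

all_ratings = [r for ratings in responses.values() for r in ratings]

# Frequency of each response across the whole organization.
frequency = Counter(all_ratings)

# Department means, ranked from best to worst.
ranked = sorted(((dept, mean(r)) for dept, r in responses.items()),
                key=lambda pair: pair[1], reverse=True)

# Grand mean: the organization-wide average each department is compared against.
grand_mean = mean(all_ratings)
print(frequency, ranked, round(grand_mean, 2))
```

Each department's mean can then be compared with the grand mean to flag areas that fall above or below the organization-wide benchmark.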

As you plan the survey, you should also develop contingency plans for action in the case of undesired outcomes, such as:

  • Non-completed surveys.
  • Unrealistic expectations of employees or customers that everything they say will be immediately changed in your organization. (Consistent and timely communication along with implementing value-added ideas or systems will help reduce these expectations.)
  • Superficial feedback from respondents.
  • A large number of undeliverable surveys caused by a poor audience list (e.g., out-of-date mailing addresses, incorrect e-mail addresses, etc.).
  • Poorly completed scannable forms (the wrong type of pencil or pen used or messy response marks).
  • Radical results or overwhelming feedback.
  • Inadequate results (low response rate).

It is always better to be prepared for unusual occurrences than to have them catch you off guard. If you have a contingency plan, you can take the appropriate action quickly in order to remedy the situation.

Confidentiality of Responses

An important factor that must be considered during the planning stages is the confidentiality of the responses. In general, the survey instrument should be designed so that there are no markings or codings that could link a particular survey to a specific individual. You should also make sure that there are no questions that would tend to identify who gave a certain response.

If you use any codings on your survey, you should fully explain their purpose to the respondents: they allow you to track results from branches, locations, plants, departments, etc., and to make comparisons between areas, not individuals. This is especially important with e-mail and other electronic survey methods. Many of these methods require specific subject lines so responses can be returned easily to the receiving software. These headings may look “suspicious” to your audience, so make sure they know the headings exist only to ensure accurate data collection.

You should always make sure that your electronic survey methods employ proper data encryption to ensure the confidentiality of your results. This is especially true for kiosk, network, and disk surveys where multiple people take the same survey on the same disk or computer. It is also important to safeguard e-mail replies (for both e-mail and web surveys) from interception. SurveyTracker has all the data encryption routines you’re likely to need. All responses are viewable only by the person retrieving the data.

If your results are going to be accurate, your respondents must have confidence that their responses will remain private. This is especially important in business. If respondents think that management can trace a response back to its author, they will only respond with what they think management would like. Your data will be useless.

If there is any question, it can be useful to explain the difference between “anonymous” and “confidential.” Anonymous means that the employees do not have to sign their names to the completed surveys. Confidential means that only authorized persons (such as the project manager) will see the surveys and that the identity of individual respondents will be closely guarded.
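
The distinction can be made concrete with a small sketch. This is purely illustrative (the salt, identifier, and record layout are invented assumptions, not SurveyTracker's mechanism): a confidential design stores a one-way code in place of the respondent's identity, while an anonymous design stores no identifier at all.

```python
import hashlib

def confidential_code(respondent_id: str, secret_salt: str) -> str:
    """One-way code: staff holding the salt can regenerate it to track
    returns, but the stored code alone does not reveal who responded."""
    digest = hashlib.sha256((secret_salt + respondent_id).encode("utf-8"))
    return digest.hexdigest()[:8]

# Anonymous: no identifier of any kind is stored with the answers.
anonymous_record = {"q1": 4, "q2": 5}

# Confidential: identity is replaced by a code that only authorized
# persons (e.g., the project manager, who holds the salt) can reproduce.
confidential_record = {
    "code": confidential_code("jsmith@example.com", "project-secret"),
    "q1": 4,
    "q2": 5,
}
```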

All SurveyTracker Special Interest Modules and Cover Letters are expressly designed to provide all respondents complete confidentiality.

Choosing the Survey Method

The next step in survey planning is to select the method you will use to gather the information. Selection of the right method can greatly enhance the response rate and the completeness of your data.

Often, people only consider the three main methods (personal, mail, and telephone) of taking surveys. However, there are actually eight basic methods that can be effective in gathering survey information. These eight basic methods are:

  • Personal interviews
  • Mail surveys
  • Telephone interviews
  • Hand-delivered surveys
  • Focus Groups (or Group Interviews)
  • Location interviews
  • Web surveys
  • Computer interviews

Each of these methods can be effective and useful, depending on the scope and situation of the survey. Of course, if you are surveying the employees of your organization, you will usually find yourself doing personal interviews, phone interviews, network or e-mail surveys, or a variation of the hand-delivered survey. You will give the survey to employees at work and ask them to fill it out either during work or on their own time. When finished, they will return the surveys to you or you will pick them up.

Before you choose a particular method, you should carefully consider the strengths and weaknesses of each, as well as the ease and cost of administering the survey.

Determining the best survey delivery method

Careful consideration must be paid to how surveys will be delivered to your audience. Using the wrong survey delivery method can result in disastrously low returns while using just the right mix can result in fantastic return rates.

To determine the proper distribution methods, go over your audience list carefully. Do you know the technical capabilities of each audience member? Do they need a simple mailed survey or can they handle e-mail? If they can handle e-mail, do they need a simple text-based e-mail or a more advanced form-based e-mail? Should the audience be contacted directly for a personal interview?

Equally important is whether your organization has the capabilities for the survey delivery methods. If your audience can receive e-mail surveys, can you deliver these surveys in a timely manner? If a personal interview is warranted, do you have an interviewer with the personal skills required to gather the information? If your audience is capable of replying to a web survey, do you have a web page to host the survey?

If you do not have the necessary delivery methods, you must determine whether you can put them in place in a timely manner or whether you can employ a different delivery method altogether. The former may be expensive or technologically impractical for your organization, and the latter may net low returns. Careful consideration of this subject is required.

Determining resources needed for data collection and entry

Although data collection may be the most important step in the survey process, it is often the most neglected part. You need to give careful consideration to the question of who is going to collect the data and how they should be prepared. Failure to give careful thought to this phase can result in unreliable data due to untrained or unqualified interviewers.

Data entry is also an important part of the survey process. Plan how the data will be collected. Determine whether it will be entered manually by one individual or by multiple team members. After identifying who is going to enter the results of the survey into SurveyTracker, make sure they have the time available and access to a computer during that time.

Data collection can also be performed electronically, with little manual data entry time required. This lowers the responsibilities of the data collection personnel, but it does not eliminate them. Assigning a diligent person to ensure the accuracy of returns and to manually enter surveys with bad data is very important. Choosing an individual who understands the technology well enough to retrieve the results is also a necessity.

If you have decided on a scannable survey, you’ll want to make sure you have a scanner that is compatible with the data sheets you will use.


The first step in planning a survey is to decide the purpose of your survey. This will provide a framework for every decision you make throughout the entire project. The purpose for conducting a survey will also raise expectations in those who take part in the survey. You should be sure to act on the information you gather.

A very important aspect of a survey project is checking your resources: make sure you have the internal resources, that is, the help of the necessary people, especially management, to successfully conduct your survey.

You should plan your survey by considering the possible outcomes and developing contingency plans for those outcomes. Except for academic research, all surveys should be given for the purpose of obtaining data that will help in determining a course of action. You should also consider means to ensure the confidentiality of responses.

It is very important to make certain you will have all the resources you will need to conduct an effective survey. Plan the delivery methods and data collection carefully, trying to anticipate any problems and developing contingency plans for those situations. The technical level of both you and your audience, whether internal or external, should also be considered before deciding upon any survey methods. You should make sure you have the personnel needed to deliver, collect, and enter the data into SurveyTracker.

© Training Technologies, Inc.

Chapter 3: Survey Scheduling

If you want to conduct your survey efficiently, you must start with a careful plan. An important part of an effective plan is a realistic, attainable schedule.

At the very beginning of your project, set up a timetable that includes all the tasks necessary to complete your objectives. The scheduling of activities should be an iterative process that helps provide the framework for an accurate project budget. The depth or level of your schedule will vary depending on the complexity and size of the survey project.

The best time to schedule an organizational survey is when workers will have time to complete the survey and management has the resources to take action on some of the findings if necessary. Below is a partial list of bad times to conduct a survey:

  • During holiday or heavy vacation periods.
  • During your organization’s busy season.
  • During plant shutdowns.
  • During computer upgrade periods.
  • When the company web page or network is down or unstable.

The survey schedule should be detailed enough to increase your customer and/or management’s support and confidence for the overall survey project.

Next, you will need a timetable for conducting the survey. Include in this timetable everything from the survey design date to the completion of the survey. This will give you an indication of approximately how long the surveying will take.

You should also prepare a timetable for the analysis of the survey results.


For smooth administration of a survey project, all the details need to be worked out on a complete schedule. The schedule should include all major events, along with a timetable for their completion.


Chapter 4: Survey Budgeting

Determine Survey Delivery Costs

Before you can conduct an effective survey, you must have a realistic picture of what it will cost. Although it would be nice if cost were no object, everyone must operate under some budgetary constraints.

A survey’s major cost is people. You cannot design and conduct an effective survey without individuals to plan, design, test, and administer the survey.

  • A coordinator must oversee the entire survey process from beginning to end.
  • Individuals must administer the survey. In addition to salary, they will also incur expenses.
  • Incentives used to increase the response rate will add cost for each respondent.
  • If conducting a non-electronic survey, people must enter the data collected into SurveyTracker for analysis. Time costs for this will vary greatly depending on the complexity of the survey and the number of responses obtained.
  • Survey results must be analyzed and reports delivered to management.

In addition to the labor costs, you must also take into account the resources necessary to design and conduct the survey. Some of the various kinds of expenses incurred are:

  • Mailing list expenses
  • Computer hardware and software expenses
  • Telephone expenses
  • Network administrator expenses
  • Postal expenses
  • Scannable form printing/scanning expenses
  • Web page operation and maintenance expenses
  • Paper and printing expenses
  • Fax transmittal expenses
  • Travel expenses
  • Meeting room expenses
  • Audio-visual aids

In addition to the above expenses that can be estimated accurately, most organizations add 10-15% to the cost estimate for unexpected expenses.
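The contingency margin described above can be sketched as a simple calculation. The line items and dollar figures below are invented placeholders, not recommendations from this guide; only the 10-15% cushion comes from the text.

```python
# Sketch of a survey budget estimate with the 10-15% contingency the guide
# recommends. All line items and amounts are hypothetical placeholders.
def estimate_budget(line_items, contingency_rate=0.125):
    """Total the itemized costs, then add a cushion for unexpected expenses."""
    subtotal = sum(line_items.values())
    return subtotal * (1 + contingency_rate)

items = {
    "postage": 1200.00,
    "printing": 800.00,
    "data_entry_labor": 2500.00,
    "incentives": 500.00,
}
total = estimate_budget(items, contingency_rate=0.10)  # low end: 10%
```

Running the same items through several contingency rates is one quick way to do the kind of "what if" budgeting described later in this chapter.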

The survey project staff should consider the following items in relation to their costs versus the value they add toward achieving the survey objectives:

  • Sampling and data collection techniques.
  • Anticipating typical problems and delays.
  • How soon the survey results are needed.

Detailed schedules and projected costs can be prepared and integrated into a survey project report.

SurveyTracker helps you quickly do “what if” budgeting to satisfy your survey objectives without sacrificing a lot of precious project time. The SurveyTracker Budget report keeps key project members informed and involved.

Here are several effective ways to reduce the cost of your survey:

  • Keep the geographic area of the survey as small as possible. Don’t go to five cities if you can do your survey as well in just three.
  • Reduce open-ended questions. These can cost up to five times more to ask and analyze than closed-ended questions.
  • Conduct electronic surveys when possible and practical.
  • Try to keep your sample sizes as small as possible. Don’t oversample just to “play it safe.” Trust the sample and don’t survey more people than necessary. (The only exception would be if your audience population were small, or just large enough that a sample would exclude only a minor portion of your audience.)
  • If you have designed a new survey and haven’t pilot tested it, do a small-scale survey first to see if there are any hidden surprises. If not, then go ahead with the full-scale survey.
  • Thoroughly test all electronic survey methods to ensure compatibility.
  • Analyze only the information you really need. If you find you need more data at a later time, you can always get it from SurveyTracker.
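On the point about trusting the sample rather than oversampling: the standard normal-approximation formula for sample size illustrates why extra respondents add little precision. This formula is common survey-statistics practice, not something this guide prescribes, so treat the sketch as illustrative.

```python
import math

# Illustrative sketch of the classic sample-size formula
# n = z^2 * p * (1 - p) / e^2, using the worst case p = 0.5.
# Not a method prescribed by this guide.
def sample_size(margin_of_error, confidence_z=1.96, proportion=0.5):
    """Respondents needed for a given margin of error at ~95% confidence."""
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

sample_size(0.05)  # roughly 385 respondents for a ±5% margin
```

Note how halving the margin of error roughly quadruples the required sample, which is why "playing it safe" with extra respondents is an expensive habit.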


Before you can conduct an effective survey, you must have a realistic picture of what it will cost. A survey’s major cost is in the people needed to produce and administer it. In addition to the expenses that can be estimated accurately, most organizations add 10-15% to the cost estimate for unexpected expenses.


Chapter 5: Survey Design

Designing the Survey

The design of the survey involves a lot more than just jotting down a few questions off the top of your head. Careful thought must be given both to the topics that should be included in the survey and the phrasing of the questions about these topics.

To make your task easier, SurveyTracker’s survey library contains numerous surveys on various topics. Every survey has been carefully designed for maximum effectiveness. Using SurveyTracker, you can customize the individual surveys to fit your particular situation.

If you are going to create your survey from scratch, you should give careful thought to the question of who should provide input into the design and wording of the survey.

You should also determine at this point (if you haven’t done so already) who should give the final approval of the survey design. This should be a single individual. Trying to get approval from a committee is usually an exercise in frustration. Everyone has his or her opinion about what should be on the survey and how it should be phrased. Getting them to agree at least 85% to 90% is as good as you are probably going to get in a short period of time. Get a single individual with the authority to give final approval to the survey and you will reduce your frustration and time required considerably.

What Will Be On Your Survey?

Before you can design an effective survey, you must know what information you want to obtain. The following list of possible topics should help you evaluate whether or not you have covered all necessary areas.

  • Leadership
  • Empowerment
  • Communication
  • 360 Degree Feedback
  • Meeting Skills
  • Education
  • Quality Performance
  • Quality Standards
  • Innovation
  • Motivation
  • Trustworthiness
  • Company’s Interest in your Welfare
  • Delegation of Responsibilities
  • Decision-Making Process
  • Management Competence
  • Supervisor Know-How
  • Goal-Setting Skills
  • Help-Desk Evaluation
  • Fairness Within Company
  • On-The-Job Stress
  • Incentive Plans
  • Sufficient Pay
  • Benefits
  • Work Schedule
  • Suggestion System
  • Web Page Evaluation
  • Employee Handbook
  • Effectiveness of Open Door Policy
  • Physical Work Environment
  • Ergonomics
  • Equipment
  • Maintenance
  • Customer Service
  • Management Attitudes
  • On-The-Job Training
  • Performance Appraisal
  • Job Satisfaction
  • Opportunity to Use Abilities
  • Job Interest
  • Chance for Advancement
  • Efficiency
  • Planning
  • Scheduling
  • Work Flow
  • Relationships with Co-Workers
  • Meeting Productivity
  • Cooperation
  • Strategy Implementation
  • Personal Expectations

The Look of the Survey

A survey’s appearance is very important, especially for a self-administered survey. Any survey that is hard to read confuses or irritates interviewers and respondents. The result is lost data. Also, any survey form that does not provide enough space to easily record answers will produce confusion and dubious data.

General rules:

  • Include the entire question with its scale(s) on one page or screen whenever possible.
  • Don’t split a question response list or scale, with part on one page and part on another.
  • The response area for each question should be located either below or to the right of the question.
  • Always provide enough blank lines or open space for fill-in-the-blank or open-ended responses.

Paper (non-scannable) Surveys

When you design a survey, give careful consideration to the font, size, and style you use. Select a type style that is comfortable to read. Some fonts make reading easier; others are more difficult to read. Remember, many people dislike reading small type. Also, use bold type to set apart the different sections of your survey.

Leave plenty of white space on the page. Questions should have a double space between them and wide margins. This makes the survey more accessible for the respondent.

Web Surveys

The primary concern of any web survey is whether or not you should use a single-page “long” survey or a multi-page survey. Each method has its pluses and minuses.

A long survey has the advantage of offering the entire survey on a single web page. It does not require any additional clicks or pages to load. The disadvantage of this is that it can take time to load if the survey is of a significant length. This could result in an impatient respondent who simply clicks STOP before the page is completely loaded! Remember that the average time spent waiting for a web page to load is less than 30 seconds. Most high speed internet connections can load web pages quickly, but if there are any factors slowing load time or if the respondent has a slow modem, this can be a serious problem.

A series of small survey pages is the alternative. These surveys have the benefit of being both more interactive (which appeals to some people) and quicker to load. Each individual page should take under a couple of seconds to load (taking into account various factors and connection speeds). The disadvantage is that it can be easy to lose respondents. It has been estimated that a survey can lose as much as 10% of its respondents each time a new survey page is loaded!

The best way to avoid losing respondents is to make the survey as streamlined as possible. This is another judgment call for web page design. Should the web survey be clean and unadorned for faster load times, or more visually pleasing but slower to load? The answer often depends on numerous factors, such as the speed of your web provider, the phone lines, the respondent’s web provider, and the speed of the respondent’s internet connection. The decision to make a web page fancy or simple depends completely on where you draw the line between the possibility of lost respondents and the benefit of an attractive survey.

All web-based surveys should employ the standard features people have come to expect from web pages. This includes radio buttons, drop-down boxes, and the like.

Phone/Tablet Surveys

A survey designed for the small screen of a smart phone should have few questions per page and/or a vertical layout that is fully re-sizable via pinch and zoom if necessary. Most users hold their phones in a vertical orientation, so it’s best to avoid wide surveys that require scrolling to the right or multi-column layouts.

Generally speaking, a phone or tablet survey should be a web survey, which will work as long as the respondent has Web access. It is possible to provide surveys via apps, but we highly advise against this because it requires the user to take the extra step of downloading the app. Every modern phone or tablet has a web browser that is fully compatible with standard web surveys/forms.

The primary thing to keep in mind is to support a responsive HTML layout. This means the survey either adjusts dynamically to the width of the page or allows for full zoom and resizing.

Not all mobile devices adhere to standard web page layouts. That said, the vast majority of users with a smart phone will be fine. Respondents with older phones or flip phones will not be able to take the survey.

USB Thumb Drive, Network, and Kiosk Surveys

These electronic survey methods should include drop-down boxes, appropriate white space for open-ended questions, radio buttons, and so forth.

A USB thumb drive survey should limit the amount of graphics or multi-media, because such content reduces the number of responses that will fit on one drive. In practice this is rarely a concern, given the capacity of a typical USB thumb drive or similar media.

A network survey has few limitations in its look unless the computer in which it resides has a small hard drive that can’t handle additional graphics or multi-media. Keep in mind also the speed of your network and the number of users on it at any one time. A fancy survey may load slowly if your network is slow or if a significant number of people are trying to use the network at one time.

If you are planning on using a dedicated cabinet for a kiosk survey, then you might opt for touch-screen monitors or similar proprietary hardware or interface. This type of kiosk survey usually doesn’t include open-ended questions or anything fancier than multiple choice scales. On the other hand, if you simply place a survey on a standard PC or Mac, then you should probably use a standard PC or Mac interface that people will quickly understand.

Scannable Form Surveys

Scannable surveys should have a uniform look for two reasons: first, the scanner won’t be able to read the survey properly otherwise; second, people expect these surveys to look a specific way! Most schools have used multiple-choice scannable forms for years now, and everyone coming up through the system knows how these surveys are “supposed” to look. It’s best to stick to that look for consistency.

In most cases, the scanner software you use to set up a scannable survey will stipulate the look of the survey. You should follow these requirements rigidly or else your survey results will be damaged during collection. Different scanner companies offer different types and lengths of forms and some allow you to design and print the form on your local printer. Contact your scanner company for a list of the pre-designed scanner forms they offer or to determine if you can use your own paper.

Keep in mind that if you use your own paper, you should check with your scanner’s manufacturer to determine whether there are recommended limits for your paper. You will mostly want to consider the paper weight/thickness and brightness/whiteness level.


Writing the Survey

The first thing that should be written in your survey is an Overview. This overview states the purpose of the survey and the objectives you are trying to achieve. A well-written overview will help respondents understand the importance of their participation in the survey.

The next element in your survey should be Instructions telling the respondent how to score the survey questions and approximately how long it should take to complete the instrument.

Throughout the survey, use Notes of various kinds and lengths to deal with special situations. (You can also use notes for advice to the interviewer in telephone or personal interviews.)

Once you have written the Overview, the Instructions, and any Notes that apply at this point, you are ready to begin writing the questions that will solicit the information you are seeking.

The specific elements that make up a survey are written as either questions to answer or statements to respond to. For ease of use in this manual, both types will be referred to as “questions.”

There are basically only two types of questions that can be used in a survey: open-ended or closed-ended. Open-ended questions let the respondent answer in their own words. This provides insights into why people believe the things they do, but it is very difficult to interpret accurately and objectively.

Closed-ended questions ask the respondent to choose from a list of possible responses. This is more efficient and reliable. The efficiency comes from being easy to use, score, and code (for analysis by computer). The reliability comes from the uniformity they provide, since everyone responds using the same options (e.g., “agree/disagree/don’t know”).

Working with Different Languages or Cultures

As you write the survey, keep in mind the increasingly global market. Depending on your survey audience or the location where the survey will take place, you may need to adjust the language or use of language in your survey!

For instance, if you’re writing in the United States, a survey in certain parts of the country such as the Southwest might need to be in Spanish and English. Or if the survey is in Canada, French and English. You should consider hiring or contracting a translator if there are no multilingual employees in your organization.

If your survey is going to be used in another country, consider all the languages that country speaks. This is especially important in Europe, with so many countries speaking so many different languages so close together. It isn’t uncommon to see things written in English, French, Italian, and German in the European Community!

Language translation programs can be purchased to convert text for you. You shouldn’t rely completely on these programs, because they will often be too formal or produce overly literal or unintended translations. If you use one, have an expert review the translations occasionally to make sure the word usage is correct.

If you’re sending a survey to nations that speak the same language as yours, keep in mind the difference in the spelling of certain words. The British use colour over the American color. Americans use organization and South Africans and Australians use organisation. Sending a facsimile is a Fax in the US and Faks in South Africa. Consider also the different use of words: elevator vs. lift, soccer vs. football vs. American football, flat vs. apartment, etc.

Audience Mail Merge

Audience mail merge refers to placing audience information directly onto the survey instrument. This is really only practical when you are using a survey software tool that can implement mail merge. Placing information onto the survey that is directly tied to the audience list will allow you to address each audience member personally and will increase your response rate.

You should be conservative, however, in the number of audience fields you apply to your survey. Placing the name with salutation (e.g. Dr., Mrs., Mr., etc) and a job title are often all you need. Including a home or work address may be appropriate under certain circumstances. You should avoid using too much personal information because you risk making your audience aware that the survey isn’t really personalized or you may make them uncomfortable by appearing to know too much personal information about them.
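The conservative merge described above (salutation, name, and job title only) can be sketched as a simple template fill. The field names and audience records below are invented for illustration; they are not SurveyTracker's actual merge syntax.

```python
from string import Template

# Hypothetical mail-merge sketch: fill a survey cover note from an audience
# list using only a few conservative fields, as the guide advises.
invitation = Template(
    "Dear $salutation $last_name,\n"
    "As $job_title, your feedback is important to this survey."
)

audience = [
    {"salutation": "Dr.", "last_name": "Ruiz", "job_title": "Lab Manager"},
    {"salutation": "Mr.", "last_name": "Okafor", "job_title": "Line Supervisor"},
]

# One personalized note per audience member.
letters = [invitation.substitute(person) for person in audience]
```

Limiting the template to a few fields keeps the personalization from tipping into the "we know too much about you" territory the text warns against.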

Survey Personalization

Survey personalization refers to placing logos or markings on the survey instrument itself. This can be as simple as your company logo in the header or footer, an organizational slogan, or printing the survey on a paper with the organization’s watermark.

A web survey can offer a great deal of personalization, including animated logos, sounds, and the like. However, the fancier you make the survey page, the longer it will take to load and the busier it will look. The risk of losing your audience must be weighed against your desire to make the page attractive.

Surveys placed on a USB thumb drive have the limitation of disk space. Placing a large logo will result in less room for audience replies. This isn’t a concern if you are sending a drive to each respondent or if you’re using large-capacity media.

Surveys on a network or kiosk don’t tend to have many limitations. These surveys can be as fancy as you want as long as the hard drive they’re placed on has the room (and it’s unlikely this will be an actual concern).

Scannable surveys require that markings of any kind be delegated to specific sections of the scannable form. Printing information in the wrong spot can confuse the scanner and result in scanning errors and lost data.

SurveyTracker directly supports audience mail merge and the placement of logos and other headers and footers in web, paper, USB thumb drive, kiosk, and network surveys. Survey-by-USB offers multi-media support as well. SurveyTracker Plus supports mail merge through Scantron DesignExpert™’s pre-slugging capabilities.


How long should a survey be? That depends on what you need to know and how many questions it takes to collect trustworthy data. It also depends on the type of survey (self-administered questionnaires should be shorter than face-to-face interviews), on the time available, and on your resources. A major resource concern: if respondents are being compensated to complete the survey, who is paying for it?


Writing good questions is probably the single most difficult task involved in any survey. Constructing good, clear questions is much more complex than it appears to anyone who has not done it. A good question should be worded so clearly that all respondents interpret it the same way. This is hard to accomplish, but the reliability of the survey results depends a great deal on the quality of the questions asked. If the respondents have difficulty interpreting questions, it can introduce a significant amount of error into the scoring and make accurate interpretation difficult, if not impossible.

Below are some guidelines that will help you to write effective survey questions.

  1. Use only one thought in each question.
  2. Use standard English.
  3. Be direct.
  4. Word your questions as simply as possible.
  5. Make questions close to the respondent’s personal experience.
  6. Do not use biased words or phrases.
  7. Be alert not to include your own biases in the wording. (If in doubt, have someone else check the questions for bias.)
  8. Do not get too personal.
  9. Avoid using slang or abbreviations.
  10. Use jargon only when absolutely necessary and only when certain that respondents will understand it.
  11. Ask straightforward questions: avoid complex questions.
  12. Give enough information in the question for the respondents to give a reasoned, intelligent answer.
  13. Do not use “loaded” questions that suggest particular responses.
  14. Be sure the question clearly states whether you want a factual or an opinion-based response.
  15. Do not write questions with more than one adjective or adverb. (e.g., “Was the nurse friendly and helpful?”) This should be two questions instead of one.
  16. Take extra caution when using vague adjectives and adverbs such as “several,” “few,” “most,” “usually” and other similar words and phrases.
  17. If the answer to a question could be a negative (“No”), avoid using a negative in the question (e.g., “Do you believe retired generals should not pay taxes?”). This results in a double negative. Phrase the question directly instead. (e.g., “Should retired generals be exempt from paying taxes?”)
  18. Avoid words that have more than one meaning (e.g., “value,” “liberal,” or “conservative”). You will not know which meaning respondents had in mind when they answered.
  19. Avoid hypothetical questions. Many people object to answering them.
  20. Make sure the respondent knows exactly what information should be put in “fill-in-the-blank” questions.


The order in which the questions are asked can sometimes be as important as the questions themselves. Each question influences all the ones that follow. Below are some guidelines to help you arrange your survey questions in the most effective order.

  • The first question on a survey should be clearly connected to its purpose.
  • Put relatively easy questions at the beginning of the survey. This helps people make a good start and encourages them not to quit because they are tired or not confident enough to complete the rest of the questions.
  • Put “sensitive” questions in the middle of the survey.
  • Objective questions should come before subjective ones.
  • The survey should move from the most familiar to the least.
  • Follow the natural sequence of time or process flow. Limit branching. All branch instructions should be simple and clear.
  • Avoid many items that look alike. Place questions logically.
  • Group questions about the same or similar subjects together and consider providing a category heading.
  • Ask demographic questions last. Some people don’t like giving out personal information.
  • If they must be requested, the last two demographic items (in order) should be religion and income.

Skip and Hit Patterns

In addition to the rules listed above, skip and hit patterns can be applied to most surveys. A skip and hit pattern is simply a way of directing the order of questions based on the replies to past questions. For instance, if you answer “No” to question 5, skip to question 8.

If the survey is being conducted by phone or person-to-person, make sure your surveyor is well versed on the skip and hits so he or she won’t become confused while talking to the respondents.

Paper surveys that include skip and hit should include clear instructions on how to maneuver through the survey. A survey can lose a significant number of respondents if complicated directions confuse them. Limit the number of skips. Even if your instructions are good, skipping too many questions can still become confusing. There’s also the chance the respondent might feel they are “missing” too much of the survey. It is better to allow a respondent to simply choose “No” or “Does Not Apply” than to have them skip over every other question.

Electronic surveys can often be programmed with skip and hit patterns that are invisible to the respondent! The advantage is that the respondent neither knows they are skipping questions nor are they required to follow complex instructions. SurveyTracker supports skip and hit in web surveys, Survey-by-USB, network, and kiosk surveys.
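The "if you answer No to question 5, skip to question 8" example earlier can be sketched as a small lookup table. The question numbers and answers are the guide's own illustration; the code structure is a hypothetical sketch, not SurveyTracker's implementation.

```python
# Minimal sketch of a skip-pattern table. (current question, answer) pairs
# map to the question to jump to; everything else advances by one.
skip_rules = {
    (5, "No"): 8,  # answering "No" to question 5 skips to question 8
}

def next_question(current, answer):
    """Return the next question number, honoring any skip rule."""
    return skip_rules.get((current, answer), current + 1)

next_question(5, "No")   # jumps to question 8
next_question(5, "Yes")  # proceeds to question 6
```

Keeping the rules in one table, rather than scattered through the questionnaire, also makes it easier to honor the advice above about limiting the number of skips.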


Piping is another way electronic surveys can aid the order of questions. A piped question attempts to focus the respondent’s answer by including the replies from previous questions in the current question. For example, if a respondent answers, “Monday” to one question, the following question might read, “Explain why Monday is the day you go shopping.”
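The "Monday" example above amounts to inserting a stored answer into the next question's text. This sketch is illustrative only; the question wording comes from the guide, but the mechanism shown is not SurveyTracker's own syntax.

```python
# Piping sketch: the respondent's prior answer is substituted into the
# follow-up question before it is displayed.
answer = "Monday"  # reply to an earlier question about shopping days
followup = f"Explain why {answer} is the day you go shopping."
```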

Randomizing and Grouping Questions

A problem that often arises when ordering questions is that a respondent may try to outthink the survey instrument. They do this by predicting the “right” answers to questions based on how previous questions were asked or answered. Respondents might be trying to give replies they think you want, or they might be doing it subconsciously. Two ways to beat this problem are randomizing and grouping questions.

Randomizing the order of questions removes patterns that exist in your questions. This is also handy when testing because each respondent will have the same questions but in different orders. A good software program should not allow randomization to interfere with the analysis and reporting of your survey results. SurveyTracker supports randomized questions in form-based e-mail, disk, network, or kiosk surveys.

Grouping is a means by which you can place questions at any location throughout the survey and apply invisible associations between them. This allows you to analyze and report on these hidden groups with the assurance that your respondents can’t second-guess the survey based on the order of questions. SurveyTracker supports Grouping in all electronic survey methods.
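Randomizing presentation order while preserving hidden groups can be sketched as follows. The question IDs, texts, and group names are invented examples, and the code is a conceptual sketch rather than how SurveyTracker does it internally.

```python
import random

# Sketch: shuffle the order questions are presented in, while keeping
# stable IDs and invisible group tags so analysis is unaffected.
questions = [
    {"id": "Q1", "group": "leadership", "text": "My manager sets clear goals."},
    {"id": "Q2", "group": "pay",        "text": "My pay is fair."},
    {"id": "Q3", "group": "leadership", "text": "My manager listens to me."},
]

# A shuffled copy for presentation; the master list keeps its order.
presented = random.sample(questions, k=len(questions))

# Analysis still works on the hidden groups, whatever order was shown.
leadership_ids = [q["id"] for q in questions if q["group"] == "leadership"]
```

Because the IDs travel with each question, randomization never interferes with reporting, which is the property the text says a good software program must guarantee.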


The question of what to do with the “don’t know” response is a difficult one. Sometimes, this response is a legitimate one (e.g., “Do you know what the term CPU refers to?”). However, it is often used merely as a way to avoid answering a question that the respondent thinks is difficult or uncomfortable. In designing your survey, take into account the “don’t know” responses and make provision for them. If a “don’t know” is meaningful, include a place for that response on the survey form. If it is not, do not include a space for it. (Some respondents will still leave the answer blank anyway.)


Depth can be added to a survey by including respondent comments. These comments not only help place the survey results in the real world situation, but they also can provide insights that can help interpret the data more accurately.

SurveyTracker allows open-ended comments to be entered either by typing them into the datasheet, by having a data-entry person code them according to the content of the comment, or by automatic electronic data collection. In the case of electronic collection, a data entry person can later go into the results and code each response. Comments enhance the numerical findings by reminding the administrator of the real world situations that contributed to the way the question was originally answered.

The benefit of Coding is that you can perform objective analysis and create statistical reports on these otherwise subjective comments.

Should You Do A Trial Survey?

Once you have written the survey questions, you should consider conducting a trial survey to test its reliability. The trial survey will help you see if the survey can be administered easily and if it will provide accurate data. The purpose of the trial survey (sometimes called a pretest or pilot test) is to answer the following questions:

  • Will the survey provide the needed information?
  • Are any questions misleading or repetitive?
  • Are the questions appropriate for the information you wish to gather?
  • Is the survey language appropriate for your sample population?
  • Will those giving the survey be able to use the forms effectively?
  • Are the procedures for giving the survey standardized enough so that all information is collected the same way?
  • Is the data gathered by the survey consistent?
  • Is the data gathered by the survey accurate?
  • If doing an electronic survey, does the data collection process work as expected?


A web survey can be tested by placing it on the Web and then doing a single test reply. Just remember not to provide any links to the survey or announce its existence until it has been fully tested. A web survey doesn’t tend to require the same amount of testing as an e-mail survey, since it isn’t tied to compatibility between e-mail systems.

Disk, network, and kiosk surveys don’t require rigorous testing as they aren’t tied to e-mail. However, you should still run a test to make sure your software program can receive the results properly.

A scannable survey should be tested to make sure the scanner is marking the answers correctly. You may also want to test different types of pens and pencils and sizes of marks to determine how exacting the scanner needs your respondents to be when filling in the data sheet.

Choosing the Response Scale

In writing a question, you must also take into consideration the scale you will use to record the responses. Since the answers to survey questions usually reflect some opinion or position, a well-designed scale makes responding to the question simpler.

Scales are used to collect responses that are compared to one another. Each possible response in a scale should be expressed in terms very similar to the other possible responses. This enables the answers to be compared.

The use of scales also makes analyzing the data easier because scales can be easily transformed into numerical scores for analysis. SurveyTracker makes this process easy.
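As a sketch of what that transformation looks like, here is a minimal Python example; the scale labels and scoring are illustrative only, not SurveyTracker’s internal format:

```python
# Map a five-point agreement scale to numeric scores (illustrative labels).
SCALE = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

def mean_score(responses):
    """Average the numeric scores for one question's responses."""
    scores = [SCALE[r] for r in responses]
    return sum(scores) / len(scores)

responses = ["Agree", "Agree", "Neutral", "Strongly Agree", "Disagree"]
print(round(mean_score(responses), 2))  # 3.6
```

Once each answer has a number behind it, averages, cross-tabulations, and trend comparisons all become straightforward arithmetic.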

There are many types of scales. The following chart contains the most commonly used ones, all of which are offered by SurveyTracker. The scales marked with an * indicate scales that can’t be read by a scannable survey. These scales can still be used as long as they are manually entered.

Write-In *
Lets respondents fill in the blank with their answer. Offers insight into why people believe the things they do, but the answers can be difficult to interpret accurately.

Examples:
  1. How long have you worked here? ______
  2. What is your birth date? _____
  3. General Comments: _______________________

Multiple Choice – Single Response
Provides a series of possible responses from which respondents are to select the one that best fits their response.

Example: How would you rate our overall service?

___ Great
___ Average
___ Poor

Multiple Choice – Multiple Response
Provides a series of possible responses from which respondents may select more than one to describe their response.

Example: Who did you come with today? (Check all that apply)

___ Family
___ Spouse
___ Friends
___ Date

Horizontal Numerical *
Offers a simple, horizontal numeric scale along which respondents are to locate their opinion about each of the issues listed. Offers a quick way to tap the respondent’s values with regard to several issues.

Examples:

How was the service?
Slow 1 2 3 4 5 Fast

How would you rate your manager’s organization skills?
Weak -5 -4 -3 -2 -1 0 1 2 3 4 5 Strong

Forced Ranking *
Asks respondents to rank a list of items in order. Enables you to measure people’s choices and preferences relative to other items in the same group (e.g., different brands of detergent).

Example: TEAM T-SHIRT
The following colors have been reduced from a larger list. Please rank them in order of your preference, starting with the number 1.

___ Blue
___ Green
___ Red
___ Grey

Fixed Sum *
Consists of a number of possible responses, among which the respondent is asked to distribute a fixed number (e.g., “10”) of choices. Often used to determine what proportion of choices of several possibilities have been made. Most effective when used to measure recent events. The result is a clear indication of the proportion of times a particular choice was made.

Example: Please rate the following types of food in order of your personal preference from 1 to 5 (1 being your first choice and 5 being your last).

___ Cheesesteak Sandwich
___ Pizza or Spaghetti
___ Cheeseburger & Fries
___ Burrito or Taco
___ Fish & Chips

Yes/No, True/False, Binary
Requires respondents to make a choice (“yes or no,” “true or false”). One of the simplest scales, but often provides exactly the data you need.

Examples:

Do you smoke cigarettes?
___ Yes ___ No

I use valet parking when available. ___ True ___ False

Note: The examples shown in this chart are for reference only, to show the structure of each scale type. The actual layout of the scales (typefaces, size, etc.) and how the respondent answers the scale (checkbox, OMR (Optical Mark Recognition), lines to fill in numbers, etc.) are set up in each Scale Layout screen according to your specifications.


Designing an effective survey involves much more than just jotting down a few questions “off the top of your head.” You should give careful thought to the wording and the order of the questions, the response scales to be used, and the physical appearance of your survey. All these factors affect the accuracy and rate of return of your surveys.

Testing your survey is also important. This involves performing a test survey with a small number of people to ensure the survey “works”. If you are doing electronic surveys, testing also includes making sure that web, network, kiosk, and USB surveys return the results properly.

© Training Technologies, Inc.

Chapter 6: Sampling

One of the first questions that must be answered in any survey is who exactly is going to be surveyed. It is usually impractical to interview everyone. Therefore, you need to survey a properly derived sample, which can reveal a wealth of information about the wants, needs, or opinions of the people surveyed.

A population, or audience, is the pool of individuals who you want to survey for information. This can include any number of people as long as they fit your required demographic. You wouldn’t, for example, include the entire population of Michigan if you wanted to survey the opinions of workers at General Motors.

A survey sample is specially designed so that each individual in the target population has a known chance of selection. Because of this, the results of a survey can be reliably projected to the larger public. For example, national surveys usually sample only about 1,500 persons to measure the attitudes and opinions of a population of some 200 million.

The sample size required for a survey will depend on the reliability needed which, in turn, depends on how the results will be used. Contrary to popular belief, the error attached to a sample depends on the sample size, not on the size of the population from which the sample is drawn. Cost is also a factor in determining the sample size.
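The standard margin-of-error calculation shows why sample size, not population size, drives the error. A rough Python sketch, assuming simple random sampling, a 95% confidence level (z = 1.96), and the conventional worst-case proportion of 0.5 (these conventions are standard statistical practice, not specific to this guide):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,500 gives roughly a +/-2.5% margin of error,
# whether the population is 2 million or 200 million.
print(round(margin_of_error(1500) * 100, 1))  # 2.5
```

Notice that the population size does not appear in the formula at all; shrinking the error further requires a larger sample, which is where cost enters the decision.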


Determining the population of your audience is completely up to you. There are no hard and fast rules as to who should or should not be included. Simply put, if you use an audience at all, it could be as big as the United States population or as small as the people in your office. Keep in mind that more accurate results tend to come from a larger audience because you’ll retrieve more sets of data to use in your final analysis and reports. But don’t confuse a large audience with a good audience. It’s better to have the right people than to have a lot of people, especially if the larger audience includes people who don’t fit into your desired demographic.

Paper and scannable surveys may or may not require an audience list. If you are going to mail the survey, you certainly need an audience list. Any survey that will be printed out and physically given to a select group also needs a defined audience. But if the survey is to be handed out in a public place, you won’t have an audience list to choose from.

Another option is to purchase a database of e-mail addresses from companies that sell demographic information to businesses. These companies sell phone numbers, e-mail addresses, and more. Be careful if you choose to rely on one of these lists: they are often out of date, and many of the people listed don’t realize their demographic information is being sold. Expect a significant non-response rate if you choose names from such a list.

You should also be aware that randomly sending out unsolicited e-mail is commonly known as spam. Spam is any unrequested e-mail from a business and is openly despised by the Internet community. Many people simply delete spam unread. But there is another, more pressing concern: some Internet Service Providers prohibit spam or any massive e-mail campaign, so make sure your ISP allows large e-mail campaigns before you begin. You must also be prepared for significant numbers of non-respondents and possibly a large number of hostile replies. You should offer to remove any respondent from your mailing list at their direct request (and follow through).

Another way to build an e-mail list is to request e-mail addresses on your web page. Using these replies for the survey is a great way to get an audience list that is actually interested in your survey; they wouldn’t have sent you their address if they weren’t interested in what you were offering. This method is more passive and can’t be done on a moment’s notice. You’ll have to plan in advance and collect addresses for a period of time before you start the survey process.

Web surveys don’t tend to lend themselves to a specific audience list. They are often open to anyone who can locate the survey’s URL with their browser. It is possible to limit the people taking the survey by sending out the URL only to those people you want to survey and simply not provide a link anywhere on your web site. Another solution requires a specific password for each respondent. The password is entered into the web page before the survey begins.

Surveys on a USB thumb drive or kiosk can either have a selected audience or not. The audience includes only the people who receive the drive or whom you ask to physically visit the kiosk. On the other hand, a kiosk survey may have no set audience at all if it’s placed in a public place. In this case, anyone who wanders by may freely take the survey. Of course, a kiosk should always be located where the respondents are the types of people you want to take your survey. A USB drive, once it leaves your hands, could conceivably end up with anyone, so your responses are not guaranteed to come only from the people who received the drive from you.

Network surveys include anyone on your network. This type of survey tends to be tied to a very specific audience list. You rarely have any choice in limiting the respondents further unless you simply tell only the specific people you want to respond where to find the survey.

The Sampling Process

Sampling involves a number of tasks and decisions. In order to understand them better, it helps to place them in the context of the entire sampling process.

    1. The first step is to define the target population to be sampled. This is usually determined by the purposes of the survey. For example, a survey that investigates men’s preferences in shoes would, by definition, have a target population consisting of men. A survey of forklift operators’ safety attitudes would have a target population of forklift operators.
    2. Establish a “frame” for that population. This frame sets the boundaries of the population. Often, the frame is geographic, such as a survey of the population of the “Los Angeles area.” Sometimes the frame consists of other characteristics, such as income level, gender, educational level, etc. Often the frame can be as simple as the department the person works in or the shift they work.
    3. Choose the method for selecting the sample.
    4. Determine the size of sample that is needed.
    5. Select the actual individuals, households, or companies who will make up the sample.


Most surveys contain items that describe the respondents. These items are known as “demographics.” The demographic profile of the sample can be compared with the target population as a whole to see if it is a close fit. Demographic variables are also used to divide the sample into subsamples, such as age, sex, occupation, etc. SurveyTracker includes many scales designed with different demographics.

Here is a list of commonly used demographic variables:

  • Sex
  • Age
  • Education
  • Occupation
  • Income
  • Race or ethnic identity
  • Religion
  • Type of dwelling
  • Zip code, or location
  • Length of time at present residence

When using demographic data to divide the sample into subgroups, it is always important to check the integrity of the data source and when the data was last updated. Only current (or fairly recent) and accurate demographic data can provide you with the information you need to divide the sample.

Sampling Methods

Convenience Sampling

Convenience sampling is the least expensive method of sampling. This method samples people solely on the basis of “convenience” or “accessibility.” It consists of simply interviewing whomever you can get to take part in your survey. Commonly used in the “man on the street” type of interview, this method does not result in a very representative sample. Because of this, the results usually cannot be used to draw valid conclusions about the target population.

Representative Sampling

In a Representative Sample, you conduct a search for typical respondents who represent each of the aspects of the sample frame (typical middle-income households, typical low-income households, etc.). This method assumes that those who are determining the sample can select a representative cross-section using their judgment. This obviously leaves the selection process open to personal bias.

Systematic Sampling

In systematic sampling, you pick a sampling interval (say, five) and then select every fifth name on a list, starting from a randomly chosen point. This makes sample selection relatively easy and forces the sample to span the entire target population in a systematic manner. The problem with this method is that lists are sometimes arranged so that certain patterns can be uncovered. If you use one of those lists, a bias may be introduced into your sample. Examine your list carefully to ensure that no bias is accidentally introduced.
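The selection step itself is easy to automate. A minimal Python sketch, using a made-up list of names and a random starting point:

```python
import random

def systematic_sample(names, interval):
    """Take every `interval`-th name, starting from a random point."""
    start = random.randrange(interval)  # random start avoids always beginning at name 1
    return names[start::interval]

names = [f"Person {i}" for i in range(1, 101)]  # a hypothetical list of 100 names
sample = systematic_sample(names, 5)
print(len(sample))  # 20
```

The random starting point matters: without it, the first name on the list would be selected every time the list is reused.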

Random Sampling

A random sample is one in which each person has an equal chance of being selected. The term “random” applies to the way the sample was determined. You cannot tell whether a sample is random by looking at the sample itself. Random samples are selected using tables of random numbers or computer-generated random numbers. (SurveyTracker includes a random number generation feature.) Simply use the random numbers to select people from the population to be included in your sample.
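In code, random sampling is simply a draw without replacement from the population list. A minimal Python sketch with a hypothetical population:

```python
import random

population = [f"Employee {i}" for i in range(1, 501)]  # 500 hypothetical people

random.seed(42)  # fix the seed so the draw is repeatable
sample = random.sample(population, 50)  # each person has an equal chance

print(len(sample), len(set(sample)))  # 50 50  (no one is selected twice)
```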

Stratified Sampling

In random sampling, you choose a sample of respondents at random from the population. In stratified sampling, you first divide the target population into subgroups (or strata) and randomly select a given number of respondents from each stratum to get the sample. For example, if you want to have an equal representation of males and females in your survey, you would first divide your target population into subgroups of males and females and then randomly select an equal number from each subgroup.
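The male/female example can be sketched in a few lines of Python; the population records here are invented for illustration:

```python
import random
from collections import defaultdict

def stratified_sample(people, key, per_stratum):
    """Randomly select `per_stratum` people from each subgroup (stratum)."""
    strata = defaultdict(list)
    for person in people:
        strata[key(person)].append(person)  # group the population by stratum
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, per_stratum))  # equal draw per stratum
    return sample

# Hypothetical population: 50 males and 50 females.
people = [{"name": f"P{i}", "sex": "M" if i % 2 else "F"} for i in range(100)]
sample = stratified_sample(people, key=lambda p: p["sex"], per_stratum=10)
print(len(sample))  # 20  (10 males + 10 females)
```

This guarantees the equal representation that a plain random draw only delivers on average.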

Cluster Sampling

Usually, cluster sampling reduces costs and time needed by surveying groups who are located near each other geographically. However, clusters may also be selected using some other basis, such as groups of consecutive names on a membership list, etc.

One advantage of cluster sampling is the ease of obtaining additional observations. A disadvantage is that often units that are located close to each other are similar and do not contribute information much different from each other.

One form of cluster sampling is called Area Sampling. In this form, the initial cluster is determined by a geographical location (say, a city or county). From this cluster, a subgroup is randomly selected.

Quota Sampling

Quota sampling is based on the idea that, for accurate results to be projected from the sample, certain qualifying characteristics (such as magazine subscribers) must be sufficiently represented. It is a compromise between a stratified sample and a convenience sample. In a quota sample, interviewers survey people who fit the quota profile until the appropriate number have been surveyed. They then proceed to collect the rest of the sample. This makes the quota sample less accurate than a random sample, but it does guarantee that the sample represents the population in certain characteristics.

Sampling Error and Nonsampling Error

Obviously, no sample perfectly reflects the entire population from which it is drawn. There is always a difference between the sample and the population that must be taken into account in formulating action plans based on your survey results.

The natural difference between the sample and the population is called a sampling error, or “the error due to sampling.” Statistical methods can estimate the sampling error. Remember that the term “sampling error” does not imply a mistake of any kind. It merely refers to the natural variation from one sample to another.

If other factors, such as misleading survey answers, errors in transcribing, or invented data to fill in the blanks, cause the difference between the sample and the population, these differences are called nonsampling errors. Careful planning and dependable sampling procedures can help eliminate this type of error.


Survey samples are designed so that each individual in the target population has a known chance of selection. Convenience sampling is the least expensive, but not very representative. Representative sampling identifies respondents who are typical of the sample frame. Systematic sampling chooses the sample in a mathematically systematic way to reduce bias. In random sampling, each person has an equal chance of being selected. In stratified sampling, you first divide your target population into subgroups and then randomly select a given number from each group. Cluster sampling surveys groups who are located near each other geographically or who share some other common basis. Quota sampling insists that certain qualifying characteristics be sufficiently represented in the sample.

No sample perfectly reflects the population from which it was drawn. This difference is called the sampling error and is an important part of the analysis of the data gathered.


Chapter 7: Distribution

Before you send the surveys into the field to collect the data, check them one last time to make sure they are exactly as you designed them. It’s often a good idea to have someone who has not seen the survey before read it through. When the content of the survey is just right, it’s time to put the survey distribution method into effect. But first, you must communicate with your audience.

Communicate the survey to your audience

If you want to secure the full cooperation of your respondents, contact them before the survey takes place, explain the survey’s purpose, and ask them to take part.

Not everyone enjoys a survey. This is especially true in business or service organizations. People sometimes feel that, if they are honest, their jobs or prospects for advancement will suffer.

Communicating with respondents in a business or service organization before the survey is especially important if you are trying to find out attitudes and/or performance. A prior letter should be sent to the respondent from the company president, or the highest person in the company at that particular site, telling the respondent of the need for their participation.

When you write them before the survey, let them know what you want of them and remind them of top management’s support for the survey. It is also helpful to stress the confidentiality of their replies. This pre-survey communication gives everyone, employees and management alike, a chance to get used to the idea of the survey.

The type of communication depends on who is involved in your survey. However, even if you are only surveying non-management employees, be sure to include involved managers and supervisors in the communication process. This way, they know what is going on and why.

The timing of your pre-survey communication is also important. Letters and e-mail should be sent at least three to four weeks before the survey begins. Personal communication (face-to-face, phone calls) should take place at least three weeks before the survey. This allows respondents plenty of time to get comfortable with the idea.

Survey Communication Channels

There are many ways you can publicize your survey. Following is a list of some of the channels you might consider using:

  • Memo from management announcing the survey and explaining its purpose and overall benefits.
  • Letter from the president, top management or survey administrator
  • General meetings
  • E-mail
  • Departmental or unit meetings
  • Letter to the respondent’s home
  • Telephone call from the president or survey administrator
  • Article or interview in organization newsletter
  • Posters, bulletin boards, or pay envelope inserts

Survey Preparation

Once the survey has been properly communicated, it is time to make the actual preparations for the conduct of the survey. This includes things like training interviewers and setting up locations. The survey preparations required differ according to the method you use. Now is the time to make sure everyone involved knows what is expected of them and how to carry out their part of the survey.

Increasing Response Rates

  • Provide a contact person to answer any questions.
  • Communicate to your customers or employees before you send out the surveys.
  • Assure confidentiality and anonymity.
  • Use the right survey method for the right audience members.
  • Send a reminder card one week after the survey is sent. (Include a postage-paid return envelope.)

SurveyTracker provides you with the ability to design very professional-looking surveys, which will also increase your completion and response rates.

Telling the respondent that you will share the findings of the survey with all those who respond can sometimes increase response rates.

The best way to ensure a good response rate in electronic surveys is to assure your audience that you have provided significant data encryption. A good program will provide encryption routines that protect the results of the survey from eyes other than the respondent’s and the data entry personnel’s. SurveyTracker is just such a program!

Another way to increase response rates is to include an inducement, such as a gift or premium. This promotes good will, encourages people to respond, and may even make some people feel obligated to return the survey. To be effective, inducements must be clearly seen as tokens of appreciation, rather than pay for returning the survey. Inducements should also be sent with the original mailing, rather than a later delivery. The inducement should be chosen carefully so as not to influence the way people might answer the questions.

One form of inducement that avoids the appearance of pay is the statement that, for every returned survey, you will donate $1 (or a similar amount) to a well-known charity (such as the Red Cross or UNICEF).

The Survey Cover Letter

A cover letter is an important part of the self-administered survey. It can also be helpful for in-home interviews to introduce the interviewer and verify his or her identity. The cover letter doesn’t have to be on paper; it can be part of an e-mail message or appear as a note at the opening of a disk, network, or kiosk survey. A web page survey should lead off with the cover letter and provide the survey beneath it or on the following page. If you are going to use a single long survey page, it’s fine to put the cover letter on its own page.

People want to know what the survey is about and why they should take the time to complete it. The cover letter speaks directly to them and asks them to participate.

SurveyTracker’s Mail Merge feature allows you to easily format your cover letter and envelope (or mailing label) so that SurveyTracker can automatically use the sample selected from the audience list. This significantly cuts down the effort hours required to do survey cover letters. It also expedites conducting repeat surveys over time.

The opening of the letter asks the individual to take part in the survey. This opening is very important and must grab the reader’s attention in the first two or three sentences. People are usually more inclined to respond if they are assured that their replies will be completely confidential.

Somewhere in the letter you should tell the respondents that you will share the results of the survey with them. Be specific. Let them know where the results will be posted. If you haven’t decided how the information will be distributed, let them know and give an approximate date when you will have formulated your plan. Being open will often assure the respondents that their effort in filling out the survey won’t be wasted and this will increase your response rate.

Cover letters can vary widely. Here are some guidelines to help you write an effective survey cover letter:

  • Cover letters should be short and to the point.
  • Personalize the letter by talking directly to the person, using “you” instead of “he,” “she,” or “the person.”
  • Be sincere.
  • Explain the purpose of the survey and emphasize what actions will result from it.
  • If you ask demographic questions (e.g., salary, number of children, age, occupation, etc.) and the respondent cannot easily see their relevance to the survey, you should explain why these questions are asked. If you can’t explain why, eliminate the questions.
  • Ask them to complete the survey and return it to you as soon as they can. (This is better than giving a specific deadline. If someone misses the deadline, they will assume that you no longer want their responses.)
  • Explain how they can obtain the results of the survey (if this is possible) or how you will communicate the survey findings to them. Offer to e-mail or post the results on your web page.
  • Mention the approximate time it will take to fill out the survey.
  • Stress the confidentiality of their replies in order to encourage them to give their honest response.
  • Say only what is absolutely necessary and say it briefly.
  • Thank them for their time and participation (if appropriate).
  • Keep your cover letter to a single page or screen in length.
  • Use standard business letter format in order to give a sense of professionalism.


The appearance of a mail survey is very important. The piece that is received in the mail is the only contact the respondents will have with you. It must effectively represent your organization. It also must be completely self-contained. You have no opportunity for further clarifications or explanations.

Use SurveyTracker to print out the survey instruments. If they are to be mass reproduced, you can make one master copy and then photocopy it as many times as needed. Place the survey and the cover letter (see section on the Cover Letter above) in the appropriate envelope, attach the address and stamps, and deliver to the Post Office.

Scannable surveys must be printed out individually, usually on a laser printer. Printer companies or the company that produced the scanner will often print a quantity of your forms for a price.

Paper quality will significantly affect the overall impression of the survey. Do not use light paper stock in order to save on mailing costs. Use 20-pound, or heavier, paper so that printing will not show through. Smooth surface paper is best for a mailed survey. Avoid slick or textured surface paper. Envelopes should match the paper stock.

Be sure to always provide a pre-stamped addressed envelope. Failure to provide this will almost guarantee an extremely poor response rate.

Do not fold a scannable survey when mailing it, and instruct your respondents not to fold it when returning it. This requires a return envelope at least as big as the survey itself.

Avoid mailing the surveys at times when respondents receive large quantities of mail, such as around holidays.

When you decide on a mailing date, take into consideration any external events that could influence either the response rate or the responses themselves.

Most of the surveys returned will be received within a three to four week period. Rather than waiting a long time to begin analysis, it is best to monitor the daily returns and choose a cut-off point. A large number of surveys will be received every day for the first few days after a mailing, then taper off. Surveys received after the cut-off date should be ignored.

The type of postage used to mail the survey will affect the response rate. First-class stamps on the mailing envelope provide the highest response rates of all. Bulk mailing gives the lowest response rate.

It is important to verify the integrity of your mailing list. Check randomly selected names and addresses for accuracy. It is not unusual to find mailing lists with large numbers of inaccurate addresses. These will usually affect your response rate negatively.

Nonresponse is a common problem with mail surveys. Because of this, it is very important to weight your results to account for this nonresponse.

This can be done by logically and subjectively adjusting the results to accommodate the nonresponse (e.g., if only 60% responded and 30% of them answered affirmatively, you could adjust the response to approximately 40% affirmative). Another way to adjust the results for nonresponse is to conduct a follow-up study (for example, by phone) of those who didn’t respond.

SurveyTracker’s Sampling feature will help you calculate the necessary sample size based on historical data concerning response rates.

Web Distribution

There is very little actual distribution involved with a web-based survey. Instead, you must get word to your respondents, if any, that the survey is online and can be replied to at their convenience.

You don’t have to have a specific audience for a web survey. You can simply place it on your site, provide suitable links, and let people visit it at their convenience. Cross-linking or advertising your survey is a good way to get the word out. If you find a web page that appeals to people who might want to complete your survey, you might approach the page’s webmaster and propose a mutual link exchange: they add a link on their page to your survey (or your web site), and you do the same for them. Purchasing an advertising banner on a web site that attracts the business you want is another option. And don’t forget to submit the survey’s URL to search engines such as Yahoo and Excite.

The above options depend greatly on how long you want to keep the web survey online. If it’s only going to be up for a short time, using the search engines may not be a good idea. If you get another web page to provide a link, let them know when the survey is coming down so they can remove it.

If the web page is meant to be exclusive to your audience, there are two good ways to limit who can visit. First, you can simply post the survey page without providing links to it on your main web site; instead, send the actual URL to your audience in an invitation message and have them click the link to launch the survey. Second, you can require a password to enter the survey, entered on the web page before the survey begins.

Unless you specifically advertise your survey to audience members, a web survey tends to deliver a slow response rate. It’s best to use surveys that aren’t time critical in these situations and to send e-mail invitations and reminders directly to the respondents.

If you do use an audience list, take into account the following before setting a cut off date:

  • If you have included a specific audience in your web survey plans, you can usually expect responses quickly if the audience is active on the web. Make a cut-off date of about two weeks.
  • If the audience doesn’t tend to use the web actively, the response rate could be low and it could take a long time to receive enough replies. You probably don’t want to use a web survey in this case.
  • If you simply aren’t sure how active your audience is (e.g., you used an unsolicited e-mail list), setting a cut-off date will have to be a judgment call.

Always make sure you provide the exact URL to each respondent or web site you want to advertise on. Double check the link to make sure it works. A potential respondent will NOT go out of his or her way to locate your survey if the link is incorrect.

Test your web page before you advertise it. Make sure it looks good in as many screen resolutions, color settings, and computers as you can. Make sure that the Submit button works and that the responses are returned to the proper e-mail address.

Survey-by-USB Distribution

Surveys placed on a USB thumb drive can either be handed out physically to each audience member, passed between multiple audience members, or mailed directly to each audience member.

When you copy the survey to the drive, provide clear instructions on how to load it. These instructions are best included in an initial e-mail but can also be placed in a README file on the drive itself. If the instructions are in a README file, make sure your respondents know where to find it. Instructions can also be included in the cover letter that you deliver with the drive.

You should avoid mailing out thumb drives unless you are sure each respondent, or at least a large percentage of them, has a computer and can run the file. You must also make sure their computers are compatible with your files. For instance, sending a Macintosh-formatted drive to respondents with IBM-compatible machines is futile.

When you mail a drive, use a protective cardboard mailer large enough to hold the device, a cover letter, and a return mailer with proper postage. Your response rate will plummet if you fail to include paid return postage. First-class postage on the mailer provides the highest response rates of all; bulk mailing gives the lowest.

If you simply hand out the thumb drives, provide instructions on when and where to return them.

Your response rate with a Survey-by-USB survey depends on whether you handed out or mailed the drives. A good cut-off date for a mailed drive is 3 to 4 weeks. If you handed a single drive to multiple people, responses will be slow, as everyone must complete the survey and pass the drive along before it comes back to you. If one drive went out to every person, you can expect responses anywhere from a couple of days to a couple of weeks.

Kiosk Distribution

When you place your survey on a kiosk, or computer terminal, you must make potential respondents aware of its existence. If the kiosk is within your office, this can be as simple as an internal memo or e-mail. If the kiosk is at a shopping center, tradeshow/convention floor, or in any other well-trafficked area, you must make it clear with proper signage that the survey exists.

A kiosk can be anything from a dedicated, proprietary cabinet with its own screen and keyboard (or touch screen) to an ordinary computer set up to run the survey. If it’s not a dedicated piece of hardware, make sure the survey itself is coded not to return to the operating system between respondents.

A kiosk survey delivers results as fast as people can take the survey, so you must make the survey easy to use, friendly, attractive, and, above all else, pertinent to the location. Don’t set up a kiosk at a restaurant and ask people about their favorite airlines. Place surveys that ask customer opinions of the store in the store itself, or at the tradeshow or convention, office, airport, hotel, restaurant, etc. Also, don’t set up the kiosk in the wrong spot at the location. Don’t place it at the entrance to a restaurant; put it at the exit, so people can tell you how their service was, how good the food was, and so on.

You must also have a technician ready to service the computer or at least provide a system robust enough to handle the public. Your survey will be a complete failure if the computer it’s on crashes!

Due to the public nature of some kiosks, you must be especially watchful for bogus replies. A kiosk can attract people serious about their replies or people just making random responses out of boredom or as a joke. Another type of bad respondent is the “theme” respondent. This type of person will enter information in a logical pattern, but the replies won’t be his or her honest opinion. Thankfully, some of these people answer in an obviously fake manner (e.g., Name: Sherlock Holmes. Location: 221b Baker Street). Deciding whether replies are genuine can be a tough judgement call. Often, the only solution is factoring in a margin of error.

Network Survey Distribution

A network survey is nothing more than a survey placed on an organization’s internal network. Anyone on that network is asked to locate the file, load the survey, and reply. Usually the computer with the survey should be the network server or some other centralized system. Letting the people on your network know where to find and how to access the survey is important.

Since the audience for this type of survey is so narrow, you can usually expect a quick turnaround. A single work-week is usually enough time.


If you want people to participate fully in your survey, be sure to contact them before the survey takes place. This is especially important in a business setting if you are surveying attitudes or performance. Explain the purpose of your survey and assure respondents that all answers and comments will be confidential. Give yourself enough lead time so that everyone is contacted at least three weeks before the survey.

© Training Technologies, Inc.

Chapter 8: Personal Interviews

Personal interviewing offers the greatest opportunity for gathering abundant information. It is also considered the best method for obtaining in-depth and complex information. Almost any question can be asked, because the interviewer can use both verbal and visual cues.

Personal interviews tend to yield higher quality answers because people tend to concentrate more and give more detailed responses. If a survey is going to last over an hour, it is also better to conduct it in person.

The in-person approach is used in several of the eight basic methods: Personal Interviews, Delivery Surveys, Focus Groups, Group Interviews, and Location Interviews. Some of the following may not apply to all of these methods, but the basic principles should be followed unless there is a specific reason for not doing so.

For best results, the respondent should be contacted before the actual interview. Often this is done by letter but a phone call also works. If it won’t bias your replies, a copy of the survey questions should be sent prior to the actual interview. This gives respondents a chance to think over their answers instead of just answering “off the top of their head.”

The Best Time to Interview

The time of day and day of the week when the interviews are conducted will have a significant effect on both the response rate and the nature of the answers received. Give the timing of the interviews careful thought, considering the method being used as well as the sample you wish to interview. For example, employee surveys are best conducted during the workday. Employees often view a survey more seriously if they are asked to take it on company time. This may affect the results, so be sure to plan the best time to interview your particular sample.

Customer surveys are best if conducted during the day unless you have made special arrangements.

Length of Interview

Keep interviews moderate in length. A long interview or questionnaire can frustrate or overwhelm people. Longer interviews cause participants to become impatient. Therefore, they may end the interview or provide false answers (e.g., all responses are “yes” or “disagree” regardless of their true feelings.)

In designing your surveys, remember that the fewer questions asked, the better the response rate and the more accurate the results of those responses.

Try to ask only those questions that are necessary to achieve the purpose of your survey.

Tips for Interviewers

    1. Be friendly so people will talk to you, but don’t be so aggressive that you drive them away.
    2. Involve the person immediately by asking them a question.
    3. Never ask if they have time for a few questions.
    4. Always be polite, even if someone refuses to participate or is rude.
    5. Adjust yourself to the participant and their environment. Be patient if there are interruptions, such as telephone calls.
    6. Read all questions aloud, clearly and slowly, exactly as written, so that respondents can understand every word.
    7. Never interpret a question unless specifically instructed to do so by the survey administrator. Reread the question, but do not try to explain it.
    8. Be sure to follow all instructions and procedures. It is very important that all interviews be conducted the same way.
    9. Do not let respondents read the questionnaire over your shoulder. Premature knowledge of some parts of the questionnaire may bias their responses.
    10. At the end of the interview, be sure to thank the respondent for their participation.

Orientation of Interviewers

The interviewer is the key to a successful personal interview survey. If there is going to be more than one interviewer, any variations among them can affect the data gathered. Because of this, it is important to orient interviewers so that you minimize differences in the way they conduct the survey.

SurveyTracker provides assistance in minimizing these differences. As you design your survey, you can include special Notes that spell out how the interviewers are to conduct themselves. The notes can be on separate sheets or (for telephone surveys, etc.) printed right on the survey instrument.

Here are some suggestions for orienting effective interviewers:

    1. Provide an overview of the survey and its purpose.
    2. Give each interviewer all the materials to be used for the survey.
    3. Go through the survey and all related materials explaining the purpose and use of each element.
    4. Demonstrate the conduct of the survey by interviewing one of the interviewers.
    5. Answer any questions.
    6. Have the interviewers practice role-playing interviews with each other.

As the survey is being conducted, it is advisable to monitor the process by actually going out and observing the interviewers in action. Also, be sure to tell interviewers that you are going to do this and that you will be available if any problems come up.

Field Survey Sheet

If you are conducting personal interview surveys (in-home, location, etc.), make sure your interviewers have all materials ahead of time, so they can all start on time.

Make sure each personal interviewer has several Field Survey Sheets. These sheets are valuable practical tools for the field interviewer. The Field Survey Sheet:

  • Allows you to keep track of the survey administration and the result of each interview.
  • Records the interviewer’s name, a record of attempts, and interview number.
  • Lists the primary sample name and address, plus names and addresses of replacements (if you are using this method).
  • Contains information to make the interviewer’s role easier, such as an interview introduction.

Telephone Interviewing

Telephone interviews have the fastest turnaround time of all surveying methods. They often cost less than personal interviews and yield excellent quality data. Telephone interviews are also slightly easier to administer.

Although telephone interviews must rely solely on verbal cues, the interviewer’s ability to probe, clarify, and reinforce answers is generally not hindered. There is also less chance of interviewer behavior influencing the responses. Since respondents cannot see the interviewers, they do not make judgments about what answers the interviewers would prefer to receive.

Any information you can provide to respondents before the call will help them feel more comfortable and will have a direct impact on the final results. If you are surveying people at work, a personal contact to let them know you will be calling is often helpful.

Some surveys do not notify respondents in advance. In these, the first contact is by telephone.

Methods of conducting the telephone interview are not significantly different from those used for personal interviews (See “Tips for Interviewers” above).

With telephone interviewing, as with personal interviewing, the time of day and days of the week during which calls are made have a significant effect on the type of respondents that are contacted. Most calls should be made between 9 AM and 9 PM. If the views of those who work outside the home are desired, calls should be made on weekends or after 6 PM.

Telephone interviews are usually much shorter than personal interviews. Calls that last longer than ten to fifteen minutes are likely to irritate the respondents and result in hang-ups or inaccurate responses.

Other Methods of Gathering Data

Focus Groups

Focus Groups are basically open discussions among 6 to 12 people with the focus being supplied by a trained facilitator. These groups can be used for a variety of purposes:

  • To generate ideas about the way consumers think or behave
  • To facilitate more open and honest responses
  • To probe for reasons behind comments
  • To test new concepts for a product or service
  • To help improve the design of a survey instrument (during pilot survey designs)

One of the advantages of Focus Groups is that they are very flexible. They can take advantage of unexpected responses and probe the reasons behind them. Group dynamics also help stimulate more significant responses from the group members. In fact, Focus Groups can probe deeper than most questionnaires.

The role of the facilitator in a Focus Group is to keep the discussion moving and on track. This requires an interest in the group members and an effort to keep all members involved. The facilitator should have some, but not too much, information about the topic to keep the discussion flowing.

One good way to be prepared is to use the following list of topic areas to formulate questions to help stimulate discussion. Not all areas will apply to every subject under discussion. (Sample questions are included with each topic area.)

  1. Categories of the subject. What kinds of supervisors are there?
  2. Awareness of specific subject items. What types of supervisors have you personally had experience with?
  3. Comparison of subject items. Which was the best, the worst, and why?
  4. Circumstances of subject knowledge. When and where did you become familiar with these types of supervisors?
  5. Identifying relevant subject attributes. When supervising workers, what is important and why?
  6. Physical attributes of subject items. When you think about a good supervisor, what characteristics come to mind?
  7. Interpersonal attitudes about attributes of subject items. Does anyone in your work team care what kind of supervisor the team has?
  8. Feelings about attributes of subject items. Do you have any feelings toward a certain supervisor as a “good supervisor”?
  9. Associations connected with various attributes. If a supervisor is well liked, is he or she more or less likely to enforce the rules?
  10. Requirements for satisfaction. How often should a supervisor offer help for you to consider him or her “helpful”?
  11. Opinions about brand name attributes. Does your supervisor offer enough help for you to consider him or her helpful?
  12. Evaluation of attribute worth. Which of these things you say you want in a supervisor would cause you to be willing to work longer hours?
  13. Determination of values. How would you characterize someone who is a hard worker?
  14. Hierarchy of values. Would you rather be known as a hard worker or a skilled worker?
  15. Relationship between attributes and values. You say you want a helpful supervisor. How does that affect you as a worker?


Direct Observation

Direct observation can be a very important tool. In most situations, it is the simplest and most accurate way to measure external behavior. Sometimes, it is the only way to measure behavior, either because the subjects will not recount their behavior accurately (magicians are not likely to tell you exactly how they accomplished their trick) or because the subjects are unable to (an infant cannot tell you what it is doing).

The major disadvantage is that observations cannot measure thoughts, ideas, opinions, or preferences. This limits the use of observation to those things that can be readily observed by someone else. Observations also are usually more expensive than other methods.

Since the subjects of an observation survey are not inconvenienced during the survey (in fact, they may not even be aware a survey is in progress), the length of a particular observation survey is determined only by the stamina of the persons conducting the survey and the cost.

Suggestion Systems

Suggestion Systems provide people with an opportunity to voice their opinions, make suggestions, and register approval or disapproval.

While respondents do not make up a statistical sample, they do offer an organization a simple way to hear what their people are thinking. Because it is voluntary, issues that come up several times are usually uppermost on people’s minds.


Simulations

Simulations are the exact opposite of Focus Groups. Instead of consulting a group of individuals about their ideas and opinions, a set of formal or mathematical models of the situation is assembled. These models are usually the result of combining the analysis of some form of data with a theory. These studies are not directed toward collecting data, but rather toward using past data and models to answer “what if” questions (e.g., “What if we made the deluxe pizza 15% larger?” or “What if we added three more operators on the night shift?”). Results are projected for different hypothetical situations, simulating actual results.

The advantage of simulations is that they can answer questions without requiring the collection of new data. However, the results can be misleading if the hypothetical model is flawed or the past data is no longer accurate.


Personal interviewing covers everything from a person-to-person interview to phone interviews to focus groups. Whatever the variation, the basics of personal interviewing still apply: the demeanor of the interviewer, the timing of the interview, and the length of the conversation.

All those involved in conducting the survey should be given a complete orientation so they will all conduct the survey alike. Variations among interviewers can affect the accuracy of your survey.

Returned surveys should be examined to identify any problems. You should set strict standards of completeness for a survey to be accepted.


Chapter 9: Data Collection


Once the data has been returned, you need someone, or a team of people, to enter it into whatever computer program will receive it. These people should be chosen for their reliability and consistency when handling monotonous tasks; data entry can be a time-consuming task that lacks excitement or variety. One thing these people need to do is filter out surveys with dubious replies. They should set these aside in a separate pile so supervisors can go over them and decide whether they should be included.

One way to assure the quality of responses for high-stakes surveys is to use double data entry. This means having one data entry person key in a batch of responses, then handing the same responses to a second person who enters the same data independently. The results of both passes are compared for accuracy. This tends to be an expensive method but is warranted for important surveys where bad data cannot be tolerated.
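To make the comparison concrete, here is a minimal sketch of a double data entry check in Python. The function name, clerks, and record values are invented for illustration; a real system would compare keyed records against respondent IDs.

```python
# Hypothetical double data entry check: two clerks key the same batch
# of responses, and any record where the entries disagree is flagged
# for a supervisor to review. All names and values here are invented.

def compare_entries(first_pass, second_pass):
    """Return the record positions where the two passes disagree."""
    if len(first_pass) != len(second_pass):
        raise ValueError("both passes must cover the same records")
    return [i for i, (a, b) in enumerate(zip(first_pass, second_pass))
            if a != b]

clerk_a = ["Agree", "Disagree", "Neutral", "Agree"]
clerk_b = ["Agree", "Disagree", "Agree", "Agree"]

print(compare_entries(clerk_a, clerk_b))  # [2] — record 2 needs review
```

Only the flagged records need a third look, which keeps the added cost of the second pass to a minimum.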


Web surveys offer immediate data collection. This, however, does not mean your job is done once the data arrives. Someone needs to review the results to determine whether they are satisfactory, and someone needs to be able to retrieve corrupted responses.

A data entry person is also necessary if you want to code electronic responses. This person’s job is to go through all the open-ended responses and use a Code Book to assign a generic code to each. This way, you can analyze and report on subjective replies in an objective fashion.
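As a sketch of how a Code Book works in practice, the snippet below maps open-ended replies to numeric codes by keyword. The categories and keywords are invented for the example; a real Code Book would be built from the survey’s own responses, and a human coder would still review ambiguous replies.

```python
# Illustrative Code Book: each numeric code has a tuple of keywords.
# The categories and keywords below are invented for this example.
CODE_BOOK = {
    1: ("price", "cost", "expensive"),
    2: ("service", "staff", "support"),
    3: ("quality", "durable", "reliable"),
}

def assign_code(response, code_book=CODE_BOOK, default=0):
    """Return the first code whose keywords appear in the reply (0 = other)."""
    text = response.lower()
    for code, keywords in code_book.items():
        if any(word in text for word in keywords):
            return code
    return default

print(assign_code("The support staff were very friendly"))  # 2
print(assign_code("No opinion"))                            # 0 — uncodable
```

Once every open-ended reply carries a code, the coded column can be tallied and reported just like any closed-ended question.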


Scannable forms offer rapid data collection via a scanner but, as with electronic (web/e-mail) data collection, speed doesn’t mean your job is done.

When scanning forms, make sure you allot enough time to perform the task. Loading the scanner, waiting for it to scan, removing forms, and so on can be more time consuming than some may expect. Additional factors can make the process slower still:

  • All forms must be unfolded and flattened before being loaded into the scanner. The scanner may jam or misread a form that isn’t flat.
  • Booklets with multiple pages connected via perforation must be separated before scanning. The time required to separate a multi-page form can be greater than predicted, depending on the number of pages and forms.
  • Torn or damaged forms should be excluded from the input hopper as often as possible. This may mean reviewing the forms for damage and pulling damaged forms aside for manual data entry.
  • Improperly completed forms are forms where the respondent used the wrong type or color of ink or merely circled the response instead of filling in the response bubble completely. You may need to separate these forms and enter them manually.
  • Write-in responses may need to be manually entered.

Once the forms are scanned, you should decide what to do with the completed forms. You can consider recycling them, throwing them out, or storing them for future reference. Whatever you do, you should take into consideration the sensitivity of the data on the forms.

Reviewing Returned Surveys

As the surveys are returned, they should be looked through immediately to identify problems. The purpose of this editing is to determine which will be accepted and which will be rejected. Reviewing electronic surveys takes time, but it’s an important part of the process. Just because you don’t have a team entering data doesn’t mean someone shouldn’t review the results.

Strict standards of completeness should be set for acceptance of a given survey. Questionable surveys should be set aside for the Administrator to evaluate. Generally, a small amount of incomplete data can be accepted without affecting the value of the remaining data.

Editors should be on the lookout for:

  • Illegible Responses. These usually result from poorly trained or supervised interviewers, or from poor handwriting in surveys with a large number of open-ended questions.
  • Inconsistent Responses. Sometimes, responses are not believable (e.g., “Married for 15 years” and “Age-21”), self-contradictory (e.g., “detail-oriented” and “can’t be bothered with small things”), or too consistent (e.g., a large number of consecutive questions answered “Disagree”). The Administrator should evaluate all of these.

If the results seem to be radical or extreme, you might do follow-up interviews with a small percentage of the sample.

Sometimes, if an individual respondent can be identified, you can go back and try to get them to explain or clarify their responses. In the case of a very large survey, this is obviously expensive. Remember also that answers this second time may be different than those on the first survey because of changes in the respondent, in conditions, or in the manner of collecting the data (e.g., telephone call to a mail survey respondent).

If most of the responses on an individual survey are blank or incomplete, ignore that particular survey and treat it as if it were not returned.

Sometimes, you can turn a non-response into a category. You may decide that respondents left the item blank because they felt that the question did not apply. So, their answer could be classed as “none” if there is a “none” group. (Obviously, this method only works with scales that include a “none” or similar response.)


It’s a fact of surveying that you will not receive a 100% response rate. However, if your actual response rate is much lower than you expected, you can follow up on non-respondents.

By going over your audience list and comparing it to the responses, you can come up with a list of non-respondents. Or, if you have your information in a computer program such as SurveyTracker, you can mark audience members as respondents and filter them out for a list of non-respondents.
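The comparison of the audience list to the responses is a simple set difference, and is easy to automate. This sketch assumes the audience and respondent lists are available as collections of IDs; the names are invented:

```python
# Compare the audience list to the responses received to produce a
# non-respondent list. The IDs below are invented for illustration.
audience = {"adams", "baker", "chen", "diaz", "evans"}
respondents = {"baker", "diaz"}

non_respondents = sorted(audience - respondents)
print(non_respondents)  # ['adams', 'chen', 'evans']
```

The resulting list is exactly the set of people who should receive a reminder or a fresh copy of the survey.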

Once you have the list, you can either re-send the complete survey or send a polite note requesting that the respondent complete the survey in a timely manner. You will get additional replies if you handle this properly.


Once the responses are returned, someone needs to go through them and ensure the reliability of the answers. Even when responses are collected electronically, someone still needs to be assigned to this task, especially in cases where the data has become corrupted.


Chapter 10: Analysis and Reporting

Analyze the Data

Once you have entered all the data you have collected, you can analyze and interpret it. Analyzing survey data means doing the necessary statistical computations, including tallying and averaging responses, looking at relationships, making comparisons, and estimating trends. Interpreting the data means taking the statistics, giving them meaning in the context of your organization, and drawing conclusions from them.

The most commonly used method of analyzing survey data involves the use of statistics.

Statistics To Help Analysis

Statistics provide an objective method of analysis that often helps us see things we might miss otherwise. There are many varieties of statistics provided by SurveyTracker that can be useful in analyzing your survey data. Some of the most helpful are:

Frequency, also known as a tally or count, shows how many people responded to a survey, a question, or to a particular response in a scale. Numbers and percentages usually express tallies or frequency counts. When a graph or table of frequencies is created for a particular question, a frequency distribution is produced.

A frequency distribution for a question shows the number of people who responded to each of the scale’s possible responses. This can help indicate the characteristics of the population. If there are a large number of responses at each end of the scale with few in the middle, the population is polarized. If there are two distinct peaks in the distribution of responses with a dip in the middle, the population is bimodal. Polarized and bimodal populations usually indicate that filtering is needed in the analysis to address a particular group’s view. If the number of responses is fairly constant from one end of a scale to the other, the population is uniform.

A normal population’s responses start off low, grow, peak in the middle, then taper off so that they form a bell shape. Normal populations have about 68% of all responses within plus or minus one standard deviation of the mean, 95% within plus or minus two standard deviations, and 99.7% within plus or minus three standard deviations.

Percentage is another way of representing frequency data. In the case of surveys, it indicates the percentage of all respondents who gave that particular response. Tables commonly show both frequency and percentage. If a table does not show percentage, it can easily be calculated by dividing the frequency for that scale response by the total number of respondents to the question.

Cumulation provides a quick way of gauging how many people have selected the current or any previous scale response. Usually shown as a cumulative percentage, cumulation adds all the prior response frequencies together with the frequency of the current scale response. Cumulation can be used to gauge what percent had a negative response. Large jumps can also be used as an indicator of possible sub-groups in the population. Cumulation is often used with ordinal scales.
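The three statistics above — frequency, percentage, and cumulation — can all be produced from raw responses in a few lines. This sketch assumes responses on a 1–5 scale; the data is invented:

```python
from collections import Counter

responses = [1, 2, 2, 3, 3, 3, 3, 4, 4, 5]  # invented 1-5 scale replies

counts = Counter(responses)   # frequency of each scale response
total = len(responses)
cumulative = 0
print("Score  Freq   Pct  Cum%")
for score in sorted(counts):
    freq = counts[score]
    cumulative += freq        # running total for the cumulative percentage
    print(f"{score:>5}  {freq:>4}  {freq/total:>4.0%}  {cumulative/total:>4.0%}")
```

With this data, score 3 for example shows a frequency of 4, a percentage of 40%, and a cumulative percentage of 70% — exactly the three columns a frequency table would report.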

The mean is what most people think of when they hear the term “average”. You arrive at the mean by adding all the scores for a given scale and dividing by the number of respondents (e.g., if 3 people answered a question with scores 2, 3, and 4, the total of the scores would be 9 and the mean (9 divided by 3) would be 3). The mean provides a quick way of discovering the central tendency or overall view of one group of respondents. To be of most use, the mean should only be used with scales with evenly spaced scores that go from one extreme to another, like Likert or Horizontal Numerical scales. Scales such as Multiple Choice or Ordinal are usually not well suited to the mean. However, the mean is often used with ordinal scales whose scores are approximately evenly spaced and which the respondent is likely to treat that way.

The median is another form of average that shows the centerline. If all the responses were lined up from lowest to highest, the median would be the middle response (or the mean of the middle two if there is an even number of responses). Since it is in the middle, it describes the typical response. The median is not as affected by extremes as the mean is, and is often used when data is highly skewed. The median is the preferred statistic for ordinal scales, since the scoring is only used for ordering the data (the spacing of the scores does not matter).

The mode is still another form of average. The mode represents the response that occurs most often. The mode can be used with any scale, unlike the median or mean. If two responses tie for the largest number of responses, the frequency distribution is bimodal. This indicates that the population should be split into two sub-groups by using SurveyTracker’s powerful filtering feature. When bimodal conditions exist, it is best to also examine the frequency distribution to see the actual shape.
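Python’s standard statistics module computes all three averages directly. A small sketch with invented scores on a 1–5 scale:

```python
import statistics

responses = [2, 3, 3, 4, 5]  # invented scores on a 1-5 scale

print(statistics.mean(responses))    # 3.4 — sum 17 divided by 5 respondents
print(statistics.median(responses))  # 3 — the middle of the sorted scores
print(statistics.mode(responses))    # 3 — the score that occurs most often
```

Note how the single high score of 5 pulls the mean (3.4) above the median (3), which is exactly why the median is preferred for skewed data.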

Range is the distance between the minimum (Min) and the maximum (Max) responses for a given question and scale. For example, if a horizontal numerical scale was used with possible responses from 1-7 and the actual responses were 3, 4, 5, 5, 6, 7, then the range would be 4 (7-3). Range is useful for showing how much the responses vary. But it does not indicate the frequency of the responses.

Standard Deviation (s.d.) is a very useful measure of how the responses vary. It is roughly based on the difference between the actual responses and the average (mean) responses. If responses are close to the mean, the s.d. will be small (0 = no deviation). If the responses vary widely, the s.d. will be large. Note that “small” and “large” are relative terms. In a scale that goes from 1-5 with a range of 4, a small s.d. would be 0.10 and a large s.d. would be 2.00.

Standard deviation is useful for gauging the distribution and indicating which areas need further study. If you initially do only the mean and the s.d. on a set of questions, you can pick out the ones that need further analysis based on the standard deviation. If the s.d. is small, the confidence level on that scale is usually greater and the mean is likely to be close to the true value. If the s.d. is large, the scale needs further examination to discover what is causing the variation. (It may be bimodal. Or the responses are so spread out that the “bell-shaped” curve is flattened, indicating that the sample size or response rate is not high enough to get accurate results.)

Variance is another measurement of how the responses vary. It is equivalent to the standard deviation squared. Since it produces larger numbers for s.d. > 1 and smaller numbers for s.d. < 1, it is more difficult to visualize than standard deviation and therefore not as popular.
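Range, standard deviation, and variance can all be checked with the statistics module, using the range example from the text above and treating the six responses as the whole population:

```python
import statistics

responses = [3, 4, 5, 5, 6, 7]  # the range example from the text

value_range = max(responses) - min(responses)  # 7 - 3 = 4
sd = statistics.pstdev(responses)              # population standard deviation
variance = statistics.pvariance(responses)     # the s.d. squared

print(value_range)         # 4
print(round(sd, 2))        # 1.29
print(round(variance, 2))  # 1.67
```

If you are working with a sample rather than the whole population, use `statistics.stdev` and `statistics.variance` instead, which divide by n-1.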

Correlation shows a linear relationship between two variables. Whether the relationship is direct or inverse is indicated by the sign (positive or negative) of the number.

Chi-Square measures the statistical relationship between two variables and the significance of that relationship.

Interpreting the Statistics

When you interpret your responses, don’t overlook frequency. Frequencies can be extremely useful statistics. They show you quickly how many people (and what percentage of the total) responded with each of the possible answers. Often, this is all you need to know. If your survey was well designed and your questions carefully thought out, you should be able to derive a large amount of meaning from a simple frequency table.

Generally speaking, the results of your survey, when graphed as a histogram, should resemble the traditional bell-shaped curve. The majority of your respondents (approximately 75%) should fall in the middle ranges of the scale, with a small grouping at either extreme.

Although SurveyTracker does not draw histograms, some of the statistics it calculates can help you understand your survey’s distribution.

Skewness indicates how far off center the peak of the distribution curve (the top of the bell) is. A positive skewness means that the peak of the curve is to the left of center (with a longer tail to the right), and a negative skewness means that the peak is to the right (with a longer tail to the left).

Kurtosis measures how the responses are distributed by indicating how narrow (or flat) the “bell” of the distribution curve is. Positive kurtosis indicates a narrow, peaked curve; negative kurtosis indicates a flat curve.
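Both measures can be computed directly from the responses. The sketch below uses the standard moment-based formulas with invented data; it is an illustration, not necessarily SurveyTracker's internal calculation:

```python
import statistics

# Hypothetical, symmetric set of 1-5 responses (invented data)
responses = [1, 2, 2, 3, 3, 3, 4, 4, 5]

mu = statistics.mean(responses)
sd = statistics.pstdev(responses)  # population standard deviation
n = len(responses)

# Skewness: 0 for a symmetric curve; the sign shows which way the peak shifts
skewness = sum((x - mu) ** 3 for x in responses) / (n * sd ** 3)

# Excess kurtosis: 0 for a normal curve; positive = narrow, negative = flat
kurtosis = sum((x - mu) ** 4 for x in responses) / (n * sd ** 4) - 3
```

For this symmetric sample the skewness is 0, and the kurtosis comes out negative because the distribution is flatter than a normal curve.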


The decision about what level indicates a significant response is a highly subjective one. Much depends on the specific questions asked in the survey. Some questions might generate a number of negative comments without needing any immediate response. Other questions, however, demand quick action when you get even a few negative comments.

One way to measure the significance of responses is by determining the correlation between two items. Correlation measures the degree, direction, and significance of relationships. It does not indicate any cause and effect relationship.

Correlation is measured by a number that ranges from zero to plus or minus 1. Zero indicates that no relationship exists. Positive numbers indicate that the items move in the same direction and therefore seem to be related. Negative numbers indicate that the items move in opposite directions and may be related in an inverse fashion (e.g., when one goes up, the other goes down). In either case, the greater the absolute value of the number, the closer the relationship. A plus or minus one indicates a perfect relationship between the items.
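The calculation behind this number is the Pearson correlation coefficient. Here is a minimal sketch, using invented paired responses from the same respondents:

```python
import math

# Hypothetical paired 1-5 responses from five respondents (invented data)
satisfaction = [2, 3, 3, 4, 5]
loyalty = [1, 3, 4, 4, 5]

n = len(satisfaction)
mx = sum(satisfaction) / n
my = sum(loyalty) / n

# Pearson r: covariance divided by the product of the spreads
cov = sum((a - mx) * (b - my) for a, b in zip(satisfaction, loyalty))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in satisfaction)
                    * sum((b - my) ** 2 for b in loyalty))

print(f"r = {r:.2f}")  # close to +1: the two items move together
```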


Another important factor to look for in analyzing your survey results is the relationship between items. Relationship analysis can suggest cause-and-effect connections (though it cannot prove them). It can also give you clues into the way that certain groups of people are thinking.

One of the statistical methods that is useful for discovering relationships is the cross-tabulation table, or “crosstab.” Crosstabs are the most common way to measure association. Once you have created a crosstab, the statistic to use to determine significance of a relationship is the chi-square value. The more the items are related, the higher the chi-square value will be.
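As a sketch of the arithmetic (the counts are invented), a chi-square value for a 2×2 crosstab compares each observed cell count with the count expected if the two items were unrelated:

```python
# Hypothetical 2x2 crosstab: gender vs. position on a proposal (invented counts)
observed = {
    ("men", "favor"): 30, ("men", "oppose"): 10,
    ("women", "favor"): 15, ("women", "oppose"): 25,
}

rows = ["men", "women"]
cols = ["favor", "oppose"]
row_total = {r: sum(observed[(r, c)] for c in cols) for r in rows}
col_total = {c: sum(observed[(r, c)] for r in rows) for c in cols}
grand = sum(observed.values())

chi_square = 0.0
for r in rows:
    for c in cols:
        # Expected count if the two items were independent
        expected = row_total[r] * col_total[c] / grand
        chi_square += (observed[(r, c)] - expected) ** 2 / expected

print(f"chi-square = {chi_square:.2f}")
```

The larger the value, the stronger the evidence that the two items are related; significance is then judged against a chi-square table for the appropriate degrees of freedom (1 for a 2×2 table).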

Which Analysis Method Should I Use?

There is no single answer to this question. The analysis method you use depends on who you surveyed, what your survey purposes were, and what kind of data you gathered. SurveyTracker provides several practical methods for analysis.

Write-in Questions

Write-in questions provide a broader picture of respondents’ attitudes toward the various survey items. Using SurveyTracker’s Write-In Table feature to print out the comments you received can often give you insight into the reasoning behind responses and even indicate areas that need to be studied in a future survey.

Advanced Multi-Analysis Table

An Advanced Multi-Analysis Table provides a display of the information gathered in your analysis of the survey data across multiple filters. You can create a table for one or more questions, or one or more sections of questions. It is an excellent tool for getting either an overall view OR a detailed view of your survey’s results. The capability to compare multiple filters enables you to spot trends, monitor performance, and discover opportunities.

Statistical Table

A statistical table provides an easily understood display of the statistical information gathered in your analysis of the survey data. All scales can be used with tables except numeric scales and open-ended scales (unless coded).


Graph

The graph function in SurveyTracker allows you to represent your data in a visually accurate manner. This helps you evaluate the relationships and differences in the data.


Cross-Tabulation Table

Cross-tabulation enables you to explore possible cause/effect and other complementary relationships between factors on your survey. (For example, cross-tabulation might indicate that men under 30 were in favor of the proposal and all women over 30 were against it.)

Scoring Table

A Scoring Table enables you to display an overall score for each respondent. It’s the perfect report to provide to the respondent so he or she can see how he or she did in relation to the questions in total and to an overall average of the respondents.
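As a rough sketch of what such a report contains (invented data, not SurveyTracker's actual calculation), each respondent's answers are totaled and compared to the overall average:

```python
# Hypothetical scored answers on a 1-5 scale, keyed by respondent ID (invented)
answers = {
    "R1": [4, 5, 3, 4],
    "R2": [2, 3, 3, 2],
    "R3": [5, 4, 4, 5],
}

# Each respondent's overall score and the group average
scores = {rid: sum(vals) for rid, vals in answers.items()}
overall_average = sum(scores.values()) / len(scores)

for rid, score in scores.items():
    diff = score - overall_average
    print(f"{rid}: score {score} ({diff:+.1f} vs. average)")
```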

Evaluation Table

An Evaluation Table displays the results for a single subject and compares them to other related data (such as “Supervisor”, “Peer”, and “Direct Report” in a 360° evaluation, or “Course Number” or “Department” for Course Evaluations).

Drawing Conclusions

The final step of an effective survey is drawing conclusions from the results. These conclusions will then lead to decisions concerning what actions should be taken. The value of your conclusions depends on how well they assist your organization in becoming aware of and correcting the problems and situations examined by the survey.

Many surveys fail to provide concrete help, concluding only that:

  • the problem definition should be changed, or
  • the data doesn’t give any help with the problem, or
  • more surveys need to be done to collect more data.

These kinds of conclusions usually indicate a failure during the survey’s planning phase. If survey objectives are properly thought through and defined, the information produced by the survey should prove useful.


Once the data has been entered, you can analyze it. Analyzing survey data includes tallying and averaging responses, looking at relationships, making comparisons, and estimating trends. Descriptive statistics, correlations, comparisons, and trends are used to identify significant information in the data.

When you have analyzed all the data, you must draw conclusions from the results. These conclusions will then lead to decisions concerning what actions should be taken. Generally speaking, the results of your survey, when graphed as a histogram, should resemble the traditional bell-shaped curve.

© Training Technologies, Inc.

Chapter 11: Action Plans and Follow-Up

Follow-up Surveys

Although a survey gathers information from a large number of people about a wide range of subjects, it does not provide the in-depth information that a face-to-face interview provides. Because of this, many organizations conduct personal follow-up interviews with about 20% to 30% of the original survey sample.

In many instances, these follow-up interviews can become a means of obtaining vital information outside of the direct scope of the survey. When the interviews are conducted one-on-one or in very small groups, respondents often feel free to express themselves on a variety of important issues, including some that may not have been part of the survey. This kind of information can indicate direction for future surveys.

Select this smaller group carefully. Take special effort to make certain that this subgroup is a truly representative sample of those in the survey.

If possible, select the follow-up sample anonymously. Use the same methods you used to select the original sample (e.g., random, sequential, plus-one, etc.). When you interview these people, reassure them immediately that they were selected randomly and not because of any response they gave on their initial survey.

How do you decide what to ask in the follow-up interview? The following steps will help you identify areas that could profit from additional probing.

    1. Examine open-ended questions and “write-in” comments. These indicate what your respondents feel strongly about. They usually will welcome an opportunity to talk more about them.
    2. Check areas where the responses are not clear or where the conclusions are vague. These can benefit greatly from the greater depth of follow-up interviews.
    3. Look for results that surprised you. These areas also provide interesting and profitable points for further discussion.

Taking Action

Too often, surveys are thought of as ends in themselves. Once the data has been tabulated and all the reports finished, we tend to consider the job done. But, if a survey is to be truly effective, it must be followed by action.

In many cases, the follow-up interviews (see above) can be the place to start taking action. As you discuss some of the issues revealed by the survey, inquire of the respondent what they can do about the situations. In this way, they begin to see that action is not just the responsibility of management. They develop a personal stake in the issue. They become empowered to do whatever they can to eliminate problems and resolve issues.

In addition, management must be involved. Even before the survey is conducted, management support must be enlisted for acting on the survey results. This support is crucial for the development and implementation of an effective action plan.

Here are some ways to minimize management resistance and enlist their help:

  • Let managers see results that apply to them and their department before others see them.
  • Give plenty of time for managers to digest the results and develop action plans of their own.
  • Help them interpret survey results. Don’t assume they automatically know what the results mean.
  • Remind them that taking effective action can change any results they do not like.
  • Even during the planning stages, ask managers to develop contingency action plans based on what they think the results will be.

Once management is firmly behind you, formulate specific action plans. Take each of the areas that the survey indicated needed work and, in conjunction with the affected managers, work out a plan that details:

  • what steps to take (be specific)
  • when to take them (set specific dates)
  • who is responsible for what
  • what resources will be provided by whom
  • what results to expect

If your survey has uncovered significant problems, encourage management to view them as opportunities for improvement. Analyze the factors relating to the problem. Which forces, attitudes, etc. are maintaining the problem? Which forces, attitudes, etc. are pushing toward a solution of the problem? This will help you get a better overall view of the problem.

Now, develop specific actions to strengthen those forces that will help solve the problem and weaken those forces that maintain the problem.

If the problem seems particularly large or difficult to solve, encourage managers to find areas that they can do something about right now. Look at their particular areas of influence and determine what they can do TODAY (or this week or month) that will affect the problem in a positive way.

As each person does what he or she can do in their immediate sphere of influence, the problem undergoes a shift and becomes more approachable. First, initiate actions that are immediately at hand; then, reach out to other areas that impact the problem. If everyone concentrates on what he or she personally (or their department) can do to correct the problem, gradually it can be brought under control and eventually eliminated.

Above all, during the whole process, keep the employees informed about what is going on. They are expecting changes because of the survey. If no changes occur, they will “write off” the whole process and discount any future surveys. Since they expect changes sooner than you can make them, let them know what is happening. Then they will be confident that the survey was not a waste of time.

  • Tell them what actions you are taking and which actions you are still working on. Ask for their input. They often know the most effective way to produce positive changes.
  • Tell them which areas have been “put on hold” and why. Let them know why action has been delayed and that it will be taken later.
  • If some ideas are not going to be acted upon, tell them why. Nothing frustrates people more than making a suggestion they think shows promise and having it disappear without a trace.
  • Always keep them informed of progress. Long periods without any indication of action will convince them that no action is being taken. So, if you experience unexpected delays, or even anticipate them, be sure to tell workers what is going on and why.

Methods for Communicating with Employees

When positive changes result from actions taken because of survey findings, be sure to point out the value of the survey. Make sure everyone – management and employees alike – knows about the positive results from the effective use of your survey. This builds the credibility of any future surveys you wish to conduct.

There are several ways you can communicate this information to management and employees.

  • Send a concise report of the survey findings and the actions to be taken as a result of the survey to each person who participated and each manager whose department was involved.
  • Publish a short-run (3 or 4 monthly issues) newsletter of 2 to 4 pages. Cover the action steps that are taking place and highlight the successes that have followed the action.
  • Send an e-mail message apprising them of the current status.
  • Post the newsletter or report on your company web page.
  • Letters from the President or another high-level manager can be a useful means of communication.
  • Informal brown bag lunches with management and workers can encourage open communication about what is being done.
  • Don’t forget to use posters and employee bulletin boards to spread the message.

The ways to communicate the follow-up action are limited only by your imagination. The more creative, the better.


It is often profitable to conduct personal follow-up interviews with about 20% to 30% of the original survey sample. These interviews provide in-depth information and further clarification of survey results. Once this clarification is obtained, it is time to take action on your survey results.

Never let your survey become an end in itself. From the very beginning, enlist management support for taking action on the survey results. This is very important for the success of future surveys. If respondents feel their opinions did not affect anything, they will be less likely to respond in the future. Once the results of the survey are clear, take each of the areas that the survey indicated needed work and, in conjunction with the affected managers, work out a detailed action plan. During the process, keep employees informed of your progress.


Addendum A: 360° Evaluations

One of the best ways an organization can gather feedback about its employees is the 360° Evaluation. The core concept behind the 360° is that every stakeholder an employee is associated with should be able to provide feedback about the employee, and the employee about each stakeholder. One key stakeholder is the person being evaluated him or herself. Usually the evaluations encourage feedback about each employee’s work behavior, performance, and expertise.

360° Evaluations are also called multi-rater feedback and multi-source assessments, names that reinforce the core idea that the employee reviews are circular and involve the employee him or herself and everyone they interact with, including managers, associates, subordinates, customers, suppliers, and other external stakeholders. This is unlike traditional performance reviews, which employ a more top-down approach of a manager reviewing his or her subordinates. A subordinate/lateral/supervisory/self analysis ensures that all stakeholders are involved in the process, giving a true 360° view of the employee.


Types of 360° Evaluations

There are many different types of 360° Evaluations, all of which share a common goal: improvement in an organization through group review and analysis. The type of 360° evaluation conducted is based on the needs or perceived weaknesses or strengths of the organization. In some cases, only leadership needs to be reviewed – and in other organizations, entire teams or the entire organization should be analyzed.

Some of the most common 360° Evaluations include:

  • Leadership Assessments: In this assessment, only the leader of a team is evaluated. Each member of the team reviews the leader and the leader reviews him or herself. The goal is to improve leadership qualities within the organization.
  • Team Assessments: Each member of the team evaluates every other member of the team – this includes the leadership. The goal is to present the overall teamwork of the group.
  • Training Needs Assessment: This 360° evaluation is designed to study individuals in a team in order to determine where training is needed. Areas of strengths and weaknesses are determined in job performance and training can be scheduled to help improve the quality of work for those who need assistance.
  • Pre/Post Training: Similar to the Training Needs Assessment, this method incorporates a follow-up 360° evaluation that is used to measure the change between the original and the post-training 360° evaluation. This is done in order to measure the new level of the employee and also to measure the effectiveness of existing training methods.
  • System Intervention: This assessment is a company- or organization-wide 360° evaluation with the end goal of determining the relative status of the company and its future vision. The 360° evaluation goes out to each team/location in the organization and measures the company as a whole as opposed to individual team members.

Designing a 360° Evaluation

Creating a 360° Evaluation is essentially no different than creating any other survey except that you should follow some basic rules. Without a good evaluation, you will receive confusing, inaccurate, and biased results that don’t accurately reflect the qualities of the team members being evaluated.

Here are some tips for designing a proper 360° evaluation:

  • The questions should be universal for everyone taking the evaluation within a specific job function or department. Odds are good that there are topics that a subordinate would not know about a leader that a leader or a leader’s peer would know. Make sure all questions are answerable by all team members or you will end up with certain questions with a large number of non-responses that skew the overall results.
  • Measure one topic or item in each question – don’t ask questions with two topics. This is a good idea for all surveys and it’s critical in 360° evaluations. You don’t want different team members or different relationships answering different concepts in the same question.
  • Develop unique evaluations. A 360° Evaluation should not be universal between groups, departments, or job functions. For example, the accounting department should not be able to use an evaluation designed for the software development department. If you find you have a universal evaluation, your questions are probably too broad to be effective for measuring the key competencies of the team.
  • Write questions that use a single scale. The single scale is important for ensuring that all items in the survey receive the same focus. It is also critical for proper reporting.
  • Make sure questions can be answered by your scale. If you are using a scale rating importance, don’t ask the person to rate someone’s ability and vice versa.
  • Make the evaluation short enough that it can be finished in 15-30 minutes on average. Evaluations longer than this risk a loss of concentration, quality, and thoughtfulness in the responses.

The scale used in a 360° evaluation is very important. There are a fair number of ideas about what makes a proper 360° evaluation scale. Here are some of those ideas – keep in mind that some of these are mutually exclusive. Your best bet is to choose the type of scale that matches what your organization is most familiar with.

  • Use a midpoint. The midpoint is a good way to provide a safe middle ground between positive and negative feedback. However, the midpoint is only useful if it actually DOES provide a middle ground. Don’t create a three, five, or seven point scale with a middle ground that is skewed positively or negatively. Keep in mind that some people believe that the midpoint is useless because it provides a simple way to avoid answering the question. This is a good and valid point but leaving off a middle ground response forces people into the occasionally uncomfortable position of praising or criticizing someone who they think is doing ‘ok’.
  • Use a scale with four or five choices. The benefit of a small scale is that it leaves less room for varied interpretation of the concepts presented in your scale. Results tend to be more concrete, which may be desirable in your evaluation.
  • Use a scale with six to nine choices. A scale with more choices tends to provide more flexibility in responses. This approach allows people to be less one-sided in their evaluation because it provides more positive and negative responses than a scale with five or fewer choices. It can also, unfortunately, lead to different respondents interpreting the scale in a wider variety of ways. If there is a danger of this, provide clear definitions of each choice in your overview and reiterate them in notes before each section, or use a scale with fewer choices.
  • Provide scales that flow from positive to negative. Someone who feels negative about an aspect of a person is more likely to skip ahead to a specific choice at the end of the list than someone who feels positive. Placing the positive choices at the end may therefore result in more negative responses, because the negative responses are the first items the person sees.
  • Avoid Don’t Know or Not Applicable. The point of a 360° evaluation is to ask only questions that ARE applicable or ARE known by everyone in the team. If you must include these types of scales, stress in your introduction or notes that the respondent should not use these choices to avoid answering questions. If you find a question has been answered with a large number of don’t knows or not applicable, review the question for quality, clarity, and validity to the entire team.
  • Avoid using write-in scales. The 360° evaluation process can be taxing to people being evaluated so it’s best to avoid risking direct abuse from anonymous respondents. If you need to provide write-in areas, make them rare, short, and at the end of the evaluation.

Designing a 360° Evaluation Using SurveyTracker

There are multiple approaches to conducting a 360 in SurveyTracker.

Question-based Identifying of Self/Relationships

This approach places the identifying questions in the evaluation itself: one question identifies who is being evaluated (the “Self”), and another identifies the relationship the respondent has to the person being evaluated.

For the ‘who is being evaluated’ question, you may want to use code numbers for each member so you can avoid misspelled names (which would otherwise result in multiple reports in the automated Evaluation Table unless manually corrected). If you have a specific set of people being evaluated, you may want to include each name as a single-response scale (using a drop-down list for electronic surveys). The “relationship” question is limited to a single-response scale with up to five choices (e.g., Self, Supervisor, Peer, Subordinate).

The Evaluation Table automatically generates data filters based on these two questions.

360° Team Add-On Component

The 360 Add-On Component allows you to pre-define the teams instead of including questions in the survey itself.

Using an audience list consisting of every stakeholder, the Add-On Component provides a modified distribution phase where you create teams (based on the person being evaluated) and drag and drop respondents into specific relationship categories (Self, Supervisor, Direct Report, etc.).

The Evaluation Table automatically uses this team member setup to generate the necessary data filters and reports.

You do not need to include questions for “Self” and “Relationship” with this method. However, some people still like to ask these questions as a check to make sure respondents are evaluating the correct person. The only drawback is that a person may receive the survey for one team member but select a different person’s name in the survey, which would throw off the Evaluation Table report.

Manual Data Filters 

This method uses an audience list that contains multiple entries for each respondent – an individual appears once for each person he or she will be evaluating. The audience list should contain fields for the specific person being evaluated and the relationship. You would then distribute the survey with instructions and merge tags to ensure the respondent answers the survey using the correct link or form for each person evaluated.

The Evaluation Table does not specifically work with this method – you have to create standard reports and generate filters manually.

360° Evaluation Team Distribution

360° Evaluations are based on teams. Each team consists of the person being evaluated, a series of individuals drawn from your audience members, and their relationships to the person being evaluated. Defining this team and relationship introduces its own set of challenges.

Here are a few ideas:

  • If the team already exists as a known group in the organization, define the team in SurveyTracker and send the evaluations. You can do this in the “self” and “relationship” questions or in the 360 Team Setup if you have the 360 Add-On Component.
  • If teams are not firmly established, or if you need to go through a formal process, review the individuals related to the person to be evaluated and determine their relationships through internal processes. If you have a 360° team in place whose job is to manage the process, allow them to go through the organization’s lists and define the team members.
  • Let the person evaluated define his or her own team. The team knows itself – let each person in the team determine who will provide his or her evaluations. You may want to determine this information through e-mail or by using another survey that simply asks who each person’s team members are and their relationship. Once you have the information, enter it into SurveyTracker and distribute the surveys.
  • Let the managers intervene. When a team member defines his or her own team, you may want to pass the team assignments through management in order to get buy-in. If the manager disagrees and wants people added or removed, you can either make the changes and distribute the evaluation as-is or pass the feedback back to the team member for approval. This process may take longer, but it is better to get buy-in from both sides, and it helps to eliminate positive bias.

Some guidelines for establishing teams:

  • 360° evaluations are usually anonymous so assign team members accordingly – you may want to avoid known ‘outlier’ respondents who provide biased responses based on personality conflicts. These responses may or may not be obvious in your data and may skew your results unnaturally high or low when compared to responses from more neutral parties.
  • Have at least three non-supervisor/manager individuals per relationship. This provides more room to establish solid averages for each relationship that aren’t skewed by one person. It also results in more honest answers from the respondents if they know the data is being combined with others on their relationship level.
  • For managers/leaders, it’s unlikely there will be more than one or two people, which means you will lose anonymity. However, it’s better to include the managers/leaders than exclude them.

Use standard relationships when setting up teams. The most common relationships include:

  • Self
  • Supervisor/Manager/Leader
  • Direct Report/Subordinate
  • Peer

You may have other relationships that you can customize in SurveyTracker. Keep in mind that you don’t want to scatter your 360° evaluations over too many relationships.

Other relationships sometimes used in evaluations include:

  • Subordinate -> Not a direct report
  • Subordinate -> Direct Report
  • Outside customer
  • Supplier
  • Peer -> same department
  • Peer -> different department
  • Superior -> not a direct manager
  • Superior -> Direct manager

Be careful wording your relationships. Some organizations and some individuals may take offense at words such as ‘subordinate’ or ‘superior’. If you use these terms, you may want to define them in your survey overview or notes.

360° Reports

When providing a report to the people who were evaluated, it is best to bring the person into a private meeting to explain the results. Due to the sensitive nature of the results and the need for explanations, a direct meeting is usually best. You should avoid simply mailing the reports to the people without at least a written explanation of what each category in the reports means.

You may want to leave out responses to write-in questions when presenting the report to the individual evaluated. Instead, use the comments to provide you with insight as to why the results are the way they are. Tailor the review of the report based on those comments. Providing the write-in results can cause more emotional distress than is necessary if the comments are negative. It’s better not to provide write-in feedback to everyone in the team – even those members with positive comments. This avoids the issue of ‘why didn’t I get write-in comments when everyone else did?’ Use your discretion when providing responses but always be consistent.

You should keep the following phrase in mind: what is perceived in the report is real. The person receiving the report can be told all the positive, uplifting things there are to say, but the employee’s perception of his or her results is what will remain. For this reason, keep the review of the results on a personal level and don’t rely on impersonal statistics. Explain things clearly and concisely to each member and always accentuate the positive at the end of the interview – even if you start with positive results, repeat them so the person doesn’t leave on a negative note.

You may want to remind every person who receives a report (regardless of positive or negative results) that self-deception is often a problem with 360° evaluations. Tell them this before any part of the result is discussed – and tell them that everyone is getting the same comments. People are very good at self-deception – either denying the negative results (applying blame to some vindictive person or conspiracy) or ignoring the positive results.


360° Evaluations are a valuable and informative way to gather knowledge about your employees and management. Using the tool effectively will let you gather feedback from the person evaluated and his or her superiors, subordinates, and peers. This holistic evaluation method makes sure every stakeholder has a say, as opposed to more traditional top-down evaluations.


Addendum B: Course and Instructor Evaluations


Course and Teacher Evaluations Overview

Course and Teacher evaluations are a good way to gain valuable information about the quality of the instruction provided to students during a given course. These evaluations can help you evaluate professors, curriculum, campuses, schools, and other factors. They can help ensure your students are receiving the best education available and provide valuable feedback to improve instruction in future courses.

Types of Evaluations

The primary type of evaluation asks the students to provide feedback about the instructor. There are, however, some variations that can be deployed. For example, instead of focusing on the instructor, a course evaluation can look at the general course without getting into the specifics of the method or quality of the instruction provided.

Some of the most common evaluation types include:

  • Course Evaluation – Allows students to provide feedback on and analysis of completed courses.
  • Instructor Evaluation – Obtains valuable information about teachers, professors, or instructors.
  • TA Evaluation – Obtains valuable information about the course’s Teaching Assistants.
  • School or Campus Evaluation – Rolls up the results of all courses taught, or focuses specifically on the campus.
  • Term/Semester Evaluation – Analyzes a roll-up of results for a given semester, often used for comparison against historic data.
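The last two types are essentially aggregations of per-course results. SurveyTracker produces these roll-ups for you, but if you ever work with exported raw response data, the idea can be sketched in a few lines of Python (the field names and scores below are illustrative, not an actual SurveyTracker export format):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical exported responses: (term, course, score on a 1-5 scale)
responses = [
    ("Fall 2023", "MATH101", 4), ("Fall 2023", "MATH101", 5),
    ("Fall 2023", "ENG201", 3),  ("Spring 2024", "MATH101", 2),
    ("Spring 2024", "ENG201", 4),
]

# Roll up average scores by term so each term can be compared
# against historic data
by_term = defaultdict(list)
for term, course, score in responses:
    by_term[term].append(score)

term_averages = {term: mean(scores) for term, scores in by_term.items()}
print(term_averages)  # {'Fall 2023': 4.0, 'Spring 2024': 3.0}
```

The same grouping step, keyed on course or campus instead of term, yields the other roll-up types.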

Designing a Course Evaluation

Creating a Course or Teacher evaluation is essentially no different from creating any other survey, except that you should follow some basic rules. Without a good evaluation, you will receive confusing, inaccurate, and biased results that don’t accurately reflect the qualities of the instructor.

Here are some tips for designing a proper evaluation:

  • Check your state or local policies regarding expectations for evaluations. These often give you a good working (or even required) knowledge of how to design an accurate and fair evaluation.
  • The questions should be universal for everyone taking the evaluation within a specific course or department (Math, Science, Literature, etc.). Make sure all questions are answerable by all students, or certain questions will end up with a large number of non-responses that skew the overall results.
  • Measure one topic or item in each question – don’t ask questions that combine two topics. This is a good idea for all surveys, and it’s critical in course evaluations. You don’t want students answering two different concepts in the same question.
  • Write questions that use a single scale. The single scale is important for ensuring that all items in the survey receive the same focus. It is also critical for proper reporting.
  • Make the evaluation short enough so that it can be finished in an average of 5-10 minutes. The longer the evaluation is, the less likely the student will be to take the time to complete it.

The scale used in the evaluation is very important. Here are some tips, keeping in mind that some of these are mutually exclusive. Your best bet is to choose the type of scale that matches what your school, teacher, and students are most familiar with.

  • Use a midpoint. The midpoint is a good way to provide a safe middle ground between positive and negative feedback. However, the midpoint is only useful if it actually DOES provide a middle ground. Don’t create a three-, five-, or seven-point scale with a middle ground that is skewed positively or negatively. Keep in mind that some people believe the midpoint is useless because it provides an easy way to avoid answering the question. This is a valid point, but leaving off a middle-ground response forces people into the occasionally uncomfortable position of praising or criticizing someone who they think is doing ‘ok’.
  • Use a scale with four or five choices. The benefit of a small scale is that it leaves less room for varied interpretation of what each choice means. Results tend to be more concrete, which may be desirable in your evaluation.
  • Provide scales that flow from positive to negative. Students who feel negatively about a teacher are more likely to scan to the end of the list for their specific choice than students who feel positive. Placing the positive choices at the end may therefore produce more negative responses, because the negative choices are the first items the student sees.
  • Avoid Don’t Know or Not Applicable. The student should be able to answer all the questions in the evaluation. The point is to ask only questions that ARE applicable to the course taught. If you must include these types of choices, stress in your introduction or notes that the respondent should not use them to avoid answering questions. If you find a question has been answered with a large number of don’t knows or not applicables, review the question for quality, clarity, and validity, and modify the evaluation to avoid this in the future.
  • Include write-in questions. A write-in response allows a student to be more specific about the teacher. If you prefer a more analytical/statistical approach, or if write-in comments would create concerns, you may want to exclude them. Generally speaking, though, a student who takes the time to complete a course evaluation may have very specific points or examples that can be helpful.

Designing a Course Evaluation Using SurveyTracker

A course evaluation needs to identify the course or instructor being evaluated. This means you need a question asking for the teacher’s name and/or the course name/number. This question is then paired with a second question that is specific to the instructor, such as the course name, semester, campus, or similar information. The second question is used in reports to show, for example, all the courses that the instructor taught.

For the instructor questions, you may want to include a scale choice for each instructor so you can avoid misspelled names (which will result in multiple reports in the automated Evaluation Table unless manually corrected). This may not be practical, depending on the number of courses, so you may have to export the response data and clean it up after it’s all collected.
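If you do end up with write-in instructor names, the cleanup step usually amounts to matching each entry against an official roster. As a minimal sketch in Python – the roster, names, and similarity cutoff below are hypothetical, not SurveyTracker features:

```python
from difflib import get_close_matches

# Official roster of instructor names (hypothetical)
roster = ["Smith, John", "Garcia, Maria", "Lee, Anna"]

# Write-in responses, including misspellings that would otherwise
# produce duplicate rows in the Evaluation Table
write_ins = ["Smith, John", "Smith, Jon", "Garca, Maria", "Lee, Anna"]

def normalize(name: str) -> str:
    """Map a write-in name to the closest roster entry, or keep it
    unchanged for manual review when no close match is found."""
    match = get_close_matches(name, roster, n=1, cutoff=0.8)
    return match[0] if match else name

cleaned = [normalize(n) for n in write_ins]
print(cleaned)  # misspellings folded into their roster spellings
```

After a pass like this, each instructor appears under a single spelling, so the Evaluation Table (or any report keyed on instructor name) produces one report per person.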

The Evaluation Table automatically generates data filters based on these two questions.

Course Evaluation Add-On Component

The SurveyTracker Course Evaluation Add-On Component allows you to open up the “relationship” (course number, semester, etc.) to more entries than is allowed by default in SurveyTracker. We recommend purchasing this add-on to ensure that you can issue course evaluations across a wider range of courses than would otherwise be supported.

Course Evaluation Identification and Setup

Defining your instructors and students introduces its own set of challenges.

Here are a few ideas:

  • If you have a school database of instructors and courses, you may want to refer to it in order to set up your survey or at least have it handy when it comes time to clean up any data if using write-in questions to identify instructor/course information.
  • If you plan to send out your course evaluations as e-mail, check your student records to see if you have e-mail addresses to pull from.
  • If you don’t have a solid list of teachers and courses, let each instructor define his or her courses, perhaps through a separate survey. Once you have the information, enter it into SurveyTracker and distribute the surveys.
  • Make sure you get buy-in or assurance from all instructors. If an instructor refuses to hand out the evaluations, your whole process becomes questionable. It’s not impossible that an unpopular teacher will choose not to hand out the evaluation while the popular teachers will. This will skew the results and annoy the students who may want to provide feedback about a bad experience.

Distributing the Evaluation

The most common way a course evaluation is distributed is via paper (often scannable) forms handed out in the classroom. Scanners and scannable forms are fairly common in the education market, so you may have access to them. They are also very familiar to students.

On the other hand, students are high-tech and often have access to laptops, smart phones, and tablets on which they can complete online surveys.

That said, there is a school of thought that the best way to get a good response rate is to make the evaluation immediate, going back to the idea of handing out forms during class. Your response rate is important and there’s nothing like the immediacy of the class room to get responses handed in quickly.

Make sure every instructor and course receives enough copies of the evaluation (if printed). No student should be left without the opportunity to reply to the evaluation. The evaluation should be distributed prior to the final exams, preferably on that day, to ensure the greatest number of students receive it. Make sure you have a turn-in box, and encourage students to fill in the form while in class.

If sending out the evaluation via e-mail or on the web, make sure you don’t allow much time between the completion of the course and the receipt of the invitation. Capture the student responses while their memory of the course is still fresh. Also, if enough time passes, the student may simply not bother to reply.

Course Evaluation Reports and Meetings

When providing a report to the instructors, it is best to bring the person into a private meeting to explain the results. The focus of the review should be on the strengths and weaknesses discovered in the student responses.

During the process, you must remember to deliver the feedback in a positive and considerate way. Provide ideas and suggestions in a way that makes sense to the instructor and keeps the meeting positive and instructional.

The conference should be formal and professional to keep the results objective. This can be hard, as the evaluation is certainly subjective, but presenting positive, non-personalized results can keep the atmosphere constructive – especially when delivering negative results that could strike at the core professional identity of the instructor.

You should give enough feedback to be useful, being careful not to overwhelm and discourage the instructor.

You may want to leave out responses to write-in questions, if you included them in the survey. Instead, use the comments to provide you with insight as to why the results are the way they are. Tailor the review of the report based on those comments. Providing the write-in results can cause more emotional distress than is necessary if the comments are negative.

You should end the meeting by setting up specific, achievable goals for the future. Working with the teacher to determine how to fix any problems ensures that the instructor leaves the meeting feeling like he or she is working with you, not being judged by you.

Don’t forget to ask their opinions; don’t dictate the meeting by making it one-sided.

You can also enlist more experienced teachers to assist less experienced teachers who received negative results. A mentor-type relationship that draws on that experience can help soften the blow of a poor evaluation.


Course Evaluations are one of the strongest information-gathering and feedback tools a school has in its arsenal. Deploying them regularly across all courses and teachers is important and can help increase the quality of instruction in the future.

© Training Technologies, Inc.