In the community college setting, each instructor encounters, semester by semester, one piece of research that directly relates to teaching, or perhaps I should say to holding class. The technical term for this survey form of research is the SET (Student Evaluation of Teaching). Administrators place some predictive confidence in these surveys, though by design they appear to be simply descriptive. It is easy for the shrewd instructor to manipulate both SET input and the results. Let me give an illustration.
Although SETs are used to present quantifiable data, often on a 1-5 scale (strongly agree, agree, neutral or no opinion, disagree, strongly disagree) reported as percentage breakdowns, it has been documented in the literature that student input can be directly influenced by one or more of the following: candy, gender, easy course requirements, personal appearance (hot or not), and professional favors. For example, if I wanted high student ratings on a bad course, I would prep the students at least a week before the SET by showing funny movies, handing out popcorn, and allowing students to write their own final scores in the gradebook. On the other hand, if I taught a rather difficult course, I could receive a poor SET score merely because I gave a difficult test, did not allow late work, or wore stinky socks on the day of the SET.
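To see how thin the quantitative veneer is, consider how a SET report is actually produced: raw 1-5 ratings are tallied and converted to percentages. The sketch below uses invented response numbers purely for illustration; the function name and data are my own, not part of any real SET system.

```python
from collections import Counter

# Hypothetical SET responses on a 1-5 scale (5 = strongly agree).
# These numbers are invented for illustration only.
responses = [5, 5, 4, 4, 4, 3, 2, 5, 4, 1]

def percentage_breakdown(scores):
    """Convert raw 1-5 ratings into the percentage table a SET report shows."""
    counts = Counter(scores)
    total = len(scores)
    return {rating: round(100 * counts.get(rating, 0) / total, 1)
            for rating in range(1, 6)}

print(percentage_breakdown(responses))
# With only ten respondents, a single student swayed by popcorn or a
# hard exam moves a category by a full ten percentage points.
```

The arithmetic is trivial; the point is that the precise-looking percentages inherit all the subjectivity of the mood-driven inputs beneath them.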
This is not to dismiss SETs entirely or to call for a moratorium on their use, but it does illustrate the need for controls, not only over the surveyed group and its environment, but also over the interpretation of results and the possible fallacy of basing quantitative reports on qualitative, highly subjective input. This is definitely not rocket science from anyone's point of view, unless that be the viewpoint of one's department head.