The Unwritten Rules of PhD Research
Gordon Rugg and Marian Petre
There are many techniques for eliciting information from people, most of them much more useful for any given research question than questionnaires and interviews, and you should have enough basic knowledge of the main techniques to make an informed choice between them. These techniques include: participant observation, shadowing, direct observation, indirect observation, critical incident technique, scenarios, structured interviews, unstructured interviews, depth interviews, group interviews, card sorts, laddering, repertory grids and various forms of content analysis.
You should also be aware of the concepts affecting choice of technique, such as external validity, reliability (test-retest, inter-observer) and observer effects. You should remember to make sure you have ethical clearance if needed.
Sampling and sample size Your sample needs to be either a total sample of an entire population or (much more often) a representative sample of a larger population. This is not always the same as a random sample, and you should know the difference between the two concepts. You should know how to select your sample in such a way as to make it representative.
There are statistical tests which allow you to say how likely it is that your sample is representative. Once your sample is big enough for you to be reasonably sure that it is representative, you do not need a bigger sample. You should know what level of likelihood is acceptable in your field, why it is considered acceptable, how this level of likelihood is calculated and what it actually means.
Types of research

Overview

• Size: small versus large
• Style: informal versus formal
• Focus: hardware, software, interface, people, the literature
• Data collection methods
• Data analysis methods

Size

Most students assume that a big study is better than a small one, and that a huge study is even better than a big one. Most students also don’t know much about statistics.
You can use statistics to tell you how likely your results are to be the result of chance. You can also use statistics to assess how likely you are to find anything more by extending your sample size. After a certain point, extending your sample size is simply a waste of resources.
It’s also important to realize that increasing your sample size won’t magically transform bad data into good data. If you are collecting bad data (for instance, with a particularly awful questionnaire) then collecting more data simply means that you have even more bad data, and have wasted the time of even more long-suffering respondents.
It’s possible to work out statistically in advance how much data you will need to collect for a particular experiment. The method for doing this is too lengthy to fit in the margin of this page, but if you sweet-talk a statistician there’s a good chance you can persuade them to do it for you.
The result will probably make you happy, the statistician happy (especially if they’re going to be involved in the data analysis at the end) and the general population happy because you won’t be bothering so many of them.
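If you can’t find a statistician to sweet-talk, the kind of calculation they would do is not mysterious. The sketch below uses the standard normal-approximation formula for comparing two group means; the particular numbers (an effect size of 0.5, significance level 0.05, power 0.8) are conventional illustrative choices, not recommendations for your study.

```python
# Back-of-envelope sample-size calculation for comparing two group
# means, using the normal approximation. A statistician would refine
# this (e.g. with a t-distribution correction), but it gives the
# right order of magnitude.
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    # effect_size is Cohen's d: (difference in means) / standard deviation.
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-tailed test
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) / effect_size) ** 2)

# A 'medium' effect (d = 0.5) at the conventional 5% significance
# level and 80% power:
print(n_per_group(0.5))   # 63 per group
```

Note how sensitive the answer is to the expected effect size: halving d to 0.25 roughly quadruples the required sample, which is exactly the sort of thing worth knowing before you start collecting data rather than after.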
If you ask the right sort of question, you can get away with surprisingly small sample sizes. One of the authors once got a paper into a good international conference based on data collected from one respondent, which is about as small a sample as you can get. (Some of the referees even said nice things about it.) Some sample sizes, and things to say about them, are listed below. The letter ‘n’ refers to the size of the sample: an n of 3 means that the sample size is three, for instance. Disciplines, as usual, vary. If you tried using an n of 2 in epidemiology, for instance, people would probably still be talking about you when your grandchildren had become old.
Use some discretion, and look at what the norms are in journals in your field.

n = 1 to 5: case study
Typical examples: in-depth study of an organization, demonstration of concept, ‘white crow’ study (demonstrating that an improbable-sounding effect exists).

n = 5 to about 20: pilot study or small study
Typical examples: gathering rich data from a small sample; extended fishing expedition; extended demonstration of concept.

n = about 20 to about 50: study
Typical examples: field experiment, formal experiment.

n = about 50 or more: survey
Typical examples: gathering information about the incidence of a particular condition, belief etc. Surveys usually consist simply of gathering information, without any experimental manipulation of the respondents. It is possible to do experiments of this size, but the logistics quickly become horrible.
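The reason surveys need larger samples than experiments is that the precision of an estimated proportion shrinks only with the square root of n. A minimal sketch, using the normal approximation and the conventional 95% confidence level (the worst case, p = 0.5, is assumed for the proportion):

```python
# Margin of error (half-width of a 95% confidence interval) for an
# estimated proportion p, by the normal approximation. The value
# 1.96 is the standard normal z-value for 95% confidence.
import math

def margin_of_error(n, p=0.5):
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (10, 50, 100, 400, 1000):
    print(n, round(margin_of_error(n), 3))
```

With n = 50 the margin is roughly plus or minus 14 percentage points; quadrupling the sample to 200 only halves that. This square-root law is why, past a certain point, extending your sample is a waste of everybody’s time.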
Style

There is a traditional divide in most areas between ‘neats’ and ‘scruffies’. The ‘neats’ concentrate on formalisms to provide clean, abstract descriptions of the area; the ‘scruffies’ concentrate on understanding what is actually going on, even if they can’t express it very neatly. Relations between the two groups usually vary between cool disdain and bitter feuding. ‘Neats’ typically have more academic street credibility, because they typically use intimidating mathematical representations. ‘Scruffies’ typically have more credibility with industry, because they typically have a wonderful collection of ‘war stories’, and know just what sort of things go on when the Health and Safety Executive isn’t watching. Some people straddle the divide and have both a wonderful fund of stories and the ability to use intimidating representations. These people frequently end up as the ‘gurus’ in a field, and apparently get quite a few free meals and invitations to nice conferences as a result.
Anyway, returning to planning research, there is a spectrum of research types ranging from formal to informal. At the formal end of the scale are abstractions: for instance, mathematical modelling of an area, or trying different representations of the same topic. For this sort of work, you usually won’t need to worry about sample size because you won’t be collecting data as such; instead, you’ll be assessing how well the formalism performs.
Next along the scale is the formal controlled experiment, straight out of the textbook: for instance, comparing the responses from two groups which you have treated in different ways. For this, you will know which variables you are manipulating and which you are measuring; you will have thought carefully about sample size.
Around the middle of the scale is the field experiment, where you are not able to control all the variables that you would like to and are trading that off against the realism of experimentation in the outside world. For instance, when redecoration time comes round, you might manage to persuade your establishment to paint the walls of one computer room a tasteful shade of green to see whether this calms down the users and reduces the number of complaints they make about the computers, compared to the users in the standard-issue hideous orange rooms. For this, you will know which variables you are manipulating and which you are measuring, but you will be horribly aware that other variables may be scurrying around looking for somewhere to cause you trouble.
At the scruffy end of the scale is the collection of squishy subjective data with a very small n. A good example of this is the eminent sociology professor who allegedly studied tramps via participant observation (i.e.
passing himself off as a tramp and socializing with them). The result can be extremely interesting insights into an area, plus data that nobody else has, plus clothing that smells of methylated spirits.
Focus

Research can focus on a variety of things. In computer science, for instance, the research may focus on hardware, software, interfaces, people (either as groups or individually) or the literature. There is a useful conceptual divide which can be applied to most fields, consisting of research into (a) inanimate things, (b) people/animals/plants and (c) concepts/the literature. Each of these has different implications for research design.
Research into inanimate things is a Good Thing. Much of physics, chemistry, geology and similar disciplines involves work of this sort. These disciplines usually have their own well-established ways of doing things and we have no intention of trying to teach them how to suck eggs.
Research involving people (and other living things, which we will ignore from now on, for brevity) is also a Good Thing. However, although there are many relevant disciplines, such as psychology, sociology, ergonomics, history and the like, most of which have venerable histories, it has to be said that there is still room for improvement in their methods. This is particularly the case with students, who usually appear not to have encountered any methods other than interviews, questionnaires and reading books or internet articles.
(This issue is discussed in more depth below.)

Research involving concepts/the literature can be a Good Thing, but is a lot harder than it looks. If you are doing research involving collecting new data, then it is pretty easy to find something which is in some way new – even a change of wording on a questionnaire can be enough. If you are doing research involving concepts and/or the literature, however, then you need to know what is already known before you can start looking for something new. This means that you are giving yourself the task of reading the literature, including the most recent and the most advanced work in your chosen area, and then trying to find something that the best minds in the area haven’t thought of yet. This is not advisable for a beginner. It is also not advisable to believe that simply reporting what other people have found will count as original research – it won’t. At PhD level you will need to take on the literature and show that you can do original work, but it’s wiser to do this via new data or new methods rather than head-on.
Data collection methods

Data can be collected using a wide range of methods. It is a good idea to become familiar with a range of these, since this can make your life a lot simpler.
If you’re dealing with people in your data collection, you might want to find out more about the following. Some of these (particularly physical measures) may involve ethical or licensing issues:
• physiological measures: response time; ECG; EEG; skin galvanometer measures; physical force used on instrument;
• behavioural: responses to various situations, such as smoke coming from under a door, or whether respondents post a dropped letter (may involve ethical issues);
• observation: direct, indirect; participant; shadowing; time lines;
• interview-like: interviews; scenarios; critical incident technique;
• personal construct theory: repertory grids, laddering, card sorts, implication grids.
Data analysis methods

There are numerous ways of analysing data and it is usually possible to analyse the same data in quite different ways for different purposes. You might want to find out more about the following, which come upstream of any statistical analysis you might want to do: content analysis; coding into categories; time lines; discourse analysis; causal assertions; semiotics; deconstruction.
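To make ‘coding into categories’ concrete, here is a toy sketch: free-text responses are tagged against a hand-built coding scheme and the codes tallied. The categories and keywords are invented purely for illustration; a real coding scheme would be developed from the data itself and checked for inter-coder reliability.

```python
# Toy illustration of coding free-text responses into categories.
# One response may attract several codes; a real scheme would be
# richer and applied by human coders, not keyword matching.
from collections import Counter

CODING_SCHEME = {
    "usability": {"confusing", "menu", "interface", "click"},
    "performance": {"slow", "crash", "freeze", "wait"},
    "support": {"helpdesk", "manual", "training"},
}

def code_response(text):
    # Return every category whose keywords appear in the response.
    words = set(text.lower().split())
    return [cat for cat, kws in CODING_SCHEME.items() if words & kws]

responses = [
    "The interface is confusing and the menu is cluttered",
    "It is slow and tends to crash after lunch",
    "Nobody at the helpdesk answered and it is slow",
]

tally = Counter(code for r in responses for code in code_response(r))
print(tally)   # Counter({'performance': 2, 'usability': 1, 'support': 1})
```

The tally is what then goes downstream into any statistics; the hard intellectual work is in devising and validating the coding scheme, not in the counting.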
Classic pitfalls in research design

The biggest obstacle to research is researchers’ own assumptions. Ignorance and isolation are the enemies of research. Here are some other common pitfalls to watch out for:
• Leaping before looking: failure to reflect (think; reflect on assumptions, evidence, techniques, what can go wrong).
• Ignorance: often manifest as reinventing the wheel (a day in the library can save six months of redundant research).
• Putting the cart before the horse: trying to choose techniques before refining the questions and evidence requirements (do first things first).
• Great expectations: also known as eyes bigger than stomach, or biting off more than you can chew (if the question is too big, ask a smaller question; a life’s work takes a lifetime, but it’s achieved one step at a time).
• Sand through the fingers: for a precise study, you need a precise question, but by the time you’ve got your experiment sufficiently controlled, you’ve lost sight of your purpose and possibly of your big question (back off and remind yourself why you started, then review your inference chain meticulously; maybe what you really need is some coarser-grained approach to help you refine the question).
• Bias: be vigilant, be honest, go and read a good book on the subject.
• Confusing anecdote with fact: what ‘everyone knows’ is often wrong (let anecdote help shape your questions, but then seek independent evidence in order to ﬁnd answers).
• Confusing statistics with rigour: find out what statistics can and cannot do, then go and find a good experimental statistician to consult. The point is to know what can and cannot be shown with different sorts of evidence.
• The false seduction of the definitive experiment: sometimes you need a different research method.
• Lack of respect for failure: Niels Bohr said, ‘Science is not “that’s interesting” but “that’s odd” ’. Great research often comes from surprise. The only bad study is one that doesn’t inform you; what information does your ‘failure’ provide?
• Shortage of theory: back to the library.
• Overgeneralization: take care in your storytelling and be meticulous about your inference chain.
• Fatal independence: trying to be an expert in everything (cultivate a social life; have coffee with a genuine expert).
Planning a body of research
So there you are, newly appointed, with a desk in your shared office, with the world in front of you, not sure quite what to do next apart from react to what your line manager is telling you, and wishing that you hadn’t read out the checklist item about the key to the departmental wheelbarrow when you worked through the list of things you needed with the departmental secretary.
What do you do?
The first step

The first priority is the cockroach principle. Cockroaches have been around for a lot longer than human beings, and are likely to be around for a long time to come. They didn’t last this long by having maladaptive strategies. One of their key principles is to make sure they have a nice, safe hole to scuttle into when things get scary. From your point of view, this means that you should make sure you have a protector and/or bolt-hole. Ideally, these should be your boss and your job, respectively. A good boss will protect staff and treat them as valued assets. If you have such a boss and you’re being given needless grief by some idiot with more power than you, then your boss should be able to sort things out. It’s a very comforting feeling to be able to say: ‘You’ll need to discuss that with my line manager’ and to know that you will never hear that request/threat/inappropriate command again.
Humans, however, face a problem which cockroaches are spared: human bolt-holes are organizational rather than physical, so humans need to maintain them. This is done, not by grovelling or bribery, but by honouring your end of the bolt-hole deal – you do the work which you are supposed to and support your boss in their daily struggle against the forces of chaos and darkness. This need not be a scary process; some very successful bosses get their way by a reputation as friendly, helpful, useful people to have around. This can occasionally lead to odd situations, such as your helping someone in a different faculty with some work which appears to have no visible connection to your official role, but it all helps the world go round.