Posts tagged: research

Embedding research participants in a scenario

Posted 15 August 2013 in research, service design

A participant and researcher sitting at a desk during user research

The materials in the Social Psychology course I’m currently studying include a video of a conversation with psychologist Elliot Aronson, Professor Emeritus at the University of California, Santa Cruz, reflecting on what he learned as a graduate student of the social psychologist Leon Festinger:

“…what I learned from Festinger was how to do it, how to do experiments. It required very special skills and very special training. You had to be a playwright, you had to be a director, you had to be an actor, because you had to sell the procedure, because the laboratory is a very sterile place. A person comes in to be the subject in an experiment, he’s in a sterile environment. What our job was, to embed that person in a scenario where he’s not stepping back and making decisions about ‘what would a person normally do in this situation?’, but where he’s so embedded in the scenario that we constructed that he’s behaving the way he or she would behave in his or her real environment if it were happening, and for that you needed those skills: actor, director, playwright, so you wrote a scenario that was powerful.

The difference between simply sleepwalking through the instructions and doing it in a dramatic way is the difference between the hypothesis coming out and not coming out.”

This description of crafting experiment scenarios rings true in design research as well. When exploring a problem space or testing a concept, a designer’s intention is to understand behaviour rather than just collect checkbox data about attitudes or task completion. Rich research data comes from setting a scene that encourages a person to speak openly and honestly, and where they can demonstrate the types of reactions they would have if the facilitator wasn’t present.

It takes practice to learn how to ask “Why?” without sounding like a broken record, and to listen for cues that suggest the participant has slipped into saying what they think you want to hear. Some people need reassurance that their personal opinion, whatever it may be, is perfectly valid and valuable. A considered environment, script, and set of activities, and the ability to riff and deviate where required without losing sight of the research objectives, will help a participant become comfortable quickly, and before you know it they’re saying “Wow, that was quick and easy!” at the end of an hour of conversation.

Looking out for hindsight bias

Posted 15 August 2013 in design, research, service design, user experience

An illustration of a meeting where someone is thinking "I could have told them all of this!"

I’m currently diving into a Social Psychology MOOC through Coursera, which is prompting some personal reflection about design research.

One point I’ve been thinking about is hindsight bias, or the I-knew-it-all-along effect, where people, presented with some facts, strongly (and wrongly) feel that they already knew those facts. There have been a variety of studies into hindsight bias looking at how and when it occurs. An excerpt from (an old version of) David G. Myers’ Exploring Social Psychology summarises the effect.

How can hindsight bias creep into UX or service design research?


Structuring surveys to avoid confusion

Posted 2 September 2012 in business, research

Tonight I went to look at the web site for the Australian TV show The Project, to find information about an upcoming story (which I failed to find… hmf!), and a little overlay appeared asking me to take part in a survey, so of course I accepted. It turned out the survey was being conducted by the same people as the SMH survey I wrote about the other week.

The survey started innocently enough, asking for demographic information. It asked the same “Where do you access the internet from?” question I’d had in the SMH survey, and happily this version provided checkboxes so that I could select multiple options. Then, unfortunately, the survey began to go downhill.


Structuring surveys for useful data

Posted 11 August 2012 in business, research

I was checking out the weather on the Sydney Morning Herald, a web site for a newspaper here in Sydney, Australia, when a little window appeared asking me to take part in a survey so that they could understand their customers better. I’m always happy to complete surveys, in large part because I’m curious to see how other people structure them, and I always try to answer honestly. I’ve built and analysed many surveys, and people often underestimate the effort required to create a survey structure that can generate useful data.

While I don’t know the objectives of the SMH survey, several aspects of it made me realise that they’re likely to end up with unrepresentative and unusable data.

Early in the survey, it asked me for my main country of residence, so I picked “Australia”:

That was followed by a question about the state of Australia that I live in, so I picked “New South Wales (NSW)”:

The next question left me confused:

The question asked for my monthly household income in US dollars. Considering the SMH is an Australian newspaper, and that I had just specified that I was living in Australia, asking me to specify my monthly income in a foreign currency felt a little absurd (though it’s not the first time I’ve seen an Australian company do this in a survey). Some people may not realise it’s asking about another currency (or is the currency itself an unintentional mistake?), some might try to be helpful and convert their income to US dollars, some may pick the easy option of “Prefer not to say”, some may pick a random figure, and others may drop out of the survey because it suddenly feels difficult or not applicable to them.

A few alternatives could be to ask for monthly income in the currency of the country of residence, or, given the SMH’s presumably majority-Australian audience, to ask for monthly income in Australian dollars and skip the question when the country of residence is not Australia. Whether to personalise or skip the question would depend on the objectives of the survey.
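The skip-logic alternative above can be sketched in a few lines of code. This is only an illustration of the branching idea; the question wording, income bands, and function name are hypothetical, not taken from the SMH survey.

```python
from typing import Optional

def income_question(country_of_residence: str) -> Optional[dict]:
    """Return the income question to show, or None to skip it entirely."""
    # Skip the question for respondents outside the target market,
    # rather than forcing them to guess in a foreign currency.
    if country_of_residence != "Australia":
        return None
    # Ask in the local currency so answers are directly comparable.
    return {
        "text": "What is your monthly household income?",
        "currency": "AUD",
        "options": [
            "Under $5,000",
            "$5,000 to $9,999",
            "$10,000 or more",
            "Prefer not to say",
        ],
    }

print(income_question("Australia")["currency"])   # asked in AUD
print(income_question("New Zealand"))             # question skipped
```

The same structure extends to the personalised variant: instead of returning None, look up the respondent’s local currency and substitute it into the question.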

Another flaw appeared when the survey asked “Where do you access the Internet from?” with mixed options such as “Home” (a location) and “Smart Phone” (a device), and with the instruction “Please select all that apply” even though the options were radio buttons, meaning only a single selection could be made:
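One fix is to split the mixed question into two separate “select all that apply” questions, one for locations and one for devices, each accepting multiple selections. A minimal sketch of that structure, with illustrative option lists that are not from the actual survey:

```python
# Two separate multi-select questions replace the single mixed one.
ACCESS_LOCATIONS = {"Home", "Work", "School/University", "Public place"}
ACCESS_DEVICES = {"Desktop computer", "Laptop", "Tablet", "Smartphone"}

def record_answers(locations, devices):
    """Validate two multi-select answers; each allows many choices."""
    chosen_locations = set(locations)
    chosen_devices = set(devices)
    if not chosen_locations <= ACCESS_LOCATIONS:
        raise ValueError("unknown location option")
    if not chosen_devices <= ACCESS_DEVICES:
        raise ValueError("unknown device option")
    # Unlike radio buttons, every selection is preserved.
    return {
        "locations": sorted(chosen_locations),
        "devices": sorted(chosen_devices),
    }

print(record_answers(["Home", "Work"], ["Laptop", "Smartphone"]))
```

Separating the two dimensions also makes analysis cleaner: location and device can be cross-tabulated instead of competing for a single answer slot.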

Those weren’t the only issues I had with the survey, but they’re enough to show that if you’re using surveys for research, their content and structure are vital and should be tested thoroughly, and that if you receive data from a survey, it’s important to know how it was captured so that it can be analysed appropriately.