Reading list

Jenny Preece, Helen Sharp, Yvonne Rogers, Interaction Design: Beyond Human-Computer Interaction, 1st ed., John Wiley & Sons, Inc., 2002. (summary)

 

1. What is interaction design?

Interview with Gitta Salomon

2. Understanding and conceptualizing interaction
Interview with Terry Winograd
3. Understanding users
4. Designing for collaboration and communication
Interview with Abigail Sellen

Humans are inherently social.

 


5. Understanding how interfaces affect users

6. The process of interaction design

Interview with Gillian Crampton Smith
 

7. Identifying needs and establishing requirements

7.4 Data gathering techniques

The purpose of data gathering is to collect sufficient, relevant, and appropriate data so that a set of stable requirements can be produced.

Questionnaires. Good for reaching large groups of people or people who are geographically dispersed.

Interviews. If the interview takes place in context, the context can prompt interviewees to remember things they might otherwise forget. Interviews can be structured, semi-structured, or unstructured, depending on how rigorously the interviewer sticks to a prepared set of questions.

Focus groups and workshops. A group of stakeholders is brought together to discuss issues and requirements. Sessions can be structured or unstructured. Participants need to be chosen carefully, and the sessions need to be structured so that a few people cannot dominate the discussion.

Naturalistic observation. It can be very difficult for people to explain what they do or to describe how they achieve a task. Observation provides a richer view. The observer shadows a stakeholder, making notes, asking questions (but not too many), and observing what is done in the natural context of the activity. The observer can remain external to the activity or participate in it to varying degrees.

Studying documentation. Documentation contains procedures and rules, so it is a good starting point for understanding the steps involved in an activity.

7.5 Data Interpretation and analysis

The aim of the interpretation is to begin structuring and recording descriptions of requirements.

 

Interview with Suzanne Robertson


8. Design, prototyping and construction
 

9. User-centered approaches to interaction design

 


Interview with Karen Holtzblatt
10. Introducing evaluation


11. A framework for evaluation

11.2.1 Evaluation paradigms

Any kind of evaluation, whether it is a user study or not, is guided either explicitly or implicitly by a set of beliefs, which may also be underpinned by theory.

Evaluation paradigm: beliefs and the practices (methods and techniques) associated with them. This book describes four paradigms:

  1. quick and dirty
  2. usability testing
  3. field study
  4. predictive evaluation

Quick and dirty evaluation

It is a common practice in which designers informally get feedback from users or consultants to confirm that their ideas are in line with users' needs and are liked. This kind of evaluation can happen at any stage. It is fast and not carefully documented.

Usability testing

Usability testing was prominent in the 1980s. While it is still important, field studies have grown in prominence.

Usability testing: measuring typical users' performance on carefully prepared tasks. Users' performance is measured in terms of errors and time to complete a task. Usability testing is strongly controlled by the evaluator and takes place out of the context of use. The dominant theme is quantifying users' performance.

Field studies

Field studies are done in natural settings with the aim of increasing understanding of what users do naturally and how technology impacts them. Field studies can be used to:

  1. help identify opportunities for new technology
  2. determine requirements for design
  3. facilitate the introduction of technology
  4. evaluate technology

There are two approaches to field studies: outsider or insider.

Predictive Evaluation

Experts apply their knowledge of typical users, often guided by heuristics, to predict usability problems. Another approach involves theoretically based models. Users do not need to be present.

11.2.2 Evaluation techniques

Some techniques are used in different ways in different evaluation paradigms.

Observing users. Observation techniques help to identify needs leading to new types of products and help evaluate prototypes.

Asking users. Asking users what they think of a product. The main techniques are interviews and questionnaires.

Asking experts. Guided by heuristics, experts step through tasks, role-playing typical users, and identify problems.

User testing. Measuring user performance to compare two or more designs.

Modeling users' task performance. Modeling human-computer interaction so as to predict the efficiency and problems associated with different designs at an early stage. GOMS and keystroke-level analyses are the best-known techniques.
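
To make the keystroke-level idea concrete, the following is a minimal sketch (in Python, not from the book) of a KLM-style prediction. The operator times are the commonly cited textbook estimates, and the task sequence is invented purely for illustration.

    # Minimal sketch of a Keystroke-Level Model (KLM) style prediction.
    # Operator times are the commonly cited estimates, assumed here for
    # illustration; real analyses calibrate them to the user population.
    OPERATOR_TIMES = {
        "K": 0.2,   # press a key or button
        "P": 1.1,   # point with a mouse at a target on screen
        "H": 0.4,   # home hands between keyboard and mouse
        "M": 1.35,  # mental preparation for an action
    }

    def predict_task_time(operators):
        """Sum the estimated times for a sequence of KLM operators."""
        return sum(OPERATOR_TIMES[op] for op in operators)

    # Hypothetical task: point at an icon, click it, move to the keyboard,
    # and press Delete.
    sequence = ["M", "P", "K", "H", "M", "K"]
    print(f"Predicted time: {predict_task_time(sequence):.2f} s")  # 4.60 s

Summing the operator times gives a rough prediction of expert, error-free performance, which is what these models are designed to estimate.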

11.3 DECIDE: a framework to guide evaluation

Determine the overall goals that the evaluation addresses

Explore the specific questions to be answered

Choose the evaluation paradigm and techniques to answer the questions.

Identify the practical issues that must be addressed, such as selecting participants.

Decide how to deal with the ethical issues.

Evaluate, interpret, and present the data.

Goals. Examples of goals:

  1. Investigate the degree to which technology influences working practices.
  2. Identify the metaphor on which to base the design.

Practical Issues.

Users. Involve appropriate users and decide how they will be involved.
Facilities and equipment.
Schedule and budget constraints.
Expertise. Does the evaluation team have the expertise needed to do the evaluation?

Ethical Issues

Evaluate, interpret, and present data

Evaluators need to decide what data to collect, how to analyze it, and how to present it.

Reliability. How well does the technique produce the same results on separate occasions?
Validity. Does the evaluation technique measure what it is supposed to measure? For example, a laboratory usability study is not an appropriate way to find out how people use a product in their homes.
Bias. Bias occurs when the results are distorted.
Scope. How widely can the study's findings be generalized?
Ecological validity. How the environment in which an evaluation is conducted influences or even distorts the results. Laboratory experiments have low ecological validity, while ethnographic studies have high ecological validity.

11.4 Pilot studies

It is always worth testing plans for an evaluation by doing a pilot study before launching into the main study.

12. Observing users

Goals and questions provide focus for observation, and they should guide all evaluation studies. Even in field studies and ethnography there is a careful balance between being guided by goals and being open to modifying, shaping, or refocusing the study as you learn about the situation. Being able to keep this balance is a skill that develops with experience.

12.2 Goals, Questions, and Paradigms

12.2.2 Approaches to Observation

"Quick and Dirty" observation

These can occur anytime, anywhere; they are ways of finding out what is happening quickly and with little formality.

Observation in Usability testing

Video and interaction logs capture everything that the user does during usability tests. In addition, observers can watch through a one-way mirror or on a remote TV screen.

Observation in field studies.

The observer may be anywhere along the outsider-insider spectrum. Whether and in what ways observers influence those being observed depends on the type of observation and the observer's skills. The goal is to cause as little disruption as possible.

Outsider: people observe from the outside, without participating. Example: an observer interested in how much time is spent at a computer in a classroom.

Insider:

    - Participant observers: observe from the inside, as members of the group. Evaluators participate with users in order to learn what they do and how they do it.
    - Ethnographers: some consider ethnographers to be participant observers, while others consider participant observation to be one technique used within ethnography, along with informants from the community, interviews with community members, and the study of community artifacts. Ethnography usually takes weeks or months.

12.3 How to Observe

In the lab the emphasis is on the details of what individuals do, while in the field the context is important and the focus is on how people interact with each other, with the technology, and with their environment.

12.3.1 In controlled environments

The role of the observer is to collect and then make sense of the stream of data.

The lab is set up with cameras.
The think-aloud technique is used.

12.3.2 In the field

Many experts have a framework to structure and focus their observations.

Examples of frameworks:

  - A very simple framework.
  - The Goetz and LeCompte framework.
  - Another framework by Colin Robson.

These frameworks are useful not only for providing focus but also for organizing the observation and data-collection activity.

12.3.3 Participant observation and ethnography

A participant observer must be accepted in the community. There are two approaches to ethnography.

  1. Interpretivist: evaluators keep an open mind about what they will see.
  2. Theoretical underpinning: before going into the field, the ethnographer begins with a problem, a theory or model, a research design, specific data collection techniques, tools for analysis, and a specific writing style.

Ethnographic studies allow multiple interpretations of reality; the approach is interpretivist. Data collection and analysis often occur simultaneously in ethnography, with analysis happening at many different levels throughout the study. The question being investigated is refined as more understanding about the situation is gained.

12.4 Data Collection

12.5 Indirect Observation: tracking users' activities

Sometimes direct observation is not possible, so users' activities are tracked indirectly.

12.5.1 Diaries

Diaries provide a record of what users did, when they did it, and what they thought about their interactions with technology.

12.5.2 Interaction logging

Interaction logging collects data such as key presses and mouse or other device movements. This data is usually synchronized with video and audio logs.
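
As an illustration of the kind of data involved, here is a minimal sketch (in Python; the file name, event names, and wiring to actual UI callbacks are assumptions) of a logger that timestamps each user event so the log can later be synchronized with video and audio recordings.

    # Minimal sketch of an interaction log: each user event is written as a
    # timestamped record that can later be aligned with video/audio timecodes.
    import csv
    import time

    class InteractionLogger:
        def __init__(self, path):
            self._file = open(path, "w", newline="")
            self._writer = csv.writer(self._file)
            self._writer.writerow(["timestamp", "event", "detail"])

        def log(self, event, detail=""):
            # Wall-clock timestamp; video/audio are synchronized against it.
            self._writer.writerow([f"{time.time():.3f}", event, detail])

        def close(self):
            self._file.close()

    # Hypothetical usage: these calls would normally be wired to UI callbacks.
    logger = InteractionLogger("session01.csv")
    logger.log("key_press", "a")
    logger.log("mouse_move", "x=120,y=340")
    logger.log("button_click", "Save")
    logger.close()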

12.6 Analyzing, interpreting, and presenting the data

12.6.1 Qualitative Analysis to tell a story

Qualitative data is interpreted and used to tell "the story" about what was observed.

Analyzing and reporting ethnographic data.

For ethnographers it is important to understand events in the context in which they happen. There is more emphasis on details.

12.6.2 Qualitative analysis for categorization

Looking for incidents or patterns

A common strategy for analyzing video is to look for critical incidents. Evaluators focus on these incidents and analyze them in detail; the rest of the video serves as context.

Theory may be used to guide the study, for example Activity Theory.

Analyzing data into categories

Content analysis is a systematic way of coding content into a meaningful set of mutually exclusive categories. The categories are determined by the evaluation questions.

The best approach is to have two people analyze the data into categories. The inter-researcher reliability rating is the percentage of agreement between the two researchers, defined as the number of items that both categorized in the same way, expressed as a percentage of the total number of items examined (Ebling and John, 2000).
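
A minimal sketch of that calculation (in Python; the category labels and data are hypothetical) follows.

    # Inter-researcher reliability rating as described above: items coded the
    # same way by both researchers, as a percentage of all items examined.
    def percent_agreement(codes_a, codes_b):
        if len(codes_a) != len(codes_b) or not codes_a:
            raise ValueError("both researchers must code the same set of items")
        agreed = sum(a == b for a, b in zip(codes_a, codes_b))
        return 100.0 * agreed / len(codes_a)

    # Hypothetical categories assigned to ten observed incidents.
    researcher_1 = ["nav", "nav", "error", "help", "nav", "error", "help", "nav", "nav", "error"]
    researcher_2 = ["nav", "error", "error", "help", "nav", "error", "nav", "nav", "nav", "error"]
    print(f"Agreement: {percent_agreement(researcher_1, researcher_2):.0f}%")  # 80%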

Analyzing discourse

Discourse analysis focuses on the dialog. It is strongly interpretive, pays great attention to context, and views language not only as reflecting psychological and social aspects but also as constructing them. An underlying assumption of discourse analysis is that there is no objective scientific truth. Language is a form of social reality that is open to interpretation from different perspectives. In this sense, the underlying philosophy of discourse analysis is similar to that of ethnography. Language is viewed as a constructive tool, and discourse analysis provides a way of focusing on how people use language to construct versions of their worlds.

12.6.3 Quantitative data Analysis

Video is annotated as it is observed, and evaluators also mark incidents of interest. The evaluators use the annotated video to calculate performance times. It is also possible to quantify categorized data.
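
A minimal sketch of deriving performance times from such annotations (in Python; the annotation format with start/end markers and times in seconds is an assumption made for illustration):

    # Compute task completion times from timestamped start/end annotations.
    annotations = [
        ("task1", "start", 12.0), ("task1", "end", 95.5),
        ("task2", "start", 110.0), ("task2", "end", 247.2),
    ]

    def completion_times(annotations):
        starts, durations = {}, {}
        for task, marker, t in annotations:
            if marker == "start":
                starts[task] = t
            elif marker == "end":
                durations[task] = t - starts[task]
        return durations

    for task, seconds in completion_times(annotations).items():
        print(f"{task}: {seconds:.1f} s")  # task1: 83.5 s, task2: 137.2 s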

12.6.4 Feeding the findings back into design

The findings are fed back in a written report with an overview at the beginning and a detailed contents list.

13. Asking users and experts


Interview with Jakob Nielsen
14. Testing and modeling users
Interview with Ben Shneiderman
15. Doing design and evaluation in the real world: communicators and advisory systems
Epilogue