Interview with Gitta Salomon
2. Understanding and conceptualizing interaction
Interview with Terry Winograd
3. Understanding users
4. Understanding and designing for collaboration and communication
Interview with Abigail Sellen
Humans are inherently social.
The purpose of data gathering is to collect sufficient, relevant, and appropriate data so that a set of stable requirements can be produced.
Questionnaires. Good for reaching a large or geographically dispersed group of people.
Interviews. If the interview takes place in context, the context can trigger interviewees to remember certain things. Interviews can be structured, semi-structured, or unstructured, depending on how rigorously the interviewer sticks to a prepared set of questions.
Focus groups and workshops. Get a group of stakeholders together to discuss issues and requirements. Can be structured or unstructured. Participants need to be chosen carefully, and the sessions need to be structured carefully to prevent a few people from dominating the discussion.
Naturalistic observation. It can be very difficult for people to explain what they do or to describe how they achieve a task. Observation provides a richer view. The observer shadows a stakeholder, making notes, asking questions (not too many), and observing what is done in the natural context of the activity. The observer can be external to the activity or can participate in it to various degrees.
Studying documentation. Documentation contains procedures and rules, so it is a good starting point for understanding the steps involved in an activity.
The aim of the interpretation is to begin structuring and recording descriptions of requirements.
8. Design, prototyping and construction
Interview with Karen Holtzblatt
10. Introducing evaluation
Any kind of evaluation, whether it is a user study or not, is guided either explicitly or implicitly by a set of beliefs that may also be underpinned by theory.
Evaluation paradigm: beliefs and the practices (methods and techniques) associated with them. This book describes four paradigms:
It is a common practice in which designers informally get feedback from users or consultants to confirm that their ideas are in line with users' needs and are liked. Evaluation can happen at any stage. It is fast and not carefully documented.
It was prominent in the 1980s. While it is still important, field studies have grown in prominence.
Usability testing: measuring typical users' performance on carefully prepared tasks. Users' performance is measured in terms of errors and time to complete a task. Usability testing is strongly controlled by the evaluator and takes place out of context. The dominant theme is quantifying users' performance.
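The time and error measures collected in a usability test are typically summarized with simple descriptive statistics. A minimal sketch, using invented data for illustration:

```python
from statistics import mean

# Sketch: summarizing usability-test measures for one task across
# participants (the data below is invented for illustration).
times = [48.2, 55.0, 61.7, 44.9, 70.3]   # seconds to complete the task
errors = [1, 0, 2, 0, 3]                 # errors made by each participant

print(f"mean completion time: {mean(times):.1f} s")
print(f"mean errors per participant: {mean(errors):.1f}")
```

These per-task summaries make it possible to compare two or more designs on the same tasks.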
They are done in natural settings with the aim of increasing understanding of what users do naturally and how technology impacts them. Field studies can be used to
There are two approaches to field studies: outsider or insider.
Experts apply their knowledge of typical users, often guided by heuristics, to predict usability problems. Another approach involves theoretically based models. Users do not need to be present.
Some techniques are used in different ways in different evaluation paradigms.
Observing users. Observation techniques help to identify needs leading to new types of products and help evaluate prototypes.
Asking users. Asking users what they think of a product. The main techniques are interviews and questionnaires.
Asking experts. Guided by heuristics, experts step through tasks role-playing typical users and identify problems.
User testing. Measuring user performance to compare two or more designs.
Modeling users' task performance. Model human-computer interaction so as to predict the efficiency and problems associated with different designs at an early stage. GOMS and keystroke analyses are the best known techniques.
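As a sketch of how a keystroke-level analysis works: the predicted execution time for a task is the sum of standard operator time estimates. The operator values below are the commonly cited KLM averages; a real analysis would calibrate them, and the task sequence here is invented.

```python
# Keystroke-Level Model (KLM) sketch: predict task execution time by
# summing standard operator time estimates (commonly cited averages).
KLM_OPERATORS = {
    "K": 0.2,   # keystroke (average skilled typist)
    "P": 1.1,   # point with the mouse to a target
    "B": 0.1,   # press or release a mouse button
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(sequence):
    """Return the predicted execution time (seconds) for a KLM operator sequence."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Hypothetical task: think, point to a field, click (press + release),
# home hands to the keyboard, then type four characters.
task = ["M", "P", "B", "B", "H"] + ["K"] * 4
print(round(predict_time(task), 2))
```

Because the model needs no users, such predictions can compare candidate designs very early, as the paradigm described above intends.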
Determine the overall goals that the evaluation addresses
Explore the specific questions to be answered
Choose the evaluation paradigm and techniques to answer the questions.
Identify the practical issues that must be addressed, such as selecting participants.
Decide how to deal with the ethical issues.
Evaluate, interpret, and present the data.
Goals. Examples of goals: investigate the degree to which technology influences working practices; identify the metaphor on which to base the design.
Users. Involve appropriate users and decide how they will be involved.
Facilities and Equipment.
Schedule and budget Constraints.
Expertise. Does the evaluation team have the expertise needed to do the evaluation?
Evaluate, interpret, and present data
Evaluators need to decide what data to collect and how to analyze and present it.
Reliability. How well does the technique produce the same results on separate occasions under the same circumstances?
Validity. Does the evaluation technique measure what it is supposed to measure? For example, a laboratory usability study is not appropriate for finding out how people use a product in their homes.
Biases. Bias occurs when the results are distorted.
Scope. How much can a study finding be generalized?
Ecological validity. How the environment in which an evaluation is conducted influences or even distorts the results. Laboratory experiments have low ecological validity, while ethnographic studies have high ecological validity.
It is always worth testing plans for an evaluation by doing a pilot study before launching into the main study.
Goals and questions provide focus for observation. They should guide all evaluation studies. Even in field studies and ethnography there is a careful balance between being guided by goals and being open to modifying, shaping, or refocusing the study as you learn about the situation. Being able to keep this balance is a skill that develops with experience.
They can occur anytime, anywhere. They are ways of finding out what is happening quickly and with little formality.
Video and interaction logs capture everything that the user does during usability tests. In addition, observers can watch through a one-way mirror or on a remote TV screen.
Observer may be anywhere along the outsider-insider spectrum. Whether and in what ways observers influence those being observed depends on the type of observation and the observer's skills. The goal is to cause as little disruption as possible.
-Outsiders: observe from the outside, without participating. Example: an observer interested in the time spent on a computer in a classroom.
-Participants: observe from the inside as a member of the group. Evaluators participate with users in order to learn what they do and how they do it.
-Ethnographers. Some consider them participants; others consider participant observation a technique that is used in ethnography along with informants from the community, interviews with community members, and the study of community artifacts. It usually takes weeks or months.
In the lab the emphasis is on the details of what individuals do, while in the field the context is important and the focus is on how people interact with each other, the technology, and their environment.
The role of the observer is to collect and then make sense of the stream of data.
Lab set up with cameras.
Think aloud technique.
Many experts have a framework to structure and focus their observations.
Example of frameworks.
Goetz and LeCompte framework:
Another framework by Colin Robson
These frameworks are useful not only for providing focus but also for organizing the observation and data-collection activity.
A participant observer must be accepted in the community. There are two approaches to ethnography.
Ethnographic studies allow multiple interpretations of reality; it is interpretivist. Data collection and analysis often occur simultaneously in ethnography, with analysis happening at many different levels throughout the study. The question being investigated is refined as more understanding about the situation is gained.
Sometimes direct observation is not possible, so users' activities are tracked indirectly.
Diaries provide a record of what users did, when they did it, and what they thought about their interactions with technology.
Interaction logging collects data such as key presses, mouse or other devices movements. This data is usually synchronized with video and audio logs.
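As a rough sketch of what an interaction logger does (the class and event names here are invented, not a real toolkit API): each record carries a timestamp, which is what allows the log to be synchronized with video and audio recordings afterwards.

```python
import time

# Minimal interaction-logging sketch (hypothetical event names, not a
# real toolkit API). Each record stores a timestamp so the log can
# later be synchronized with video and audio recordings.
class InteractionLog:
    def __init__(self):
        self.events = []

    def record(self, event_type, detail=""):
        """Append a timestamped event such as a key press or mouse move."""
        self.events.append((time.time(), event_type, detail))

    def between(self, start, end):
        """Return events whose timestamps fall within [start, end],
        e.g. the span of a video segment being analyzed."""
        return [e for e in self.events if start <= e[0] <= end]

log = InteractionLog()
log.record("keypress", "h")
log.record("mouse_move", "(120, 340)")
log.record("keypress", "i")
print(len(log.events))
```

The `between` query illustrates the synchronization idea: given the start and end times of a video clip, the evaluator can pull out exactly the low-level events that occurred during it.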
Qualitative data that is interpreted and used to tell "the story" about what was observed.
Analyzing and reporting ethnographic data.
For ethnographers it is important to understand events in the context in which they happen. There is more emphasis on details.
Looking for incidents or patterns
A common strategy for analyzing video is to look for critical incidents. Evaluators focus on these incidents and analyze them in detail. The rest of the video is context.
Theory may be used to guide the study, for example Activity Theory.
Analyzing data into categories
Content analysis is a systematic way of coding content into a meaningful set of mutually exclusive categories. The categories are determined by the evaluation questions.
The best way is to have two people analyze the data into categories. The inter-researcher reliability rating is the percentage of agreement between the two researchers, defined as the number of items that both categorized in the same way, expressed as a percentage of the total number of items examined (Ebling and John, 2000).
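The rating defined above reduces to a short calculation. A minimal sketch, with invented category labels and codings:

```python
# Inter-researcher reliability sketch: percentage of items that two
# researchers placed in the same category. The category labels and
# codings below are invented for illustration.
def agreement_rate(coder_a, coder_b):
    """Percentage of items both researchers categorized identically."""
    assert len(coder_a) == len(coder_b), "both coders must rate every item"
    same = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return 100.0 * same / len(coder_a)

coder_a = ["nav", "nav", "error", "help", "error", "nav"]
coder_b = ["nav", "error", "error", "help", "error", "nav"]
print(f"{agreement_rate(coder_a, coder_b):.1f}%")  # 5 of 6 items agree
```

Disagreements flagged this way point to category definitions that need to be tightened before the rest of the data is coded.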
Focus on the dialog. Discourse analysis is strongly interpretive, pays great attention to context, and views language not only as reflecting psychological and social aspects but also as constructing them. An underlying assumption of discourse analysis is that there is no objective scientific truth. Language is a form of social reality that is open to interpretation from different perspectives. In this sense, the underlying philosophy of discourse analysis is similar to that of ethnography. Language is viewed as a constructive tool, and discourse analysis provides a way of focusing on how people use language to construct versions of their worlds.
Video is annotated as it is observed. Evaluators also mark incidents, and they use the annotated video to calculate performance times. It is also possible to quantify categorized data.
Written report with an overview at the beginning and detailed content list.