Hello. Earlier in this course, I stated the basic premise of interaction design. This premise is as follows: first, you envision the use, that is, what a person is thinking, doing, and feeling while interacting with your prospective app, and only then do you design the app's interface in accordance with this interaction. To do that, you must understand how people interact with digital things and what's happening in their heads during the use.

I'd like to start explaining this topic with a great analogy suggested by Alan Cooper and his colleagues. The analogy involves idioms. Here is an example of one of them. Each idiom has a figurative meaning. I believe you know that when somebody says that, she is not referring to books, in most cases. What it means is that you shouldn't judge someone or something primarily on appearance. The common feature of all idioms is that if you haven't been taught them, they make no sense. Here is another example. This one means that a person will play the role of your opponent and present counterarguments. It's possible to infer an idiom's meaning from the context, and once you've learned them, you can use them in your own speech.

Mobile user interfaces have idioms too. These three are examples of them. The appearance of a UI idiom does not always suggest its meaning, that is, its behavior. To be honest, buttons in user interfaces do not resemble physical buttons much, at least nowadays, and the same goes for switches. Gestures have no appearance at all. UI idioms are a kind of declarative knowledge: in principle, a person can explain each idiom verbally to others, but most likely people learn them from interactions with digital things. There are idioms with more complex behavior. Drawers, for example, can contain other interface controls and appear from any side of the screen. The use of text fields invokes the appearance of a keyboard and a cursor, and so on. But still, users don't have to know how they work. It's enough to know a UI idiom to be able to use it.

Graphical user interfaces in general, and mobile interfaces in particular, are idiomatic. It means that knowing a limited set of UI idioms, users can successfully operate the user interfaces of different apps based on these idioms. This, and the fact that graphical user interfaces are capable of guiding novice users while providing the appropriate level of control to experienced ones, has made them the dominant UI type of modern digital products and services. Considering, of course, that the graphical user interface is applicable to a wide variety of application domains.

All right, I think you've got the idea. Now let's examine how people interact with digital things. There is an approximate model developed by Donald Norman that describes the structure of an action. An action starts with establishing an end goal: what the user wants to happen. The user's goal may not be formulated very precisely. For example, the user may want to buy a present for a friend, but she may not know what exactly it will be. The left side of this model is related to execution. Here the user manipulates different UI controls. The world on this diagram is a mobile app and everything that surrounds it. Then the user can compare what happened with what she wanted to happen. That's the evaluation side. If they don't match, the cycle starts over again.

Let's take a closer look at both sides. On the execution side, the user starts by forming an intention to act, that is, choosing a way to achieve the goal. Then she expands the intention into a sequence of actions and executes this sequence. Here is a simple example outside mobile interaction design. Let's assume that I'm in a classroom, and there is too much light to show slides using a projector. So my goal is to get less light. There are students in the classroom, so I may turn off the light myself or ask one of them to do that; these are two variants of an intention. Assuming that I choose the first way, the planned action sequence is to get up, go to the light switch, and turn it off. On the evaluation side, the user continues by perceiving the feedback from the app, interpreting it, and then comparing the interpretation with what she expected to happen. The whole cycle looks like this.
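To make the structure of this cycle a bit more tangible, here is a minimal sketch in Swift that replays the classroom example. The types and names, such as ActionCycle and World, are my own illustration, not part of Norman's formulation:

```swift
// A minimal sketch of Norman's action cycle; all names are illustrative.

struct World {
    var lightIsOn = true   // the state the user acts on and perceives
}

struct ActionCycle {
    let goal: (World) -> Bool                    // what the user wants to be true
    let intention: String                        // the chosen way to achieve the goal
    let actionSequence: [(inout World) -> Void]  // the planned actions

    func run(on world: inout World) {
        // Execution side: the intention is expanded into a sequence
        // of actions, and the sequence is executed.
        print("Intention: \(intention)")
        for action in actionSequence { action(&world) }

        // Evaluation side: perceive the feedback, interpret it, and
        // compare the interpretation with what was expected to happen.
        if goal(world) {
            print("Goal achieved.")
        } else {
            print("Mismatch: the cycle starts over.")
        }
    }
}

// The classroom example: the goal is to get less light.
var classroom = World()
let turnOffTheLight = ActionCycle(
    goal: { !$0.lightIsOn },
    intention: "turn the light off myself",
    actionSequence: [
        { _ in print("get up") },
        { _ in print("walk to the light switch") },
        { world in world.lightIsOn = false }   // flip the switch
    ]
)
turnOffTheLight.run(on: &classroom)
```

Note how the evaluation side only compares the perceived state with the goal; if they don't match, a real user would start the cycle over with a new intention.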
There are certain differences in how novice users and experienced ones interact with any mobile app. Let's apply this model to examine how users perform a task for the first time. As an example, we'll take the standard alarm clock app from iOS. Imagine that a user, let's call him Michael, doesn't need one of his alarms anymore. Michael's goal is clear: to make his iPhone not ring at 5:45 in the morning, since Michael doesn't need it anymore. His intention is not just to turn the alarm off, but to delete it. Michael bought the iPhone not so long ago, so he has already set alarms, turned them off and on, and, of course, changed the time, but he has never deleted one.

When a user is faced with a new task, he or she has to figure out the path through a mobile interface to achieve the task goal. The process of figuring out the path is a problem-solving process, where the user applies all relevant knowledge. Because Michael hasn't done this before, he does not know the sequence of actions that might bring him to the goal. So he has to guess the next action. He knows that a tap on any alarm, let's say here, leads to nothing; Michael learned that when he was trying to reset an alarm for the first time. Looking at this screen, he sees only one available option, the Edit button at the top left corner, because any other option seems irrelevant to the task. Michael decides to tap it. I'd like to highlight the fact that in this case the action sequence consists of only one action: Michael can't plan in advance, since he doesn't see the available subsequent options.

By the way, Michael is already familiar with many idioms that are used in the design of this screen: for example, the aforementioned tap on an alarm and these switches on the right, which Michael has used when turning alarms on. Michael is familiar with buttons in general, and with that button at the top right corner in the form of a plus sign, in particular. He acquired all this knowledge from previous interactions with this and other applications.

All right, Michael taps on the Edit button. The interface changes in response to this action, that way. Michael sees the changes. He didn't expect that his action would lead him straight to the goal, so, noticing those red buttons on the left, he understands that he made the right choice. Despite the fact that Michael saw these controls earlier, when resetting alarms, he didn't remember them, because he was focused on another goal and didn't actually use them.

Going forward, according to a theoretical model of learning by exploration proposed by Peter Polson and Clayton Lewis, learning occurs if a user considers the effect of the taken actions as successful. Users learn successful actions, as well as new idioms, to avoid having to perform problem solving again if a similar task is encountered in the future. In that case, the user goes straight to the form-an-action-sequence step, because there is no need to form an intention anew.
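As a rough sketch of this idea, assuming a simple mapping from tasks to learned action sequences (a simplification of Polson and Lewis's theory, with all names here being my own), it could look like this in Swift:

```swift
// A rough sketch of learning by exploration; the storage and the
// "guess" step are simplifications, and all names are illustrative.

struct TaskKnowledge {
    // Learned knowledge: task -> action sequence that worked before.
    private var knownSequences: [String: [String]] = [:]

    mutating func perform(task: String,
                          guess: () -> [String],
                          looksSuccessful: ([String]) -> Bool) -> [String] {
        // A known task skips problem solving: the stored sequence is reused.
        if let sequence = knownSequences[task] {
            return sequence
        }
        // A new task triggers problem solving: guess the next actions
        // using all relevant knowledge (idioms, similar tasks, and so on).
        let sequence = guess()
        // Learning occurs only if the user judges the effect as successful.
        if looksSuccessful(sequence) {
            knownSequences[task] = sequence
        }
        return sequence
    }
}

var michael = TaskKnowledge()

// First time: Michael has to guess his way through the interface.
let firstTry = michael.perform(
    task: "delete an alarm",
    guess: { ["tap Edit", "tap the red circle button", "tap Delete", "tap Done"] },
    looksSuccessful: { _ in true }   // the alarm disappeared
)

// Next time, the stored sequence is reused directly.
let secondTry = michael.perform(
    task: "delete an alarm",
    guess: { [] },                   // never consulted for a known task
    looksSuccessful: { _ in true }
)
print(firstTry == secondTry)         // true: no problem solving needed
```

The point of the sketch is the branch at the top: a known task reuses the stored sequence directly, while a new one triggers the slow, guess-driven problem solving we are following Michael through.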
Michael is in the middle of the current task now. Of course, he chooses to tap the red circle button right next to the 5:45 alarm; it's his next guess. This action reveals a hidden option. Michael didn't expect that: his expectations about the tap on the red circle button weren't accurate enough. But noticing this Delete option, he understands that he made the right choice. Note that in the real interface, when a user taps on the red circle button, an animation begins: the list item slides to the left, giving the user a clue that a deletion can be done that way too.

I think it's clear what Michael will do next. There is only one relevant option on his path, so he taps on the Delete option, which brings him to the following state of the app's user interface. The 5:45 alarm has disappeared, and that makes Michael think that he has achieved his goal. Michael, however, notices that the UI is still in the edit state and presses the Done button at the top left corner.

I think you noticed that the interface was guiding Michael all the way. Of course, he might have made a choice that wouldn't make progress towards the goal, but considering Michael's previous knowledge, he performed the task without any deviations from the shortest path. We'll examine such deviations and other types of interaction problems in the next lectures. So the task is done. It's important to mention that Michael learned the combined sequence of actions that covers the whole path through the UI. Moving through the interface, Michael was constructing this action sequence.

Each user task can be represented as a hierarchy, depicted on this slide: at the top of this hierarchy we have an end goal, in the middle, steps and subtasks for complex activities, and at the bottom, atomic actions, for example to look somewhere, to decide something, to perform a motor action, and so on. It's a static representation of a user task, as opposed to Norman's model, which describes a dynamic problem-solving process: the construction of this hierarchy.
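Since such a hierarchy is essentially a tree, here is a small sketch in Swift of how Michael's task could be written down. The node labels and the split into steps are my own reading of the example, not taken from the slide:

```swift
// A sketch of a user task hierarchy as a tree; all names are illustrative.

enum AtomicAction {
    case perceptual(String)   // e.g. look at the alarm list
    case cognitive(String)    // e.g. decide which option is relevant
    case motor(String)        // e.g. tap the Edit button
}

indirect enum TaskNode {
    case goal(String, steps: [TaskNode])      // the end goal at the top
    case step(String, actions: [TaskNode])    // steps and subtasks in the middle
    case action(AtomicAction)                 // atomic actions at the bottom
}

// Michael's task, reconstructed after the fact as a static hierarchy.
let deleteAlarm = TaskNode.goal("delete the 5:45 alarm", steps: [
    .step("enter the edit mode", actions: [
        .action(.perceptual("look over the alarm list screen")),
        .action(.cognitive("decide that Edit is the only relevant option")),
        .action(.motor("tap the Edit button"))
    ]),
    .step("remove the alarm", actions: [
        .action(.motor("tap the red circle button")),
        .action(.motor("tap the Delete option"))
    ]),
    .step("leave the edit mode", actions: [
        .action(.motor("tap the Done button"))
    ])
])
```

The goal sits at the root, steps and subtasks in the middle, and atomic perceptual, cognitive, and motor actions at the leaves, mirroring the slide.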
A user's knowledge of a task is a kind of procedural knowledge. When a user performs a task many times, an action sequence, or an action pattern, if you will, becomes stronger, requiring less and less conscious attention from the user. The problem-solving process that we have just described is effortful and slow. The mechanism of formation of action patterns helps to make user actions faster, more precise, and effortless. Repetition plays a key role here. Users repeat actions at the lower levels of a task hierarchy more frequently, so those become patterns faster. But in fact, any level of user activity and any type of action, motor, perceptual, or cognitive, can constitute a pattern. If you are a driver, as I am, you understand what I mean: before changing lanes on the road, you check the side mirror, and with time you do this so quickly and easily that you may not even notice you did it. Interactions with digital things are no exception.

Imagine that Michael has deleted many alarms since that first time. At some point, he found out that he can swipe over an alarm without the need to activate the edit mode by tapping the button at the top left corner. He was able to discover that because of his own curiosity and that animation. Once he performed the task this way, Michael started using it all the time. Now, when Michael needs to delete an alarm, he forms an intention to do it using the swipe gesture, then he forms an action sequence consisting of two actions, the swipe and a tap on the Delete button, and then executes it. Of course, Michael controls each action, but now the whole task takes much less time and, what's more important, less conscious attention. As you see, Norman's model can be used to analyze the interactions of both novices and experienced users.

Thank you for watching. See you in the next lecture.