Activity-Aware Computing for Healthcare

Learning from Monica Tentori. [Note: the Japanese translation is machine-translated output and will be revised later.]

1. Activity-Aware Computing for Healthcare 

1.1 Introduction

1.2 Understanding human activities: A hospital case study

1.2.1 Conducting an observational study

1.2.2 Identifying the activities

(1) Monitored activities. 

(2) Distributed activities. 

(3) Dynamic activities. 

1.3 Exploiting activity-aware healthcare scenarios

1.3.1 Scenario 1: Creating coherent action histories

1.3.2 Scenario 2: Monitoring patient ADLs

1.4 Designing activity-aware applications

1.4.1 Activity modulation

1.4.2 Activity monitor

1.4.3 Activity history

1.4.4 Activity recognition

1.5 A mobile ADL monitor and display

1.5.1 Envisioned implementation

1.5.2 Evaluation

Related Work in Activity-Aware Computing

---

English-to-Japanese Translation

1. Activity-Aware Computing for Healthcare 

  (Monica Tentori, CICESE, mtentori@cicese.mx)

  (Jesus Favela, CICESE, favela@cicese.mx)

 (IEEE Pervasive Computing, vol. 7, no. 2, 2008, pp. 51–57)

A mobile activity monitor uses reactive, sequential, mobile, and persistent computational activities to exemplify activity-aware computing’s applicability in a hospital. 

Activity-aware applications should use e-activities as their basic computational units.

1.1 Introduction

Pervasive hospital environments are dynamic settings saturated with heterogeneous devices and sensors that offer specialized services in support of the highly mobile and technology-savvy staff. Context-aware systems can help such environments tailor themselves to better serve their users. However, such systems must be able to adequately manage the dynamic nature of context; otherwise, they might present services and information disembodied from the users’ current goal. For example, displaying a patient’s medical record when a physician is in front of the patient’s bed seems appropriate, unless the physician is prescribing medicines. In that case, a pharmacological database would be more useful. So, stipulating what contextual information is relevant when adapting pervasive computing environments can be challenging.

Humans construct their plans as they engage in specific activities, creating and altering their next move on the basis of what has happened.[1] So, looking at how humans achieve their goals by acting through the execution of activities in pervasive environments could help context-aware applications identify information relevant to the task at hand. Activity-based computing is an interaction and design paradigm that explores how a computing system can directly support an activity (see the sidebar).[2] Its applications let users explicitly organize their resources in terms of activities, so the applications can then manipulate the resources and select the one most relevant to the task at hand.

Here, we introduce activity-aware computing, which uses activity-based computing to enhance pervasive environments in two ways: to help users associate resources and services with activities, resulting in seamless interaction with those resources and services, and to enable pervasive environments to automatically infer activities and thus opportunistically offer services that support the user’s current goal. Thus, activity-aware applications persuade users to commit themselves to the technology, moving from a paradigm of activity-based “interaction” toward one of activity-aware “engagement” with a computationally augmented environment. We present a set of tools for developing activity-aware applications, including a computational representation of human activities that we defined using data from a hospital case study we conducted. We also used the data to create an activity recognition approach and a set of design principles for developing activity-aware applications. The mobile activity monitor we designed to create a wearable connection between patients and nurses exemplifies our design principles.

1.2 Understanding human activities: A hospital case study

We conducted our case study in a public hospital’s internal-medicine unit, observing the practices of the hospital staff, who attended to patients with chronic or terminal diseases. Such patients are often immobile and incapable of performing the activities of daily living (ADL) by themselves. 

1.2.1 Conducting an observational study

For nine months, we used mobile structured observation to shadow five nurses, five medical interns, and five physicians for two complete working shifts. Mobile structured observation requires researchers to shadow individuals, annotating and time-stamping their actions as executed. We later transcribed and analyzed these detailed handwritten records using grounded theory—a systematic research methodology for generating theory from data.[3]

The total time of detailed observation was approximately 196 hours and 46 minutes. We measured the time spent performing different activities, the average number of activity segments, and the mean time of activity segments observed for each individual. Each segment accounts for uninterrupted engagement in a particular activity.

1.2.2 Identifying the activities

Table 1. The time hospital workers spend performing various activities.

Table 1 shows the primary activities we identified and the percentage of time hospital workers devoted to each one.

Following the way hospital workers conceptualize their work, each of these activities represents a particular type of work carried out through a set of related actions mediated by the activity’s execution context. For example, when assessing a patient’s condition, the physician might decide to insert a catheter on the basis of the patient’s progress. Thus, the physician’s activity switches from clinical case assessment to patient care. Once the physician inserts the catheter, he or she must report this in the patient’s medical record—so the activity then switches from patient care to information management. In this case, the physician switched activities by executing different actions—inserting a catheter and reporting a diagnosis. Furthermore, the different actions were mediated by the artifacts or tools used (for example, medical equipment or medical information) to accomplish a common goal.

(1) Monitored activities. 

Hospital staff are highly mobile, spending more than 50 percent of their time on the move, making it difficult for them to know the status of their patients. So, we quickly realized that we had to analyze not only the activities being executed but also those that the staff was supposed to be monitoring.

For example, when performing patient-care activities, hospital workers can provide integral or specialized care. For integral care, the staff monitors patients as they conduct ADLs, such as taking medicine, getting out of bed, walking, and evacuating. Specialized care involves monitoring the behavioral patterns that patients exhibit during a set of activities that put them at risk. Such risk activities (RA) include agitation, bleeding, and respiratory insufficiency. So, identifying the hospital workers’ activity would require understanding the type of activity (ADL or RA) he or she was monitoring. Furthermore, differentiating between an ADL and RA would require recognizing the user’s level of consciousness during the activity execution.

(2) Distributed activities. 

In analyzing the activities, we also realized that carrying out an activity often requires interacting with others and using a heterogeneous collection of artifacts. For example, when medical interns conduct a ward round, they interact with nurses and physicians. They consult nurse charts and medical records or use medical equipment. These elements aren’t generally concentrated in a single place—rather, they’re distributed in space and time. Consequently, the interns must set up their environment before executing an activity. Before the ward round starts, they navigate the hospital premises to gather information related to their patients and place it in each patient’s room.

Our analysis revealed that the hospital workers interacted with others 69 percent of the time. Of those interactions, 30 percent were verbal interactions, 26 percent involved artifacts (such as a phone), and 13 percent were merely observational. The hospital workers averaged 2.5 minutes of sustained interaction.

(3) Dynamic activities. 

We also noted that hospital work is highly fragmented. The hospital workers spent no more than five minutes conducting an activity, typically spending only 1.5 minutes on an activity before switching tasks. This fragmentation normally occurred owing to an interruption or a change in context (such as the patient collapsing or a colleague arriving).

Figure 1. A medical intern’s work shift. A graphical representation of the activities an intern performs during a typical work shift.

Figure 1 shows the activities a medical intern performed during a typical work shift. The ward round occurred from approximately 10 a.m. to 1:30 p.m., and, as the figure shows, quite a bit of activity occurred just after 10 a.m. According to the study observation, this corresponded to the actions executed during a catheter insertion, which lasted less than 10 minutes:

At 10:20 a.m., for 1 minute, Juan assessed the condition of a patient. During this activity, the attending physician asked Juan to insert a catheter. Based on such interruption, he immediately switched his activity to provide specialized care (5 minutes). He interrupted this activity to track medical equipment (30 seconds). And finally, he reported in the medical record the procedure conducted (2 minutes).

During the round, the activities performed presented a similar phenomenon, while the activities conducted before and after the round lasted approximately 20 minutes with a lower level of fragmentation.

We also measured the transitions for each activity—that is, the probability of switching from one activity to another. We found that the activities present a recurrence phenomenon proportional to their location and, to a lesser extent, their duration. (The recurrence phenomenon occurs when the probability of continuing to execute one activity is higher than that of changing to execute another one.) For example, the classes-and-certification activity had the highest recurrence level. This activity lasted approximately one hour (see figure 1) and was executed in base locations (such as meeting rooms or offices). The tracking activity, which involved navigating the hospital premises for an average of 51 seconds, had the lowest recurrence level.
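As a concrete illustration, the short Python sketch below shows one way such transition probabilities and the recurrence check could be computed from an ordered sequence of observed activity segments. The segment labels and data here are invented for illustration; they are not the study's actual records or the authors' analysis code.

```python
from collections import defaultdict

# Hypothetical sequence of observed activity segments (one label per segment),
# in the order they were shadowed during a shift.
segments = ["assessment", "patient_care", "tracking", "patient_care",
            "information_management", "classes", "classes", "classes"]

# Count transitions between consecutive segments, including self-transitions.
counts = defaultdict(lambda: defaultdict(int))
for current, following in zip(segments, segments[1:]):
    counts[current][following] += 1

# Normalize the counts into transition probabilities P(next | current).
transitions = {
    current: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
    for current, nexts in counts.items()
}

# An activity "recurs" when its self-transition probability is at least as
# high as the probability of switching to any other activity.
for activity, probs in transitions.items():
    self_p = probs.get(activity, 0.0)
    recurrent = all(self_p >= p for nxt, p in probs.items() if nxt != activity)
    print(f"{activity}: self-transition {self_p:.2f}, recurrent={recurrent}")
```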

1.3 Exploiting activity-aware healthcare scenarios

The data from our study helped us identify problems hospital workers face—in particular, those related to being on the move and to the distributed and dynamic nature of the activities they conduct. These needs include maintaining awareness of their patients’ status, being easily accessible when an emergency occurs, and prioritizing patient care on the basis of the patient’s health condition and current activity. We decided to use scenarios as a way to envision how activity-aware computing could augment hospital work in support of these needs.

1.3.1 Scenario 1: Creating coherent action histories

To conduct a medulla extraction, Dr. Diaz, a specialist, prepares the patient in room 222 in collaboration with Lety, the patient’s nurse. While Lety is preparing the equipment for the procedure, a screen in the patient’s room displays similar procedures through a timeline history. Dr. Diaz touches the display to select the activity that represents a medulla extraction previously executed on this patient.

The information displayed reveals that this patient has had an allergic reaction to the typical anesthesia used in the hospital, so Dr. Diaz asks Lety to use another anesthesia. Lety hands Dr. Diaz a needle with the local anesthesia, and after he injects the patient, the public display presents a medical guide for this procedure. Once the time for the anesthesia to take effect elapses, the display highlights the next step in the procedure, indicating that Dr. Diaz should start the medulla extraction. The display then marks the previously executed actions and highlights the next step.

Once the procedure is completed, the display presents a history of the activities executed. Lety selects the actions related to the medicines administered to the patient to integrate the information into the nurse chart. Dr. Diaz selects the medulla extraction activity and transfers it to his PDA to later discuss the procedure with his colleagues.

1.3.2 Scenario 2: Monitoring patient ADLs

Carmen, the nurse in charge of Pedro, explains to Rita, the nurse who just arrived for the night shift, that Dr. Perez, the attending physician, has changed Pedro’s medication to include cyclosporine. Pedro is a 56-year-old man who has chronic renal failure and just had a kidney transplant. So, to monitor Pedro’s reaction to the new kidney, Rita needs to supervise the frequency and quantity of Pedro’s urine.

Rita has an activity-aware mobile assistant in her smart phone with which she can specify that a light in a bracelet she wears should represent Pedro. She uses the phone to program the bracelet’s light to let her know when Pedro urinates. The bracelet acts as an indicator for Rita to perform an action, consult her smart phone for more information, or consult with a physician.

1.4 Designing activity-aware applications

We identified the key computational units of these two scenarios as computational activities or e-activities. An e-activity is the computational representation of a human activity, and it stores attributes depicting the activity’s execution context such as who is performing the action, other participants, the location, and the artifacts or applications used. An e-activity can also store a set of rules for informing the system how to adapt the pervasive environment or infer other attributes, such as a person’s availability. We observe in these scenarios that e-activities are

 ・reactive—they can act as a trigger,

 ・sequential—they can form histories,

 ・mobile—they’re executed across places, and

 ・persistent—they can be stored over long time periods.

Activity-aware applications should use e-activities as their basic computational units to support these four characteristics and the following services.
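To make the notion of an e-activity more concrete, here is a minimal Python sketch of how such a computational unit might be represented. The field names, the rule callbacks, and the example values are illustrative assumptions, not the authors' actual implementation; the comments indicate how the four characteristics above map onto a simple data structure.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, Optional

@dataclass
class EActivity:
    """Illustrative computational representation of a human activity."""
    name: str                                 # e.g., "patient care"
    performer: str                            # who is executing the activity
    participants: list[str] = field(default_factory=list)
    location: Optional[str] = None
    artifacts: list[str] = field(default_factory=list)  # tools/applications used
    started_at: datetime = field(default_factory=datetime.now)
    ended_at: Optional[datetime] = None
    # Rules let the environment adapt or infer attributes (e.g., availability).
    rules: list[Callable[["EActivity"], None]] = field(default_factory=list)

    def trigger(self) -> None:
        """Reactive: evaluate the attached rules when the activity changes."""
        for rule in self.rules:
            rule(self)

# Sequential: a history is simply an ordered list of e-activities.
# Mobile: location is just another attribute, so the same e-activity can be
#         updated as it is executed across places.
# Persistent: a plain record like this serializes easily for long-term storage.
history: list[EActivity] = []
history.append(EActivity(name="specialized care", performer="Juan",
                         participants=["attending physician"],
                         location="room 222", artifacts=["catheter"]))
```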

1.4.1 Activity modulation

The system must be able to adapt its level of awareness on the basis of the user’s required level of granularity for each activity. So, it must be able to modulate the e-activities it’s handling and displaying. In other words, the system must be able to raise its awareness level to distinguish between, for example, an ADL and an RA.
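A minimal sketch of what such modulation could look like follows, assuming a simple two-level granularity setting; the level names and event fields are hypothetical and only illustrate filtering the awareness level between ADL-only and ADL-plus-RA views.

```python
# Hypothetical two-level awareness setting: at the coarse level the system
# only surfaces activities of daily living (ADL); at the fine level it also
# surfaces risk activities (RA).
GRANULARITY = {"coarse": {"ADL"}, "fine": {"ADL", "RA"}}

def modulate(events, level="coarse"):
    """Keep only the e-activity events matching the requested awareness level."""
    wanted = GRANULARITY[level]
    return [e for e in events if e["kind"] in wanted]

events = [{"kind": "ADL", "name": "walking"},
          {"kind": "RA", "name": "agitation"}]
print(modulate(events, "coarse"))  # ADLs only
print(modulate(events, "fine"))    # ADLs and RAs
```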

1.4.2 Activity monitor

Because hospital workers are highly mobile, it’s difficult for them to know their patients’ status or to prioritize events. To help trigger a response, we need to provide mechanisms that inform them of the activity being executed. By knowing what activity a patient is executing, hospital workers can promptly identify patient needs and even decide which patient to attend to first. For instance, as shown in scenario 2, unless Rita knows Pedro has evacuated, she doesn’t know when to change his clothes. However, by receiving personalized notifications in her bracelet, she can care for Pedro whenever needed. Having such awareness lets hospital workers promptly respond to patients’ and colleagues’ needs.

1.4.3 Activity history

Before hospital workers execute a patient-related activity, they frequently consult similar cases or how such an activity was previously performed. For example, if a hospital worker could determine whether a medicine had already been administered to a patient, hospitals could avoid numerous medication errors. They could also avoid problems by showing a history of previously executed activities. For example, in scenario 1, by consulting the patient’s timeline history, Dr. Diaz learns that this patient had experienced an allergic reaction to the anesthesia.

So, activity-aware applications must store computational activities for retrieval when relevant. Furthermore, activity-aware applications must be able to identify the sequence of actions being executed to create histories and to hypothesize what a person will do next or how such activity context is evolving. For example, in scenario 1, the medical guide notifies Dr. Diaz when it’s time to start the medulla extraction.
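The sketch below illustrates, under assumed data structures, how an activity history could be queried for similar past activities (as Dr. Diaz does in scenario 1) and how a simple frequency count over earlier executions could hypothesize the next action. The store layout, activity names, and action sequences are invented for illustration; this is not the system's actual history layer.

```python
from collections import Counter

# Hypothetical flat store of completed e-activities, keyed by patient.
history_store = {
    "patient_222": [
        {"name": "medulla extraction", "actions": ["prepare equipment",
                                                   "inject anesthesia",
                                                   "allergic reaction noted",
                                                   "extract medulla"]},
        {"name": "catheter insertion", "actions": ["prepare equipment",
                                                   "insert catheter",
                                                   "report in medical record"]},
    ]
}

def similar_activities(patient_id, activity_name):
    """Retrieve previously executed activities of the same type for a patient."""
    return [a for a in history_store.get(patient_id, [])
            if a["name"] == activity_name]

def hypothesize_next_action(activity_name, actions_so_far, store=history_store):
    """Guess the next action by counting what most often followed the same
    prefix in earlier executions of this activity (a simple frequency model)."""
    candidates = Counter()
    for activities in store.values():
        for act in activities:
            if act["name"] != activity_name:
                continue
            seq = act["actions"]
            if seq[:len(actions_so_far)] == actions_so_far and len(seq) > len(actions_so_far):
                candidates[seq[len(actions_so_far)]] += 1
    return candidates.most_common(1)[0][0] if candidates else None

print(similar_activities("patient_222", "medulla extraction"))
print(hypothesize_next_action("medulla extraction",
                              ["prepare equipment", "inject anesthesia"]))
```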

1.4.4 Activity recognition

Owing to the characteristics of human activities, a user of an activity-based application will sometimes have difficulty identifying when an activity emerges, how it ends, how it relates to other activities, and what its level of fragmentation is. So, activity-aware applications should help users identify, create, and manage activities during their everyday routine. To cope with this, we need to develop approaches for identifying what information to sense and the appropriate sensing technologies.

1.5 A mobile ADL monitor and display

We designed a mobile activity monitor aimed at creating a wearable ambient connection between patients and nurses—the system envisioned in scenario 2. (We discussed both scenarios with hospital staff and chose to focus on this one because the staff seemed more interested in it.) The system uses e-activities as its core units, letting it react when an event occurs, display activities in different devices (such as a smart phone or bracelet), and store activities to create coherent histories to determine when an ADL is evolving into an RA.

The system uses an activity-aware assistant as its client and an activity-aware server as the basis of its implementation. The activity-aware server comprises three layers that are responsible for creating e-activities and histories on the basis of the information sensed. The lower layer recognizes the activity by reading contextual information from sensors. To recognize activities, we’ve proposed an approach that uses a parallel layered hidden Markov model that’s trained to estimate hospital workers’ activities from contextual information, such as the people involved in the activity and the artifacts being used.[4] We trained the model and evaluated it using the data captured from our case study. The HMM can correctly estimate the user activity 92 percent of the time.
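The recognizer described above is a parallel layered HMM trained on the case-study data; the simplified, single-layer sketch below only illustrates the underlying idea of decoding an activity sequence from discretized contextual observations with Viterbi decoding. The states, observation symbols, and probabilities are hand-picked assumptions, not the authors' trained model.

```python
import numpy as np

# Simplified, single-layer stand-in for the parallel layered HMM described in
# the text. States are hospital activities; observations are discretized
# contextual readings (here: the kind of artifact currently detected).
states = ["patient care", "information management", "tracking"]
observations = ["medical equipment", "medical record", "none"]

start_p = np.array([0.5, 0.3, 0.2])
# trans_p[i, j] = P(next state j | current state i); rows sum to 1.
trans_p = np.array([[0.7, 0.2, 0.1],
                    [0.3, 0.6, 0.1],
                    [0.3, 0.3, 0.4]])
# emit_p[i, k] = P(observation k | state i).
emit_p = np.array([[0.7, 0.1, 0.2],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.1, 0.7]])

def viterbi(obs_idx):
    """Most likely activity sequence for a sequence of observed context symbols."""
    T, N = len(obs_idx), len(states)
    delta = np.zeros((T, N))
    backpointer = np.zeros((T, N), dtype=int)
    delta[0] = start_p * emit_p[:, obs_idx[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] * trans_p[:, j]
            backpointer[t, j] = np.argmax(scores)
            delta[t, j] = scores[backpointer[t, j]] * emit_p[j, obs_idx[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.insert(0, int(backpointer[t, path[0]]))
    return [states[i] for i in path]

obs = [observations.index(o) for o in
       ["medical equipment", "medical equipment", "medical record"]]
print(viterbi(obs))  # ['patient care', 'patient care', 'information management']
```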

The middle layer defines the activity’s computational equivalence by either extracting a similar activity from its activity knowledge base or creating a new activity from scratch on the basis of information the lower layer provides.

The upper layer uses the e-activity definition to create a history of activities using the activities stored in the history knowledge base. This layer also analyzes the history to infer the next step that should be executed or how the e-activity’s attributes change as the user’s course of action evolves.

Figure 2. The mobile activity monitor. (a) A nurse uses the activity-aware bracelet; (b) the mobile activity-aware assistant shows information related to an activity being executed by a patient; (c) a nurse uses her cell phone to assign colors; and (d) a nurse associates contextual information with an activity.

The activity-aware assistant uses a device and a smart phone to display events related to patient or nurse activities. The device is a two-layered vinyl bracelet containing five buttons with embedded lights (see figure 2a). Each button represents a patient under the nurse’s care. Adapted from the medical model used in the emergency unit, the buttons’ colors are analogous to a traffic light. The lights turn on when a patient is executing an activity, when particular actions occur, or after a series of events occur. Nurses can press a button to consult information associated with the activity a particular patient is executing. This information is displayed on the nurse’s smart phone, which can show a more complex representation of the activity (see figure 2b). Nurses can also use their phone to assign priorities by selecting colors (figure 2c) or to set contextual information to act as a trigger for the activities being monitored (figure 2d).

When a nurse presses a button on the bracelet, a message is sent back to the activity-aware server, specifying a patient and bracelet ID. On the basis of such IDs, the activity-aware server determines which activity should be displayed on which smart phone. Communication between the phone and the server occurs wirelessly. We developed our own components to achieve communication between the bracelet and the server at frequencies under 27 MHz. This avoids interference between the bracelet and equipment placed in the hospital or worn by patients.
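As an illustration of this routing step, the sketch below shows how a button-press message carrying a patient and bracelet ID might be resolved on the server side to an activity and a target phone. The message fields, identifiers, and in-memory mappings are assumptions made for illustration, not the system's actual protocol.

```python
# Hypothetical server-side routing: a bracelet button press arrives as a small
# message with the patient and bracelet IDs, and the server decides which
# e-activity to push to which nurse's smart phone.
bracelet_to_phone = {"bracelet-07": "phone-rita"}            # which nurse's phone
current_activity = {"patient-pedro": {"name": "urinating",
                                      "volume_ml": 10,
                                      "priority": "yellow"}}

def handle_button_press(message):
    """message = {'bracelet_id': ..., 'patient_id': ...}"""
    phone = bracelet_to_phone[message["bracelet_id"]]
    activity = current_activity.get(message["patient_id"])
    # In a real deployment this would be sent wirelessly to the phone;
    # here we just return the routing decision.
    return {"display_on": phone, "activity": activity}

print(handle_button_press({"bracelet_id": "bracelet-07",
                           "patient_id": "patient-pedro"}))
```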

1.5.1 Envisioned implementation

Going back to our scenario: Rita uses the activity-aware mobile assistant in her smart phone to specify that the light representing Pedro in her bracelet should turn yellow when Pedro evacuates. Furthermore, it should turn red if he evacuates more than five times in six hours (see figure 2d).
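The rule Rita programs amounts to a threshold over a six-hour sliding window. A small sketch of how such a rule could be evaluated follows, with hypothetical timestamps standing in for the sensed urination events; the function and constants are illustrative assumptions rather than the assistant's actual rule engine.

```python
from datetime import datetime, timedelta

# Hypothetical evaluation of the rule Rita programs: yellow on every urination
# event, red if more than five events fall inside a six-hour window.
WINDOW = timedelta(hours=6)
THRESHOLD = 5

def light_color(event_times, now):
    """event_times: timestamps of detected urination events (e.g., from the
    weight sensor on the urine bag). Returns the bracelet light color."""
    recent = [t for t in event_times if now - t <= WINDOW]
    if len(recent) > THRESHOLD:
        return "red"
    return "yellow" if recent else "off"

now = datetime(2008, 3, 1, 23, 0)
events = [now - timedelta(hours=h) for h in (0.5, 1, 2, 3, 4, 5)]  # six events
print(light_color(events, now))  # 'red': more than five events in six hours
```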

Later, while Rita is preparing medicines, Pedro’s light turns yellow. Rita presses the button, and her smart phone indicates what Pedro is doing (see figure 2b). Rita learns that Pedro has urinated approximately 10 milliliters (this information is calculated through the weight sensor attached to Pedro’s urine bag). Rita goes to the warehouse and gathers the medical equipment she needs to clean Pedro. Then, she goes to Pedro’s room to change his clothes. Finally, she updates Pedro’s liquid balance.

Throughout the night, Pedro’s light in Rita’s bracelet constantly turns yellow. A couple of hours later, while Rita is talking to Dr. Perez, her bracelet turns red. Rita consults her smart phone and realizes that Pedro has urinated seven times in six hours. She discusses this with Dr. Perez, who then decides to change Pedro’s medication to avoid damaging the new kidney.

1.5.2 Evaluation

We interviewed seven nurses, each for 30 to 60 minutes, to evaluate the bracelet’s design, the system’s core characteristics, the nurses’ intention to use the system, and their perception of system utility. All seven nurses indicated that the bracelet would help them save time, avoid errors, and increase the quality of attention given to patients. One nurse commented,

This bracelet will improve the quality of attention. The work will be the same, but I will do [it] faster. …For instance, if a patient has evacuated … I would promptly know the patient needs and I [could] take with me the things that I would need. 

In addition, nurses noted that the system would help them prioritize events and patients:

Something that we currently cannot do is identify which patient has to be attended first; a system like this one [would] help me identify the urgency with which each of my patients needs to be attended. 

Although the nurses agreed that the system would have more advantages than disadvantages, they were still worried about some negative issues that this system might raise:

I do not like these things because I am a nurse who likes her job, and we need to give the highest quality of care, warmth, and affection to the patient. …With this type of system, the emotional bonds and the relationship between nurse and patient are lost [even though] the warmth might be replaced by quality.

However, overall, the staff viewed the application as useful, efficient, and generally appealing. Nurses repeatedly expressed that this system would solve many of the problems they face and improve their work, saying that it directly assists with “patient care” rather than merely supporting “secondary tasks,” as they say current systems do.


We plan to conduct another study to test our activity recognition approach and to show the applicability of activity-aware computing in a new setting—in particular, in nursing homes. Workers at nursing homes that specialize in the care of elders with cognitive disabilities face working conditions similar to those in hospitals. Such workers also use common strategies to monitor and detect changes in the behavioral patterns of ADLs. This monitoring is done manually, making it time consuming and error prone. Also, there aren’t mechanisms for promptly identifying elders’ needs. We plan to use activity-aware computing and our ADL monitor and display to support the assistance and assessment of people with age-related cognitive decline. Activity-aware computing can automatically and safely monitor elders conducting ADLs, infer those activities, and later use them as triggers to inform caregivers of significant events or deviations in the elders’ activity patterns.

Acknowledgments

We thank the personnel at IMSS (Instituto Mexicano del Seguro Social) General Hospital in Ensenada, Baja California, Mexico. This work was funded under contract CO3-42447 and through scholarship 179381 provided to Monica Tentori.

Authors

Monica Tentori is a PhD candidate in computer science at Cicese and a lecturer in computer science at the University of Baja California. Her research interests include ubiquitous computing, HCI, medical informatics, mobile computing, and computer-supported cooperative work. She received her MSc in computer science from Cicese. She’s a student member of the ACM SIGCHI. Contact her at the Computer Science Dept., Centro de Investigación Científica y de Educación Superior de Ensenada (Cicese), Km. 107 Carretera Tijuana-Ensenada, Ensenada, B.C., 22860, Mexico; mtentori@cicese.mx.

Jesus Favela is a professor of computer science at Cicese, where he leads the Mobile and Ubiquitous Healthcare Laboratory and heads the Department of Computer Science. His research interests include ubiquitous computing, medical informatics, and computer-supported cooperative work. He received his PhD from the Massachusetts Institute of Technology. He’s a member of the ACM and the American Medical Informatics Association. Contact him at the Computer Science Dept., Centro de Investigación Científica y de Educación Superior de Ensenada (Cicese), Km. 107 Carretera Tijuana-Ensenada, Ensenada, B.C., 22860, Mexico; favela@cicese.mx.

References

1. L. Suchman, Plans and Situated Actions: The Problem of Human-Machine Communication, Cambridge Univ. Press, 1987.

2. J.E. Bardram and H.B. Christensen, “Pervasive Computing Support for Hospitals: An Overview of the Activity-Based Computing Project,” IEEE Pervasive Computing, vol. 6, no. 1, 2007, pp. 44–51.

3. A. Strauss and J. Corbin, Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, Sage, 1998.

4. D. Sánchez, M. Tentori, and J. Favela, “Activity Recognition for the Smart Hospital,” IEEE Intelligent Systems, vol. 23, no. 2, 2008, pp. 50–57.

---

Related Work in Activity-Aware Computing

Activity-based applications let users explicitly organize their resources in terms of activities. For instance, the Rooms system divides a user’s computing desktop into a set of virtual rooms, each containing various documents, contacts, and pending tasks.[1] The Kimura system combines the virtual and physical context to create activity representations or “montages.”[2] These montages are integrated into a pervasive environment, which presents them without intruding on the users’ focal activity and in a manner that supports their needs. Similarly, the activity-based computing framework lets applications move this notion of activity into a pervasive environment supporting the mobile, collaborative, and disruptive use of heterogeneous embedded devices.[3] Thus, when a user selects an activity, the system automatically launches the services and retrieves the information relevant to the task at hand.

Activity-aware computing is supported by the automatic recognition of users’ activities—an area in which significant work has been done. For example, activities of daily living (ADL), such as preparing a drink or taking a medication, have been accurately inferred 73 percent of the time by detecting user interaction with particular objects.[4] RFID tags were attached to items of interest, and the user wore an RFID-detecting glove to read them. The SEER (Selective Perception Architecture for Activity Recognition) system uses a hidden Markov model to estimate activities with a higher level of abstraction (such as a user attending a conference).[5] This architecture’s inputs are contextual information from audio, video, and computer interactions captured through sensors distributed in an office environment. SEER obtained 99.7 percent accuracy. This approach is similar to ours, and its accuracy is comparable, but we use real-world activity data as input for the estimation. Because social relationships are complex and delicate, using social inference to inform human actions could be the difference between success and failure when deploying activity-aware applications. In contrast with these projects, our approach uses data from the observational study to infer activities performed by hospital workers.

Finally, a few systems illustrate the use of activity recognition in pervasive computing environments. The CareNet display is a digital picture frame that augments the photograph of an elderly person with information about the ADLs he or she is conducting. [6] The executed activity can trigger a reminder for the elder or his or her caregiver when a relevant event occurs (such as missing a dose of medication). Another system, the UbiFit Garden, was designed to encourage regular physical activity. The system uses wearable sensors to detect and track people’s physical activities and displays them through an aesthetic image. This image is presented to the user in the form of a flower garden.[7] When the user’s recognition device detects a new physical activity, it improves the appearance of the plants in the garden and adds a new element, such as butterflies. If no physical activity from a user is detected, the flowers in the garden might perish. (For more on the UbiFit Garden, also see the article, “The Mobile Sensing Platform for Capturing and Recognizing Human Activities” in this issue.) These projects are activity-aware in spirit, since both use activity recognition to augment pervasive environments. However, we also use activity-based computing to let the environment discover the contextual information relevant to the task at hand and to create a natural representation of context understood by users in the form of activities.

References

1. D.A. Henderson and S.K. Card, “Rooms: The Use of Multiple Virtual Workspaces to Reduce Space Contention in a Window-Based Graphical User Interface,” ACM Trans. Graphics, vol. 5, no. 3, 1986, pp. 211–243.

2. S. Voida et al., “Integrating Virtual and Physical Context to Support Knowledge Workers,” IEEE Pervasive Computing, vol. 1, no. 3, 2002, pp. 73–79.

3. J.E. Bardram and H.B. Christensen, “Pervasive Computing Support for Hospitals: An Overview of the Activity-Based Computing Project,” IEEE Pervasive Computing, vol. 6, no. 1, 2007, pp. 44–51.

4. M. Philipose, K.P. Fishkin, and M. Perkowitz, “Inferring Activities from Interactions with Objects,” IEEE Pervasive Computing, vol. 3, no. 4, 2004, pp. 50–57.

5. N. Oliver, A. Garg, and E. Horvitz, “Layered Representations for Learning and Inferring Office Activity from Multiple Sensory Channels,” Computer Vision and Image Understanding, vol. 96, no. 2, 2004, pp. 163–180.

6. S. Consolvo, P. Roessler, and B.E. Shelton, “The CareNet Display: Lessons Learned from an In Home Evaluation of an Ambient Display,” Proc. 6th Int’l Conf. Ubiquitous Computing (Ubicomp 04), Springer, 2004, pp. 1–17.

7. S. Consolvo, E. Paulos, and I. Smith, “Mobile Persuasion for Everyday Behavior Change,” Mobile Persuasion: 20 Perspectives on the Future of Behavior Change, B.J. Fogg and D. Eckles, eds., Stanford Captology Media, 2007, p. 166.