About this handbook
This handbook was developed from doctoral supervision practice in Cultural-Historical Activity Theory. The NorthCare NHS Trust case study is entirely fictional, though grounded in the kinds of situations CHAT researchers regularly encounter in healthcare and organisational settings. All theoretical content reflects CHAT scholarship as developed in the literature cited throughout. It is intended as a companion to, not a substitute for, the primary literature. Supervisors and students are encouraged to adapt it freely for their own institutional contexts.
First edition. For academic use.
Cultural-Historical Activity Theory for MA and PhD Students
Cultural-Historical Activity Theory (CHAT) is a framework for understanding human activity as part of a system. Rather than explaining why things happen by looking at individuals, it looks at the structure of the activity itself — the tools people use, the rules they operate within, the communities they belong to, and the shared purposes that give their work direction.
CHAT was developed from the early work of Lev Vygotsky, who argued that human thought and action are always shaped by cultural tools rather than arising from the individual mind alone. Alexei Leontiev extended this into a theory of activity. Yrjö Engeström later developed the activity system model*: a practical framework for mapping collective practice, identifying the tensions within it, and explaining how systems change over time.
In research, CHAT is most useful when you are trying to explain a gap — between what a system is designed to produce and what it actually produces; between how participants experience their work and how institutions describe it. It explains these gaps not by pointing to individuals but by showing how elements of a system interact, where they pull against each other, and why those tensions persist.
This handbook is written for MA and PhD researchers working with CHAT for the first time, or returning to it for practical guidance at a specific stage of their study. It is designed to be dipped into, not read cover to cover — each chapter opens with a “Use this chapter when” line so you can navigate directly to what you need.
Every chapter follows the same structure: the concept is introduced, applied to a running case study set in an NHS hospital, and closed with a “Do this now” action and a necessity statement that tells you what must be in place before moving forward. Templates, worked examples, and research instruments are embedded throughout. A tiered bibliography at the back guides your wider reading.
Throughout this handbook, a single fictional but realistic case study shows how CHAT concepts apply in practice. Amara is a PhD researcher studying nursing teams at NorthCare NHS Trust following the introduction of a new Electronic Patient Records (EPR) system. Management describes the rollout as a success. Nurses describe a working life that has become harder — workarounds, after-shift documentation, functions quietly abandoned. Compliance is high; something is still wrong. The gap between those two things is what CHAT is designed to explain, and what Amara’s study sets out to understand. You will follow her research from first observation to viva defence.
Contents
New to CHAT? Read the opening sections then begin at Chapter 1. If you are returning to a specific stage of your research, use the “Use this chapter when” line at the top of each chapter to navigate directly. Chapters 5b and 6b cover data analysis and research instruments respectively.
Part I — Entering CHAT Thinking
Use this chapter when you are at the very start — you have a research context but no system map, no research question, and no clear sense of where CHAT fits.
Begin not with theory, but with a situation that resists individual-level explanation.
Amara arrives at NorthCare not with a theory but with a situation she cannot explain. Nursing staff are compliant with the new Electronic Patient Records system on paper — compliance rates are high, management describes the rollout as a success — yet the nurses she speaks to informally describe a working life that has become harder, not easier. Some are completing records at home after their shifts. Others have developed quiet workarounds. A few have stopped using certain functions entirely. No individual is failing. The system, somehow, is.
This gap — between what the system reports and what people actually experience — is the kind of problem CHAT is built to address. It cannot be explained by pointing to any single person. Something structural is producing it. That is where this handbook begins.
CHAT research often begins differently from more traditional approaches. Rather than starting with a theory or hypothesis, most studies begin with a situation that feels complex and not easy to explain through individual behaviour alone.
Engeström, Y. (1987). Learning by Expanding. Helsinki: Orienta-Konsultit.
You might notice that a digital tool is available but not widely used, that a policy has been introduced but outcomes are uneven, or that participants describe frustration even when resources are present. CHAT offers a way to look more carefully at how different elements of a situation are connected.
Instead of asking “Why is this person not performing as expected?” begin to ask “What is happening in the system that shapes this activity?” Individual actions are understood as part of a wider set of relationships — situated within tools, rules, communities, and shared goals.
The six-node activity system model. The mediating nodes (tools, rules, community, and division of labour) shape the relationship between subject and object. Contradictions arise within and between nodes.
Begin by identifying a micro-level system (e.g., a nursing team conducting a ward round), while recognising it exists within a larger context (hospital policy, NHS frameworks, national digitisation programmes). If an element does not directly interact with your Object, place it in the macro-system or leave it out. This prevents the most common structural error in CHAT theses: trying to include everything at once.
Micro-system (focus of study): ______
Macro-system (wider context): ______
How are they connected?: ______
Starting a CHAT study can feel uncertain at first. Clarity develops gradually as you explore your context, engage with data, and revisit your initial assumptions.
Amara’s micro-system is the nursing team on a medical admissions ward. Her macro-system includes NHS digitisation policy, hospital-wide governance, and the national EPR rollout programme. The EPR system itself is the connection between them — a tool that descends from macro-level policy into the daily work of individual nurses. She resists the temptation to study the whole hospital. The ward is her unit. Everything else is context.
Write three sentences about your research context, in plain language, without using any CHAT terminology. Describe the situation that feels complex. Identify the gap — between what should be happening and what is. If you cannot write those three sentences yet, your starting point is not yet clear enough. Return to your setting before continuing.
You cannot map an activity system until you have a situation that resists individual explanation. If your research problem can be fully answered by asking one person why they did something, you do not yet need CHAT. Return to the situation and look for the gap between what should be happening and what actually is.
The starting point for a CHAT thesis is always a situation that feels structured and complex — one that resists explanation at the level of the individual.
Engeström, Y. (1987). Learning by Expanding. Orienta-Konsultit.
Vygotsky, L.S. (1978). Mind in Society. Harvard University Press.
Use this chapter when you have identified your situation and need to represent it as a system for the first time.
The activity system structures complexity — it does not reduce it.
Amara has her situation. Now she needs a way to represent it. Her instinct is to describe the problem in terms of people: the ward manager who pushed the EPR rollout, the nurses who resent it, the IT team that provides insufficient support. But this is a cast of characters, not a system. CHAT asks her to think differently — not who is involved, but how the elements of the activity are structured and how they relate to each other.
Her first map is rough. She writes “nursing team” as the subject and “safe patient care” as the object. She lists the EPR and the handover sheet as tools, shift patterns and documentation standards as rules, doctors and ward managers as community. It feels incomplete — because it is. But it is a beginning. She will return to this diagram six more times before her thesis is submitted.
The activity system provides a relational model for making sense of practice. A first system map is an interpretive starting point, not a final diagram.
Leontiev, A.N. (1978). Activity, Consciousness, and Personality. Prentice-Hall.
Subject: ______ Object: ______
Tools: ______ Rules: ______
Community: ______ Division of Labour: ______
Revisit your activity system at different points: an early version based on initial understanding, a revised version after data collection, a further version showing emerging tensions. The most important insights come not from identifying each element in isolation, but from how the elements interact — how tools relate to rules, how division of labour* affects the object, how community shapes what is possible.
Draw your first activity system triangle. Label all six nodes with what you know so far. Leave blank what you do not yet know. Put the date on it. This is Version 1. File it. You will return to it repeatedly, and the distance between Version 1 and your final version is evidence of your analytical development.
You cannot move to identifying contradictions until you have a working system map. It does not need to be correct — it needs to exist. Draw it now, even incompletely. The act of placing elements forces decisions about what belongs in the system and what belongs in the wider context. Those decisions are the beginning of your analysis.
The activity system is a working hypothesis, not a finished product — its value lies in what it reveals as you interrogate it against your data.
Use this chapter when you are beginning your literature review, writing your theoretical framework section, or when you feel lost in the vocabulary and need a way to read more purposefully.
Read CHAT scholarship as system-building examples, not as fixed definitions. Every text is making choices about the system — your job is to understand those choices and make your own.
Amara reads Engeström (1987) before her first fieldwork visit. She does not read to achieve mastery. She reads with a specific question in mind: what does Engeström mean by the object? She is already uncertain whether the object of nursing activity at NorthCare is safe patient care, or accurate documentation, or both — and whether that uncertainty is itself analytically significant. It is. The literature gives her a language for something she has already half-seen in her informal conversations on the ward. That is the right relationship between reading and analysis: the literature clarifies what the setting is already suggesting, rather than imposing a framework before the data has been gathered.
CHAT literature can feel dense because authors vary in emphasis and vocabulary. The key is to read for structure, not definition. Different scholars — Engeström, Vygotsky, Leontiev, Bligh, Virkkunen — emphasise different parts of the system. Understanding these emphases helps you position your own theoretical choices.
Bligh, B. & Flood, M. (2017). "Activity Theory in Empirical Higher Education Research." Tertiary Education and Management, 23(2).
Most graduate students approach the CHAT literature with one of two problems. The first is reading too broadly too early — trying to master the full theoretical landscape before beginning fieldwork, and finding that the landscape is vast, contested, and internally diverse enough to be disorienting. The second is reading too narrowly too late — citing only Engeström (1987) and using the framework as a labelling device rather than an analytical one. Both produce weak literature reviews: the first produces a survey that lacks a focused argument; the second produces a thin theoretical justification that examiners will probe.
The solution to both is purposeful reading — reading in response to specific analytical questions generated by your data and your emerging system map, rather than reading to achieve comprehensive coverage or satisfy a citation quota.
Every CHAT text is making decisions about the system: what to include, what to name, how to connect the elements, and how to explain change. Reading analytically means noticing those decisions rather than absorbing the conclusions. The following three questions work across any CHAT text:
What is the unit of analysis?
What relational claim is being made?
Where does this connect to your own system?
The following passage is from Engeström’s 2001 paper “Expansive Learning at Work” — one of the most widely cited CHAT texts and a standard reference in most CHAT theses. Reading it analytically, rather than receptively, looks like this.
“The object of activity is a moving target, not a fixed endpoint. In the course of expansive learning, the object itself is reconstructed — it is both the starting point and the outcome of the transformation.”
A receptive reading notes: the object can change. That is true and worth knowing.
An analytical reading asks all three questions:
Unit of analysis: Engeström is not describing an individual’s goal. He is describing a collective, historically moving purpose — something the system is oriented toward, not the same as what any individual intends. When you map the object, you are not asking what participants are trying to achieve personally. You are identifying what the system as a whole is pointed toward — and noting where different participants’ understandings diverge. That divergence is data.
Relational claim: The object and the outcome are not the same thing. The object is the ongoing motive; the outcome is what is produced at a specific moment. In Amara’s study, the outcome of the EPR system (a completed, timestamped record) is not the same as the object of nursing (safe, responsive patient care). The system produces one while participants are oriented toward the other. That is a structural feature of the system — a secondary contradiction between tool and object.
Connection to your own system: In NorthCare, the object is contested — nurses and managers hold different versions, and the EPR embeds a third. Engeström’s claim gives Amara a theoretical warrant for treating that contest as analytically significant. She is not looking for the “correct” object; she is analysing what the multiplicity of object-versions reveals about the system’s contradictions. The literature has given her permission to treat the problem as structural.
Not all CHAT scholarship should be read the same way. The foundational texts — Vygotsky, Leontiev, Engeström (1987) — establish the conceptual architecture. Read them for the concepts, not for empirical findings. The empirical CHAT literature should be read for how the concepts are applied and adapted in settings like yours. Methodological texts — Virkkunen and Newnham, Bligh and Flood — should be read for research design decisions. These three types serve different purposes and belong in different parts of your thesis.
When reading empirical CHAT studies, add a fourth question: what does this study do that my study is building on, departing from, or doing differently? Your literature review should position your study in relation to existing work, not merely cite it. A study that found a secondary contradiction between tools and rules in a school setting is relevant to an NHS study, but the relevance needs to be articulated: similar contradiction type, different institutional context, different historical trajectory. That articulation is what makes a review an argument rather than an annotated bibliography.
Some of the most productive literature reading involves noticing what a CHAT text does not do — and using that absence to define your own contribution. A study that identifies contradictions but does not trace their historical origins gives you a gap to fill. A study that uses the Change Laboratory but compresses it into two sessions gives you a methodological contrast to articulate. A study that focuses on subject and object nodes but neglects division of labour gives you an analytical dimension to foreground. Reading for absence is not a dismissive critical exercise — it is a way of locating the space your study will occupy.
Most CHAT theses spend several pages tracing the intellectual lineage from Vygotsky through Leontiev to Engeström. This is not wrong, but it is often longer than it needs to be and less analytically purposeful than it could be. A useful discipline: each step in the lineage should do analytical work, not just establish credentials.
Vygotsky matters because mediation is foundational to understanding why your tools matter analytically, not just descriptively. Leontiev matters because the distinction between activity, action, and operation explains why participants can describe their individual actions without ever naming the structural object those actions serve. Engeström matters because the six-node model and the concept of contradictions provide the analytical vocabulary your study depends on. Two or three paragraphs, each ending with a sentence that connects the concept to your own study, is more effective than two pages of intellectual history. If a paragraph about Vygotsky could be removed without affecting your analytical argument, it belongs in a footnote.
Build your reading list in three tiers. The first tier is non-negotiable: Engeström (1987), Vygotsky (1978), Leontiev (1978), and Engeström (2001). These four texts are the foundation. If you have not read them, your theoretical framework is not yet established.
The second tier is field-specific: two or three empirical CHAT studies in a setting similar to yours, and one or two methodological texts relevant to your design. These allow you to position your study within existing scholarship and demonstrate that you know the field.
The third tier is responsive: texts you read because your data raises a specific question the first two tiers do not answer. This tier grows throughout the study and cannot be fully planned in advance. If you find yourself identifying something that looks like a quaternary contradiction, you go back and read Engeström on boundary crossing. If your participants’ accounts raise questions about professional identity, you read Edwards on relational agency. Let the data tell you what else you need.
Halfway through her analysis, Amara finds something she did not expect. The ward manager describes the EPR implementation in almost entirely positive terms, using language about “accountability” and “transparency” that the nursing staff never use. Amara initially codes this as multi-voicedness. But something feels more structural — the manager seems to be operating with a different understanding of what the ward’s activity is fundamentally for. Amara goes back to the literature and finds Engeström’s work on quaternary contradictions between neighbouring activity systems. The ward nursing system and the hospital management system are not simply two perspectives on the same activity. They are two different activity systems with different objects, different tools, and different divisions of labour — and the EPR sits at the intersection between them, serving both objects imperfectly. The literature did not tell Amara this. The data did. The literature gave her the words for it.
Build your three-tier reading list. Write the first tier from memory — if you cannot name the four foundational texts without looking them up, start there. Then identify two empirical CHAT studies in a setting similar to yours and one methodological text relevant to your design. That is your minimum reading before fieldwork. As you collect data, keep a running list of questions your data is raising that your current reading does not answer. Those questions are your third tier — and they are the sign of a study that is analytically alive.
Take one page from any CHAT text you are currently reading and apply the three analytical questions: What is the unit of analysis? What relational claim is being made? Where does this connect to your own system? Add a fourth: what does this study not do, and how does that absence define a space for your own contribution? Write your answers in your research journal before reading further. The discipline of answering all four questions for every section you read is what turns a literature review from a summary into an argument.
You cannot justify your use of CHAT without having read the four foundational texts. But you do not need to have read everything before you begin fieldwork. Read the first tier. Then let your data tell you what else you need to read.
The literature is most useful when read as a set of choices about how to construct and analyse a system — read to learn how others made those choices, identify where they fell short, and then make your own.
Use this chapter when you are analysing the tools in your system and need to understand what each one makes possible, forecloses, and assumes about the activity.
Human activity is always mediated — never direct or unfiltered.
Amara watches a nurse complete a medication round. In her right hand, the nurse carries a printed handover sheet annotated in four colours of pen. On the wall behind her, the bedside terminal waits, its cursor blinking. The nurse glances at the screen, then back at the paper, then moves on. She will enter the data later — “when it goes quiet,” she says, though it never quite does.
Amara notes this as a mediation problem before she has the language to name it. The EPR system and the handover sheet do not do the same thing. They mediate nursing practice in structurally different ways — and the nurse’s preference for the paper is not resistance. It is evidence that one tool fits the activity and the other does not.
Tools are not simply instruments people use. In CHAT, tools shape what is possible within the system itself. They carry historical traces of prior practice and constrain or enable action in specific ways.
Vygotsky, L.S. (1978). Mind in Society. Harvard University Press.
At NorthCare, two mediating artefacts compete: the EPR system (designed for audit) and the handover sheet (designed for clinical communication). Neither is neutral — each embeds a different theory of what nursing work is for.
Analysing mediation means asking: what does this tool make possible, and what does it foreclose? How does it carry the history of the system into present practice? A tool that appears neutral is rarely so — it embeds assumptions about how work should be done, by whom, and at what pace.
List every tool present in your activity system. For each one, answer three questions: Who designed it, and with what model of practice in mind? What does it make possible? What does it make difficult or impossible? If any tool is poorly understood, you need more data before you can analyse it. Incomplete tool analysis produces weak contradiction identification.
You cannot identify contradictions in your system until you have analysed what each tool makes possible and what it forecloses. For every tool in your system, ask: who designed this, with what model of practice in mind, and how does that model fit — or not fit — the activity as it actually occurs? That mismatch, if it exists, is your first candidate for a secondary contradiction.
Analysing mediation means asking not just what a tool does, but what assumptions about practice it carries — and whose assumptions those are.
Use this chapter when you have a working system map and data that shows tensions, breakdowns, or gaps between intended and actual practice.
Contradictions are not errors in your data — they are structural features of the system.
Three weeks into fieldwork, Amara notices a pattern. Every time she observes a high-dependency period — a deteriorating patient, a rushed medication round, a complex admission — EPR documentation stops. Nurses revert entirely to paper. Then, when the immediate crisis passes, they face a backlog of records to enter, often from memory, often after their shift has officially ended.
This is not a behaviour problem. It is a structural problem. The EPR system requires something — continuous real-time documentation — that the activity cannot provide during its most demanding moments. The tool and the rules it operates within are in direct tension. Amara has found her first contradiction.
Contradictions* emerge as recurring breakdowns, mismatches between intention and practice, or misalignment between system elements. They are the engine of development and change — the reason systems do not stay still.
Engeström, Y. (1987). Learning by Expanding, Ch. 2.
At NorthCare: a primary contradiction exists within the EPR system itself (designed for audit and for care simultaneously). Secondary contradictions exist between the EPR tool and shift-pattern rules, and between the EPR tool and the object of nursing care. A tertiary contradiction exists between the ward’s established practice and the new digitised model. A quaternary contradiction exists between the ward system and the management reporting system.
Contradictions are not always stated directly. Look for patterns in your data: repeated delays, differences between expectations and practice, participant frustration. Statements like “We are expected to use the platform, but there is not enough time” point toward underlying systemic tensions.
“A tension exists between ______ and ______, because ______.”
Example: “A tension exists between the EPR system’s requirement for real-time documentation and the pace of acute nursing care, because entering data at the bedside takes time that is not available during high-dependency periods.”
Amara’s three contradictions take weeks to identify. Each requires triangulation across data sources before she will name it. The secondary Tool–Rules contradiction is supported by observation logs showing documentation gaps during high-dependency periods, interview data in which nurses describe timing pressures, and shift records showing EPR entries clustered at the end of shifts rather than throughout them. The secondary Tool–Object contradiction is supported by interviews in which nurses distinguish between “what the system needs” and “what the patient needs.” The tertiary contradiction is supported by historical accounts of how handover used to work and why it felt more reliable. Each contradiction is named only after the evidence makes it unavoidable.
Complete this sentence for each tension you have observed: “A tension exists between [element] and [element], because [structural explanation].” If you cannot complete the sentence — if the “because” clause remains vague — the contradiction is not yet analytically grounded. Return to your data and look for more evidence before naming it.
You cannot write your findings chapter until each contradiction is supported by evidence from at least two data sources. If you can only find one piece of evidence for a tension, return to your data. Either the contradiction is not yet fully visible, or it is an anomaly rather than a structural feature. Structural contradictions recur. They leave multiple traces.
A contradiction is only analytically useful when it is grounded in specific evidence — naming the tension is the beginning, not the end, of the work.
Engeström, Y. (1987). Learning by Expanding. Orienta-Konsultit.
Engeström, Y. (2001). "Expansive Learning at Work." Journal of Education and Work, 14(1).
Use this chapter when you have collected data and need to move from raw interview transcripts, observation notes, and documents toward a mapped activity system and named contradictions.
The gap between collecting data and claiming contradictions is where most CHAT students stall. This chapter shows the analytical steps that bridge it.
After her first twelve interviews, Amara sits with forty-seven pages of transcript and a blank activity system triangle. She knows the framework. She has read the literature. She cannot see how to get from what nurses said to what the system looks like. This chapter is for that moment.
Data analysis in a CHAT study is not a single event. It is an iterative movement between data and framework, in which each pass deepens both your understanding of the data and your construction of the system. There is no shortcut through this process, but there is a sequence that makes it manageable.
Before applying any analytical framework to your data, read all of it once without coding. This sounds obvious and is frequently skipped. Reading without coding allows you to form an overall sense of the territory — what participants are concerned about, what keeps recurring, where the unexpected moments are. The patterns you notice in this first reading will orient your later analytical decisions in ways that are difficult to reproduce if you begin coding immediately.
Keep a running memo as you read. Note recurring words, repeated frustrations, moments where a participant says something that surprises you, and moments where different participants seem to be describing the same situation from different positions. These memos are data — file them with your transcripts, not in a separate document that will be lost.
Return to your transcripts and, for each passage, ask: which element of the activity system is this participant describing? Not every passage maps to an element — some are contextual, some narrative, some biographical. But a significant proportion of interview data in a CHAT study will describe, directly or indirectly, one of the six nodes. Mark those passages and note the element.
Use a simple coding system: T for tool, R for rules, C for community, DL for division of labour, S for subject, O for object. Do not force codes — if a passage does not clearly describe a system element, mark it as context or leave it uncoded for now. The codes that matter most at this stage are those where the same element is being described differently by different participants. Those divergences are where contradictions begin to appear.
Transcript extract, Nurse 4: “The thing is, during a crash you just can’t stop and type. You might not get back to the terminal for forty minutes. And then what you enter is basically from memory, which isn’t the same as entering it at the time.”
This passage maps to: T (EPR terminal — unavailable during crisis), R (requirement for real-time entry — structurally violated), and gestures toward O (the difference between an accurate record and what the nurse can actually produce). It is also the first indication of a secondary contradiction between Tool and Rules. Amara marks it T / R / contradiction-candidate and returns to it in Step 3.
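For researchers handling a large volume of coded passages, the tallying implied by this step can be sketched programmatically. The example below is purely illustrative: the passages, participant labels, and the helper name tally_by_element are invented for this sketch, and nothing in the coding scheme requires software. It simply counts how often each element code appears and which participants describe it, which makes thin nodes (the gap check in the next step) and multiply voiced elements easier to spot.

```python
from collections import defaultdict

# Hypothetical coded passages: (participant, element codes assigned).
# Codes follow the scheme above: T, R, C, DL, S, O, plus "context".
coded_passages = [
    ("Nurse 4", ["T", "R"]),        # crash passage: tool and rules in tension
    ("Nurse 4", ["O"]),
    ("Nurse 7", ["T"]),
    ("Ward manager", ["R", "DL"]),
    ("Ward manager", ["context"]),  # contextual material, not a system element
]

ELEMENTS = ["S", "O", "T", "R", "C", "DL"]

def tally_by_element(passages):
    """Count coded passages per element and record which participants describe it."""
    counts = defaultdict(int)
    voices = defaultdict(set)
    for participant, codes in passages:
        for code in codes:
            if code in ELEMENTS:  # ignore "context" and uncoded material
                counts[code] += 1
                voices[code].add(participant)
    return counts, voices

counts, voices = tally_by_element(coded_passages)
for element in ELEMENTS:
    n = counts.get(element, 0)
    k = len(voices.get(element, set()))
    print(f"{element:>2}: {n} passage(s) from {k} participant(s)")
```

Elements that attract passages from participants in different positions are worth revisiting by hand: as noted above, an element described differently by different participants is where contradiction candidates begin to appear.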
Once you have coded a substantial portion of your data — not all of it, but enough to have covered each system element multiple times — begin to build your system map. Place what the coded data tells you about each element into the corresponding node. This map is not a diagram yet. It is a working document: a list under each heading of what your data says about that element.
This step often reveals gaps. You may have rich data on tools and rules but thin data on division of labour or community. Those gaps are a signal to return to the field — to ask a question you have not yet asked, or to seek out a participant whose position in the system you have not yet accessed.
It also often reveals convergences that complicate your initial coding. A passage you coded as describing a tool may, on reflection, be describing a rule — because the tool has become so embedded in practice that participants experience it as a mandate rather than an instrument. Note these ambiguities. They are analytically significant.
A contradiction candidate is a pattern in your data where two system elements appear to be pulling in different directions. At this stage, it is a hypothesis, not a finding. You are asking: does this data suggest a structural tension between these elements? You are not yet claiming it as a contradiction.
The following signals are reliable indicators of contradiction candidates in interview and observation data: the same system element described in conflicting ways by participants in different positions; rules that are routinely worked around rather than followed; workarounds that participants treat as normal but struggle to justify; and accounts in which the purpose of the activity is described in competing terms.
Amara identifies “EPR entries happening after the shift” as a recurring pattern across eleven of her first twelve interviews. She marks this as a contradiction candidate: Tool (EPR) vs Rules (shift patterns / real-time entry requirement). She returns to her observation data and finds six instances of nurses deferring EPR entry during high-dependency periods. She checks the EPR timestamp records and finds entries clustered at the end of shifts rather than distributed throughout. Three data sources now support the same pattern. She names it a secondary contradiction: Tool–Rules. The candidate has become a finding.
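The timestamp check Amara runs is easy to do by hand, but it is worth making the logic explicit. Assuming you can export entry timestamps, the comparison is simply the share of entries falling in the final hour of a shift against the share a uniform, real-time entry pattern would predict. The timestamps below are invented for illustration:

```python
from datetime import datetime, time

# Hypothetical EPR entry timestamps for one 08:00-20:00 shift.
entries = [
    datetime(2024, 3, 4, 19, 12),
    datetime(2024, 3, 4, 19, 40),
    datetime(2024, 3, 4, 19, 55),
    datetime(2024, 3, 4, 11, 5),
]

shift_end = time(20, 0)  # hypothetical shift boundary
shift_hours = 12

# Under genuine real-time entry we would expect roughly 1/12 of
# entries in the final hour; heavy clustering there suggests deferral.
final_hour = [e for e in entries if e.time() >= time(shift_end.hour - 1, 0)]
observed = len(final_hour) / len(entries)
expected = 1 / shift_hours
print(f"final-hour share: {observed:.0%} (uniform baseline: {expected:.0%})")
```

A skew like this does not name the contradiction by itself; it is one independent source to set alongside the interviews and observations.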
For each named contradiction, ask: what evidence do I have, from how many data sources, that this tension is structural rather than incidental? A contradiction that appears in one interview is an observation. A contradiction that appears across interviews, observations, and documents is a structural feature of the system. The threshold for naming a contradiction is evidence from at least two independent data sources showing the same pattern in different forms.
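The two-source threshold can be turned into an explicit bookkeeping step. A minimal sketch, assuming you keep a running record of each candidate and the independent sources in which the pattern appears (the candidate names below follow the NorthCare example):

```python
# Hypothetical contradiction candidates mapped to the independent
# data sources in which the same pattern has appeared so far.
candidates = {
    "Tool-Rules (deferred EPR entry)": {"interviews", "observations", "timestamps"},
    "Rules-Community (unofficial handover sheet)": {"interviews"},
}

THRESHOLD = 2  # the two-independent-sources rule

for name, sources in sorted(candidates.items()):
    verdict = "structural" if len(sources) >= THRESHOLD else "still a hypothesis"
    print(f"{name}: {len(sources)} source(s) -> {verdict}")
```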
Also ask: could this be explained without CHAT? If the tension can be fully explained by saying “the software is poorly designed” or “the manager made a bad decision,” it is not yet a structural analysis. A CHAT explanation must identify the system elements in tension and explain why the structure of the activity produces the pattern — not why an individual or a product caused it.
Once you have named your contradiction candidates and tested them against the data, return to your system map and revise it. The contradictions you have identified will have revealed new things about the system elements: the object may be more contested than your first map suggested; the division of labour may be more fragmented; a tool you initially listed as secondary may be central. This is your second-version system map. Date it. It will not be your last.
The movement between Step 3 and Step 6 — map, identify candidates, test, revise map — is the core analytical cycle of CHAT research. Most studies go through this loop three to five times before the system map stabilises enough to write from. Do not mistake an early stable-feeling map for a finished one. Stability at this stage usually means you have not yet asked enough of your data.
Take your first five interview transcripts. Read them without coding. Write a one-page memo noting: what are participants most concerned about? What recurs? What surprises you? Then code one transcript using the six-element system. For each coded passage, note which element it describes and whether it suggests a tension with another element. This is your first analytical pass. It will feel incomplete. That is correct.
You cannot write a findings chapter until you have completed at least one full cycle of Steps 1–6 — read, code, map, identify candidates, test, revise. A findings chapter written from a first-pass system map will show its seams. The analytical depth that examiners look for is produced by the revision cycles, not the initial mapping.
The distance between your first system map and your final one is not a sign of early error — it is the evidence of your analytical work.
Use this chapter when you are writing your methodology chapter, preparing to justify your approach to a supervisor, or anticipating the viva question “why not thematic analysis?”
CHAT is a theoretical framework, not a fixed method. Know why you have chosen it.
Amara’s supervisor asks her a direct question in their second meeting: “Why not thematic analysis? You have interviews. You could code them for themes.” Amara knows the answer but has to find the words for it. Thematic analysis would tell her what nurses say about the EPR system. It would not tell her why those things are being said — what structural features of the activity produce the experience nurses are describing. The gap she is trying to explain is not a gap in nurses’ perceptions. It is a gap in the system. That is why she needs CHAT.
CHAT provides a conceptual lens: it shapes how research is designed and interpreted, and it supports the explanation of complex systems. It does not replace your methodology — it informs and guides it. Always distinguish the three levels: CHAT provides the lens; your methods provide the tools for collecting data; your methodology explains how and why these fit together (Bligh & Flood, 2017, “Activity Theory in Empirical Higher Education Research,” Tertiary Education and Management, 23(2)).
The literature review in a CHAT thesis has a specific job that is different from literature reviews in other qualitative traditions. It is not primarily a survey of what others have found. It is an argument for why the activity system model — and the specific concepts you are drawing on — is the right analytical apparatus for your research problem. Every section of your literature review should be traceable to a decision you have made about your own study.
Most CHAT theses need two distinct layers of literature work, which may appear in the same chapter or in separate chapters depending on your institution’s conventions. The first layer is the theoretical framework: CHAT itself, its conceptual vocabulary, and how you are positioning your use of it. The second layer is the substantive context: the literature about the setting, the phenomenon, or the policy you are studying. Both layers are necessary. A CHAT thesis that has a strong theoretical chapter but no engagement with the substantive literature leaves the reader unable to assess the significance of the findings. A thesis with rich substantive literature but thin theoretical framing leaves the reader uncertain whether the researcher actually understands the framework they are using.
Most CHAT theses include a section tracing the intellectual history from Vygotsky through Leontiev to Engeström. This section is often longer than it needs to be. A useful discipline: each step in the lineage should do analytical work, not just establish historical credentials.
Vygotsky’s contribution to your study is mediation — the claim that all human action is shaped by cultural tools and signs, never direct. If your study analyses how a digital tool mediates practice, Vygotsky provides the theoretical warrant. That is one paragraph, not three pages.
Leontiev’s contribution is the hierarchy of activity, action, and operation — and the concept of the object as the motive that gives an activity system its direction. If your study investigates how the object is contested or shifting, Leontiev provides the warrant. Again, one or two paragraphs of conceptual precision is more valuable than a biographical account.
Engeström’s contribution is the six-node model, the typology of contradictions, and the concept of expansive learning. These are the concepts that most directly shape your analytical decisions. Give them space proportional to the work they do in your study — which will typically be more space than Vygotsky and Leontiev, because they are closer to your analytical apparatus.
“This study draws on Cultural-Historical Activity Theory (Engeström, 1987; Leontiev, 1978; Vygotsky, 1978) as a framework for examining [phenomenon] as a system of collective, tool-mediated activity. Specifically, it employs the concept of [key concept] to analyse [specific analytical focus], following [scholar] in treating [theoretical position]. This approach is chosen over [alternative] because [specific reason grounded in the research problem]. The study contributes to a body of CHAT research in [substantive field], including [2–3 key empirical references], while extending that work by [specific contribution].”
Write your CHAT positioning paragraph using the template above. Fill in every blank. If you cannot complete the sentence “This approach is chosen over [alternative] because [reason grounded in the research problem],” return to Chapter 6 (Methodology) and work on your methodology justification before writing further. The positioning paragraph is the spine of your literature review. Everything else hangs from it.
| Approach | Focus | Output |
|---|---|---|
| Thematic Analysis | Recurring themes in participant accounts | Descriptive categories |
| Grounded Theory | Theory building from data | Conceptual model |
| Discourse Analysis | Language in context | Textual/rhetorical patterns |
| CHAT | System relations and tensions over time | Explanatory activity system |
“This study adopts a CHAT framework to explore how ______ is shaped by the interaction of tools, rules, and community within the activity system. Data were collected using ______.”
Write your methodology justification in one paragraph. Include: what CHAT provides that other approaches do not; what your specific research problem requires that makes CHAT the appropriate choice; and what data collection methods you are using and why each one illuminates a different aspect of the activity system. Show it to your supervisor before proceeding to data collection.
You cannot write your methodology chapter until you can answer this question in one sentence: “I am using CHAT rather than [alternative] because my research problem requires explanation of [specific systemic feature] rather than description of [what the alternative would produce].” If you cannot complete that sentence, your justification is not yet strong enough.
Your justification for using CHAT should explain not just what the framework is, but why your specific research problem requires a systemic and relational explanation that other approaches cannot provide.
Use this chapter when you are designing your data collection instruments, adapting them for your own context, or checking that your questions are genuinely aligned with your CHAT analytical framework.
CHAT instruments are not generic data-gathering tools. Every question should serve the activity system — mapping an element, probing a tension, or tracing a historical layer.
Amara drafts her interview guide three times before she is satisfied with it. The first draft asks nurses how they feel about the EPR system. Her supervisor points out that this will produce attitudinal data — rich, personal, and analytically insufficient for CHAT. The second draft asks about the system’s impact on their work. Better — but “impact” is still a black box. The third draft asks about tools, rules, time, division of responsibility, and the purpose of nursing work. Each question is designed to illuminate a specific element of the activity system or a possible tension between elements. The guide is no longer a list of questions about the EPR. It is a systematic attempt to construct the activity system from participant knowledge.
Before designing any instrument, identify which elements of the activity system each question is intended to illuminate. A question that does not serve the system map — that would produce data you could not place anywhere in your analytical framework — does not belong in your instrument. This is not restrictive; it is clarifying. It forces you to make your analytical intentions explicit before you enter the field.
The six system elements provide a natural organising structure. Most CHAT studies need data on: the tools participants use and how those tools shape their practice (mediation); the rules that formally and informally govern activity; the community involved and the different roles within it (division of labour); and the object — what participants understand the activity to be for, and whether that understanding is shared or contested. Historical data — what the activity looked like before any significant change — is a seventh category that many CHAT studies require.
The following protocol was used by Amara across her forty-two semi-structured interviews with nursing staff, ward managers, and IT support personnel at NorthCare. Each question is annotated with the system element it is designed to illuminate.
Replace every reference to the EPR system, nursing, and NorthCare with the equivalents from your own research context. The structural logic of the protocol — opening on object, moving through tools and rules, tracing division of labour and community, seeking historical comparison, closing on contradiction and development — is transferable to any CHAT study. What is not transferable is the specific content. Your instrument must be grounded in your system, not borrowed from this one.
Draft your own interview protocol using this structure as a scaffold. For each question you write, note in brackets which system element it is designed to illuminate. If a question does not have a system element annotation, either find one or cut the question. Every item in your instrument should serve the analysis.
Amara supplements her interviews and observations with a short survey distributed to all registered nursing staff across three wards at NorthCare (n = 74; response rate 68%). The survey is not designed to produce statistical findings. Its purpose is to map the distribution of the tensions identified in the interviews — to establish whether the contradictions Amara has observed are concentrated in particular roles or shifts, or are systemic across the workforce.
In CHAT research, surveys are most useful not as standalone instruments but as triangulation tools. They can confirm that a pattern observed in interview and ethnographic data is not an artefact of who you happened to speak to. They cannot, on their own, identify contradictions or explain their structural origins — that analytical work requires qualitative depth.
A survey is not a required component of a CHAT study. Many strong CHAT theses use no survey at all. Before deciding to include one, ask a specific question: what would a survey tell me that my interviews and observations cannot? If the answer is “the distribution of a pattern I have already identified qualitatively,” a survey may add value. If the answer is “I am not sure,” it probably will not.
Surveys are most useful in CHAT research when your setting involves a large number of participants across whom you cannot conduct interviews — and when you want to test whether a contradiction you have identified in interview and observation data is concentrated in a particular system position (a specific role, shift pattern, or site) or is distributed across the system more broadly. They are least useful when your study is already analytically rich from interviews and observation, when your participant group is small enough that everyone can be interviewed, or when the questions you need to ask are too contextual and relational to translate into a survey format.
It is also worth being honest about what surveys cannot do in a CHAT study. They cannot identify contradictions — they can only confirm the distribution of patterns that qualitative analysis has already surfaced. They cannot explain why a tension exists — only how widely it is felt. And they can produce a false impression of quantitative rigour in a study that is fundamentally interpretive. If you include a survey, be explicit in your methodology chapter about its role: it is a triangulation instrument, not an independent source of findings.
The value of Amara’s survey lies not in its means but in its distributions. When she breaks down responses to C2 (“I sometimes complete EPR records after my shift has officially ended”) by shift pattern, she finds that the pattern is significantly more pronounced among rotating and night-shift nurses than among day-shift staff. This is analytically significant: it suggests the Tool–Rules contradiction is not uniformly distributed across the system, but is particularly acute at specific positions within the division of labour. That finding generates new interview questions, deepens the contradiction analysis, and ultimately strengthens the argument that the tensions are structural rather than individual.
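The kind of breakdown Amara runs on C2 needs no statistics package to begin with: a simple cross-tabulation of agreement by shift pattern already shows where in the division of labour the tension concentrates. A sketch with invented responses — a formal test of the difference, if your examiners expect one, would come afterwards:

```python
from collections import Counter

# Hypothetical C2 responses: (shift pattern, agrees with item).
responses = [
    ("night", True), ("night", True), ("night", False),
    ("rotating", True), ("rotating", True),
    ("day", True), ("day", False), ("day", False), ("day", False),
]

agree = Counter(shift for shift, agrees in responses if agrees)
total = Counter(shift for shift, _ in responses)

# Distribution, not mean: where in the division of labour is the
# contradiction concentrated?
for shift in sorted(total):
    rate = agree[shift] / total[shift]
    print(f"{shift:>8}: {agree[shift]}/{total[shift]} agree ({rate:.0%})")
```

Read through the framework, a skew like this is a question about system positions, not a headline percentage.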
Survey data in a CHAT study should always be interpreted through the analytical framework rather than presented as findings in their own right. The question is not “what percentage of nurses agree?” but “what does the distribution of agreement tell us about how this contradiction is structured within the activity system?”
Before deciding whether to include a survey, answer the decision guide questions above. If you conclude a survey is appropriate, draft five to eight questions using this instrument as a model. Annotate each question with the system element it illuminates. Design the survey after your first round of interviews, not before — the questions should be grounded in what your qualitative data has already begun to reveal.
Every question in a CHAT instrument should be traceable to a system element, a potential contradiction, or a historical comparison — if it is not, it does not belong in the instrument.
Use this chapter when you are designing or running participant sessions, selecting mirror data, or writing up your intervention-oriented methodology.
The Change Laboratory is not a data collection method. It is a structured process in which participants examine, challenge, and begin to transform their own activity system.
Eight months into her study, Amara holds the first of three structured reflection sessions with six nurses from the medical admissions ward. She has prepared one slide. It shows two numbers side by side: twelve minutes — the average documentation time per shift that the EPR system was designed to require — and thirty-eight minutes — the average time nurses are actually spending, derived from Amara’s own shift observations and from EPR entry timestamps. She does not comment on the numbers. She puts them on the screen and waits.
A senior nurse says: “That can’t be right.” Then, after a pause: “Actually, yes it can.” The room shifts. What follows is forty minutes of sustained discussion — not about individual nurses’ efficiency or IT support failures, but about the design logic of the system itself. One nurse asks: “Did anyone who built this ever actually work a night shift?” Nobody answers. Amara notes it down. The question is not rhetorical. It is the beginning of analysis.
The Change Laboratory is a structured intervention method developed by Engeström in which researchers and participants work together to examine the contradictions in their activity system and begin to develop new forms of practice (Virkkunen & Newnham, 2013, The Change Laboratory, Sense Publishers). It is not a focus group, a consultation exercise, or a training session. It is a space in which the activity system itself becomes the object of collective analysis — where participants move from describing their experience to theorising its structural causes.
Participant selection for a Change Laboratory is not a sampling decision in the conventional research sense. You are not seeking representativeness. You are assembling a group that has the collective capacity to examine the activity system, analyse its contradictions, and — crucially — do something with what the analysis produces. That last requirement shapes who you invite.
The group needs to include people who experience the activity from different positions within it. In a workplace setting this means different roles, different levels of seniority, and — as discussed in the two-triangle section — both experienced staff who remember the old system and newer staff who know only the current one. This positional diversity is not about balance for its own sake. It is because the contradictions in an activity system are experienced differently depending on where you sit within it, and a group that only represents one position will produce a partial analysis.
But the most important selection criterion, and the one most often overlooked in PhD research, is this: at least one participant needs to have a credible route to management. This does not mean a manager must be in the room — managerial presence often inhibits the candour that the Change Laboratory requires. It means that someone in the group must have the standing, the relationships, and the confidence to take what the group produces and communicate it upward. Without this person, the Change Laboratory becomes a closed loop: participants develop a sophisticated understanding of their situation, the researcher produces a strong analysis, and nothing changes because there is no mechanism for the findings to reach the people with the authority to act on them.
In healthcare settings this might be a senior nurse, a ward education lead, or a union representative. In school settings it might be a department head or a teacher who chairs a working group. In any setting, it is someone who already has a legitimate reason to speak to management and who can frame the group’s proposals in terms that the institution can hear. Identify this person in your scoping conversations, before you finalise your participant group. Invite them explicitly, explain why their role matters, and make sure they understand from the outset that the later sessions will ask them to take something back.
Amara’s participant group includes a Band 7 senior nurse who sits on the ward’s clinical governance committee and has a standing monthly meeting with the nursing director. Amara does not tell the group about this at the start — it would shape the dynamic in ways she does not want. But she has identified this nurse in her scoping conversations and has spoken with her privately about the research before the first session. By session four, when the group begins modelling alternative documentation structures, this nurse is already thinking about how she will present the group’s proposals at the next governance meeting. She does not say this out loud in the session. But when session five asks who could take the findings forward, she speaks first, and she is specific. The route to management was built into the participant group from the start.
The Change Laboratory produces two things simultaneously: data for the researcher, and expanded understanding for the participants. These are not separate outcomes. The discussion that generates your most analytically rich data is the same discussion in which nurses, teachers, or healthcare workers begin to see their situation differently. The researcher is not extracting insight from participants — insight is being produced collectively, in the room.
This is why the selection of mirror data is the most consequential decision you make in designing a Change Laboratory session. Mirror data is not illustrative — it is provocative. It should show participants something about their own practice that they cannot easily dismiss or explain away at an individual level. The two numbers Amara presents — twelve minutes versus thirty-eight minutes — are carefully chosen. They cannot be explained by any individual nurse working slowly. They can only be explained by something structural.
Engeström describes the process of collective development through a Change Laboratory as an expansive learning cycle (Engeström, 2001, “Expansive Learning at Work,” Journal of Education and Work, 14(1), 133–156). The cycle has seven stages, each of which represents a qualitatively different form of collective engagement with the activity system. They are not steps to be completed in sequence — they are zones that participants move through, return to, and sometimes inhabit simultaneously. What matters is the direction of travel: from accepting the current system as given, toward being able to imagine and act on a transformed one.
Before mirror data, before activity system triangles, before any theoretical framework — start with the participants’ own problems. Ask the group, in the very first session, to list every difficulty, frustration, or tension they notice in their daily work. Write them on the wall. Do not edit, prioritise, or comment. Just collect. This does something important: it establishes from the outset that the knowledge in the room belongs to the participants, not to the researcher. It also produces a raw list that you will return to in later sessions as a baseline — a record of what the group knew before the analysis began.
This problem-listing phase is distinct from the mirror data. The mirror data is selected by the researcher and introduced deliberately. The problem list is generated by the participants themselves, without prompting beyond the initial question. Together, they create the conditions for Stage 1.
Amara opens her first session not with her slide of two numbers, but with a question written on the flipchart paper: “What problems do you encounter in your day-to-day documentation work?” She gives participants five minutes to write individually on sticky notes — one problem per note — before placing them on the paper. Twenty-three problems are generated by six nurses in five minutes. Some are operational (“the terminals crash”); some are structural (“there’s never time to document during the shift”); some touch the object of the activity directly (“I don’t always feel like what I record is what actually happened”). Amara does not comment on them yet. She groups similar notes loosely and says: “Let’s keep these on the wall. We’ll come back to them.” Then she introduces the mirror data. The two numbers — twelve minutes designed, thirty-eight minutes actual — connect immediately to what is already on the wall. The participants recognise the data because they generated the question themselves.
Questioning is triggered when participants encounter something about their practice that they cannot accept as normal. It often begins with frustration or confusion — a sense that something is wrong without yet being able to name what. The researcher’s role at this stage is to create the conditions for questioning without directing its content. The problem-listing exercise and the mirror data work together: the participants’ own problems make the mirror data legible, and the mirror data gives structural weight to what would otherwise remain a list of complaints.
After the problem-listing and mirror data introduction, Amara adds a second piece of mirror data: an anonymised excerpt from an interview in which a nurse describes completing EPR entries from memory at 10pm, an hour after her shift ended, because there had been no opportunity during the shift. One nurse in the session says: “That’s me. That’s every Tuesday.” Another: “We’ve just accepted this as normal and it isn’t.” A third nurse points to one of the sticky notes already on the wall — “there’s never time to document during the shift” — and says: “We wrote that twenty minutes ago and we didn’t realise how big it was.” The questioning has begun — not of individual behaviour, but of the system that produces it.
Analysis involves participants examining why the situation is as it is — tracing current tensions back to their historical and structural origins. This is the most intellectually demanding stage of the cycle, and it is where the activity system model earns its place as a practical thinking tool rather than a theoretical framework. When introduced at the right moment — after questioning has destabilised the assumption that current practice is simply normal — the model gives participants a vocabulary for naming what they are already noticing.
The sequence within the analysis stage matters. Begin with the present activity system — the one participants are living in now. Then move to the past system — what the activity looked like before the change that produced the current tensions. Only after both are clearly mapped does it become useful to look toward the future: what a different system might look like, and what it would require. This present → past → future sequence is not arbitrary. The present system is where participants’ frustration lives. The past system explains why the present feels wrong. The future system is where the energy released by that explanation can be directed. Compressing the sequence — jumping to the future before the past has been properly examined — produces proposals that are not grounded in structural understanding, and participants can feel that.
The two-triangle exercise — present and past systems mapped simultaneously — is the core analytical tool of this stage. It externalises the historical comparison that participants are already making internally, often without a language for it, and makes it available for collective scrutiny.
In many workplace settings, this historical analysis is given particular texture by the presence of two distinct participant groups: those who remember the old system from experience — the “oldies”, to use an informal but useful term — and those who joined after the change and know only the current one. These two groups do not simply have different opinions about the present system. They carry different activity systems in their professional memory, and their accounts, placed alongside each other, construct the two triangles directly from participant knowledge.
Experienced staff can speak to what the tools were, what the rules demanded, how the division of labour was organised, and what the object of the activity felt like in practice. Newer staff can speak to the present system from the inside — often without the implicit comparison that longer-serving colleagues carry, but with a clarity about its current demands that those who knew the old system sometimes struggle to see freshly. Together, they build both triangles. The researcher’s role is to draw them out — to ask the questions that surface the comparison — and to represent what emerges in the two diagrams on the wall.
The analytical power of this moment should not be underestimated. When an experienced nurse describes how handover used to work and a newer colleague says “I didn’t know it used to be like that — that actually makes more sense,” something analytically significant has happened. The current system has been denaturalised. It is no longer simply how things are — it is one way of organising a system that was previously organised differently, and the comparison makes the structural choice visible. That visibility is the condition for everything that follows in the cycle.
Amara’s second session includes six nurses: four with more than eight years on the ward, two who joined after the EPR system was introduced. She has printed two blank activity system triangles on A3 paper and placed them side by side on the table, labelled simply “Before” and “Now.”
She asks the experienced nurses to fill in the “Before” triangle first, from memory. They do so quickly and with confidence. Tools: handover sheet, observation charts, verbal communication at the nurses’ station. Rules: end-of-shift handover as the primary documentation event; retrospective, collective, time-bounded. Division of labour: senior nurses synthesising information for handover, documentation shared across the team. Object: they pause on this. One says “getting the patient through safely.” Another: “making sure the next shift knew what we knew.” Both. The object was integrated — care and communication as one activity.
Then she asks the newer nurses to fill in the “Now” triangle. Tools: EPR terminals at the bedside; the handover sheet they have been told is unofficial. Rules: real-time individual entry throughout the shift; documentation is an individual responsibility. Division of labour: fragmented — each registered nurse accountable for her own records, regardless of what else is happening. Object: one of the newer nurses writes “completing the record.” Then crosses it out. Then writes it again.
The experienced nurses look at the second triangle and say nothing for a moment. Then one says: “The object changed and nobody told us.” This is the most analytically significant statement of the entire study. It is not something Amara could have said. It required two triangles and six nurses to produce it.
She asks the group: where are the tensions? They identify them without prompting — between the EPR tool and the shift-pattern rules, between the object embedded in the EPR design and the object they still believe they are there to serve. Amara adds the arrows to the diagrams as they speak. The contradictions are now on the wall, constructed by participants, visible to everyone in the room.
The two-triangle approach works because it separates the historical comparison from the evaluative one. Participants are not being asked whether the old system was better. They are being asked what was different — a descriptive question, not a political one. Once the differences are named and diagrammed, the evaluative questions arise naturally from the comparison. The researcher does not need to prompt them. The structure of the activity system model, applied twice in parallel, does the analytical work.
A practical note on facilitation: allow the experienced staff to build the past triangle before newer staff comment on it. Allow newer staff to build the present triangle before experienced staff respond. The sequence matters. Premature commentary collapses the comparison before it can be fully constructed. Each group’s account deserves to be complete before it is placed in relation to the other.
Print two blank activity system triangles, labelled “Before [the change]” and “Now.”
Ask participants with experience of the old system to complete the first triangle: What were the tools? What were the rules? How was the work divided? What was the activity trying to achieve?
Ask participants who know only the current system to complete the second triangle using the same questions.
Place both triangles side by side. Ask: Where do they differ? Where do the differences produce tension? Which elements of the old system are still present in the new, and in what form?
The contradictions that emerge from this comparison are participant-constructed. Record them. They are your data.
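For researchers who prefer to keep a structured digital record of what the group produces, the comparison step above can be sketched as a small bookkeeping script. This is purely illustrative and not part of the Change Laboratory method itself: the element names follow the activity system model, but every entry and the function name are hypothetical.

```python
# Illustrative sketch only: one way to record the two participant-built
# triangles and surface, per system element, where they differ.
# All example entries are hypothetical.

ELEMENTS = ["tools", "rules", "community", "division_of_labour", "subject", "object"]

def compare_triangles(before: dict, now: dict) -> dict:
    """Return, for each system element, entries present in only one triangle."""
    diffs = {}
    for element in ELEMENTS:
        old = set(before.get(element, []))
        new = set(now.get(element, []))
        if old != new:
            diffs[element] = {"dropped": sorted(old - new),
                              "added": sorted(new - old)}
    return diffs

before = {"tools": ["handover sheet", "observation charts"],
          "rules": ["end-of-shift collective handover"]}
now = {"tools": ["EPR terminals", "handover sheet"],
       "rules": ["real-time individual entry"]}

diffs = compare_triangles(before, now)
# diffs["tools"]["dropped"] == ["observation charts"]
# diffs["tools"]["added"] == ["EPR terminals"]
```

The point of such a record is continuity between sessions, not analysis: the analytical work remains the participants' naming of the tensions, which the diff merely preserves.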
Modelling involves participants beginning to imagine an alternative — a different way of organising the activity that would resolve or reduce the contradictions they have identified. This stage is often tentative and partial. Participants do not produce a finished redesign of their system; they articulate a direction. The researcher’s role is to support the expression of that direction without steering it toward a predetermined conclusion. The model that emerges belongs to the participants. It is their theory of what a better system would look like.
By the end of session two, nurses at NorthCare begin to articulate what a better system might involve. Their proposals are practical and structural: designated documentation time built into shift patterns; a simplified EPR interface for high-dependency periods; the reinstatement of a collective end-of-shift handover as the primary documentation event, with the EPR used to formalise rather than replace it. These are not fully formed proposals — they are directions. But they are the nurses’ directions, grounded in their analysis of the system’s contradictions. Amara records them carefully. They become part of her data and part of the emerging model.
At this stage, the proposed model is subjected to scrutiny. Participants examine its implications, identify its limitations, and test it against known constraints. What would change? What would stay the same? Who would resist it, and why? What does it require that the current system does not have? This stage often produces the most analytically rich discussion, because it forces participants to articulate the structural features of the current system that their proposal would need to overcome.
The later stages of the cycle move from analysis into action: new practices are tried in context (implementing), evaluated against the original contradictions (reflecting), and — if they work — stabilised into new forms of activity (consolidating). In many research contexts, including PhD studies, the researcher does not accompany participants through all of these stages. The timeline, access constraints, and scope of the study may mean that the research concludes during the modelling or examining phase. This is not a failure of the method. Reaching a point at which participants can articulate a grounded alternative, and at which the researcher can explain why the current system produces its contradictions, is a significant analytical achievement. Not every Change Laboratory produces a transformed activity system. All of them, done well, produce a deeper understanding of the one that exists.
The expansive learning cycle does not end when the final session concludes. If the Change Laboratory has worked, participants leave Session 5 with a shared analysis of their situation and a set of modelled proposals for how it might be different. What happens to those proposals is not a separate, post-research question — it is part of the research design, and it needs to be planned from the start.
The participant with a route to management — identified during participant selection — is now the critical figure. Before the final session ends, the group should agree: what are the two or three concrete proposals we want to communicate? Who will communicate them, to whom, and in what form? The researcher’s role at this point is to support the preparation of that communication, not to make it on the group’s behalf. This is an important distinction. If the researcher presents findings to management directly, the Change Laboratory becomes a conventional consultancy exercise. If participants present their own analysis — using the language and the diagrams that emerged from the sessions — it is something qualitatively different: workers communicating a collectively produced understanding of their own activity system to the institution that governs it.
In practical terms, this means spending part of Session 5 preparing the communication. What are the key points? What evidence supports them? What is being asked of management — a decision, a resource, a conversation, a pilot? The group should produce something tangible: a one-page summary, a set of annotated diagrams, or a short presentation that the designated participant can take into their management conversation. The researcher can help draft or refine this, but the voice must be the participants’.
In the final twenty minutes of Session 5, Amara asks the group: “If you had fifteen minutes with the nursing director, what would you say?” The group produces three proposals: designated documentation windows built into the shift pattern; a simplified EPR entry form for high-dependency periods; and a pilot of collective end-of-shift handover on one ward, with EPR used to formalise rather than replace it. The senior nurse who sits on the clinical governance committee writes these up as a one-page summary during the session, with the group adding evidence and refining the language. She leaves with a document she is prepared to present. Amara leaves with a record of what the group produced and how. Both of them have something to take forward.
Implementation is not guaranteed by the Change Laboratory process. Management may respond slowly, partially, or not at all. One or two proposals may be adopted; others may be declined; others may be absorbed into existing structures in a diluted form. This is normal and should be expected. What matters analytically is not whether the proposals are fully implemented, but what the response reveals about the activity system — about which contradictions the institution is willing to address and which it is structurally unable or unwilling to engage with.
Where implementation does occur — even partially — testing follows naturally. Does the change reduce the contradiction it was designed to address? Does it produce new tensions elsewhere in the system? Who benefits from it, and who does not? These questions are empirical, and if your research timeline allows, they are worth pursuing. A Change Laboratory that reaches implementation and produces data on its effects is a significantly stronger study than one that concludes at the modelling stage.
Consolidation — the point at which a new practice stabilises and becomes the new normal — typically takes months, sometimes longer. For most PhD studies, consolidation will not be observable within the research timeline. That is not a failure. It is an honest account of where the research ended, and what would need to happen for the cycle to complete. Your conclusions chapter can address this directly: what did the Change Laboratory produce, how far through the cycle did it progress, and what would the next stage require?
If your research timeline and institutional access permit it, a follow-up session three to six months after the final Change Laboratory session is one of the most valuable things you can add to a PhD study. It serves three purposes: it allows you to observe what has changed and what has not in the activity system; it gives participants the opportunity to reflect on their own development through the process; and it produces data on whether the proposals generated in the modelling stage have moved toward implementation, stalled, or been absorbed into existing structures in modified form.
A follow-up session does not need to replicate the structure of the Change Laboratory sessions. It is closer to a structured reflection: what has changed since we last met? What has been attempted, what has been resisted, and what do we understand now that we did not understand then? Return to the problem list from Session 1 and ask participants to revisit it. Some problems will have been addressed; others will look different in the light of the analysis; others will be unchanged. The comparison between the Session 1 problem list and the follow-up reflection is a form of evidence about the impact of the process itself.
In a PhD context, a single follow-up session is realistic. Two follow-up sessions, at three and six months, is ideal but requires institutional access and participant willingness that not all settings can guarantee. Plan for one; design for the possibility of two; accept what your context makes possible.
Not every research context permits in-person sessions. Participants may be distributed across sites, shift patterns may make a common meeting time impossible to find, institutional access may be restricted, or — as the pandemic demonstrated — circumstances may change after the research has already begun. An online Change Laboratory is not a lesser version of an in-person one. It is a different version, with its own affordances and its own specific pitfalls. Understanding both before you begin is what makes the difference between an online Change Laboratory that works and one that produces thin data and frustrated participants.
The platform matters less than how you use it, but some platforms are better suited to the Change Laboratory than others. The key requirements are: stable video with visible participant faces; a shared digital whiteboard that everyone can write on simultaneously; breakout room functionality for small-group work; and the ability to display and annotate documents together. Dedicated whiteboard tools such as Miro or Mural, used alongside a standard video-conferencing platform, meet these requirements and have been used successfully in research contexts.
Online sessions require more preparation than in-person ones, not less. The physical materials that would normally be on the table — printed triangles, sticky notes, marker pens — must be replaced by digital equivalents that participants can access and use without technical difficulty. This means testing everything in advance, not on the day.
The protocols that make an in-person Change Laboratory work need to be explicitly re-established in an online environment, because the physical cues that normally reinforce them — everyone in the same room, materials visible, the researcher at the front — are absent.
The two-triangle exercise is the most physically dependent element of the Change Laboratory and requires the most careful adaptation for online delivery. In person, participants stand around large paper on the wall, write directly on it, point to each other’s contributions, and physically organise sticky notes. Online, all of this must be replicated through a digital whiteboard, and the social dynamic that makes it work — the physical proximity, the shared material, the ability to reach across and add something — is more difficult to recreate.
The following approach has worked well in practice. Prepare two large triangle frames in Miro or Mural, labelled “Before” and “Now,” each with sticky note zones corresponding to the six system elements. Assign one colour of sticky note to experienced participants and another to newer ones. Ask participants to add their sticky notes simultaneously rather than sequentially — this preserves the energy of the in-person exercise and prevents the whiteboard from becoming dominated by whoever types fastest. After five minutes of simultaneous contribution, pause and read the board together: what patterns are visible? Where do the two triangles differ? Where do the differences produce tension?
Screen-share the whiteboard rather than asking participants to navigate to it themselves — this keeps the group’s attention on a single shared view. Use the pointer or annotation tools to draw attention to specific elements as you discuss them. At the end of the session, export the whiteboard as a PDF or image and share it with participants before the next session — this maintains continuity in a way that folded-away physical paper cannot.
Honesty matters here. Some things that happen in an in-person Change Laboratory are genuinely difficult to replicate online, and your methodology chapter should acknowledge this if you ran sessions remotely.
The physical act of writing on large paper together — standing at a wall, reaching across each other, editing in real time — creates a kind of shared ownership of the emerging analysis that digital whiteboards approximate but do not fully reproduce. The informal moments — before and after the session, during refreshment breaks — are where participants process what they are noticing and sometimes say the most analytically significant things. The researcher’s ability to read the room — to notice a glance between two participants, to see who is leaning forward and who has pushed back in their chair — is significantly reduced on video. And the sense of collective presence in a shared physical space, which builds trust and candour over multiple sessions, is harder to establish online even when everything else is done well.
None of these limitations makes an online Change Laboratory invalid. They make it different, and that difference belongs in your analytical account of the method. If you ran sessions online, say so explicitly in your methodology chapter, describe the platform and tools you used, identify the limitations relative to in-person delivery, and explain how you attempted to mitigate them. That is not a weakness in your study. It is methodological transparency.
If you are running sessions online, set up your platform and test every tool participants will need (the whiteboard, the breakout rooms, the shared documents) before confirming dates with participants.
The Change Laboratory places the researcher in an unusual position. You are simultaneously the analyst who has constructed the activity system model and identified the contradictions, and the facilitator who must not impose that analysis on participants. The mirror data you choose reflects your analytical judgements. The vocabulary you introduce shapes what participants can say. The questions you ask — and the ones you withhold — direct the conversation in ways that your reflexivity account must acknowledge. This dual position is not a problem to be resolved; it is a condition to be worked with honestly.
This is one of the most commonly underestimated aspects of Change Laboratory design, and it is worth addressing directly. Three sessions is not enough. It may feel like a manageable commitment for participants and a realistic scope for a PhD study, but it is insufficient to move through the cycle with genuine analytical depth. What typically happens in a three-session design is that Session 1 triggers Questioning, Session 2 reaches early Analysis, and Session 3 attempts Modelling before participants are ready for it. The result is a cycle that has been compressed rather than completed, and findings that reflect the limitations of the design rather than the complexity of the system.
The minimum for a modified Change Laboratory that can credibly claim to have engaged with the expansive learning cycle is five sessions. This is not an arbitrary figure. It reflects what each stage of the cycle actually requires: a session for mirror data to trigger questioning, a session for the two-triangle exercise that grounds the historical analysis, a session to name and evidence specific contradictions, a session for modelling alternatives, and a session to examine those models against the constraints of the system.
For a full Change Laboratory aimed at reaching Implementation and Consolidation, six to eight sessions is a more realistic target, typically spread over three to four months. This allows participants to attempt new practices between sessions and to return with evidence of what changed and what resisted change.
The interval between sessions matters as much as the number of sessions. A week to ten days between sessions gives participants time to return to their work with new eyes, notice things they would previously have explained away, and arrive at the next session with fresh observations. Sessions held in rapid succession — say, within days of each other — compress the cycle in a way that defeats its purpose. The Change Laboratory works precisely because it is interwoven with the activity it is examining.
The Change Laboratory is sometimes described in the literature in terms that make it sound more clinical and resource-intensive than it needs to be. In practice, a well-run session requires modest but specific physical provisions. Getting these right matters: a cramped room, no refreshments, and nowhere to put large paper will undermine a session before it begins. What follows is what Amara used, and what we recommend as a working baseline.
The room should be separate from the normal working environment of participants — not the ward, not the office where line managers are present, not a space where participants feel observed by their institution. This matters for the quality of discussion. Participants need to feel that what they say in the room stays in the room, at least until they have decided together how it should be used. A meeting room, seminar room, or even a community space away from the site is preferable to a breakout area within the workplace.
The room needs to be large enough for participants to sit around a table together and to have wall space for large paper. If wall space is not available, a long table or the floor works. What does not work is a room where everyone is seated in rows facing a screen. The Change Laboratory is a discussion, not a presentation. The physical arrangement should reflect that: circular or horseshoe seating, everyone able to see each other, the research materials visible and accessible to everyone in the room.
For five sessions with six to eight participants, the room should accommodate the group comfortably without feeling institutional. Community rooms, library seminar rooms, and hospital education centres have all worked well in practice. University seminar rooms, booked outside teaching hours, are usually adequate and often free to researchers.
The materials needed across the five-session cycle are modest: large paper for the wall, printed A3 activity system triangles, sticky notes, marker pens, and a way of photographing or storing the annotated diagrams between sessions. Some are used once; most recur throughout.
This may seem like a minor detail. It is not. Participants are giving you their time, often outside working hours or during a break. Tea, coffee, and something to eat are not a luxury — they are an acknowledgement of that generosity, and they materially affect the quality of discussion. A session that begins with participants waiting for a kettle to boil and helping themselves to biscuits is a session that begins with informal conversation, which loosens the room before the formal work begins. A session that begins with participants sitting in silence in a cold meeting room is a session that starts ten minutes late in atmosphere even if it starts on time on the clock.
Budget for refreshments at every session. In a UK context, a working figure is £5–10 per person per session for tea, coffee, milk, and a modest selection of biscuits or snacks. For five sessions with six participants, this is roughly £150–300 for the full Change Laboratory cycle. This should be costed into your research budget from the start and, where possible, covered by your institution rather than paid personally. Many universities have small research expenses funds that cover exactly this kind of cost; your supervisor or research administrator can advise.
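The arithmetic above can be checked with a few lines of throwaway code. The per-person figures are the handbook's own working UK figures; the function name is illustrative.

```python
# Quick budget check for refreshments, using the working figures above
# (£5–10 per person per session). Purely illustrative arithmetic.

def refreshment_budget(participants: int, sessions: int,
                       per_person_low: float = 5.0,
                       per_person_high: float = 10.0) -> tuple[float, float]:
    """Return the (low, high) total refreshment cost in pounds."""
    low = participants * sessions * per_person_low
    high = participants * sessions * per_person_high
    return low, high

low, high = refreshment_budget(participants=6, sessions=5)
# low == 150.0, high == 300.0 — the £150–300 range cited above
```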
Where institutional funding is not available, honest conversation with participants is better than cutting corners. In Amara’s study, the ward education budget covered refreshments for two sessions after the ward manager, who had heard about the first session from nursing staff, asked to be kept informed and offered practical support. That outcome was not planned — it emerged from the Change Laboratory process itself.
Securing five sessions with the same group of participants, over two to three months, requires more planning than three sessions. Be honest with participants and gatekeepers at the outset about what you are asking. A five-session commitment of ninety minutes each is seven and a half hours of participant time. That is significant, and asking for it requires a clear explanation of what participants will get from the process — not just what the research will produce.
The most effective framing is not “this will help my research” but “this is a structured process in which you will examine and analyse your own working situation, with support. The findings will be shared with you, and you will have the opportunity to contribute to how they are written up and used.” That framing is also, in a CHAT sense, more accurate: the Change Laboratory is as much for participants as it is for the researcher.
Schedule sessions at the same time and day each cycle where possible — this reduces the cognitive load of coordination and makes it easier for participants to protect the time. Early morning sessions (before shift changes), lunchtime sessions, or immediately after a shift ends tend to work better in healthcare settings than sessions in the middle of a working day. Ask participants what works for them in the scoping conversation, before booking anything.
Before approaching participants or gatekeepers, complete your practical planning: the room, the materials, the refreshments budget, and a provisional session schedule should all be settled first.
The full Change Laboratory as described by Engeström and Virkkunen involves multiple sessions over an extended period, often with institutional support, a dedicated physical space, and a research team. Many PhD researchers, working within time constraints and with limited institutional access, cannot implement the full model. A modified version is legitimate — but it must be modified thoughtfully, not minimally.
The minimum credible modified design for a PhD study is five sessions, as outlined above, spread over at least two to three months. A design with fewer than five sessions should be described in your methodology chapter honestly: as structured reflection sessions that engaged with the early stages of the expansive learning cycle, rather than as a Change Laboratory. That distinction matters, and examiners will notice if it is blurred.
What you cannot do — and what this handbook will not encourage — is run two or three sessions, call them a Change Laboratory, and claim to have traced the full expansive learning cycle. That is a misrepresentation of the method. It is also, practically, a weaker study than one that is honest about the scope of its intervention and rigorous about what that scope can and cannot support analytically.
A methodology chapter describing this design might read as follows:
“Five structured Change Laboratory sessions were conducted with [participants] over [timeframe], each lasting approximately [duration], with intervals of [one to two weeks] between sessions. Session 1 introduced mirror data drawn from [observation / interview / document analysis] to initiate the questioning phase. Session 2 used the two-triangle exercise to construct past and present activity systems collectively, engaging the analysis phase. Session 3 deepened the analysis by naming and evidencing specific contradictions using the activity system model as an analytical tool. Session 4 supported the modelling phase, in which participants proposed alternative forms of activity in response to the contradictions identified. Session 5 examined the proposed models against the known constraints of the system and returned to the original mirror data to assess whether participants’ understanding of their situation had developed. The process engaged with the questioning, analysis, modelling, and examining stages of the expansive learning cycle. Participants’ engagement with the model and their proposed alternatives constitute both a data source and a set of findings in their own right.”
By session four, the nurses at NorthCare have a shared vocabulary that did not exist in session one. They speak of the Tool–Rules contradiction without prompting. They distinguish between what the EPR was designed to produce and what they understand nursing to be for. Amara has not taught them this vocabulary — it has emerged from four sessions of structured engagement with their own practice. She is now confident that they are ready to model.
Session four focuses on a single question: if you could redesign how documentation is structured on this ward, what would you change? The proposals that emerge are structurally grounded in a way that Session 1 or 2 could not have produced. Nurses propose designated documentation windows built into shift patterns, a simplified EPR interface for high-dependency periods, and the reinstatement of a collective end-of-shift handover as the primary documentation event. They are not venting frustration — they are constructing alternatives informed by their analysis of the system’s contradictions.
In session five, Amara returns to the two numbers from session one — twelve minutes designed, thirty-eight minutes actual. She asks: has anything changed since we first looked at these? Two nurses report having begun keeping a supplementary paper log during their shifts and entering EPR data from it at the end — a more systematised workaround that three others have since adopted informally. Whether this constitutes the beginning of a new practice or a more sophisticated adaptation to an unchanged system is a question the group debates for twenty minutes. The debate itself is evidence of what the Change Laboratory has produced: participants who can analyse their own activity system with a precision and a vocabulary that were not available to them four months earlier. That is what five sessions makes possible. Three would not have been enough to get here.
Plan your session schedule before you begin fieldwork. Map each of the five minimum sessions onto your data collection timeline. Identify the gap between sessions (aim for one to two weeks each). Identify who needs to approve participant time and how that approval will be sought. A Change Laboratory that is not planned into the research design from the start is one that gets compressed under time pressure into three sessions — and three sessions is not enough.
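The scheduling step above can be sketched as a small planning calculation. Everything here is illustrative: the start date is hypothetical, and the ten-day gap is simply one value within the one-to-two-week interval the handbook recommends.

```python
# Illustrative planning sketch: map five sessions onto a calendar with
# the recommended one-to-two-week intervals, and total the participant
# time being requested. Dates, gap, and function name are hypothetical.

from datetime import date, timedelta

def session_schedule(start: date, sessions: int = 5,
                     gap_days: int = 10,      # within the 7–14 day range
                     minutes_each: int = 90):
    """Return the list of session dates and total participant hours."""
    dates = [start + timedelta(days=gap_days * i) for i in range(sessions)]
    total_hours = sessions * minutes_each / 60
    return dates, total_hours

dates, hours = session_schedule(date(2024, 9, 2))
# hours == 7.5 — the "seven and a half hours of participant time" noted earlier
# dates[0] to dates[-1] span 40 days, a little under six weeks
```

Seeing the span laid out on real dates makes the gatekeeper conversation concrete: you are asking for a named slot, on named days, over a named period.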
Then, separately, select your mirror data for Session 1. Choose three to four pieces of evidence that show a structural breakdown rather than an individual failure. For each piece, ask: could this be explained away as one person’s problem? If yes, choose different data. Your mirror must show a pattern, not an incident.
You cannot introduce the activity system model to participants in a Change Laboratory session until they have already begun questioning their current practice. If you introduce the theoretical framework before participants are ready to use it as an analytical tool — before the mirror data has done its work — it will be received as a lecture rather than a thinking aid. Sequence matters. Mirror data first. Theory when the need for it has been felt.
The Change Laboratory is most powerful not when it produces a new system, but when it produces participants who can analyse the one they are in — and a researcher who has witnessed that analysis taking place.
Engeström, Y. (2001). Expansive Learning at Work. Journal of Education and Work, 14(1), 133–156.
Virkkunen, J. & Newnham, D.S. (2013). The Change Laboratory. Sense Publishers.
Engeström, Y. (2008). From Teams to Knots. Cambridge University Press.
Part III — System Analysis in Practice
Use this chapter when you are moving from data collection into analysis, building system maps, or trying to explain why current tensions exist rather than just describing them.
Your diagrams evolve with your analysis, and your analysis deepens when you situate the present system within its history.
By month ten of her study, Amara has stopped working with a single activity system diagram. She has three. The first represents nursing practice before EPR implementation. The second represents the system as designed — what EPR implementation was supposed to produce. The third represents the system as it actually operates, including the informal workarounds nurses have developed to survive it. It is the distance between the second and third maps that her analysis must explain.
A CHAT analysis that only describes the present system is incomplete. Activity systems are not created from scratch — they are modified. What nurses do now is intelligible only in relation to what they did before. Their workarounds are not irrational responses to a new tool; they are the residue of a practice system that worked, pressed into service against a replacement that does not quite fit.
One nurse in Amara’s study puts it precisely: “The handover sheet told you everything in one glance. The EPR tells you everything in forty clicks.” This is not a comment about technology. It is a description of two different epistemic structures — two different theories of what clinical knowledge is, how it should be organised, and who it is for. The handover sheet was designed by nurses, for nurses, over decades of accumulated clinical practice. The EPR was designed by software architects, for administrators, with compliance and audit as the organising logic. When Engeström writes of tertiary contradictions — tensions between an existing system and a more advanced form of activity — this is precisely what he means. Not simply a new tool, but a new theory of work, introduced into a system whose participants have a different theory, equally developed and equally valid (Engeström, 1987, Learning by Expanding, ch. 4).
The analytical work of this chapter is to build at least three maps and hold them in relation to each other. A single diagram cannot do this work. Multiple diagrams, compared deliberately, are what make historical explanation possible.
Map 1 — the past system. At NorthCare before EPR: tools were the handover sheet, the physical observation chart, and verbal communication. Rules were organised around shift handover as the primary documentation moment — retrospective, collective, and time-bounded. The division of labour distributed documentation responsibility across the team, with senior nurses synthesising information for handover. The object — safe patient care — was served by a system whose rhythms matched the rhythms of acute nursing work.
Map 2 — the designed system. The EPR implementation assumed that real-time individual documentation at the bedside would improve care quality and create auditable records. The design logic imagined nursing work as a series of discrete, documentable events occurring at a pace that permitted contemporaneous recording. This is a coherent model of nursing practice — it simply does not describe nursing practice as it occurs on a busy medical admissions ward.
Map 3 — the actual system. What Amara observes is an improvised hybrid: nurses using the handover sheet as their primary clinical tool and the EPR as a compliance instrument, entering data in bulk at the end of shifts from memory and from paper notes. The division of labour has fragmented — every registered nurse now carries individual documentation responsibility — while the workload has increased. The object is increasingly split: nurses are simultaneously oriented toward patient care and toward record completion, and the two are in structural competition for the same finite time.
The analytical move from Map 2 to Map 3 is where explanation begins. It is not enough to say that the system is not working as designed. You must explain why — in terms of the structural relationships between elements, not in terms of individual failure.
Descriptive: “Nurses are using workarounds because the EPR is difficult to use.”
Analytical: “The workarounds nurses have developed are a structural response to a tertiary contradiction between the EPR system’s embedded theory of nursing work — discrete, individual, contemporaneous documentation — and the established activity system’s theory of nursing work — collective, synthesised, retrospective handover. The workarounds are not resistance to the tool; they are the activity system defending its object against a tool that threatens it.”
Notice what the analytical version does. It names the contradiction type. It characterises each system’s theory of work. It locates the workarounds not as a problem to be solved but as evidence of a structural tension to be explained. And it grounds the explanation in the relationship between system elements rather than in the behaviour of individuals.
Map 1 (Past): Tools: ______ / Rules: ______ / Object: ______ / Division of Labour: ______
Map 2 (Designed): What the change assumed: ______ / What theory of work does it embed?: ______
Map 3 (Actual): What is really happening: ______ / Where does it diverge from Map 2?: ______
The explanation lives in the distance between Map 2 and Map 3, understood through the history embedded in Map 1.
Build your past-system map. Interview at least one participant who experienced the activity before the change you are studying. Ask them to describe: what tools they used, what the rules required, how work was divided, and what the activity was trying to achieve. Compare their account with your present-system map. Where do the elements differ? Where does a difference create a current tension? That is where your historical explanation begins.
You cannot write a historically grounded CHAT analysis until you have built your past-system map. This requires data about what came before — documents, accounts, historical records, participant memories. If you have not collected this data, return to your fieldwork. The present system is explained by the past system it displaced. Without that comparison, your analysis is synchronic. CHAT demands diachronic explanation.
The most analytically powerful moment in a CHAT study is when you can show not just that a tension exists, but where it comes from historically and why it persists structurally.
Use this chapter when you are drafting or restructuring your findings chapter and need to move from thematic description to relational CHAT explanation.
Structure your writing around contradictions, not data sources — and let the system logic hold the argument together.
Amara submits a first draft of her findings chapter in month fourteen. It is organised into three sections: Nurse Perspectives, Ward Manager Perspectives, and IT Support Perspectives. Her supervisor reads it overnight and returns it with a single annotation at the top: “This is three summaries. Where is the system?”
Amara spends a week restructuring. The same data — the same quotations, the same observations — is reorganised around her three contradictions instead of her three participant groups. The chapter becomes shorter. The argument becomes visible. Her supervisor reads the second draft and says: “Now I can see what you are claiming.”
Writing up CHAT research requires you to maintain system logic throughout. Organise your findings around contradictions and system relationships, not data sources. Each section can follow a clear structure: introduce the contradiction, present supporting data, explain the relationship, link to system development.
| Observation (Descriptive) | CHAT Explanation (Analytical) |
|---|---|
| “Nurses are frustrated with the EPR.” | A secondary contradiction exists between the Tool (EPR) and Rules (shift patterns), which structurally prevents real-time documentation during high-dependency care periods. |
| “Staff are using workarounds.” | The Object (safe, timely care) is not aligned with the Tool’s design logic (complete, auditable records), producing informal compensatory practices. |
| “Compliance is high but care experience is poor.” | A tertiary contradiction exists between the established nursing practice system and the EPR, which embeds a partially incompatible theory of clinical work. |
| “Managers see success; nurses see failure.” | Multiple system actors hold different interpretations of the Object, reflecting a quaternary contradiction between the ward activity system and the management activity system. |
Descriptive: “Several nurses mentioned that the EPR takes a long time to use. Some said they completed records after their shifts.”
CHAT-oriented: “A secondary contradiction emerged between the EPR system (tool) and established shift structures (rules). Nurses described documentation requirements exceeding the time available during the shift, leading to records being completed after hours. This pattern reflects not individual inefficiency, but a structural mismatch between a tool designed for continuous real-time entry and a system of work organised around intensive, intermittent patient contact.”
Each section of your findings chapter: introduce the contradiction and name the elements in tension; present supporting data from multiple sources; explain the relationship — how and why these elements produce the pattern you observed; then link to system development by connecting the present tension to its historical origins or trajectory.
Take your current findings chapter draft — or, if you have not started, a list of your main themes — and apply the “So What?” test to every section heading. If the heading is a participant group (“Nurse Perspectives”), a data source (“Interview Findings”), or a theme (“Frustration with Technology”), restructure it around the contradiction that section is evidencing. Your headings should name tensions between system elements, not categories of data.
You cannot begin writing your findings chapter until you have a list of your contradictions with at least two evidential sources for each. Write that list first. It is the skeleton of your chapter. Every paragraph you write must serve one of those contradictions. If it does not, it belongs in the appendix or the background chapter — not in your findings.
Effective CHAT writing names the system, evidences the tensions, explains the relationships, and traces the history — description alone is never enough.
Part IV — Defence and Completion
Use this chapter when you are preparing for your viva, writing your elevator pitch, or anticipating examiner challenges to your theoretical and methodological choices.
Know why CHAT, what it cannot do, and how to defend every interpretive decision.
In her viva, Amara is asked: “Could you not have explained these findings simply by saying that the EPR system was poorly implemented?” It is the question she has been waiting for. She answers: “Poor implementation is an individual or managerial explanation — it locates the problem in a decision or a failure. My study argues that the problem is structural. Even a perfectly implemented EPR system would produce these tensions, because the tensions arise from the collision between two different theories of nursing work embedded in the old and new systems. Implementation quality cannot resolve a tertiary contradiction. That requires a different analysis, and a different kind of intervention.” The examiner nods and writes something down. It is the turning point of the viva.
The viva is not a test of memory. It is a discussion where you explain your reasoning, justify your choices, and reflect on your findings. Your central anchor throughout: what is your unit of analysis? In CHAT, this is the activity system — not the individual.
“This study examined [context] as an activity system. It identified key contradictions, particularly between [element] and [element]. These tensions were explored using [method]. The findings suggest that [main insight about system development].”
If you feel unsure during a question, return to your core framework. If you do not have a complete answer, say: “That is an area the study did not explore in depth, but it may relate to…” This shows awareness of scope and openness to further development.
Record yourself giving your elevator pitch. Play it back. Ask: does it name the activity system? Does it identify at least one contradiction by type? Does it say what method was used and why? Does it state the main finding in a way that could not have been produced by thematic analysis? If the answer to any of these is no, revise and record again. The pitch should be fluent in 90 seconds without notes.
You cannot sit your viva until you can answer the question “Why not thematic analysis?” in two sentences, without notes, without hesitation. Practice that answer until it is automatic. Everything else in your viva defence rests on the credibility of that justification.
A well-prepared viva candidate knows their system as clearly as their data — they can move between the two without losing the thread of either.
Use this chapter when you need an overview of the whole process — at the start to plan, or at any point to locate yourself in the research arc.
Eleven stages, from establishing the initial research context to defending your epistemological position.
Think of this workflow not as a checklist to complete but as a map of the research arc. Each stage builds on the one before it, and each produces something concrete — a named system boundary, a set of coded transcripts, an evidenced contradiction, a draft findings section — that the next stage depends on. The stages that feel most uncertain — identifying contradictions, tracing historical development, restructuring the findings chapter — are the stages where the most significant analytical work is done. Do not rush them.
This workflow is not strictly linear. Returning to earlier steps and revising your system model is a normal part of the process — not a sign of difficulty, but a sign of deepening analysis.
The checklist at the back of this handbook is most useful if you consult it at stages 4, 7, and 9 of this workflow — not only before submission. At stage 4 (contradictions identified), use the contradictions section to test whether your evidence is sufficient. At stage 7 (system development traced), use the development-over-time section to confirm your historical analysis is in place. At stage 9 (argument refined), use the structure-of-findings section to audit your chapter before you consider it complete. Print the checklist. Tape it where you work. Mark it as you go.
Print this workflow. Mark the stage you are currently in. Mark the stage you thought you were in before reading this chapter. If they differ, identify what is incomplete in the current stage and what you need to produce before moving forward. Tape the marked workflow above your desk. Update it when you move between stages.
You cannot move to the next stage of this workflow until the current stage has produced something you can point to: a named system boundary, a provisional map, a named contradiction with evidence, a structured comparison of past and present systems. If you cannot produce the output, the stage is not complete. Return to it before moving forward.
A CHAT thesis is built iteratively — the system you submit will look very different from the one you first sketched, and that development is the evidence of your analytical work.
Use this chapter when you are writing your reflexivity section, designing consent procedures, or navigating the dual position of analyst and facilitator.
The researcher is always part of the system they are studying.
Ethics in CHAT research is not a box to tick before fieldwork begins. It is a continuous analytical and relational responsibility that runs through every stage of the study — from the way you frame your research questions, to the mirror data you select, to the interpretations you publish. The activity system you construct is always, in part, a representation of the people who gave you access to their working lives. That representation carries consequences.
By month six, Amara notices something uncomfortable. Her presence on the ward has become a resource. Nurses mention her study in conversations with managers. One nurse says explicitly: “Maybe when your research comes out, they’ll actually listen.” Amara is no longer only observing the activity system. She is part of it. Her study has become a tool that participants are using in their own struggle with the EPR system. She must decide what that means for her analysis — and she must write about that decision in her thesis.
Ethical conduct in CHAT research goes beyond institutional compliance. It requires active reflexivity about how your positioning shapes what you see, interpret, and represent. In the Change Laboratory you are simultaneously outside the system (analysing) and inside it (engaging with participants). This dual position is not a problem to resolve — it is a condition to acknowledge and work with transparently.
Write one page in your research journal about your position in relation to your research site. Address: what assumptions did you bring to the fieldwork? How did your presence affect the activity you were observing? What interpretive choices have you made in constructing your system map, and what alternatives did you consider and reject? This page is the first draft of your reflexivity section.
You cannot complete your methodology chapter without a reflexivity section that accounts for your position within the system you are studying. This is not optional self-disclosure — it is an analytical requirement. Your system model is a construction. Your data is a selection. Your contradictions are interpretations. Every one of those decisions was made by a researcher with a position, a history, and a set of assumptions. Name them.
Reflexivity in CHAT research means making your interpretive choices visible — the system model you construct is always a product of your analytical decisions, and those decisions belong in your thesis.
Use this chapter when you are reviewing a draft chapter, preparing for supervision, or suspecting your analysis has drifted away from CHAT principles.
These patterns are common, examiners notice them, and each has a clear remedy. Recognising your own work in any of them is the beginning of the fix.
These four patterns appear repeatedly in CHAT theses across disciplines and institutions. They are not signs of inadequate ability — they are signs of the genuine difficulty of the analytical work CHAT demands. Most supervisors have seen all of them. Most examiners have assessed theses that contain them. What distinguishes a strong CHAT thesis is not the absence of these tendencies early in the process, but the rigour with which they are recognised and corrected before submission.
The mistake: drawing your activity system triangle in your first chapter and reproducing it unchanged throughout the thesis. The diagram appears in the methodology, reappears in the findings, and is cited in the discussion — always the same, as if the analysis produced no surprises and the system revealed nothing not already known at the start.
Why it happens: drawing the diagram feels like completing a task. Once drawn, revising it feels like admitting the first version was wrong. In fact, revision is the evidence of analytical work. A system map identical in your final chapter to your first is a sign the analysis has not progressed — that you have described the system rather than investigated it.
The fix: date every version of your system map and keep them all. In your final thesis, show the development explicitly: “My initial map identified the EPR as the primary tool. After interview analysis, I revised this to show two tools in tension. After the two-triangle exercise, I identified a third tool — the informal handover sheet — that had been invisible in my original mapping.” That narrative of revision is not a confession of early error. It is the evidence of your analytical development, and it should be in your thesis, not hidden from it.
Amara keeps a folder labelled “System Maps.” By submission it contains seven versions, each dated and annotated with the data event that prompted the revision. Version 1 has two tools. Version 4 has four tools with a margin note: “handover sheet is not informal — it is the actual primary tool.” Version 7 shows two separate systems — ward nursing and hospital management — with the EPR at the intersection. Her examiner asks about the development of the system map in the viva. She talks for eight minutes. The examiner writes “impressive analytical self-awareness” in her notes.
The mistake: naming a tension as a contradiction without sufficient evidential grounding. A single interview excerpt, one observed incident, or a theoretical expectation that a tension should exist — none of these is sufficient. Yet students under pressure to produce findings often reach for contradiction labels before the evidence warrants them.
Why it happens: the CHAT framework creates an expectation of contradictions, and students feel pressure to find them. If the data is not obviously generating contradictions, the temptation is to impose them. Examiners detect forced contradictions quickly: they ask for the evidence and the student finds they have one example rather than a pattern.
The fix: for every named contradiction, require yourself to provide evidence from at least two independent data sources showing the same structural pattern. An interview extract and an observation that corroborates it. A document establishing the rule and an interview showing how the tool violates it. A pattern in timestamps confirming what nurses describe verbally. If you cannot produce two independent sources, the contradiction is a candidate, not a finding. Say so in your thesis and continue collecting data until the evidence is there — or the candidate is abandoned.
A second test: can the pattern be explained without CHAT? If the tension disappears when framed as “the software is poorly designed,” it is not yet a structural analysis. A structural contradiction must name the system elements in tension and explain why the structure of the activity — not an individual decision or a product flaw — produces the pattern.
In her early analysis Amara identifies what she thinks is a primary contradiction within the EPR tool itself: designed for both clinical documentation and administrative compliance, with these two purposes incompatible. She writes it up as a finding. Her supervisor asks: what is your evidence that these functions genuinely conflict, rather than just being differently prioritised by different users? Amara returns to her data. She finds attitudinal evidence in the interviews but no structural evidence — no observation, document, or record showing the two functions conflicting in practice. She demotes it to a “candidate tension” and designs two additional observation sessions specifically to look for evidence. She finds it — but only in Session 3 of the Change Laboratory, when a nurse describes abandoning a clinical assessment function mid-shift because completing it would delay the administrative compliance record the ward manager reviews. That is the evidence. It took six more weeks to find it. The contradiction is stronger for the wait.
The mistake: structuring your findings chapter around themes, participant groups, or data sources rather than around the relationships between system elements and the contradictions those relationships produce. A chapter organised as “Nurse Perspectives,” “Manager Perspectives,” and “IT Support Perspectives” is a thematic analysis. A chapter organised as “The Tool–Rules Contradiction,” “The Tool–Object Contradiction,” and “The Tertiary Contradiction between Past and Present Practice Systems” is a CHAT analysis. The same data can produce either. The choice of organising principle is the analytical decision.
Why it happens: interview data arrives organised by participant. The natural temptation is to write from that organisation. Thematic analysis is also deeply familiar from prior training — many students have done it well before and find its logic reasserting itself under the pressure of writing up.
The fix: before drafting your findings chapter, write a list of your named contradictions with their evidential sources. That list is your chapter outline. Each section takes one contradiction as its subject, draws on data from multiple sources to evidence it, explains the relational mechanism that produces it, and connects it to the historical development of the system. If a piece of data does not serve any of your named contradictions, ask whether it belongs in the findings chapter at all — or in the background or literature review.
Amara submits a first findings draft organised around three participant groups. Her supervisor returns it with one annotation at the top: “This is three summaries. Where is the system?” Amara spends a week restructuring. The same data, the same quotations, the same observations — reorganised around her three contradictions. The chapter becomes shorter. The argument becomes visible. Her supervisor reads the second draft and says: “Now I can see what you are claiming.”
The mistake: beginning your findings chapter in clear CHAT framing — naming elements, constructing relationships, evidencing contradictions — and then allowing the prose to drift, section by section, into generic qualitative description. By the third or fourth section, the activity system has disappeared. Participants are being quoted at length. Themes are being reported. The CHAT framework has become decoration rather than architecture.
Why it happens: sustaining analytical framing across a long chapter is harder than it sounds. Writing is tiring, and the natural mode of qualitative writing — describing what participants said and did — reasserts itself when concentration lapses. The activity system that felt vivid during analysis can fade during writing, especially when the writing is going slowly.
The fix: before writing each section of your findings chapter, write one sentence that states the analytical claim the section will make: “This section demonstrates that the secondary contradiction between Tool and Rules is not uniformly distributed across the division of labour, but is concentrated in the position of rotating-shift nurses who have no documentation time built into their working pattern.” That sentence is your analytical compass. Every paragraph in the section should either evidence the claim, qualify it, or develop it. If a paragraph does neither, it does not belong in the section. The sentence keeps the system logic visible while you write, and it becomes the topic sentence of the section when the draft is done.
A fifth pattern worth naming separately: using Engeström (1987) as the only theoretical reference, citing it for every conceptual claim and treating it as if the CHAT literature ended there. Engeström (1987) is essential, but it is thirty-eight years old. The field has developed substantially since then — through Engeström’s own later work, through Edwards on relational agency, through Virkkunen and Newnham on the Change Laboratory, through empirical researchers in healthcare, education, and professional practice who have applied, extended, and sometimes challenged the framework. A thesis that engages only with the 1987 text signals to examiners that the candidate has not read the field. It also misses conceptual tools — knotworking, formative intervention, double stimulation as a methodological principle — that a more current reading would make available. Read broadly within the tradition, then cite purposefully.
Read your current findings chapter draft and do three things. First, highlight every paragraph that does not name a system element, a relationship between elements, or a contradiction — those paragraphs need to justify their presence or be moved. Second, check each named contradiction for evidence from at least two independent sources — if any contradiction has only one evidential source, mark it as a candidate and note what further data would confirm it. Third, read your section headings: if they name participant groups or data sources rather than system relationships or contradictions, restructure before your next supervision meeting.
Before submitting your thesis, read your findings chapter and ask: could this have been written using thematic analysis? If yes — if you could remove every reference to the activity system and the argument would still hold — you have written a thematic analysis with CHAT terminology added. Return to the system. Restructure around the contradictions. Make the relational logic the spine of the chapter, not the decoration.
If your findings chapter could have been written using thematic analysis, it has not yet become a CHAT analysis — the system, not the theme, must be the organising principle.
Use before submission and as a viva preparation tool. Each item should be answerable in your own words.
Terms marked * in the chapter text link here. Entries are in alphabetical order. Core terms appear first; advanced concepts follow.
Core Terms
Advanced Concepts
A tiered reading pathway. Begin with the first tier before moving to the others.
Start here — Core ideas
Vygotsky, L.S. (1978). Mind in Society: The Development of Higher Psychological Processes. Harvard University Press. Introduces mediation, tools and signs, and the social nature of learning — the foundation on which CHAT is built.
Leontiev, A.N. (1978). Activity, Consciousness, and Personality. Prentice-Hall. Develops the concepts of activity, action, and operation — the structural layers beneath the system model.
Engeström, Y. (1987). Learning by Expanding: An Activity-Theoretical Approach to Developmental Research. Orienta-Konsultit. The foundational text: activity systems, contradictions, and the expansive learning cycle. Read this first.
Next — Applying CHAT
Engeström, Y. (2001). Expansive learning at work: Toward an activity-theoretical reconceptualization. Journal of Education and Work, 14(1), 133–156. doi
Virkkunen, J., & Newnham, D. (2013). The Change Laboratory: A Tool for Collaborative Development of Work and Education. Sense Publishers. doi. The essential methodology reference for intervention-oriented CHAT research.
Bligh, B., & Flood, M. (2017). Activity theory in empirical higher education research: Choices, uses, and values. Tertiary Education and Management, 23(2), 125–152. doi
Engeström, Y. (2008). From Teams to Knots: Activity-Theoretical Studies of Collaboration and Learning at Work. Cambridge University Press. doi
Advanced — Extending CHAT
Engeström, Y., & Sannino, A. (2010). Studies of expansive learning: Foundations, findings and future challenges. Educational Research Review, 5(1), 1–24. doi
Edwards, A. (2010). Being an Expert Professional Practitioner: The Relational Turn in Expertise. Springer. doi. Develops relational agency as a concept for professional practice research.
Daniels, H. (2008). Vygotsky and Research. Routledge. doi
Sannino, A., Engeström, Y., & Gutiérrez, K.D. (2009). Learning and Expanding with Activity Theory. Cambridge University Press. doi
Flood, M. (2018). Activity theory and its application in educational technology and learning design. Journal of Learning Design, 11(3). doi
Bligh, B. (2020). Designing a Change Laboratory: Outline plan. PubPub. Link
You began this handbook with a situation that felt complex. If you have worked through it carefully, you now have something more than a framework — you have a way of seeing. That is what CHAT, at its best, provides: not a set of labels to apply to data, but a lens that changes what you notice, what you ask, and what you are able to explain.
The activity system you construct in your thesis will not be perfect. It will be provisional, contested, and revised more times than you expect. The contradictions you identify will be harder to evidence than you hoped, and some will resist clean naming until very late in the process. The Change Laboratory sessions will not always go the way you planned them. The writing will take longer than the analysis. These are not signs that your study is failing. They are signs that you are doing the work.
What CHAT offers — and what no other approach quite offers in the same way — is a framework for explaining structural problems structurally. When you sit in your viva and an examiner asks why nurses were resistant, or why the tool was not adopted, or why the policy did not produce the outcomes it intended, you will be able to say: it was not resistance, it was a tertiary contradiction. It was not adoption failure, it was a structural mismatch between the tool’s embedded theory of practice and the activity system it was introduced into. It was not a policy failure, it was a quaternary contradiction between two neighbouring systems with incompatible objects. Those answers are only available to you because you chose to look at the system rather than the individual. That choice is what this handbook has been about.
One final thing. The people who participated in your study — the nurses, the teachers, the administrators, whoever they were — gave you their time, their candour, and their trust. They described a working life that is often harder than it looks from the outside. Whatever your thesis produces analytically, it should also honour that generosity. CHAT, used well, does not just explain why systems are difficult. It creates the conditions in which the people inside those systems can begin to see them differently, and to act. If your research contributes to that, even modestly, even partially, it has done something worth doing.
Good luck with the study. The system will reveal itself in time.