I study what people actually do when they work together through technology, and design systems around that reality rather than assumptions about it.
That applies equally to AI: frontier models will come to equal or exceed some human capabilities, but the next frontier of collaboration will be AI that adapts to human intentionality.
That focus first drew me to study how strangers open conversations on Internet Relay Chat (1997) and how long-distance couples cope with video-calling glitches (2007–2015). It led me to Microsoft Research, where I have studied hybrid meetings, telepresence robots, avatar communication, and the pandemic's impact on meetings.
In recent years, this line of work has converged on a new challenge: how should Generative AI support human collaboration without undermining the thinking that makes collaboration valuable?
My research now sits at the intersection of three areas: understanding collaboration as social practice, designing AI-augmented collaboration tools, and protecting human cognition in the age of AI. The practical impact is direct, informing Microsoft Teams features, 12 US patents, and the New Future of Work Report series.
The foundation: close analysis of how people actually interact through technology — conversation practices, expressive possibilities, meaning-making. You cannot design good collaboration tools without this understanding first.
Building on that understanding, designing systems that support collaboration as a social practice: hybrid meetings, telepresence, spatial audio, avatar presence. The insight: social reality matters as much as objective reality.
The current challenge: as GenAI accelerates, ensuring AI scaffolds the thinking that makes collaboration valuable — rather than bypassing it. Tools for thought are tools that extend human cognition, not replace it.
Before you can design for collaboration, you need to understand how people actually interact.
My earliest research, studying how strangers manage interaction on Internet Relay Chat (Human Communication Research, 1997), established an approach I have applied ever since: close analysis of how people actually interact through technology, rather than how they say they do.
This conversation-analytic method revealed that even in the simplest text-based chat, people construct sophisticated social practices: openings, turn-taking, and recovery from ambiguous silences. These are not features of the technology but human achievements accomplished through the technology.
The same method revealed how long-distance couples opportunistically use video-calling glitches as relational resources, how domestic soundscapes can be instrumentalised for ambient awareness, and how membership categorisation works as embodied social action.
The same pattern holds beyond one-to-one interaction: people repurpose the expressive affordances of technology to make meaning together.
Crisis Memes (2013) showed that the templatability of image macros is what makes them central to internet culture and freedom of expression: people take a shared visual frame and rework it for their own situation, turning private feeling into public, remixable commentary.
The Evolution of Fail Pets (2012) traced how error screens evolved from terse alerts into whimsical brand mascots — a small design move that turned moments of failure into occasions for empathy, humour, and brand identity.
Across both, the pattern is the same: technology sets the frame, but people decide what the frame means, together.
Design takeaway: study what people do with technology, not just what they say they do. Social practices, not features, are the design unit.
With that understanding: design systems that support the social reality of collaboration.
People talk collaboration into being: negotiating attention, managing speaking turns, signalling engagement, and expressing relational bonds.
Across 30+ papers on meetings, hybrid work, and telepresence, the same patterns recur:
Design takeaway: meeting design is an ecology problem, not a UX problem. Fix the interplay of tools, spaces, and practices — not individual interfaces.
Hybrid meeting breakdowns arise not from individual UX failures but from mismatches in the meeting ecology: the interplay of practices, technologies, and spaces. This reframing shifts design from fixing individual interfaces to supporting the meeting as a whole.
In a 75-person experiment, enabling spatial audio led to more equitable turn-taking and higher perceived social presence, because audio spatiality gives listeners the same environmental cues they rely on in person.
A large-scale analysis of meeting behaviour at a global technology company found that multitasking is pervasive, contextual, and sometimes functional. People switch attention strategically based on meeting relevance, not just distraction.
The text chat alongside video meetings creates a secondary channel that enables inclusion (quiet participants speak up) but also division (side conversations fracture attention).
Three studies on avatars and 3D video in meetings show that representation choices shape mood perception, evaluation, and inclusion in ways that pure fidelity does not predict.
Now the challenge: ensure AI scaffolds the thinking that makes collaboration valuable.
Generative AI is transforming knowledge work at unprecedented speed. The central question is what AI should do, not what it can do. My research finds a critical tension: automation, fluency, and speed can erode the critical thinking, metacognition — monitoring your own thinking process — and intentionality that make human work valuable.
The next generation of AI tools must keep humans in the cognitive loop, scaffolding metacognition rather than bypassing it.
Design takeaway: AI interfaces must support verification, uncertainty inspection, and task stewardship — not just speed. Keep the human in the cognitive loop.
In the 2nd most downloaded paper on the ACM Digital Library, we surveyed 319 knowledge workers and found that higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more. GenAI shifts critical thinking toward verification and task stewardship rather than generation. Therefore AI interfaces must actively support verification, uncertainty inspection, and stewardship — building in moments to question rather than accelerate past them.
Using GenAI well requires monitoring your own thinking: knowing when to trust AI output and when to question it. This metacognitive load is the hidden cost of GenAI adoption, and also its opportunity. Tools can be designed to scaffold metacognition rather than bypass it. In interface terms: confidence signals before AI responses, mid-task checkpoints, comparison views of alternatives, goal review prompts, and explicit reasoning handoffs.
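Those interface patterns can be sketched concretely. The following is a minimal, hypothetical Python sketch (all names are illustrative, not an actual product API): an AI answer is wrapped with a confidence signal, a comparison view of alternatives, and a verification nudge whose strength scales with the model's uncertainty.

```python
from dataclasses import dataclass

@dataclass
class ScaffoldedResponse:
    """An AI answer wrapped with metacognitive scaffolding (illustrative)."""
    answer: str
    confidence: float          # model's self-reported confidence, 0..1
    verification_prompt: str   # nudges the user to check, not just accept
    alternatives: list[str]    # comparison view of other plausible answers

def scaffold(answer: str, confidence: float, alternatives: list[str]) -> ScaffoldedResponse:
    """Attach a verification nudge whose strength scales with uncertainty."""
    if confidence < 0.5:
        nudge = "Low confidence: which claims here can you verify independently?"
    elif confidence < 0.8:
        nudge = "Moderate confidence: does this match what you already know?"
    else:
        nudge = "Before accepting, restate the goal this answer should serve."
    return ScaffoldedResponse(answer, confidence, nudge, alternatives)
```

The point of the sketch is the design stance, not the thresholds: the interface builds in a moment to question the output rather than accelerating past it.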
The Metacognitive Demands and Opportunities of Generative AI · CHI 2024
Drawing on Bainbridge's ironies of automation, we showed that GenAI can reduce productivity through over-reliance, loss of skill, and misalignment between user intent and AI output. The solution lies in better interaction design that keeps humans in the cognitive loop, rather than in more or less AI.
Without a solid understanding of why we are meeting, meetings will always be ineffective. Our work on meeting goals is unique in both HCI and Meeting Science, both for its empirical focus on goals and because we have now explored every phase of the meeting lifecycle: before, during, and between meetings.
Generative AI is uniquely suited to this problem. Goals are expressed in natural language, they are ambiguous, and they evolve as work unfolds. These are exactly the conditions that rule-based meeting tools have always failed to handle. We argue GenAI helps in three ways: it can elicit tacit goals by asking the questions organisers rarely ask themselves; it can scaffold reflection at moments when people are most able to act on it (before, during, and between sessions); and it can maintain continuity by carrying intent across the meeting lifecycle, so that goals accumulate rather than evaporate. The risk, of course, is that the same fluency that surfaces intent can also substitute for it, which is why our designs treat GenAI as a prompt for human thinking, not a replacement for it.
Crucially, explicitly expressed goals create both direct value for users (clearer plans, sharper meetings, better follow-through) and added value through an AI flywheel. As generative AI shifts human work toward planning and orchestration, externalised reflections become structured inputs for AI collaboration systems. Human reflection generates intent-rich data that improves AI assistance, which in turn reduces the cognitive burden of maintaining intentionality and yields artefacts that make all future collaboration more effective, reinforcing continued reflection.
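The idea of carrying intent across the lifecycle can be made concrete. Below is a minimal Python sketch (the data model, example questions, and helper are hypothetical, not a shipped system): goals elicited before a meeting persist as structured records, and unresolved goals are carried forward between sessions so intent accumulates rather than evaporates.

```python
from dataclasses import dataclass, field

@dataclass
class MeetingGoal:
    statement: str               # organiser's goal in natural language
    success_condition: str       # what "done" would look like
    status: str = "open"         # "open" | "met" | "carried-forward"
    history: list[str] = field(default_factory=list)  # notes across sessions

# Illustrative elicitation questions an AI assistant might ask before a
# meeting, to surface tacit goals organisers rarely articulate themselves.
ELICITATION_QUESTIONS = [
    "What decision should be made by the end of this meeting?",
    "What would make you cancel this meeting as unnecessary?",
    "Who needs to leave with a different understanding than they arrived with?",
]

def carry_forward(goals: list[MeetingGoal], note: str) -> list[MeetingGoal]:
    """Between sessions, carry unresolved goals into the next meeting,
    annotating each with what happened, so intent accumulates."""
    unresolved = [g for g in goals if g.status != "met"]
    for g in unresolved:
        g.status = "carried-forward"
        g.history.append(note)
    return unresolved
```

The sketch captures the design principle rather than any particular implementation: the goal record, not the meeting, is the unit of continuity.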
The formative study underpinning the program: how organisers and participants mentally model meeting goals, and what intentionality support would have to look like to fit those models.
Mental Models of Meeting Goals: Supporting Intentionality in Meeting Technology · CHI 2024
A preregistered field experiment with 361 employees across 7,196 real meetings. Even a brief pre-meeting goal-reflection prompt improved self-reported awareness and behaviour.
An AI-assisted prospective reflection tool that helps organisers articulate success conditions before a meeting begins, catalysing intentionality at the moment plans are still malleable.
Catalyzing Meeting Intentionality with AI-Assisted Prospective Reflection · CHIWORK 2025
A GenAI-powered adaptive-interface technology probe spanning the planning and running of video meetings. Used as a generative design vehicle to surface what intentionality support could look like across the lifecycle.
Active and passive AI-assisted goal reflection helps participants recalibrate mid-meeting without disrupting flow. Deployed as a technology probe inside Microsoft.
Are We On Track? AI-Assisted Active and Passive Goal Reflection During Meetings · CHI 2025
Interfaces that use Generative AI to scaffold the connective work people do between meetings: tying threads forward, surfacing unresolved goals, and making the gap between sessions productive rather than lost.
Designing Interfaces that Support Temporal Work Across Meetings with Generative AI · DIS 2025
12 US patents · 7 Microsoft Hackathon wins (1 Grand Prize, 5 Firsts, 1 Second).
These patents and prototypes trace directly from research insights: meeting-centred design → spatial hybrid meeting systems; avatar and telepresence research → personalised animation and companion devices; robotic telepresence → multi-endpoint mixed reality.
Dynamically-generated UI for AI: everyday, any purpose, anywhere. Now Promptions on AI Foundry Labs.
Teach your avatar to move like you, using few-shot training.
Connecting children from anywhere to career role models.
Mobile Intelligent Lenses for everyday Mixed Reality at work.
Tangible Active Cloud Intelligence: making the cloud a conversation.
Surface Hub cross-device patch-panel UI. Productized as Companion Mode in Microsoft Teams. Garage Wall of Fame.
Most AI interfaces put the burden of exploration on the user: you write a prompt, get a response, guess at a better prompt. Promptions inverts this. It generates contextual UI elements (radio buttons, checkboxes, toggles) that let users explore variations of the AI's response in real time.
Join a growing movement to make AI more user-centred, customisable, and accessible. Promptions is open source: clone the repo and build with it today: github.com/microsoft/Promptions.
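To illustrate the inversion, here is a minimal Python sketch of the idea (illustrative only, not the actual Promptions API): each variable dimension of an AI response is mapped to a contextual control, so users explore variations by manipulating UI elements instead of rewriting prompts.

```python
from dataclasses import dataclass

@dataclass
class Control:
    kind: str          # "radio" | "toggle" | "checkbox"
    label: str
    options: list[str]

def controls_for(dimensions: dict[str, list[str]]) -> list[Control]:
    """Map each variable dimension of an AI response to a UI control:
    3+ options -> radio group, exactly 2 -> toggle, 1 -> checkbox."""
    controls = []
    for label, options in dimensions.items():
        if len(options) > 2:
            kind = "radio"
        elif len(options) == 2:
            kind = "toggle"
        else:
            kind = "checkbox"
        controls.append(Control(kind, label, options))
    return controls
```

For example, a drafting assistant that detects "tone" (three options), "length" (two), and "cite sources" (on/off) as variable dimensions would render a radio group, a toggle, and a checkbox, letting the user steer the response without guessing at a better prompt.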
We are at an inflection point. Generative AI is being integrated into every collaboration tool, but most implementations treat AI as a productivity accelerator: summarise this, draft that, automate the other. My research suggests a different path.
Instead of asking "what did the AI do for you?", we should ask "did the AI help you think more clearly about why you're doing this?"