Jofish Kaye

Jofish Kaye is a computer scientist who aligns design, data, and qualitative exploration for human-centered product innovation.

His superb track record as a scholar includes more than 100 publications and affiliations with MIT, Cornell, and Microsoft Research. Surprisingly, his career has been in corporations: following stints at Nokia, Yahoo, and Mozilla, he is now a director of UX and AI at Anthem.

I have been a fan of his work for many years – he is a brilliant research writer whose takes are as entertaining as they are rigorous and instructive. In the longest episode of Design Disciplin to date, we had a massively wide-ranging conversation on pretty much all of our mutual interests: research, philosophy, social sciences, leadership, and more.

The transcript below has been edited to maintain clarity and brevity.


There is a question I have been pondering for many years, ever since I became acquainted with your work; and I'm sure a lot of people are wondering the same:

How did you come to be called Jofish?

I finished high school early, and then I spent a year traveling around the world with a backpack. At some point I was down in New Zealand, and a friend said, "I'm going to get a tattoo!" And I was like, "That sounds like a great idea, I'm going to get a tattoo as well! I want to get a little smiley fish with three air bubbles!" So I have a little smiley fish with three air bubbles on my right thigh, that I got when I was 17 or something.

Then somebody was like, "oh well, it's the Jofish!" So everyone called me Jofish. When I went off to undergrad – which is a lovely time to reinvent yourself a little bit, you know, decide who you want to be – I was like, "My name is Jofish," and everyone called me Jofish. So basically everyone who met me from, whatever it was, age 17 onwards, calls me Jofish; and everyone who met me before calls me Joseph.

That's really interesting because you sign your work as Jofish Kaye.

As time went on I think I got more comfortable just deciding that was the identity I wanted to go with. If you look back at some of the earlier publications, it's sort of like, Joseph 'Jofish' Kaye. Over time I was like, look, let's just make life easier.

And there have been some nice times when, like, people met other people because of it. Someone says, "we should get this for Jofish!" And someone turns around and says, "wait, do you know Jofish?" There's something lovely about that. It's very convenient having a unique string.

True. My name, Mehmet, is the most common name in the Turkish language. So my Turkish friends call me Baytaş – my last name. But in the English-speaking communities we frequent, Mehmet is more unique.

On the topic of names: Your website says you're a "scientist" who takes a strategic research approach to innovation and product decisions. For many years, "scientist" has been your actual job title. In UX design, human-computer interaction (HCI), product design, very few people call themselves a "scientist."

What is the meaning of "science" to you, and why do you choose this title?

Let's acknowledge that all of these things are framings. They are ways to tell stories about people, that suggest you're a good fit for something.

When I talk to people about how you get jobs... The key thing is: you want to solve problems that people have. And framing myself as a scientist was a useful way to do that. It also speaks to a certain core identity that I have.

I think I'm pretty critical about the value of science. I have too much of that science and technology studies, sort of, you know – "Let's talk about the social construction of science and scientists!" – to necessarily believe it as a label wholesale. But partly, I was in a situation where "scientist" was a valid job title, and a valid way to articulate the value that I could bring to an organization.

There's another thing. The softer sides of user research – qualitative interviewing, those kinds of approaches – are undervalued by people who've been trained in classical computer science, a lot of the time. We're seeing this change over time, but wow, are we not really good at this as a discipline, still. Partly, framing what I'm doing as science is a way to say to those people: "This is a perfectly valid scientific discipline that produces valid knowledge in the world." So is yours – I'm not saying that yours is not, whatever it is – but let's acknowledge that this is an epistemologically valid approach to generating knowledge.

So that's sort of where the scientist comes from. It's also that, within large corporations, it's one of the categories that people are increasingly recognizing as a valid track. We have software engineers, program managers, project managers, site reliability engineers... These are categories that exist, that then have things like job frameworks. People understand: "Ah, this is what this person does, at this level." And we're starting to see that, at some places, that's the case for scientists and researchers. It's a way to be part of that conversation.

The other thing that it does is to frame this as being an IC – an individual contributor. This is one of these things that academics don't always grok about most industry situations, in that the two core divisions in industry are: you're either an individual contributor, or you're a manager. Those are never as pure as they're claimed to be, but you're either an IC or a manager track. And you can usually swap back and forth between those two tracks. It's not really about pay – you might pay an IC and a manager, at the same level, exactly the same. But they have different responsibilities. The IC is responsible for doing the work themselves, and the manager is responsible for doing it through the people they manage. So those are two different ways of creating value.

I want to dig into some of the other words I've seen on your CV. One of them was "understanding users," which seems to be a priority in your work.

Do you have any favorite approaches or methods for this "understanding users" that seems important for you?

We teach graduate students methods because it's a valuable way to communicate what it is people are expected to do. One of the things I find is, when you actually get out in the world and start creating things, the methods are a lot less set in stone than they appear when you're teaching them.

As time goes on I become more agnostic about the roles of particular methods to do particular things. I am generally and increasingly in favor of being able to use multiple methods, to triangulate into some fundamental understanding. I love to be in a situation where you can, say, do a bunch of interviews – qualitative work. You go and talk to people, and just get a core understanding of things. Your interviews, n equals 10, 15, or 5 – it doesn't really matter. Then you might do some survey work, and there your n might be higher – maybe it's 100, maybe it's 1000. (There's not much point in going above a thousand – sometimes people do a few thousand, but you sort of max out your levels of understanding at, call it, 1000 people.) But it's surveys – they're pretty cheap to administer, and you get sort of a limited understanding of what people are doing, but you can ask them real questions.

And then I would put the big data layer, where you're looking at log data. You're looking at large aggregates of data, and you're putting those together. Maybe you're looking at patterns of what people actually click on at a website. Maybe you're looking at traces of how people react to advertising. The specific thing in the medical domain that I'm in: We're looking at a lot of things around patient records – the medications people take, the diagnoses...

As you have those three different levels of understanding, you can triangulate across them and say: "Well, it really does look like this is the pattern that we're seeing..." And you want to jump back and forth between those levels. Maybe you do the survey first, because you just get bored on a Wednesday afternoon. You're like, "I wonder what people think about this..." You put together a quick survey and slap it on Reddit, you get 150 answers, and you throw away 50 of them since they're junk... But you get an inkling of something that's interesting. So then, you go and look at the log data, and you're like, "yeah, that pattern is showing up!" Then you go and talk to some people, and get a sense for this... You want to go back and forth, and challenge yourself with the ways that you created knowledge, to make sure that you can validate those ways.
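[To make the triangulation idea concrete: below is a minimal, hypothetical sketch in Python. The column names and numbers are invented for illustration – nothing here comes from the conversation. The idea is simply to compare the same quantity at the survey level and at the log level, and to treat a large gap between them as a cue to go back to interviews.]

```python
# Hypothetical sketch: triangulating a survey estimate against log data.
# All names and numbers are invented for illustration.
import pandas as pd

# Survey level (n in the hundreds) – self-reported behavior.
survey = pd.DataFrame({
    "respondent_id": range(150),
    "says_uses_weekly": [i % 3 == 0 for i in range(150)],  # toy data
})

# Log level (n in the thousands or more) – observed behavior.
logs = pd.DataFrame({
    "user_id": range(10_000),
    "weekly_events": [(i * 7) % 5 for i in range(10_000)],  # toy data
})

reported = survey["says_uses_weekly"].mean()
observed = (logs["weekly_events"] > 0).mean()

print(f"Survey: {reported:.0%} say they use the feature weekly.")
print(f"Logs:   {observed:.0%} actually triggered it.")
# A big gap between the two is the cue to go back to the third level –
# interviews – and ask why people report one thing and do another.
```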

Often, people are not familiar with all three levels. Some people might understand two of those three levels; often they understand only one. So partly, it's communicating to people who only start with one of those levels. How do you create a story about the information, so people can make the decisions they need? That's how I feel about methods. They're useful for persuading people. And partly, your job is to figure out how you knit those together, to tell the story you need to tell.

I know this is kind of random but I was half expecting that you would at least mention ethnomethodology; because I read the paper that you wrote about ethnomethodology, and it was one of the most entertaining research papers I ever read. But you're talking about a whole different class of methods.

Well, let's talk about ethnomethodology for a minute – I'm going to indulge myself...

Garfinkel's insight with ethnomethodology was to say, "look, let's try and understand how people make sense of the world." These things that we just accept as categories in the world – like data science or interviews... Let's go and look at how people create those social facts. How do they make sense of them?

What I find fascinating is: if you look at the history of sociology, ethnomethodology came out at about the same time as grounded theory. Both were movements in sociology about saying, "look, let's not have these a priori assumptions about the way the world is organized. Let's go to the people who are the experts or the end users, whoever it is we're looking at – and let's understand how they make sense of the world."

What I love is that, in all three of those systems – which is hardest in the survey work – you can try and go into them with this sense of, "what if we don't know how the world is organized?" What if we say, "let's try and understand the users that we have... How do they make sense of the world?" That's how I apply the ethnomethodological assumptions.

For all of the complicatedness of it as an area, it's quite a humble approach. It doesn't assume that you know what's going on. The great thing about ethnomethodology is, you can go and be like, "well, I don't understand this. What's important is: the people who I'm looking at, how do they understand things?" That's the great power of ethnomethodology.

To be fair, it has been sort of lost and obscured... I don't know how much ethnomethodology you've read, but there are a lot of very, very long words in it, I mean, starting with "ethnomethodology..."

This is interesting because I've heard you speak on another podcast, where you were talking about "epistemological assumptions." Today, too, you mentioned epistemology, and [in the other podcast] you were saying that it's important to work with people who share your epistemological assumptions. Is this the same thing as what you spoke about just now: how people see the world, and how their mental models of the world take shape?

It's a super good area to dive into... I'm letting myself talk about epistemology now. I try not to do it at work, because it's a scary word. Most people don't know what it means, and they're like, "he's using a word, I don't know what it means, and this is scary, and I don't want to ask and show that I don't know what this long word means..."

I think of epistemology as being: How do you define what is knowledge? What counts as valid knowledge in the world? That's the question we're trying to answer. Phrased like that, it seems very abstract, but most people have assumptions about this. They've just never articulated them that way.

So it may be: "a good piece of knowledge, a good study, will have a large n." Like, if you've got 100 people, it's a good study; if you've got 30 people, it's not good. It's like a feature of quality in the world. It may be: you spend a bunch of time with these people, so you can use all of the words they use, in a way that suggests you know the underlying concepts. So you can talk about HbA1c if you're talking to doctors about diabetes. You have the terms of art, locally.

A lot of people have these assumptions about epistemology. They just don't think of them as being epistemology. They think of them as just being good or bad knowledge, good or bad studies. Where the difficulty comes is that most people are only familiar with one kind of study. So if you're a big data person, you're like, "well, is your data big enough?" That's the question. And then someone comes in and says, "I did 10 interviews..." They're like, "n equals 10? That's laughable! I'm not even gonna listen..." That's because you're applying the wrong epistemological standards, the wrong standards of what is good knowledge, to a different form of creating knowledge.

A lot of the work I end up doing is sort of translating. Someone says, "I did this..." And someone else is like, "no, that doesn't sound very good." And you're like, "ah, but this is the right thing. This is the way to make this. And did you notice how it backs up your result down here?"

I find that books are effective for sharing these kinds of assumptions or creating shared ground. Books are an ancient technology for this purpose, and they're still one of the best technologies. I recommend a lot of books to my colleagues, and I find it easy to work with people if we have read the same books.

Are there books you recommend frequently to colleagues?

It depends a lot on what I'm currently working on, and the stuff I think they need to understand. Recently I've been doing a lot of work around how you present good information to doctors. How do you show doctors that the research you're proposing is good – that it, again, is sort of epistemologically valid? So I've been handing people Thomas Kuhn's The Structure of Scientific Revolutions. It suffers a little bit from, like, wow, is it obscure... It's hard to read. It's the sort of book that you read in grad school. But you hand it to someone who's not in grad school, and they're like, "what is this?"

Books also suffer from being long. We've all read books where you're like, "this could have been a paper... I got the gist of it. You didn't need to keep hammering it in." So sometimes I'm looking for papers that give the essence of a book. But then, the academic paper... We forget how weird they are. They are such strange beasts. You've written your share of papers, you've read hundreds of papers, and you're like, "oh, this is normal." And then you talk to someone who's got a bachelor's degree, and has been in industry their whole career – they just can't make head or tail of it. They're not used to the story form of a scientific paper. Even that's different in different disciplines. In HCI we do the intro, and then we do the related work – we're like, "this is all the background, this is what we did, this is what we found, and this is what we think the conclusion is." In a lot of other disciplines, the related work comes at the end. They tell you what they did right up front. They're like, "well, we did this thing, this is what we found – and by the way, here's some other stuff." And it's almost jarring. You're like, "oh wait, but where's the related... Oh, there it is, over at the end, right."

I tell my students that academic papers are a genre of literature. And every genre is good for different things. There are kinds of stories and words you'd use for science fiction, others for fantasy, others for philosophical things. And academic papers are one way of recording knowledge. There are other ways, good for different things – like the conversation we're having. This is an amazing format for certain kinds of knowledge. YouTube videos are a great format for certain kinds of knowledge. It's actually where I learned the fundamentals of my profession – Scott Klemmer has an amazing HCI course that he recorded for Coursera, and you can find it also on YouTube. That was the basis of my entire professional knowledge. So each kind of representation is good for different things.

I wanted to ask about these representations. You mentioned talking to doctors. My friends who do commercial UX design – and research, especially – frequently complain that decision-makers at their organization don't factor in their results, to the detriment of users or other stakeholders.

What are your favorite ways of sharing your results? What are some representations, some deliverables that you like to provide to people in your work?

Again, I like multiple methods, because different people accumulate knowledge in different ways. My boss right now has been in the film and entertainment industry – he was in computer games, he was in film – and he loves video. For him, that's a really valuable way. I don't really like video very much for conveying information. I mean, I like watching stuff; but for conveying information, I don't find it particularly useful.

So I'll try and put things in different forms. Within the corporation, there are often three canonical forms. One is the document. Like, the Word document. Amazon is very fond of these. Microsoft has a really good track record of doing this, even down to the level of writing books about it.

The second thing is the PowerPoint presentation, or Google Slides, or whatever you're using. And it gets kind of pooh-poohed, particularly in academia, where I think people don't really understand the role that it plays in industry. In academia it's treated as an adjunct. It's just a tool... You go up and you give the talk. In history departments, I've literally seen people go up clutching some pieces of paper, and proceed to read every line off the paper. I was actually shocked, the first time I saw this happen, but in the history department this was a perfectly normal way to do things.

In industry, the PowerPoint deck is not just a support for explaining the information. It's also the place where the data ends up living. You might have an appendix that has a whole bunch of data. You might have a slide for everyone you interviewed. It might live in some folders somewhere else as well, but the deck gets passed around the company. It becomes intellectual currency that people use. That PowerPoint deck is where information lives.

If you get a designer in the team, they're like, "you know, I should build a beautiful new template, because our current decks look, really, not very good..." I always feel that the right thing is to not let them do that. You want people to use the standard company deck – the absolute standard one that everyone else is using. Because success is when people steal your slides. I mean, you can put your name down the bottom of the slide, but you build the slides so that it's easy for people to steal them. And one of the ways you make it easy for them to steal them is to make it look like everyone else's slides; so that they can just pull your slide in and say, "as our research team told us," whatever, you know, "doctors hate filling out forms," or "nurses feel undervalued..." They can just pull your whole slide in.

And then you have the third form, after the document and the deck: the video. Being able to pull out snippets of video is super useful. Being able to just say, "ah, this thing here, click on this will you." That's what I love – very short snippets of video. You were talking about, in academia, Scott Klemmer's work... If you look at Casey Fiesler on TikTok right now, she's doing wonderful outreach work. She's really taking this medium in which there's not a lot of HCI representation, not a lot of information school representation, and she's really doing super work there; using that medium to reach a whole new audience.

The idea is: What are the different forms of media I can use to communicate in different ways? I recognize that I have my assumptions about the ones I like to consume. But if I'm trying to influence people, I want to provide those multiple documents, all together – those different forms of representing the information. So, some people will look at it, and they'll be like, "ah, go watch this video!" Some people might print out the whole document and say, "you know, pages 39 to 42, those are really good – really useful stuff!" You want to let people go wherever they need, so that they listen to you.

I really enjoy tactical design skills like graphic design, typography, video making – sort of hands-on, visual stuff... I believe they should be taught more widely to everyone. But then you said, "don't do it – just use the company deck, whatever is going around as the standard form in your org." So in the absence of using design tactics, what are some other skills – such as in storytelling – we can deploy to communicate within those formats?

I do think those tactical skills are hugely important. I just think you should take the company format, and do whatever you do within that format.

The people we're trying to hire right now are people with those core information design skills. Like, I can scribble something on a whiteboard or a piece of paper – it's got some lines and some things here, and there's a picture here... And they turn it into something that looks like an actual diagram. In particular, they can say, "look, you arranged it like this, but it'd be a lot better if we did it horizontally – then we can do this, and have these fit together, we can sort them up and down..."

So being very familiar with whatever set of tools – Figma or Sketch or InDesign, whatever those are... I'm reasonably agnostic, I try not to tell my designers what tools to use. I want them to make the right decision there. But you – particularly as a junior person – are coming in to solve someone's problem, right? You're coming in to fix something. People have a problem, and they hire you because they have that problem. So the core tactical skills – "get this done, and do it quickly" – are super important.

You have a new job as Director of UX and AI at Anthem.ai – congratulations, first of all.

I notice your job title doesn't say "scientist" anymore, and it says that you're in charge of both UX and AI, which is extremely interesting.

I'm starting as senior director of UX and AI. The "AI" part is that I'm within an AI group, and I wrangle a UX team. I brought in Erika Poole, who's from the HCI community; she's the director of that team. And I'm building up the interaction design side as well.

We use data from patient medical records, particularly insurance billing. Let's say you get a cholesterol test. If you have insurance, we see that you've got a cholesterol test, and we pay for the cholesterol test. We also get some information like, "this is their HDL, this is their LDL..." – we get the results. And that's a kind of anti-fraud measure. But we have that data. We know what drugs you're being prescribed, we see the diagnosis – you have hyperlipidemia, and we see that you've been prescribed a statin... Out of that, we can see what the effects are, and we can do some interesting data science.

Let's say that you're given whatever statin it is. Three months later you come back, you get another cholesterol test, and we're like, "oh look, your cholesterol went down!" So we're able to say that for someone like this – of this age, with this set of other diseases – this seems to be effective. Then we can take some other people who took some other drugs to address cholesterol, and we can say, "it looks like for people like this, this is going to be the best drug." We can do that building on an enormous amount of data. It's a pretty exciting thing to be doing.
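[As an illustration of this kind of analysis – a hypothetical sketch, not Anthem's actual pipeline: the code below invents a tiny claims-style table and compares the average change in LDL cholesterol across two made-up drug cohorts. A real analysis would need de-identification, confounder adjustment, and far more care.]

```python
# Hypothetical sketch: estimating drug effectiveness from claims-style data.
# Patients, drugs, and lab values are all made up for illustration.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "drug":       ["statin_a"] * 4 + ["statin_b"] * 4,
    "test":       ["ldl_before", "ldl_after"] * 4,
    "value":      [160, 118, 155, 122, 158, 141, 162, 149],
})

# One row per patient, with before/after lab results side by side.
wide = (claims.pivot_table(index=["patient_id", "drug"],
                           columns="test", values="value")
              .reset_index())
wide["ldl_change"] = wide["ldl_after"] - wide["ldl_before"]

# "For someone like this, this seems to be effective":
# average LDL change per drug cohort.
print(wide.groupby("drug")["ldl_change"].mean())
```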

The tricky thing is that doctors have been trained that the clinical trial is the gold standard. The problem with clinical trials is that they don't necessarily represent the diversity of the overall population. In a clinical trial, you have a pretty small n – maybe 100, or a couple of hundred. It actually depends on the trial approach – there's lots of math around this... But chances are, you don't want people who have multiple co-morbidities. If you're testing a new cholesterol drug, you probably don't want people who are already on drugs for diabetes, and already on drugs for chronic heart issues, and kidney issues... For someone who's got multiple conditions controlled by multiple drugs, you don't want to add one more experimental drug. So the clinical study will often show that a drug works very well for certain people, but it doesn't necessarily generalize to the whole population. With this kind of historical patient data that we're able to look at, we can make some really exciting recommendations. "Look, it really does look like this pattern, for people like this patient."

There's a lot of UI. There's a lot of design that goes into this. There's a lot of data science. I'm very lucky to work with some very, very smart data scientists. My colleague Beau Norgeot particularly – I'm really impressed by the work that he and his team do.

The challenge is: how do we get this in front of doctors? And how do we get them to think differently about the evidence that we presented? Because we can't necessarily give them a clinical trial. We might be able to, for specific circumstances – we're looking into this. But how do we say: "Look, the evidence from 5,000 people is this. It looks like this is the right approach that you might want to take." And we don't want to usurp the doctor's decision-making – that's not the aim. The aim is to empower them to be even better – "functioning at the top of their license" is a phrase we keep coming back to. How do you give doctors the tools they need, that are really useful? Like a weighing scale: if a patient doesn't have a weighing scale, they won't know how much they weigh, and it's hard to manage your weight without knowing it. What are the tools you can give to doctors, to clinicians, to nurses, to people involved in healthcare, so that they're able to do what they do better?

You have been referring to the community that you and I are in, which is computer science, HCI and interaction design research. You have a track record in this community which is, I would say, objectively excellent – not only because of the publications and citations you have accumulated, but also due to the institutions and people that appear on your resume – a lot of names that we look up to. You've studied at MIT with Steven Pinker, Michael Hawley, and Hiroshi Ishii. You've been at Cornell with Phoebe Sengers. I have learned so much from these people, just by reading what they publish.

What is a thing you gain from having personal, physical access to these people and these places?

All four of those people were influential in the work that I chose to do, and how I've done it. I would absolutely agree with that. These are people I like and respect and look up to. But the biggest thing you take away is the communities that are around those institutions – connections with people who have a similar set of experiences.

I think about the Information Science department at Cornell... It didn't exist when I went to do my PhD. I went to CHI in 2003, having lost my job like a month or two before that. I talked to Phoebe because I met her when she was a graduate student at Carnegie Mellon – she was doing the circuit, so she'd come to talk at the MIT Media Lab for a faculty position. I ended up going to Cornell, but there was no Cornell Information Science to go to. I just went to CHI, and everything I went to, there was the same group of people... There was Phoebe. Jeff Hancock, who at the time was at Cornell, was there. Kirsten Boehner was there, who was a graduate student. And I was like, "wow, these people... I really like them! And they keep being in the same place!" So I went off to Cornell. I actually joined the Science and Technology Studies (STS) department for the first year, because there was no Information Science department. Then I switched into Information Science as soon as it existed, after the first year. So there were a handful of us at the beginning, people like myself and Gilly Leshed, sort of all vaguely hanging out there. I mean, I love being in a new situation, right? I love being places where there's maybe not that many rules. I suspect, since I left, they put in a whole bunch of rules, like "don't do it like Jofish, ever again!"

The thing that the Information Science department did really well was that they acknowledged the inherently interdisciplinary nature of creating knowledge in the world. They treat information science as being a four-part thing: between computer science, cognitive science, HCI, and STS. I think they've changed a little bit, how they're phrasing it right now... But the acknowledgement was that you need to understand those disciplines. You need to be able to read a cognitive science paper and say, "OK, this is how this contributes to the work I'm doing in HCI." Same in STS. So this idea of treating it as interdisciplinary from the very beginning was a really powerful thing. That core decision is probably the biggest single influence I can point to.

With this kind of track record, I'm sure you had a lot of opportunities in academia. But you have chosen to go and work for companies: Nokia, Yahoo, Mozilla, now Anthem...

What did you expect when you went to work for corporations?

Originally I was vaguely thinking that I might spend a little bit of time in corporations, and then go into academia.

People present it as sort of a binary decision: it's one or the other, there's only the two choices... The reality is more complex. People transition from one to the other. People have ongoing relationships between the two...

One of the things I missed was teaching, so I found ways that I could do some teaching at Stanford. John Tang from Microsoft Research and I taught the Interaction Design class at Stanford, and then we ended up doing it with a bunch of other people – which was good, because the two of us were just a bit overwhelmed, trying to do that on top of everything else. So I continue to do work with the academic community.

One of the things I love about being in industry is that you get fabulous problems – great problems that can really influence how millions of people do things. Your opportunity for impact through that is really significant, and I love that. You can really fix fundamental things for the world.

I love the data that you get – this goes back to that big data layer we were talking about. The scale of data you have access to in industry is huge! You can get that data in academia – it's getting easier, and we're seeing more of it – but it's hard. You can make a change and run an A/B test in industry, on the people coming to your company's web page, if you're in a big company – I don't want to ignore that level of privilege. But the data is just fascinating. It's so much fun to have that level of data, and that level of engagement.

I love that I've had a consistent stream of super, super smart interns.

At Mozilla I was in a situation where I was actually able to fund a bunch of work in academia, and I loved that. That was a really great way to shape a field, to think seriously about where I wanted the field to go. I mean, it wasn't just me – it was always in collaboration with a whole bunch of other people. But that idea that you could really think about, what would be healthy practices for the field as a whole? So one of the things we did was to try and encourage people to do more open source work. I think HCI would be a lot better if we did open source code, open source data, all over the place. I don't actually think it should be required. I've spent enough time in industry to see that you can't do that – if you require open data for everything, you just knock out a huge amount of the vibrant industry work that's going on. But should we encourage it? Yeah, absolutely. I'd love to see more things like that.

One of the things that I like best about HCI as a community is that we are so inclusive. We're so broad. The CHI conference is going on this week (in May). You go and look at the stuff... There's someone who's building an iPad app for dogs to talk to each other across the internet, the next person is doing Fitts's Law in 3D with virtual reality, the next person is doing a critical piece on assumptions around race as embodied in headphone design...

Azaela by Sjoerd Hendriks and Simon Mare et al.

Yeah, we made a cushion this year.

Exactly. And I love that all of those things are there, as valid parts of HCI. I don't know of another discipline... The ACM has, whatever it is, 37 different conferences or something, and there's some great work going on in all of them. I've been really impressed by people I've talked to in the supercomputing community, I've been really impressed by the stuff I've seen coming out of information retrieval... Yet, the sheer breadth and openness... I mean, it's this very ethnomethodological thing: we don't assume that we know what knowledge is, right? What a bold statement for a community to make!

I think we run into problems with this. There are papers where you're like, "I don't know that I think this should have been accepted..." There are specific things you can point to, where you're like, "does this one count? Should it count?" But I think, as a community, wow, are we erring on the right side. We're erring on the side of inclusivity, and being open to new disciplines, and new ways of creating knowledge... That's where I want to be.

Some of those were exactly my feelings when I started in this line of work. Now I have this mental model of organizations as platforms. Companies are platforms where individuals create certain products for the world. Universities are a great platform for doing other things. But they are not interchangeable. If you want to build products – to put things out in the world for people to buy, use, and get support for – that's difficult to accomplish inside a university. I imagine that science in a corporation has its difficulties too.

What is the relationship of a staff scientist with a corporation?

Is it a sponsorship where you do whatever science you like? Or are you obliged to show the connection between your work and the company's operational goals?

Nearly always the latter. You want to be able to show why the work you're doing is important to the company. There are these fantasies of places where people just do whatever they want. But even if you hark back to the glory days of, like, Bell Labs, there was always an alignment. Because that's the way you get to have impact. That's the way you get to have the sponsorship. And that's how you get to continue being at the company.

So there is often an alignment. And you want to figure out what's interesting: What's the work that you can do, within that framework of contributing to whatever the company thinks is important?

As you get more senior, you have an opportunity to shape those directions. Partly, you end up making the space. If you're doing some really new stuff, you may want to be thinking about, "OK, the company does not do this thing right now, but here's why this is a logical thing to do. This is the next step from whatever we're doing, it's this direction that I'm exploring..."

So it's not always just the tactical, "this is what we're doing, let's do it better" – although that can be valuable, because it gets you a sort of moral standing: "Look how this person managed to do this work, which was really useful because it got us to our goal. That means I'll listen, when they're doing the longer-term work!"

When you're thinking about the lab as a unit, and the researcher as well; you want to think about the mix you do between the immediate, tactical stuff, and the longer-term, more strategic work – in which you're thinking about how you build towards the next future, and how you're contributing to longer-term visions within the company. But it's always a balance.

I want to ask you something I will frame in the context of science, but we can generalize to all kinds of endeavors. And I want to ask this because you are a couple of steps ahead of me in a similar kind of career.

You seem to be doing high-quality work, and in my experience this level of quality and quantity together is very challenging. I also have done similar work, and I would say that some of it has cost me. I've paid for some of my "successes" through arguments with colleagues, sometimes by breaking my habits of physical exercise and losing fitness – once I even had a relationship come to an end because I was so absorbed in my work.

So my strategy for chasing success in work was radical de-prioritization of everything else. I was fortunate to be able to do this, but it's not a great strategy that I can recommend to my students or anyone.

Have you found, throughout your career, any strategies, frameworks, or competences to achieve great results without compromising other dimensions of your life?

I'm extremely lucky in that my wife Erin is incredibly smart. The two of us have three great kids, and have worked very carefully together to balance those priorities. I don't think we get it perfect all the time. It is absolutely a topic of ongoing discussion. And there's times it goes better than others. It's always hard.

This may be selfish but my physical health has gotten better during the pandemic, because of how they changed the swimming around here. I've been able to swim about three times a week, and it turns out that makes all the difference in the world. Before that, I was able to bike to work, so I could have a 45-minute bike ride. I actually get some exercise that way. I've never really been very good at going to the gym. It's sort of somehow not quite my style.

But it really does feel like it's an ongoing topic of discussion and engagement at all times: how do you do that balance thing? So that's the first thing.

The second is that I have been incredibly blessed to have had very, very good collaborators. Like, amazing. In particular, being able to bring in superb interns who work for a summer, sometimes two summers. They end up doing so much of the work. I feel like, as I get more senior, the percentage of the papers that I write goes down. I'm no longer writing most of the paper, or even 50% of the paper. If I'm collaborating with one other person, they're writing a lot more than me. But I want to believe that I'm contributing by helping to frame the whole thing.

I think about the people that I've had the pleasure of collaborating with over the last couple years. Julia Cambre and Jessica Colnago from CMU, and Alex Williams, who's just taken on a professor position at Tennessee... These interns have been just superb. They've done great work and enabled me to do this. Also the colleagues that I've been working with: people like Janice Tsai who was at Microsoft before she worked at Mozilla with me... There's been a great list of superb people. So much of it is, like, surround yourself with awesome people. That makes all the difference in the world. Try and find those opportunities to work with interesting people.

And figure out how you can turn the work you were going to do anyway into an interesting paper. How do you design your study, from the beginning? In academia you're rarely doing studies that aren't going to turn into papers. In industry it's sort of the opposite problem. For the Firefox Voice work that Julia presented at CHI, we did something like 50 different discrete studies. You just can't put 50 studies in a paper, so you talk about four of them. Many of them just never get written up.

There's a super article about Olympic swimmers – The Mundanity of Excellence: An Ethnographic Report on Stratification and Olympic Swimmers. It talks about different swimmers at different levels: the local meets, and then there might be state meets, national meets, and Olympic meets... These are different levels in the hierarchy of swimming. The point is that it's not just that you practice more. Let's say that I'm swimming at the junior level, and I want to go up, eventually, to the Olympic level. Someone who's trying to compete at nationals swims four hours a day, three hours a day... It's not that if you go to the Olympics, they're swimming eight hours a day. It's not just a sheer increase in quantity; it's an increase in the quality of how they're doing it. If you look at how someone swims breaststroke who's just an amateur like me, and you look at an Olympic swimmer, it's almost like they're doing completely different strokes. That's the idea – it's not about quantity, but rather about quality.

You can think about the same as a researcher: How do I do really high-quality work? There's a limit on how many papers anyone can publish in a year. So what does it mean to think about, "can I level up in the quality of the work? Can I publish less, but better things?"

Different universities are getting better and worse at thinking about this stuff. You know the tendency that some universities had to basically just look at the length of your CV... I hope we're moving the hell beyond that, because it's embarrassing and shitty, outright shitty, to do things like that. I hope we're moving beyond counting the raw number of CHI papers that your name is on as a measure of success – it's a shitty way to measure humans, and it's a shitty way to measure excellence in creating knowledge. But it's tricky because, to pick two ends of a spectrum, one is: you just hire the person with the most CHI papers you can get. The other is: you only hire people who went to the same universities that you did, and look like you. And I think both of those are pretty bad places to be. So how do you articulate it when you bring someone in? I've read a great deal of work on things like de-biasing hiring processes, and there are things that make a lot of sense; like figuring out your criteria in advance and making sure you're actually using them. If your criterion is "number of CHI papers," and you're not counting CSCW papers, and you're not counting journal papers – fine, do that, but make sure that's explicit, and then you fill it out for everybody, asking the same questions.

When I hire people, I'm looking for a level of enthusiasm and engagement, and excellence in some domain. So how do you define that? I interviewed someone once, and she trained police dogs. She was a high school student. She got up every morning at 6:30 and she trained dogs for an hour – police dogs, search and rescue, etc. And then she'd go off to school. I was like, "OK, that's a level... I definitely didn't get up at 6:30 when I was in high school." I don't know about you, but that was not my style. If you can do that on a repeated basis, that suggests a level of enthusiasm – and that's exactly the kind of thing that we're looking for.

We've spoken a lot about publishing.

Are you still going to publish in your new role?

I hope so. People do publish in our area. My colleague Beau has a Nature paper or two that have come out recently. There is a tradition of publishing within the medical world; it's seen as validation. So there's a context in which that works.

I'm hoping to do some work with some people outside as well. I've been doing some work with Katie Pine at Arizona State University, who does great work, and I would love to work with some of her team on this. I'm trying to work on ways to make that happen, if we can figure out the right thing to publish... There was a workshop on AI and healthcare at CHI this year. So it's figuring out, what's a slice of it that we can take.

This is one of the things that junior scholars often struggle with, because you do the research, and it's this big, hairy, messy ball that's all over the place. How do you represent that ball? Partly it's freeing yourself up to say, "I'm going to write a paper, and it's one slice through this big messy ball. I'm going to really describe this one slice, and what that looks like. And it's not going to show all the things. There's going to be bits of the big messy ball that's completely missed by this. And that's OK. Because I'm going to authentically and accurately and truthfully tell that one slice. But I'm going to recognize that there's going to be bits that are never seen."

When I was a grad student, I struggled with this. I'm like, "well, but, you know, we didn't quite do it this way? We didn't look at this paper, then this paper, and then do this thing... We sort of went back and forth..." It's OK. Like you were saying earlier, it's a rhetorical form. It's like a haiku. You have to write it this way. Sometimes you have to sort of make it fit the 17 syllables kind of thing... That's OK.

I like to call that "scoping" and I consider that a superpower. To be able to decide what not to say in any given form – in a presentation, in a paper, even in this conversation, since we're recording... It's a very powerful skill to have.

I have a question I'll ask in two ways.

How do you convince decision-makers at a company that doing science, publishing research, and going to academic conferences is worth spending resources?

And if they are already convinced, what is their mental model of the value that these activities create for the company?

It's a good question. When I was at Mozilla, and I was actively giving out money, it was an even harder question to answer: how do you really show this value?

There are a lot of reasons, and a lot of complexity around this. For the actual publishing, there is a validation of the work that we're doing. It says that this actually makes sense. That can be, sometimes, a valuable part of what goes on.

There is being able to hire people, being able to bring in top talent. I saw an interview with Bill Gates maybe 15 years ago, and someone said, "who's Microsoft's top competitor?" – expecting him to say Apple, or Oracle, or something. And he said, "Arthur Andersen." They're like, "what?" They're a consulting company, they've since changed their name... So he says: "We're trying to hire the smartest people in the world, so those are the people that we're competing with – because the people are what makes up a company."

The number of people I brought into companies because I met them through CHI... I mean, the number of jobs I got through meeting people at CHI is really significant! There is a recruitment value, in terms of meeting people but also in terms of the visibility: "Look at this place, this is a place that does quality research, and they publish at CHI!" One of the pieces of advice that I give people, to figure out if they want to join a company: go and look at the CHI proceedings, go and look at the papers, and see who's doing the interesting work. "Oh that person's working on that, great, I'll ping them, because clearly they published a paper at CHI on this!" That's the sort of thing I might do. It gives you an insight into where those people are, and what interesting internships or positions might be available.

A really key thing I would add to this... You used the word "superpower" earlier, which I loved. Having access to the world of research is itself a superpower. Someone says, "if only we knew something about why people do shopping!" And you're like, "ha! All right let me go in!" And you go and you get like the Daniel Miller books off the shelf, and you say, "well, this is what Danny Miller wrote! And he was talking about a supermarket in Islington in London in 1982, but this is why it's relevant to what we're doing right now!" Someone says, "if only we had a way to measure what our customers really think about us!" You're like, "great, let me go and find that work on doing that!" One of the things I've done this week already is to go and be like, "here's 5 papers from CHI!" I went to different Slack channels around the company and said, "look here's this one that's relevant to depression, here's this one that's relevant to diabetes, here's this one about how you organize a program around an AI-driven product..."

You can take this role, as a scientist – to go back to your first question – to bring this stuff into the company and make it visible. And thereby, people go, "oh this CHI thing is really good!" And one of the things I would often do is, I would say, yeah you can go and attend this conference, but you need to write a report. You need to tell us what we should care about.

For our audience members who might not be familiar with CHI, its long name is the Conference on Human Factors in Computing Systems. It is the research conference and publication with the largest readership, the largest attendance, the largest endowment... A big deal in the world of HCI and design research.

On that note, speaking of design: in one sentence...

What does "design" mean to you?

The first thing that came to mind is "understanding people and building them things" – like building the things they need.

And then, of course, I suddenly went into, like, "well, that's not what it is at all..." But it'll do for now.

I too like to say that it's creating for people. Because then I like to say that, OK, engineering is creating for machines. And then I like to say that art is creating for yourself. Simplistic, but it works.

Sounds like a good way to have fights with engineers and artists.

Yeah, it works when I say it to my students in class, or when I'm giving presentations. But recently, I actually said this in a presentation to 100 engineers and I could feel that they didn't like it...

What are some places and tools that you spend most of your time with?

Slack and e-mail. I do spend time on Facebook – I don't love Facebook but it is a great source of community.

That being said, I have a fountain pen that I like – it's really clever because it's retractable. And I take a lot of paper notes as well. Evernote is useful too.

That's old school – Notion is very fashionable these days.

I mean, partly, these things are very sticky. I have Evernotes going back 5 or 6 years, maybe more. I don't think I was consistently using it, but having a place to put notes is really valuable. I try and do more serious notes in Evernote, and then the paper stuff is sort of scribbles, sketches, things like that, as I'm going along.

I guess you spend considerable time doing work – perhaps the most time, out of all the things you do.

What is the second thing that you spend most of your time on?

Well, sleeping – big fan of that.

If I'm not spending time on work – and I include the HCI community work as part of that – I'm spending time with my kids. Now we're down to reading a bedtime story every night to just my son. I was reading all three of them bedtime stories for a while. It'll start up again, because we're about to do the last book of Harry Potter. We've read all six of them so far, and then I had to go back and read the first three one more time for the little one... I'm slightly looking forward to being done with the last one.

How old are they?

I have two nine-year-olds and a six-year-old.

Your house sounds like a fun place to be.

It is. It is very fun. It's not easy, but it's very fun.


Connect with Jofish Kaye

Personal Website | Twitter | LinkedIn


Mehmet Aydın Baytaş

Mehmet is the founder of Design Disciplin. He has been designing and building since 2005, and spent 10 years as an academic computer scientist and design researcher.