Today’s Enterprise Research Landscape: A Q&A with Leanne Waldal

On a recent episode of the Early Adopter Research (EAR) Designing Enterprise Platforms Podcast, EAR’s Dan Woods spoke with Leanne Waldal, a leading technology researcher who understands how to create programs and manage research and analytics departments to support product development and design. Her career has spanned many different areas including user experience research. The Designing Enterprise Platforms podcast focuses on understanding how to create platforms that serve users inside and outside of companies. This is an edited version of their conversation that can be heard in full on the EAR Podcast.

Woods: Can you give a brief overview of your career with product support and design research?

Waldal: I started out in the early ’90s as a statistician, something that we now call a data scientist. I moved to San Francisco, when the web was exploding, to work in the web industry. I worked for a couple of different startups and then started my own agency. We did QA work for web and mobile, we did qualitative and quantitative user research, and at one time we actually did performance load testing before that got automated.

This was a company called Otivo, and it was through Otivo that I first met Leanne when I was CTO of TheStreet.com and we were using the Otivo services to help the website become better.

It was a great engagement. When I shut down the consulting agency, I went to Dropbox, started their research team there as the company grew. When I left Dropbox, I went to Autodesk to run a team of research and analytics across multiple desktop software applications. Lastly, I was at New Relic, where I ran product research, feeding insights into product design, marketing and sales.

At New Relic, Autodesk, and Dropbox, you were always trying to create a product that was going to be used in an enterprise context. When I talk to CIOs and CTOs and line of business leaders who are creating new ways of working, they often don’t spend enough time figuring out how their users are reacting and what their users want. That research would really benefit them and reduce the risk of some of these large implementations. How do you actually help somebody in that process with research?

If it’s an enterprise product where people are being told to use it, they may not actually want to use it. When you do research with users of enterprise products, it’s slightly different from users of consumer products, who chose to use it. In the enterprise situation, where somebody made a decision and then rolled it out to all of the users, you’re looking at how users get value in the work where they’ve been asked to use the product. Or you’re looking at users who are big fans of the product, who maybe were instrumental in encouraging the decision maker to purchase the product. They would actually pay for the product.

I see that in the enterprise marketplace you have certain products where people get really excited about them and they want to use them and they become advocates. Then you have other products where people use them because they have to and they don’t tend to get product love from their users.

Yes, and I see that even with the things that are often touted as being things people love, like Slack. Slack is the thing they use at the company when they come in, and even though maybe everybody seems to love it, there’s always a percentage of people who are like, “I have to use this for work, I would never use this if I had a choice.” Often, you aren’t asking people about their satisfaction or asking them to provide an NPS score, you’re asking them more about how this helps them get their work done.

What is the modern research landscape like?

I’ve often been brought into companies to lead research when they hit a point in product development or product acceptance out in the market where they aren’t seeing as much growth as they want to see. A lot of companies have products that had a lot of high growth when they started, a lot of early adopters, and then they get the upside down hockey stick, and once they hit that point they wonder, “What’s going on?” Another point when people bring in research is when they’re trying to understand how to engage people more in their product. The road splits: are we looking to increase engagement because that provides value for the user, or because that’s the measure we have of our business success?

What it brings to mind is the idea that you seem to get brought in when it’s time to institutionalize a lot of these product decisions, and up until that point perhaps the product was created based on the vision of the founders or based on the original product market fit. It seems like you’re being brought in to manage the risk of knowing institutionally how to sort all of that out.

Yes, and figuring out what to prioritize. Particularly when you are making products for an enterprise market, you’re prioritizing things based on what your most important customers are saying. And that’s perfectly fine, but oftentimes people realize that’s not going to help them grow the business and move into other markets or move into other companies that they want to sell their product into. Sometimes the salespeople are doing research, product managers are doing research, designers are doing research, marketing is doing research. Oftentimes they might have a bias towards it. They want to make sure they get something launched, they want to make sure they sell something to the customer, they want to make sure the messaging is right. Oftentimes when research as a practice is brought in, it’s being brought in so that everybody can just step back a little and look at a bigger landscape.

You’ve said before research is really about risk management and the double diamond model. Could you explain what you mean by that?

Double diamond came from people who think more about design and how design and humans fit into the way that we develop products in a product delivery lifecycle. So if you put two diamonds next to each other, the diamonds go wide and then move back into a point and then go wide again and move back into a point. If you start on the far left of the first diamond, you might be going out to explore whether the problem you’re solving for is the right problem. You might think that the problem is people like red pens better than blue pens, so we’re going to make red pens. You might just check in with your user base, the target markets you’re going after, whoever it is who’s using the pens that you’re making, to make sure that’s still the problem they’re having. You might find out, for example, when you go out and explore, either quantitatively or qualitatively with users or markets, that red pens and blue pens aren’t the thing, it’s black pens that are the problem. And then you might converge down into redefining your problem statement to be, “Oh, we need to actually develop things around black pens.” If you hadn’t checked in with your users or market first, you might have self-referentially looked at yourself and how you and others that you know use it and thought that red pens were the solution. Then when you go into designing, prototyping, wireframing and developing solutions, so say we’re developing black pens at this point, you’d have some sort of beta group of users or you’d have a panel or community of users or you’d have iterative user testing while you’re developing before you launch. Once you get to the edge of the right side of the double diamond, you would know what you’re looking for, you wouldn’t be surprised by the reaction of your market or your user base.

You’re managing the risk that when you finally get something in the market, it actually is fitting the need that you intended?

Yes, if you spend no time testing your product for functionality and with research you spend no time checking in with users out there to see if it fits their needs, you might have a big surprise, and it might be much more difficult to fix once you launch. The earlier you start doing little bits of testing and research involving humans and making sure things work the way you expect them to, the less expensive it is going to be to fix things after launch.

Could you just go through the different food groups of research and what they’re used for and how they fit into this risk management challenge?

If you look at qualitative research, that’s something that’s looking at a smaller number of people and going more deeply with them, getting richer information about the intention behind why people do things, their needs, maybe going out and observing people to see what they’re doing so you can see gaps. When you do qualitative research that’s remote interviews or just phone interviews, you don’t get to see somebody’s environment, you don’t see the Post-it notes around their computer, you don’t see the other products that they might be using throughout their day. So we think of qualitative as small numbers and really rich. Quantitative usually refers to surveys. So you’re looking at bigger numbers of people and less rich information about them, because in a survey you can’t follow up with someone when they answer a question to ask them to tell you more about it. Then we have analytics, which is always looking at the immediate to more distant past.

Would you include A/B testing as part of research?

Sort of. If you mix A/B testing with research and make sure that you’re testing with humans the things that are going into the A/B test ahead of time, you can make sure that you aren’t testing crap versus crap. Sometimes when you talk with people about their two different choices, they’ll tell you that it’s crap versus crap, they’re just choosing the lesser of two evils. And then you can improve the things that you’re putting out there for an A/B test so you make sure you actually have things that people want and you’re testing things that actually meet a need against each other.

How is the research used or not properly used when you have an institutional process of product and design going?

The way it should happen is as a collaborative team sport. The way it shouldn’t happen is research as an academic team that sits in a corner, goes out and does a lot of work, and then publishes and throws a paper over the wall. When research is a collaborative team sport in an enterprise, going after people who use enterprise products, it means that the researcher is working with sales, might be working with someone in a team like deal strategy or competitive intelligence, might also be working with someone in marketing, product managers, designers, maybe someone from analytics. When all of those people do that together, they buy into the research at the end because they were there from the beginning and they listened to the people along the way.

So it avoids data brawls. A data brawl happens when somebody brings an analysis to a meeting and somebody else says, “Well, I think that conclusion is ridiculous and let me see the data that you based it on,” and then the argument becomes about the quality of the data. You’re saying that you could also have a research brawl. How does that show up?

That usually happens when a designer and a researcher decide to go out and do research but don’t include the product manager, and when they’re talking to enterprise customers, don’t include the account managers. If account managers find out you went and talked to users without letting them know first, you’ve disrespected their relationship with their customer. If you ignore the product manager and you come back and say, “Oh, we were just going out to learn whether or not people like red or blue pens but along the way we discovered that pencils are a really big deal,” the product manager might push back and say, “Well, you know, I’ve talked to a bunch of customers and I’ve never heard that, and why were you going out and talking to customers without me?” One of the big things that’s important in newer research teams and newer companies is breaking down silos and closely collaborating across orgs.

The worst case is when research is ignored. How does that tend to happen?

It tends to happen more in consumer products than enterprise products, because people who run product for consumer products are users themselves. So taking that situation, if you have people who are running enterprise products and they are subject matter experts, so they used to be the type of person who used that product, you’ll run into the same issue where they reference themselves as the user when they make decisions because they know the market and they were once a user. They just can’t let go of their own reference frame as a former user.

How would you go about making a research program focused on an early adopter successful?

I’d start by making sure that we know what we’re trying to move towards. Amazon has this working backwards model where they write the press release and then they work backwards with the customer in mind. Another way to do it is to start by getting everybody who is involved in the decision making for what you’re building and making sure you’re all aligned on the goal. There’s a really old school method of managing teams like that called RACI, where you define who is responsible, accountable, consulted and informed. Anything along those lines to make sure you know who the people are and what you’re trying to learn. It doesn’t really have anything to do with research. It’s more about project management and understanding where you’re going.

Can you give a few examples of the way that you can do this in a simple, direct way and then get some actual meaningful research findings?

I’ve noticed that people usually don’t put enough thought into who the user and the customer are. We all have assumptions in our mind. It’s really a good idea to make sure you know ahead of time who’s the market we’re going after. Do we all tell the same story when we’re asked who is the customer for this or who is the user for this or where are they on the planet or what other products are they using? Or even are they using Mac or Windows? You might be surprised, sometimes you’ll sit with a group of people who you think really know the product and say, “What are people using when they use the product?” and discover that people have lots of different stories.

How can you do this on your own? What is the guerrilla war process of research and then how does that become a bigger process, and then become institutionalized at the largest companies so that it’s actually staying collaborative, connected, and integrated and not siloed?

Starting with the do-it-yourself, the first thing I advise companies to do is, before you go out and talk to your users, get an idea in your head of what the differences are between what people say and what people do. What I did with a company recently is I said, “Okay, what do you use to calendar with each other?” They said Outlook. I said, okay, pair up and ask each other to tell a story of how you schedule a meeting in Outlook. If you’re the person asking, take notes while you’re doing this. If you’re the person telling, be as detailed as you can. Now open up your computers, and the person who was telling the story, go to Outlook and in great detail show exactly how you schedule a meeting with someone in Outlook. And if you’re the person who was interviewing, just say show me more, tell me more, show me more, tell me more and take notes. Usually when that happens, within five minutes of opening the computer and talking about it, they’ll be like, “Oh, I forgot to tell you this when I told you the story.” That’s the key lightbulb point for people to realize that when you go out to talk to users, be sure not just to talk to them and ask them to tell stories but ask them to show you. It helps. The next thing to do in do-it-yourself research, particularly if you’re a startup or if you’re making a brand new product in an enterprise, is to always start with going out and talking to humans so that when you’re designing or developing you have these stories in your head of humans that are not you. It doesn’t have to be academic research, it doesn’t have to be fully planned. It can be casual observation of humans in the wild. Send them swag, send them flowers, send them cookies. Always give them some sort of thank you for participating in research with you.

Obviously the results that you get, they have to be collaboratively integrated into whatever design process or product development process you’re having. But then what happens when you say, “You know what, this is really working, we want to institutionalize this at least at a modest level”—what does the org structure look like there?

It helps to first hire a manager or leader. Usually people start by just hiring an independent researcher, just one person who has a few years of experience. That person easily gets burned out and that person doesn’t necessarily know how to manage priorities of all the things that are being thrown at them. So I recommend that the first person you hire is a manager and you give them a little bit of head count. Let them hire someone to do operations, someone to manage access to users, send those incentives to users. Someone to start gathering together a panel, a Slack channel, a list of people in a spreadsheet, just a really basic low-fi way to start gathering up people so that whenever anybody has questions for users, you already have them somewhere. Then hire someone for qual and someone for quant. A lot of people will say that they know how to do both but they usually lean towards one and have a smattering of experience in the other.


Essentially, if you want to do this at a modest scale, you’re saying build the department the way you would build any department, don’t just buy a freelancer?

Yes.

You’ve done this at relatively large firms with hundreds of millions in revenue, or billions even. At that point, you’re really institutionalizing this and it’s not just about that small team that you just mentioned, it’s really about a much larger organization. What does it look like at the largest organizations?

In the larger organizations, usually what you do is you have teams of researchers where each researcher is embedded on a product team. So if you think of a three-legged stool with product, engineering, design, put a researcher in there so you have a four-legged stool. Then you have somebody who has the voice of the customer, who doesn’t have skin in the game of what’s designed or developed. You also have somebody who has the voice of the user and the customer in all the decision making processes, who has the trust of the team. So team culture is really important. If you have somebody sort of dropping in from time to time delivering insights about analytics or research or something else, they don’t have the full trust of the team. The bulk of the researchers are all these people who are all embedded in all the teams. But there’s also a really strategic centralized team who is providing support for all of those people and then looking across everything.

What does that support central function have in it?

Usually you have principal or senior researchers who are doing strategic research projects that aren’t down at the product level. Companies get really siloed and you want someone who is looking at the whole user experience or also looking out at the future of what we’re going to do next. You can’t be looking at the future of what we’re going to do next or at the customer experience all across if you’re down in the weeds of a product team. Same with analytics, you want people who are answering the questions, counting things up regularly for all those product teams so that they can be making decisions based on what you see in usage. But you also need people who are data scientists who look at clusters of activity and workflows and are investing in new ways to instrument your product and look at data across all your products. You also need an operations team. All of that needs some sort of support. Product teams usually have program managers who provide support. Research teams usually have program managers, operations people, somebody who runs a research community, someone who runs the beta community, someone who runs research panels, someone who keeps track of all the tools.

Did you have a user experience person in the central team or would you consult with the user experience people in the product team?

No, in a larger organization, my research team has been a part of a design and product team and all of the designers report up to design managers. So my researchers work with them and I work with the design managers and design directors and also, between my research team and myself, we sort of help the design team learn how to be more customer centered.

If somebody liked this podcast and wanted to become a researcher or was working in an enterprise capability, how would they know research is the right choice for their career?

It’s so different now. Back in the ’90s, we made it all up. But now there are universities that have bachelor’s degrees in user research, there are universities where you can get master’s degrees in design research and HCI. And a lot of large companies now hire people out of those universities who are getting specific types of degrees to join them. If you aren’t someone who has experience, or you already went through university or college and you want to become a researcher, just start looking around for meetups and Google groups. Or, the same way a lot of people learn tech, go to a boot camp. General Assembly has different offerings for all of design and that includes research. The larger companies tend to only hire you if you already have experience or if you recently came out of a particular sort of graduate program.

Is there anything you can say about the personality of people who tend to like or succeed in research?

It seems like a lot of researchers are introverts. However, that doesn’t mean they all are. So I wouldn’t define myself as an introvert. I’ve hired lots of researchers who are extroverts. People who are curious and humble and interested in learning about others’ experiences, that’s what matters.