IBM Watson is at the forefront of machine learning and our evolving relationship with technology. And when it comes to UX and design challenges, there's a lot to consider. We recently chatted with Carol Smith, Senior Design Manager at IBM Watson, to discuss all things UX and design. We had a great Q&A session with Carol, too, and have included our 10 favorite questions below; you can also watch the full webinar here. Enjoy!
They really are partners within the entire process. They are frequently the ones who are coming with ideas and they also are the ones who have to do the prioritization with development, so they're really quite invested in the entire process.
Finding someone who can really span all three is really hard. I don't believe that there are very many unicorns out there. There are some, but most people are best at one particular area, and in some cases one very narrow area.
I find that putting people into these classifications is helpful not only for staffing, because we know who has which skill sets, but also for them, because they can focus on one area while still building skills in another. So I think it is really important to have people who are specialists in those various areas, but it doesn't mean that people don't move across those areas fairly fluidly.
You know, a really narrow user group who are all trying to do the same type of task. Within that user group, if I recruit carefully, three to five users should give me 80% of the issues with the interface.
Even though, hopefully, we literally have millions of people using these products, the usability testing that we're doing should surface 80% of the problems. It's not 100%, but that's enough. It's enough to help us move forward and make some adjustments, and then to do more testing, so we try to test more frequently, but in small batches.
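Carol's three-to-five-user figure matches the classic Nielsen/Landauer model of problem discovery. As a rough sketch (the 31% per-user detection rate below is the commonly cited average from that model, not a number Carol states), the expected share of problems found grows quickly with the first few participants and then flattens:

```python
def problems_found(n_users, p=0.31):
    """Expected fraction of usability problems uncovered by n_users,
    per the Nielsen/Landauer model: 1 - (1 - p)^n, where p is the
    average probability that a single user exposes a given problem."""
    return 1 - (1 - p) ** n_users

# With p = 0.31, three users surface roughly two-thirds of the problems,
# and five users surface roughly 84% of them.
for n in (1, 3, 5):
    print(n, round(problems_found(n), 2))
```

The flattening curve is why small, frequent rounds of testing beat one big study: after about five carefully recruited users from a narrow group, each additional participant uncovers relatively little that is new.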
This will be the first one that I've done with Watson, but typically these vary depending on the type of activity.
We're going to be doing some business origami type activities. We're going to be doing more concept discussions at a high level. We're going to be doing usability testing.
Each of those sessions will run differently, and then we're also going to be doing some poster discussions, so we'll have some information up on a board. It might be a list of needs or a list of problems that people have, and having them prioritize them or something along those lines.
We've got a setup where we've got four different rooms and we're going to have four facilitators. They'll be running different sessions at the same time, so it's going to be a little chaotic, but it should be really good.
Oh, all the time. I would say it's never too early. As soon as we have something that we can make even remotely clickable, we try to get it in front of a user just minimally to validate that we're going in the right directions.
We run studies, or we're trying to anyway, at least a few of them a month. We're not quite where I'd like us to be, which is really having testing drive the whole cadence of the development cycle. We're getting there. People are starting to ask, "When are you testing next?"
It's starting to build up that culture of people understanding that that is a metric of success, and that having that feedback and finding those mistakes earlier is key, so as much as possible.
We try to keep them as small as possible, although as you can imagine, there's a drive towards big here. Actually, we've gotten to the point now where people aren't looking ahead more than a quarter, which is a huge step for such a big organization. We're just trying to rein things in so that people aren't thinking so big and aren't planning so far out.
Ideally most of these concepts do scale. Agile itself really is built for short-term and constant releases, and so that's something we're also changing too.
In a perfect world, we would have all of our software monitored and we would know exactly when someone was using it. That would help us to understand the problems and situations, but in the meantime we look at a lot of other things to help inform our work, including the actual usability tests, how those perform, how the software performs in those studies, how our customers are responding in forums and things like that.
We look at generally even as much as how many customers are purchasing things, so the metrics for success can vary widely depending on what area we're looking at, but minimally, at an interaction level, it's: did they successfully complete that task? How many of the individuals were able to complete that task? If it's on the negative side, then we need to do some redesign work.
We do a lot of brown bags here in the office and, as much as possible, we try to attend their brown bags when they have them, and just build a sense of community here in the Pittsburgh office, as well as in the other offices.
We attend a lot of the same meetings and kind of the boring day to day stuff, but we try to be right there with them. We also just work to understand as much as possible about what they're doing to again build that empathy between the teams.
With Kanban, when I first got here about a year and a half ago, everyone was working on physical boards with post-it notes, moving their post-its through the lanes. At a high level, there's a 'Done' column to show what people have already finished working on; an 'In Progress' column that holds anything currently being worked on, where ideally only one person is attached to any one post-it at a given time, so it's very controlled; and then there's a 'Backlog' column.
Most of our Kanban boards are more complicated than that, but you can imagine someone taking something from the Backlog column, moving it into the In Progress column, and starting to actually work on it, so you can physically see what that person is working on at any given time. You can also see if they're working on too many things, but everything that anyone on that team is working on is visually there.
We've now moved to virtual tools. As you can imagine, it's very hard to read a post-it on a board through a screen share, so we were very happy to move to the online tool and that's helped even more for the people to be able to collaborate within those teams.
The idea with Kanban in general is that there's a very limited amount of time and a very limited number of cards available, so something can't be pulled into that middle column, if you will, unless something else has been moved out to Done, or moved back into the Backlog if the team determined they couldn't work on it anymore at that time. It's a very controlled idea of work: the only things that get done are things that have made it into the middle column, and once something moves to Done, someone can pull in a new item.
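The pull discipline Carol describes can be captured in a tiny data structure. This is a minimal sketch under stated assumptions: the class and method names are illustrative, not anything IBM's tooling uses, and the WIP limit of one card per person is the ideal she mentions:

```python
class KanbanBoard:
    """Minimal Kanban board enforcing a work-in-progress (WIP) limit:
    a card can only be pulled into 'In Progress' when the limit
    leaves room, i.e. after another card moves to Done or back to
    the Backlog."""

    def __init__(self, wip_limit=1):
        self.backlog = []
        self.in_progress = []
        self.done = []
        self.wip_limit = wip_limit

    def add(self, card):
        self.backlog.append(card)

    def pull(self):
        # The core Kanban rule: no new work until capacity is freed.
        if len(self.in_progress) >= self.wip_limit:
            raise RuntimeError("WIP limit reached; finish or return a card first")
        card = self.backlog.pop(0)
        self.in_progress.append(card)
        return card

    def finish(self, card):
        self.in_progress.remove(card)
        self.done.append(card)

    def return_to_backlog(self, card):
        # Work that can't continue right now goes back to the Backlog.
        self.in_progress.remove(card)
        self.backlog.append(card)
```

The key design choice is that `pull` refuses to move a card rather than silently queueing it, which is exactly what makes overload visible on a physical board: a person can't take on a second post-it until the first one leaves the middle column.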
Definitely the stand-ups help. We also do large all-hands meetings probably every other month. Within the design team, the 60-odd folks in the Watson design group, we actually hold more frequent all-hands meetings and those are opportunities for people to share.
Maybe it's birthdays that we might celebrate, or we maybe meet a new designer, someone who's just joined the team and have them do an introduction. Someone might share a presentation that they did at a conference. We do try to do those kinds of more informal activities to really help the teams to get to know each other and to feel comfortable with each other, so lots of those little things.
We also, as a team, will do small things. People here in the office might go get ice cream together, or tea or coffee or something like that. Just doing those kinds of off-site activities is a great help.