Agent Simplification

Navigating a sea of solutions to root cause

Project Overview

I was the primary lead designer, working alongside a team of 3 developers, to research and simplify the Agent management experience. The UX design process is complete; engineering development is ongoing.

We removed business logic that caused users significant friction when configuring our product. We got there by navigating a sea of proposed solutions to identify the root cause through research, ultimately creating designs that eased user frustrations while balancing technical feasibility.

This was my first project as primary lead designer! It showcases my collaboration skills, ability to establish clarity through research, and eagerness to experiment with our design process.

It was an exciting time to lean into discomfort and find my own rhythm as a designer.


Project Details

Company: Liongard

Timeline: Dec 2022 - Feb 2023

Roles: Research, UX/UI Design, Engineering Handoff

Tools: Figma, Maze, Miro

Liongard aggregates information from several systems including hardware devices like servers and computers.

Agents are the engine that powers this process: they allow Liongard to extract data from hardware devices and surface it in our product, enabling users to perform powerful actions on the data Liongard brings back.

Challenging the Solutions

Stakeholders may have a particular solution in mind for a problem they have encountered, but it is up to us as designers to ground the proposed solution in research.

At the beginning of the project, engineering and product proposed several different solutions, such as Agent Permission Groups or introducing the concept of an Agent Probe. This was where I pumped the brakes: in our haste to deliver a solution, we were bypassing the critical steps of empathy and problem definition.

I drew a flow diagram to show engineering and product that the proposed solutions not only introduced more process steps but also demanded significant engineering bandwidth to implement.

I communicated across engineering and product that we should steer away from quick solutions and ground ourselves in the design process, using research to inform our solutions rather than the reverse.

Problem Statement

Users create and manage an agent every few months.

We suspect there are complexities that slow users down when they have to relearn how to create and manage an agent.

How might we simplify the creation and management process to streamline the agent experience?

Research Methods

  1. Conducted in-depth internal interviews with 3 Partner Success Engineers (the team responsible for helping users troubleshoot product problems) to understand users' agent experiences.

  2. Validated mental models and pain points through a survey study, which received 57 responses.

  3. Synthesized the research into a persona. This persona would later help me improve the agent experience by identifying areas of complexity to simplify.

Combing Through Ambiguity

I learned that understanding user sentiment via survey responses is not a straightforward process.

We had to weigh both quantitative ratings and open-ended responses to contextualize our findings.

Uncovering User Frustrations

Key Finding #1: Agent Types are confusing

While survey ratings showed that users felt neutral or even positive about Agent Types, the open-ended responses revealed deep friction, with users reporting that…

"We don't install the agents everyday. Each time we have have to go through either ours or Liongard’s documentation to find out what each agent does and how to use them. Having multiple agents takes more time for us to onboard a client. "

"What makes an agent self-hosted, vs. on-prem, etc? Why does the Agent Type have to even matter?"

Insight

Users don't install or interact with agents on a daily, or even monthly, basis. When they do install or manage an agent, they are forced to relearn Agent Types, which at best slows them down and at worst causes them to abandon their workflow altogether.

Key Finding #2: How will users manage Agents without Agent Types?

Users worried that changes to Agent Types would create ambiguity around the level of control they would have over their agents, and whether that would affect the services they were billed for. They expressed concerns such as…

"Agent Types determine what agents we will be billed for. Without Agent Types, how do we ensure that we aren’t billed for stuff we’re not using?"

Assessing Usability

In addition to my preliminary research, I performed a heuristic analysis of the screens users interact with when managing their agents, and I was quickly able to pinpoint several areas for improvement.

"No idea what the different types mean or do."

  • Missing Key Data: Users have no information other than the agent name to make their selection, which can lead to incorrect inputs and therefore misconfigured data.

  • Visual Distinction & Guidance: Our most stable and recommended Agent (the On-Demand Agent) is placed at the bottom of the dropdown and is indistinguishable from the other options.

  • Recognize & Recover from Errors: Offline/Broken Agents are listed but not visually distinguished within the dropdown, which can lead to incorrect selections.

  • Disrupting User Flow: On click, the help icon navigates the user away from the configuration page to our doc site, disrupting the selection process.

Lofi to HiFi Mockups

Agent Permissions

To address concerns around billing transparency and control over agent configuration, I created mockups of an Agent Permissions feature to test during our user interviews.

Testing Designs

Evolving User Interview Practices

Before

Interviews ran over time and drifted off the defined tasks. Moderators were unable to steer the interview back on track and lost confidence as a result.

There were strong reservations around change with existing interviewing methods because of past experiences.

Usability tests were conducted only against prototype designs to reduce the likelihood of running over the allotted time, which limited our insights and findings.


I researched and spoke to UX Designers within my network to improve our moderation methods to keep users focused on the tasks at hand.

I experimented with our user interview structure, putting into practice new moderating tactics to maintain control as moderator and facilitating smooth interviews.

I tested tasks against both the prototype and product, which allowed us to increase confidence in product direction by uncovering user mental models and more accurately gauging the performance of the prototype against the existing product.

By leveling up our user interview practices, I significantly improved the quality of insights we extracted from each interview while strengthening our ability as moderators to guide an interview from start to finish.

This project gave me the opportunity to push our team to grow and experiment.


What I Found

After conducting 5 face-to-face user interviews, I grouped common findings together. This helped me understand the major areas where my prototype could improve, and prioritize which area to focus on.

Agent Permissions

“[Designs are] a lot cleaner [with] valuable info [being shown] without scrolling”

Iteration was driven by user interview feedback. When users went through the Agent Permissions tasks, all of them felt the granularity of control was unnecessary. As one user put it…

“[I don’t see a use case] and [this feature] wouldn’t matter to me. As long as I can deploy what I need to deploy and alert what I need to alert on, that’s all I care about”.

This allowed us to descope Agent Permissions with the confidence that users would find no use case for this feature, shaving off 3 weeks of engineering sprint work.

Agent Selection

Users were delighted by the improvements we made to the agent selection process. The improvements included:

  1. Fine-tuning which key pieces of information were surfaced during the Agent selection process.

  2. Displaying Offline Agents rather than hiding them. Users called out that knowing what was offline was valuable information that they would have overlooked if hidden.

  3. Placing Agent Details at the bottom of the dropdown to align with users' line of sight.

After

Simplicity and Research are Key

As my first project as lead designer, I grounded myself in research and the knowledge gained from other designers in the industry. This empowered me to lead as a primary contributor and form my own unique design process.

What I learned…

  • Doing research to understand user pain points and defining a narrow problem statement will help focus your work.

  • Testing against a control yields clearer success measurements!

  • Pushing the team to become better moderators gave us the space and confidence to experiment with our processes and become better designers!

This project was an incredible exercise in constraint and simplification. It allowed me to bring a new idea into an already established design and business. Designing within the restrictions of an existing app was both a challenge and an opportunity to utilize strategic critical thinking and user insight.