Simplifying Legacy Business Logic
Agent Simplification
Project Context
I was the primary designer, working alongside a team of three developers, researching and simplifying the agent management experience. The UX design process is complete; engineering development is ongoing.
We removed business logic that caused users significant friction when configuring our product. We navigated a sea of proposed solutions to identify a root cause, then used research and usability analysis to create designs that ease user frustrations while remaining easy for engineers to implement.
Project Details:
Company: Liongard
Timeline: Dec 2022 - Feb 2023
Role: Research, UX/UI Design, Developer Handoff
Tools: Figma, Miro, Maze
Fun Fact!
This was my first project as primary lead designer! It showcases my collaboration skills, ability to navigate a sea of solutions to dig down to a problem’s root cause, and my eagerness to experiment with our design process. It was an exciting time to lean into discomfort and find my own rhythm as a designer.
Product Context
Liongard aggregates information from several systems including hardware devices like servers and computers.
Agents are the engine that powers this process: they allow Liongard to extract data from hardware devices and display it in our product, enabling users to perform powerful actions on the data Liongard brings back.
Problem Statement
Users create and manage an agent every few months.
We suspect there are complexities that slow users down when they have to relearn how to create and manage an agent.
How might we simplify the creation and management process to streamline the agent experience?
Stakeholders’ Proposed Solutions
At the beginning of the project, the engineering and product teams proposed several solutions, such as Agent Permission Groups or introducing the concept of an Agent Probe.
However, this was where I pumped the brakes. In our haste to deliver a solution, we were bypassing the critical steps of empathy and problem definition.
Challenging the Solutions
Stakeholders may have a particular solution in mind for a problem they have encountered, but it is up to us as designers to ground the proposed solution in research.
After several meetings going back and forth between PM and Design, I drew out a flow diagram of all the solutions product was recommending, to build alignment across teams.
The flow diagram demonstrated to both engineering and product that the proposed solutions not only introduced more steps for the user to navigate but also required significant engineering bandwidth to implement.
Through the flow diagram, I brought engineering and product back to the design process: starting with research to inform our solutions, rather than the reverse.
Concept Validation
Research Methods
Conducted in-depth internal interviews with 3 partner success engineers (who help users troubleshoot product problems) to understand users’ agent experiences.
Validated mental models and pain points through a survey study, which received 57 responses.
With the research complete, I created a persona to summarize what I had collected from our users. This persona later helped me improve the agent experience by identifying areas of complexity to simplify.
Combing Through Ambiguity
I learned that understanding user sentiment via survey responses is not a straightforward process.
To contextualize our findings, we had to weigh quantitative responses alongside the open-ended responses users left.
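To illustrate the weighing process, here is a minimal, hypothetical sketch of the idea: an item whose average Likert score looks neutral-or-better can still hide friction if many open-ended comments mention pain points. The field names, keyword list, and thresholds are illustrative assumptions, not artifacts of the actual study.

```python
# Hypothetical sketch: pair Likert scores with open-ended comments to flag
# survey items whose average rating looks fine but whose comments signal
# friction. Keywords and thresholds are illustrative, not from the study.

FRICTION_WORDS = {"confusing", "no idea", "relearn", "documentation"}

def flag_hidden_friction(responses, neutral_threshold=3.0, comment_ratio=0.3):
    """responses: list of dicts with 'score' (1-5 Likert) and 'comment'."""
    avg_score = sum(r["score"] for r in responses) / len(responses)
    friction_hits = sum(
        1 for r in responses
        if any(word in r["comment"].lower() for word in FRICTION_WORDS)
    )
    # Flag when the quantitative average looks acceptable but a large
    # share of open-ended comments still describe pain points.
    return avg_score >= neutral_threshold and (
        friction_hits / len(responses) >= comment_ratio
    )

responses = [
    {"score": 3, "comment": "No idea what the different types mean."},
    {"score": 4, "comment": "Works fine for me."},
    {"score": 3, "comment": "We reread the documentation every time."},
]
print(flag_hidden_friction(responses))  # True: neutral scores, frequent friction
```

The point of the sketch is simply that neither signal alone tells the story; the quantitative average and the qualitative comments have to be read together.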
Early Research Findings
Key Finding #1: Agent Types are confusing
While the survey responses showed users felt neutral or even positive about Agent Types, the open-ended responses revealed deep friction between users and Agent Types.
Quotes
"What makes an agent self-hosted, vs. on-prem, etc? Why does the Agent Type have to even matter?"
"We don't install the agents everyday. Each time we have have to go through either ours or Liongard’s documentation to find out what each agent does and how to use them. Having multiple agents takes more time for us to onboard a client. "
"No idea what the different types mean or do."
Insight:
Users don’t install or interact with agents on a daily basis. When they do install or manage an agent, they are forced to relearn Agent Types, at best slowing them down and at worst causing them to abandon their workflow altogether.
Key Finding #2: Users were unsure how to manage agents in absence of Agent Types
Quotes
"Agent Types determine what agents we will be billed for. Without Agent Types, how do we ensure that we aren’t billed for stuff we’re not using"
"I can't identify a specific scenario, but I am a control freak from time to time, so flexibility to choose when/where an Agent can run is important to me."
Insight:
Users are concerned that changes to Agent Types will create ambiguity around what they will be billed for. The user persona we serve is especially security-minded and will always say yes to having control over settings within Liongard.
Heuristic Analysis
In addition to my preliminary research, I performed a heuristic analysis of the screens users interact with when managing their agent in our product, and quickly pinpointed several areas of improvement.
Users have no information other than the agent name to make their selections, which can lead to incorrect inputs and therefore misconfigured data.
Our most stable and recommended agent (the On Demand Agent) is placed at the bottom of the dropdown, undistinguished from other options.
Offline/broken agents are listed but not visually distinguished within the dropdown, which can lead to incorrect selections.
The ? help icon navigates the user away from the configuration page to our doc site, disrupting their selection process.
Designs: HiFi Prototypes
Improving Agent Selection
1. Placing the On Demand Agent at the top of the dropdown, labeled "(Recommended)," to nudge users toward this option.
2. Surfacing a key data point, such as Domain Name, to help users distinguish between choices at a glance.
3. Allowing users to see full details of their agents list if they require more info to assign the correct agent.
4. Changing the link to a tooltip so users won't be disrupted mid-selection by being navigated to a separate webpage.
5. Visually distinguishing Offline Agents without obstructing dropdown selection options. Users have the option to view offline agents and troubleshoot the error as a secondary flow.
HiFi Prototype
Agent Permissions
To address users’ concerns about billing transparency and control over their agent’s configuration, I created mockups for an Agent Permissions feature.
User Testing
Before
Interviews went over time and off track from defined tasks, with moderators unable to steer the interview back.
There were strong reservations around changing existing interviewing methods because of past experiences.
Usability tests were conducted against only the prototype, to decrease the likelihood of going over allotted time.
Experiment & Evolve
I researched and spoke to UX designers within my network to improve our moderation methods for keeping users on track.
I experimented with our user interview structure, applying what I learned from other designers and practicing new moderating skills.
I tested tasks against both the prototype and the product, increasing confidence in product direction and uncovering mental models.
By leveling up our user interview practices, I significantly improved the quality of insights we extracted from each interview while strengthening our ability as moderators to guide an interview.
This project gave me the opportunity to push our team to grow and experiment.
What I Found
After conducting 5 face-to-face user interviews, I grouped common findings together. This process helped me understand the major areas where my prototype needed improvement, and prioritize which area to focus on.
Agent Permissions
When users went through tasks within Agent Permissions, all felt the granularity of control was unnecessary, saying “I don’t [see a use case] and [this feature] wouldn’t matter to me”.
This allowed us to descope Agent Permissions and shave off 3 weeks of engineering sprint work.
Agent Selection
Overall, users were delighted by the small improvements we made to the agent selection process, saying the designs were “a lot cleaner” and “[he could see] valuable info without scrolling through”.
We made iterative improvements based on user feedback:
Users expressed that IP Address, rather than Domain Name, was the most important info for identifying the right agent to select.
Users expected Offline Agents not to be hidden but instead greyed out and placed at the bottom of the dropdown.
I moved the Agent Details button to the bottom of the dropdown to align with users’ line of sight; during tasks, they often missed this option because they scan up and down, not left to right.