Red Hat, a leading provider of enterprise open-source solutions, heard from software development teams about their struggles navigating the diverse tools and technologies their work requires beyond coding. To tackle these challenges, we created two AI-powered tools integrated into the developer's workflow. These tools aim to streamline coding, improve developer productivity, and fuel innovation and growth throughout the software development lifecycle.
Role: Product Designer / Research Consultant
Timeline: 3 Months
Software: Figma
Team: 👤 Jessie Huang (Product Design Lead) 👤 Michael Yung (UI/UXR) 👤 Caroline Yang (UXR) 👤 Ami Sao (UXR) 👤 Quinn Truong (Strategy Lead)
Skills: User Research · Product Strategy · Visual Design · Interaction Design · Storytelling
How can we create experiences to improve a software developer's workflow with AI?
The Problem
DocumentationAI
Automatically generates an overview of the program with clear headers and details of code. Reduces the time developers spend on documentation, allowing them to focus on more critical tasks.
DocumentationAI Walkthrough (Jessie)
SummarizerAI
Get actionable insights and refactoring suggestions to optimize your codebase, enhance maintainability, and streamline development.
SummarizerAI Walkthrough (Michael)
My Contribution
I designed DocumentationAI to improve the onboarding experience for new developers and reduce the need for extensive documentation. As the product designer and UX researcher, I collaborated with a team of three researchers and one designer to scope the problem space through user interviews and desk research. I drafted interview guides and led two rounds of usability testing, collecting and summarizing findings for both DocumentationAI and SummarizerAI. I presented our findings to stakeholders, incorporated their feedback, and designed DocumentationAI using Red Hat's PatternFly design system.
The Impact
Following the project, the client shared plans to integrate DocumentationAI into Red Hat Developer Hub. This implementation aims to streamline the onboarding process by reducing the time and effort developers need to familiarize themselves with the codebase and its processes, which the client expects to increase developer productivity, improve knowledge alignment, and enhance overall efficiency.
UXR
Red Hat frequently hears from software development teams about the challenges of work beyond coding, since developers must grasp a wide range of tools and technologies. Artificial intelligence (AI) can assist developers in navigating this complex landscape.
At the beginning of the project, we were given a broad objective: to improve the developer's workflow using AI within a Red Hat developer product. To gain a clearer understanding, we reviewed the developer's workflow chart provided by Red Hat, which detailed the various stages and processes involved.
Red Hat Developer's Flow
Inner Loop & Outer Loop Developers
Through our research, we identified two main user groups of Red Hat products: inner loop developers and outer loop developers. To gather insights, we conducted 13 user interviews (2 entry-level developers, 2 graduate students, and 9 undergraduate students). Based on the severity and frequency of the issues identified, we narrowed the problem space down to documentation and refactoring.
We conducted a competitive analysis to identify existing products in the developer space, examining their strengths and weaknesses. Our research revealed that most AI tools cater to developers in the inner loop, primarily offering features like code spell-checking and code suggestions. Our goal was to differentiate our product by exploring an untapped space and addressing unmet needs in the market, so we chose to focus on documentation and refactoring, two critical areas largely overlooked by existing solutions.
Microsoft Copilot
Automates tedious tasks, provides real-time suggestions, and streamlines documentation and refactoring processes.
GitLab AI
Integrated into the GitLab platform, it provides intelligent assistance for developers, automating routine tasks, suggesting code improvements, and enhancing collaboration.
SonarSource
Code analysis and cleaning tool that leverages machine learning to improve code quality and security. It automatically detects bugs, vulnerabilities, and code smells, providing actionable insights for developers to refine their code.
The Users
Based on our research, we created these two personas to better understand the needs and challenges of our target users:
Outer Loop Developer Persona (Developer Dave)
Inner Loop Developer Persona (Student Sally)
Ideation
How might we imagine an AI-powered tool that improves a software developer’s existing workflow, enabling faster onboarding onto project teams, more efficient comprehension of code, and expedited code documentation?
To brainstorm ideas for DocumentationAI and SummarizerAI, we utilized the Crazy 8s method and How Might We (HMW) questions.
Once we had a variety of concepts, we used a voting system with stickers to prioritize the ideas. Each team member placed stickers on their favorite concepts, and the ones with the most votes were selected for further development.
How-Might-We
We framed our challenges as HMW questions to inspire innovative approaches. For example, "How might we make debugging more efficient?"
Crazy-8
Each team member sketched 8 different ideas in 8 minutes, encouraging rapid ideation and diverse thinking.
DocumentationAI Concept & Usability Testing
After we created these concepts, we built initial designs and conducted concept testing to evaluate their effectiveness.
Concept Testing: We conducted three interviews, each lasting about 45 minutes. Two of the sessions were held virtually and one was completed in person.
Key Insights - Concept Testing
1. Users found the purpose of the documentation and citation tool unclear. They were confused by the style and type categories (e.g., "NumPy" and "DocStrings") used in the wireframe; a brief illustration follows this list.
2. Concept tests were invaluable for matching the vocabulary and content design of our wireframes to developers' mental models. We discovered that the word "citation" is uncommon in the computer science field, so to avoid confusion we renamed the tool "DocumentationAI", a term our participants recommended as more familiar in a computer science context.
3. Users had trouble locating the tool and felt the popup was too small to see in the IDE, especially when using a larger monitor.
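For context, the "style" options in that wireframe referred to established docstring conventions. Below is a minimal, hypothetical sketch of the kind of NumPy-style docstring DocumentationAI might generate for a small function; the function and wording are illustrative, not actual tool output.

def moving_average(values, window):
    """Compute the simple moving average of a sequence.

    Parameters
    ----------
    values : list of float
        Input data points.
    window : int
        Number of consecutive points to average over.

    Returns
    -------
    list of float
        Averaged values, of length len(values) - window + 1.
    """
    # Slide a fixed-size window across the data and average each slice.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]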
Usability Testing: Our usability testing involved six participants, with each session lasting approximately 45 minutes. We interviewed three participants over Zoom and three in person.
Key Insights - Usability Testing
1. Users gave positive feedback on our designs for the "View Source" and "View Architecture" pages, as they matched their mental models.
2. Two users had difficulty finding the DocumentationAI tool within VS Code, suggesting potential usability issues with its placement and accessibility.
3. Users said they would have preferred an option to confirm before proceeding with documentation generation, to prevent unexpected actions.
4. Users were confused by the vertical left-hand navigation bar, particularly the ability to change the architecture and the origin of the architecture diagrams.
5. Three users were also unsure whether the AI automatically generated the architecture diagram or whether they had to create it manually.
Reflection
This project was a great experience that allowed me to dive deep into the world of developer workflows and explore how AI can enhance these processes. From discovery research to the final iteration of our designs, I was actively involved in every step. Although I focused mainly on DocumentationAI, I also participated in testing SummarizerAI. This project was particularly exciting due to the innovative potential of AI. After presenting our solutions, our client decided to move forward with the AI tool for documentation, confirming the value and feasibility of our work.