GitLab's Registries

The package and container registries are an often overlooked yet critically important part of the product development process. My first design task at GitLab was to improve the overall experience of the registries and set the foundation for the broader product vision. I partnered directly with the Package team, comprised of individuals from across the product org, including product management, engineering leadership, and quality assurance. In the end, we confidently built a new user interface grounded in robust user data.


Discovery research


Is this actually a problem? How does it impact users? What is the full context?

As the foundational research for the newly formed team, we focused on understanding the data users needed about their packages. We split our discovery efforts into two phases: one quantitative and the other qualitative. We first set out to understand what data is important with a survey sent to a broad audience of package and container registry users. We then used those learnings to guide in-person interviews with participants to better understand why certain pieces of package data ranked higher than others.


Exploration

Before kicking off formal user research, I spent a few weeks in exploration, interviewing stakeholders across the organization and discussing the package registries with them. These individuals included team members from sales, marketing, support, and our internal infrastructure team, who dogfooded the product. These interviews provided a richer perspective on which to base our formal user research.


After performing a robust audit of the current product and the general package market, we had a comprehensive understanding of the data related to packages and images. We discovered that containers had completely different needs and data points from packages, and even then, each package format (npm, Maven, Conan, etc.) had unique data requirements. With this data in hand, we started formal user research.


Research plan

We started with a survey to gather quantitative data about which package metadata was most important. We asked users to self-select the package formats they were familiar with (up to 2). After self-selection, participants were shown a list of metadata specific to each chosen format, in random order, and asked to stack-rank it by importance. We concluded the survey by asking whether any relevant data was missing and whether they had any other thoughts. This data would answer the “What?” as in “What data is most important?”

After gathering the quantitative data, we used remote interviews to answer the question “Why is that data so important?” After hearing from users how they used specific data points and why they ranked them higher, we ran a card sorting exercise to better understand how users would organize the full set of data into groups.

Results

The thorough research we did around package and container data became the foundation for all of our Package stage UI. We walked away confidently understanding what data was important to users as a whole, how they used specific data points, and even how a small group of users would group that data in the UI.

18 moderated research participants

284 unmoderated research participants

24 unique research insights

I am so happy Iain pushed to do the survey and the interviews as part of our first research project. We confidently made a lot of business decisions based on that data at the start, and almost 2 years later we still reference some of the insights.

Tim R.

Sr. Product Manager

Inclusive design

How can we solve the problem? Is the design accessible?


With the robust data gathered during the Discovery phase, I walked into the design phase with confidence. 




Collaborative from the start

After sharing those research results with the broader Package team, we kicked off the design phase with an asynchronous workshop, having each teammate complete a series of 8 exercises exploring different areas of the Package and Container Registry UIs and share their thoughts on solutions.

This tight collaboration with engineering led to the discovery of a "quick win": surfacing other GitLab platform metadata that was already available on the frontend. Within one month, we delivered value by exposing a package's originating pipeline and merge request (MR) in the user interface. This directly connected users not just with the package itself, but with where it came from. Almost 2 years later, users still bring up that improvement in conversation.


Iterative Design

Iteration is incredibly important to the design process, and it is made even better by an organization that boldly embraces iteration like GitLab. The design work was split into two main phases: first shoring up the existing design with minimal viable changes (MVCs), followed by a second phase to polish the interface and overall experience. Within each phase, the design went through many iterations based on feedback gathered from across the company.


Leveraging Pajamas

An important aspect of the project as a whole was to bring the Package stage UIs into parity with the Pajamas design system. While this can feel restricting at times as a designer, it offered quite a few advantages:

  • Inter-product consistency with the rest of the platform raises user trust
  • Pre-built components are much easier for engineering to build on

User validation


Problem solved? Is it usable?

Moderated Solution Validation

The official solution validation exercise for the container and package registries was a series of moderated user interviews. We explored each participant's technical background, then asked them to complete 5 tasks related to the registry using a Figma prototype. Each time a participant entered a new user interface in the prototype, we paused the tasks and asked them to review the UI in detail. As researchers, we wanted to discover whether participants could identify or explain each of the different pieces of package or image related data shown in the UI.

User testing was performed with internal stakeholders, dogfooding engineers, and 8 external participants ranging from engineers to system administrators.


Team collaboration

As part of the research initiative, we collaborated with team members from throughout the organization:

  • Each engineer shadowed a research session to interact with users firsthand
  • Insights were synthesized as a broader team using affinity mapping
  • Research results were shared across the company, including organizations like Marketing, Sales, Strategy, and even the executive team

✔️ Design Validated

After completing multiple rounds of feedback gathering, validation, and design iteration, we walked away with a user-validated design that was ready to be built by the engineering team. Beyond the user interface itself, we gained a lot of context around what users expected from their package manager and what they were hoping to see next.

We were able to further utilize the data and insights from testing to guide prioritization during planning breakdowns with the larger team.

Empower execution


How do we build this? Can we iterate while still delivering value?

Measure results


What value did we deliver? Is it being used? Did it change other behaviors?