At Slack, the goal of the Mobile Developer Experience team (DevXp) is to empower developers to ship code with confidence while enjoying a pleasant and productive engineering experience. We use metrics and surveys to measure productivity and developer experience, such as developer sentiment, CI stability, time to merge (TTM), and test failure rate.

We have gotten a lot of value out of our focus on mobile developer experience, and we think most companies under-invest in this area. In this post we will discuss why having a DevXp team improves efficiency and happiness, the cost of not having one, and how the team identified and resolved some common developer pain points to optimize the developer experience.

How it began

A few mobile engineers realized early on that engineers hired to write native mobile code might not necessarily have expertise in the technical areas around their developer experience. They thought that if they could make the developer experience better for all mobile engineers, they would not only help engineers be more productive, but also delight our customers with faster, higher-quality releases. They got together and formed an ad-hoc team to address the most common developer pain points. The mobile developer experience team has grown from three people in 2017 to eight people today. In our five years as a team, we have focused on these areas:

  • Local development experience and IDE usability
  • Our growing codebase: ensuring visibility into problematic areas of the codebase that require attention
  • Continuous integration usability and extensibility
  • Automation test infrastructure and automated test flakiness
  • Keeping the main branch green: making sure the latest main is always buildable and shippable

The cost of not investing in a mobile developer experience team

A mobile engineer usually starts a feature by creating a branch on their local machine and committing their code to GitHub. When they are ready, they create a pull request and assign it to a reviewer. Once a pull request is opened or a subsequent commit is added to the branch, the following CI jobs get kicked off:

  • Jobs that build artifacts
  • Jobs that run tests
  • Jobs that run static analysis

Once the reviewer approves the pull request and all checks pass on CI, the engineer can merge the pull request into the main branch. Here is a visualization of the developer flow and the flow interruptions associated with each area.
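As a sketch, a pull request pipeline covering those three job types might look like the following in a generic YAML-based CI configuration. The step names and script paths here are illustrative, not Slack's actual setup:

```yaml
# Illustrative PR pipeline -- step names and scripts are hypothetical
steps:
  - label: "build-artifacts"
    command: ./ci/build_app.sh
  - label: "unit-tests"
    command: ./ci/run_tests.sh
  - label: "static-analysis"
    command: ./ci/run_lint.sh
```

Each step runs independently, so a failure in static analysis, for example, is reported back to the pull request without blocking the other jobs.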

Here is a rough estimate of the cost of some developer pain points, and the cost to the company of not addressing them as the team grows:

While developers can learn to resolve some of these issues themselves, the time spent and the cost incurred are not justifiable as the team grows. Having a dedicated team that focuses on these problem areas and identifies ways to make the developer teams more efficient ensures that developers can maintain an intense product focus.


Our team partners with the mobile engineering teams to prioritize which developer pain points to address, using the following approach:

  • Listen to customers and work alongside them. We partner with a mobile engineer as they work on a feature and observe their challenges.
  • Survey the developers. We conduct a quarterly survey of our mobile engineers where we track a general Net Promoter Score (NPS) around mobile development.
  • Summarize developer pain points. We distill the feedback into working areas that we can divide up as a team and address.
  • Gather metrics. It is critical that we measure before we start addressing a pain point, both to ensure that a solution we deploy actually fixes the issue and to understand the exact impact our solution had on the problem area. We come up with metrics that correlate with the problem areas developers have, and track them on dashboards. This lets us see the metrics change over time.
  • Invest in experiments that improve developer pain points. We evaluate solutions either by consulting with other companies that develop at this scale, or by coming up with a novel solution ourselves.
  • Consider using third-party tools. We evaluate whether it makes more sense to use existing solutions or to build our own.
  • Repeat the process. Once we release a solution, we look at the metrics to ensure that it moves the needle in the right direction; only then do we move on to the next problem area.

Developer pain points

Let’s dive into some developer pain points in order of severity and examine how the mobile developer experience team addressed them. For each pain point, we will start with some quotes from our developers and then outline the steps we took.

CI test jobs that take a long time to complete

When a developer has to wait a long time for tests to run on their pull request, they switch to working on a different task and lose context on the original pull request. When the test results come back, if there is an issue they need to address, they have to re-orient themselves with the original task. This context switching takes a toll on developer productivity. The following are two quotes from our quarterly mobile engineering survey in 2018.


Faster CI time! I think this is requested a lot, but it would be amazing to have this improved

Jenkins build times are quite high and it would be great if we can reduce these

From 1 to 10 developers, we had a few hundred tests and ran them all serially, using xcodebuild for iOS and Firebase Test Lab for Android.

Running the tests serially worked for a few years, until the test job started to take almost an hour. One of the solutions we considered was introducing parallelization to the test suites. Instead of running all of the tests serially, we could split them into shards and run the shards in parallel. Here is how we solved this problem on the iOS and Android platforms.
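The sharding idea itself is simple. Here is a minimal Python sketch of a round-robin partition (the suite names are hypothetical, and this is an illustration of the concept rather than our production sharder):

```python
def shard_tests(test_classes, num_shards):
    """Partition test classes round-robin into num_shards buckets so
    each shard can run in parallel on its own simulator or emulator.
    Sorting first makes the assignment deterministic across runs."""
    shards = [[] for _ in range(num_shards)]
    for index, test_class in enumerate(sorted(test_classes)):
        shards[index % num_shards].append(test_class)
    return shards

# Hypothetical suite names, split across two parallel workers
shards = shard_tests(
    ["LoginTests", "ChannelTests", "DraftTests", "SearchTests", "ProfileTests"], 2
)
```

With N shards the wall-clock time approaches the runtime of the slowest shard instead of the sum of all suites, which is why parallelization alone carried us for several years.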


We considered writing our own tool to achieve this, but then discovered a tool called Bluepill that was open sourced by LinkedIn. It uses xcodebuild under the hood, but adds the ability to shard and execute tests in parallel. Integrating Bluepill decreased our total test execution time to about 20 minutes.

Using Bluepill worked for a few more years, until our unit test job once again started to take almost 50 minutes. Slack iOS engineers were adding more test suites, and we could no longer rely solely on parallelization to lower TTM.

How moving to a modern build system helped drive down CI job times

Our next strategy was to implement a caching layer for our test suites. The goal was to run only the tests that needed to run on a given pull request, and return the remaining test results from cache. The problem was that xcodebuild doesn’t support caching. To implement test caching we needed to move to a different build system: Bazel. We utilized Bazel’s disk cache on CI machines so builds from different pull requests can reuse build outputs from another build rather than producing each output locally from scratch.
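Enabling the disk cache is a one-line Bazel flag; a minimal sketch of the CI configuration (the cache path is illustrative) looks like this:

```
# .bazelrc on CI machines -- the path is illustrative
build --disk_cache=/var/bazel/disk-cache
```

Because `bazel test` inherits `build` options, the same flag lets test actions reuse cached results as well.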

In addition to the Bazel disk cache, we use the bazel-diff tool, which lets us determine the exact set of impacted targets between two Git revisions. The two revisions we compare are the tip of the main branch and the last commit on the developer’s branch. Once we have the list of impacted targets, we test only those targets.
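Conceptually, bazel-diff hashes every Bazel target at each of the two revisions and diffs the results. This simplified Python model (not the tool itself; the target names and hashes are made up) shows the core idea:

```python
def impacted_targets(main_hashes, branch_hashes):
    """Return targets whose content hash changed, or that only exist
    on the branch; only these need to be rebuilt and retested."""
    return sorted(
        target
        for target, digest in branch_hashes.items()
        if main_hashes.get(target) != digest
    )

# Hypothetical target hashes at the two revisions being compared
main = {"//app:ui": "aaa", "//app:core": "bbb", "//app:net": "ccc"}
branch = {"//app:ui": "aaa", "//app:core": "ddd", "//app:search": "eee"}
```

In this example only `//app:core` (changed) and `//app:search` (new on the branch) would be tested; everything else comes from cache.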

With the Bazel build system and bazel-diff, we were able to cut TTM to an average of nine minutes, with a minimum TTM of 4.5 minutes. This means developers get the feedback they need on their pull request sooner, and can more quickly get back to collaborating with others and working on their features.


In the early days, TTM was around 50 minutes, and Firebase Test Lab (FTL) didn’t have test sharding. We built an in-house test sharder on top of FTL called Fuel to break tests into multiple shards and call FTL APIs to run each test shard in parallel. This brought TTM from 50+ minutes to under 20 minutes. Here is a high-level overview:

We continued using Fuel for two and a half years, and then moved to an open source test sharder called Flank. We continue to use Flank today to run Android functional and end-to-end UI tests.

Test-related failures

When a check fails on a pull request because of flaky or unrelated test failures, it has the potential to take the developer out of flow, and can impact other developers as well. Let’s take a look at a few culprits causing unrelated pull request failures and how we have addressed them.

Fragile automation frameworks

From 2015 to early 2017, we used the Calabash testing framework, which interacted with the UI and wrapped that logic in Cucumber to make the steps human-readable. Calabash is a “blackbox” test automation framework and needs a dedicated automation team to write and maintain tests. We observed that the more tests were added, the more fragile the test suites became. When a test failed on a pull request, the developer would reach out to an automation engineer to understand the failure, attempt to fix it, then rerun it and hope that another fragile test didn’t fail their build. This resulted in a long feedback loop and increased TTM.

As the team grew, we decided to move away from Calabash and switched to Espresso, because Espresso is tightly coupled with the Android OS and is written in the native languages (Java and Kotlin). Espresso is powerful because it is aware of the internal workings of the Android OS and can interface with it easily. This also meant that Android developers could easily write and modify tests, because they were written in the language they were most comfortable with. A few benefits of the migration to highlight:

  • It helped shift testing responsibility from our dedicated automation team to developers, so they can write tests as needed to cover the logic in their code
  • Testing time went from ~350 minutes to ~60 minutes when we moved from Calabash to Espresso and FTL

Flaky tests

In early 2018 the developer sentiment toward testing was poor, and testing caused a lot of developer pain. Here are a couple of quotes from our developer survey:


Flaky tests are still a bottleneck sometimes. We should have a better way of tracking them and pinging the owner to fix them before they cause too much friction

Flaky tests slow me down to a halt – there should be a more streamlined process in place for proceeding with PRs once flaky tests are found (instead of blocking a merge as happens now)

At one point, 57% of the test failures in our main branch were due to flaky tests, and the percentage was even higher on developer pull requests. We spent some time learning about flaky tests and recently got them under control by building a system to auto-detect and suppress flaky tests, so that developer experience and flow stay uninterrupted. Here is a detailed article outlining our approach and how we decreased the test failure rate from 57% to 4%.
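Our actual system is described in the linked article; as a rough sketch of the idea (the window size and threshold here are illustrative), a detector can track each test’s recent pass/fail history on main and flag tests that fail intermittently:

```python
from collections import defaultdict

class FlakeTracker:
    """Track recent pass/fail results per test on the main branch and
    flag intermittent failures as flaky (window/threshold illustrative)."""

    def __init__(self, window=20, threshold=0.05):
        self.window = window
        self.threshold = threshold
        self.history = defaultdict(list)  # test name -> recent pass/fail bools

    def record(self, test, passed):
        runs = self.history[test]
        runs.append(passed)
        if len(runs) > self.window:
            runs.pop(0)  # keep only the most recent window

    def is_flaky(self, test):
        runs = self.history[test]
        if not runs:
            return False
        failure_rate = runs.count(False) / len(runs)
        # Intermittent failures are flakes; a test that always fails
        # is a real regression and should stay visible.
        return self.threshold <= failure_rate < 1.0
```

A flagged test can then be auto-suppressed from pull request runs and routed to its owner, instead of blocking unrelated merges.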

CI-related failures

For many years we used Jenkins to power the mobile CI infrastructure, using Groovy-based Jenkinsfiles. While it worked, it was also the source of a lot of frustration for developers. These problems were the most impactful:

  • Frequent downtime
  • Degraded performance of the system
  • Failure to pick up Git webhooks, and therefore not starting pull request CI jobs
  • Failure to update the pull request when a job fails
  • Difficulty debugging failures due to poor UX

After flaky tests, CI downtime was the biggest bottleneck negatively impacting the mobile team’s productivity. Here are some quotes from our developers regarding Jenkins:


Need more reliable hooks between the Jenkins CI and GitHub. When things do go wrong, there are often no links in GH to go to the right place. Also, sometimes CI passes but doesn’t report back to GH so the PR is stuck in limbo until I manually rebuild stuff

Jenkins is a pain. Remove the Blue Ocean Jenkins UI that’s confusing and everyone hates

Jenkins is a mess to me. There are too many links and I only care about what broke and what button/link I need to click on to retry. Everything else is noise

After using Jenkins for more than six years, we migrated to Buildkite, which has had 99.96% uptime so far. Webhook-related issues have completely disappeared, and the UX is simple enough for developers to navigate without needing our team’s help. This has not only improved developer experience but also reduced the triage load for our team.

The immediate impact of the migration was an 8% increase in CI stability, from ~87% to 95%, and a 41% reduction in time to merge, from ~34 minutes to ~20 minutes.

Merge conflicts

Conflicts while adding new modules or files to the Xcode project for iOS

As the number of iOS engineers at Slack grew past 20, one area of constant frustration was the checked-in Xcode project file. The Xcode project file is an XML file that defines all of the Xcode project’s targets, build configurations, preprocessor macros, schemes, and much more. With a small team, it is easy to make changes to this file and commit them to the main branch without causing any issues, but as the number of engineers increases, so does the chance of causing a conflict by changing this file.


“I think the concern is more so the Xcode project file; resolving conflicts on that thing is painful and error prone. I’m not sure what the best approach is to alleviating this possible pain point, especially if they’ve added new code files.”

“I had a dozen or so conflicts in the project file that I had to manually resolve. Not a huge issue in itself, but when you’re expecting to merge a PR it can be a surprise”

The solution we implemented was to use a tool called XcodeGen. XcodeGen allowed us to delete the checked-in .xcodeproj file and generate the Xcode project dynamically from a YAML file that contains definitions of all of our Xcode targets. We connected this tool to a command-line interface so that iOS engineers can create an Xcode project from the command line. Another benefit is that all of the project- and target-level settings are defined in code, not in the Xcode GUI, which makes the settings easier to find and edit.
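A minimal, illustrative `project.yml` (the target names and paths here are hypothetical, not Slack’s real manifest) shows the shape of an XcodeGen definition; running `xcodegen generate` against it produces the `.xcodeproj`:

```yaml
# project.yml -- target names and paths are illustrative
name: ExampleApp
options:
  bundleIdPrefix: com.example
targets:
  ExampleApp:
    type: application
    platform: iOS
    deploymentTarget: "15.0"
    sources: [Sources/App]
    dependencies:
      - target: ExampleKit
  ExampleKit:
    type: framework
    platform: iOS
    sources: [Sources/ExampleKit]
```

Because the generated `.xcodeproj` is no longer checked in, two engineers adding files concurrently edit this small, readable YAML file instead of a sprawling XML project file, and most merge conflicts disappear.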

After adopting Bazel we took it a step further and generated the YAML file dynamically from our Bazel build descriptions.

Multiple concurrent merges to main have the potential to break main

So far we have talked about different issues that developers can experience when writing code locally and opening a pull request. But what happens when multiple developers try to land their pull requests on the main branch concurrently? With a large team, multiple merges to main happen throughout the day, which can make a developer’s pull request stale quickly. The longer a developer waits to merge, the larger the chance of a merge conflict.

An increasing number of merge conflicts caused by concurrent merges started breaking the main branch and negatively affecting developer productivity. Until a merge conflict is resolved, the main branch remains broken and all productivity is paused. At one point merge conflicts were breaking the main branch multiple times a day. More and more developers started requesting a merge queue.


We keep breaking the main branch. We need a merge queue.

We brainstormed different solutions and ultimately landed on a third-party solution called Aviator, combined with our in-house tool Mergebot. We felt that building and maintaining a merge queue would be too much work for us, and that the best solution was to rely on a company spending all of its time on this problem. With Aviator, developers add their pull request to a queue instead of merging directly to the main branch. Once a pull request is in the queue, Aviator merges main into the developer’s branch and runs all of the required checks. If a pull request would break main, the merge queue rejects it and the developer is notified via Slack. This approach helps avoid merge conflicts.
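The queue’s behavior can be modeled in a few lines of Python. This is a toy simulation of the general merge queue idea, not Aviator’s implementation; `run_checks` stands in for the real CI:

```python
def process_queue(queue, main_commits, run_checks):
    """Toy merge queue: for each queued PR, merge the latest main into
    it, run the checks, and either land it on main or reject it."""
    merged, rejected = [], []
    for pr in queue:
        candidate = main_commits + pr["commits"]  # main merged into the branch
        if run_checks(candidate):
            main_commits = candidate      # the PR lands; main advances
            merged.append(pr["id"])
        else:
            rejected.append(pr["id"])     # author would be notified via Slack
    return merged, rejected, main_commits
```

The key property is that every pull request is validated against the main branch it will actually land on, so main can only advance through states that passed CI.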


Way better now with Aviator. The only pain point is that I cannot merge my pull requests myself and must rely on Aviator. Aviator takes hours to merge my PR to master. Which makes me anxious.

Being an early adopter means you get some benefits but also some pain. We worked closely with the Aviator team to identify and address developer pains, such as increased time to merge a pull request into the main branch, and failure reporting on a pull request when it is dropped out of the queue due to a conflict.

Checking pull request progress/status

This is a request we received in 2017 in one of our developer surveys:


Would really love timely alerts for PR assignments, comments, approvals etc. Also would be nice if we could get a DM if our builds pass (rather than only the alert for when they fail) with the option to merge it right there from Slack if we have all the needed approvals.

Later in the year we created a service that monitors Git events and sends Slack notifications to the pull request author and reviewer accordingly. The bot is named “Mergebot”; it notifies the pull request author when a comment is added to their pull request or its status changes, and notifies the reviewer when a pull request is assigned to them. Mergebot has helped shorten the pull request review process and keep developers in flow. This is yet another example of how saving just five minutes of developer time saved ~$240,000 for a 100-developer team in a year.
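A simplified model of that kind of event routing looks like the following (the event shapes are hypothetical, not Mergebot’s actual code; a real service would consume GitHub webhooks):

```python
def route_event(event):
    """Map a Git event to (Slack recipient, message). Event shapes are
    hypothetical stand-ins for GitHub webhook payloads."""
    kind = event["type"]
    pr = event["pr"]
    if kind == "review_requested":
        return event["reviewer"], f"You were assigned PR #{pr} to review"
    if kind == "comment_added":
        return event["author"], f"New comment on your PR #{pr}"
    if kind == "status_changed":
        return event["author"], f"Your PR #{pr} is now {event['status']}"
    return None, None  # ignore events we don't notify on
```

Pushing the notification to the right person, instead of making everyone poll GitHub, is what keeps both authors and reviewers in flow.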

Recently GitHub rolled out a similar feature called “GitHub scheduled reminders” which, once opted into, notifies a developer of any PR update through a Slack notification. While it covers the basic reminder functionality, Mergebot is still our developers’ preferred bot, since it doesn’t require explicit opt-in and also allows pull requests to be merged with the click of a button from Slack.


We want Slack to be the best place in the world to make software, and one way we’re doing that is by investing in the mobile developer experience. Our team’s mission is to keep developers in the flow and make their working lives easier, more pleasant, and more productive. Here are some direct quotes from our mobile developers:


DevXp is great. Thanks for always taking feedback from the mobile development teams! I know you care 💪

We’re using modern practices. Bazel is great. I feel incredibly supported by DevXp and their hard work.

The tools work well. The code is modularized well. DevXp is responsive and helpful and continues to iterate and improve.

Do these kinds of developer experience challenges sound interesting to you? If so, come join us!