Recently, GitHub announced a feature to automatically merge a pull request. It's a much-awaited feature and everyone should try it out! In light of that announcement, we want to share more thoughts on managing merge workflows and why they are worth the effort for high-output engineering teams.
An essential characteristic of large-scale software development is parallel development by teams of software engineers. With hundreds of engineers working on the same code base, updating millions of lines of code, build failures are common. These failures can stem from unit test failures, compilation errors, merge conflicts, or even flaky build setups. A paper by Perry et al. shows a strong correlation between the degree of parallel development and the number of build quality problems a codebase has.
Proof by example
Let’s take a simple example to demonstrate. Say Emily and Josh are working on a web application of a food delivery company. They are launching a new feature that offers discounts to users who order ahead.
PR #1: Emily made the following change to calculate the discount offered to the users.
PR #2: And Josh added a new API endpoint for charging users.
Even though the changes look trivial, and both would pass the build independently, when merged the code would break the build because the method signature for charge_order has changed!
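The original snippets are not shown here, so the following is a hypothetical reconstruction of the two changes in Python. The names Order, apply_order_ahead_discount, and the 10% discount are made up for illustration; the point is only that Emily's code calls charge_order with the old signature while Josh's PR changes that signature.

```python
class Order:
    def __init__(self, total):
        self.total = total

# PR #2 (Josh): the new endpoint changed charge_order to require a payment method.
def charge_order(order, payment_method):  # was: charge_order(order)
    return f"charged {order.total} via {payment_method}"

# PR #1 (Emily): written and tested against the old one-argument signature.
def apply_order_ahead_discount(order):
    order.total = round(order.total * 0.9, 2)  # 10% off for ordering ahead
    return charge_order(order)  # breaks once both PRs land together
```

Each PR passes CI on its own branch, but in the merged code Emily's call is missing the new payment_method argument and raises a TypeError at runtime (or fails the merged build's tests).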
In a traditional setup, both changes would pass the standard Continuous Integration workflow and land on the main branch. In situations like these, engineers can spend hours identifying the root cause of the failure, are blocked from pushing further changes, and may end up delaying the release of the application.
Even though GitHub's auto-merge removes the painful manual labor of merging a change, it does not solve the issue of broken builds. So how do we keep builds green? There are a few ways:
Use a manual mutex
A mutex is a synchronization primitive that protects shared data from being simultaneously accessed by multiple threads.
You want to avoid any kind of concurrent merge. This can be done by "synchronizing developers", i.e., making sure only one person merges at a time. In one of my previous roles, we had an internal app that acted as a manual mutex: if you had to merge a change, you grabbed the mutex or queued your name up. When your turn came, you merged the change and manually released the mutex. That can work for a team of 4-5 engineers; now imagine playing manual mutex in a 100-engineer org.
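The internal app is not described in detail, but its core is just a FIFO lock over merges. Here is a minimal sketch of that idea, with an invented MergeMutex class; the real app presumably had a UI and persistence on top of something like this.

```python
import threading
from collections import deque

class MergeMutex:
    """Toy 'manual mutex': one merge at a time, engineers queue up in FIFO order."""

    def __init__(self):
        self._lock = threading.Lock()   # guards the queue itself
        self._queue = deque()           # engineers waiting to merge, head = current holder

    def enqueue(self, engineer):
        """Queue your name up; returns the current queue for display."""
        with self._lock:
            self._queue.append(engineer)
            return list(self._queue)

    def current_holder(self):
        """Whoever is at the head of the queue may merge now."""
        with self._lock:
            return self._queue[0] if self._queue else None

    def release(self, engineer):
        """Manually release the mutex after merging; only the holder can release."""
        with self._lock:
            if self._queue and self._queue[0] == engineer:
                self._queue.popleft()
                return True
            return False
```

The obvious weakness is the "manual" part: nothing forces the holder to release promptly, which is exactly why this stops scaling beyond a handful of engineers.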
Build custom automation
I pledged to solve this problem in my last role, so I built a custom automation: instead of merging each change manually, you tag each pull request to be queued. The automation then picks up each pull request, pulls the latest changes from the base branch, runs the CI and validations, and eventually merges the change. This ensured that the build was tested against the latest changes before anything landed on the main branch.
It was built using GitHub's REST API. It is not too complex to build, but may require some work to deploy and maintain. The automation enforces serializability of all the changes developers merge, without requiring any oversight: a failed change gets dequeued, and the developer is notified of the failure.
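The post does not show the automation itself, so here is a minimal sketch of the loop it describes, assuming a personal access token and using GitHub's documented update-branch, combined-status, and merge REST endpoints. Webhooks, retries, and real error handling are omitted, and the notification is a stand-in.

```python
import json
import time
import urllib.request

API = "https://api.github.com"

def github(method, url, token):
    """Minimal GitHub REST call; returns the parsed JSON response."""
    req = urllib.request.Request(url, method=method, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def next_step(ci_state):
    """Pure decision: merge on green CI, dequeue (and notify) otherwise."""
    return "merge" if ci_state == "success" else "dequeue"

def process_queue(owner, repo, queued_prs, token):
    """Handle queued PRs strictly one at a time -- the serialization step."""
    repo_url = f"{API}/repos/{owner}/{repo}"
    for pr in queued_prs:
        # 1. Pull the latest base-branch changes into the PR branch.
        github("PUT", f"{repo_url}/pulls/{pr}/update-branch", token)
        # 2. Poll the combined commit status until CI settles.
        head = github("GET", f"{repo_url}/pulls/{pr}", token)["head"]["sha"]
        state = "pending"
        while state == "pending":
            time.sleep(30)
            state = github("GET", f"{repo_url}/commits/{head}/status", token)["state"]
        # 3. Merge on success; otherwise dequeue and notify the author.
        if next_step(state) == "merge":
            github("PUT", f"{repo_url}/pulls/{pr}/merge", token)
        else:
            print(f"PR #{pr} failed CI; dequeued")  # stand-in for a real notification
```

Because the loop handles exactly one pull request at a time and re-validates it against the freshest base branch before merging, the semantic-conflict scenario from the example above gets caught in CI instead of on main.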
Inspired by that custom automation, we built MergeQueue. It performs similar operations but adds many more features:
- connect and authorize a GitHub repository,
- select rules on which types of changes are picked up,
- get failure reports on Slack,
- select required reviewers and ignore certain failed tests,
- skip the line for hotfixes and high-priority changes,
- enterprise-ready, with support for on-prem installation and Okta-based SAML authentication.
If you have any questions or thoughts on merge automation, we would love to hear from you: email me at email@example.com.