WhoDo: Automating Reviewer Suggestions at Scale
- Sumit Asthana
- Rahul Kumar
- Ranjita Bhagwan
- Chetan Bansal
- Christian Bird
- Chandra Maddila
- Sonu Mehta
- B. Ashok
ESEC/FSE 2019 | Published by ACM
Today’s software development is distributed and involves continuous changes for new features, yet the development cycle must remain fast and agile. An important component of enabling this agility is selecting the right reviewers for every code change, the smallest unit of the development cycle. Modern tool-based code review has proven to be an effective way to ensure appropriate review of software changes. However, reviewer selection in modern tool-based code review systems remains largely manual. As software and teams scale, selecting the right reviewers becomes increasingly challenging, and this selection in turn determines software quality over time. While previous work has suggested automatic approaches to code-reviewer recommendation, these have been limited to retrospective analysis. We not only deploy a reviewer-suggestion algorithm, WhoDo, and evaluate its effect, but also incorporate load balancing into it to address one of its major shortcomings: recommending experienced developers too frequently. We evaluate the effect of this hybrid recommendation + load-balancing system on five repositories within Microsoft. Our results center on various aspects of a changelist and how code review affects them. We attempt to quantitatively answer questions that play a vital role in effective code review through our data, and we substantiate these answers through qualitative feedback from partner repositories.
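
To illustrate the intuition behind combining reviewer relevance with load balancing, here is a minimal Python sketch. It is not the paper’s actual WhoDo algorithm: the score formula, the `load_weight` parameter, and the `pending_reviews` load signal are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    relevance: float      # e.g., derived from past activity on the changed files
    pending_reviews: int  # current open review assignments (load signal)

def rank_reviewers(candidates, load_weight=0.5, top_k=3):
    """Rank reviewer candidates by relevance, discounted by current load.

    Hypothetical scheme: score = relevance / (1 + load_weight * pending_reviews).
    A higher load_weight spreads reviews more aggressively away from busy experts.
    """
    scored = [
        (c.relevance / (1.0 + load_weight * c.pending_reviews), c)
        for c in candidates
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c.name for _, c in scored[:top_k]]

# Example: a highly relevant but overloaded expert can rank below
# moderately relevant, less-loaded reviewers under these weights.
candidates = [
    Candidate("expert", relevance=0.9, pending_reviews=8),
    Candidate("regular", relevance=0.6, pending_reviews=1),
    Candidate("newcomer", relevance=0.3, pending_reviews=0),
]
print(rank_reviewers(candidates, top_k=2))  # ['regular', 'newcomer']
```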