The Evolution of Feedback in Our Organizations
Across 22 years and two companies now, our system of giving performance feedback has evolved significantly. I thought I’d take a pass at chronicling it here to see what I’ve learned from the progression. Here’s how things changed over the years:
- Written performance reviews. In Return Path’s first year, we had a pretty standard process for reviews. They were more or less “one-way” (meaning managers wrote reviews for their direct reports), and they only happened annually.
- Written 360 reviews. We pretty quickly moved from one-way reviews to 360s. I wrote about this here, but we always felt that being able to give/receive feedback in all directions was critical to getting a full picture of your strengths and weaknesses.
- Live 360 reviews. In addition to the above post/link, I wrote about this a bit further here and here. The short of it is that we evolved written 360s for senior leaders into facilitated live conversations among all the reviewers in order to resolve conflicting feedback and prioritize action items.
- Live 360 reviews with the subject in the room. I wrote about this here…we added the subject of the review to the facilitated live conversation in an observer/clarifying role.
- Peer feedback. At some point, we started doing team-based reviews on a regular cadence (usually quarterly) where everyone on a team reviews everyone else, round-robin style, in a live meeting.
The evolution follows an interesting pattern: increasing utility combined with increasing transparency. The more data that is available to more people, the more actionable the feedback becomes.
The pluses of this model are clear. A steady diet of feedback is much better than a single annual dose. Having the opportunity to prioritize and clarify conflicting feedback is key. Hearing it firsthand is better than having it filtered.
The minuses of this model are less clear-cut. One could be that in round-robin feedback, unless you spend several hours at it, some detail and nuance get lost in the name of prioritization. Another could be that so much transparency causes important feedback to stay hidden because the people who have it are nervous to deliver it so publicly. One mitigating factor on this last point: the feedback that comes up in a peer feedback session is all what I’d call “in bounds” feedback. When there is very serious feedback (e.g., performance or behavioral issues that could lead to a PIP or termination), it doesn’t always surface in peer feedback sessions – it takes a direct back channel to the person’s manager or to HR.
The main conclusion I draw from studying this evolution is that feedback processes, by design, vary with culture. The deeper our culture at Return Path got into transparency, and the more we trained people on giving/receiving feedback and on the Difficult Conversations and Action/Design methodologies, the safer we were able to make it to give tough feedback directly to someone’s face, even in a group setting. That does not mean every company could handle that kind of radical transparency, especially without a journey that increases the transparency of feedback one step at a time. At Bolster, where the culture has been rooted in transparency from the get-go, we were able to start the feedback journey at the Peer Feedback level – although now that I lay it out, I’m worried we may not be doing enough to make sure the peer feedback format delivers real depth of feedback!