Performance management - your input needed


#1

As discussed in this week’s town hall, we’re looking to try a peer performance system over the next month.

Why?
As an organisation, we should have a method of understanding contributor performance and impact. Doing that allows:

  • Individuals to receive feedback on their performance, an understanding of what they can do better, and appreciation for the work they are doing.
  • Teams to understand their overall collaboration and impact.
  • Status (as an organisation) to understand how our resources are being invested, and if they are being used wisely.
  • Status (as an organisation) to understand which contributors are performing above the standard, and who could use support/development.

How?
We considered the more traditional models of performance management (i.e. top-down systems reliant on “managers”) before going down an alternative route. The three shortlisted methods are:

  1. A stack ranking system, where each contributor would be asked to stack rank their peers based on performance.
    Pros: Simple, very lightweight, gives a clear read of performance by team member.
    Cons: Although we’d understand performance by team, we wouldn’t be able to understand performance across the organisation.

  2. A peer review system (similar to our trial period system) where every contributor would invite peers to complete a form/survey, with questions around performance, collaboration, etc.
    Pros: Provides good feedback and data for the individual.
    Cons: Significant time investment: 95 contributors x 3-5 reviews each, i.e. roughly 285-475 reviews in total.

  3. A “supercharged kudos” system, where every contributor would write up a small summary of their work over the 6 month period, and it would be posted publicly for all to read, review and comment.
    Pros: Completely transparent & open. Could provide a “leaderboard” style showcase of our top contributors.
    Cons: Could negatively impact those not as “visible” or “confident” at self-promotion. Chance that people will avoid constructive criticism in an open forum.

Outcomes
This data could be used to help determine future salary increases or bonus awards. Right now, to do that, we’d rely heavily on the input of Carl, Jarrad & Nabil, but as we’re close to 100 Core Contributors, it’s not realistic to expect them to be close enough to everyone’s work to make valid assessments.

Challenges
There is a long list of pros and cons for each option, but we should call out some specific challenges:

  • In any performance system, it’s likely that some people are more confident to “self-promote” whilst others are more modest. We want to recognise/reward performance, not style.
  • Without people leads, some of the “behind the scenes” work in this system actually centralises within PeopleOps.
  • In an open performance system, there’s a chance that people do not feel comfortable providing development or constructive criticism feedback to others.

What are we doing?
We’re looking to implement one of the above systems (unless there is a better alternative), so we’re looking for your feedback and suggestions.

There is no good system/process out there that translates well to an open, decentralized organization like ours. So we’re trying to build something.


Compensation update
#2

My thoughts, or requirements, as a core contributor, based on 3 beliefs:

  • “Stick to the trade you’re good at” (Dutch saying, I’m not sure how to translate)
  • Groups are infinitely more powerful than individuals
  • Performance review should lead to a ‘result’ that drives decision making. I prefer this decision to go beyond an “in-or-out” based on rating, as binary decisions are hardly ever the most optimal ones.

I appreciate the idea of the “supercharged kudos” system as it’s time-efficient. My suggestions below essentially concern the structure in which I believe this method works best.

I would suggest applying a “supercharged kudos” system to circles and differentiating between team and individual performance:

  • Individual performance ratings within functional circles (design team, Clojure dev, Go, etc)
  • Team performance ratings between circles
  • Transparency of individual ratings within a circle
  • Transparency of team ratings across circles
  • An overall view of individual performance for those who need it to fulfil responsibilities of their role by joining all circles as ‘viewer’.

This means the system has at least two preconditions:

  • For this to work, circles would need to be established first.
  • Within each circle people would need to be assigned to a ‘Viewer’ or ‘Reviewer’ status.

The Pro for me would be transparency about team performance across the organization. I don’t need to see individuals’ performance outside of my domain, and I don’t see how this would help improve us as an organization. In my mind, a public negative rating of individuals can only negatively impact future performance, i.e. pull individuals with a lower rating into a cycle of self-fulfilling prophecy.

The Con of negative impact on less visible/confident contributors could be counteracted by keeping the kudos within a functional team, where those rating have more knowledge about the individual’s actual performance.


#3

While I’m in favor of the suggestions here (particularly point 3), I’ll share some alternative ideas I’ve stumbled upon in the past, mostly as food for thought.

The game development company Valve is notorious for running a flat organization where the employees are ultimately responsible for determining each other’s compensation. They have published quite a lot of information about their experience and we can probably use some of the acquired wisdom. A Google search will find you plenty of interviews and other resources, but probably the best one is this handbook for new employees:

Since we are aiming to be pioneers in the DAO world, I think everyone should become familiar with the concept called Futarchy - it’s a fascinating form of government where decisions are based on the confidence of rational actors in certain outcomes, measured by “votes” that are effectively backed up with their money:

Under Futarchy, the compensation of an individual can be influenced by a prediction market asking questions such as:

  1. How much will the value of the company increase if this person is hired (as measured by the SNT price)?
  2. How much will the release date of Project X move if this person leaves the team?

These examples are a bit blunt and suffer from the problem of coming up with a proper desirable metric that is not exposed too much to external volatility, but the concept is still fascinating and it can probably be refined significantly if more thought is put into it.
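As a toy illustration of the mechanism (entirely hypothetical numbers and a heavily simplified market model, not a real futarchy implementation), a futarchy-style decision could compare two conditional prediction markets and enact whichever policy the market expects to produce the better outcome:

```python
def market_prediction(stakes):
    """Stake-weighted average of traders' predicted outcome metric."""
    total_stake = sum(stake for _, stake in stakes)
    return sum(pred * stake for pred, stake in stakes) / total_stake

# Hypothetical (prediction, money staked) pairs: each trader predicts
# the SNT price multiplier under each policy and backs it with funds.
hire_market = [(1.20, 500), (1.05, 200)]
no_hire_market = [(1.00, 400), (0.95, 300)]

# The futarchy rule: enact the policy whose conditional market
# predicts the better value of the chosen metric.
if market_prediction(hire_market) > market_prediction(no_hire_market):
    decision = "hire"
else:
    decision = "don't hire"

print(decision)  # -> hire
```

In a real futarchy the traders in the losing (non-enacted) market would be refunded and the winning market settled against the measured outcome; that settlement step is what makes the “votes backed with money” honest.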


#4

I vote for the second option — peer review system. I didn’t have a chance to go through a trial period review (I joined before it was established). Nonetheless, I like how time-efficient and simple it is on the reviewer side.

But yeah, we need clarity on how those peers will be chosen. On a daily basis, I work mostly within feature teams and would like to receive a review from them (product managers, developers, and QA engineers all have different perspectives and needs) rather than from peers in a functional group.


#5

Since I have been doing many trial period reviews, I must say that the form we used was compact and didn’t take much time, so the second option sounds the best.

However, there is one more drawback: not everyone paid attention, and many people skipped the process. Eventually, there were cases where we had only 1-2 reviews for a candidate, which was simply not enough to make a decision.

In this case, I believe it won’t be a problem, because if someone does not fill in a review form, that might impact their own reviews. Does that sound right?

A “supercharged kudos” system

My problem with this one is that we have teams, not only individuals. It might be wrongly perceived that “I contributed to X” is less important than “I did Y”, while it might be the opposite. This relates to the

Could negatively impact those not as “visible” or “confident” at self-promotion.

drawback.

Unless the comments on a particular summary turn out to be more important than the summary itself :thinking:


Also, do we want to try to collect some quantitative metrics for individuals? It might be the number of PRs/commits on GitHub, the number of posts/comments on Discuss, etc.? I am not sure if it makes sense, but I like numbers :smiley:


#6

A key component of success is defining it at the onset of the attempt.

How are we to evaluate each other qualitatively if we don’t have a rubric to do so?

Core contributors should have a well-defined set of responsibilities; anything outside of that is attributable to being a team player and driving the overall vision of the organization (which should have different metrics for evaluation).

IF that is set in place, it should follow naturally who has the qualifications to evaluate against that rubric… then we can discuss how we define that evaluation.


#7

For me personally, number (2) sounds like the best of the options laid out here.

Could negatively impact those not as “visible” or “confident” at self-promotion.

sounds like a deal breaker. I wouldn’t want to penalize people for lesser self-promotion skills or confidence, especially in a company where we are trying to build something together.

But I agree with @petty, that we need some scale to be able to measure something.

For instance, there are direct responsibilities, and there are “making the whole organization better” kinds of tasks. Also, one important thing to encourage is self-improvement, I think. It matters less where a person is right now if their performance and skills are improving.


#8

Unless we want to optimize for specific metrics I’d rather stay away from tracking numbers.
It reminds me of old practices of judging based on the number of lines of code produced. That topic is described in the old but still relevant The Mythical Man-Month, I believe.

Surely there are other ways to satisfy your love for numbers? :smiley:


#9

I also lean toward number 2 on the condition that @petty brings up, and which I think I’ve raised on another thread about trial feedback. In order to give constructive reviews, we need clear criteria and enough knowledge of that person to give feedback.

I’m guilty of skipping a review because I simply didn’t have enough experience working with someone to offer feedback.

Some criteria can be applied to all contributors—e.g. adherence to Status principles—but we also need role-specific criteria that are more related to the expectations for that specific position.

What matters for me is: how easy or enjoyable is it to work with someone, what results are they getting, how much value do they bring to the organization overall?


#10

+1 to supporting (2), with caveats: 1) establish qualitative criteria via some type of rubric or scale system, as outlined by @petty, which I believe can include a measure of “team player-ness” or “organizational thinker/driver” weighted equally along with core contributions; and 2) ensure that peers who evaluate have a reasonable level of familiarity with your work and feel comfortable giving that feedback/rating. Better to have 2-3 people give somewhat more detailed and constructive feedback than 7-8 making a perfunctory effort.

@hester’s idea around circles is also interesting: I agree that it would be helpful to be transparent about ratings at a team level rather than at an individual level. I’m just not sure how we would define teams or circles in this case, but it’s something we can experiment with.


#11

Could negatively impact those not as “visible” or “confident” at self-promotion
We want to recognise/reward performance not style.

I wouldn’t disregard those completely. In a flat and decentralized organization that favors permissionless initiative and ownership, effective communication and the ability to convince others play an important role.

How do we include that in the performance review?


#12

An alternative idea would be to combine #2 and #3. As highlighted by others, it’s pretty hard to judge someone without much context.
So the reviewed person could write up details about what they achieved and how impactful they have been. This would then be sent to reviewers.

How would reviewers be selected? Do we want them to be people from the same team? Or across teams?

@j12b The PeopleOps team is pushing a number of decentralized themes lately. Great to see!
How do they all fit into a more holistic approach? It would be nice to get a glimpse of the bigger picture.
e.g. performance reviews are usually associated with career growth. Does that fit in the context of Status moving to a DAO?


#13

Please keep the feedback/ideas coming! Currently it looks like merging some of the suggestions into a hybrid model would work best.

@julien we’re actually working on a post to outline what the future looks like, and the rough timelines we think it will take. We’ll share it as soon as it’s ready!


#14

My primary concern revolves around aligning individual and organizational incentive structures as much as feasible – to that end, (1) stack ranking in particular has well-documented issues.

Generally speaking, though some kind of ranking can often be extracted from anything that effectively functions as performance management, relying inextricably on zero-sum games to achieve ends that depend on people working together as a coordinated team creates exactly those incentive-alignment risks.


#15

Great discussion.

I have a bunch of ideas and am formulating some materials related to this, and I agree with @zahary and @julien quite strongly in the sense that we need much better decentralised ‘managerial’ tools.

Futarchy is actually probably not the greatest model we can employ, but prediction markets can serve as a good signal.

What we do need to do is cultivate a strong meritocracy organisation-wide, ensure the right people are making the right decisions, hold people accountable to their peers, and cultivate a strong sense of ownership and understanding of what we’re trying to achieve as a collective.

In any case, let’s definitely not rush rolling this out, and let’s ensure we make good decisions on this.


#16

See SITG Experiment #3 - give yourself a raise in SNT (and the related SITG Experiment #2 - DAC0: Liquid pledging and Breaking Whisper) for a proposal on how we can gradually roll out a decentralized, meritocratic, and accountable alternative structure for compensation/“performance management”.