You're probably already aware that we think of each new joiner's first few months on the job as a trial period (more about that here). We haven't always been timely in completing mid- and end-trial reviews, so we want to propose a change to make these more streamlined and self-driven.
This draft timeline describes the process for a 12-week trial period for full-time contributors. The timeframes can be adapted for shorter-term trials.
The main changes being proposed are:
- Remove team lead involvement in the process, as it only serves to reinforce hierarchy and removes agency and accountability from the individual in owning their own development. Instead, People Ops sends out the mid-trial feedback requests, and newbies themselves send out the end-trial feedback requests.
- We want to move towards a more objective assessment of trials, so we'll be experimenting with using the average of the trial feedback scores to determine the trial outcome (alongside a qualitative evaluation during the iteration period).
An example scoring band being proposed: 4.0 or above - pass; 3.5-3.9 - extend; below 3.5 - fail (a sketch of this mapping follows this list).
- Ask better questions in the feedback form, and collect feedback givers' names so that People Ops can coach people on giving feedback, or normalise scores where there are inconsistencies.
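To make the bands concrete, here's a minimal sketch of how the average-score mapping could work. It assumes a 1-5 rating scale and rounds the average to one decimal place before comparing against the bands; both are our assumptions, not part of the proposal, and the function name is just for illustration:

```python
# A minimal sketch of the proposed band mapping. The thresholds come from the
# proposal above; the 1-5 scale and the rounding to one decimal place are
# assumptions (the bands as written don't say how an average like 3.95
# should be handled, so rounding makes them exhaustive).

def trial_outcome(scores: list[float]) -> str:
    """Map the average of all trial feedback scores to a proposed outcome."""
    avg = round(sum(scores) / len(scores), 1)  # assumption: round to one decimal
    if avg >= 4.0:
        return "pass"
    if avg >= 3.5:
        return "extend"
    return "fail"

# Example: four feedback givers
print(trial_outcome([4.0, 4.0, 3.5, 3.5]))  # average 3.75 rounds to 3.8 -> "extend"
```

Whatever the exact boundary handling ends up being, writing it down like this forces us to be explicit about edge cases before a real trial lands on one.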
Legacy trials (i.e. those that started before today but haven't yet had their trial process completed) will be moved to this new process if they're still early in the trial, or closed out in collaboration with the people lead if mostly already complete.
We're busy updating our trial feedback forms to make them more informative, and to include additional contextual questions that will help People Ops better parse the feedback received and offer coaching to feedback givers where needed.
We recognise that there are still some challenges in how we organise trials, e.g.:
- There is still an element of centralisation, in that someone (People Ops in collaboration with Carl/Nabil/Jarrad) still makes a unilateral decision on the trial outcome.
- Trial evaluations are heavily weighted towards the feedback provided, but we don't have a mechanism to validate that feedback, i.e. what if the person providing it is wrong?
- We should think about whether we'll need to normalise ratings to account for variations in how generously each feedback giver rates their coworkers (one possible approach is sketched after this list).
- Our trial feedback form asks about certain attributes (motivation, productivity, etc.), and we'll need to think about how these factors are weighted when evaluating trials.
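On the normalisation and weighting questions above, here's a sketch of one possible approach, not a settled design: convert each feedback giver's ratings to z-scores against their own rating history (so harsh and generous raters become comparable), and combine per-attribute ratings with explicit weights. The attribute names and weights are illustrative placeholders:

```python
from statistics import mean, stdev

# Hypothetical attribute weights; the real weighting is still an open question.
ATTRIBUTE_WEIGHTS = {"motivation": 0.5, "productivity": 0.5}

def normalise(rating: float, rater_history: list[float]) -> float:
    """Z-score a rating against all ratings this person has previously given,
    so a 3 from a harsh rater and a 4 from a generous one carry similar weight."""
    spread = stdev(rater_history)
    if spread == 0:  # rater always gives the same score; nothing to correct for
        return 0.0
    return (rating - mean(rater_history)) / spread

def weighted_score(ratings: dict[str, float]) -> float:
    """Combine per-attribute ratings into a single score using the weights above."""
    return sum(ATTRIBUTE_WEIGHTS[attr] * r for attr, r in ratings.items())

# Example: a "3" from someone who usually hands out 4s and 5s
print(normalise(3.0, [4.0, 5.0, 4.0, 5.0]))  # ~ -2.6, well below their own norm
print(weighted_score({"motivation": 4.0, "productivity": 3.0}))  # 3.5
```

One caveat with per-rater z-scores: they need a reasonable rating history per person to be meaningful, which is another argument for collecting feedback givers' names as proposed above.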
If you have any thoughts/ideas/solutions we’d love to hear them!