Trial periods for newbies - a proposed change in how these are structured


#1

You're probably already aware that we think of each new joiner's first few months on the job as a trial period (more about that here). We haven't always been timely in completing mid- and end-trial reviews, so we wanted to propose a change to make these a bit more streamlined and self-driven.

Proposal

Remove team leads from the process, as their involvement only serves to reinforce hierarchy and takes agency/accountability for their own development away from the individual.

Process

This draft timeline describes the process for a 12 week trial period for full-time contributors. The timeframes can be adapted for shorter-term trials.

The main changes being proposed are:

  • Remove team lead involvement in the process. People Ops sends out the mid-trial feedback requests, newbies themselves send out the end-trial feedback.
  • We want to move to a more objective assessment of trials, so we'll be experimenting with using the average trial feedback score to determine the trial outcome (alongside a qualitative evaluation during the iteration period).
    An example scoring band being proposed is: >= 4.0: pass, 3.5 - 3.9: extend, < 3.5: fail
  • Ask better questions in the feedback form and collect feedback giver names to help People Ops give coaching on feedback, or normalise where there are inconsistencies in scoring.
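To make the banding concrete, here's a minimal sketch of how the average-score mapping could work. This is purely illustrative: the function name and the exact thresholds (using >= 4.0 as pass, per the example band above) are assumptions, not a decided implementation.

```python
def trial_outcome(scores):
    """Map the average of trial feedback scores to a proposed outcome.

    Thresholds follow the example band: >= 4.0 pass, 3.5-3.9 extend,
    below 3.5 fail. Illustrative sketch only.
    """
    avg = sum(scores) / len(scores)
    if avg >= 4.0:
        return "pass"
    if avg >= 3.5:
        return "extend"
    return "fail"

print(trial_outcome([4, 5, 4]))  # average ~4.33 -> "pass"
print(trial_outcome([4, 3, 4]))  # average ~3.67 -> "extend"
```

In practice the qualitative evaluation would sit alongside this, so the band would be an input to the decision rather than the decision itself.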

Legacy trials (i.e. those that started before today but whose trial process hasn't yet been completed) will be moved to this new process if they are still early in the trial, or closed out in collaboration with the people lead if they are mostly complete.

We’re busy updating our trial feedback forms to make these more informative, and to include additional contextual questions that will help People Ops better parse the feedback received, and offer coaching to those giving feedback where needed.

We recognise that there are still some challenges in how we organise trials, e.g.:

  • There is still some element of centralisation, in that someone (People Ops in collaboration with Carl/Nabil/Jarrad) still makes a unilateral decision on the trial outcome.
  • Trial evaluations are heavily weighted toward the feedback provided, but we don't have a mechanism to validate that feedback, i.e. what if the person providing it is wrong?
  • We should think about whether we’ll need to normalise the ratings given to account for variations in how generously each individual feedback giver rates their coworkers.
  • Our trial feedback form asks about certain attributes (motivation, productivity, etc.) and we will need to think about how these factors are weighted when evaluating trials.

If you have any thoughts/ideas/solutions we’d love to hear them!


#2

My 2 cents: I find it hard to score people when I’m not quite sure what I’m scoring them against. Meaning, level of experience & expectations of their role should inform my responses.

The proposed changes sound good to me. I think it would be helpful to everyone involved if there were standard forms for both mid- and end-of-trial feedback. That way there’s some form of control.

I would also love the opportunity to give feedback non-anonymously (in the eyes of PeopleOps), as I think the amount of interaction I’ve had with a given person should probably be factored in as well. And I’d love the opportunity to clarify any confusion!


#3

Cheers @rachel for the feedback!

I find it hard to score people when I’m not quite sure what I’m scoring them against. Meaning, level of experience & expectations of their role should inform my responses.

This is a really good point! We’ve tried to keep role profiles really open (the thinking being that giving folks a named role might imply some restriction to the variety of places across the org they can contribute). That said, each person does bring with them some functional expertise in a specific area.

Could it be helpful when sending feedback requests to have a step where the person being evaluated shares a bit more about their own objectives within Status, to help feedback givers orientate their feedback?


#4

Would it help to add a confidence rating? For mid-trial feedback it can be difficult to find people who have worked with an individual extensively and can judge their domain. In some cases I feel like I am in the right position to score given how much I collaborated and my own knowledge of a domain. In other cases I feel less confident, for example because the work is not my expertise.

And +1 to @rachel's comment on it being helpful to know the expected level of experience.


#5

Amazing coincidence @hester - in the new feedback forms, we’re asking a question that says:

How confident are you in this feedback? (In other words, do you feel in a strong position to give this feedback? ‘Not confident’ could mean that you don’t feel you have the insight into the contributor’s work, or haven’t observed much behaviour to be able to make an assessment. You would not feel comfortable arguing for your feedback, nor insisting that it be taken onboard by the contributor. ‘Very confident’ means you feel you have a solid amount of observation and insight into the contributor’s work, and would feel comfortable justifying your feedback, along with recommending that the contributor takes it onboard)

I think this will go a long way in contextualising feedback, which will be super helpful :slight_smile:


#6

Could it be helpful when sending feedback requests to have a step where the person being evaluated shares a bit more about their own objectives within Status, to help feedback givers orientate their feedback?

Hmm, I do think that might be helpful context @ceri. I remember something from the old form asking specifically if the contributor was meeting expectations for their “level” (or some similar language)—maybe that language is gone?


#7

Perhaps a first step to move away from this would be to ask peers a (non-mandatory) question in the feedback form? Something along the lines of 'Do you think this trial should continue?' and 'Would you be happy for this person to stay as a CC indefinitely?'


#8

I checked the feedback form and didn’t see anything specific to level :+1: Could be some old wording?

In any case, I added a reminder for the person on trial to share a role description with People Ops so that it can go out with the feedback requests to the feedback givers. Let's see how it goes :slight_smile:


#9

Added to the feedback forms!