How 3 Big Tech Companies Hire

The subtle little differences in hiring practices at Google, Amazon and Microsoft that make a big difference

Carlos Arguelles
10 min read · Apr 7, 2024

Have you ever thought about how much the way interviewers interact with each other during the interview and debrief process matters?

In the last 26 years, I’ve worked at Microsoft (11 yrs), Amazon (11 yrs) and Google (4 yrs). At Microsoft (1997–2009) I did 200 interviews or so. At Amazon (2009–2020), I did 813. And at Google (2020–2024), about 50. So, all in all, I’ve seen and lived these processes pretty deeply. What’s interesting is how different they are. And how subtle differences end up making a big difference.

I’m not talking about the individual Coding or Systems Design questions. There’s no discernible difference in those across Big Tech. In 2020 and in 2024, I did multiple full loops at Google, Meta, Uber, Microsoft, Amazon, Apple, Twitter, Atlassian and Databricks. Coding and System Design questions were indistinguishable across the board.

But the way interviewers interact with each other during the interview and during the debrief varies vastly from company to company. As does the way each of these companies reaches its final decision.

The Microsoft Way

Please note that my experiences with the Microsoft interview process are old (mostly from the nineties and early 2000s), so things may have changed. But they’re still interesting to dive into, to contrast with the Amazon way and the Google way.

Each interviewer used to do a physical “handoff” of the interviewee to the next interviewer. This was actually very important and it changed the dynamics of the entire loop.

In the nineties, before the days of cubicles and shared working spaces, we all had our own private offices, so the candidate would be brought to our office and we’d conduct the interview there. After the interview was over, we would walk the candidate to the office of the next interviewer. And there was a handoff where I would speak privately with the next interviewer for a quick minute (often while the candidate waited nervously outside the office). This was a critical process, because in the handoff I could tell the next interviewer whether I thought the candidate was doing well, or poorly, and I could ask the next interviewer to focus on a particular area that was concerning to me, to get another data point.

The handoff process had some advantages. It allowed a Microsoft loop to be highly dynamic and to adapt, in real time and very effectively, to whatever circumstances arose. We could tailor the loop to exactly the strengths and weaknesses we saw emerge.

However, I can now see that the handoff process was deeply flawed. The problem is that it allowed biases to creep in, and little details to get amplified as the candidate moved from interviewer to interviewer.

Let me illustrate it with an example. Let’s say Candidate has a poor experience with Interviewer#1. Maybe it’s just that Candidate is nervous about their first interview and it takes a while for them to warm up and be their best self. Maybe Interviewer#1 is just a jerk. Or they asked a question that Candidate was just not prepared for or knowledgeable about. So during the handoff, Interviewer#1 tells Interviewer#2 that Candidate is not doing well. At this point, Interviewer#2 is biased: they’re already starting the interview expecting that Candidate may not make it. So they’ll be more attuned to little problems they might not otherwise have paid attention to. By the time Candidate gets to Interviewer#4 or Interviewer#5, that tiny little mistake made in the first interview has been amplified so much that it’s a clear No Hire. And perhaps it was just a small, irrelevant thing.

Microsoft also had a final interviewer called the As Appropriate (“AA”). This was the person who made the final call as to whether to hire the candidate or not. AAs were usually long-tenured, high-level, well-calibrated interviewers, although to be honest, in reality the AA system leaned towards a straight HiPPO (Highest Paid Person’s Opinion) system. A candidate only reached an AA if the feedback from the other interviewers was positive. Microsoft could (and often did) short-circuit the loop and walk the candidate out, to save the AA some time if it looked like the candidate wasn’t going to make it. It was an open secret, so when a candidate didn’t make it to the AA, they knew they weren’t getting an offer. AAs tended to make the final decision unilaterally, without much debate with the other interviewers.

The Google Way

Google’s culture does a really nice job of removing bias from a lot of its processes. One of these is the interview process.

Unlike Microsoft back in the nineties, Google interviewers conduct their interview in isolation, without talking to other interviewers before or during their interview. This is nice because it prevents the amplification and bias problem from the Microsoft handoff. As an interviewer, you have no idea whether a candidate did a great job in the previous interview, or a terrible job. You have to make your own call without knowing anything other than what you’re seeing in that moment. Removing bias is a good thing.

Here’s where the Google way has its own twist though. Interviewers do not make the call as to whether a candidate is getting an offer or not. They write the feedback, as factually as possible, and submit it to a Hiring Committee (“HC”).

To fully remove biases, when interviewers write their feedback, they must refer to the candidate as “TC” (literally: “The Candidate”). This is a really interesting idea: the feedback then contains no information about the candidate’s gender, race, religion, sexual orientation, etc.

The HC is composed of several well-calibrated individuals who meet, read the feedback from the interviewers, and debrief. Nobody in the HC has met the candidate. They don’t know the candidate’s gender, skin color, etc. They don’t know if the candidate has a squeaky annoying voice, if they’re sloppy, or if they smell bad. They simply know the facts as written in the feedback.

Using one group of people to gather data, and an entirely different group of people to make the decision based only on that data, is a very effective way to decisively remove bias from the interview process. And the fact that the HC is a group rather than a single individual also guards against any bias that might creep in from one particular person: the decision must be made by group consensus.

Although I appreciated that, I personally did not love the approach, for a couple of reasons.

The first reason I disliked the HC system is that, as an interviewer, there was no real feedback loop to tell me whether I had done a good job interviewing or not. Were my questions appropriate? Were my expectations properly calibrated for the level? Was the feedback I had written good enough for the HC to make a decision? I never really knew. Once I submitted the feedback, I seldom saw what happened to the candidate. Most interviewers didn’t bother: once they entered the feedback, they were “done” and moved on to other tasks.

There was a process called “FoF” (“feedback-on-feedback”) where members of the HC could give you feedback on the feedback you had written, and occasionally ask for additional information, but it was seldom used. In the 50 or so interviews I did, I got FoF once. I think the reason it wasn’t used is friction: the app where you entered feedback was clumsy, and writing FoF was cumbersome and impersonal, so most members of the HC just kind of shrugged off bad feedback and put up with it. So interviewers never got better.

The second reason I disliked the HC system is that it introduces an additional layer of bureaucracy and delay into the process, and it leans heavily on social cohesion. Google is infamous for taking weeks to respond to a candidate. I’m also not convinced that making decisions by consensus is always the right approach… social cohesion doesn’t necessarily drive better outcomes (Amazon, on the other hand, is notoriously averse to social cohesion).

The entire process felt very impersonal, like I was just doing this 45-minute thing and wasn’t really part of something bigger. And a hiring decision should not be impersonal: it decides the future of a human being.

The Amazon Way

I may be biased myself here because Amazon is where I conducted the majority of my interviews (over 800!), but I think Amazon strikes a nice middle ground: it removes bias while offering a solid feedback loop.

Like Google, Amazon interviewers conduct their interview in isolation. You’re not allowed to speak to other interviewers during the loop. You don’t see anybody’s feedback on a candidate until you submit yours. So that removes the bias introduced by the Microsoft handoff.

But unlike Google, Amazon interviewers meet a day or two after the interview to Debrief. So there’s a single group of people who both (1) interview and (2) make the decision.

You’re probably thinking: don’t you lose that nice separation between the people who gather the data and the people who make the decision, which you liked at Google? Yes, you do. But Amazon has a key differentiator to reach the same outcome of removing bias: the Bar Raiser (“BR”).

I’ve written extensively about this in a previous blog, “Memoirs of an Amazon Bar Raiser — Demystifying the Amazon interview.”

In short, a BR is an interviewer who has demonstrated high judgment and calibration and has been specially trained and vetted. BRs are responsible for auditing the loop structure before an interview (do we have an appropriate mix of interviewers? are they experienced enough? do we have the right competencies assigned to the right interviewers?), for interviewing the candidate, and for conducting the debrief. Having a highly trained individual drive the debrief is critical to removing bias and (re)calibrating all interviewers appropriately.

Being a BR at Amazon is a big deal. You don’t just become one. You have to be nominated into the BR training process: when BRs observe that an interviewer shows exceptionally high judgment and calibration, they can nominate them. It’s invitation-only. The BR training process itself is grueling. You start by shadowing a number of BRs (I think I did something like 20 shadows) to observe what they do and how they do it. And then the hard part begins: the reverse-shadows. You conduct the interview as if you were a BR, but with an experienced BR observing you and critiquing every single tiny thing you do (you even get a score at the end). Eventually, when enough BRs have reverse-shadowed you and are pleased with your judgment and calibration, you graduate into being an official BR. The process takes months and dozens of interviews where every single thing you do is scrutinized. Many people fail BR training.

The process is so grueling because BRs are critical in the debrief. As a BR, you have the responsibility of making the final Hire/No-Hire call, and technically you have veto power (although I never actually used it myself). Inevitably, when you have 5 or 6 people giving their opinion on anything, you’ll end up with strong dissenting opinions. It’s very unusual to have a unanimous vote the first time around; usually it’s more like 3 Hire and 2 No-Hire votes (or vice versa). So you need a decision maker. At Microsoft, it’s the AA. At Google, it’s the HC. And at Amazon, it’s the BR. The dynamics are entirely different for all three.

Great BRs lead a focused discussion for less than 30 minutes and drive consensus at the end. This isn’t the same as Google’s social cohesion. As a BR, you do not need social cohesion, since you have full power to make a decision on behalf of Amazon, but you want a spirited, fact-driven debate where viewpoints are weighed and interviewers can at least disagree and commit.

Importantly, as a BR, I regularly gave feedback to my interviewers, particularly when I felt the data they had gathered was insufficient or their calibration was way off (either asking a question that was too easy or too hard for the level we were targeting). I often gave this feedback in real time during the debrief, so the other interviewers could learn too. If it was something potentially embarrassing, or a long discussion, I’d pull the interviewer aside afterwards and chat with them 1:1. Overall, I viewed providing that feedback-on-feedback, which gets lost at Google due to the strong separation between interviewers and hiring committee, as a critical part of my job as a BR.

Top Amazon AWS Bar Raiser, Q1 2020, after 813 interviews

The Amazon process is not perfect. It depends heavily on the judgment of a single individual. Amazon mitigates this by heavily vetting and training the individuals who receive that trust. But BR training is extremely expensive and hard to scale, and BRs can be a scarce resource (as a BR, I was on the hook for 1–2 interviews per week). And ultimately, if you had a bad BR, of course you could end up with a bad outcome too. Having been very active in the Amazon BR community for years, I personally believe most BRs I know are top notch, and I would trust their judgment anytime.

Which way is best?

Like anything else in life, there are tradeoffs. You must put mechanisms in place to remove bias, and it’s fascinating how differently companies think about those. You must also think about what mechanism you’ll use to make the final decision, whether that’s a single individual or a group. Either approach has pros and cons. And lastly, how and when interviewers communicate with each other matters deeply.

While my personal preference is Amazon’s system, my time at Google does have me looking at some of the Amazon processes through a different lens now.

In principle, I like the Google mechanism of separating the data-gatherers from the decision-makers. The reason I don’t think it works well in practice at Google is that, unlike Amazon, Google does not have a strong writing culture. A lot of the writing I saw at Google was littered with weasel words, lacked concrete data, and so on.

Also in principle, I like that Google uses committees to reduce single-human bias. The reason I don’t think it works well in practice at Google is that, unlike Amazon, Google does not have a very direct, blunt communication style, so a lot of the communication is indirect and way too polite, erring on the side of social cohesion at all costs.


Carlos Arguelles

Hi! I'm a Senior Principal Engineer (L8) at Amazon. In the last 26 years, I've worked at Google and Microsoft as well.