"We have data on the performance of >50k engineers from 100s of companies. ~9.5% of software engineers do virtually nothing: Ghost Engineers.”

Last week, a tweet by Stanford researcher Yegor Denisov-Blanch went viral within Silicon Valley. “We have data on the performance of >50k engineers from 100s of companies,” he tweeted. “~9.5% of software engineers do virtually nothing: Ghost Engineers.”

Denisov-Blanch said that tech companies have given his research team access to their internal code repositories (their private, internal GitHubs, for example), and that for the last two years he and his team have been running an algorithm against individual employees’ code. He said this automated code review shows that nearly 10 percent of employees at the companies analyzed do essentially nothing, and are handsomely compensated for it. A paper about the project offers few details on how the review algorithm works, but it says the algorithm attempts to answer the same questions a human reviewer might ask about any specific segment of code (a hedged sketch of how answers like these might be combined into a score follows the list), such as:

  • “How difficult is the problem that this commit solves?
  • How many hours would it take you to just write the code in this commit assuming you could fully focus on this task?
  • How well structured is this source code relative to the previous commits? Quartile within this list
  • How maintainable is this commit?”
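
The paper does not explain how answers to those questions are produced or how they are combined into a score. Purely as a hedged sketch of the general shape, assuming per-commit answers roughly on the scales implied above (every field name, scale, and weight here is an assumption, not the Stanford model):

```python
from dataclasses import dataclass


@dataclass
class CommitRubric:
    """Hypothetical answers to the review questions above for one commit."""
    difficulty: float        # how hard the solved problem is, assumed 0-10 scale
    estimated_hours: float   # hours a focused engineer would need to write it
    structure_quartile: int  # 1 (best) to 4 (worst), relative to prior commits
    maintainability: float   # assumed 0-10 scale


def commit_score(rubric: CommitRubric) -> float:
    """Toy aggregation of the rubric into a single per-commit score.

    The weights and the multiplicative form are placeholders; the public
    material does not describe how the actual model combines its answers.
    """
    structure_factor = {1: 1.0, 2: 0.9, 3: 0.75, 4: 0.6}[rubric.structure_quartile]
    return (rubric.difficulty + rubric.estimated_hours) * structure_factor * (
        rubric.maintainability / 10
    )
```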

Ghost Engineers, as determined by his algorithm, perform at less than 10 percent of the level of the median software engineer; that is, they are measured as roughly one-tenth as productive as the median worker.
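
Only that cutoff is described publicly; the productivity scores themselves come from the review algorithm, so the sketch below simply assumes those scores exist. The flagging step then amounts to something like:

```python
from statistics import median


def flag_ghost_engineers(scores: dict[str, float]) -> list[str]:
    """Return engineers whose measured output falls below 10 percent of the median.

    `scores` maps each engineer to whatever productivity number the review
    algorithm assigns; that upstream scoring is assumed here, not reproduced.
    """
    baseline = median(scores.values())
    return [name for name, score in scores.items() if score < 0.1 * baseline]
```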

Denisov-Blanch wrote that tens of thousands of software engineers could be laid off and that companies could save billions of dollars by doing so. “It is insane that ~9.5 percent of software engineers do almost nothing while collecting paychecks,” Denisov-Blanch tweeted. “This unfairly burdens teams, wastes company resources, blocks jobs for others, and limits humanity’s progress. It has to stop.”

The Stanford research has not yet been published in any form outside of a few graphs Denisov-Blanch shared on Twitter, and it has not been peer reviewed. But the fact that this sort of analysis is being done at all shows how focused tech companies have become on the idea of “overemployment,” in which people work multiple full-time jobs without their employers’ knowledge, and on getting workers to return to the office. Alongside Denisov-Blanch’s project, there has been an incredible amount of investment in worker surveillance tools. (Whether a ~9.5 percent rate of ineffective workers is high is hard to say; it’s unclear what percentage of workers overall are ineffective, or what other industries’ numbers look like.)

Over the weekend, a post on the r/sysadmin subreddit went viral both there and on the r/overemployed subreddit. In it, a worker said they had just sat through a sales pitch from an unnamed workplace surveillance AI company whose product gives employees “red flags” if their desktop sits idle, meaning “no ‘meaningful’ mouse and keyboard movement,” for “more than 30-60 seconds”; attempts to build a “productivity graph” from computer behavior; and pits workers against each other based on the time it takes to complete specific tasks.
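
The vendor is unnamed and none of its internals are public; as a purely hypothetical sketch of what a “30-60 second idle” red flag would amount to (the threshold value and the function below are assumptions):

```python
import time

# Assumed value somewhere inside the pitched "30-60 second" window.
IDLE_THRESHOLD_SECONDS = 45


def idle_red_flag(last_input_at: float, now: float | None = None) -> bool:
    """Return True if there has been no mouse or keyboard input for longer
    than the threshold. Hypothetical logic, not the vendor's actual code."""
    now = time.time() if now is None else now
    return (now - last_input_at) > IDLE_THRESHOLD_SECONDS
```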

What is clear is that companies have become obsessed with catching employees who are underperforming or functionally doing nothing at all, and, in a job market that has become much tougher for software engineers, they feel emboldened to deploy new surveillance tactics.

“In the past, engineers wielded a lot of power at companies. If you lost your engineers or their trust or demotivated the team—companies were scared shitless by this possibility,” Denisov-Blanch told 404 Media in a phone interview. “Companies looked at having 10-15 percent of engineers being unproductive as the cost of doing business.”

Denisov-Blanch and his colleagues published a paper in September outlining an “algorithmic model” for doing code reviews that essentially assesses software engineers’ productivity. The paper claims that their algorithmic code assessment model “can estimate coding and implementation time with a high degree of accuracy,” in effect suggesting that it can judge worker performance as well as a human code reviewer can, but much more quickly and cheaply.

I asked Denisov-Blanch if he thought his algorithm was scooping up people whose contributions can’t be judged by code commits and code analysis alone. He said he believes the algorithm has controlled for that, and that companies have told him which specific workers should be excluded from the analysis because their job responsibilities extend beyond just pushing code.

“Companies are very interested when we find these people [the ghost engineers] and we run it by them and say ‘it looks like this person is not doing a lot, how does that fit in with their job responsibilities?’” Denisov-Blanch said. “They have to launch a low-key investigation and sometimes they tell us ‘they’re fine,’ and we can exclude them. Other times, they’re very surprised.”

He said that the algorithm his team has developed attempts to analyze code quality in addition to simply counting the number of commits (or code pushes) an engineer has made, because commit count is already a well-known performance metric that can easily be gamed by pushing meaningless updates or by pushing and then reverting updates over and over. “Some people write empty lines of code and do commits that are meaningless,” he said. “You would think this would be caught during the annual review process, but apparently it isn’t. We started this research because there was no good way to use data in a scalable way that’s transparent and objective around your software engineering team.”
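
To make that gaming concrete (this is my own illustration, not the Stanford tooling): a raw commit count credits an empty or reverted push the same as a real change, so even a crude filter over diffs and commit messages diverges from it.

```python
def is_padding_commit(diff: str, message: str) -> bool:
    """Heuristically flag commits that inflate the count without changing behavior.

    Illustrative only: treats revert commits, and diffs whose added lines are all
    blank or comment-only, as padding. (A deletion-only commit is also caught by
    this heuristic, which a real model would need to handle more carefully.)
    """
    if message.lower().startswith("revert"):
        return True
    added_lines = [
        line[1:].strip()
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]
    return all(not line or line.startswith(("#", "//")) for line in added_lines)


def filtered_commit_count(commits: list[tuple[str, str]]) -> int:
    """Count commits that survive the padding filter.

    `commits` is a list of (unified diff, commit message) pairs.
    """
    return sum(1 for diff, message in commits if not is_padding_commit(diff, message))
```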

Much has been written about the rise of “overemployment” during the pandemic, where workers take on multiple full-time remote jobs and manage to juggle them. Some people have realized that they can do a passable enough job at work in just a few hours a day or less.

“I have friends who do this. There’s a lot of anecdotal evidence of people doing this for years and getting away with it. Working two, three, four hours a day and now there’s return-to-office mandates and they have to have their butt in a seat in an office for eight hours a day or so,” he said. “That may be where a lot of the friction with the return-to-office movement comes from, this notion that ‘I can’t work two jobs.’ I have friends, I call them at 11 am on a Wednesday and they’re sleeping, literally. I’m like, ‘Whoa, don’t you work in big tech?’ But nobody checks, and they’ve been doing that for years.”

Denisov-Blanch said that, with massive tech layoffs over the last few years and a more difficult job market, it is no longer the case that software engineers can quit or get laid off and almost immediately land a new job making the same money or more. Meta and X have both carried out huge rounds of layoffs, and Elon Musk famously claimed that X didn’t need those employees to keep the company running. When I asked Denisov-Blanch if his algorithm was being used by any companies in Silicon Valley to help inform layoffs, he said: “I can’t specifically comment on whether we were or were not involved in layoffs [at any company] because we’re under strict privacy agreements.”

The company signup page for the research project, however, tells companies that the “benefits of participation” include being able to “Use the results to support decision-making in your organization. Potentially reduce costs. Gain granular visibility into the output of your engineering processes.”

Denisov-Blanch said that he believes “very tactile workplace surveillance, things like looking at keystrokes—people are going to game them, and it creates a low trust environment and a toxic culture.” He said that with his research he is “trying to not do surveillance,” but that he imagines a future where engineers are judged more like salespeople, who earn commission or get laid off based on performance.

“Software engineering could be more like this, as long as the thing you’re building is not just counting lines or keystrokes,” he said. “With LLMs and AI, you can make it more meritocratic.”

Denisov-Blanch said he could not name any companies that are part of the study but said that since he posted his thread, “it has really resonated with people,” and that many more companies have reached out to him to sign up within the last few days.

  • db2@lemmy.world:

    And what this tells me is that automating garbage commits that don’t actually do anything is what those employers want. 5000 lines a day but it’s all comments? I think so.

    • gravitas_deficiency@sh.itjust.works:

      You’re 100% right. And I have absolutely done this in the past when some dipshit has the bright idea to tie comp adjustments to SLOC metrics. And it’s more than just comments: you just make EVERYTHING a variable, duplicate as much as possible, and avoid terse syntax at all costs. It makes the codebase nigh unmaintainable… but hey, if you’re gonna hit me in the wallet if I don’t do that, I don’t fucking care about the quality of the codebase under those constraints.

    • Nighed@feddit.uk:

      Did you read the article? They address that, and how this detects it (apparently).