The number is striking enough that it bears repeating: three-quarters of all new code written at Google is now generated by artificial intelligence. Human engineers review it, approve it, and take responsibility for it — but they did not write it in any traditional sense. The machines did.
Google CEO Sundar Pichai disclosed the figure on Wednesday in a blog post announcing what the company is calling a shift to 'truly agentic workflows.' The statistic represents a dramatic acceleration from where Google was just eighteen months ago. In October 2024, approximately 25% of the company's code was AI-generated. By fall 2025, that figure had risen to 50%. The jump to 75% in less than a year suggests the adoption curve is steepening, not flattening.
[Chart: AI-Generated Code Share at Major Tech Companies (2026), in percent]
What 'AI-Generated Code' Actually Means
The phrase 'AI-generated code' covers a spectrum of human-AI collaboration that is worth unpacking. At one end, an engineer types a comment describing what a function should do, and an AI tool like Google's Gemini Code Assist or Anthropic's Claude Code generates the implementation. The engineer reviews the output, perhaps makes minor modifications, and commits it. At the other end, an AI agent autonomously plans, writes, tests, and iterates on code for a complex feature, with a human reviewing the final result.
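The lighter end of that spectrum is easy to picture. In the hypothetical sketch below (the function and its body are illustrative, not drawn from any real codebase), the engineer supplies only the comment and the signature; the assistant proposes the body, which the engineer reviews before committing:

```python
# The engineer writes only the comment and signature below; an AI
# assistant fills in the implementation, which is then human-reviewed.

def dedupe_preserving_order(items):
    """Return a new list with duplicates removed, keeping first occurrences."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:   # first time this value has appeared
            seen.add(item)
            result.append(item)
    return result
```

Even at this light end, the engineer who commits the code is accountable for it, which is why the review step matters more than who typed the body.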
Pichai's blog post described the latter scenario in concrete terms. 'Recently, a particularly complex code migration done by agents and engineers working together was completed six times faster than was possible a year ago with engineers alone,' he wrote. Code migrations — moving large codebases from one framework, language, or architecture to another — are among the most tedious and error-prone tasks in software engineering. They are exactly the kind of work that AI agents, with their ability to process vast amounts of code systematically, are well-suited to handle.
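The mechanical character of migration work is what makes it tractable for agents. As a toy sketch (the API names `legacy_fetch` and `fetch_v2` are invented; real large-scale migrations use AST-level tooling rather than regexes), each rewrite is a deterministic transformation applied uniformly across thousands of files:

```python
import re

# Hypothetical deprecated API being migrated to its replacement.
# \b ensures we match the call name, not a longer identifier.
DEPRECATED = re.compile(r"\blegacy_fetch\(")

def migrate_source(source: str) -> str:
    """Rewrite every call to the deprecated API to the new one."""
    return DEPRECATED.sub("fetch_v2(", source)

before = "data = legacy_fetch(url)\ncount = len(legacy_fetch(other))"
after = migrate_source(before)
```

What makes this agent-friendly is that success is checkable: the old symbol must be gone, the tests must still pass, and the diff is easy for a human to audit.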
"We're shifting to truly agentic workflows where engineers are increasingly directing AI agents rather than writing every line themselves."
— Sundar Pichai, CEO, Google, April 23, 2026

The Industry-Wide Pattern
Google is not alone. The 75% figure places it at the leading edge of an industry-wide transformation that is moving faster than most observers anticipated even a year ago. Meta has set a goal that 65% of engineers in its creation organization should write more than 75% of their committed code using AI tools in the first half of 2026. Snap, which recently announced significant layoffs, disclosed that at least 65% of new code under its restructured operating model is AI-generated. Microsoft CEO Satya Nadella predicted last year that 95% of code would be AI-generated within five years — a target that now looks conservative given current trajectories.
The adoption is not uniform across companies or roles. Engineers working on novel algorithmic research, systems programming close to hardware, or security-critical infrastructure report lower AI code generation rates than those working on application-layer features, internal tooling, or data pipelines. The tasks most amenable to AI generation tend to be the ones with clear specifications, established patterns, and well-defined test criteria — which, it turns out, describes a substantial fraction of the code written at large technology companies.
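A concrete way to see why clear specifications matter: when the acceptance criteria can be written as machine-checkable tests, any generated implementation either passes them or is rejected. The function and tests below are illustrative, not from any real codebase:

```python
def parse_duration(text: str) -> int:
    """Parse a string like '2h30m' or '45m' into total minutes."""
    hours = minutes = 0
    num = ""
    for ch in text:
        if ch.isdigit():
            num += ch
        elif ch == "h":
            hours, num = int(num), ""
        elif ch == "m":
            minutes, num = int(num), ""
        else:
            raise ValueError(f"unexpected character {ch!r}")
    return hours * 60 + minutes

# Acceptance tests act as the reviewable contract for generated code.
assert parse_duration("2h30m") == 150
assert parse_duration("45m") == 45
assert parse_duration("3h") == 180
```

Novel research code and security-critical systems resist this pattern precisely because their correctness criteria are harder to pin down in advance.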
Internal Tensions at Google
The transition has not been frictionless. Business Insider reported this week that some Google DeepMind employees have been using Anthropic's Claude Code rather than Google's own Gemini tools, creating internal tensions. DeepMind researchers, who work on some of the most technically demanding AI problems in the world, apparently found Claude Code's capabilities better suited to their specific needs — a somewhat awkward situation for a company that is simultaneously one of Anthropic's largest investors and a direct competitor.
Google has also tied AI adoption to performance reviews, setting specific AI usage goals for engineers that will factor into their annual evaluations. This approach has generated mixed reactions internally. Some engineers welcome the productivity gains and see AI tools as genuinely transformative. Others worry that the metrics incentivize superficial adoption — using AI to generate code that then requires significant human rework — rather than thoughtful integration of AI into engineering workflows.
What Happens to Software Engineers
The question that hangs over all of this data is the one that nobody in the industry wants to answer directly: what happens to software engineering jobs? The optimistic framing, which Pichai and other tech executives consistently offer, is that AI makes engineers more productive, allowing them to tackle more ambitious projects and focus on higher-level design and architecture decisions. The pessimistic framing is that if AI can do 75% of the coding work, companies need fewer engineers to produce the same output.
The evidence so far is mixed. Google has not announced engineering layoffs attributable to AI productivity gains. But the broader tech industry has been contracting its engineering headcount since 2022, and the pace of hiring has not returned to pre-2022 levels despite strong revenue growth. Whether this reflects cyclical adjustment, AI-driven efficiency, or both is genuinely difficult to disentangle.
What is clearer is that the skills premium for software engineers is shifting. The ability to write code fluently — to translate a specification into working implementation — is becoming less differentiating. The ability to architect systems, evaluate AI-generated code critically, understand security and performance implications, and communicate technical requirements clearly is becoming more valuable. The engineer who can direct AI agents effectively is becoming more productive than the engineer who codes everything by hand. Whether that means more engineers or fewer engineers overall is a question the industry has not yet answered.
The Reliability Question
One dimension of the AI code generation story that receives less attention than the productivity numbers is reliability. AI-generated code introduces a new category of software defects: plausible-looking implementations that are subtly wrong in ways that human reviewers, under time pressure, may not catch. Security researchers have documented cases in which AI code generation tools produced code with exploitable vulnerabilities — not because the AI was malicious, but because it optimized for producing code that looked correct rather than code that was provably correct.
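The canonical illustration is SQL injection. In the sketch below (the table and data are invented for the example), the unsafe version reads naturally and would pass a casual review, yet interpolating user input into the query string lets a crafted value match every row; the parameterized version hands the input to the driver to escape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Looks correct, but string interpolation enables injection.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as a literal value.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
# The unsafe version returns every row; the safe version returns none.
```

A generated function of the first shape is exactly the "plausible-looking but subtly wrong" defect class: it works on every benign input, so only a reviewer specifically looking for the pattern will catch it.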
Google's approach of having human engineers review all AI-generated code before it is committed is a meaningful safeguard, but it depends on those reviews being genuinely rigorous rather than rubber-stamp approvals. As the volume of AI-generated code increases and the pressure to ship features accelerates, maintaining the quality of human review becomes both more important and more difficult. The industry is still in the early stages of developing the tooling, processes, and cultural norms needed to ensure that AI-generated code is as reliable as human-written code — and the 75% figure suggests that the stakes of getting this right are already very high.