Nearly All Developers Use AI But Trust Remains Elusive

While nine in 10 developers now incorporate AI into their daily workflows (up from 76 percent last year), nearly half don't fully trust it, according to new research from Google Cloud.

Widely regarded as a benchmark for evaluating software delivery performance, this year's DevOps Research and Assessment (DORA) report highlights a major shift in how software professionals work: 65 percent of developers now rely heavily on AI for tasks such as code generation, code review, and software testing, spending an average of two hours a day with AI tools.

The trust paradox in AI adoption

Despite soaring adoption rates, the report also reveals a paradox. Even though the vast majority of developers use AI tools, fewer than a quarter say they trust AI "a lot". By contrast, 30 percent trust it little or not at all, and another report, the 2025 Stack Overflow Developer Survey, found that almost half of developers actively distrust the accuracy of AI tools. At the same time, the clear majority of respondents in both surveys report increased productivity and better code quality.

While adoption is higher than ever, confidence is lagging behind. The skepticism stems from concerns about AI hallucinations, potential security vulnerabilities, and a lack of transparency. After all, many LLMs used in software development are black boxes, offering little to no visibility into how they reach their conclusions. As a result, organizations face growing pressure to introduce AI with robust guardrails in place, rather than deploying models en masse as autonomous agents.

Clearly, the adage "move fast and break things" doesn't work anymore. Organizations that have incorporated AI successfully use it strategically for specific tasks while investing more in platform engineering and internal tooling to mitigate security and trust concerns. They're also establishing stricter review protocols to validate AI-generated output and maintain accountability, rather than treating AI as a wholesale replacement for human labor.

The trust paradox also suggests that culture and training matter just as much as technology. Although leaders should encourage experimentation with AI, they must also provide guidance for validating its outputs. This has given rise to the “human-in-the-loop” approach, where AI generates suggestions, but developers are ultimately responsible for final decisions.
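The pattern is simple to sketch. The snippet below is a minimal, illustrative human-in-the-loop gate in Python, not something taken from the DORA report or any particular tool: the suggest_fix helper is a hypothetical stand-in for whatever AI suggestion backend a team uses, and the point is only that no proposed change is applied without an explicit developer decision.

    # Minimal human-in-the-loop gate: the AI proposes, the developer decides.
    # suggest_fix() is a hypothetical stand-in for an AI suggestion backend.
    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        file: str
        original: str
        proposed: str
        rationale: str

    def suggest_fix(file: str, original: str) -> Suggestion:
        """Hypothetical AI backend: returns a proposed change for review."""
        return Suggestion(
            file=file,
            original=original,
            proposed=original.replace("== None", "is None"),
            rationale="PEP 8 recommends 'is None' for comparisons with None.",
        )

    def review(s: Suggestion) -> bool:
        """Show the proposed change and require an explicit human decision."""
        print(f"--- {s.file}")
        print(f"- {s.original}")
        print(f"+ {s.proposed}")
        print(f"rationale: {s.rationale}")
        return input("Apply this change? [y/N] ").strip().lower() == "y"

    if __name__ == "__main__":
        suggestion = suggest_fix("utils.py", "if value == None:")
        if review(suggestion):
            print("Change accepted; the developer remains accountable for it.")
        else:
            print("Change rejected; nothing was applied.")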

The latest DORA report also signals that future assessments will explicitly evaluate how organizations adopt AI and integrate it into DevOps processes. As AI tooling matures, we can expect a convergence between DevOps and machine learning operations (MLOps), requiring unified platforms and cross-disciplinary teams to narrow the trust gap and ensure accountability. 
