The AI Disappointment Nobody Is Talking About
By now, most organizations have deployed AI tools. And most are quietly disappointed.
AI’s outputs feel generic, search results miss the point, and recommendations lack context. Many teams that were promised transformative productivity gains are reverting to the old way of doing things. According to McKinsey, 71% of organizations used generative AI in at least one function in 2024—yet the same report found that over 80% of respondents saw no tangible bottom-line impact on their business as a whole. The gap between adoption and value is real, and it’s growing. In our experience working with CTG clients, the culprit is rarely the AI tool itself, but the knowledge layer underneath it.
Knowledge Built for Humans Will Fail Machines
For decades, organizations have built knowledge for human consumption—written by people who already understood the context, for people who would bring their own context to reading it. A maintenance procedure that says “check the secondary valve before shutdown” works perfectly well for a technician who spent three years on that platform. It tells an AI system almost nothing useful.
Humans are extraordinary at filling in gaps. We infer. We cross-reference. We call a colleague. We remember that the “standard approval process” means something different in Q4, or that the procedure for Facility A was updated after the 2019 incident even though the document header still shows 2017. Machines do none of this automatically. They can only reason from what is explicit, structured, and connected.
This is the translation problem sitting at the heart of most failed AI deployments, and it is far more common than organizations realize. The lack of clear ROI from generative AI investments that McKinsey reports reflects not just underperforming tools, but an underlying structural problem. When knowledge is built for humans and not for systems, no AI tool can close that gap on your behalf.
What We See in the Field
CTG works extensively with oil and gas operators, and the pattern is consistent. An upstream company invests in an AI-powered retrieval tool to help engineers find relevant procedures, inspection records, and technical standards across a complex asset base. The content exists—years of well records, corrosion assessments, regulatory submissions, and engineering modifications. The problem is it was never designed to be found systematically; it was designed to be filed.
Without consistent metadata, documents lack the attributes that allow a system to distinguish a current operating procedure from a superseded one, a site-specific variance from a corporate standard, or a regulatory submission from an internal working draft. In a safety-critical environment where procedure accuracy is non-negotiable, the consequences are not just operational—they are legal and reputational. Engineers, unable to trust the AI outputs, default to calling colleagues who might remember where something is filed. The tool is sidelined and the investment wasted.
This is not a story about a bad AI tool, but rather about knowledge that was never structured for machine interpretation.
What Machine-Readable Knowledge Actually Requires
Making knowledge machine-readable is not primarily a technology initiative. It is an organizational discipline that pays dividends far beyond AI performance. At its core, it requires three things:
- Explicit metadata: Every significant document needs attributes that make it machine-readable—document type, owner, effective date, applicable asset or business unit, regulatory classification, and review status. These are not administrative luxuries, but the scaffolding that allows AI to distinguish signal from noise.
- Consistent taxonomy: If different teams tag the same type of document differently—or don’t tag at all—no AI system can make reliable connections across them. Taxonomy governance is a prerequisite for machine intelligence, not an afterthought.
- Documented relationships: Knowledge is not just documents, but the relationships between them—which policy governs which process, which standard informs which procedure, which data set supports which decision. Encoding those relationships gives AI something to reason across, not merely retrieve from.
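To make the three requirements concrete, here is a minimal sketch of what that structure looks like in code. The field names, document IDs, and relationship types are illustrative assumptions, not a CTG schema or any industry standard; the point is simply that once metadata and relationships are explicit, a system can filter and traverse knowledge rather than guess at it.

```python
from dataclasses import dataclass

# Illustrative metadata attributes; field names are assumptions, not a standard.
@dataclass
class DocumentRecord:
    doc_id: str
    doc_type: str          # e.g. "operating_procedure", "corporate_standard"
    owner: str             # accountable role or team
    effective_date: str    # ISO 8601 date
    scope: str             # applicable asset or business unit
    review_status: str     # e.g. "current", "superseded", "draft"

# Documented relationships expressed as (source, relation, target) triples.
relationships = [
    ("STD-014", "governs", "PROC-102"),
    ("PROC-102", "supersedes", "PROC-087"),
]

docs = {
    "PROC-102": DocumentRecord("PROC-102", "operating_procedure", "Integrity Team",
                               "2021-06-01", "Facility A", "current"),
    "PROC-087": DocumentRecord("PROC-087", "operating_procedure", "Integrity Team",
                               "2017-03-15", "Facility A", "superseded"),
}

def current_procedures(docs):
    """Exclude superseded or draft documents before retrieval."""
    return [d for d in docs.values() if d.review_status == "current"]

def governing_standards(doc_id, relationships):
    """Walk the relationship triples to find standards governing a document."""
    return [src for src, rel, tgt in relationships
            if rel == "governs" and tgt == doc_id]

print([d.doc_id for d in current_procedures(docs)])    # ['PROC-102']
print(governing_standards("PROC-102", relationships))  # ['STD-014']
```

With this structure in place, a retrieval system can answer "which current procedure applies to Facility A, and which standard governs it" deterministically—exactly the distinction the engineers in the example above could not get from unstructured files.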
The same upstream operator that struggled with AI retrieval made a focused investment in these three areas—starting with a single high-consequence knowledge domain: their well intervention procedures. Within that domain, they applied consistent metadata, established ownership and review cycles, and mapped relationships between procedures and the engineering standards that governed them. The AI tool produced reliable results when redeployed against that structured knowledge base. That success became the template and the funding justification for expanding the program.
The Organizations Getting Ahead Are Starting Now
Gartner projects that more than 40% of agentic AI projects will be canceled by 2027 due to a lack of clear governance and value realization. The organizations that avoid that outcome are not necessarily the ones with the most sophisticated AI strategies. They are the ones whose knowledge was structured well enough for those strategies to work.
IDC research identifies knowledge management as the most promising use case for generative AI, which means the organizations investing in knowledge management infrastructure now are building the foundation that every future AI initiative will depend on. The advantage compounds.
How CTG Can Help
CTG’s Enterprise Information Management (EIM) practice helps organizations assess their knowledge infrastructure and identify where structural gaps most significantly hinder AI performance and operational efficiency. We build a practical roadmap toward machine-readable, AI-ready knowledge. Our teams work across industries and are especially experienced in the complex, high-stakes knowledge environments common in oil and gas, energy, and regulated sectors.
The first step is usually a conversation about where you are today. If your AI tools are underperforming and you suspect the problem runs deeper than the technology, we’d welcome that conversation. Contact our team to get started.