Glean's knowledge graph covers text, but a large share of enterprise content is visual. Learn how MuseDAM CCS complements Glean to build a complete content AI layer.

Key Takeaways: Enterprise knowledge graphs have solved text-based semantic retrieval, but roughly 80% of enterprise data is unstructured, and much of it consists of images, videos, and design files that remain a blind spot for AI. Visual assets need their own semantic layer to become AI-understandable, callable, and orchestrable. A Content Context System (CCS) fills this gap, giving visual assets the same level of AI intelligibility as text.
The enterprise AI search category is booming. Platforms like Glean use knowledge graphs to connect text information scattered across systems, letting employees find answers in natural language. The direction is unquestionable: in information-overloaded enterprise environments, finding the right document quickly is productivity.

But at MuseDAM, where we serve over 200 enterprises, we've found that knowledge graphs only address the "text side" of enterprise content. When companies try to make AI understand all of their content assets, a massive blind spot emerges.
Enterprise knowledge graphs are built on text semantics: documents, emails, chat logs, wiki pages. But the reality is stark. Gartner estimates that 80% of enterprise data is unstructured, and more than half of it consists of images, video, audio, and other rich media.

A consumer goods company might have 100,000 product images and design source files, yet these assets are invisible to any text-based knowledge graph. Search "Q3 marketing plan" and you'll find the PPT, but not the product photos already shot for it. Search "brand visual guidelines" and you'll find the text document, but not the actual design files and their version history.

This isn't a flaw in any particular tool. Text semantics and visual semantics are two fundamentally different technical problems. Knowledge graphs solved the former; the latter needs a dedicated semantic layer.
A semantic layer isn't about tagging images with a few keywords. It means AI understands the full context of a product image: which product line it belongs to, which photoshoot it came from, which market version it is, which channels have used it, which design files it's linked to, and whether it complies with the latest brand guidelines.

Traditional DAM handles storage and classification. A visual semantic layer makes assets "AI-understandable": assets AI can search, reason about, and generate from. When an AI Agent needs to create localized materials for a specific market, the semantic layer tells it where to find the source files, which guidelines to reference, and which brand constraints to follow.

MuseDAM defines this capability as the Content Context System (CCS): a unified semantic foundation for all enterprise visual assets that makes them first-class citizens in AI workflows.
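To make "full context" concrete, here is a minimal sketch of the kind of record a visual semantic layer could attach to one asset. The field names and structure are illustrative assumptions for this article, not MuseDAM's actual data model.

```typescript
// Illustrative only: a hypothetical shape for the context a visual
// semantic layer might hold for a single asset. Field names are
// assumptions, not MuseDAM's actual schema.
interface AssetContext {
  assetId: string;                 // stable identifier for the image or design file
  productLine: string;             // which product line it belongs to
  shoot: string;                   // which photoshoot produced it
  marketVersion: string;           // e.g. "JP", "EU", "US"
  channelsUsed: string[];          // channels that have already published it
  linkedDesignFiles: string[];     // source PSD/Figma files it derives from
  brandGuidelineVersion: string;   // guideline version it was checked against
  compliant: boolean;              // does it pass the latest brand guidelines?
  approvalStatus: "draft" | "in_review" | "approved";
}
```

A record along these lines is what lets an Agent answer "can I reuse this image for a specific market campaign?" without a person digging through folders.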
CCS builds AI intelligibility for visual assets across three dimensions.

Discoverability. AI can use semantic search to find "the hero image that performed best during last year's Singles' Day," not just "JPGs with 1111 in the filename." MuseDAM's AI-powered auto-tagging gives every image and design file a machine-readable semantic description.

Comprehensibility. AI knows an image's brand ownership, channel fit, and approval status, and can directly determine whether it's suitable for a specific campaign. Relationships between assets are made explicit: which brand, which campaign, which market, which usage stage.

Orchestrability. Visual assets become resources that AI workflows can automatically recommend, combine, and generate variants from, instead of attachments requiring manual search and transfer. Enterprise AI Agents can query, filter, and retrieve content assets through standard APIs.

As a leading Asia-Pacific vendor in Forrester's global DAM report, MuseDAM has accumulated over 170 invention patents, holds SOC 2 Type II and ISO 27001 certifications, and serves more than 200 mid-to-large enterprises.
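To ground the orchestrability point, the snippet below sketches how an AI Agent might query such a semantic layer over HTTP. The endpoint, parameters, and response fields are hypothetical assumptions used for illustration, not MuseDAM's published API.

```typescript
// Hypothetical sketch of an AI Agent querying a visual semantic layer.
// The endpoint, query parameters, and response fields are assumptions
// for illustration; the real interface would come from the vendor's API docs.
async function findCampaignAssets(campaign: string, market: string) {
  const params = new URLSearchParams({
    query: "best performing hero image",   // semantic search, not filename matching
    campaign,                              // e.g. "Singles' Day"
    market,                                // e.g. "JP"
    approvalStatus: "approved",            // only assets cleared for use
  });
  const response = await fetch(
    `https://api.example-dam.com/v1/assets/search?${params}`,
    { headers: { Authorization: `Bearer ${process.env.DAM_API_TOKEN}` } }
  );
  if (!response.ok) throw new Error(`Asset search failed: ${response.status}`);
  // Each hit carries the context the agent needs to decide on reuse.
  return (await response.json()) as Array<{
    assetId: string;
    downloadUrl: string;
    brand: string;
    linkedDesignFiles: string[];
    guidelineCompliant: boolean;
  }>;
}
```

In a setup like this, the Agent filters on approval status, market, and brand context rather than guessing from filenames, which is the practical difference between a semantic layer and plain storage.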
Zooming out to the level of enterprise content architecture, a complete content AI infrastructure requires two layers: a text semantic layer and a visual semantic layer.

Traditional DAM is a storage and classification system. A visual semantic layer builds AI-understandable context on top, so AI can understand an asset's ownership, usage, relationships, and status, and AI Agents can directly invoke and reason about visual assets.

Knowledge graphs handle semantic retrieval over text knowledge. Visual semantic layers handle semantic understanding of rich media assets. They're complementary: the former processes the text world, the latter covers visual and multimedia content.
MuseDAM's CCS doesn't replace existing systems. It integrates with your CMS, PIM, cloud storage, and AI platforms via API, acting as a supplementary semantic layer rather than a replacement.
It covers images, videos, design source files (PSD/AI/Sketch/Figma), 3D models, PDFs, and other mainstream formats.
When content assets exceed tens of thousands of files across multiple brands and channels, the cost of finding and reusing visual assets grows exponentially. For enterprises adopting AI Agents, a visual semantic layer is a prerequisite for those Agents to actually work.

Does your enterprise AI understand only text, or images too? MuseDAM's Content Context System makes visual assets truly comprehensible and callable by AI. Book a MuseDAM Demo