Knowledge graphs are useful for linking related content, complementing vector similarity: they connect passages that are relevant to one another even when their embeddings are not close. Content-centric knowledge graphs, where nodes represent pieces of content such as text passages and images rather than extracted entities, are well suited to capturing multimodal information and are easier to construct than entity-centric graphs. Links between content can be inferred from explicit HTML links, from shared keywords extracted with KeyBERT, from named entities extracted with GLiNER, and from the hierarchy of documents and headings; a rough sketch of keyword- and entity-based linking is shown below.

High connectivity creates a scaling problem, however: when many nodes share a common keyword, materializing every pairwise edge produces a number of edges that grows quadratically with the size of the group. To address this, LangChain introduced a data model that stores each node's outgoing and incoming links rather than materialized edges, enabling faster traversals and efficient storage and retrieval of highly connected content-centric knowledge graphs; a sketch of that idea follows the extractor example. These improvements ship in langchain-core 0.2.23 and langchain-community 0.2.10, and upgrading to those versions is enough to start using them.
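As a rough illustration of keyword- and entity-based link inference, the sketch below calls KeyBERT and GLiNER directly to turn a passage into a set of (kind, tag) pairs; two passages that emit the same pair would be linked. The helper name `infer_link_tags`, the choice of GLiNER model, and the entity labels are assumptions made for this example, not LangChain's own extractor API.

```python
from keybert import KeyBERT
from gliner import GLiNER

kw_model = KeyBERT()
# Assumed model checkpoint for illustration; any GLiNER checkpoint works the same way.
ner_model = GLiNER.from_pretrained("urchade/gliner_base")


def infer_link_tags(passage: str) -> set:
    """Return (kind, tag) pairs for a passage; passages sharing a pair get linked."""
    tags = set()

    # Keyword links: KeyBERT returns (keyword, score) pairs ranked by relevance.
    for keyword, _score in kw_model.extract_keywords(
        passage, keyphrase_ngram_range=(1, 2), stop_words="english", top_n=5
    ):
        tags.add(("keyword", keyword.lower()))

    # Entity links: GLiNER extracts entities zero-shot for the labels we request.
    for entity in ner_model.predict_entities(
        passage, ["person", "organization", "product"], threshold=0.5
    ):
        tags.add(("entity", f"{entity['label']}:{entity['text'].lower()}"))

    return tags
```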
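The tag-based data model can be sketched outside LangChain as well. The minimal store below (the class names `Node` and `TagIndexedStore` are invented for illustration and are not LangChain's implementation) records only each node's outgoing and incoming (kind, tag) pairs and resolves neighbors at traversal time with an index lookup, so a group of n nodes sharing a tag costs n stored tags instead of roughly n² materialized edges.

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class Node:
    """A content node annotated with link tags instead of explicit edges."""
    id: str
    text: str
    outgoing: set = field(default_factory=set)  # (kind, tag) pairs this node points to
    incoming: set = field(default_factory=set)  # (kind, tag) pairs this node accepts


class TagIndexedStore:
    """Resolve edges lazily: A -> B exists when an outgoing tag of A matches an
    incoming tag of B. Only tags are stored, never pairwise edges."""

    def __init__(self):
        self.nodes = {}
        self._by_incoming_tag = defaultdict(set)  # (kind, tag) -> ids of accepting nodes

    def add(self, node: Node) -> None:
        self.nodes[node.id] = node
        for tag in node.incoming:
            self._by_incoming_tag[tag].add(node.id)

    def neighbors(self, node_id: str) -> set:
        """Traverse one hop with index lookups instead of scanning edge lists."""
        node = self.nodes[node_id]
        out = set()
        for tag in node.outgoing:
            out |= self._by_incoming_tag.get(tag, set())
        out.discard(node_id)
        return out


# Two passages that share a keyword tag link to each other bidirectionally.
store = TagIndexedStore()
shared = {("keyword", "knowledge graph")}
store.add(Node("a", "Passage about knowledge graphs...", set(shared), set(shared)))
store.add(Node("b", "Another knowledge graph passage...", set(shared), set(shared)))
assert store.neighbors("a") == {"b"}
```

The design choice mirrors the description above: because edges are implied by matching tags rather than stored explicitly, adding one more node to a densely shared keyword group adds a constant amount of data instead of one new edge per existing member.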