LangChain Blog
Explore AI’s future through LangChain's lens, with expert articles and guided tutorials for enthusiasts and experts alike.
TL;DR
Deep research has broken out as one of the most popular agent applications. OpenAI, Anthropic, Perplexity, and Google all have deep research products that produce comprehensive reports using various sources of context. There are also many open source implementations.
We've built an open deep researcher that…
TL;DR
Agents need context to perform tasks. Context engineering is the art and science of filling the context window with just the right information at each step of an agent’s trajectory. In this post, we break down some common strategies — write, select, compress, and isolate — for doing exactly that.
Header image from Dex Horthy on Twitter.
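To make the "compress" strategy concrete, here is a minimal, self-contained sketch: older turns are folded into a running summary so each model call only sees a short summary plus the most recent messages. The `Message` class and the `summarize()` stub are hypothetical stand-ins, not a LangChain API.

```python
# Sketch of the "compress" strategy: keep the context window small by replacing
# older conversation turns with a single summary message before each model call.
from dataclasses import dataclass


@dataclass
class Message:
    role: str      # "user", "assistant", "tool", ...
    content: str


def summarize(messages: list[Message]) -> str:
    """Stand-in for an LLM call that condenses older turns into a few sentences."""
    return " / ".join(m.content[:40] for m in messages)


def compress_history(messages: list[Message], keep_last: int = 4) -> list[Message]:
    """Replace everything except the last `keep_last` messages with one summary message."""
    if len(messages) <= keep_last:
        return messages
    older, recent = messages[:-keep_last], messages[-keep_last:]
    summary = Message(role="assistant", content=f"Summary of earlier turns: {summarize(older)}")
    return [summary, *recent]


# Usage: compress the running transcript right before each model call.
history = [Message("user", f"step {i} observation ...") for i in range(10)]
print([m.content for m in compress_history(history)])
```

In a real agent you would swap `summarize()` for an LLM call and tune `keep_last` to the model's context limit.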
Context engineering is building dynamic systems to provide the right information and tools in the right format such that the LLM can plausibly accomplish the task.
Most of the time when an agent is not performing reliably, the underlying cause is that the right information, tools, or format has not been communicated to the model.
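As a rough illustration of what such a dynamic system can look like, the sketch below assembles a prompt from instructions, the documents most relevant to the current query, and tool descriptions rendered in a consistent format. All helper names here are hypothetical, and the relevance filter is a naive word-overlap stand-in for a real retriever.

```python
# Hypothetical sketch of a dynamic context-assembly step: select only the relevant
# information, describe the available tools, and put both into a consistent format.
def select_relevant(docs: dict[str, str], query: str, k: int = 1) -> list[str]:
    """Naive relevance filter: keep the k docs sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(docs.items(), key=lambda kv: -len(q & set(kv[1].lower().split())))
    return [text for _, text in scored[:k]]


def render_tools(tools: dict[str, str]) -> str:
    """Render tool names and descriptions in a simple, consistent format."""
    return "\n".join(f"- {name}: {desc}" for name, desc in tools.items())


def build_prompt(query: str, docs: dict[str, str], tools: dict[str, str]) -> str:
    context = "\n".join(select_relevant(docs, query))
    return (
        "You are a helpful agent.\n\n"
        f"Relevant context:\n{context}\n\n"
        f"Available tools:\n{render_tools(tools)}\n\n"
        f"User question: {query}"
    )


docs = {"billing": "Invoices are issued monthly ...", "auth": "Tokens expire after 24 hours ..."}
tools = {"lookup_invoice": "Fetch an invoice by id", "reset_token": "Issue a new auth token"}
print(build_prompt("When do tokens expire", docs, tools))
```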
Late last week, two great blog posts were released with seemingly opposite titles: “Don’t Build Multi-Agents” by the Cognition team, and “How we built our multi-agent research system” by the Anthropic team.
Despite their opposing titles, I would argue they actually have a lot in common.
Co-authored by Assaf Elovic and Harrison Chase. You can also find a version of this post published on Assaf's Medium.
Why do some AI products explode in adoption while others struggle to gain traction? After a decade of building AI products and watching hundreds of launches across the…
By Will Fu-Hinthorn
In this blog, we explore a few common multi-agent architectures. We discuss both the motivations and constraints of different architectures. We benchmark their performance on a variant of the Tau-bench dataset. Finally, we discuss improvements we made to our “supervisor” implementation that yielded a nearly…
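As a rough illustration of the supervisor pattern discussed here, the sketch below routes a task to one of two sub-agents. In a real system the routing decision would be made by an LLM rather than a keyword lookup, and all of the function names are hypothetical, not the implementation benchmarked in the post.

```python
# Minimal sketch of a "supervisor" architecture: a routing step picks a sub-agent,
# the sub-agent handles the task, and the supervisor returns the result.
from typing import Callable


def flight_agent(task: str) -> str:
    return f"[flight agent] booked: {task}"


def hotel_agent(task: str) -> str:
    return f"[hotel agent] reserved: {task}"


SUBAGENTS: dict[str, Callable[[str], str]] = {
    "flight": flight_agent,
    "hotel": hotel_agent,
}


def supervisor(task: str) -> str:
    """Route the task to the first sub-agent whose name appears in it."""
    for name, agent in SUBAGENTS.items():
        if name in task.lower():
            return agent(task)
    return f"[supervisor] handled directly: {task}"


print(supervisor("Book a flight from SFO to JFK on Friday"))
print(supervisor("Find a hotel near the venue"))
```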