
May 16, 2025

Key announcements from LangChain Interrupt 2025: Ushering in the era of AI Agents

At LangChain Interrupt 2025, the launch of LangGraph Platform marked a turning point in AI Agent infrastructure. As a sponsor and active participant, Qubika witnessed firsthand the speed of developments in AI and LangChain. In this post, we share key takeaways from the event – and how we’re integrating the latest LangChain innovations into our Agentic Platform to help organizations deploy intelligent agents at scale.

In San Francisco, the LangChain Interrupt conference marked a pivotal moment in the evolution of AI agent infrastructure. LangChain announced the general availability of the LangGraph Platform, opening a new era in how we build, manage, and deploy intelligent agents at scale.

As part of our Data + AI Roadshow, Qubika was proud to sponsor the Interrupt conference – and our senior leadership was there to participate and discuss the latest developments in Agentic AI. Here are 5 key highlights from the conference, and what they mean for enterprises building AI systems.

1. The LangGraph Platform is now generally available

LangChain officially launched the LangGraph Platform into general availability. Designed for long-running, stateful workflows, it brings 1-click agent deployment, memory APIs, human-in-the-loop controls, and seamless integration with LangGraph Studio. A key aspect of the LangGraph Platform is that it makes it significantly easier to build scalable AI solutions – with enterprises now able to share and configure AI agents across their organizations.
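To make the stateful part concrete, here is a minimal sketch of the kind of workflow the platform deploys and manages, built with the open-source langgraph package. The state schema and the node are hypothetical stand-ins for a real LLM-backed agent; the checkpointer and thread_id are what give the workflow persistent, per-conversation memory.

```python
from typing import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    question: str
    answer: str


def answer_node(state: State) -> dict:
    # Stand-in for a real LLM call; returns only the fields it wants to update
    return {"answer": f"Echo: {state['question']}"}


builder = StateGraph(State)
builder.add_node("answer", answer_node)
builder.add_edge(START, "answer")
builder.add_edge("answer", END)

# The checkpointer makes the graph stateful: each thread_id keeps its own history,
# which is also what memory APIs and human-in-the-loop interrupts build on.
graph = builder.compile(checkpointer=MemorySaver())

result = graph.invoke(
    {"question": "What did Interrupt 2025 announce?"},
    config={"configurable": {"thread_id": "demo-thread"}},
)
print(result["answer"])
```

Because each thread_id carries its own checkpointed state, the same compiled graph can serve long-running, resumable conversations – the pattern the LangGraph Platform now hosts and scales for you.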

2. The rise of the “Agent Engineer”

One of the central themes of the event was the emergence of a new hybrid role: the Agent Engineer. Interestingly, this aligns closely with the multidisciplinary work Qubika teams have been doing for well over a year – and how we bring together data engineers, analytics engineers, data scientists, and machine learning engineers. These teams design, build, and scale AI agents in sectors including finance and healthcare. We expect this role to become increasingly common in what we’re calling “AI-native” organizations.

3. Observability specifically for Agents (not SREs)

LangSmith now supports tool call and trajectory observability – designed specifically for agent workflows, rather than more traditional site reliability engineering (SRE) use cases. The ability to trace an LLM agent’s specific actions and decisions in greater detail brings a range of benefits, such as auditability and compliance in highly regulated industries like finance.
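As a small illustration of what agent-level tracing looks like in code, the sketch below uses LangSmith’s traceable decorator to record a tool call inside a toy agent. The function names and data are hypothetical; it assumes the langsmith package is installed and that LANGSMITH_TRACING and LANGSMITH_API_KEY are set in the environment so runs are sent to LangSmith.

```python
from langsmith import traceable


@traceable(run_type="tool", name="lookup_account")
def lookup_account(account_id: str) -> dict:
    # Hypothetical tool call; LangSmith records its inputs, outputs, and latency
    return {"account_id": account_id, "status": "active"}


@traceable(name="support_agent")
def support_agent(question: str) -> str:
    # Each nested traced call shows up as one step in the agent's trajectory
    account = lookup_account("ACC-123")
    return f"Account {account['account_id']} is {account['status']}."


print(support_agent("Is my account active?"))
```

In LangSmith, the nested calls appear as a single trajectory, so reviewers can see exactly which tools the agent invoked and with what arguments – the level of detail that audits in regulated industries tend to require.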

4. New tools for every role

LangChain unveiled a broad set of tools for different roles:

  • Developers: Move from single-agent systems to full multi-agent swarms with Supervisor prebuilt modules (see the sketch after this list).
  • Product teams: A revamped LangGraph Studio v2 improves visual debugging, versioning, and in-app editing.
  • Citizen developers: With RAG-as-a-Service, non-coders can now build knowledge-based AI agents.
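For the developer item above, here is a minimal sketch of a supervisor-style multi-agent setup. It assumes the langgraph, langgraph-supervisor, and langchain-openai packages plus an OPENAI_API_KEY in the environment; the agent names, tools, and model choice are illustrative, not something prescribed by LangChain.

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph_supervisor import create_supervisor

model = ChatOpenAI(model="gpt-4o-mini")


def search_docs(query: str) -> str:
    """Hypothetical tool: search internal documentation."""
    return f"Top result for '{query}'"


def save_draft(text: str) -> str:
    """Hypothetical tool: store a drafted summary."""
    return "Draft saved."


# Two specialist agents, each with its own tools
research_agent = create_react_agent(model, tools=[search_docs], name="researcher")
writer_agent = create_react_agent(model, tools=[save_draft], name="writer")

# The supervisor agent routes each request to the right specialist
supervisor = create_supervisor(
    [research_agent, writer_agent],
    model=model,
    prompt="Send research questions to 'researcher' and drafting tasks to 'writer'.",
).compile()

result = supervisor.invoke(
    {"messages": [{"role": "user", "content": "Summarize last quarter's product docs."}]}
)
print(result["messages"][-1].content)
```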

5. The release of the MCP Server will reduce the need for custom integrations

The release of the Model Context Protocol (MCP) server within LangGraph now makes it easier to expose agents and connect them to different sources. Put simply, it standardizes how an organization’s LLMs connect to all the data sources they require – so you won’t need to spend time building custom integrations.
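As a rough sketch of what this looks like in practice, the example below uses the langchain-mcp-adapters package to load tools from a hypothetical MCP server and hand them to a prebuilt agent. The server name, URL, and model are assumptions for illustration; the point is that any MCP-compliant source plugs in the same way, without a custom integration.

```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


async def main() -> None:
    # Hypothetical MCP server exposing an organization's internal documents
    client = MultiServerMCPClient(
        {
            "internal_docs": {
                "url": "http://localhost:8000/mcp",
                "transport": "streamable_http",
            }
        }
    )
    # Every tool the MCP server exposes becomes a standard LangChain tool
    tools = await client.get_tools()

    agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools)
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Find last quarter's sales report."}]}
    )
    print(result["messages"][-1].content)


asyncio.run(main())
```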

What it all means for the future of Agentic AI

The implications are clear: The tools for building intelligent, scalable, and maintainable AI agents are ready. The maturity of the LangChain platform has reached a level that will enable widespread deployments. If your organization hasn’t begun exploring the potential of AI agents, now is the time – waiting too long means risking competitive disadvantage.

Here at Qubika we’re already using our proprietary, enterprise-grade Agentic Platform (QAP), which utilizes LangChain, to help organizations build and deploy AI agents with a range of accelerators. The latest advancements in LangChain, announced at Interrupt, will further expand QAP’s capabilities, unlocking more opportunities for innovation and impact with our clients.

AI Agent design and development services

Discover more about our work building enterprise-grade AI Agents using the LangChain platform


By Sebastian Diaz

Data Studio Manager

Sebastian Diaz is Qubika's Data Studio Manager. With 9 years of experience in data, he has held roles including Data Engineer, Data Scientist, and Data Analyst. Sebastian focuses on strengthening the synergies between Qubika's different data profiles, and in addition to leading the studio he teaches Database and Power BI courses. He is responsible for developing and leading the data profiles in the Data Studio, and for generating opportunities that support the studio's growth.
