Advances in automation and artificial intelligence are beginning to blur the line between the physical world of instruments and the digital world of data systems. At HPLC 2025, Christian Haas, R&D scientist at Agilent Technologies, suggested that in the automated laboratory of the future, analytical instruments could bridge those two domains – feeding into connected workflows and even refining methods independently. Analysts may soon view their instruments not only as tools, but as partners in collaboration.
In the following interview, Haas explains how machine learning and software integration can automate demanding tasks such as gradient optimization, while also stressing the importance of data quality and close collaboration between developers and scientists.
Please give us an introduction to yourself and your role
Certainly! My work bridges the disciplines of chemistry, data science, and laboratory automation. Prior to joining Agilent, I contributed to the MOCCA open-source project at MIT and Bayer, which focused on automated chromatographic data analysis. At Agilent, I now lead innovation projects that integrate AI, machine learning, and automation into liquid chromatography (LC) workflows. My goal is to help scientists generate better data, faster, by fostering collaboration between developers and users to create smarter, more connected lab solutions.
At HPLC 2025, you spoke about the “new role” of analytical instrumentation in today’s laboratory landscape…
The key idea of my talk was that analytical instruments are no longer passive data collectors. Instead, they are evolving into intelligent interfaces that bridge the physical and digital worlds. Their value now lies in how well they integrate into digital workflows – producing actionable insights with minimal manual intervention.
This shift turns instruments into active collaborators. For example, we developed a prototype for closed-loop LC gradient optimization that uses machine learning to refine methods in real time, based on actual experimental feedback, with minimal human involvement.
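To give a feel for what such a closed loop looks like in code, the sketch below is a minimal Python illustration, not Agilent's actual implementation. The instrument run is replaced by a toy scoring function (run_lc_gradient), and the "refine from feedback" step uses a generic Gaussian-process surrogate with an upper-confidence-bound pick rather than any specific optimization framework; all names and numbers are hypothetical.

```python
# Minimal sketch of closed-loop LC gradient optimization (illustrative only).
# run_lc_gradient is a hypothetical stand-in for running a method on the
# instrument and scoring the resulting chromatogram (e.g. critical-pair resolution).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_lc_gradient(start_b, end_b, time_min):
    # Toy response surface with an optimum near a 20->80 %B gradient in 12 min.
    return -((start_b - 20) ** 2 + (end_b - 80) ** 2 + (time_min - 12) ** 2) / 100.0

rng = np.random.default_rng(0)
# Initial design: a few random gradients (start %B, end %B, gradient time in min).
X = rng.uniform([5, 50, 5], [40, 95, 30], size=(5, 3))
y = np.array([run_lc_gradient(*x) for x in X])

for _ in range(10):
    # Fit a surrogate model to all experimental feedback collected so far.
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    # Propose candidate gradients and pick the best upper-confidence-bound value.
    candidates = rng.uniform([5, 50, 5], [40, 95, 30], size=(200, 3))
    mean, std = gp.predict(candidates, return_std=True)
    next_x = candidates[np.argmax(mean + 1.5 * std)]
    next_y = run_lc_gradient(*next_x)   # the "experiment" closes the loop
    X, y = np.vstack([X, next_x]), np.append(y, next_y)

best = X[np.argmax(y)]
print(f"Best gradient found: {best[0]:.1f}->{best[1]:.1f} %B in {best[2]:.1f} min")
```

In a real deployment, the scoring function would parse the acquired chromatogram and the proposal step would come from a dedicated Bayesian-optimization library; the structure of the loop, however, is the same: run, score, update the model, propose the next method.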
What opportunities do you see emerging at the intersection of analytical instrumentation and technologies like AI and machine learning?
The most exciting opportunities aren't just about smarter algorithms or increased automation. They’re about overcoming long-standing challenges in data integration, standardization, and infrastructure.
In our LC method development projects, we quickly realized that deploying machine learning effectively meant first solving foundational integration issues like connecting instruments, software, and data in a robust way.
What surprised us was how much we could achieve by using our existing tools in more interconnected ways to overcome many of the traditional roadblocks to lab digitalization. For me, AI and ML are catalysts, accelerating our journey toward more holistic, intelligent lab ecosystems.
Why is data quality so crucial in this new digital ecosystem – and how are modern systems addressing that challenge?
Data quality has always been essential for good science. But traditionally, it has relied on expert-driven method development and hands-on data interpretation. In an automated lab, we need systems that can produce and interpret high-quality data independently, without sacrificing analytical rigor.
By integrating machine learning and advanced analytics into method development, we’ve created closed-loop systems that not only generate high-quality data but also use it to directly optimize lab processes.
You also spoke about fully autonomous LC gradient optimization – could you reflect on what that kind of innovation means for scientists on the ground?
Method development for LC gradients is traditionally time-consuming and requires significant expertise. With closed-loop gradient optimization, that complexity is handled by machine learning algorithms. They continuously refine the method based on experimental feedback.
This means scientists can spend less time on repetitive setup and more time on interpretation and innovation. Plus, the system can run productively around the clock and generate reliable data even when no expert is present.
How are software and hardware evolving together to support end-to-end automation in the lab?
We're seeing a much tighter coupling between instrument control, data acquisition, data analysis, and decision-making. At Agilent, we're developing APIs and automation frameworks that unify these elements. However, even as these systems grow more autonomous, maintaining human oversight remains critical, particularly when dealing with edge cases, unexpected results, or decisions that rely on nuanced domain expertise.
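As one way to picture how control, acquisition, analysis, and decision-making might sit behind a single automation interface while keeping a human checkpoint, here is a hedged Python sketch. Every name in it (Instrument, Result, run_method, optimize) is hypothetical and does not correspond to an actual Agilent API.

```python
# Illustrative sketch of an automation interface with a human-oversight checkpoint.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Result:
    resolution: float
    flagged: bool  # True when the run looks like an edge case

class Instrument(Protocol):
    def run_method(self, method: dict) -> Result: ...

def optimize(instrument: Instrument, method: dict, target_resolution: float) -> dict:
    """Refine a method in a loop; stop and escalate on unexpected results."""
    for _ in range(20):
        result = instrument.run_method(method)
        if result.flagged:
            # Autonomy ends here: edge cases go back to a scientist for review.
            raise RuntimeError("Edge case detected - expert review required")
        if result.resolution >= target_resolution:
            return method
        # Simple illustrative decision rule: lengthen the gradient and retry.
        method = {**method, "gradient_time": method["gradient_time"] * 1.1}
    return method

class MockInstrument:
    def run_method(self, method: dict) -> Result:
        # Pretend resolution improves with longer gradients.
        return Result(resolution=method["gradient_time"] / 10.0, flagged=False)

print(optimize(MockInstrument(), {"gradient_time": 10.0}, target_resolution=1.5))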
What do you see as the biggest hurdles to achieving seamless digital-physical integration in the lab, and how might we overcome them?
One of the biggest barriers is fragmentation across instruments, software platforms, data formats, and even organizational silos. Many labs still rely on legacy systems that weren’t built to communicate with each other.
Equally important is the human challenge: we need talent capable of bridging chemistry, hardware, software, and data science. This calls for a shift in analytical chemistry education. For example, future curricula might combine foundational analytical chemistry with training in programming (e.g., Python) and data analytics (e.g., chemometrics), preparing future scientists not just to use digital tools, but to develop and integrate them too.
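As a taste of what such a combined curriculum could cover, here is a small, self-contained Python exercise of the kind a course might use: simulated peak-area data from two batches explored with PCA, a basic chemometrics technique. The data and values are invented purely for illustration.

```python
# Toy chemometrics exercise: PCA on simulated peak-area data (no real instrument output).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# 30 simulated chromatograms (peak areas for 5 components) from two batches,
# where batch B has a shifted third component.
batch_a = rng.normal(loc=[10, 5, 3, 8, 2], scale=0.5, size=(15, 5))
batch_b = rng.normal(loc=[10, 5, 6, 8, 2], scale=0.5, size=(15, 5))
peak_areas = np.vstack([batch_a, batch_b])

scores = PCA(n_components=2).fit_transform(peak_areas)
print("First two principal-component scores per sample:")
print(np.round(scores, 2))
```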
Interdisciplinary teams are crucial for the digital lab transformation because no individual can master every domain. Open interfaces, APIs, and data standards like Allotrope are essential enablers, helping interdisciplinary teams create flexible, integrated automation solutions.
Do you see the “dark analytical laboratory” becoming a genuine possibility in the next 10 years? Is this even something we should be striving for?
In highly standardized environments like QA/QC labs, a fully automated “dark lab” is indeed possible within the next decade and could offer significant efficiency benefits.
But in R&D settings, human creativity and intuition still play a vital role. Rather than aiming for complete autonomy, we aim for smart automation: systems that handle routine tasks efficiently, while enabling scientists to focus on the big-picture thinking that drives innovation.
What does effective collaboration between users and developers look like to you, and how can it accelerate innovation?
Effective collaboration means building solutions together through an iterative, interactive process between users and developers. Developers must deeply understand real-world lab challenges, while users should help shape the tools they depend on.
In our Gradient Optimizer project, close collaboration with Evonik and their experts was crucial – specifically with Jim Boelrijk and Johannes Dürholt, two senior data scientists deeply involved in the open-source project BoFire, and Sebastian Detlefsen, group leader for Chromatography & Purification. They provided practical domain knowledge and immediate feedback, ensuring the tool addressed real-world needs effectively. Such co-creation accelerates innovation by shortening the path from concept to real-world value.