AI Agents: Catalyst for Change or Cause for Concern?

Imagine a world where AI not only answers our questions but also performs tasks for us. This technology goes beyond providing information: it can complete entire tasks independently (though AI won’t replace critical thinking just yet). These AI agents are the future of intelligent automation, digital assistants that can handle complex tasks and streamline our processes with minimal human input.

AI agents are being hailed as the darling of AI in 2024, but the concept has evolved over several decades: the early foundations of AI, including ideas that would later contribute to agent theory, were laid in the 1950s, and by the 2000s agent-based approaches had become widely adopted across AI applications, from software assistants to robotics.

The book “Artificial Intelligence: A Modern Approach” by Russell and Norvig, first published in 1995, is widely regarded as the bible of AI, and it is structured around the concept of intelligent agents, a fundamental framework for understanding and developing AI systems. Using the agent framework, the book provides a unifying theme for discussing AI techniques and approaches, from classical AI to modern machine learning methods. This approach helps students understand how different AI technologies fit into the broader context of creating intelligent systems that can perceive, reason, and act in complex environments [1].

However, a current problem (particularly acute in healthcare) is that we are limited by data. Healthcare providers often lack comprehensive datasets, which constrains the models we develop, i.e., what they know about the world (and hence what tasks they can solve). There are various methods to overcome this, such as transfer learning and synthetic data generation.
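
As a small illustration of synthetic data generation, the sketch below uses SMOTE from the imbalanced-learn package to synthesise extra minority-class samples. The dataset is a synthetic stand-in, not healthcare data, and SMOTE is just one of several possible techniques.

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Hypothetical stand-in for a small, imbalanced clinical dataset.
X, y = make_classification(n_samples=200, weights=[0.9], random_state=0)
print("before:", Counter(y))

# SMOTE synthesises new minority-class samples by interpolating
# between existing ones, one common form of synthetic data generation.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after:", Counter(y_res))
```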

We’re also on the cusp of a shift from monolithic models (typically standalone models trained from scratch on a specific dataset) to compound AI systems (composed of multiple components or models that interact and integrate). For instance, transfer learning takes a pre-trained model (a component) and fine-tunes it on a new, often smaller dataset, a component-reuse pattern characteristic of compound AI systems. RAG (Retrieval-Augmented Generation) is an even stronger example of a compound AI system, which we’ll discuss in more detail in another post. Compound AI systems are ultimately faster to build and adapt [2].
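
To make the component-reuse idea concrete, here is a minimal transfer-learning sketch in PyTorch: a pre-trained ResNet-18 is frozen and only a new classification head is trained. The two-class tablet-inspection task it implies is a hypothetical example.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Load a model pre-trained on ImageNet: the reusable "component".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor so its weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for a new, smaller task, e.g. a
# hypothetical "tablet OK / tablet defective" dataset (two classes).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head is trained on the small dataset.
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
```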


How Can We Apply AI Agents to a Pharma 4.0 Environment?

We’ve already discussed the theory and technicalities of developing a fundamental (non-generative) AI agent for drug discovery processes. We’ve also discussed PandaOmics, which utilises generative AI agents (although human oversight is still required to interpret the results and make final decisions) for target discovery and validation, analysing omics data (genomics, transcriptomics, proteomics, etc.) to identify novel therapeutic targets.

However, let’s run through another example within the drug development lifecycle: operations. We can integrate our model into existing pharmaceutical operations, for example by incorporating our AI agent into an existing oral solid dosage (OSD) production line. Here’s how it could work:

An AI agent automates the quality control process in a pharmaceutical tablet production line.

  1. Perception:
    • Data Collection: Sensors and cameras are installed along the production line to monitor the tablets continuously.
    • Data Input: The AI agent receives real-time data on tablet characteristics (e.g., size, shape, colour) and production conditions (e.g., temperature, humidity).

  2. Processing:
    • AI Models: The agent uses computer vision models to analyse the visual data and machine learning models to detect tablet anomalies or defects.
    • Large Language Models (LLMs): LLMs interpret and analyse unstructured data such as operator notes, maintenance logs, and quality reports. They can understand natural language inputs, extract relevant information, provide insights and recommendations, and produce easy-to-understand reports.

  3. Decision-Making:
    • The AI agent processes the data using its models and LLMs to determine whether the tablets meet quality standards.
    • If an anomaly is detected, the AI agent decides on the appropriate action, such as alerting a human operator, adjusting production parameters, or removing defective tablets from the line.

  4. Action:
    • Actuation: The AI agent can directly control machinery to adjust production parameters (e.g., line speed, pressure) or activate mechanisms to remove defective tablets.
    • Communication: The AI agent can send alerts or generate reports for human operators, providing detailed analysis and recommended actions, such as real-time adjustments to process parameters to prevent further deviations.

  5. Feedback Loop:
    • Learning: The AI agent continuously learns from the outcomes of its actions. If certain defects are repeatedly missed or new types of defects appear, the AI models and LLMs are updated to improve accuracy and reliability.


In this scenario, the AI agent uses AI models and LLMs to perform autonomous quality control. The AI models handle structured data like images and sensor readings, while the LLMs process unstructured data like operator notes. The agent’s comprehensive system includes perception, processing, decision-making, action, and an iterative feedback loop to ensure continuous improvement and reliable automation on the tablet production line.
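
To make the five steps concrete, here is a minimal, self-contained sketch of one agent cycle in Python. The sensor values, thresholds, and action names are all simulated and hypothetical; a real line would call actual camera, model, and actuator interfaces.

```python
import random
from dataclasses import dataclass

# Illustrative sketch only: all names and thresholds are hypothetical,
# and the sensor/model functions below are simulated stand-ins.

@dataclass
class TabletReading:
    size_mm: float       # from an inline dimensional sensor
    colour_delta: float  # deviation from the reference colour

def perceive() -> TabletReading:
    # 1. Perception: in production this would read cameras and sensors.
    return TabletReading(size_mm=random.gauss(8.0, 0.1),
                         colour_delta=abs(random.gauss(0.0, 0.4)))

def detect_defect(r: TabletReading) -> tuple[bool, float]:
    # 2. Processing: a trained vision/ML model would score the tablet;
    # here a toy rule produces (is_defect, confidence).
    score = max(abs(r.size_mm - 8.0) / 0.3, r.colour_delta / 1.2)
    return score > 1.0, min(score, 1.0)

history = []  # 5. Feedback loop: outcomes kept for retraining/drift checks

def agent_step() -> str:
    reading = perceive()
    is_defect, confidence = detect_defect(reading)
    # 3. Decision-making: choose an action based on the model output.
    if is_defect and confidence > 0.95:
        action = "reject_tablet"    # 4. Action: actuate the reject mechanism
    elif is_defect:
        action = "alert_operator"   # low confidence: escalate to a human
    else:
        action = "pass"
    history.append((reading, is_defect, confidence, action))
    return action

print([agent_step() for _ in range(10)])
```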

Putting LLMs in charge of the control logic to enhance reasoning and problem-solving is known as an agentic approach. AI agents are, in effect, digital labour, though that is a broad generalisation: agents vary significantly in complexity and function. AI agents, particularly those built around LLMs, benefit from continuous feedback to refine their actions and improve performance over time, a fundamental aspect of intelligent systems.

These systems can combine LLMs with other AI models and technologies to perform various tasks, from simple automation to sophisticated problem-solving.

Integration can also be seamless. The AI agent can work with existing MES and SCADA systems (or anywhere within the vertical stack), which is why applied digital transformation within our tablet facility is paramount. The LLM can offer support through an interactive assistant, helping the production team troubleshoot issues and optimise the process based on historical trends and on structured and unstructured text. It can also incorporate feedback to improve system performance continuously.
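
As a rough illustration of how such an assistant could sit between the MES layer and an LLM, consider the sketch below. The endpoint, field names, and the call_llm stub are hypothetical placeholders rather than a real MES or vendor API.

```python
import json
import urllib.request

# Hypothetical endpoint: no real MES/SCADA product is implied here.
MES_URL = "https://mes.example.local/api/batches/latest"

def fetch_batch_context() -> dict:
    # Pull structured process data (e.g. compression force, line speed)
    # from the MES layer of the vertical stack.
    with urllib.request.urlopen(MES_URL) as resp:
        return json.load(resp)

def build_prompt(batch: dict, operator_note: str) -> str:
    # Combine structured MES data with unstructured operator text so the
    # assistant can reason over both, plus historical trends if available.
    return (f"Batch data: {json.dumps(batch)}\n"
            f"Operator note: {operator_note}\n"
            "Suggest likely causes and next checks for this deviation.")

# Example usage (call_llm is a stub: swap in your LLM provider's client):
# batch = fetch_batch_context()
# answer = call_llm(build_prompt(batch, "Tablets show capping on press 3"))
```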


Transparent and Responsible AI – Ethical Dangers of Using AI Agents

Ensuring transparency and responsibility in AI deployment is, of course, crucial to mitigating the ethical dangers of AI agents. Several tools and packages are designed to enhance model transparency and explainability in AI and machine learning. Industrial and large-scale operations often use SHAP (SHapley Additive exPlanations) for model explainability; SHAP is widely adopted across industries thanks to its robust theoretical foundation and practical utility in interpreting complex machine learning models.
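
A minimal SHAP usage sketch, assuming a tree-based classifier: the model and data are synthetic stand-ins for a QC defect model, but the explainer calls are the package’s standard API.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for a QC defect classifier; the data are synthetic.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])

# Visualise which features drive the model's predictions overall.
shap.summary_plot(shap_values, X[:100])
```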

ONNX can also be deployed for AI agents. By providing a standardised format for machine learning models, ONNX enables agents to use models trained in various frameworks, enhancing interoperability and flexibility. This makes it easier to integrate, deploy, and scale AI agents across different platforms and environments, ensuring consistent performance and streamlined operations.
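
For instance, a model can be exported once to ONNX and then served by onnxruntime regardless of the framework it was trained in. The sketch below assumes a toy PyTorch classifier; the file name and tensor names are arbitrary.

```python
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# A small PyTorch network standing in for a defect classifier (hypothetical).
model = nn.Sequential(nn.Linear(6, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

# Export to the framework-neutral ONNX format.
torch.onnx.export(model, torch.randn(1, 6), "qc_model.onnx",
                  input_names=["features"], output_names=["logits"])

# Any ONNX-compatible runtime (here onnxruntime) can now serve the model,
# regardless of the framework it was trained in.
session = ort.InferenceSession("qc_model.onnx")
sample = np.random.randn(1, 6).astype(np.float32)
print(session.run(None, {"features": sample})[0])
```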

Here are some ethical dangers of using AI agents:

Data Sources:

    • AI models risk incorporating biased, private, or copyrighted data, leading to ethical concerns around privacy and intellectual property.
    • Effective data governance and filtering processes are crucial to mitigate these risks.


Ethical Usage:

    • AI models can produce misinformation, hallucinations, and harmful content, raising ethical issues around truthfulness and potential misuse.
    • Ensuring prosocial behaviour and aligning models with societal values helps mitigate misuse and value misalignment risks.

Data Governance Procedures:

    • Comprehensive documentation of data and model details is essential for transparency and accountability.
    • Rigorous evaluation and adherence to regulatory compliance and ethical standards are necessary to ensure responsible AI deployment.


Quality Control-Specific Risks


Accuracy and Reliability: The AI system must maintain high accuracy and reliability in detecting defects to avoid harmful consequences from non-compliant tablets.

Biases in Quality Assessment: The system must avoid potential biases that could lead to inconsistent quality control decisions, affecting patient safety and trust in the pharmaceutical product.


Human Oversight

  • Monitoring: Implementing human-in-the-loop systems to monitor and review AI decisions regularly, ensuring they meet quality and ethical standards.
  • Accountability: Defining clear accountability structures for decisions made by AI systems to ensure that human supervisors can intervene when necessary.
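
One common way to implement such monitoring is a confidence-gated escalation path, sketched below with hypothetical names and an illustrative threshold: the agent acts autonomously only above the review threshold and otherwise defers to a human.

```python
# A minimal human-in-the-loop gate; the threshold, queue, and action names
# are illustrative choices, not recommendations.
REVIEW_THRESHOLD = 0.90
review_queue = []  # items awaiting a human decision

def gated_decision(item_id: str, is_defect: bool, confidence: float) -> str:
    if confidence >= REVIEW_THRESHOLD:
        # High confidence: the agent acts autonomously, but the decision is
        # still recorded so a human supervisor can audit it (accountability).
        return "reject" if is_defect else "pass"
    # Low confidence: do not act; route the item to a human reviewer.
    review_queue.append({"item": item_id, "model_call": is_defect,
                         "confidence": confidence})
    return "escalated_to_human"

print(gated_decision("tablet-0042", True, 0.97))  # -> reject
print(gated_decision("tablet-0043", True, 0.62))  # -> escalated_to_human
```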


Transparency to Stakeholders

  • Stakeholder Communication: Informing all stakeholders, including regulatory bodies and the public, about the use of AI in the quality control process and how it impacts product safety and efficacy.
  • Explanation of Decisions: Ensuring that AI decision-making processes can be explained in understandable terms, particularly for critical quality control decisions.


Continuous Improvement

  • Feedback Loops: Establishing feedback mechanisms to continually improve the AI system based on real-world performance data and stakeholder input.
  • Ethical Audits: Conducting regular ethical audits of the AI system to identify and address any emerging ethical issues.


These ethical dangers of incorporating an AI agent into a Pharma 4.0 environment emphasise the importance of robust data governance, ethical usage frameworks, and transparency in the deployment and operation of AI agents [3, 4]. From August 2026, AI companies must comply with new EU rules concerning transparency and copyright [5].

EMA has also published a draft reflection paper outlining the current thinking on using AI to support the safe and effective development, regulation, and use of human and veterinary medicines. This paper reflects on the principles relevant to applying AI and ML at any step of a medicine’s lifecycle, from drug discovery to the post-authorisation setting [6].


Navigating the Risks of AI Agent Communication: The Imperative of Guardrails

The collaboration of AI agents presents both opportunities and challenges. On one hand, their collective problem-solving abilities could lead to powerful breakthroughs. However, this synergy may come at a cost. Eric Schmidt, former Google CEO, expresses concern about AI agents potentially developing their own language and communicating in ways incomprehensible to humans—a scenario he predicts could unfold within the next five years.

This “agent-to-agent interaction” raises critical questions: How can we effectively manage a non-human intelligence that lacks human experience and perspective? The answer lies in implementing robust guardrails to ensure these AI systems remain within controllable bounds [7].


The FDA, MHRA, and EMA all have their own take on this. Here are some guardrails that could be incorporated:

Bias and Discrimination:

    • Issue: AI can perpetuate existing biases, leading to unfair treatment.
    • Guardrails: Implement fairness audits and bias detection tools.

Lack of Accountability:

    • Issue: It is difficult to assign blame when AI makes mistakes.
    • Guardrails: Establish clear responsibility frameworks.

Neglecting the Human Element:

    • Issue: Over-reliance on AI can lead to dehumanisation and reduced empathy.
    • Guardrails: Maintain human oversight and decision-making roles.

Privacy Violations:

    • Issue: AI systems can misuse or leak sensitive data.
    • Guardrails: Enforce strict data protection policies.

Transparency Issues:

    • Issue: Lack of transparency can lead to mistrust and misuse.
    • Guardrails: Ensure AI processes are understandable and explainable.


Key Guardrails:

  • Regular audits
  • Clear accountability
  • Human oversight
  • Data protection
  • Transparency measures


Maintaining these guardrails helps mitigate the ethical risks of AI agents and guard against possible catastrophes [8, 9, 10].
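
As a closing illustration, a guardrail can be as simple as a policy-checking wrapper around every action the agent proposes, combining human oversight, accountability, and an audit trail. The sketch below is hypothetical and not a compliance recommendation.

```python
import datetime

# Illustrative guardrail wrapper: action names, the policy set, and the
# audit format are hypothetical; real deployments follow site procedures.
AUDIT_LOG = []
HIGH_RISK_ACTIONS = {"change_setpoint", "release_batch"}

def guarded_execute(action: str, params: dict, approved_by=None) -> str:
    entry = {"time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
             "action": action, "params": params, "approved_by": approved_by}
    # Human oversight + accountability: high-risk actions need a named approver.
    if action in HIGH_RISK_ACTIONS and approved_by is None:
        entry["status"] = "blocked_pending_approval"
    else:
        entry["status"] = "executed"
    AUDIT_LOG.append(entry)  # transparency: regular audits read this log
    return entry["status"]

print(guarded_execute("change_setpoint", {"pressure_bar": 2.1}))            # blocked
print(guarded_execute("change_setpoint", {"pressure_bar": 2.1}, "J. Doe"))  # executed
```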


Takeaway

AI agents are changing how we approach problem-solving in AI, making systems more autonomous and capable of handling diverse and complex queries. Integrating AI agents into various fields represents a significant leap in automation and efficiency, particularly within pharmaceutical operations. These agents, the product of decades of evolution, can perform complex tasks with minimal human intervention, streamlining processes and enhancing productivity. Despite their benefits, deploying AI agents necessitates stringent ethical consideration, data governance, and continuous human oversight to ensure responsible and transparent use. As the technology progresses, it is imperative to implement robust guardrails to manage potential risks, ensuring AI systems remain beneficial and trustworthy.


References:

[1] Russell, S.J. and Norvig, P., 2003. Artificial Intelligence: A Modern Approach. 2nd ed. Upper Saddle River, NJ: Prentice Hall.


[2] IBM Technology – What are AI Agents?

www.youtube.com/watch?v=F8NKVhkZZWI


[3] AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare

https://link.springer.com/article/10.1007/s11948-024-00486-0


[4] The Ethics of Using Artificial Intelligence in Scientific Research: New Guidance Needed for a New Tool

https://link.springer.com/article/10.1007/s43681-024-00493-8


[5] EU’s New AI Rules Ignite Battle Over Data Transparency

https://www.reuters.com/technology/artificial-intelligence/eus-new-ai-rules-ignite-battle-over-data-transparency-2024-06-13/


[6] EMA Reflection Paper on the Use of Artificial Intelligence in the Lifecycle of Medicines

www.ema.europa.eu/en


[7] The Future of AI, According to Former Google CEO Eric Schmidt

Refer to the video above.


[8] European Medicines Agency (EMA) AI Workplan

https://www.ema.europa.eu/en/news/artificial-intelligence-workplan-guide-use-ai-medicines-regulation


[9] CDER, Artificial Intelligence in Drug Manufacturing

https://www.fda.gov/media/165743/download?attachment


[10] MHRA, Impact of AI on the regulation of medical products, Implementing the AI White Paper principles

https://assets.publishing.service.gov.uk/media/662fce1e9e82181baa98a988/MHRA_Impact-of-AI-on-the-regulation-of-medical-products.pdf


Let's collaborate to develop better healthcare solutions for tomorrow
