From “Right” to “Reasonable”: Explainable AI and its Role in Business Decision-Making for Pharma

Artificial intelligence (AI) is increasingly embedded in our daily lives, and we hold high expectations about the quality of the service or answers it provides. Many believe that AI, like a human expert, should deliver the "right" answer every time. But is this reasonable? And should it apply when assessing AI's utility in complex domains, such as pharma commercialization?

When it comes to decision-making in business, trust is paramount—especially in life sciences, where patient care and outcomes are on the line. And when the decision-maker is an AI, that trust requirement is magnified by a significant factor. How can we trust AI? As much as a tool’s accuracy matters, the ability to understand and interpret its decision-making process is crucial. Explainable AI, or XAI, serves as the bridge between complex AI models and human comprehension.

While AI can crunch numbers and process data with speed and accuracy, the notion of “right” in the context of machine-generated answers, particularly in the multifaceted context of pharma and healthcare, is problematic. Unlike mathematical equations with single, definitive solutions, questions around patient care and market share often lack such certainty, and they demand nuance as well.

It’s worth keeping in mind the reality that people ALSO get things wrong. But (usually!) humans are more understanding of flaws in human behavior and outputs. If you ask ten different people a business question, there’s a good chance you’ll receive ten distinct answers, each shaped by their unique perspective, experiences, and knowledge. There can be many valid interpretations of the question, and as a result, many possible answers or directions that follow. And of course, some of them may be erroneous or lead to risky decisions.

Business questions, more often than not, are inherently subjective. In the world of AI-driven analytics, these subjective factors are equally at play.

What’s better than a “right” answer is a “reasonable” answer: one that stems from a clear and logical interpretation of the query.

Explainable AI: What Makes an Answer Reasonable?

AI models like GPT generate responses by first interpreting the input based on patterns and information they have learned from vast amounts of data. Their responses are rooted in the training data and the context provided. They don’t “think” or “understand” questions in the human sense. Instead, they use statistical probabilities and data patterns to generate responses.
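To make that idea concrete, here is a minimal, purely illustrative sketch of the underlying mechanism: a language model assigns probabilities to possible next words given the preceding context, and a response is assembled by repeatedly sampling from those probabilities. The probabilities below are invented for illustration and are not drawn from any real model or product.

```python
import random

# Toy next-word probabilities, standing in for what a real model learns
# from its training data. The numbers here are invented for illustration.
NEXT_WORD_PROBS = {
    ("market", "share"): {"grew": 0.5, "declined": 0.3, "stabilised": 0.2},
    ("share", "grew"): {"because": 0.6, "despite": 0.4},
}

def sample_next_word(context: tuple) -> str:
    """Pick the next word by sampling from the learned probabilities."""
    candidates = NEXT_WORD_PROBS[context]
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(("market", "share")))  # e.g. "grew"
```

The point of the sketch is simply that the output is probabilistic, not reasoned in the human sense, which is why the same question can yield different, equally plausible answers.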

But, just like humans, these AI models aren’t immune to errors. They might misconstrue context, misunderstand nuances, or misinterpret data—similar to how humans might falter in their interpretations or judgments. 

When we assess a human response to a question, or the decision a human makes for a given scenario, we do so based on gaining some understanding of how the decision was made. We want it to be right, but we also don’t want it to be just a guess! This line of thinking leads to the concept of “reasonableness”: does the answer seem reasonable, and is there a logic behind it that makes us trust it?

You can see where I’m going with this … the idea of reasonableness is an implicit requirement when evaluating decisions made by AI systems. Explainable AI becomes pivotal here—it’s about understanding why a particular decision or output was arrived at. It’s not just about the outcome but understanding the rationale behind it.

When a GPT service produces outputs that are reasonably grounded in the context of the query and available data, even if not 100% accurate, it aligns with the concept of reasonableness. Explainable AI mechanisms empower users to comprehend the reasoning behind these decisions or insights, thus making the outputs more interpretable and justifiable. It also encourages exploration and refinement.

When a human offers a decision but is then presented with additional information, they will modify their response accordingly. If an AI system provides its rationale, it becomes easy to dig deeper, challenge the assistant and refine the answer. The GPT-powered service can take on the new information, and a collaborative conversation can take place. Just like human conversations in this context, not every exchange leads to a lightning-bolt moment. But some do, and genuine value can emerge.
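As a rough sketch of what that collaborative loop can look like in code, the pattern below appends each challenge or clarification to the running conversation and asks again. The `ask_assistant` function is a hypothetical stand-in for whatever GPT-powered service is being called; it is not a real API, and the question text is invented for illustration.

```python
def ask_assistant(conversation: list) -> str:
    """Hypothetical stand-in for a call to a GPT-powered service."""
    ...  # in practice: send the conversation to the model and return its reply

# Start with the business question, then refine with new information.
conversation = [
    {"role": "user", "content": "Why did new-to-brand patients drop last quarter?"}
]
answer = ask_assistant(conversation)

# The analyst challenges the answer with context the model did not have.
conversation += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "A competitor launched in August. Does that change your reasoning?"},
]
refined_answer = ask_assistant(conversation)
```

The value of exposing the rationale is exactly this: it gives the human something concrete to push back on in the next turn.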

How Pharma Brand Teams Get Value from GPT

When using GPT-powered solutions—particularly in sensitive sectors like healthcare and pharma—the right answer is important, but it’s essential to understand how the system arrived at that conclusion.

Explainable AI offers transparency into the black box of AI models, shedding light on the probabilistic reasoning behind generated insights.

For Pharma brand teams, GPT-powered tools, like Prospection AI and its ProGPT assistant, offer an immediate improvement in the way people can interact with data. We know that pharma teams want to be more patient-centric, and we know that the business places a high value on data-driven decisions. But when the bar to using data is high, teams can struggle to meet these goals.

Having tools that bring patient-centricity and data-driven decision making to life is an enormous advance for pharma teams, but explainability becomes a linchpin for trust and acceptance. It’s not just about accepting the results but understanding why the AI arrived at those particular conclusions. This transparency is instrumental, enabling users to validate the outputs against their domain expertise and ensuring alignment with business objectives.

By exposing the AI’s decision-making process, users gain confidence in the tool’s capabilities. They can better leverage its strengths, verify its insights, and, critically, have the ability to explain those insights to stakeholders.

In the first screen, you can see how ProGPT describes the way it has configured the report to produce the chart. In the second screen, after clicking “Explain how you did that”, ProGPT gives its reasoning for how it configured the report and suggests what insights could be gleaned from the resulting chart.

Embracing the Assistive Nature of AI

When working with AI in business decision-making for pharma, it’s crucial to understand that the technology isn’t here to replace human judgment. Instead, it serves as a valuable tool to aid in information gathering, data interpretation, and problem-solving. Its role is to suggest answers based on the data to which it’s exposed.

Embracing this assistive nature of AI means recognizing “reasonableness.” If AI provides a reasonable response—one that is a logical interpretation of the question based on the context and data available—it’s fulfilling its purpose. AI’s capacity to quickly process large datasets, detect patterns, and offer insights in a fraction of the time it would take humans is its strength; the human ability to assess and execute on those insights is where the true value lies.

For our life sciences customers that use Prospection AI, ProGPT’s ability to understand and interpret patient journeys across ~500 million de-identified patients at scale enables critical brand decision making to happen in real time.

What we’re seeing is that the best business decisions emerge from a synthesis of human expertise and AI-generated insights. The subjective interpretations of AI can be one piece of a larger puzzle, to be triangulated with diverse human opinions and validated through the analysis of multiple data sources.

The next time you find yourself interacting with AI for critical business decisions, remember that “right” is not always the objective – in fact, there may not even be a “right”! Instead, look for “reasonableness.” And then remember, AI can provide a reasonable interpretation, but the ultimate decision should rest in the hands of well-informed human experts who consider all factors and validate with their knowledge and experience.

Contact us today for a demo to learn more about how Prospection AI and award-winning ProGPT can help brand teams be more patient-centric, independent and data-driven.