Create your personal agenda – check the favourite icon
As traditional R&D models strain under rising costs and declining ROI, the pharmaceutical industry stands at a tipping point. This keynote explores how in-silico methods—powered by AI, digital twins, and virtual trials—are transforming drug development from end to end, revolutionizing everything from pre-clinical safety studies and dose optimization to clinical trial design, regulatory submissions and post-marketing expansion.
We’ll explore the science, the systems, and the strategy behind this transformative shift. Attendees will gain insight into how organizations can embrace this paradigm, build future-ready capabilities, and lead in a data-driven, AI-enabled era of drug development.
This panel will explore how generative AI is transforming the regulatory and scientific landscape – from enabling in silico alternatives to animal testing, to revolutionizing how regulatory dossiers are authored, reviewed, and approved. With the FDA accelerating its initiatives around AI, the industry is at a pivotal moment to rethink how we bring therapies to market faster and more efficiently.
The integration of unstructured, multimodal healthcare data into standardized analytics-ready formats has traditionally required multiple disconnected projects: LLM pipelines, OCR/PDF tools, FHIR mappings, and clinical terminology APIs. This talk presents a new approach powered by Generative and Agentic AI that enables healthcare organizations to consolidate all raw, multimodal, longitudinal clinical data into a unified OMOP Common Data Model in a single automated pipeline. We describe the architecture behind John Snow Labs’ Patient Journeys platform, which combines document understanding, entity linking, code normalization, temporal reasoning, and longitudinal patient modeling – entirely within the customer’s private cloud environment. By integrating information from both structured sources and unstructured free-text notes, the resulting data model delivers far more accurate downstream calculations for medical risk scoring, clinical coding, care gap detection, and population health metrics – because all available clinical context is reconciled into one semantic layer. Attendees will learn how to build and deploy a unified LLM-based data engineering pipeline, evaluate its output, and operationalize it for real-world use cases.
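The shape of such a pipeline can be sketched in miniature. This is an illustrative toy, not the Patient Journeys API: a regex lookup stands in for the LLM document-understanding and entity-linking steps, and the OMOP `condition_occurrence` fields are simplified; the vocabulary codes are shown only as examples.

```python
import re
from datetime import date

# Toy stand-in for the LLM extraction step: find condition mentions
# in a free-text note. A real pipeline would use a clinical NLP model.
VOCAB = {  # mention -> (illustrative SNOMED code, OMOP concept_id)
    "type 2 diabetes": ("44054006", 201826),
    "hypertension": ("38341003", 316866),
}

def extract_conditions(note: str):
    """Return vocabulary mentions found in the note (entity linking)."""
    found = []
    for mention, (snomed, concept_id) in VOCAB.items():
        if re.search(re.escape(mention), note, re.IGNORECASE):
            found.append((mention, snomed, concept_id))
    return found

def to_omop_rows(person_id: int, note: str, note_date: date):
    """Normalize mentions into simplified OMOP condition_occurrence rows."""
    return [
        {
            "person_id": person_id,
            "condition_concept_id": concept_id,
            "condition_source_value": snomed,
            "condition_start_date": note_date.isoformat(),
        }
        for _, snomed, concept_id in extract_conditions(note)
    ]

rows = to_omop_rows(42, "Pt with Type 2 Diabetes and hypertension.", date(2024, 3, 1))
```

The point of the sketch is the shape of the transformation: free text in, standardized analytics-ready rows out, with code normalization happening at the point of extraction rather than downstream.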
Join us for an inside look at how Capgemini and Insilico Medicine are partnering to develop Pharmaceutical Superintelligence (PSI)—a groundbreaking AI system designed to revolutionize the pharmaceutical industry. This session will explore how PSI leverages generative AI, predictive modeling, and automation to accelerate drug discovery and development. Learn how this collaboration is reshaping the future of medicine by enabling faster, safer, and more personalized treatments through data-driven innovation.
At BiotechX, Metaphacts introduces a cutting-edge Enterprise Information Architecture (EIA) solution designed to empower the pharmaceutical industry by ensuring the integration, accessibility, and governance of data across the entire value chain. This innovative approach leverages knowledge graphs to seamlessly connect business objects—such as those derived from common ontologies like Mondo and IDMP—with logical and physical data models, facilitating a unified view from high-level conceptual frameworks down to system-specific representations.
By adhering to FAIR (Findable, Accessible, Interoperable, Reusable) principles, Metaphacts' solution addresses the growing need for transparent, interoperable, and ethically managed data that drives AI and analytics in pharma. Through an intelligent framework that ties business processes, IT systems, data product definitions, and governance responsibilities, the solution not only enhances data integration but also ensures comprehensive data governance. Key stakeholders across data, IT, and business domains are equipped with the tools necessary to collaborate efficiently and effectively—laying the foundation for better decision-making, regulatory compliance, and operational agility in a rapidly evolving pharmaceutical landscape.
In this presentation, we will demonstrate how Metaphacts is transforming the way pharma companies structure and govern their data, enabling smarter AI applications, accelerated innovation, and a more holistic approach to digital transformation in the sector.
DNA-Encoded Libraries (DEL) generate chemistry datasets of unprecedented scale, yet most of this information remains unexplored. X-Chem applies chemomics - the integration of DEL analytics with machine learning and structural biology - to transform raw screening data into actionable insights. By reconstructing high-definition target structure-function maps and training predictive AI/ML models, we enable rapid, mechanism-spanning SAR development and discovery acceleration for our partners. This approach compresses years of traditional iteration into a single dataset, expands accessible chemical space, and delivers translational value across therapeutic programs. The talk will highlight how chemomics reshapes small-molecule drug discovery by converting underutilized DEL data into a strategic engine for innovation.
Discover how new agentic artificial intelligence (AI) and large language model (LLM) technologies will enable researchers to get a complete understanding of the regulatory network between biomedical entities from the literature alone. This talk dives into exciting case studies for how new agentic frameworks elevate AI's ability to provide reliable, deep answers to key scientific questions during the drug discovery process, for example when describing mechanistic rationale during target discovery/selection.
Today, medicine stands at the edge of a data-driven revolution. With healthcare generating more data than ever before, the challenge lies in making sense of this complexity to drive real-world impact. AI-powered technologies are leading the charge to meet this challenge, accelerating innovation, improving patient outcomes, and advancing more equitable care on a global scale.
In this session, we will explore how AI and collective intelligence are translating vast amounts of data into actionable insights, enabling clinicians to make more informed decisions and improving outcomes across borders.
· Learn how AI is redefining real-world healthcare by transforming data complexity into valuable insights that unlock groundbreaking medical advancements and reshape care delivery.
· Explore how a connected network of healthcare institutions is powering global collaboration by creating a feedback loop of knowledge, leading to patient benefits worldwide.
· Discover how a decentralized, cloud-based AI technology is bridging the gap in cancer care, paving the way for a connected future, where the patients of today are helping the patients of tomorrow.
This session will present how Lenovo’s Neptune™ liquid cooling, combined with NVIDIA accelerated computing, is enabling faster, more efficient AI drug discovery. Attendees will learn how real-world use cases demonstrate solutions that shorten research timelines, drive biotech breakthroughs, and deliver sustainable high-performance computing by reducing energy use and carbon footprint.
Eroom's Law, the observation that drug development costs double approximately every nine years, presents a critical challenge to the pharmaceutical industry. This talk will demonstrate how the strategic application of Artificial Intelligence (AI) and data-driven methodologies is transforming drug discovery, offering a powerful solution to this unsustainable trend.
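Stated compactly, the doubling described above is a simple exponential (with $t$ in years and $C_0$ the cost at a chosen baseline year):

```latex
C(t) = C_0 \cdot 2^{\,t/9}
```

At that rate, costs grow roughly sixteen-fold over a 36-year career span, which is what makes the trend unsustainable without a structural change in how discovery is done.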
Despite advancements in immunotherapy for metastatic non-small cell lung cancer (NSCLC), a significant proportion of patients do not respond to immune checkpoint inhibitors (ICIs). Identifying effective prognostic biomarkers that are independent of squamous and nonsquamous histology is critical for optimizing treatment strategies in NSCLC. Interferon gamma (IFNG) has emerged as a potential marker for predicting immunotherapy outcomes due to its role in immune modulation independent of squamous histology.
Virtual chemical space has exploded into the trillions, yet most tractable search methods still operate in 2D - misaligned with the fundamentally 3D nature of molecular recognition. Existing 3D approaches top out at billions of structures, leaving the vast majority of chemical space untouched.
In this talk, we present a joint workflow that enables true 3D search at trillion scale by combining our ultra-large virtual library (trillions of compounds, with ≥85% synthesizability via eMolecules) with Fujitsu Uvance’s QIMERA Ultra platform - a quantum-inspired, multi-vector R-group annealer with an ultra-large library interface. This enables full 3D screening across ultra-large space at ~0.0001% of the compute cost of conventional software, completing in under a couple of hours, without sacrificing medicinal chemistry realism or compound tractability. Identified virtual hits are practically obtainable within 2-4 weeks.
In a collaboration with a top-ten pharmaceutical organization, this approach delivered a confirmed hit rate of ~45% (full dose–response) from a single starting point (~90× over expectations) and contributed to a program now advancing into clinical trials.
The growing demand for safer, more effective drugs is driving a transformation in early-stage drug discovery. Traditional in vitro and in vivo models often fail to reflect the complexity of human biology, leading to poor translational outcomes and costly failures. In this context, exploratory toxicology is gaining momentum, with imaging and transcriptomics emerging as key technologies. Among these, Cell Painting and Drug-seq–based gene expression profiling offer high-throughput, cost-effective, and information-rich approaches to characterize compound-induced cellular responses. At Axxam, we are implementing an integrated strategy that combines morphological and gene expression profiling to explore how these complementary modalities capture phenotypic effects. In this talk we will introduce our pipelines and the results of tests performed on U2OS (osteosarcoma, widely used in Cell Painting) and HepaRG cells (hepatocyte-like, relevant for hepatotoxicity assessment).
Rob will discuss the significant advancements in ELaiN, an AI-driven digital assistant designed to create experiment content and perform advanced scientific searches. He will highlight the remarkable progress that has been made and how ELaiN has evolved over the past year.
Want to meet ELaiN already? Visit https://www.sapiosciences.com/blog/sapio-elain-your-science-aware-ai-lab-assistant/
This panel will examine how data platforms are designed to enable AI success in R&D and improve productivity, offering practical insights for leveraging data to accelerate discovery and drive results.
1. Overcoming silos in life sciences data ecosystems
2. Emerging tools for data ingestion, transformation, and labeling
3. Data as the foundation for the Agentic AI Lab
4. Agentic AI space across the R&D continuum
More than 980 million people worldwide suffer from mental and neurological disorders, over half of whom remain untreated—highlighting a pressing unmet medical need. Traditional drug discovery approaches often lack a deep understanding of the genetic imbalances underlying these conditions, while AI/ML-guided methods frequently rely on low-quality input data and struggle to produce results that hold up in wet lab validation.
We present a novel workflow for the rapid prioritization and further elucidation of biological mechanisms in schizophrenia. This approach uses differentially expressed genes (DEGs) derived from case–control transcriptomic studies as input and integrates them into the Dimensions Knowledge Graph by metaphacts, which transforms publicly available relational data into semantically rich data, enabling structured and context-sensitive exploration of disease-relevant genes, pathways, and interactions. To validate and functionally characterize computational insights, we leverage Systasy’s pathwayProfiler technology to assess the pathway-modulating effects of active compounds in neuronal models.
This integrated pipeline facilitates the discovery of novel drug targets by transforming typically descriptive relational data into a semantically rich knowledge graph, directly informing early-stage drug discovery through its structured and interconnected nature. Our pipeline sets the foundation for a scalable framework where the semantic layer is used to incorporate diverse omics data as inputs and integrate additional layers of biological context—including protein–protein interaction networks, pathway annotations, and compound characteristics. This layered approach enables a hypothesis-driven selection of candidate drugs through knowledge graph exploration, complemented by functional validation using Systasy’s pathwayProfiler—ultimately accelerating translational drug discovery in complex diseases such as schizophrenia.
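The hypothesis-driven selection step can be pictured as a graph traversal from DEGs through pathway annotations to candidate compounds. The fragment below is a deliberately tiny, invented illustration (the edges and compound names are not real data; gene symbols are shown only as familiar labels), using a plain adjacency map in place of a production knowledge graph.

```python
# Toy knowledge-graph fragment: gene -> pathway -> compound edges.
# All edges and compound names are invented for illustration; a real
# graph would hold curated, ontology-backed triples.
EDGES = {
    ("GRIN2A", "modulates"): ["glutamatergic signaling"],
    ("DRD2", "modulates"): ["dopaminergic signaling"],
    ("glutamatergic signaling", "targeted_by"): ["compound_A"],
    ("dopaminergic signaling", "targeted_by"): ["compound_B", "compound_C"],
}

def candidate_compounds(degs):
    """Walk gene -> pathway -> compound to propose candidates for a DEG list."""
    candidates = set()
    for gene in degs:
        for pathway in EDGES.get((gene, "modulates"), []):
            candidates.update(EDGES.get((pathway, "targeted_by"), []))
    return sorted(candidates)

hits = candidate_compounds(["DRD2", "GRIN2A"])
```

Layering additional edge types (protein–protein interactions, compound properties) onto the same traversal is what turns a descriptive graph into a hypothesis engine, with candidates then handed off to functional validation.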
Background: Large enterprises in the biopharmaceutical sector face a significant challenge in leveraging the vast amount of unstructured data within scientific publications. Integrating this evidence into internal R&D workflows is often hampered by a lack of data standardization, hindering downstream analytics and the creation of enterprise-wide knowledge graphs. The FAIR principles (Findable, Accessible, Interoperable, Reusable) provide a crucial framework for data stewardship, but applying them at scale to unstructured literary sources remains a major hurdle.
Methods: We present a novel workflow that operationalizes FAIR principles for evidence synthesis by integrating a Large Language Model (LLM)-powered platform for literature reviews, KiaKia, with enterprise data governance systems. The core of this workflow is the use of controlled vocabularies and ontologies managed by a central Data Stewardship system. This "single source of truth" is synchronized with KiaKia to guide the data extraction process. Our LLM-powered extraction engine is "grounded" to these enterprise dictionaries, ensuring that extracted data points (e.g., "progression-free survival") strictly adhere to the organization's approved terminology, thereby guaranteeing semantic interoperability. Furthermore, the system includes a robust "human-in-the-loop" governance model. When users encounter concepts not present in the dictionary, they can submit suggestions, which are adjudicated by review leads within a dedicated data cleaning module. Accepted terms are then formally reviewed by Data Stewards for inclusion in the master ontology, creating a closed-loop system that enriches the enterprise vocabulary while ensuring all downstream data remains 100% compliant.
Results: This workflow transforms unstructured text into fully compliant, analysis-ready data. At Roche, data processed by KiaKia is seamlessly ingested by downstream systems, populating Snowflake data warehouses. This interoperability is a direct result of enforcing a common language at the point of extraction. The resulting structured data powers a suite of business analytics dashboards in platforms like Tableau, enabling researchers and decision-makers to derive insights from a harmonized, high-quality evidence base.
Conclusion: By tightly coupling the power of LLMs with rigorous, human-led data governance and existing enterprise ontologies, our workflow provides a scalable and reliable solution for creating FAIR data assets from scientific literature. This methodology not only accelerates evidence synthesis but also builds a lasting, interoperable foundation for advanced analytics and the construction of comprehensive internal knowledge graphs, turning unstructured information into a strategic enterprise asset.
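The grounding-plus-adjudication loop described in the Methods can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the KiaKia implementation: matching is a normalized-string lookup rather than an LLM, and the concept IDs and function names are invented.

```python
# Sketch of dictionary-grounded extraction with a human-in-the-loop
# suggestion queue. Terms and concept IDs are illustrative only.
APPROVED_TERMS = {  # the enterprise "single source of truth" (toy version)
    "progression-free survival": "C28234",
    "overall survival": "C125201",
}

suggestion_queue = []  # terms awaiting Data Steward adjudication

def normalize(term: str) -> str:
    """Lowercase and collapse whitespace so matching is robust."""
    return " ".join(term.lower().split())

def ground(term: str):
    """Map an extracted term to approved vocabulary, or queue it for review."""
    key = normalize(term)
    if key in APPROVED_TERMS:
        return {"term": key, "concept_id": APPROVED_TERMS[key], "status": "grounded"}
    suggestion_queue.append(key)  # human-in-the-loop path
    return {"term": key, "concept_id": None, "status": "pending_review"}

ok = ground("Progression-Free  Survival")   # grounded to approved term
new = ground("duration of response")        # unknown -> queued for stewards
```

The essential property is that nothing reaches downstream systems without either resolving to the approved vocabulary or passing through the adjudication queue, which is what keeps the resulting data semantically interoperable.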
Could a SMILES string alone help us predict in vivo pharmacokinetics? Equipped with the right PhysChem property predictors and a physiologically-based model of PK, discovery and translational scientists can now look ahead to animal and human ADME from the earliest stage of compound design. Explore the existing tools and what’s on the horizon with Chara Litou, PhD, Senior PBPK Consultant at Certara.
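As background for the session, the simplest building block of any PBPK model is the one-compartment oral-dosing equation; a full PBPK model chains many such compartments with physiological parameters. The sketch below is that textbook equation only, with arbitrary illustrative parameter values, not a prediction and not Certara's implementation.

```python
import math

def oral_concentration(t_h, dose_mg=100.0, F=0.8, ka=1.0, ke=0.1, V_L=50.0):
    """Plasma concentration (mg/L) for a one-compartment oral model:
    C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)),
    with t in hours, ka/ke first-order rate constants (1/h),
    F bioavailability, V volume of distribution (L).
    Parameter defaults are arbitrary illustrations.
    """
    coeff = F * dose_mg * ka / (V_L * (ka - ke))
    return coeff * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

c0 = oral_concentration(0.0)   # nothing absorbed yet at t = 0
c2 = oral_concentration(2.0)   # concentration after 2 h
```

In the workflow the talk describes, the PhysChem predictors supply the compound-specific inputs (e.g., rate constants and distribution volumes) that parameterize equations like this one, directly from structure.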
Today’s scientists spend more time managing data than interpreting it. Agentic AI is changing that. In this session, we’ll show how agents are turning fragmented, manual work into AI-powered automations that compress timelines dramatically. You’ll see how researchers are saving time by using natural language to filter experimental data, validating notebook entries in real time, and uncovering connections across assays and studies that used to take days to piece together. You’ll also learn how scientists at leading biopharma companies are using agentic AI in Benchling to change the way they do R&D for the better.
This panel will explore how a collaborative, cross-sector approach, uniting industry leaders and academic institutions, can unlock the full potential of integrated data and federated learning to transform R&D in complex disease areas. Focusing on osteoarthritis (OA) as a case study, the discussion will highlight how these technologies can overcome long-standing barriers in clinical research, particularly in diseases with high prevalence and socio-economic burden. By embedding disease progression modeling within next-generation analytics such as digital twins, this approach offers a path toward more predictive, efficient, and outcome-driven therapeutic development – bringing new hope to millions of patients worldwide.
Large Language Models and AI agents are only as smart as the data foundation beneath them. While dashboards once stole the spotlight, today it’s about whether enterprises can deliver AI-ready data—governed, high-quality, and semantically rich—directly into platforms like Snowflake.
At Novartis, we’re re-imagining data governance as the enabler of trustworthy AI. Our federated model combines a Snowflake foundation with a business-centric catalog and lineage to create AI-ready data products that ship not just with access, but with meaning—glossaries, ontologies, synonyms, policies, and lineage that allow LLMs to reason rather than guess.
This session will show how Novartis is building a semantic intelligence layer in Snowflake to support conversational AI, text-to-SQL, and agentic workflows.
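The role a semantic layer plays in text-to-SQL can be illustrated in a few lines: a governed glossary maps business terms and their synonyms onto canonical columns, so a natural-language question is resolved against approved names before any SQL is generated. The table and column names below are invented for illustration and have nothing to do with Novartis's actual schema.

```python
# Toy semantic layer: business terms and synonyms -> governed columns.
# All table/column names are invented for illustration.
GLOSSARY = {
    "trial": "clinical_study.study_id",
    "study": "clinical_study.study_id",
    "enrollment": "clinical_study.enrolled_count",
    "patients enrolled": "clinical_study.enrolled_count",
}

def resolve_terms(question: str):
    """Return the governed columns mentioned in a natural-language question."""
    q = question.lower()
    hits = {}
    # Match longer phrases first so "patients enrolled" wins over shorter terms.
    for term in sorted(GLOSSARY, key=len, reverse=True):
        if term in q:
            hits[GLOSSARY[term]] = term
    return sorted(hits)

cols = resolve_terms("How many patients enrolled per study?")
```

An LLM that receives these resolved columns, plus the glossary definitions and lineage as context, can generate SQL against names it is certain exist - reasoning rather than guessing.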
BioTech360 is a semantic intelligence solution purpose-built for life sciences, designed to transform fragmented scientific data into a living, explainable knowledge network. Built on the SemantX platform, it integrates structured and unstructured data from lab systems, literature, public and proprietary databases, and ontologies, contextualizing information across molecular biology, chemistry, and other scientific domains.
With prebuilt data models, modular scientific applications, and extensibility across content, ontologies, and use cases, BioTech360 addresses core challenges in R&D: disconnected data, missing context, and limited AI-readiness. It enables knowledge graphs, semantic search, explainable AI, and integrated analytics, accelerating discovery, enhancing traceability, and empowering smarter decisions. BioTech360 is fully configurable and extensible, and integrates seamlessly with the broader LabVantage ecosystem, including LIMS, ELN, and analytics platforms, ensuring continuity across scientific workflows and enterprise systems.
Discover how BioTech360 unifies scientific intelligence across disciplines, making life sciences truly interoperable, contextualized, and AI-ready.