GNNs in Multi-omics Cancer Research & The Rise of In-Context Learning

Graph Neural Networks: Revolutionizing Multi-omics Cancer Research

Mapping Mathematical Operations to Biological Insights & Exploring the Future with In-Context Learning

The Intricate Puzzle of Cancer

Cancer is not a single disease but a constellation of related diseases characterized by significant molecular heterogeneity. This diversity drives variability in clinical presentation, progression, and response to therapy.

High-throughput multi-omics initiatives (e.g., TCGA, CPTAC) generate vast datasets spanning genomics, transcriptomics, proteomics, and more, holding immense potential for precision oncology.

However, traditional analytical methods often struggle with the relational complexity inherent in biological systems.

Genomics

Transcriptomics

Proteomics

Enter Graph Neural Networks (GNNs)

GNNs are specialized deep learning models designed to operate on graph-structured data. They represent entities (genes, proteins, patients) as nodes and their relationships (interactions, similarities) as edges.

The core mechanism is message passing, where nodes iteratively aggregate information from their neighbors, learning representations (embeddings) that capture both node features and local graph topology.
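The message-passing step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not any specific published model: each node simply averages its neighbors' features and mixes the result with its own representation; the `self_weight` parameter is an assumption for demonstration.

```python
import numpy as np

def message_passing_step(adj, features, self_weight=0.5):
    """One round of neighbor aggregation on a graph.

    adj      : (n, n) binary adjacency matrix (1 = edge between nodes)
    features : (n, d) node feature matrix
    """
    deg = adj.sum(axis=1, keepdims=True)      # neighbor counts per node
    deg[deg == 0] = 1                         # avoid division by zero for isolated nodes
    neighbor_mean = adj @ features / deg      # average of each node's neighbor features
    # mix a node's own features with the aggregated neighborhood message
    return self_weight * features + (1 - self_weight) * neighbor_mean

# Toy path graph: Gene A -- Gene B -- Protein X
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.array([[1.0], [0.0], [1.0]])
updated = message_passing_step(adj, feats)
```

After one step, the middle node's representation already reflects both of its neighbors, which is the sense in which embeddings become "context-aware".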

Model Relational Data

Explicitly captures biological interactions and similarities.

Integrate Multi-Omics

Can fuse diverse data types into a unified framework.

Performance Gains

Studies often report median improvements of 6-15% over non-graph baselines.

Nodes aggregate information from neighbors, learning context-aware representations.

(Diagram: example network with nodes Gene A, Gene B, Protein X)

The GNN Workflow in Cancer Research

1. Data Acquisition

Multi-omics, Clinical, Imaging Data

2. Graph Construction

Nodes (Genes, Patients), Edges (Interactions, Similarity)

3. GNN Model Application

GCN, GAT, GraphSAGE, etc. for learning

4. Downstream Tasks

Subtyping, Survival, Drug Response, Biomarkers

5. Insights & Outcomes

Improved Accuracy, Novel Discoveries, Personalized Medicine

Key GNN Applications in Cancer Genomics

Subtype Classification

Identifying distinct molecular subtypes for tailored treatments. GNNs integrate multi-omics or leverage interaction networks.

Models: omicsGAT, MOGONET, MCRGCN

Survival Prediction

Predicting patient outcomes (e.g., overall survival). GNNs model patient similarity or gene/pathway networks.

Models: GNN-surv, PathGNN, FGCNSurv

Drug Response

Predicting sensitivity or resistance to drugs. GNNs model drug structures, target interactions, or cell line graphs.

Models: DRPreter, XGDP, DualGCN

Biomarker Discovery

Identifying key molecular players (driver genes, predictive biomarkers) by integrating diverse omics layers.

Models: CGMega, MRNGCN, MICAH

Spotlight on GNN Architectures

GCN (Graph Convolutional Network)

Aggregates neighbor features, akin to image convolutions. Foundational for many tasks.

GAT (Graph Attention Network)

Assigns different importance (attention) to neighbors, allowing context-specific aggregation.

GraphSAGE

Inductive learning framework; samples and aggregates features from a fixed-size neighborhood.

GIN (Graph Isomorphism Network)

Theoretically powerful for distinguishing graph structures, useful for molecular graphs.

Graph Transformers

Adapt Transformer architecture for graphs, capturing global dependencies via attention.

Heterogeneous GNNs / Hypergraphs

Model diverse node/edge types or higher-order interactions beyond pairwise connections.

Mathematical Foundations: GNNs in Biology

Core GNN operations translate mathematical constructs into biologically meaningful interpretations, enabling models to learn from complex omics data.

1. k-Nearest-Neighbour (kNN) Adjacency Matrix

Defines connections based on similarity, forming patient or gene networks.

\( A_{ij}\;=\; \begin{cases} 1, & \text{if } j \in \mathrm{kNN}(i) \text{ or } i \in \mathrm{kNN}(j),\\ 0, & \text{otherwise}. \end{cases} \)

Biological Link: Connects patients with similar molecular profiles (e.g., from TCGA expression data) for subtype discovery or genes with related functions.
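A hedged sketch of how the symmetric kNN adjacency above could be built from a patient-by-feature matrix. Euclidean distance is an assumption here; correlation-based similarity is also common in practice. The rule matches the formula: an edge exists if either node is among the other's k nearest neighbors.

```python
import numpy as np

def knn_adjacency(X, k):
    """Symmetric kNN graph: A[i, j] = 1 if j in kNN(i) or i in kNN(j)."""
    n = X.shape[0]
    # pairwise squared Euclidean distances between all rows of X
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)              # a node is never its own neighbor
    A = np.zeros((n, n))
    for i in range(n):
        nearest = np.argsort(d2[i])[:k]       # indices of the k nearest neighbors of i
        A[i, nearest] = 1
    return np.maximum(A, A.T)                 # symmetrize: keep edge if either direction holds

# Toy 1-D "expression profiles" for four patients, forming two clear clusters
X = np.array([[0.0], [0.1], [5.0], [5.2]])
A = knn_adjacency(X, k=1)
```

With k=1 on this toy data, the graph splits into two connected pairs, mirroring how patient similarity graphs separate candidate subtypes.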

2. Graph Convolutional Network (GCN) Layer

Updates a node's representation by averaging features of its interacting partners.

\( \mathbf{h}_v^{(l+1)} \;=\; \sigma\!\Biggl(\sum_{u \in \tilde{\mathcal{N}}(v)} \frac{1}{\sqrt{\tilde{d}_v \tilde{d}_u}}\; \mathbf{W}^{(l)} \mathbf{h}_u^{(l)}\Biggr) \)

Biological Link: Models how a gene's function or a patient's state is influenced by its molecular or clinical neighborhood (e.g., in PPI networks or patient similarity graphs).
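The GCN update above can be written directly in NumPy: add self-loops (so \(\tilde{\mathcal{N}}(v)\) includes \(v\)), apply the symmetric degree normalization \(1/\sqrt{\tilde{d}_v \tilde{d}_u}\), multiply by the layer weights, then apply a nonlinearity. The random weights and ReLU choice are illustrative assumptions, not part of any particular model.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])          # add self-loops
    d = A_tilde.sum(axis=1)                   # degrees of the self-looped graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt # symmetric normalization 1/sqrt(d_v d_u)
    return np.maximum(A_hat @ H @ W, 0)       # ReLU stands in for sigma

rng = np.random.default_rng(0)
A = np.array([[0, 1], [1, 0]], dtype=float)  # two connected nodes
H = rng.normal(size=(2, 4))                  # input features (4-dim)
W = rng.normal(size=(4, 3))                  # learnable weights (4 -> 3)
H_next = gcn_layer(A, H, W)
```

Stacking such layers lets a gene's embedding absorb information from progressively larger neighborhoods of the PPI or similarity graph.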

3. Graph Attention Network (GAT) Layer

Assigns learnable importance (attention weights \(\alpha_{vu}\)) to neighbors during aggregation.

\( \mathbf{h}_v^{(l+1)} = \sigma\!\Biggl( \sum_{u \in \mathcal{N}(v)} \alpha_{vu} \mathbf{W}^{(l)} \mathbf{h}_u^{(l)} \Biggr) \)

Biological Link: Identifies critical interactors for a gene in a specific cancer context or influential patient similarities for subtype determination (e.g., omicsGAT).
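A single-head GAT aggregation, sketched under the common formulation where attention logits come from a shared scoring vector `a` applied to concatenated transformed features, followed by a softmax over each neighborhood. All parameters are random placeholders for illustration.

```python
import numpy as np

def gat_layer(A, H, W, a):
    """One single-head GAT layer with LeakyReLU attention logits."""
    Z = H @ W                                 # transformed features W h_u
    H_out = np.zeros_like(Z)
    for v in range(A.shape[0]):
        nbrs = np.where(A[v] > 0)[0]
        # logits e_vu = LeakyReLU(a^T [z_v || z_u]) for each neighbor u
        e = np.array([np.concatenate([Z[v], Z[u]]) @ a for u in nbrs])
        e = np.where(e > 0, e, 0.2 * e)       # LeakyReLU, slope 0.2
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()                  # softmax -> attention weights alpha_vu
        H_out[v] = (alpha[:, None] * Z[nbrs]).sum(axis=0)
    return np.maximum(H_out, 0)               # ReLU stands in for sigma

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle graph
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
a = rng.normal(size=(4,))
H_new = gat_layer(A, H, W, a)
```

Inspecting the learned `alpha` values is precisely what makes attention useful for flagging context-specific interactors.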

4. Cox Partial-Likelihood for Survival Analysis

Standard loss for training GNNs to predict patient survival outcomes, handling censored data.

\( \mathcal{L}_{\text{Cox}} = -\sum_{i: \delta_i=1} \Biggl( f(\mathbf{x}_i) - \log \sum_{j:t_j \ge t_i} \exp(f(\mathbf{x}_j)) \Biggr) \)

Biological Link: Enables GNNs (e.g., GNN-surv) to learn risk scores from omics profiles that correlate with patient survival times, aiding prognostic modeling.
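The negative Cox partial likelihood above translates directly to code, given model risk scores \(f(\mathbf{x}_i)\), event times \(t_i\), and event indicators \(\delta_i\) (1 = event observed, 0 = censored). Any differentiable model producing the scores would plug in here; the toy inputs are made up for demonstration.

```python
import numpy as np

def cox_loss(scores, times, events):
    """Negative Cox partial log-likelihood with right-censoring."""
    loss = 0.0
    for i in range(len(scores)):
        if events[i] == 1:                    # sum only over uncensored patients
            risk_set = times >= times[i]      # patients still at risk at time t_i
            loss -= scores[i] - np.log(np.exp(scores[risk_set]).sum())
    return loss

scores = np.array([1.0, 0.5, -0.2])           # model-predicted risk scores f(x_i)
times = np.array([2.0, 5.0, 7.0])             # follow-up times t_i
events = np.array([1, 0, 1])                  # delta_i: 1 = event, 0 = censored
L = cox_loss(scores, times, events)
```

Note the censored patient (index 1) contributes no term of its own but still appears in the risk set of earlier events, which is how censored data is handled without being discarded.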

GNNs in Cancer: Current Hurdles & The Road Ahead

Key Challenges

  • Graph Construction: Robustly defining nodes and edges from noisy, high-dimensional omics data remains critical.
  • Interpretability (XAI): Understanding *why* a GNN makes a prediction is crucial for clinical trust and biological discovery.
  • Scalability: Applying GNNs to massive biobanks or genome-wide interaction networks efficiently.
  • Data Heterogeneity: Effectively integrating diverse data types (multi-omics, clinical, imaging) with varying quality and scales.
  • Validation: Ensuring models generalize to new, unseen patient cohorts and diverse clinical settings.

Main Future Directions

  • Causality-Aware GNNs: Moving beyond correlation to infer causal relationships in biological networks.
  • Graph Foundation Models: Pre-training large GNNs on vast biological data for broad applicability.
  • Federated Learning: Training GNNs collaboratively across institutions without sharing sensitive patient data.
  • Dynamic & Temporal GNNs: Modeling disease progression and treatment effects over time.
  • Multi-Scale Modeling: Integrating information from molecular to tissue and patient levels.

The Next Frontier: In-Context Learning (ICL) for Adaptive GNNs

What is In-Context Learning for GNNs?

Inspired by Large Language Models (LLMs), ICL enables a pre-trained GNN to adapt to new tasks or data contexts using a few examples (a "prompt"), often without any parameter updates.

Instead of extensive retraining, the GNN leverages its learned knowledge to interpret the prompt and make predictions for the new scenario.

Why ICL in Cancer Research?

  • Rapidly adapt models to new cancer subtypes or rare diseases with limited data.
  • Personalize prognostic or treatment response models using patient-specific "prompts" (e.g., unique molecular features).
  • Accelerate drug discovery by prompting GNNs with effects of novel compounds or targets.
  • Handle evolving biological understanding and data distributions more dynamically.

Simplified ICL-GNN Workflow

📊 Pre-trained GNN (Foundation Model)
  + 📝 Graph Prompt (e.g., a few examples of a new cancer subtype or drug effect)
  → 🧠 GNN Processes Prompt + Query
  → 🎯 Adaptive Prediction for New Context
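A deliberately simplified sketch of this workflow, under the assumption that the pre-trained GNN is reduced to a frozen embedding function. Here the "prompt" is a handful of labeled example embeddings, and prediction compares the query embedding to per-class prototypes with no parameter updates; real methods such as PRODIGY instead build explicit prompt graphs. Names like `icl_predict` are hypothetical.

```python
import numpy as np

def icl_predict(support_emb, support_labels, query_emb):
    """Prototype-based in-context prediction from a frozen embedding space."""
    labels = np.array(support_labels)
    classes = sorted(set(support_labels))
    # class prototypes: mean embedding of the prompt examples for each class
    protos = {c: support_emb[labels == c].mean(axis=0) for c in classes}
    # assign the query to the nearest prototype (no weights are updated)
    dists = {c: np.linalg.norm(query_emb - p) for c, p in protos.items()}
    return min(dists, key=dists.get)

# Prompt: four patient embeddings from two hypothetical subtypes
support = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.1]])
labels = ["subtype_A", "subtype_A", "subtype_B", "subtype_B"]
pred = icl_predict(support, labels, np.array([4.9, 4.9]))
```

Swapping in a different prompt re-targets the same frozen model to a new subtype, which is the core appeal of ICL for rare cancers with few labeled cases.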

Pioneering ICL-GNN Methods

PRODIGY (Huang et al., 2023)

Pre-trains GNNs on "prompt graphs" connecting examples and queries to label nodes, enabling ICL for node/edge classification on unseen graphs.

All in One (Sun et al., 2023)

Multi-task prompting framework unifying node, edge, and graph tasks via prompt graphs and meta-learned prompt initializations.

One For All (OFA) (Liu et al., 2024)

Uses text-attributed graphs and class nodes as prompts, aiming for a single GNN for all tasks across diverse domains, including zero-shot learning.

GPPT (Sun et al., 2022)

Early method unifying pre-training and downstream node classification by reformulating classification as link prediction between nodes and prompt tokens (label nodes).

AskGNN (Hu et al., 2024)

Hybrid GNN-LLM approach where a GNN retrieves relevant graph examples to construct prompts for an LLM to perform ICL on text-attributed graphs.

GraphPrompter (Lv et al., 2025)

Focuses on optimizing the selection and generation of graph prompts via multi-stage refinement to boost ICL performance.

ICL-GNNs: Potential & Challenges

Potential Impact in Oncology:

  • Faster development of models for rare cancers or novel biomarkers.
  • More dynamic, personalized risk assessment that adapts over the patient's journey.
  • Efficient exploration of drug repurposing candidates.
  • Reduced need for extensive, task-specific labeled datasets.

Key Challenges for ICL-GNNs:

  • Defining effective and interpretable "graph prompts".
  • Scalability of processing complex prompts for large graphs.
  • Ensuring robustness and generalization across diverse cancer types and datasets.
  • Theoretical understanding of how GNNs perform ICL.

Paving the Way for Precision Oncology

GNNs have demonstrated remarkable potential in transforming cancer research by deciphering complex multi-omics data. Their ability to model relationships and integrate diverse information sources offers tangible improvements in subtype classification, survival prediction, and drug response modeling.

The advent of In-Context Learning promises to further enhance their adaptability and utility, pushing GNNs towards becoming indispensable tools in the quest for personalized and effective cancer treatments.

While challenges in interpretability, generalization, and robust graph construction remain, ongoing research is actively addressing these hurdles, bringing GNN-powered precision oncology closer to clinical reality.

Infographic synthesized from research on GNNs in Cancer Genomics and In-Context Learning (2018-2025).

Visual representation for educational purposes. Content derived from provided academic manuscripts.