pygmtools.multi_graph_solvers.gamgm

pygmtools.multi_graph_solvers.gamgm(A, W, ns=None, n_univ=None, U0=None, sk_init_tau=0.5, sk_min_tau=0.1, sk_gamma=0.8, sk_iter=20, max_iter=100, param_lambda=1.0, verbose=False, backend=None)[source]

Graduated Assignment-based multi-graph matching solver. Graduated assignment is a classic approach for hard assignment problems like graph matching, based on graduated annealing of Sinkhorn’s temperature \(\tau\) to enforce the matching constraint.

The objective score is described as

\[\max_{\mathbf{X}_{i,j}, i,j\in [m]} \ \sum_{i,j\in [m]} \left( \lambda \ \mathrm{tr}(\mathbf{X}_{ij}^\top \mathbf{A}_{i} \mathbf{X}_{ij} \mathbf{A}_{j}) + \mathrm{tr}(\mathbf{X}_{ij}^\top \mathbf{W}_{ij})\right)\]

Once the algorithm converges at a fixed \(\tau\) value, \(\tau\) shrinks as:

\[\tau = \tau \times \gamma\]

and the iteration continues. Finally, the Hungarian algorithm is applied to ensure the result is a permutation matrix.
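To make the procedure concrete, below is a minimal pairwise (two-graph) sketch of the graduated assignment idea using the PyTorch backend. It is illustrative only: ga_two_graph is a hypothetical helper and not part of pygmtools, and the actual multi-graph solver shrinks \(\tau\) only after convergence at the current \(\tau\) and operates jointly over all graphs.

import torch
import pygmtools as pygm
pygm.BACKEND = 'pytorch'

def ga_two_graph(A1, A2, W12, param_lambda=1.0, tau=0.5, min_tau=0.1, gamma=0.8, sk_iter=20):
    # A1, A2: (n x n) adjacency matrices; W12: (n x n) node-wise similarity matrix
    n = A1.shape[0]
    X = torch.full((n, n), 1. / n)               # uniform soft initialization
    while tau >= min_tau:
        for _ in range(sk_iter):
            # (proportional to the) gradient of the Koopmans-Beckmann objective w.r.t. X
            grad = param_lambda * A1 @ X @ A2 + W12
            # softassign: project back onto the set of (soft) doubly-stochastic matrices
            X = pygm.sinkhorn(grad, tau=tau)
        tau *= gamma                             # anneal the Sinkhorn temperature
    return pygm.hungarian(X)                     # discretize to a permutation matrix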

Note

This algorithm is based on the Koopmans-Beckmann form of QAP, so you should input the adjacency matrices A and node-wise similarity matrices W instead of affinity matrices.
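For reference, the objective above can be evaluated directly from A and W. The helper below is only an illustration (kb_score is not part of the pygmtools API); X may be a ground-truth tensor or a matching result indexed as X[i, j]:

import itertools
import torch

def kb_score(X, A, W, param_lambda=1.0):
    # A: (m x n x n) adjacency matrices, W: (m x m x n x n) node-wise similarities,
    # X[i, j]: (n x n) matching matrix between graph i and graph j
    m = A.shape[0]
    score = 0.
    for i, j in itertools.product(range(m), repeat=2):
        score = score + param_lambda * torch.trace(X[i, j].T @ A[i] @ X[i, j] @ A[j])
        score = score + torch.trace(X[i, j].T @ W[i, j])
    return score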

Parameters
  • A – \((m\times n \times n)\) the adjacency matrices (\(m\): number of graphs, \(n\): maximum number of nodes). The graphs may have different numbers of nodes (specified by the ns argument).

  • W – \((m\times m \times n \times n)\) the node-wise similarity matrices, where W[i,j] is the similarity matrix between graph i and graph j

  • ns – (optional) \((m)\) the number of nodes in each graph. If not given, it will be inferred from the size of A.

  • n_univ – (optional) the size of the universe node set. If not given, it will be the largest number of nodes.

  • U0 – (optional) the initial multi-graph matching result. If not given, it will be randomly initialized.

  • sk_init_tau – (default: 0.5) initial value of \(\tau\) for the Sinkhorn algorithm

  • sk_min_tau – (default: 0.1) minimal value of \(\tau\) for the Sinkhorn algorithm

  • sk_gamma – (default: 0.8) the shrinking parameter of \(\tau\): \(\tau = \tau \times \gamma\)

  • sk_iter – (default: 20) max number of iterations for the Sinkhorn algorithm

  • max_iter – (default: 100) max number of iterations for graduated assignment

  • param_lambda – (default: 1) the weight \(\lambda\) of the quadratic term

  • verbose – (default: False) print verbose information for parameter tuning

  • backend – (default: pygmtools.BACKEND variable) the backend for computation.

Returns

the multi-graph matching result (a MultiMatchingResult object)
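Conceptually, following the common-labeling formulation of gamgm1, the result is assembled from per-graph matchings to a universe node set of size n_univ, \(\mathbf{U}_i \in \{0,1\}^{n_i \times n_{univ}}\) (this is what the U0 argument initializes), and each pairwise matching is recovered as

\[\mathbf{X}_{ij} = \mathbf{U}_i \mathbf{U}_j^\top\]

so the returned matchings are cycle-consistent (for fully matched graphs, \(\mathbf{X}_{ij}\mathbf{X}_{jk} = \mathbf{X}_{ik}\)).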

Note

Setting verbose=True may help you tune the parameters.
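For example, the number of annealing stages implied by sk_init_tau, sk_min_tau and sk_gamma can be estimated with a quick calculation (the exact stopping rule is implementation-dependent; this is only a rough tuning aid):

import math

sk_init_tau, sk_min_tau, sk_gamma = 0.5, 0.1, 0.8
# number of tau values visited before the next shrink would drop below sk_min_tau:
# 0.5, 0.4, 0.32, 0.256, ..., ~0.105  ->  8 stages with the default values
num_stages = math.floor(math.log(sk_min_tau / sk_init_tau) / math.log(sk_gamma)) + 1
print(num_stages)  # 8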

Example for the PyTorch backend:

>>> import torch
>>> import pygmtools as pygm
>>> import itertools
>>> import time
>>> pygm.BACKEND = 'pytorch'
>>> _ = torch.manual_seed(1)

# Generate 10 isomorphic graphs
>>> graph_num = 10
>>> As, X_gt, Fs = pygm.utils.generate_isomorphic_graphs(node_num=4, graph_num=10, node_feat_dim=20)

# Compute node-wise similarity by inner-product and Sinkhorn
>>> W = torch.matmul(Fs.unsqueeze(1), Fs.transpose(1, 2).unsqueeze(0))
>>> W = pygm.sinkhorn(W.reshape(graph_num ** 2, 4, 4)).reshape(graph_num, graph_num, 4, 4)

# Solve the multi-matching problem
>>> X = pygm.gamgm(As, W)
>>> matched = 0
>>> for i, j in itertools.product(range(graph_num), repeat=2):
...     matched += (X[i,j] * X_gt[i,j]).sum()
>>> matched / X_gt.sum()
tensor(1.)

# This function supports graphs with different numbers of nodes (also known as partial matching)
# In the following we ignore the last node from the last 5 graphs
>>> ns = torch.tensor([4, 4, 4, 4, 4, 3, 3, 3, 3, 3], dtype=torch.int)
>>> for i in range(graph_num):
...     As[i, ns[i]:, :] = 0
...     As[i, :, ns[i]:] = 0
>>> for i, j in itertools.product(range(graph_num), repeat=2):
...     X_gt[i, j, ns[i]:, :] = 0
...     X_gt[i, j, :, ns[j]:] = 0
...     W[i, j, ns[i]:, :] = 0
...     W[i, j, :, ns[j]:] = 0

# Partial matching is challenging and the following parameters are carefully tuned
>>> X = pygm.gamgm(As, W, ns, n_univ=4, sk_init_tau=.1, sk_min_tau=0.01, param_lambda=0.3)

# Check the partial matching result
>>> matched = 0
>>> for i, j in itertools.product(range(graph_num), repeat=2):
...     matched += (X[i,j] * X_gt[i, j, :ns[i], :ns[j]]).sum()
>>> matched / X_gt.sum()
tensor(1.)

Note

If you find this graph matching solver useful in your research, please cite:

@article{gamgm1,
  title={Graduated assignment algorithm for multiple graph matching based on a common labeling},
  author={Sol{\'e}-Ribalta, Albert and Serratosa, Francesc},
  journal={International Journal of Pattern Recognition and Artificial Intelligence},
  volume={27},
  number={01},
  pages={1350001},
  year={2013},
  publisher={World Scientific}
}

@article{gamgm2,
  title={Graduated assignment for joint multi-graph matching and clustering with application to unsupervised graph matching network learning},
  author={Wang, Runzhong and Yan, Junchi and Yang, Xiaokang},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  pages={19908--19919},
  year={2020}
}

This algorithm was originally proposed in paper gamgm1 and further improved in paper gamgm2 to fit modern computing architectures like GPUs.