pygmtools.classic_solvers.rrwm

pygmtools.classic_solvers.rrwm(K, n1=None, n2=None, n1max=None, n2max=None, x0=None, max_iter: int = 50, sk_iter: int = 20, alpha: float = 0.2, beta: float = 30, backend=None)[source]

Reweighted Random Walk Matching (RRWM) solver for graph matching (QAP). This algorithm is implemented by power iteration with Sinkhorn reweighted jumps.

The official Matlab implementation is available at https://cv.snu.ac.kr/research/~RRWM/
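
The core update alternates a random-walk (power-iteration) step on the affinity matrix with a Sinkhorn-reweighted jump. The following single-instance NumPy sketch illustrates this idea only; it is not the library's implementation, and the function name rrwm_sketch and the row-major vectorization of K are assumptions made for illustration.

import numpy as np

def rrwm_sketch(K, n1, n2, alpha=0.2, beta=30, max_iter=50, sk_iter=20):
    # K: (n1*n2, n1*n2) affinity matrix; candidate match (i, a) is assumed to sit
    # at index i * n2 + a (the library may use a different vectorization order).
    v = np.full(n1 * n2, 1. / (n1 * n2))              # uniform warm start (like the default x0)
    for _ in range(max_iter):
        last_v = v
        v = K @ v                                     # random-walk / power-iteration step
        v = v / np.abs(v).sum()                       # l1-normalize the walk distribution
        s = np.exp(beta * v / v.max()).reshape(n1, n2)
        for _ in range(sk_iter):                      # Sinkhorn normalization of the jump
            s = s / s.sum(axis=1, keepdims=True)
            s = s / s.sum(axis=0, keepdims=True)
        v = alpha * s.reshape(-1) + (1 - alpha) * v   # mix the reweighted jump with the walk
        v = v / v.sum()
        if np.linalg.norm(v - last_v) < 1e-5:         # stop once the walk has converged
            break
    return v.reshape(n1, n2)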

Parameters
  • K – \((b\times n_1n_2 \times n_1n_2)\) the input affinity matrix, \(b\): batch size.

  • n1 – \((b)\) number of nodes in graph1 (optional if n1max is given, and all n1=n1max).

  • n2 – \((b)\) number of nodes in graph2 (optional if n2max is given, and all n2=n2max).

  • n1max – \((b)\) max number of nodes in graph1 (optional if n1 is given, and n1max=max(n1)).

  • n2max – \((b)\) max number of nodes in graph2 (optional if n2 is given, and n2max=max(n2)).

  • x0 – \((b\times n_1 \times n_2)\) an initial matching solution for warm-start. If not given, x0 will be filled with \(\frac{1}{n_1 n_2}\).

  • max_iter – (default: 50) max number of iterations (i.e. number of random walk steps) in RRWM. More iterations lead to a more accurate result, at the cost of increased inference time.

  • sk_iter – (default: 20) max number of Sinkhorn iterations. More iterations lead to a more accurate result, at the cost of increased inference time.

  • alpha – (default: 0.2) the parameter controlling the importance of the reweighted jump. alpha should lie between 0 and 1. If alpha=0, it means no reweighted jump; if alpha=1, the reweighted jump provides all information.

  • beta – (default: 30) the temperature parameter of exponential function before the Sinkhorn operator. beta should be larger than 0. A larger beta means more confidence in the jump. A larger beta will usually require a larger sk_iter.

  • backend – (default: pygmtools.BACKEND variable) the backend for computation.

Returns

\((b\times n_1 \times n_2)\) the solved matching matrix

Note

We support batched instances with different numbers of nodes, therefore n1 and n2 are required to specify the exact number of nodes of each graph in the batch. If not specified, we assume the batched matrices are not padded.
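
As an illustration (not one of the official examples), a padded numpy-backend batch holding a 3-node and a 4-node instance could be built and solved roughly as follows; the zero-padding convention and the self-matching setup are assumptions made for this sketch:

import functools
import numpy as np
import pygmtools as pygm
pygm.BACKEND = 'numpy'

n1 = n2 = np.array([3, 4])                     # per-instance node counts
A1 = np.random.rand(2, 4, 4)                   # batch padded to n1max = n2max = 4
A1[0, 3:, :] = 0                               # zero out the padded row of the 3-node graph
A1[0, :, 3:] = 0                               # zero out the padded column of the 3-node graph
A2 = A1.copy()                                 # match each graph against itself for illustration
conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.)
K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, ne1, n2, ne2, edge_aff_fn=gaussian_aff)
X = pygm.rrwm(K, n1, n2)                       # X: (2, 4, 4); entries beyond n1[i], n2[i] lie in the padding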

Note

This solver is differentiable and supports gradient back-propagation.

Warning

The solver’s output is normalized with a sum of 1, which is in line with the original implementation. If a doubly-stochastic matrix is required, please call sinkhorn() after this. If a discrete permutation matrix is required, please call hungarian(). Note that the Hungarian algorithm will truncate the gradient.
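
Continuing from any of the examples below (so that X is the matrix returned by pygm.rrwm), a minimal post-processing sketch with the numpy backend would be:

X_ds = pygm.sinkhorn(X)     # relax to a (near) doubly-stochastic matrix
X_dis = pygm.hungarian(X)   # discretize to a 0/1 permutation matrix (the gradient is truncated here)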

Numpy Example
>>> import numpy as np
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'numpy'
>>> np.random.seed(1)

# Generate a batch of isomorphic graphs
>>> batch_size = 10
>>> X_gt = np.zeros((batch_size, 4, 4))
>>> X_gt[:, np.arange(0, 4, dtype=np.int64), np.random.permutation(4)] = 1
>>> A1 = np.random.rand(batch_size, 4, 4)
>>> A2 = np.matmul(np.matmul(X_gt.transpose((0, 2, 1)), A1), X_gt)
>>> n1 = n2 = np.repeat([4], batch_size)

# Build affinity matrix
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.) # set affinity function
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, None, n2, None, edge_aff_fn=gaussian_aff)

# Solve by RRWM. Note that X is normalized with a sum of 1
>>> X = pygm.rrwm(K, n1, n2, beta=100)
>>> X.sum(axis=(1, 2))
array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])

# Accuracy
>>> (pygm.hungarian(X) * X_gt).sum() / X_gt.sum()
1.0
Pytorch Example
>>> import torch
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'pytorch'
>>> _ = torch.manual_seed(1)

# Generate a batch of isomorphic graphs
>>> batch_size = 10
>>> X_gt = torch.zeros(batch_size, 4, 4)
>>> X_gt[:, torch.arange(0, 4, dtype=torch.int64), torch.randperm(4)] = 1
>>> A1 = torch.rand(batch_size, 4, 4)
>>> A2 = torch.bmm(torch.bmm(X_gt.transpose(1, 2), A1), X_gt)
>>> n1 = n2 = torch.tensor([4] * batch_size)

# Build affinity matrix
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.) # set affinity function
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, None, n2, None, edge_aff_fn=gaussian_aff)

# Solve by RRWM. Note that X is normalized with a sum of 1
>>> X = pygm.rrwm(K, n1, n2, beta=100)
>>> X.sum(dim=(1, 2))
tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
        1.0000])

# Accuracy
>>> (pygm.hungarian(X) * X_gt).sum() / X_gt.sum()
tensor(1.)

# This solver supports gradient back-propagation
>>> K = K.requires_grad_(True)
>>> pygm.rrwm(K, n1, n2, beta=100).sum().backward()
>>> len(torch.nonzero(K.grad))
272
Paddle Example
>>> import paddle
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'paddle'
>>> _ = paddle.seed(1)

# Generate a batch of isomorphic graphs
>>> batch_size = 10
>>> X_gt = paddle.zeros((batch_size, 4, 4))
>>> X_gt[:, paddle.arange(0, 4, dtype=paddle.int64), paddle.randperm(4)] = 1
>>> A1 = paddle.rand((batch_size, 4, 4))
>>> A2 = paddle.bmm(paddle.bmm(X_gt.transpose((0, 2, 1)), A1), X_gt)
>>> n1 = n2 = paddle.to_tensor([4] * batch_size)

# Build affinity matrix
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.) # set affinity function
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, None, n2, None, edge_aff_fn=gaussian_aff)

# Solve by RRWM. Note that X is normalized with a sum of 1
>>> X = pygm.rrwm(K, n1, n2, beta=100)
>>> X.sum(axis=(1, 2))
Tensor(shape=[10], dtype=float32, place=Place(cpu), stop_gradient=True,
        [0.99999988, 0.99999988, 0.99999994, 0.99999994, 1.        , 
         1.        , 1.        , 1.00000012, 1.00000012, 1.        ])

# Accuracy
>>> (pygm.hungarian(X) * X_gt).sum() / X_gt.sum()
Tensor(shape=[1], dtype=float32, place=Place(cpu), stop_gradient=True, [1.])

# This solver supports gradient back-propagation
>>> K.stop_gradient = False
>>> pygm.rrwm(K, n1, n2, beta=100).sum().backward()
>>> len(paddle.nonzero(K.grad))
544
Jittor Example
>>> import jittor as jt
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'jittor'
>>> _ = jt.seed(1)


# Generate a batch of isomorphic graphs
>>> batch_size = 10
>>> X_gt = jt.zeros((batch_size, 4, 4))
>>> X_gt[:, jt.arange(0, 4, dtype=jt.int64), jt.randperm(4)] = 1
>>> A1 = jt.rand(batch_size, 4, 4)
>>> A2 = jt.bmm(jt.bmm(X_gt.transpose(1, 2), A1), X_gt)
>>> n1 = n2 = jt.Var([4] * batch_size)

# Build affinity matrix
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.) # set affinity function
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, None, n2, None, edge_aff_fn=gaussian_aff)

# Solve by RRWM. Note that X is normalized with a sum of 1
>>> X = pygm.rrwm(K, n1, n2, beta=100)
>>> X.sum(dims=(1, 2))
jt.Var([1.         1.0000001  1.         0.99999976 1.         
        1.         1.         1.0000001  0.99999994 1.        ], dtype=float32)

# Accuracy
>>> (pygm.hungarian(X) * X_gt).sum() / X_gt.sum()
jt.Var([1.], dtype=float32)

Note

If you find this graph matching solver useful in your research, please cite:

@inproceedings{rrwm,
  title={Reweighted random walks for graph matching},
  author={Cho, Minsu and Lee, Jungmin and Lee, Kyoung Mu},
  booktitle={European Conference on Computer Vision},
  pages={492--505},
  year={2010},
  organization={Springer}
}