pygmtools.classic_solvers.ipfp

pygmtools.classic_solvers.ipfp(K, n1=None, n2=None, n1max=None, n2max=None, x0=None, max_iter: int = 50, backend=None)[source]

Integer Projected Fixed Point (IPFP) method for graph matching (QAP).

Parameters
  • K – \((b\times n_1n_2 \times n_1n_2)\) the input affinity matrix, where \(b\) is the batch size.

  • n1 – \((b)\) number of nodes in graph1 (optional if n1max is given, and all n1 = n1max).

  • n2 – \((b)\) number of nodes in graph2 (optional if n2max is given, and all n2 = n2max).

  • n1max – \((b)\) max number of nodes in graph1 (optional if n1 is given, and n1max = max(n1)).

  • n2max – \((b)\) max number of nodes in graph2 (optional if n2 is given, and n2max = max(n2)).

  • x0 – \((b\times n_1 \times n_2)\) an initial matching solution for warm-start. If not given, x0 will be filled with \(\frac{1}{n_1 n_2}\).

  • max_iter – (default: 50) max number of iterations in IPFP. More iterations lead to a more accurate result, at the cost of increased inference time.

  • backend – (default: pygmtools.BACKEND variable) the backend for computation.

Returns

\((b\times n_1 \times n_2)\) the solved matching matrix
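A typical call, assuming an affinity matrix K and node counts n1, n2 built as in the examples below (max_iter and backend are optional and default to 50 and pygmtools.BACKEND):

>>> X = pygm.ipfp(K, n1, n2, max_iter=100)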

Note

We support batched instances with different numbers of nodes, therefore n1 and n2 are required to specify the exact number of nodes in each graph of the batch. If not specified, we assume the batched matrices are not padded.
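For illustration, here is a hedged sketch of such a padded batch in the numpy backend; the zero-padding of the smaller instance to the batch-wide maximum is an assumption made for this example:

>>> import functools
>>> import numpy as np
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'numpy'

# Two graph pairs with 3 and 4 nodes; the smaller pair is zero-padded to 4
>>> n1 = n2 = np.array([3, 4])
>>> A1 = np.random.rand(2, 4, 4)
>>> A2 = np.random.rand(2, 4, 4)
>>> A1[0, 3, :] = 0; A1[0, :, 3] = 0
>>> A2[0, 3, :] = 0; A2[0, :, 3] = 0
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.)
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, None, n2, None, edge_aff_fn=gaussian_aff)
>>> X = pygm.ipfp(K, n1, n2) # X has shape (2, 4, 4); X[0] matches within its 3x3 top-left block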

Note

This solver is non-differentiable. The output is a discrete matching matrix (i.e. permutation matrix).
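Under the hood, IPFP maximizes the quadratic objective \(\mathbf{x}^\top \mathbf{K} \mathbf{x}\) by alternating a discrete (linear assignment) projection of the gradient with a line search (Leordeanu et al., 2009). Below is a minimal, illustrative numpy sketch for a single instance, not the pygmtools implementation; the name ipfp_sketch, the row-major vectorization of the matching matrix, and the use of scipy's Hungarian solver are assumptions made for this example:

import numpy as np
from scipy.optimize import linear_sum_assignment

def ipfp_sketch(K, n1, n2, max_iter=50):
    # uniform warm start, matching the default x0 described above
    x = np.full(n1 * n2, 1. / (n1 * n2))
    for _ in range(max_iter):
        # discrete projection: permutation maximizing the linearized objective
        row, col = linear_sum_assignment(-(K @ x).reshape(n1, n2))
        b = np.zeros(n1 * n2)
        b[row * n2 + col] = 1.
        d = b - x
        c = x @ (K @ d)   # linear coefficient of the objective along d
        q = d @ (K @ d)   # quadratic coefficient of the objective along d
        if q >= 0:        # objective improves all the way: jump to b
            x_new = b
        else:             # concave along d: take the optimal bounded step
            x_new = x + min(1., -c / q) * d
        if np.allclose(x_new, x):
            break
        x = x_new
    # discretize the final solution into a permutation matrix
    row, col = linear_sum_assignment(-(K @ x).reshape(n1, n2))
    X = np.zeros((n1, n2))
    X[row, col] = 1.
    return X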

Numpy Example
>>> import numpy as np
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'numpy'
>>> np.random.seed(1)

# Generate a batch of isomorphic graphs
>>> batch_size = 10
>>> X_gt = np.zeros((batch_size, 4, 4))
>>> X_gt[:, np.arange(0, 4, dtype=np.int64), np.random.permutation(4)] = 1
>>> A1 = np.random.rand(batch_size, 4, 4)
>>> A2 = np.matmul(np.matmul(X_gt.transpose((0, 2, 1)), A1), X_gt)
>>> n1 = n2 = np.repeat([4], batch_size)

# Build affinity matrix
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.) # set affinity function
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, None, n2, None, edge_aff_fn=gaussian_aff)

# Solve by IPFP
>>> X = pygm.ipfp(K, n1, n2)
>>> X[0]
array([[0., 0., 0., 1.],
       [0., 0., 1., 0.],
       [1., 0., 0., 0.],
       [0., 1., 0., 0.]])

# Accuracy
>>> (pygm.hungarian(X) * X_gt).sum() / X_gt.sum()
1.0
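
The x0 argument accepts a warm start. For example, one may seed IPFP with the continuous solution of another solver, here pygm.rrwm (an optional pattern, shown with the same K, n1, n2 as above):

>>> X0 = pygm.rrwm(K, n1, n2) # soft matching from RRWM
>>> X = pygm.ipfp(K, n1, n2, x0=X0)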
Pytorch Example
>>> import torch
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'pytorch'
>>> _ = torch.manual_seed(1)

# Generate a batch of isomorphic graphs
>>> batch_size = 10
>>> X_gt = torch.zeros(batch_size, 4, 4)
>>> X_gt[:, torch.arange(0, 4, dtype=torch.int64), torch.randperm(4)] = 1
>>> A1 = torch.rand(batch_size, 4, 4)
>>> A2 = torch.bmm(torch.bmm(X_gt.transpose(1, 2), A1), X_gt)
>>> n1 = torch.tensor([4] * batch_size)
>>> n2 = torch.tensor([4] * batch_size)

# Build affinity matrix
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.) # set affinity function
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, None, n2, None, edge_aff_fn=gaussian_aff)

# Solve by IPFP
>>> X = pygm.ipfp(K, n1, n2)
>>> X[0]
tensor([[0., 1., 0., 0.],
        [0., 0., 0., 1.],
        [0., 0., 1., 0.],
        [1., 0., 0., 0.]])

# Accuracy
>>> (pygm.hungarian(X) * X_gt).sum() / X_gt.sum()
tensor(1.)
Paddle Example
>>> import paddle
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'paddle'
>>> _ = paddle.seed(1)

# Generate a batch of isomorphic graphs
>>> batch_size = 10
>>> X_gt = paddle.zeros((batch_size, 4, 4))
>>> X_gt[:, paddle.arange(0, 4, dtype=paddle.int64), paddle.randperm(4)] = 1
>>> A1 = paddle.rand((batch_size, 4, 4))
>>> A2 = paddle.bmm(paddle.bmm(X_gt.transpose((0, 2, 1)), A1), X_gt)
>>> n1 = paddle.to_tensor([4] * batch_size)
>>> n2 = paddle.to_tensor([4] * batch_size)

# Build affinity matrix
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.) # set affinity function
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, None, n2, None, edge_aff_fn=gaussian_aff)

# Solve by IPFP
>>> X = pygm.ipfp(K, n1, n2)
>>> X[0]
Tensor(shape=[4, 4], dtype=float32, place=Place(cpu), stop_gradient=True,
       [[0., 1., 0., 0.],
        [0., 0., 0., 1.],
        [0., 0., 1., 0.],
        [1., 0., 0., 0.]])

# Accuracy
>>> (pygm.hungarian(X) * X_gt).sum() / X_gt.sum()
Tensor(shape=[1], dtype=float32, place=Place(cpu), stop_gradient=True, [1.])
Jittor Example
>>> import jittor as jt
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'jittor'
>>> _ = jt.seed(1)

# Generate a batch of isomorphic graphs
>>> batch_size = 10
>>> X_gt = jt.zeros((batch_size, 4, 4))
>>> X_gt[:, jt.arange(0, 4, dtype=jt.int64), jt.randperm(4)] = 1
>>> A1 = jt.rand(batch_size, 4, 4)
>>> A2 = jt.bmm(jt.bmm(X_gt.transpose(1, 2), A1), X_gt)
>>> n1 = jt.Var([4] * batch_size)
>>> n2 = jt.Var([4] * batch_size)

# Build affinity matrix
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.) # set affinity function
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, None, n2, None, edge_aff_fn=gaussian_aff)

# Solve by IPFP
>>> X = pygm.ipfp(K, n1, n2)
>>> X[0]
jt.Var([[1. 0. 0. 0.]
        [0. 0. 1. 0.]
        [0. 0. 0. 1.]
        [0. 1. 0. 0.]], dtype=float32)

# Accuracy
>>> (pygm.hungarian(X) * X_gt).sum() / X_gt.sum()
jt.Var([1.], dtype=float32)

Note

If you find this graph matching solver useful in your research, please cite:

@article{ipfp,
  title={An integer projected fixed point method for graph matching and map inference},
  author={Leordeanu, Marius and Hebert, Martial and Sukthankar, Rahul},
  journal={Advances in neural information processing systems},
  volume={22},
  year={2009}
}