pygmtools.utils.build_aff_mat

pygmtools.utils.build_aff_mat(node_feat1, edge_feat1, connectivity1, node_feat2, edge_feat2, connectivity2, n1=None, ne1=None, n2=None, ne2=None, node_aff_fn=None, edge_aff_fn=None, backend=None)[source]

Build affinity matrix for graph matching from input node/edge features. The affinity matrix encodes both node-wise and edge-wise affinities and formulates the Quadratic Assignment Problem (QAP), which is the mathematical form of graph matching.
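For reference, the affinity matrix \(\mathbf{K}\) built by this function defines the standard Lawler's QAP objective of graph matching (a common formulation; the constraint form below assumes \(n_1 \leq n_2\)):

\[
\max_{\mathbf{X}} \ \mathrm{vec}(\mathbf{X})^\top \mathbf{K}\, \mathrm{vec}(\mathbf{X}), \quad \text{s.t.} \ \mathbf{X} \in \{0, 1\}^{n_1 \times n_2}, \ \mathbf{X}\mathbf{1} = \mathbf{1}, \ \mathbf{X}^\top \mathbf{1} \leq \mathbf{1}
\]

where \(\mathbf{X}\) is the node-to-node matching matrix between the two graphs.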

Parameters
  • node_feat1 – \((b\times n_1 \times f_{node})\) the node feature of graph 1

  • edge_feat1 – \((b\times ne_1 \times f_{edge})\) the edge feature of graph 1

  • connectivity1 – \((b\times ne_1 \times 2)\) sparse connectivity information of graph 1. connectivity1[i, j, 0] is the starting node index of edge j at batch i, and connectivity1[i, j, 1] is the ending node index of edge j at batch i

  • node_feat2 – \((b\times n_2 \times f_{node})\) the node feature of graph 2

  • edge_feat2 – \((b\times ne_2 \times f_{edge})\) the edge feature of graph 2

  • connectivity2 – \((b\times ne_2 \times 2)\) sparse connectivity information of graph 2. connectivity2[i, j, 0] is the starting node index of edge j at batch i, and connectivity2[i, j, 1] is the ending node index of edge j at batch i

  • n1 – \((b)\) number of nodes in graph 1. If not given, it will be inferred from the shape of node_feat1 or the values in connectivity1

  • ne1 – \((b)\) number of edges in graph 1. If not given, it will be inferred from the shape of edge_feat1

  • n2 – \((b)\) number of nodes in graph 2. If not given, it will be inferred from the shape of node_feat2 or the values in connectivity2

  • ne2 – \((b)\) number of edges in graph 2. If not given, it will be inferred from the shape of edge_feat2

  • node_aff_fn – (default: inner_prod_aff_fn) the node affinity function with the signature node_aff_fn(2D Tensor, 2D Tensor) -> 2D Tensor, which accepts two node feature tensors and outputs the node-wise affinity tensor. See inner_prod_aff_fn() as an example.

  • edge_aff_fn – (default: inner_prod_aff_fn) the edge affinity function with the signature edge_aff_fn(2D Tensor, 2D Tensor) -> 2D Tensor, which accepts two edge feature tensors and outputs the edge-wise affinity tensor. See inner_prod_aff_fn() as an example.

  • backend – (default: pygmtools.BACKEND variable) the backend for computation.

Returns

\((b\times n_1n_2 \times n_1n_2)\) the affinity matrix
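To make the layout of the returned matrix concrete, here is an illustrative, non-batched re-implementation (not the library's actual code): node affinities fill the diagonal of \(\mathbf{K}\), and edge affinities fill the off-diagonal entries indexed by edge pairs. The pairing index idx(i, a) = i * n2 + a is an assumption made for illustration; the library's internal vectorization order may differ.

```python
import numpy as np

def build_aff_mat_sketch(node_aff, edge_aff, conn1, conn2):
    """Illustrative (non-batched) construction of the QAP affinity matrix K.

    node_aff: (n1, n2) node-to-node affinities
    edge_aff: (ne1, ne2) edge-to-edge affinities
    conn1:   (ne1, 2) edges of graph 1 as (start, end) node indices
    conn2:   (ne2, 2) edges of graph 2 as (start, end) node indices

    NOTE: the pairing index idx(i, a) = i * n2 + a is an assumption for
    illustration; the library's internal vectorization order may differ.
    """
    n1, n2 = node_aff.shape
    K = np.zeros((n1 * n2, n1 * n2))
    # Node affinities on the diagonal: matching node i -> a scores node_aff[i, a]
    for i in range(n1):
        for a in range(n2):
            K[i * n2 + a, i * n2 + a] = node_aff[i, a]
    # Edge affinities off-diagonal: matching edge (i, j) -> (a, b) scores edge_aff[e1, e2]
    for e1, (i, j) in enumerate(conn1):
        for e2, (a, b) in enumerate(conn2):
            K[i * n2 + a, j * n2 + b] = edge_aff[e1, e2]
    return K
```

This sketch shows why the result is \((n_1n_2 \times n_1n_2)\) per batch item: every pair of candidate node matches gets one entry.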

Note

This function also supports non-batched input, by ignoring all batch dimensions in the input tensors.

Note

If you want to implement your customized affinity function, make sure it respects the input & output dimensions:

  • Input feat1: \((b\times n_1 \times f)\),

  • Input feat2: \((b\times n_2 \times f)\),

  • Output: \((b\times n_1\times n_2)\).

See inner_prod_aff_fn() and gaussian_aff_fn() for examples.
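As a sketch of such a customized function, a batched cosine-similarity affinity respecting these dimensions might look like the following (a hypothetical user-defined function written in numpy; it is not shipped with pygmtools):

```python
import numpy as np

def cosine_aff_fn(feat1, feat2):
    """Hypothetical custom affinity function: batched cosine similarity.

    feat1: (b, n1, f), feat2: (b, n2, f)  ->  (b, n1, n2)
    Not part of pygmtools; shown only to illustrate the required shapes.
    """
    # Normalize features along the feature dimension (eps avoids division by zero)
    f1 = feat1 / (np.linalg.norm(feat1, axis=-1, keepdims=True) + 1e-8)
    f2 = feat2 / (np.linalg.norm(feat2, axis=-1, keepdims=True) + 1e-8)
    # Batched inner product of the normalized features: (b, n1, f) @ (b, f, n2)
    return np.matmul(f1, f2.transpose(0, 2, 1))
```

It could then be passed to build_aff_mat as node_aff_fn=cosine_aff_fn or edge_aff_fn=cosine_aff_fn in place of the default inner-product affinity.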

Numpy Example
>>> import numpy as np
>>> import pygmtools as pygm
>>> pygm.set_backend('numpy')

# Generate a batch of graphs
>>> batch_size = 10
>>> A1 = np.random.rand(batch_size, 4, 4)
>>> A2 = np.random.rand(batch_size, 4, 4)
>>> n1 = n2 = np.repeat([4], batch_size)

# Build affinity matrix by the default inner-product function
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, ne1, n2, ne2)

# Build affinity matrix by gaussian kernel
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.)
>>> K2 = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, ne1, n2, ne2, edge_aff_fn=gaussian_aff)

# Build affinity matrix based on node features
>>> F1 = np.random.rand(batch_size, 4, 10)
>>> F2 = np.random.rand(batch_size, 4, 10)
>>> K3 = pygm.utils.build_aff_mat(F1, edge1, conn1, F2, edge2, conn2, n1, ne1, n2, ne2, edge_aff_fn=gaussian_aff)

# The affinity matrices K, K2, K3 can be further processed by GM solvers
Pytorch Example
>>> import torch
>>> import pygmtools as pygm
>>> pygm.set_backend('pytorch')

# Generate a batch of graphs
>>> batch_size = 10
>>> A1 = torch.rand(batch_size, 4, 4)
>>> A2 = torch.rand(batch_size, 4, 4)
>>> n1 = n2 = torch.tensor([4] * batch_size)

# Build affinity matrix by the default inner-product function
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, ne1, n2, ne2)

# Build affinity matrix by gaussian kernel
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.)
>>> K2 = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, ne1, n2, ne2, edge_aff_fn=gaussian_aff)

# Build affinity matrix based on node features
>>> F1 = torch.rand(batch_size, 4, 10)
>>> F2 = torch.rand(batch_size, 4, 10)
>>> K3 = pygm.utils.build_aff_mat(F1, edge1, conn1, F2, edge2, conn2, n1, ne1, n2, ne2, edge_aff_fn=gaussian_aff)

# The affinity matrices K, K2, K3 can be further processed by GM solvers
Paddle Example
>>> import paddle
>>> import pygmtools as pygm
>>> pygm.set_backend('paddle')

# Generate a batch of graphs
>>> batch_size = 10
>>> A1 = paddle.rand((batch_size, 4, 4))
>>> A2 = paddle.rand((batch_size, 4, 4))
>>> n1 = n2 = paddle.to_tensor([4] * batch_size)

# Build affinity matrix by the default inner-product function
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, ne1, n2, ne2)

# Build affinity matrix by gaussian kernel
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.)
>>> K2 = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, ne1, n2, ne2, edge_aff_fn=gaussian_aff)

# Build affinity matrix based on node features
>>> F1 = paddle.rand((batch_size, 4, 10))
>>> F2 = paddle.rand((batch_size, 4, 10))
>>> K3 = pygm.utils.build_aff_mat(F1, edge1, conn1, F2, edge2, conn2, n1, ne1, n2, ne2, edge_aff_fn=gaussian_aff)

# The affinity matrices K, K2, K3 can be further processed by GM solvers

Mindspore Example
>>> import mindspore
>>> import pygmtools as pygm
>>> pygm.set_backend('mindspore')

# Generate a batch of graphs
>>> batch_size = 10
>>> A1 = mindspore.numpy.rand((batch_size, 4, 4))
>>> A2 = mindspore.numpy.rand((batch_size, 4, 4))
>>> n1 = n2 = mindspore.Tensor([4] * batch_size)

# Build affinity matrix by the default inner-product function
>>> conn1, edge1, ne1 = pygm.utils.dense_to_sparse(A1)
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> K = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, ne1, n2, ne2)

# Build affinity matrix by gaussian kernel
>>> import functools
>>> gaussian_aff = functools.partial(pygm.utils.gaussian_aff_fn, sigma=1.)
>>> K2 = pygm.utils.build_aff_mat(None, edge1, conn1, None, edge2, conn2, n1, ne1, n2, ne2, edge_aff_fn=gaussian_aff)

# Build affinity matrix based on node features
>>> F1 = mindspore.numpy.rand((batch_size, 4, 10))
>>> F2 = mindspore.numpy.rand((batch_size, 4, 10))
>>> K3 = pygm.utils.build_aff_mat(F1, edge1, conn1, F2, edge2, conn2, n1, ne1, n2, ne2, edge_aff_fn=gaussian_aff)

# The affinity matrices K, K2, K3 can be further processed by GM solvers