# pygmtools.classic_solvers.hungarian

pygmtools.classic_solvers.hungarian(s, n1=None, n2=None, nproc: int = 1, backend=None)

Solve the optimal LAP permutation by the Hungarian algorithm. The time cost is $$O(n^3)$$.

Parameters
• s – $$(b\times n_1 \times n_2)$$ input 3d tensor. $$b$$: batch size. Non-batched input is also supported if s is of size $$(n_1 \times n_2)$$

• n1 – $$(b)$$ (optional) number of objects in dimension 1

• n2 – $$(b)$$ (optional) number of objects in dimension 2

• nproc – (default: 1, i.e. no parallelization) number of parallel processes

• backend – (default: pygmtools.BACKEND variable) the backend for computation.

Returns

$$(b\times n_1 \times n_2)$$ optimal permutation matrix

Note

The parallelization is based on multi-processing workers that run on multiple CPU cores.

Note

For all backends, scipy.optimize.linear_sum_assignment is called to solve the LAP; the computation is therefore based on numpy and scipy. The backend argument of this function only affects the input-output data type.
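Concretely, for a plain numpy input the call reduces to scipy's linear_sum_assignment in maximize mode. A sketch of that underlying computation (not the library's exact code), on the same seeded random matrix used in the numpy example:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

np.random.seed(0)
s = np.random.rand(5, 5)

# pygm.hungarian maximizes the total matched similarity
row, col = linear_sum_assignment(s, maximize=True)

# assemble the 0/1 permutation matrix from the assignment indices
x = np.zeros_like(s)
x[row, col] = 1.0
```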

Note

We support batched instances with different numbers of nodes, therefore n1 and n2 are required to specify the exact number of objects in each dimension of the batch. If not specified, we assume the batched matrices are not padded.
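For instance, two problems of different sizes can be zero-padded into one batch, with n1 and n2 recording the true sizes. A short sketch with made-up data (the resulting s, n1, n2 are what you would pass to pygm.hungarian(s, n1, n2)):

```python
import numpy as np

np.random.seed(0)
# two similarity matrices of different sizes: 3x3 and 5x5 (toy data)
mats = [np.random.rand(3, 3), np.random.rand(5, 5)]

# zero-pad every matrix to the largest size so they stack into one batch
n_max = max(m.shape[0] for m in mats)
s = np.zeros((len(mats), n_max, n_max))
for i, m in enumerate(mats):
    s[i, :m.shape[0], :m.shape[1]] = m

# n1 and n2 record the true (unpadded) size of each instance
n1 = n2 = np.array([m.shape[0] for m in mats])
```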

Example for numpy backend:

>>> import numpy as np
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'numpy'
>>> np.random.seed(0)

# 2-dimensional (non-batched) input
>>> s_2d = np.random.rand(5, 5)
>>> s_2d
array([[0.5488135 , 0.71518937, 0.60276338, 0.54488318, 0.4236548 ],
       [0.64589411, 0.43758721, 0.891773  , 0.96366276, 0.38344152],
       [0.79172504, 0.52889492, 0.56804456, 0.92559664, 0.07103606],
       [0.0871293 , 0.0202184 , 0.83261985, 0.77815675, 0.87001215],
       [0.97861834, 0.79915856, 0.46147936, 0.78052918, 0.11827443]])

>>> x = pygm.hungarian(s_2d)
>>> x
array([[0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.],
       [1., 0., 0., 0., 0.]])

# 3-dimensional (batched) input
>>> s_3d = np.random.rand(3, 5, 5)
>>> n1 = n2 = np.array([3, 4, 5])
>>> x = pygm.hungarian(s_3d, n1, n2)
>>> x
array([[[0., 0., 1., 0., 0.],
        [0., 1., 0., 0., 0.],
        [1., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.]],

       [[1., 0., 0., 0., 0.],
        [0., 1., 0., 0., 0.],
        [0., 0., 1., 0., 0.],
        [0., 0., 0., 1., 0.],
        [0., 0., 0., 0., 0.]],

       [[0., 0., 1., 0., 0.],
        [1., 0., 0., 0., 0.],
        [0., 0., 0., 0., 1.],
        [0., 1., 0., 0., 0.],
        [0., 0., 0., 1., 0.]]])


Example for PyTorch backend:

>>> import torch
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'pytorch'

# 2-dimensional (non-batched) input
>>> s_2d = torch.from_numpy(s_2d)
>>> s_2d
tensor([[0.5488, 0.7152, 0.6028, 0.5449, 0.4237],
        [0.6459, 0.4376, 0.8918, 0.9637, 0.3834],
        [0.7917, 0.5289, 0.5680, 0.9256, 0.0710],
        [0.0871, 0.0202, 0.8326, 0.7782, 0.8700],
        [0.9786, 0.7992, 0.4615, 0.7805, 0.1183]], dtype=torch.float64)
>>> x = pygm.hungarian(s_2d)
>>> x
tensor([[0., 1., 0., 0., 0.],
        [0., 0., 1., 0., 0.],
        [0., 0., 0., 1., 0.],
        [0., 0., 0., 0., 1.],
        [1., 0., 0., 0., 0.]], dtype=torch.float64)

# 3-dimensional (batched) input
>>> s_3d = torch.from_numpy(s_3d)
>>> n1 = n2 = torch.tensor([3, 4, 5])
>>> x = pygm.hungarian(s_3d, n1, n2)
>>> x
tensor([[[0., 0., 1., 0., 0.],
         [0., 1., 0., 0., 0.],
         [1., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0.]],

        [[1., 0., 0., 0., 0.],
         [0., 1., 0., 0., 0.],
         [0., 0., 1., 0., 0.],
         [0., 0., 0., 1., 0.],
         [0., 0., 0., 0., 0.]],

        [[0., 0., 1., 0., 0.],
         [1., 0., 0., 0., 0.],
         [0., 0., 0., 0., 1.],
         [0., 1., 0., 0., 0.],
         [0., 0., 0., 1., 0.]]], dtype=torch.float64)


Note

If you find this graph matching solver useful for your research, please cite:

@article{hungarian,
  title={Algorithms for the assignment and transportation problems},
  author={Munkres, James},
  journal={Journal of the Society for Industrial and Applied Mathematics},
  volume={5},
  number={1},
  pages={32--38},
  year={1957},
  publisher={SIAM}
}