pygmtools.utils.dense_to_sparse

pygmtools.utils.dense_to_sparse(dense_adj, backend=None)[source]

Convert a dense connectivity/adjacency matrix to a sparse connectivity/adjacency matrix and an edge weight tensor.

Parameters
  • dense_adj – \((b\times n\times n)\) the dense adjacency matrix. This function also supports non-batched input, where the batch dimension b is ignored.

  • backend – (default: pygmtools.BACKEND variable) the backend for computation.

Returns

\((b\times ne\times 2)\) sparse connectivity matrix, \((b\times ne\times 1)\) edge weight tensor, \((b)\) number of edges
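
Conceptually, the conversion pairs the indices of every non-zero entry with its value. Below is a minimal plain-numpy sketch of that layout for a single small graph; the (row, col) ordering inside each index pair is an assumption made for illustration, and the function's actual output is authoritative:

>>> import numpy as np
>>> A = np.array([[0. , 0.3, 0. ],
...               [0.5, 0. , 0.2],
...               [0. , 0. , 0. ]])
>>> rows, cols = np.nonzero(A)               # indices of the three non-zero entries, i.e. the edges
>>> np.stack([rows, cols], axis=-1)          # assumed connectivity layout: one (row, col) pair per edge
array([[0, 1],
       [1, 0],
       [1, 2]])
>>> A[rows, cols].reshape(-1, 1)             # the matching edge weight tensor, one weight per edge
array([[0.3],
       [0.5],
       [0.2]])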

Example for numpy backend:

>>> import numpy as np
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'numpy'
>>> np.random.seed(0)

>>> batch_size = 10
>>> A = np.random.rand(batch_size, 4, 4)
>>> A[:, np.arange(4), np.arange(4)] = 0 # zero the diagonal so self-loops do not become edges
>>> A.shape
(10, 4, 4)

>>> conn, edge, ne = pygm.utils.dense_to_sparse(A)
>>> conn.shape # connectivity: (batch x num_edge x 2)
(10, 12, 2)

>>> edge.shape # edge feature (batch x num_edge x feature_dim)
(10, 12, 1)

>>> ne
[12, 12, 12, 12, 12, 12, 12, 12, 12, 12]
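
As a quick sanity check, the dense matrix can be rebuilt from the three outputs. This is a small sketch that assumes each row of conn is a (row, col) index pair into A; if the layout differs, the two columns simply need to be swapped:

>>> A_rebuilt = np.zeros_like(A)
>>> for b in range(batch_size):
...     r, c = conn[b, :ne[b], 0], conn[b, :ne[b], 1]  # assumed (row, col) endpoints of each edge
...     A_rebuilt[b, r, c] = edge[b, :ne[b], 0]        # scatter the edge weights back into place
>>> assert np.allclose(A, A_rebuilt)  # holds if the assumed ordering matches the actual output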

Example for PyTorch backend:

>>> import torch
>>> import pygmtools as pygm
>>> pygm.BACKEND = 'pytorch'
>>> _ = torch.manual_seed(0)

>>> batch_size = 10
>>> A = torch.rand(batch_size, 4, 4)
>>> torch.diagonal(A, dim1=1, dim2=2)[:] = 0 # zero the diagonal so self-loops do not become edges
>>> A.shape
torch.Size([10, 4, 4])

>>> conn, edge, ne = pygm.utils.dense_to_sparse(A)
>>> conn.shape # connectivity: (batch x num_edge x 2)
torch.Size([10, 12, 2])

>>> edge.shape # edge feature (batch x num_edge x feature_dim)
torch.Size([10, 12, 1])

>>> ne
tensor([12, 12, 12, 12, 12, 12, 12, 12, 12, 12])
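
The sparse outputs are what the affinity-building utilities in pygmtools consume. The lines below only sketch that downstream step for two randomly generated graphs; the argument order of pygm.utils.build_aff_mat follows that function's own documentation, which should be consulted for the exact signature:

>>> A2 = torch.rand(batch_size, 4, 4)
>>> torch.diagonal(A2, dim1=1, dim2=2)[:] = 0  # zero the diagonal of the second graph as well
>>> conn2, edge2, ne2 = pygm.utils.dense_to_sparse(A2)
>>> n1 = n2 = torch.tensor([4] * batch_size)   # number of nodes in each graph of the two batches
>>> K = pygm.utils.build_aff_mat(None, edge, conn, None, edge2, conn2, n1, ne, n2, ne2)
>>> # K is the (batch x n1*n2 x n1*n2) affinity matrix taken as input by the graph matching solvers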