Create or manipulate sparse matrices using the RSB format provided by librsb, almost as you do with sparse.
If A is a full matrix, convert it to a sparse matrix representation, removing all zero values.
If A is a sparse matrix, convert it to the RSB sparse matrix representation.
Given the integer index vectors I and J, and a 1-by-nnz vector of real or complex values SV, construct the sparse matrix S(I(K),J(K)) = SV(K) with overall dimensions M and N.
The argument NZMAX is ignored but accepted for compatibility with MATLAB and sparse.
If M or N are not specified, their values are derived from the maximum indices in the vectors I and J, as M = max (I), N = max (J).
Can load a matrix from a Matrix Market matrix file named MTXFILENAME. The optional argument MTXTYPESTRING can specify either real ("D") or complex ("Z") type; default is real.
If MTXFILENAME is "?", a string listing the available numerical types (as BLAS-style characters) will be returned. If the file turns out to contain a Matrix Market dense vector, this will be loaded.
If "save" is specified, saves the sparse matrix as a Matrix Market matrix file named MTXFILENAME.
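A minimal load/save round trip can be sketched as follows (the filename "A.mtx" is illustrative; any writable path works):

A = sparsersb ([1,2], [1,2], [1,2], 2, 2);  # build a small test matrix
sparsersb (A, "save", "A.mtx");             # write it as a Matrix Market file
B = sparsersb ("A.mtx");                    # load it back (real type by default)
# B = sparsersb ("A.mtx", "Z");             # would load it as complex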
Note: if multiple values are specified with the same I, J indices, the corresponding values in SV will be added.
The following are all equivalent:

S = sparsersb (I, J, SV, M, N)
S = sparsersb (I, J, SV, M, N, "summation")
S = sparsersb (I, J, SV, M, N, "sum")
If the optional "unique" keyword is specified instead, then if more than one value is specified for the same I, J indices, only the last value will be used.
If the optional "symmetric" or "sym" keyword follows, then the input will be considered as one triangle of a symmetric matrix.
If the optional "hermitian" or "her" keyword follows, then the input will be considered as one triangle of a Hermitian matrix.
If the optional "general" or "gen" keyword follows, then no symmetry hint is given.
sparsersb (M, N) will create an empty M-by-N sparse matrix and is equivalent to sparsersb ([], [], [], M, N).
If M or N are not specified, then M = max (I), N = max (J).
If OPN is a string representing a valid librsb option name and OPV is a string representing a valid librsb option value, these will be passed to the rsb_lib_set_opt_str() function.
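For instance, a librsb verbosity option might be set as follows (the option name here is illustrative; consult the librsb documentation for the exact set of options rsb_lib_set_opt_str() accepts):

sparsersb ("RSB_IO_WANT_VERBOSE_TUNING", "1")  # assumed option name; see librsb docs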
If MIF is a string specifying a valid librsb matrix info string (valid for librsb's rsb_mtx_get_info_from_string()), then the corresponding value for matrix S will be returned in string V. If MIF is an empty string (""), matrix structure information will be returned. As of librsb-1.2, this is debug or internal information. E.g. for "RSB_MIF_LEAVES_COUNT__TO__RSB_BLK_INDEX_T", a string with the count of internal RSB blocks will be returned.
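For example, the internal RSB block count can be queried as in the autotuning demo further below (note that the value comes back as a string):

A = sparsersb (sprand (1000, 1000, 0.01));
v = sparsersb (A, "get", "RSB_MIF_LEAVES_COUNT__TO__RSB_BLK_INDEX_T");
nblocks = str2num (v)   # convert the returned string to a number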
If S is a sparsersb matrix and QS is a string, QS shall be interpreted as a query string about matrix S. String V will be returned with the query results.
Note: this feature is to be completed and its syntax reserved for future use. In this version, whatever the value of QS, a general matrix information string will be returned (like sparsersb (S, "get", "RSB_MIF_LEAVES_COUNT__TO__RSB_BLK_INDEX_T")).
If S is a sparsersb matrix, the "render" keyword is specified, and FILENAME is a string, S will be rendered as an Encapsulated PostScript file FILENAME. Optionally, width and height can be specified in RWIDTH and RHEIGHT. Defaults are 512 for both.
If S is a sparsersb matrix and the "autotune" keyword is specified, autotuning of the matrix will take place, with SpMV and autotuning parameters. Parameters following the "autotune" string are optional. Parameter TRANSA specifies whether to tune for untransposed ("N"), transposed ("T"), or conjugate transposed ("C") operation; NRHS the number of right-hand sides; MAXR the number of tuning rounds; TMAX the threads to use. If an output argument O is given, it will be assigned the autotuned matrix, and the input matrix S will remain unchanged. See the librsb documentation for rsb_tune_spmm to learn more.
Long (64-bit) index support is partial: if Octave has been configured for 64-bit indices, sparsersb will correctly handle and convert matrices/indices that would fit in a 32-bit index setup, failing on 'larger' ones.
Note: sparsersb variables behave just as full or sparse variables for most operators.
However, binary sparse matrix - sparse matrix operators involving symmetric sparsersb matrices are not completely supported and may give unexpected results.
Note: Multiplication of a sparsersb variable by a sparse one (or the other way round) will expand sparsersb's symmetry because of conversion to sparse.
Multiplication of two sparsersb variables will not undergo any conversion or symmetry expansion (which might come as unexpected).
Note: Summation of a sparsersb variable with a sparse one (or the other way round) will expand sparsersb's symmetry because of conversion to sparse.
Summation of two sparsersb variables will not undergo any conversion or symmetry expansion (which might come as unexpected).
Note: Accessing a symmetric or Hermitian sparsersb variable at indices falling in the empty triangle will return a zero.
Accessing via (:,:) will imply symmetry/hermitianness expansion and conversion to sparse.
See also: sparse, full, nnz, rows, columns, tril, triu, istril, istriu, issparse, iscomplex, isreal, issymmetric, ishermitian.
The following code
disp("'sparsersb' behaves pretty like 'sparse':")
R=(rand(3)>.6)
A_octave=sparse(R)
A_librsb=sparsersb(R)
Produces the following output
'sparsersb' behaves pretty like 'sparse':
R =

  1  0  0
  0  0  0
  0  1  0

A_octave =

Compressed Column Sparse (rows = 3, cols = 3, nnz = 2 [22%])

  (1, 1) -> 1
  (3, 2) -> 1

A_librsb =

Recursive Sparse Blocks (rows = 3, cols = 3, nnz = 2, symm = U [22%])

  (1, 1) -> 1
  (3, 2) -> 1
The following code
disp("The interface of 'sparsersb' is almost like the one of 'sparse'")
disp("Create a 1x1 matrix:")
sparsersb([2])
disp("Create a 2x1 matrix:")
sparsersb([1,2],[1,1],[11,21])
disp("Create a 2x2 matrix:")
sparsersb([1,2],[1,1],[11,21],2,2)
disp("Create a 2x2 lower triangular matrix:")
sparsersb([1,2,2 ],[1,1,2 ],[11,21, 22],2,2)
Produces the following output
The interface of 'sparsersb' is almost like the one of 'sparse'
Create a 1x1 matrix:
ans =

Recursive Sparse Blocks (rows = 1, cols = 1, nnz = 1, symm = S [100%])

  (1, 1) -> 2

Create a 2x1 matrix:
ans =

Recursive Sparse Blocks (rows = 2, cols = 1, nnz = 2, symm = U [100%])

  (1, 1) -> 11
  (2, 1) -> 21

Create a 2x2 matrix:
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 2, symm = U [50%])

  (1, 1) -> 11
  (2, 1) -> 21

Create a 2x2 lower triangular matrix:
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 3, symm = U [75%])

  (1, 1) -> 11
  (2, 1) -> 21
  (2, 2) -> 22
The following code
disp("'sparsersb' has an option to handle duplicates.")
disp("Create a 2x2 lower triangular matrix (last element summed by default):")
sparsersb([1,2,2,2],[1,1,2,2],[11,21,11,11],2,2)
disp("Create a 2x2 lower triangular matrix (last two elements summed explicitly):")
sparsersb([1,2,2,2],[1,1,2,2],[11,21,11,11],2,2,"sum")
disp("Create a 2x2 lower triangular matrix (last element ignored, explicitly):")
sparsersb([1,2,2,2],[1,1,2,2],[11,21,11,11],2,2,"unique")
Produces the following output
'sparsersb' has an option to handle duplicates.
Create a 2x2 lower triangular matrix (last element summed by default):
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 3, symm = U [75%])

  (1, 1) -> 11
  (2, 1) -> 21
  (2, 2) -> 22

Create a 2x2 lower triangular matrix (last two elements summed explicitly):
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 3, symm = U [75%])

  (1, 1) -> 11
  (2, 1) -> 21
  (2, 2) -> 22

Create a 2x2 lower triangular matrix (last element ignored, explicitly):
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 3, symm = U [75%])

  (1, 1) -> 11
  (2, 1) -> 21
  (2, 2) -> 11
The following code
disp("'sparsersb' support symmetric and hermitian matrices:\n")
disp("2x2 lower tringular:")
sparsersb([1,2,2 ],[1,1,2 ],[11,21 , 22],2,2,"general")
disp("2x2 symmetric (only lower triangle stored):")
sparsersb([1,2,2 ],[1,1,2 ],[11,21 , 22],2,2,"symmetric")
disp("2x2 hermitian (only lower triangle stored):")
sparsersb([1,2,2 ],[1,1,2 ],[11,21i, 22],2,2,"hermitian")
Produces the following output
'sparsersb' support symmetric and hermitian matrices:

2x2 lower tringular:
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 3, symm = U [75%])

  (1, 1) -> 11
  (2, 1) -> 21
  (2, 2) -> 22

2x2 symmetric (only lower triangle stored):
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 3, symm = S [75%])

  (1, 1) -> 11
  (2, 1) -> 21
  (2, 2) -> 22

2x2 hermitian (only lower triangle stored):
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 3, symm = H [75%])

  (1, 1) -> 11 + 0i
  (2, 1) -> 0 + 21i
  (2, 2) -> 22 + 0i
The following code
disp("Any 'sparse' or 'dense' matrix can be converted to 'sparsersb':")
d=sparsersb( [1,2;3,4] )
f=sparsersb( full ([1,2;3,4]))
s=sparsersb(sparse([1,2;3,4]))
Produces the following output
Any 'sparse' or 'dense' matrix can be converted to 'sparsersb':
d =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 4, symm = U [100%])

  (1, 1) -> 1
  (1, 2) -> 2
  (2, 1) -> 3
  (2, 2) -> 4

f =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 4, symm = U [100%])

  (1, 1) -> 1
  (1, 2) -> 2
  (2, 1) -> 3
  (2, 2) -> 4

s =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 4, symm = U [100%])

  (1, 1) -> 1
  (1, 2) -> 2
  (2, 1) -> 3
  (2, 2) -> 4
The following code
disp("'sparsersb' detects symmetry:")
d=sparsersb( [1,2;2,1] )
s=sparsersb(sparse([1,2;2,1]))
Produces the following output
'sparsersb' detects symmetry:
d =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 3, symm = S [75%])

  (1, 1) -> 1
  (2, 1) -> 2
  (2, 2) -> 1

s =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 3, symm = S [75%])

  (1, 1) -> 1
  (2, 1) -> 2
  (2, 2) -> 1
The following code
disp("'sparsersb' detects hermitianness:")
d=sparsersb( [1,i;-i,1] )
s=sparsersb(sparse([1,i;-i,1]))
Produces the following output
'sparsersb' detects hermitianness:
d =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 3, symm = H [75%])

  (1, 1) -> 1 + 0i
  (2, 1) -> -0 + -1i
  (2, 2) -> 1 + 0i

s =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 3, symm = H [75%])

  (1, 1) -> 1 + 0i
  (2, 1) -> -0 + -1i
  (2, 2) -> 1 + 0i
The following code
disp("The most important use of 'sparsersb' is for multiplying sparse matrices...\n")
a=sparsersb( [1,2;3,4] )
disp("...by dense matrices or vectors:\n")
x=[1,2;1,2]
disp("Untransposed sparse matrix-vector multiplication:")
a*x
disp("Transposed sparse matrix-vector multiplication:")
a'*x
Produces the following output
The most important use of 'sparsersb' is for multiplying sparse matrices...

a =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 4, symm = U [100%])

  (1, 1) -> 1
  (1, 2) -> 2
  (2, 1) -> 3
  (2, 2) -> 4

...by dense matrices or vectors:

x =

  1  2
  1  2

Untransposed sparse matrix-vector multiplication:
ans =

   3   6
   7  14

Transposed sparse matrix-vector multiplication:
ans =

   4   8
   6  12
The following code
d=sparsersb( [1,2;3,4] );
s=sparsersb(sparse([1,2;3,4]));
disp("Many sparse-sparse matrix operators work on 'sparsersb'\n")
disp("'+' operator:")
s+d
disp("'.+' operator:")
s.+d
disp("'-' operator:")
s-d
disp("'.-' operator:")
s.-d
disp("'*' operator:")
s*d
disp("'.*' operator:")
s.*d
disp("'/' operator:")
s/d
disp("'./' operator:")
s./d
disp("'\\' operator:")
s\[1;1]
disp("And others. Not all operators are native: certain use a conversion; see the printout.\n")
Produces the following output
Many sparse-sparse matrix operators work on 'sparsersb'

'+' operator:
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 4, symm = U [100%])

  (1, 1) -> 2
  (1, 2) -> 4
  (2, 1) -> 6
  (2, 2) -> 8

'.+' operator:
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 4, symm = U [100%])

  (1, 1) -> 2
  (1, 2) -> 4
  (2, 1) -> 6
  (2, 2) -> 8

'-' operator:
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 0, symm = U)

'.-' operator:
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 0, symm = U)

'*' operator:
ans =

Recursive Sparse Blocks (rows = 2, cols = 2, nnz = 4, symm = U [100%])

  (1, 1) -> 7
  (1, 2) -> 10
  (2, 1) -> 15
  (2, 2) -> 22

'.*' operator:
ans =

Compressed Column Sparse (rows = 2, cols = 2, nnz = 4 [100%])

  (1, 1) -> 1
  (2, 1) -> 9
  (1, 2) -> 4
  (2, 2) -> 16

'/' operator:
ans =

Compressed Column Sparse (rows = 2, cols = 2, nnz = 2 [50%])

  (1, 1) -> 1
  (2, 2) -> 1

'./' operator:
ans =

Compressed Column Sparse (rows = 2, cols = 2, nnz = 4 [100%])

  (1, 1) -> 1
  (2, 1) -> 1
  (1, 2) -> 1
  (2, 2) -> 1

'\' operator:
ans =

  -1
   1

And others. Not all operators are native: certain use a conversion; see the printout.
The following code
o=sparse( [1,2;3,4] );
s=sparsersb([1,2;3,4] );
disp("Most of these operators hide a conversion; see the printout:\n")
s(:,:)
o(:,:)
s(:,2)
o(:,2)
s(2,:)
o(2,:)
s(:)
o(:)
Produces the following output
Most of these operators hide a conversion; see the printout:

ans =

Compressed Column Sparse (rows = 2, cols = 2, nnz = 4 [100%])

  (1, 1) -> 1
  (2, 1) -> 3
  (1, 2) -> 2
  (2, 2) -> 4

ans =

Compressed Column Sparse (rows = 2, cols = 2, nnz = 4 [100%])

  (1, 1) -> 1
  (2, 1) -> 3
  (1, 2) -> 2
  (2, 2) -> 4

ans =

Compressed Column Sparse (rows = 2, cols = 1, nnz = 2 [100%])

  (1, 1) -> 2
  (2, 1) -> 4

ans =

Compressed Column Sparse (rows = 2, cols = 1, nnz = 2 [100%])

  (1, 1) -> 2
  (2, 1) -> 4

ans =

Compressed Column Sparse (rows = 1, cols = 2, nnz = 2 [100%])

  (1, 1) -> 3
  (1, 2) -> 4

ans =

Compressed Column Sparse (rows = 1, cols = 2, nnz = 2 [100%])

  (1, 1) -> 3
  (1, 2) -> 4

ans =

Recursive Sparse Blocks (rows = 4, cols = 1, nnz = 4, symm = U [100%])

  (1, 1) -> 1
  (2, 1) -> 3
  (3, 1) -> 2
  (4, 1) -> 4

ans =

Compressed Column Sparse (rows = 4, cols = 1, nnz = 4 [100%])

  (1, 1) -> 1
  (2, 1) -> 3
  (3, 1) -> 2
  (4, 1) -> 4
The following code
disp("On large matrices 'sparsersb' may be faster than 'sparse' in sparse matrix-vector multiplication.")
disp("In addition to that, 'sparsersb' has an 'empirical online auto-tuning' functionality.")
disp("It means you run the autotuning on a specific input, and just after, the multiplication might be faster.")
disp("See this case with two different right hand sides (NRHS) count.\n")
M=100000; N=100000; P=100 / M;
s=sparse(sprand(M,N,P));
for NRHSc = {1,7}
  r=sparsersb(s); # repeat tuning from 'vanilla' matrix
  assert(nnz(s)==nnz(r))
  NRHS=cell2mat(NRHSc);
  x=ones(M,NRHS);
  printf("Here, a %.2e x %.2e matrix with %.2e nonzeroes, %d NRHS.\n",M,N,nnz(s),NRHS)
  tic(); sc=0; while(toc()<3) s*x; sc=sc+1; endwhile
  st=toc()/sc;
  printf("Each multiplication with 'sparse' took %.1es.\n",st);
  tic(); rc=0; while(toc()<3) r*x; rc=rc+1; endwhile
  rt=toc()/rc;
  ut=rt; # untuned time
  printf("Each multiplication with 'sparsersb' took %.3es, this is %.4g%% of the time taken by 'sparse'.\n",rt,100*rt/st);
  nsb=str2num(sparsersb(r,"get","RSB_MIF_LEAVES_COUNT__TO__RSB_BLK_INDEX_T"));
  tic; r=sparsersb(r,"autotune","n",NRHS); at_t=toc;
  nnb=str2num(sparsersb(r,"get","RSB_MIF_LEAVES_COUNT__TO__RSB_BLK_INDEX_T"));
  printf ("Autotuning for %d NRHS took %.2es (%d -> %d RSB blocks).\n", NRHS, at_t, nsb, nnb);
  tic(); rc=0; while(toc()<3) r*x; rc=rc+1; endwhile
  rt=toc()/rc;
  printf("After tuning, each 'sparsersb' multiplication took %.3es.\n",rt);
  printf("This is %.4g%% of the time taken by 'sparse' (%.2fx speedup).\n",100*rt/st,st/rt);
  if ut > rt;
    printf ("Autotuning brought a %.2fx speedup over original RSB structure.\n", ut/rt);
    printf ("Time spent in autotuning can be amortized in %.1d iterations.\n", at_t/(ut-rt) );
  else
    printf ("RSB autotuning brought no further speedup for NRHS=%d.\n",NRHS);
  endif
  disp("")
endfor
Produces the following output
On large matrices 'sparsersb' may be faster than 'sparse' in sparse matrix-vector multiplication.
In addition to that, 'sparsersb' has an 'empirical online auto-tuning' functionality.
It means you run the autotuning on a specific input, and just after, the multiplication might be faster.
See this case with two different right hand sides (NRHS) count.

Here, a 1.00e+05 x 1.00e+05 matrix with 1.00e+07 nonzeroes, 1 NRHS.
Each multiplication with 'sparse' took 7.0e-02s.
Each multiplication with 'sparsersb' took 1.214e-02s, this is 17.3% of the time taken by 'sparse'.
Autotuning for 1 NRHS took 2.59e+00s (64 -> 64 RSB blocks).
After tuning, each 'sparsersb' multiplication took 1.217e-02s.
This is 17.34% of the time taken by 'sparse' (5.77x speedup).
RSB autotuning brought no further speedup for NRHS=1.

Here, a 1.00e+05 x 1.00e+05 matrix with 1.00e+07 nonzeroes, 7 NRHS.
Each multiplication with 'sparse' took 4.7e-01s.
Each multiplication with 'sparsersb' took 9.036e-02s, this is 19.04% of the time taken by 'sparse'.
Autotuning for 7 NRHS took 1.18e+01s (64 -> 64 RSB blocks).
After tuning, each 'sparsersb' multiplication took 8.777e-02s.
This is 18.49% of the time taken by 'sparse' (5.41x speedup).
Autotuning brought a 1.03x speedup over original RSB structure.
Time spent in autotuning can be amortized in 5e+03 iterations.
The following code
disp("'sparsersb' can render sparse matrices into Encapsulated Postscript files showing the RSB blocks layout.")
rm = sparsersb(sprand(100000,100000,.0001));
sparsersb(rm,'render','sptest.eps')
disp("You can open sptest.eps now.")
Produces the following output
'sparsersb' can render sparse matrices into Encapsulated Postscript files showing the RSB blocks layout.
You can open sptest.eps now.
Package: sparsersb