INFO(1) is 0 if the call to MUMPS was successful, negative if an error occurred (see Section 5), or positive if a warning is returned.

INFO(2) holds additional information about the error or the warning. If INFO(1)=-1, INFO(2) is the processor number (in communicator mumps_par%COMM) on which the error was detected.

INFO(3) - after analysis: Estimated real space needed on the processor for factors.

INFO(4) - after analysis: Estimated integer space needed on the processor for factors.

INFO(5) - after analysis: Estimated maximum front size on the processor.

INFO(6) - after analysis: Number of nodes in the complete tree. The same value is returned on all processors.

INFO(7) - after analysis: Minimum value of MAXIS estimated by the analysis phase to run the numerical factorization successfully.

INFO(8) - after analysis: Minimum value of MAXS estimated by the analysis phase to run the numerical factorization successfully.

INFO(9) - after factorization: Size of the real space used on the processor to store the LU factors.

INFO(10) - after factorization: Size of the integer space used on the processor to store the LU factors.

INFO(11) - after factorization: Order of the largest frontal matrix processed on the processor.

INFO(12) - after factorization: Number of off-diagonal pivots encountered on the processor, or number of negative pivots if SYM=1.

INFO(13) - after factorization: Number of uneliminated variables, corresponding to delayed pivots, sent to the father.

INFO(14) - after factorization: Number of memory compresses on the processor.

INFO(15) - after analysis: Estimated total size (in millions of bytes) of all MUMPS internal data for running the numerical factorization.

INFO(16) - after factorization: Total size (in millions of bytes) of all MUMPS internal data used during the numerical factorization.

mumps_par%RINFOG and mumps_par%INFOG: mumps_par%RINFOG is a double precision array of dimension 20. It contains the following global information on the execution of MUMPS:

RINFOG(1) - after analysis: The estimated number of floating-point operations (on all processors) for the elimination process.

RINFOG(2) - after factorization: The total number of floating-point operations (on all processors) for the assembly process.

RINFOG(3) - after factorization: The total number of floating-point operations (on all processors) for the elimination process (a short usage sketch is given after this list).

RINFOG(4) to RINFOG(11) - after solve with error analysis: Only returned on the host process if error analysis is requested through ICNTL(11). See the description of ICNTL(11).

RINFOG(12) to RINFOG(20) are not used in the current version.
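As an illustration of how these statistics might be exploited, the following sketch prints the analysis estimate next to the measured assembly and elimination flop counts and derives a factorization rate. It is not part of MUMPS: the routine name PRINT_FLOP_STATS and the argument TFACTO (an elapsed time measured by the calling program, for example with MPI_WTIME) are purely illustrative. It would typically be called on the host after the factorization, for instance as CALL PRINT_FLOP_STATS( mumps_par%RINFOG, tfacto ).

C     Illustrative helper (not part of MUMPS): reports the flop
C     statistics held in RINFOG, as described above.  TFACTO is the
C     elapsed time (in seconds) of the factorization, measured by
C     the calling program (for example with MPI_WTIME).
      SUBROUTINE PRINT_FLOP_STATS( RINFOG, TFACTO )
      IMPLICIT NONE
      DOUBLE PRECISION RINFOG(20), TFACTO
      DOUBLE PRECISION RATE
C     RINFOG(1): estimated elimination flops (after analysis)
C     RINFOG(2): assembly flops (after factorization)
C     RINFOG(3): elimination flops (after factorization)
      WRITE(*,*) 'Estimated elimination flops:', RINFOG(1)
      WRITE(*,*) 'Assembly flops             :', RINFOG(2)
      WRITE(*,*) 'Elimination flops          :', RINFOG(3)
      IF ( TFACTO .GT. 0.0D0 ) THEN
        RATE = ( RINFOG(2) + RINFOG(3) ) / TFACTO / 1.0D6
        WRITE(*,*) 'Factorization rate (Mflop/s):', RATE
      END IF
      RETURN
      END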

mumps_par%INFOG is an integer array of dimension 40.

INFOG(1) is 0 if the call to MUMPS was successful, negative if an error occurred (see Section 5), or positive if a warning is returned.

INFOG(2) holds additional information about the error or the warning.

The difference between INFOG(1:2) and INFO(1:2) is that INFOG(1:2) is the same on all processors. It has the value of INFO(1:2) of the processor which returned with the most negative INFO(1) value. For example, if a processor returns with INFO(1)=-13 and INFO(2)=10000, then all other processors will return with INFOG(1)=-13 and INFOG(2)=10000. (A minimal status check based on this convention is sketched at the end of this section.)

INFOG(3) - after analysis: Total estimated real workspace for factors on all processors.

INFOG(4) - after analysis: Total estimated integer workspace for factors on all processors.

INFOG(5) - after analysis: Estimated maximum front size in the complete tree.

INFOG(6) - after analysis: Number of nodes in the complete tree.

INFOG(7:8): not significant.

INFOG(9) - after factorization: Total real space to store the LU factors.

INFOG(10) - after factorization: Total integer space to store the LU factors.

INFOG(11) - after factorization: Order of the largest frontal matrix.

INFOG(12) - after factorization: Total number of off-diagonal pivots, or of negative pivots if SYM=1.

INFOG(13) - after factorization: Total number of delayed pivots.

INFOG(14) - after factorization: Total number of memory compresses.

INFOG(15) - after solution: Number of steps of iterative refinement.

INFOG(16) - after analysis: Estimated size (in millions of bytes) of all MUMPS internal data for running the factorization: value on the most memory consuming processor.

INFOG(17) - after analysis: Estimated size (in millions of bytes) of all MUMPS internal data for running the factorization: sum over all processors.

INFOG(18) - after factorization: Size (in millions of bytes) of all MUMPS internal data during the factorization: value on the most memory consuming processor.

INFOG(19) - after factorization: Size (in millions of bytes) of all MUMPS internal data during the factorization: sum over all processors. (A sketch that prints these memory statistics is given at the end of this section.)

INFOG(20) - after analysis: Estimated number of entries in the factors.
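Because INFOG(1:2) is replicated on all processors, a driver program can perform the same status test everywhere after each phase. The sketch below only encodes the conventions stated above for INFOG(1) and INFOG(2); the helper name CHECK_MUMPS_STATUS is illustrative and not part of MUMPS, and it would be called with mumps_par%INFOG after the analysis, factorization or solve phase.

C     Illustrative helper (not part of MUMPS): interprets INFOG(1:2)
C     according to the conventions described above.  Since INFOG(1:2)
C     is the same on all processors it may be called anywhere, e.g.
C         CALL CHECK_MUMPS_STATUS( mumps_par%INFOG )
      SUBROUTINE CHECK_MUMPS_STATUS( INFOG )
      IMPLICIT NONE
      INTEGER INFOG(40)
      IF ( INFOG(1) .LT. 0 ) THEN
C       An error occurred; INFOG(2) gives additional information
C       (see Section 5 for the meaning of the error codes).
        WRITE(*,*) 'MUMPS error:   INFOG(1)=', INFOG(1)
        WRITE(*,*) '               INFOG(2)=', INFOG(2)
      ELSE IF ( INFOG(1) .GT. 0 ) THEN
C       The call was successful but a warning was returned.
        WRITE(*,*) 'MUMPS warning: INFOG(1)=', INFOG(1)
        WRITE(*,*) '               INFOG(2)=', INFOG(2)
      END IF
      RETURN
      END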

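In the same spirit, the entries INFOG(16) to INFOG(19) can be used to compare the memory estimated by the analysis with the memory effectively used by the factorization. The routine below is a sketch only; its name PRINT_MUMPS_MEMORY and the AFTER_FACTO flag are illustrative and not part of MUMPS, and it would be called with mumps_par%INFOG on the host after the analysis and again after the factorization.

C     Illustrative helper (not part of MUMPS): prints the memory
C     statistics held in INFOG, as described above.  AFTER_FACTO is
C     .FALSE. after the analysis phase (estimates) and .TRUE. after
C     the factorization phase (sizes effectively used).
      SUBROUTINE PRINT_MUMPS_MEMORY( INFOG, AFTER_FACTO )
      IMPLICIT NONE
      INTEGER INFOG(40)
      LOGICAL AFTER_FACTO
      IF ( AFTER_FACTO ) THEN
C       INFOG(18)/INFOG(19): sizes in millions of bytes
        WRITE(*,*) 'Memory used, max per processor  :', INFOG(18)
        WRITE(*,*) 'Memory used, sum over processors:', INFOG(19)
      ELSE
C       INFOG(16)/INFOG(17): analysis estimates, millions of bytes
        WRITE(*,*) 'Estimated memory, max per proc. :', INFOG(16)
        WRITE(*,*) 'Estimated memory, total         :', INFOG(17)
      END IF
      RETURN
      END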