1. Perron-Frobenius Theory for general real and complex matrices
  2. Matrix theory
  3. Structured perturbations
  4. INTLAB
  5. Interval arithmetic
  6. Accurate sum and dot product
  7. Wilkinson's estimates revisited
  8. Ph.D. [7 S.M. Rump. Kleine Fehlerschranken bei Matrixproblemen. PhD thesis, Universität Karlsruhe, 1980. (PDF, 10710019 bytes) [click for download]]
  9. Sparse matrices
  10. Verification methods
  11. Computer Algebra and computer-assisted proofs
  12. Hedge-Effectiveness

Perron-Frobenius Theory for general real and complex matrices

In [82 S.M. Rump. Theorems of Perron-Frobenius type for matrices without sign restrictions. Linear Algebra and its Applications (LAA), 266:1-42, 1997. (PDF, 273503 bytes) [click for download]] the classical Perron-Frobenius theory is extended to general real matrices, and in [108 S.M. Rump. Perron-Frobenius Theory for Complex Matrices. Linear Algebra and its Applications (LAA), 363:251-273, 2003. (PDF, 247135 bytes) [click for download]] to complex matrices, where the Perron root is replaced by the sign-real or sign-complex spectral radius, respectively. This is the largest real solution r of the nonlinear eigenvalue problem |Ax|=r|x| for real or complex vectors x, respectively. Variational characterizations can be found in [106 S.M. Rump. Variational characterizations of the sign-real and the sign-complex spectral radius. Electronic Journal of Linear Algebra (ELA), 9:112-117, 2002. (PDF, 132402 bytes) [click for download]].
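For illustration, here is a brute-force Python sketch for small matrices. It assumes the characterization from [82] that the sign-real spectral radius equals the maximum, over all signature matrices S (diagonal with entries +1 or -1), of the largest absolute real eigenvalue of S*A; the cost is exponential in the dimension, so this is for tiny examples only.

    # Brute-force sketch; assumes the max-over-signature-matrices
    # characterization of the sign-real spectral radius from [82].
    import itertools
    import numpy as np

    def sign_real_spectral_radius(A, tol=1e-12):
        n = A.shape[0]
        best = 0.0
        for signs in itertools.product((1.0, -1.0), repeat=n):
            S = np.diag(signs)
            for lam in np.linalg.eigvals(S @ A):
                if abs(lam.imag) <= tol * max(1.0, abs(lam)):  # real eigenvalue
                    best = max(best, abs(lam.real))
        return best

    A = np.array([[0.0, 1.0], [1.0, 0.0]])
    print(sign_real_spectral_radius(A))  # 1.0 for this symmetric example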

In [84 S.M. Rump. Bounds for the Componentwise Distance to the Nearest Singular Matrix. SIAM J. Matrix Anal. Appl. (SIMAX), 18(1):83-103, 1997. (PDF, 250422 bytes) [click for download]] this theory is applied to show that the ratio between the componentwise distance to the nearest singular matrix and the reciprocal of the componentwise condition number is finite. In [85 S.M. Rump. Almost Sharp Bounds for the Componentwise Distance to the Nearest Singular Matrix. Linear and Multilinear Algebra (LAMA), 42:93-107, 1997. (PDF, 159553 bytes) [click for download]] the ratio is estimated sharply from below and, up to a factor 3+2*sqrt(2), from above; see also [92 S.M. Rump. Ill-conditioned Matrices are componentwise near to singularity. SIAM Review (SIREV), 41(1):102-112, 1999. (PDF, 174976 bytes) [click for download]]. The results rely on the relation between the sign-real spectral radius and cycle products [88 S.M. Rump. The sign-real spectral radius and cycle products. Linear Algebra and its Applications (LAA), 279:177-180, 1998. (PDF, 128819 bytes) [click for download]], and they hold for all dimensions and weight matrices.

The generalized Perron-Frobenius theory is applied in [102 S.M. Rump. Conservatism of the circle criterion - solution of a problem posed by A. Megretski. IEEE Trans. Automatic Control, 46(10):1605-1608, 2001. (PDF, 163161 bytes) [click for download]] to solve a problem posed by Megretski, namely to estimate the conservatism of the circle criterion.


Matrix theory

In [109 S.M. Rump. On P-Matrices. Linear Algebra and its Applications (LAA), 363:237-250, 2003. (PDF, 202117 bytes) [click for download]] my generalization of Perron-Frobenius theory is applied to give necessary and sufficient criteria for a matrix to be a P-matrix. It is also shown that no minor can be left out when checking the P-property. Furthermore, a first algorithm for checking the P-property is given whose computing time is not a priori exponential.
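By contrast, the definitional check is exponential: a matrix is a P-matrix if and only if all of its 2^n - 1 principal minors are positive. A small Python sketch of this textbook test (not the algorithm of [109]):

    # Definitional P-matrix check: all principal minors positive.
    # Exponential in n, so only useful for small matrices.
    import itertools
    import numpy as np

    def is_p_matrix(A):
        n = A.shape[0]
        for k in range(1, n + 1):
            for idx in itertools.combinations(range(n), k):
                if np.linalg.det(A[np.ix_(idx, idx)]) <= 0:
                    return False
        return True

    print(is_p_matrix(np.array([[2.0, -1.0], [-1.0, 2.0]])))  # True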

The verification of nonsingularity of an interval matrix is an NP-hard problem [PoRo93 S. Poljak and J. Rohn. Checking Robust Nonsingularity Is NP-Hard. Math. of Control, Signals, and Systems, 6:1-9, 1993.]. In [74 S.M. Rump. Verification Methods for Dense and Sparse Systems of Equations. In J. Herzberger, editor, Topics in Validated Computations - Studies in Computational Mathematics, pages 63-136, Elsevier, Amsterdam, 1994. (PDF, 498000 bytes) [click for download]] a first criterion is given which is in some cases superior to strong regularity. In [105 S.M. Rump. Optimal scaling for p-norms and componentwise distance to singularity. IMA Journal of Numerical Analysis (IMAJNA), 23:1-9, 2003. (PDF, 170682 bytes) [click for download]] this is improved to give lower and upper bounds for the optimal p-norm condition number of a matrix. Furthermore, a new computable lower bound for the componentwise distance to the nearest singular matrix is given.

In [119 S. Friedland, D. Hershkowitz, and S.M. Rump. Positive entries of stable matrices. Electronic Journal of Linear Algebra (ELA), 12:17-24, 2005. (PDF, 130129 bytes) [click for download]] we prove that a positive stable matrix of dimension greater than one must have at least two positive entries, and that there exist positive stable matrices of every dimension >1 with exactly two positive entries. We also give a companion matrix that is strictly lower triangular except for the entries (1,1) and (1,n).


Structured perturbations

In [113 S.M. Rump. Structured Perturbations Part I: Normwise Distances. SIAM J. Matrix Anal. Appl. (SIMAX), 25(1):1-30, 2003. (PDF, 275615 bytes) [click for download]] various explicit formulas and estimates are given for the structured normwise condition numbers of matrices. Generally, there is not much difference between the unstructured and the structured condition number. Similar results for eigenvalues and the pseudospectrum of structured matrices are given in [127 S.M. Rump. Eigenvalues, pseudospectrum and structured perturbations. Linear Algebra and its Applications (LAA), 413:567-593, 2006. (PDF, 296921 bytes) [click for download]]. An exception is Toeplitz matrices: in [113 S.M. Rump. Structured Perturbations Part I: Normwise Distances. SIAM J. Matrix Anal. Appl. (SIMAX), 25(1):1-30, 2003. (PDF, 275615 bytes) [click for download]] and [139 S.M. Rump and H. Sekigawa. The ratio between the Toeplitz and the unstructured condition number. Operator Theory: Advances and Applications, 199:397-419, 2009. (PDF, 241394 bytes) [click for download]] we show, via a nice connection to minimizing the norm of a product of two polynomials, that the ratio between the structured Toeplitz and the unstructured condition number may be exponentially small in the dimension.

The well-known theorem by Eckart-Young or Gastinel-Kahan states that the normwise distance to the nearest singular matrix is equal to the reciprocal of the condition number. In [113 S.M. Rump. Structured Perturbations Part I: Normwise Distances. SIAM J. Matrix Anal. Appl. (SIMAX), 25(1):1-30, 2003. (PDF, 275615 bytes) [click for download]] I show that this is also true for the structured distance and the structured condition number (see also Perron-Frobenius). This is proved for many structures such as symmetric, Toeplitz, Hankel, and circulant matrices, among others. So, in contrast to the situation for linear systems, the worst case is attained for structured matrices as well.

In [114 S.M. Rump. Structured Perturbations Part II: Componentwise Distances. SIAM J. Matrix Anal. Appl. (SIMAX), 25(1):31-56, 2003. (PDF, 231299 bytes) [click for download]] it is shown that the situation changes completely when using componentwise distances. For many structures, explicit parameterized examples are given such that the componentwise (Skeel) condition number stays near 1, independent of the parameter, whereas the general condition number tends to infinity as the parameter goes to zero. The same is true for least squares problems [95 S.M. Rump. Ill-conditionedness need not be componentwise near to ill-posedness for least squares problems. BIT, 39(1):143-151, 1999. (PDF, 142448 bytes) [click for download]]. Moreover, the structured componentwise backward error for symmetric linear systems can be arbitrarily far apart from the unstructured backward error [188 S.M. Rump. The componentwise structured and unstructured backward error can be arbitrarily far apart. SIAM J. Matrix Anal. Appl. (SIMAX), 36(2):385-392, 2015. (PDF, 220482 bytes) [click for download]].
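For reference, the componentwise (Skeel) condition number mentioned above is cond(A) = || |inv(A)|*|A| ||_inf; unlike the normwise condition number, it is invariant under row scaling. A small numpy sketch:

    # Skeel condition number: invariant under row scaling,
    # in contrast to the normwise condition number kappa_inf.
    import numpy as np

    def skeel_cond(A):
        return np.linalg.norm(np.abs(np.linalg.inv(A)) @ np.abs(A), np.inf)

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    D = np.diag([1.0, 1e-10])                # extreme row scaling
    print(skeel_cond(A), skeel_cond(D @ A))  # essentially equal
    print(np.linalg.cond(A, np.inf), np.linalg.cond(D @ A, np.inf))  # wildly different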


INTLAB

Starting from 1983, see for example [51 S.M. Rump. CALCULUS. In U. Kulisch, editor, Wissenschaftliches Rechnen mit Ergebnisverifikation, pages 85-99. Vieweg und Akademie Verlag, Berlin, 1989.], [58 S.M. Rump. ACRITH - CALCULUS - TPX - Programmierwerkzeuge für wissenschaftliches Rechnen. In Tagungsband Wissenschaftliches Forum, Hamburg, 1990.] and [64 D. Husung and S.M. Rump. ABACUS, Sprachbeschreibung. In L. Atanassova et al., editor, Computer arithmetic and enclosure methods: Proceedings of the Third International IMACS-GAMM Symposium on Computer Arithmetic and Scientific Computing (SCAN-91), Oldenburg, Germany, 1-4 October 1991, Dublin, 1991.], we developed various programming environments for convenient access to interval operations. When Matlab introduced an operator concept, the time was ripe to create the interval toolbox INTLAB [93 S.M. Rump. INTLAB - INTerval LABoratory. In Tibor Csendes, editor, Developments in Reliable Computing, pages 77-104. Kluwer Academic Publishers, Dordrecht, 1999.] for verified computations. We had already improved traditional interval arithmetic substantially in the widely used interval library PROFIL [308 O. Knüppel. PROFIL / BIAS - A Fast Interval Library. Computing, 53:277-287, 1994.]. For the Matlab interval toolbox, the obstacles of interpretation overhead and frequent switching of the rounding mode were overcome in [94 S.M. Rump. Fast and parallel interval arithmetic. BIT, 39(3):539-560, 1999. (PDF, 221498 bytes) [click for download]]. Furthermore, highly accurate real and complex interval standard functions are developed in [104 S.M. Rump. Rigorous and portable standard functions. BIT, 41(3):540-562, 2001. (PDF, 234868 bytes) [click for download]]; for highly accurate bounds for the gamma function over the entire floating-point range, see [186 S.M. Rump. Verified sharp bounds for the real gamma function over the entire floating-point range. Nonlinear Theory and Its Applications (NOLTA), IEICE, Vol.E5-N, No.3, July, 2014. (PDF, 430728 bytes) [click for download]]. A thorough treatment of the different possibilities for interval matrix multiplication is given in [166 S.M. Rump. Fast Interval Matrix Multiplication. Numerical Algorithms, 61(1):1-34, 2012.].


Meanwhile INTLAB is distributed in release 9, and it runs under Matlab and Octave. Here are demos, including in particular some larger examples. There are several thousand users in more than 50 countries. For a bibliography of more than 500 papers using INTLAB in various areas, see [INTLABref].


Interval arithmetic

Recent definitions of interval arithmetic favour restricting intervals to subsets of the reals, excluding infinity. This means {2 <= x < inf} is an interval, but {2 <= x <= inf} is not. This nice idea implies that 0*x is the zero interval for all intervals x. However, there is a drawback. To allow numerical computations with intervals, the bounds are floating-point numbers. Fortunately, infinity is a floating-point number in IEEE 754, so that the above interval {2 <= x < inf} can be represented by the bounds 2 and inf. Thus we have the weird situation that [a,b] may be a valid interval although one or both bounds do not belong to it. This may create awkward situations where programs do not perform as expected, which is particularly unappealing for verification algorithms.
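The drawback shows up directly in IEEE 754 terms: with the bounds 2 and inf representing {2 <= x < inf}, naive endpoint arithmetic for 0*x stumbles, because 0*inf is NaN. A Python one-liner (the same happens in any language with IEEE 754 semantics):

    # inf is a valid bound, but naive endpoint arithmetic for
    # 0 * [2, inf] yields NaN instead of the exact result [0, 0].
    import math

    lo, hi = 2.0, math.inf     # bounds representing {2 <= x < inf}
    print(0.0 * lo, 0.0 * hi)  # 0.0 nan -- so 0*x needs a special case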

In [169 S.M. Rump. Interval Arithmetic Over Finitely Many Endpoints. BIT Numerical Mathematics, 52(4):1059-1075, 2012.] I defined a new interval arithmetic resolving this situation. Moreover, it resolves an apparent disparity between overflow and underflow, and it allows one to introduce additional quantities like pi or e such that sin([pi])=0 or log([e])=1 are point intervals.

Interval arithmetic suffers from the dependency problem and the wrapping effect. An interesting approach to mitigate these problems is affine arithmetic [FigSto97 L.H. Figueiredo and J. Stolfi. Self-Validated Numerical Methods and Applications. Brazilian Mathematics Colloquium monograph, IMPA, 1997.]. Several improvements and a description of the corresponding INTLAB toolbox for affine arithmetic are presented in [191 S.M. Rump and M. Kashiwagi. Implementation and improvements of affine arithmetic. Nonlinear Theory and Its Applications, IEICE, 2(3):1101-1119, 2015. (PDF, 2135727 bytes) [click for download]].
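The dependency problem itself is easy to demonstrate: naive interval arithmetic treats the two occurrences of x in x - x as independent, which is precisely the kind of correlation affine arithmetic tracks. A minimal Python sketch (outward rounding omitted):

    # Dependency problem: x - x evaluated in naive interval
    # arithmetic is not the point interval [0, 0].
    def isub(x, y):                  # naive interval subtraction
        return (x[0] - y[1], x[1] - y[0])

    x = (1.0, 2.0)
    print(isub(x, x))                # (-1.0, 1.0) instead of (0.0, 0.0)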


Accurate sum and dot product

In [122 T. Ogita, S.M. Rump, and S. Oishi. Accurate Sum and Dot Product. SIAM Journal on Scientific Computing (SISC), 26(6):1955-1988, 2005. (PDF, 373558 bytes) [click for download]] and [148 S.M. Rump, T. Ogita, and S. Oishi. Fast High Precision Summation. Nonlinear Theory and Its Applications (NOLTA), IEICE, 1(1), 2010. (PDF, 339422 bytes) [click for download]] we present algorithms for the fast computation of sums and dot products of floating-point numbers with a specified precision. The latter paper received the "NOLTA Best Paper Award" of the IEICE Engineering Sciences Society. The methods make extensive use of so-called error-free transformations. In fact, the algorithm Sum2 in [122 T. Ogita, S.M. Rump, and S. Oishi. Accurate Sum and Dot Product. SIAM Journal on Scientific Computing (SISC), 26(6):1955-1988, 2005. (PDF, 373558 bytes) [click for download]] is itself an error-free vector transformation: an input vector p is transformed into a vector q of the same length such that
    sum_{i=1..n} q_i = sum_{i=1..n} p_i exactly,  q_n = fl(p_1 + ... + p_n),  and  sum_{i=1..n-1} |q_i| <= (n-1)*eps/(1-(n-1)*eps) * sum_{i=1..n} |p_i|,
where eps denotes the relative rounding error unit. This algorithm avoids branches and proves to be very fast, both theoretically and practically. As analyzed by Langlois [Lang06 P. Langlois. Accurate algorithms in floating-point arithmetic. Lecture at the 12th GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics (SCAN), Duisburg, IEEE, 2006.], this is especially due to improved instruction-level parallelism.
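For illustration, here is a minimal Python sketch of the underlying idea (IEEE double): Knuth's branch-free TwoSum is an error-free transformation, a + b = x + y exactly with x = fl(a+b), and cascading it yields a simple Sum2-like compensated summation. This is the textbook scheme, not the tuned implementation of [122].

    # TwoSum: a + b = x + y exactly, with x = fl(a + b).
    def two_sum(a, b):
        x = a + b
        bv = x - a                        # part of b actually added to a
        y = (a - (x - bv)) + (b - bv)     # exact rounding error, branch-free
        return x, y

    # Cascaded summation plus the sum of all error terms (Sum2-like).
    def sum2(p):
        s = 0.0
        e = 0.0
        for a in p:
            s, err = two_sum(s, a)
            e += err
        return s + e

    print(sum([1e16, 1.0, -1e16]))   # 0.0 -- the 1.0 is lost
    print(sum2([1e16, 1.0, -1e16]))  # 1.0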

In [130 S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point Summation part I: Faithful Rounding. SIAM J. Sci. Comput., 31(1):189-224, 2008. (PDF, 457457 bytes) [click for download]] and [131 S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point Summation part II: Sign, K-fold faithful and rounding to nearest. SIAM J. Sci. Comput., 31(2):1269-1302, 2008. (PDF, 377632 bytes) [click for download]], a 60-page paper in two parts, algorithms are presented to compute sums (and dot products) with faithful rounding. That means the computed result is always correct up to the last bit. The theoretical foundation as well as an algorithm for computing a k-fold faithfully rounded result is presented as well, where the result is represented by a vector of k floating-point numbers (see also Ph.D.). Moreover, fast algorithms are described for computing the sign of a sum, the result rounded to nearest, and results with directed rounding.

The analysis of such algorithms is often involved, tedious, and error-prone. To remedy that, I introduced in [130 S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point Summation part I: Faithful Rounding. SIAM J. Sci. Comput., 31(1):189-224, 2008. (PDF, 457457 bytes) [click for download]] the ufp concept, the unit in the first place. Floating-point numbers are then viewed as scaled integers, the proofs are composed of inequalities over integers, and the matter becomes much more comprehensible.
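A minimal Python sketch of ufp (assuming IEEE double): ufp(x) is the largest power of 2 not exceeding |x|, with ufp(0) = 0, so every nonzero float satisfies ufp(x) <= |x| < 2*ufp(x).

    import math

    def ufp(x):
        # unit in the first place: largest power of 2 not exceeding |x|
        if x == 0.0:
            return 0.0
        m, e = math.frexp(abs(x))      # abs(x) = m * 2**e, 0.5 <= m < 1
        return math.ldexp(1.0, e - 1)  # 2**(e-1)

    print(ufp(3.7), ufp(0.2), ufp(-8.0))  # 2.0 0.125 8.0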

The general approach is to exploit the IEEE 754 standard thoroughly to prove results. Based on that, in [144 S.M. Rump, P. Zimmermann, S. Boldo, and G. Melquiond. Computing predecessor and successor in rounding to nearest. BIT Numerical Mathematics, 49(2):419-431, 2009. (PDF, 165101 bytes) [click for download]], for example, another fast method is given to compute the predecessor and successor of a floating-point number in pure floating-point with rounding to nearest.

XBLAS [Li_etal02 X. Li, J. Demmel, D. Bailey, G. Henry, Y. Hida, J. Iskandar, W. Kahan, S. Kang, A. Kapur, M. Martin, B. Thompson, T. Tung, and D. Yoo. Design, Implementation and Testing of Extended and Mixed Precision BLAS. ACM Trans. Softw., 28(2):152-205, 2002.] computes results as if in quadruple precision, so the accuracy still depends on the condition number. The result of algorithm AccSum in [130 S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point Summation part I: Faithful Rounding. SIAM J. Sci. Comput., 31(1):189-224, 2008. (PDF, 457457 bytes) [click for download]] is always accurate to the last bit; nevertheless it is some 40% faster than XBLAS, both theoretically and practically. In [147 S.M. Rump. Ultimately Fast Accurate Summation. SIAM Journal on Scientific Computing (SISC), 31(5):3466-3502, 2009. (PDF, 688702 bytes) [click for download]] this algorithm is improved by up to another 25%.

For an application of accurate dot products to the solution of arbitrarily ill-conditioned linear systems in double precision, see Section Verification methods. In [20 S.M. Rump and H. Böhm. Least Significant Bit Evaluation for Arithmetic Expressions. Computing, 30:189-199, 1983. (PDF, 1418658 bytes) [click for download]] we showed how to evaluate arithmetic expressions accurately using an accurate dot product.


Wilkinson's estimates revisited

Wilkinson's standard error estimates for certain floating-point computations involve a factor gamma_k:=k*eps/(1-k*eps), where eps denotes the relative rounding error unit. Surprisingly, using the ufp concept mentioned in the previous section, we can show that this factor can be replaced by k*eps for a number of standard problems. This is proved for the sum of floating-point numbers in [164 S.M. Rump. Error estimation of floating-point summation and dot product. BIT Numerical Mathematics, 52(1):201-220, 2012. (PDF, 120791 bytes) [click for download]], and for dot products in [182 C.-P. Jeannerod and S.M. Rump. Improved error bounds for inner products in floating-point arithmetic. SIAM J. Matrix Anal. Appl. (SIMAX), 34(2):338-344, 2013. (PDF, 303823 bytes) [click for download]]. The estimates are valid without any restriction on n. For the foundation of floating-point arithmetic see [189 S.M. Rump and M. Lange. On the Definition of Unit Roundoff. BIT Numerical Mathematics, 2015. (PDF, 102188 bytes) [click for download]].
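Concretely, the improved estimates take the simple form below (a restatement; see [164] and [182] for the precise hypotheses):

    |fl(p_1 + ... + p_n) - (p_1 + ... + p_n)|  <=  (n-1)*eps * (|p_1| + ... + |p_n|)
    |fl(x^T y) - x^T y|  <=  n*eps * (|x_1*y_1| + ... + |x_n*y_n|)

and, remarkably, they hold for all n.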

Moreover, the famous Lemma 8.4 in ASNA [Hi02 N.J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM Publications, Philadelphia, 2002.] bounds the error of Gaussian elimination by factors gamma_k for k=n or k=n-1, depending on whether division occurs or not. These factors can be replaced by k*eps as well, as shown in [183 S.M. Rump and C.-P. Jeannerod. Improved error bounds for LU and Cholesky factorizations. SIAM J. Matrix Anal. Appl. (SIMAX), 35(2):684-698, 2014. (PDF, 352228 bytes) [click for download]]. A similar remark applies to Cholesky factorization and to solving triangular systems by substitution. Again, all estimates hold without restriction on n.

A similar result for the recursive evaluation of powers of binary floating-point numbers was shown by Graillat, Lefèvre and Muller [159 HAL Id: ensl-00945033, https://hal-ens-lyon.archives-ouvertes.fr/ensl-00945033v2]. In [190 S.M. Rump, F. Bünger and C.-P. Jeannerod. Improved Error Bounds for Floating-Point Products and Horner's Scheme. BIT Numerical Mathematics, 56(1):293-307, 2015. (PDF, 149930 bytes) [click for download]] we improve this to general products of real and/or floating-point numbers, to general base beta ≥ 2, to a larger number of factors, and to computation in any order. Moreover, we show that the restriction on the number of factors is essentially sharp and cannot be dropped. Thus the factor gamma_k cannot, in general, be replaced by k*eps for arbitrary arithmetic expressions.


Ph.D. [7 S.M. Rump. Kleine Fehlerschranken bei Matrixproblemen. PhD thesis, Universität Karlsruhe, 1980. (PDF, 10710019 bytes) [click for download]]

In 1969 Krawczyk [Kra69 R. Krawczyk. Newton-Algorithmen zur Bestimmung von Nullstellen mit Fehlerschranken. Computing, 4:187-201, 1969.] presented the famous Krawczyk operator. He assumed an interval vector including a solution of a nonlinear system to be given, and used his operator to refine this interval. In 1977 Moore [Mo77 R.E. Moore. A Test for Existence of Solutions for Non-Linear Systems. SIAM J. Numer. Anal., 4:611-615, 1977.] observed that the Krawczyk operator can be used as an existence test, proving contraction by a norm estimate. Both approaches lack a construction of the inclusion itself.

Based on that, I developed in my Ph.D. [7 S.M. Rump. Kleine Fehlerschranken bei Matrixproblemen. PhD thesis, Universität Karlsruhe, 1980. (PDF, 10710019 bytes) [click for download]] a constructive algorithm to compute inclusions for linear systems and eigenvalue problems. Compared to previous methods, three major improvements are introduced. First, Krawczyk's operator is modified to compute an inclusion of the error of an approximate solution rather than of the solution itself. The new operator turns out to be simpler and more accurate than the original one.

Second, rather than using some norm to show contraction, inclusion in the interior is used to prove existence and uniqueness of the solution or to prove nonsingularity of the Jacobian.

In [CaMa78 O. Caprani and K. Madsen. Iterative Methods for Interval Inclusion of Fixed Points. BIT, 18:42-51, 1978.] some inflation was used in an interval iteration without analysis. Third, in my Ph.D. I show that this inflation is necessary, coin the term epsilon-inflation for it, and give necessary and sufficient conditions under which an inclusion is computed (see also [90 S.M. Rump. A Note on Epsilon-Inflation. Reliable Computing, 4:371-375, 1998. (PDF, 122641 bytes) [click for download]]).

These three techniques, namely i) calculating an inclusion of the error of an approximation, ii) showing contraction by mapping into the interior, and iii) the epsilon-inflation, are today standard tools in verification algorithms for finite- as well as infinite-dimensional problems.
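To make the three ingredients concrete, here is a toy Python sketch for a linear system Ax=b. It deliberately omits the rigorous directed rounding (so, as written, it proves nothing); a real implementation must enclose all rounding errors. Names and tolerances are illustrative only.

    # Toy sketch of i) enclosing the ERROR of an approximation,
    # ii) verifying inclusion in the interior, iii) epsilon-inflation.
    # NOT rigorous: outward rounding is omitted throughout.
    import numpy as np

    class IBox:                                      # interval vector [lo, hi]
        def __init__(self, lo, hi):
            self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
        def __add__(self, o):
            return IBox(self.lo + o.lo, self.hi + o.hi)
        def inflate(self, eps=1e-15, delta=1e-20):   # iii) epsilon-inflation
            r = eps * (self.hi - self.lo) + delta
            return IBox(self.lo - r, self.hi + r)
        def inside(self, o):                         # ii) interior inclusion
            return np.all(o.lo < self.lo) and np.all(self.hi < o.hi)

    def imatvec(M, x):                               # point matrix * interval vector
        Mp, Mn = np.maximum(M, 0), np.minimum(M, 0)
        return IBox(Mp @ x.lo + Mn @ x.hi, Mp @ x.hi + Mn @ x.lo)

    def verify_linear(A, b, iters=15):
        R = np.linalg.inv(A)                         # approximate inverse
        xs = R @ b                                   # approximate solution
        z = R @ (b - A @ xs)                         # i) enclose the error, not x
        Z, C = IBox(z, z), np.eye(len(b)) - R @ A
        E = Z
        for _ in range(iters):
            Y = E.inflate()
            E = Z + imatvec(C, Y)
            if E.inside(Y):                          # contraction proven:
                return IBox(xs + E.lo, xs + E.hi)    # unique solution in xs + E
        return None

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 4.0])
    X = verify_linear(A, b)
    print(X.lo, X.hi)                                # tight bounds around [1, 1]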

The IBM program product ACRITH implements the algorithms described in my Ph.D. [7 S.M. Rump. Kleine Fehlerschranken bei Matrixproblemen. PhD thesis, Universität Karlsruhe, 1980. (PDF, 10710019 bytes) [click for download]] and habilitation [19 S.M. Rump. Solving Algebraic Problems with High Accuracy. Habilitationsschrift. In U.W. Kulisch and W.L. Miranker, editors, A New Approach to Scientific Computation, pages 51-120. Academic Press, New York, 1983.].

Using a precise dot product, an inclusion is computed and stored in k separate vectors, k-1 of which are floating-point vectors and the last of which is an interval vector. This method, suggested in my Ph.D., represents a k-fold accurate result (see also Accurate sum and dot product). Later this method was called staggered correction [Ste84 J. Stetter. Sequential defect correction for high-accuracy floating-point algorithms. In Numerical Analysis (Proceedings, Dundee 1983), Lecture Notes in Mathematics, vol. 1066, pages 186-202, 1984.].

If long precision formats are available, an optimal strategy is discussed for increasing the precision so as to minimize the total computational effort; see also [132 V. Kreinovich and S.M. Rump. Towards Optimal Use of Multi-Precision Arithmetic: A Remark. Reliable Computing, 12:365-369, 2006. (PDF, 121825 bytes) [click for download]].


Sparse matrices

In [71 S.M. Rump. Validated Solution of Large Linear Systems. In R. Albrecht, G. Alefeld, and H.J. Stetter, editors, Validation numerics: theory and applications, volume 9 of Computing Supplementum, pages 191-212. Springer, 1993. (PDF, 180113 bytes) [click for download]], see also [74 S.M. Rump. Verification Methods for Dense and Sparse Systems of Equations. In J. Herzberger, editor, Topics in Validated Computations - Studies in Computational Mathematics, pages 63-136, Elsevier, Amsterdam, 1994. (PDF, 498000 bytes) [click for download]], a method is presented to solve linear systems with large sparse symmetric positive definite matrices. Apparently this is still the only known method for effectively computing verified solutions of sparse linear systems. The method is based on the verification of positive definiteness, see also [134 S.M. Rump. Verification of positive definiteness. BIT Numerical Mathematics, 46:433-452, 2006. (PDF, 248004 bytes) [click for download]]. Based on that, a super-fast method for solving sparse linear systems with guaranteed error bounds is derived in [136 S.M. Rump and T. Ogita. Super-Fast Validated Solution of Linear Systems. J. Comput. Appl. Math. (JCAM), 199(2):199-206, 2006. Special issue on Scientific Computing, Computer Arithmetic, and Validated Numerics (SCAN 2004). (PDF, 167035 bytes) [click for download]]; that is, computing a floating-point approximation and verifying the result together take the same computing time as computing the floating-point approximation alone.
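A sketch of the underlying idea in Python: if the floating-point Cholesky factorization of A - s*I succeeds and s rigorously dominates all rounding errors of the factorization, then A is positive definite. The bound s below is a deliberately simplified placeholder in the spirit of classical gamma_n-type estimates; the rigorous (and much sharper) bound must be taken from [134].

    # Verifying positive definiteness via a shifted Cholesky factorization.
    # CAUTION: 's' is a simplified PLACEHOLDER bound, not the rigorous
    # bound of [134]; this only sketches the structure of the method.
    import numpy as np

    def verify_spd(A):
        n = A.shape[0]
        eps = np.finfo(float).eps / 2              # relative rounding error unit
        s = n * eps * np.linalg.norm(A, np.inf)    # placeholder error bound
        try:
            np.linalg.cholesky(A - s * np.eye(n))  # floating-point Cholesky
            return True                            # factorization ran to completion
        except np.linalg.LinAlgError:
            return False                           # no conclusion possible

    print(verify_spd(np.array([[4.0, 1.0], [1.0, 3.0]])))  # True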


Verification methods

Developments in verification methods until 2012 are summarized in my 163-page overview article [160 S.M. Rump. Verification methods: Rigorous results using floating-point arithmetic. Acta Numerica, 19:287-449, 2010.]. In my habilitation [19 S.M. Rump. Solving Algebraic Problems with High Accuracy. Habilitationsschrift. In U.W. Kulisch and W.L. Miranker, editors, A New Approach to Scientific Computation, pages 51-120. Academic Press, New York, 1983.] verification methods for a number of numerical problems including general systems of nonlinear equations are derived (unfortunately, the printed version contains many typos due to unfortunate circumstances). This latter algorithm together with others is also implemented in INTLAB [133 S.M. Rump. INTLAB - Interval Laboratory, the Matlab toolbox for verified computations, Version 6.0, 2008.]. An introduction to verification or self-validating methods is given in [101 S.M. Rump. Self-validating methods. Linear Algebra and its Applications (LAA), 324:3-13, 2001. (PDF, 163273 bytes) [click for download]], some historical background on intervals in [91 D. Dennis, V. Kreinovich, and S.M. Rump. Intervals and the Origins of Calculus. Reliable Computing, 4(2):191-197, 1998. (PDF, 205001 bytes) [click for download]]. An often cited and analyzed ([CuyVerBecKut01 A. Cuyt, B. Verdonk, S. Becuwe and P. Kuterna. A Remarkable Example of Catastrophic Cancellation Unraveled. Computing, 66(3):309-320, 2001. (doi: 10.1007/s006070170028)], [LohWal02 E. Loh and W. Walster. Rump's Example Revisited. Reliable Computing, 8(3):245-248, 2002. (doi: 10.1023/A:1015569431383)]) example for which identical but erroneous results are computed in single, double, and extended precision is given in [49 S.M. Rump. Algorithms for Verified Inclusions - Theory and Practice. In R.E. Moore, editor, Reliability in Computing, volume 19 of Perspectives in Computing, pages 109-126. Academic Press, 1988. (PDF, 543373 bytes) [click for download]]; more examples can be found in [22 S.M. Rump. How Reliable are Results of Computers?, Translation of 'Wie zuverlässig sind die Ergebnisse unserer Rechenanlagen?'. Jahrbuch Überblicke Mathematik, pages 163-168, 1983. (PDF, 392516 bytes) [click for download]].
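The example from [49] reads as follows. Evaluated naively in IEEE 754 double precision the result is drastically wrong, while exact rational arithmetic reveals the true value of about -0.8274:

    # The example from [49]: massive cancellation makes the double
    # precision result drastically wrong; Fraction gives the exact value.
    from fractions import Fraction

    def f(x, y, c1, c2):
        return (c1*y**6 + x**2*(11*x**2*y**2 - y**6 - 121*y**4 - 2)
                + c2*y**8 + x/(2*y))

    print(f(77617.0, 33096.0, 333.75, 5.5))             # drastically wrong
    print(float(f(Fraction(77617), Fraction(33096),
                  Fraction('333.75'), Fraction('5.5'))))  # -0.827396059946821...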

In [179 S.M. Rump. Accurate solution of dense linear systems, Part I: Algorithms in rounding to nearest. Journal of Computational and Applied Mathematics (JCAM), 242:157-184, 2013.] algorithms for the verified solution of linear systems using solely rounding to nearest are presented. Using tricky error estimates, fairly sharp inclusions can be computed very fast. Since no directed rounding is necessary, those algorithms are easily portable. In [180 S.M. Rump. Accurate solution of dense linear systems, Part II: Algorithms using directed rounding. Journal of Computational and Applied Mathematics (JCAM), 242:185-212, 2013.] algorithms for the verified solution of linear systems using directed rounding are presented. These algorithms are very fast and accurate and are based on H-matrices. This paper also describes a method for computing inner inclusions, even if only one entry of the matrix or the right hand side is a fat interval.

In [52 S.M. Rump. Guaranteed Inclusions for the Complex Generalized Eigenproblem. Computing, 42:225-238, 1989. (PDF, 789892 bytes) [click for download]] the complex generalized eigenproblem is treated; in [96 S.M. Rump. Computational Error Bounds for Multiple or Nearly Multiple Eigenvalues. Linear Algebra and its Applications (LAA), 324:209-226, 2001. (PDF, 219485 bytes) [click for download]] an algorithm is given to compute bounds for multiple or clustered eigenvalues and their invariant subspace; and in [116 S.M. Rump and J. Zemke. On eigenvector bounds. BIT, 43:823-837, 2004. (PDF, 194763 bytes) [click for download]] we show under general assumptions that an accurate inclusion of a single eigenvector corresponding to a multiple eigenvalue of geometric multiplicity greater than one is not possible, even for symmetric/Hermitian/normal matrices.

In [110 S.M. Rump. Ten methods to bound multiple roots of polynomials. Journal of Computational and Applied Mathematics (JCAM), 156:403-432, 2003. (PDF, 332813 bytes) [click for download]], ten different methods are given to compute inclusions of multiple roots of polynomials.

For multiple roots of (systems of) nonlinear equations there are two approaches. First, in [152 S.M. Rump and S. Oishi. Verified Error Bounds for Multiple Roots of Nonlinear Equations. In 2009 International Symposium on Nonlinear Theory and its Applications, NOLTA'09, Sapporo, Japan, 2009. (PDF, 161890 bytes) [click for download]] a method is described to verify that the given problem has exactly k roots in a complex disc, improving a method by Neumaier [Neu88 A. Neumaier. An Existence Test for Root Clusters and Multiple Roots. Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM), 68(6):256-257, 1988.]. In principle the multiplicity itself cannot be verified, only that there are k roots in total in that disc. Second, in [156 S.M. Rump and S. Graillat. Verified error bounds for multiple roots of systems of nonlinear equations. Numerical Algorithms, 54(3):359-377, 2010. (PDF, 440922 bytes) [click for download]] a method is described to verify that a slightly perturbed problem has a k-fold root within computable error bounds, where a bound for the perturbation is computed as well.

Based on work by Neumaier, it is shown in [57 S.M. Rump. Approximate inverses of almost singular matrices still contain useful information. Technical Report 90.1, Forschungsschwerpunkt Informations- und Kommunikationstechnik, TU Hamburg-Harburg, 1990. (PDF, 1870115 bytes) [click for download]] and [74 S.M. Rump. Verification Methods for Dense and Sparse Systems of Equations. In J. Herzberger, editor, Topics in Validated Computations - Studies in Computational Mathematics, pages 63-136, Elsevier, Amsterdam, 1994. (PDF, 498000 bytes) [click for download]] how inner inclusions of the solution set of a linear or nonlinear system with interval coefficients can be obtained practically free of cost together with a verified inclusion. Jansson [231 C. Jansson. Interval Linear Systems with Symmetric Matrices, Skew-Symmetric Matrices, and Dependencies in the Right Hand Side. Computing, 46:265-274, 1991.] shows how to compute inclusions for linear systems with symmetric matrix, taking dependencies into account and thus obtaining much better inclusions. Based on that, it is shown in [74 S.M. Rump. Verification Methods for Dense and Sparse Systems of Equations. In J. Herzberger, editor, Topics in Validated Computations - Studies in Computational Mathematics, pages 63-136, Elsevier, Amsterdam, 1994. (PDF, 498000 bytes) [click for download]] how to solve general parameterized systems of linear interval equations.

Methods based on the Krawczyk operator or its modifications require a matrix inversion and an interval matrix multiplication. Compared to Gaussian elimination this implies a factor of about 9 in computing time (roughly 2n^3 flops for the approximate inverse R and another 4n^3 for the interval product I-RA, versus 2n^3/3 for the elimination). In [107 S. Oishi and S.M. Rump. Fast verification of solutions of matrix equations. Numer. Math., 90(4):755-773, 2002. (PDF, 226356 bytes) [click for download]] we present a method where the computational effort of the verification is the same as for Gaussian elimination, so that the total verification process takes twice as long as a pure floating-point elimination. In [136 S.M. Rump and T. Ogita. Super-Fast Validated Solution of Linear Systems. J. Comput. Appl. Math. (JCAM), 199(2):199-206, 2006. Special issue on Scientific Computing, Computer Arithmetic, and Validated Numerics (SCAN 2004). (PDF, 167035 bytes) [click for download]] a super-fast method for sparse s.p.d. linear systems is given where this factor is reduced to 1 (see Sparse matrices).

In about 1984 I derived a method (see the topic INTLAB on http://www.ti3.tu-harburg.de) to solve linear systems with extremely ill-conditioned system matrices, see also [57 S.M. Rump. Approximate inverses of almost singular matrices still contain useful information. Technical Report 90.1, Forschungsschwerpunkt Informations- und Kommunikationstechnik, TU Hamburg-Harburg, 1990. (PDF, 1870115 bytes) [click for download]]. The method uses only IEEE 754 basic operations plus an accurate dot product, as for example in [130 S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point Summation part I: Faithful Rounding. SIAM J. Sci. Comput., 31(1):189-224, 2008. (PDF, 457457 bytes) [click for download]], [131 S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point Summation part II: Sign, K-fold faithful and rounding to nearest. SIAM J. Sci. Comput., 31(2):1269-1302, 2008. (PDF, 377632 bytes) [click for download]]. It is an iterative method reducing the condition number in each step by about a factor eps. I never published the method because of the lack of analysis.

The method uses multiplicative rather than additive corrections [140 S.M. Rump. Error bounds for extremely ill-conditioned problems. Proceedings of 2006 International Symposium on Nonlinear Theory and its Applications, Bologna, Italy, September 11-14, 2006. (PDF, 101159 bytes) [click for download]] (see also [121 T. Ohta, T. Ogita, S.M. Rump, and S. Oishi. Numerical Verification Method for Arbitrarily Ill-conditioned Linear Systems. Transactions on the Japan Society for Industrial and Applied Mathematics (Trans. JSIAM), 15(3):269-287, 2005. (PDF, 118303 bytes) [click for download]]; this paper received the Best Paper Award 2005 of the Japanese SIAM). To test the method it was necessary to construct arbitrarily ill-conditioned matrices exactly storable in single or double precision [59 S.M. Rump. A Class of Arbitrarily Ill-conditioned Floating-Point Matrices. SIAM Journal on Matrix Analysis and Applications (SIMAX), 12(4):645-653, 1991. (PDF, 129968 bytes) [click for download]]. It still seems mysterious that an approximate inverse computed in double precision (i.e. 53 bits precision) of a matrix with condition number greater than 10^300 still contains useful information. A first analysis with some restrictions is given in [137 S. Oishi, K. Tanabe, T. Ogita, and S.M. Rump. Convergence of Rump's Method for Inverting Arbitrarily Ill-Conditioned Matrices. J. Comput. Appl. Math., 205(1):533-544, 2007. (PDF, 126985 bytes) [click for download]], and based on that a full analysis is derived in [146 S.M. Rump. Inversion of extremely ill-conditioned matrices in floating-point. Japan J. Indust. Appl. Math. (JJIAM), 26:1-29, 2009. (PDF, 348003 bytes) [click for download]].


Computer Algebra and computer-assisted proofs

Verification or self-validating methods [101 S.M. Rump. Self-validating methods. Linear Algebra and its Applications (LAA), 324:3-13, 2001. (PDF, 163273 bytes) [click for download]] can be used to assist mathematical proofs. Connections between self-validating methods and computer-assisted proofs are described in [115 S.M. Rump. Computer-assisted proofs and Self-Validating Methods. In B. Einarsson, editor, Handbook on Accuracy and Reliability in Scientific Computation, pages 195-240. SIAM, 2005. (PDF, 4690342 bytes) [click for download]], a paper which was also translated into Japanese [117 S.M. Rump. Computer-Assisted Proofs I. Bulletin of the Japan Society for Industrial and Applied Mathematics (Bull. JSIAM), 14(3):2-11, 2004, translated by T. Ogita. (PDF, 115508 bytes) [click for download]], [120 S.M. Rump. Computer-Assisted Proofs II. Bulletin of the Japan Society for Industrial and Applied Mathematics (Bull. JSIAM), 14(4):44-57, 2004, translated by T. Ogita. (PDF, 147675 bytes) [click for download]]. My early work in computer algebra includes the computation of the sign of a real algebraic number [1 S.M. Rump. On the Sign of a Real Algebraic Number. In Proceedings of the 1976 ACM Symposium on Symbolic and Algebraic Computation, pages 238-241, New York, 1976. (PDF, 188515 bytes) [click for download]], which is used to compute separating intervals for the real roots of polynomials with coefficients in algebraic number fields [1 S.M. Rump. Ein Algorithmus zur Isolierung der reellen Nullstellen eines Polynoms mit algebraischen Koeffizienten, Rechenzeitanalyse und Implementierung. Master's thesis, Universität Kaiserslautern, 1976.]. For the analysis of this algorithm a bound for the minimal root separation was derived [6 S.M. Rump. Polynomial Minimum Root Separation. Mathematics of Computation, 33(145):327-336, 1979. (PDF, 1185999 bytes) [click for download]], the first bound of this quality for polynomials with multiple roots.


Hedge-Effectiveness

There are a number of measures to certify the effectiveness of hedges. In [360 A.C. Hailer and S.M. Rump. Hedge-Effektivität: Lösung des Problems der kleinen Zahlen. (Hedge Accounting nach FAS 133 bzw. IAS 39). Zeitschrift für das gesamte Kreditwesen, 56(11):599-603, 2003.] we address the problem of small numbers, and in [362 A.C. Hailer and S.M. Rump. Evaluierung von Hedge-Effektivitätstests. Zeitschrift für das gesamte Kreditwesen, 58(20):1089-1097, 2005.] we give a number of criteria for optimal tests together with a new hedge effectiveness test [361 A. Hailer and S.M. Rump. Evaluation of Hedge Effectiveness Tests. Journal of Derivatives Accounting (JDA), 2(1):31-52, 2005. (PDF, 497918 bytes) [click for download]]. Our methods are apparently used by large banks.

