# Lack of use of the BLAS/LAPACK API in Scilab

On this page, we present the specific libraries in Scilab which could be improved by making use of the BLAS/LAPACK API. This API, together with optimized linear algebra libraries, increases the performance of Scilab. But some components do not use these tools and therefore have limited performance.


## Performance of linear algebra in Scilab

In Scilab, the user accesses linear algebra in essentially two ways:

- directly, when performing matrix operations such as the matrix-matrix product or the backslash operator,
- indirectly, when using a library which relies on linear algebra algorithms as an intermediate step toward the final result.

When the user performs matrix computations in Scilab, the BLAS/LAPACK API is actually used to produce the result. This makes it possible to rely on optimized linear algebra libraries such as the Intel MKL or ATLAS. Indeed, these libraries make better use of the CPU and its cache, and increase the floating-point throughput. Therefore, when we use these features of Scilab on a multi-core machine, for example, reasonably good performance can be obtained.
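The same dispatch can be observed outside Scilab. As a sketch (assuming NumPy is linked against an optimized BLAS, which is the usual build), the matrix product below is handed to the BLAS DGEMM routine, while the explicit loop shows the kind of code a library would otherwise carry internally:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
B = rng.standard_normal((200, 200))

# In a typical NumPy build this product is dispatched to the BLAS
# DGEMM routine, which exploits the CPU cache and vector units.
C_blas = A @ B

# A hand-written loop computes the same result, but without the
# cache blocking and vectorization of an optimized BLAS.
C_naive = np.zeros((200, 200))
for i in range(200):
    for k in range(200):
        C_naive[i, :] += A[i, k] * B[k, :]

assert np.allclose(C_blas, C_naive)
```

Both versions produce the same numbers; the difference lies only in how efficiently the hardware is used.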

On the other hand, not all libraries used by Scilab make use of the BLAS/LAPACK API. Hence, the performance of some components of Scilab is not, in some cases, as high as it could be. This is mainly because Scilab is built from a collection of libraries, from various authors, who used different approaches to solve their linear algebra problems. One of the reasons is that some libraries were created at a time when the BLAS/LAPACK APIs simply did not exist.

Some libraries used by Scilab do rely on the BLAS/LAPACK API, or on otherwise optimized numerical code. For example, the FFTW library is itself a highly optimized library.

By contrast, we know that some libraries in Scilab use internal linear algebra algorithms in the following fields:

- optimization, especially the optim function,
- interpolation.

This is the topic of the two following sections.

## Optimization

The nonlinear unconstrained (or bound-constrained) optimization function *optim* makes use of several linear algebra algorithms. All of them are implemented inside the library itself, i.e. no function from the BLAS or LAPACK APIs is used.

For example, the unconstrained quasi-Newton method provided by n1qn1 is available through *optim* with the "qn" option (this is the default solver). This algorithm updates a Cholesky decomposition of the Hessian matrix, then solves a linear system of equations to compute the descent direction. The main algorithm is in n1qn1a, which makes use of the majour routine (which updates the Cholesky factors). It remains to determine which BLAS or LAPACK routines could be used in this context.
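To make the linear algebra in such a step concrete, here is a minimal Python sketch (not Scilab's actual n1qn1 code; the Hessian and gradient values are made up) of computing a descent direction from a Cholesky factorization. The comments indicate the LAPACK routines that could replace each hand-written part:

```python
import math

def cholesky(H):
    """Factor a symmetric positive definite matrix H as L * L^T.
    In LAPACK this is what the DPOTRF routine computes."""
    n = len(H)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        L[j][j] = math.sqrt(H[j][j] - sum(L[j][k] ** 2 for k in range(j)))
        for i in range(j + 1, n):
            L[i][j] = (H[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

def descent_direction(L, g):
    """Solve L L^T d = -g by forward then backward substitution.
    In LAPACK this is the DPOTRS routine (or two BLAS DTRSV calls)."""
    n = len(g)
    # Forward substitution: L y = -g
    y = [0.0] * n
    for i in range(n):
        y[i] = (-g[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # Backward substitution: L^T d = y
    d = [0.0] * n
    for i in reversed(range(n)):
        d[i] = (y[i] - sum(L[k][i] * d[k] for k in range(i + 1, n))) / L[i][i]
    return d

# Toy SPD "Hessian" and gradient, for illustration only.
H = [[4.0, 2.0], [2.0, 3.0]]
g = [1.0, -1.0]
d = descent_direction(cholesky(H), g)  # d = [-0.625, 0.75]
```

Replacing the substitution loops by DPOTRS would hand the work to whatever optimized BLAS/LAPACK Scilab is linked against.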

The limited-memory BFGS algorithm provided by n1qn3a is available through *optim* with the "gc" option. This algorithm uses vector dot products and calls the fuclid routine for this purpose. Clearly, the BLAS DDOT routine should be used here.
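As an illustration, here is a hand-rolled dot product of the kind fuclid computes, sketched in Python with hypothetical data. An optimized BLAS DDOT performs exactly the same reduction, but with unrolling and vectorization tuned to the CPU:

```python
def ddot_naive(x, y):
    """Plain-loop dot product, as a library-internal routine such as
    fuclid would compute it. BLAS DDOT computes the same sum, but in
    an implementation optimized for the target processor."""
    s = 0.0
    for xi, yi in zip(x, y):
        s += xi * yi
    return s

x = [1.0, 2.0, 3.0]
y = [4.0, -5.0, 6.0]
assert ddot_naive(x, y) == 12.0  # 4 - 10 + 18
```

For a single small vector the difference is negligible, but inside an iterative solver these dot products are executed many times, so the optimized routine pays off.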

Other solvers might suffer from the same lack of use of the BLAS/LAPACK API.

## Interpolation

The interpolation module of Scilab makes it possible to create 1D, 2D and 3D splines and to evaluate the spline at given points. In order to compute the coefficients of the spline, we have to solve a linear system of equations. This system is tridiagonal, symmetric and positive definite. This is done in somespline.f, for example in the TriDiagLDLtSolve routine. This could instead be done by the LAPACK dptsv routine. Another possibility would be to estimate the condition number of the matrix with dgtsvx, so that we could generate a warning message when the matrix is ill-conditioned.
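The computation can be sketched as follows in Python (a minimal illustration on a toy system, not the somespline.f code); this LDL^T factorization followed by substitution is the computation that the LAPACK dptsv driver implements in optimized form:

```python
def tridiag_ldlt_solve(d, e, b):
    """Solve A x = b where A is symmetric positive definite tridiagonal,
    with diagonal d (length n) and subdiagonal e (length n-1).
    LAPACK's dptsv driver performs this same factorization and solve."""
    n = len(d)
    # Factor A = L D L^T: dd holds D, l holds the subdiagonal of L.
    dd = [0.0] * n
    l = [0.0] * (n - 1)
    dd[0] = d[0]
    for i in range(1, n):
        l[i - 1] = e[i - 1] / dd[i - 1]
        dd[i] = d[i] - l[i - 1] * e[i - 1]
    # Forward substitution: L z = b
    z = [0.0] * n
    z[0] = b[0]
    for i in range(1, n):
        z[i] = b[i] - l[i - 1] * z[i - 1]
    # Diagonal scaling and backward substitution: D L^T x = z
    x = [0.0] * n
    x[n - 1] = z[n - 1] / dd[n - 1]
    for i in reversed(range(n - 1)):
        x[i] = z[i] / dd[i] - l[i] * x[i + 1]
    return x

# Toy system: the 1D Laplacian [[2,-1,0],[-1,2,-1],[0,-1,2]], b = [1,0,1]
x = tridiag_ldlt_solve([2.0, 2.0, 2.0], [-1.0, -1.0], [1.0, 0.0, 1.0])
# → [1.0, 1.0, 1.0]
```

Delegating this to dptsv would remove the hand-written loops while keeping the same O(n) cost, and the expert driver variant additionally returns a reciprocal condition number estimate.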

## Conclusion

The optimization and interpolation modules of Scilab may be updated in future releases in order to make use of the BLAS/LAPACK API. This may increase the performance of these modules, so that, for example, multi-core machines benefit from their full computing power. Moreover, even single-core computations may be significantly faster. While we have identified these two modules, other modules may be updated in the same way, if performance is an issue for them.
