# QTT Decomposition

This guide is meant as a quick start into `xerus`. It introduces some basic functionality of the library, demonstrates the general layout and is enough for very basic applications. It is nonetheless recommended to also have a look at the more detailed guides for all classes one wishes to use - or even at the doxygen class documentation for details on all functions.

It is assumed that you have already obtained and compiled the library itself as well as know how to link against it. If this is not the case, please refer to the building xerus page.

In the following we will solve a FEM equation arising from the heat equation using a QTT decomposition and the ALS algorithm.

To start we create the stiffness matrix as a dense (i.e. not sparse or decomposed) tensor.
To define the entries we pass a function to the constructor of the `Tensor` object; it will be
called for every entry with a vector of `size_t` integers giving the indices of that entry.
As it is simpler to think of the stiffness matrix in its original form rather than in its QTT form, we
set the dimensions to 512x512.

```
xerus::Tensor A({512,512}, [](const std::vector<size_t> &idx){
    if (idx[0] == idx[1]) {
        return 2.0;
    } else if (idx[1] == idx[0]+1 || idx[1]+1 == idx[0]) {
        return -1.0;
    } else {
        return 0.0;
    }
});
```

```
def A_fill(idx):
    if idx[0] == idx[1]:
        return 2.0
    elif idx[1] == idx[0]+1 or idx[1]+1 == idx[0]:
        return -1.0
    else:
        return 0.0
A = xerus.Tensor.from_function([512,512], A_fill)
```
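As a cross-check independent of `xerus`, the same tridiagonal matrix can be built with plain numpy (a small stand-in size is used here, not part of the tutorial's code) and verified to be symmetric positive definite - the property that justifies the `_SPD` variant of the ALS used further below.

```
import numpy as np

N = 8  # small stand-in for the 512 used in the tutorial
A = np.zeros((N, N))
for r in range(N):
    for c in range(N):
        if r == c:
            A[r, c] = 2.0
        elif c == r + 1 or c + 1 == r:
            A[r, c] = -1.0

# the 1D Laplacian stencil is symmetric ...
assert np.allclose(A, A.T)
# ... and all its eigenvalues 2 - 2*cos(k*pi/(N+1)) are positive
assert np.all(np.linalg.eigvalsh(A) > 0)
```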

To account for the $ h^2 $ factor that we have ignored so far we simply multiply the operator by $ N^2 = 512^2 $.

```
A *= 512*512;
```

```
A *= 512*512
```

By reinterpreting the dimensions and thus effectively treating the tensor as a $ 2^{18} $ instead of a $ 512^2 $ tensor,
the decomposition into a `TTOperator` will give us the stiffness matrix in a QTT format.

```
A.reinterpret_dimensions(std::vector<size_t>(18, 2));
xerus::TTOperator ttA(A);
```

```
A.reinterpret_dimensions([2,]*18)
ttA = xerus.TTOperator(A)
```
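The reinterpretation step has a direct numpy analogue (shown here only for illustration; `xerus` performs it without touching the data): reshaping a 512x512 array into 18 binary modes leaves the entries unchanged, only the addressing changes - each index of the original matrix becomes its 9 binary digits.

```
import numpy as np

M = np.arange(512 * 512, dtype=float).reshape(512, 512)
Q = M.reshape([2] * 18)  # same data, now addressed by 18 binary indices

# entry (row, col) corresponds to the binary digits of row followed by col
row, col = 5, 300
bits = tuple(int(b) for b in format(row, '09b') + format(col, '09b'))
assert Q[bits] == M[row, col]
```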

As the Laplace operator is exactly representable in a low-rank QTT format, the ranks of `ttA`
should be low after this construction.
We can verify this by printing them:

```
using xerus::misc::operator<<; // to be able to pipe vectors
std::cout << "ttA ranks: " << ttA.ranks() << std::endl;
```

```
print("ttA ranks:", ttA.ranks())
```

For the right-hand side we will take a simple tensor that is equal to 1 at every position. As this is a commonly used tensor,
we can simply use the named constructor provided by `xerus`.

```
auto b = xerus::Tensor::ones(std::vector<size_t>(9, 2));
auto ttb = xerus::TTTensor::ones(b.dimensions);
```

```
b = xerus.Tensor.ones([2,]*9)
ttb = xerus.TTTensor.ones(b.dimensions)
```

To have an initial guess for the ALS algorithm we create a random TTTensor of the desired rank (3 in this case - note that this is the exact rank of the solution).

```
xerus::TTTensor ttx = xerus::TTTensor::random(std::vector<size_t>(9, 2), std::vector<size_t>(8, 3));
```

```
ttx = xerus.TTTensor.random([2,]*9, [3,]*8)
```
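To see why such a rank-3 guess is cheap, one can count its parameters: a TT tensor with 9 modes of dimension 2 and all internal ranks 3 stores one core of shape $ r_{k-1} \times 2 \times r_k $ per mode, with boundary ranks equal to 1. A quick sketch of that arithmetic (plain Python, not part of the tutorial's code):

```
# ranks including the boundary ranks r_0 = r_9 = 1
ranks = [1] + [3] * 8 + [1]
dims = [2] * 9

# each TT core k has shape (ranks[k], dims[k], ranks[k+1])
tt_params = sum(ranks[k] * dims[k] * ranks[k + 1] for k in range(9))
full_params = 2 ** 9

print(tt_params, full_params)  # 138 vs 512
```

For this small example the saving is modest, but the TT count grows linearly in the number of modes while the full tensor grows exponentially.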

With these three tensors (the operator `ttA`, the right-hand side `ttb` and the initial guess `ttx`)
we can now run the ALS algorithm to solve for `ttx` (note that the `_SPD` suffix chooses the variant of the ALS
that assumes that the given operator is symmetric positive definite):

```
xerus::ALS_SPD(ttA, ttx, ttb);
```

```
xerus.ALS_SPD(ttA, ttx, ttb)
```

To verify the calculation performed by the ALS we will need to perform some arithmetic operations. As these require the definition of (relative) index orderings in the tensors, we first define some indices:

```
xerus::Index i,j,k;
```

```
i,j,k = xerus.indices(3)
```

and use these in calculations like `A(i,j)*x(j) - b(i)`. Note though, that our tensors are of a higher
degree due to the QTT decomposition, so we need to specify the dimension of the
multiindices i, j and k with e.g. `i^9` to denote a multiindex spanning 9 indices.

```
double residual = frob_norm( ttA(i^9,j^9)*ttx(j^9) - ttb(i^9) );
std::cout << "residual: " << residual << std::endl;
```

```
residual = xerus.frob_norm( ttA(i^9,j^9)*ttx(j^9) - ttb(i^9) )
print("residual:", residual)
```

To get the actual error of the ALS solution (rather than just the residual) we can calculate the exact solution of the system using the dense tensors `A`, `x` and `b` and the `/` operator:

```
xerus::Tensor x;
x(j^9) = b(i^9) / A(i^9, j^9);
```

```
x = xerus.Tensor()
x(j^9) << b(i^9) / A(i^9, j^9)
```
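The `/` operator is xerus' notation for a dense linear solve. Its numpy counterpart (illustration only, with a small stand-in size instead of the tutorial's 512) would be:

```
import numpy as np

N = 8  # stand-in for the tutorial's N = 512
A = (np.diag(2.0 * np.ones(N))
     + np.diag(-1.0 * np.ones(N - 1), 1)
     + np.diag(-1.0 * np.ones(N - 1), -1)) * N * N
b = np.ones(N)

# dense counterpart of x(j^9) = b(i^9) / A(i^9, j^9)
x = np.linalg.solve(A, b)

# the residual of the exact solve is at machine-precision level
assert np.linalg.norm(A @ x - b) < 1e-10
```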

To compare this exact solution `x` with the ALS solution `ttx`, we have to decide whether we want to cast
the TTTensor to a Tensor or vice versa in order to subtract them.

```
std::cout << "error: " << frob_norm(x - xerus::Tensor(ttx)) << std::endl;
```

```
print("error:", xerus.frob_norm(x - xerus.Tensor(ttx)))
```

The expected output of this whole program now looks something like

```
ttA ranks: { 3, 3, 3, 3, 3, 3, 3, 3 }
residual: 4.93323e-11
error: 1.48729e-20
```

We could now also print the solution with `x.to_string()` or store it in a file to plot it with another program.
We could change our operator to define other FEM systems and many more things. As this is only a very short
introduction though, we will stop here and refer the interested reader to either the following more detailed guides or
to their own curiosity.
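For instance, a plotting tool like gnuplot only needs one "position value" pair per line. A minimal sketch of such a dump (the list `values` is a hypothetical placeholder for the solution entries extracted from the Tensor `x`; it is not part of the tutorial's code):

```
# hypothetical extracted solution values; in the tutorial these would come
# from the 512 entries of the Tensor x
values = [0.1, 0.4, 0.4, 0.1]

with open("solution.dat", "w") as f:
    for pos, val in enumerate(values):
        # one "position value" pair per line, as expected by
        # e.g. gnuplot's  plot "solution.dat"
        f.write(f"{pos} {val}\n")
```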

The full source code of this tutorial looks as follows:

```
#include <xerus.h>

int main() {
    // construct the stiffness matrix A using a lambda function
    // and dividing it by h^2 = multiplying it with N^2
    xerus::Tensor A({512,512}, [](const std::vector<size_t> &idx){
        if (idx[0] == idx[1]) {
            return 2.0;
        } else if (idx[1] == idx[0]+1 || idx[1]+1 == idx[0]) {
            return -1.0;
        } else {
            return 0.0;
        }
    });
    A *= 512*512;

    // reinterpret the 512x512 tensor as a 2^18 tensor
    // and create (Q)TT decomposition of it
    A.reinterpret_dimensions(std::vector<size_t>(18, 2));
    xerus::TTOperator ttA(A);

    // and verify its rank
    using xerus::misc::operator<<;
    std::cout << "ttA ranks: " << ttA.ranks() << std::endl;

    // the right hand side of the equation both as Tensor and in (Q)TT format
    auto b = xerus::Tensor::ones(std::vector<size_t>(9, 2));
    auto ttb = xerus::TTTensor::ones(b.dimensions);

    // construct a random initial guess of rank 3 for the ALS algorithm
    xerus::TTTensor ttx = xerus::TTTensor::random(std::vector<size_t>(9, 2), std::vector<size_t>(8, 3));

    // and solve the system with the default ALS algorithm for symmetric positive definite operators
    xerus::ALS_SPD(ttA, ttx, ttb);

    // to perform arithmetic operations we need to define some indices
    xerus::Index i,j,k;

    // calculate the residual of the just solved system to evaluate its accuracy
    // here i^9 denotes a multiindex named i of dimension 9 (i.e. spanning 9 indices of the respective tensors)
    double residual = frob_norm( ttA(i^9,j^9)*ttx(j^9) - ttb(i^9) );
    std::cout << "residual: " << residual << std::endl;

    // as a comparison solve the system exactly using the Tensor / operator
    xerus::Tensor x;
    x(j^9) = b(i^9) / A(i^9, j^9);

    // and calculate the Frobenius norm of the difference
    std::cout << "error: " << frob_norm(x - xerus::Tensor(ttx)) << std::endl;
}
```

```
import xerus as xe

# construct the stiffness matrix A using a fill function
def A_fill(idx):
    if idx[0] == idx[1]:
        return 2.0
    elif idx[1] == idx[0]+1 or idx[1]+1 == idx[0]:
        return -1.0
    else:
        return 0.0
A = xe.Tensor.from_function([512,512], A_fill)

# and dividing it by h^2 = multiplying it with N^2
A *= 512*512

# reinterpret the 512x512 tensor as a 2^18 tensor
# and create (Q)TT decomposition of it
A.reinterpret_dimensions([2,]*18)
ttA = xe.TTOperator(A)

# and verify its rank
print("ttA ranks:", ttA.ranks())

# the right hand side of the equation both as Tensor and in (Q)TT format
b = xe.Tensor.ones([2,]*9)
ttb = xe.TTTensor.ones(b.dimensions)

# construct a random initial guess of rank 3 for the ALS algorithm
ttx = xe.TTTensor.random([2,]*9, [3,]*8)

# and solve the system with the default ALS algorithm for symmetric positive definite operators
xe.ALS_SPD(ttA, ttx, ttb)

# to perform arithmetic operations we need to define some indices
i,j,k = xe.indices(3)

# calculate the residual of the just solved system to evaluate its accuracy
# here i^9 denotes a multiindex named i of dimension 9 (i.e. spanning 9 indices of the respective tensors)
residual = xe.frob_norm( ttA(i^9,j^9)*ttx(j^9) - ttb(i^9) )
print("residual:", residual)

# as a comparison solve the system exactly using the Tensor / operator
x = xe.Tensor()
x(j^9) << b(i^9) / A(i^9, j^9)

# and calculate the Frobenius norm of the difference
print("error:", xe.frob_norm(x - xe.Tensor(ttx)))
```