Tutorial 1/3 - Core features

Getting started

In order to use Eigen, you just need to download and extract Eigen's source code. It is not necessary to use CMake or install anything.

Here are some quick compilation instructions with GCC. To quickly test an example program, just do

g++ -I /path/to/eigen2/ my_program.cpp -o my_program

There is no library to link to. For good performance, add the -O2 compiler flag. Note however that this makes it impossible to debug inside Eigen code, as many functions get inlined. In some cases, performance can be further improved by disabling Eigen's assertions: use -DEIGEN_NO_DEBUG or -DNDEBUG to disable them.

On the x86 architecture, the SSE2 instruction set is not enabled by default. Use -msse2 to enable it, and Eigen will then automatically enable its vectorized paths. On x86-64 and AltiVec-based architectures, vectorization is enabled by default.
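For example, on x86 an optimized build that combines the flags mentioned above might look like this (the include path is simply wherever you extracted Eigen):

g++ -O2 -msse2 -DNDEBUG -I /path/to/eigen2/ my_program.cpp -o my_program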

Simple example with fixed-size matrices and vectors

By fixed-size, we mean that the numbers of rows and columns are fixed at compile-time. In this case, Eigen avoids dynamic memory allocation and unrolls loops when that makes sense. This is useful for very small sizes: typically up to 4x4, sometimes up to 16x16.

#include <Eigen/Core>
#include <iostream>
// import most common Eigen types
USING_PART_OF_NAMESPACE_EIGEN
int main(int, char *[])
{
  Matrix3i m3;                        // a 3x3 matrix of int's
  m3 << 1, 2, 3, 4, 5, 6, 7, 8, 9;
  Matrix4i m4 = Matrix4i::Identity(); // the 4x4 identity matrix of int's
  Vector4i v4(1, 2, 3, 4);
  std::cout << "m3\n" << m3 << "\nm4:\n"
            << m4 << "\nv4:\n" << v4 << std::endl;
}
output:
m3
1 2 3
4 5 6
7 8 9
m4:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
v4:
1
2
3
4

Simple example with dynamic-size matrices and vectors

By dynamic-size, we mean that the numbers of rows and columns are not fixed at compile-time. In this case, they are stored as runtime variables and the arrays are dynamically allocated.

#include <Eigen/Core>
#include <iostream>
// import most common Eigen types
USING_PART_OF_NAMESPACE_EIGEN
int main(int, char *[])
{
  for (int size=1; size<=4; ++size)
  {
    MatrixXi m(size,size+1);         // a (size)x(size+1)-matrix of int's
    for (int j=0; j<m.cols(); ++j)   // loop over columns
      for (int i=0; i<m.rows(); ++i) // loop over rows
        m(i,j) = i+j*m.rows();       // to access matrix coefficients,
                                     // use operator()(int,int)
    std::cout << m << "\n\n";
  }
  VectorXf v(4); // a vector of 4 float's
  // to access vector coefficients, use either operator () or operator []
  v[0] = 1; v[1] = 2; v(2) = 3; v(3) = 4;
  std::cout << "\nv:\n" << v << std::endl;
}
output:
0 1
0 2 4
1 3 5
0 3 6 9
1 4 7 10
2 5 8 11
0 4 8 12 16
1 5 9 13 17
2 6 10 14 18
3 7 11 15 19
v:
1
2
3
4

Warning:
* In most cases it is enough to include only the Eigen/Core header to get started with Eigen. However, some features presented in this tutorial require the Array module to be included (#include <Eigen/Array>). Those features are highlighted with a red star *. Note that if you want to include all Eigen functionality at once, you can do:
#include <Eigen/Eigen>
This slows compilation down, but at least you don't have to worry anymore about including the correct files! There is also the Eigen/Dense header, which includes all dense functionality, i.e. everything except the Sparse module.

Matrix and vector types

In Eigen, all kinds of dense matrices and vectors are represented by the template class Matrix. In most cases, you can simply use one of the convenience typedefs.

The template class Matrix takes a number of template parameters, but for now it is enough to understand the first 3 (the others can be left unspecified):

Matrix<Scalar, RowsAtCompileTime, ColsAtCompileTime>

For example, Vector3d is a typedef for

Matrix<double, 3, 1>

For dynamic-size, that is, in order to leave the number of rows or columns unspecified at compile-time, use the special value Eigen::Dynamic. For example, VectorXd is a typedef for

Matrix<double, Dynamic, 1>
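To make the correspondence concrete, here is a minimal sketch showing that the convenience typedefs are just shorthands for the Matrix template (the variable names are arbitrary):

#include <Eigen/Core>
// import most common Eigen types
USING_PART_OF_NAMESPACE_EIGEN
int main(int, char *[])
{
  Matrix<double, 3, 1> a;                   // same type as Vector3d
  Vector3d b;

  Matrix<double, Eigen::Dynamic, 1> c(10);  // same type as VectorXd, size given at run-time
  VectorXd d(10);

  Matrix<float, 4, 4> e;                    // same type as Matrix4f
  Matrix4f f;
  return 0;
}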

Coefficient access

Eigen supports the following syntaxes for read and write coefficient access:

matrix(i,j);
vector(i);
vector[i];
vector.x(); // first coefficient
vector.y(); // second coefficient
vector.z(); // third coefficient
vector.w(); // fourth coefficient

Notice that these coefficient access methods have assertions checking the ranges, so if you do a lot of coefficient access, these assertions can have a significant cost. To avoid paying it, you can disable the assertions as described above, with -DEIGEN_NO_DEBUG or -DNDEBUG.
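For instance, a minimal sketch of these access syntaxes (the values are arbitrary):

#include <Eigen/Core>
// import most common Eigen types
USING_PART_OF_NAMESPACE_EIGEN
int main(int, char *[])
{
  Matrix2f m;
  m(0,0) = 1; m(0,1) = 2;   // operator()(int,int) for matrices
  m(1,0) = 3; m(1,1) = 4;

  Vector4f v;
  v[0] = 1;                 // operator[]
  v(1) = 2;                 // operator()
  v.z() = 3;                // third coefficient
  v.w() = 4;                // fourth coefficient
  return 0;
}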

Matrix and vector creation and initialization

Predefined Matrices

Eigen offers several static methods to create special matrix expressions, and non-static methods to assign these expressions to existing matrices:

Fixed-size matrix or vector:
x = Matrix3f::Constant(value);
x.setZero();
x.setOnes();
x.setIdentity();
x.setConstant(value);
x.setRandom();

Dynamic-size matrix:
x = MatrixXf::Zero(rows, cols);
x = MatrixXf::Ones(rows, cols);
x = MatrixXf::Constant(rows, cols, value);
x = MatrixXf::Identity(rows, cols);
x = MatrixXf::Random(rows, cols);
x.setZero(rows, cols);
x.setOnes(rows, cols);
x.setConstant(rows, cols, value);
x.setIdentity(rows, cols);
x.setRandom(rows, cols);

Dynamic-size vector:
VectorXf x;
x = VectorXf::Zero(size);
x = VectorXf::Ones(size);
x = VectorXf::Constant(size, value);
x = VectorXf::Random(size);
x.setZero(size);
x.setOnes(size);
x.setConstant(size, value);
x.setIdentity(size);
x.setRandom(size);
* the Random() and setRandom() functions require the inclusion of the Array module (#include <Eigen/Array>)
Basis vectors:
Vector3f::UnitX() // 1 0 0
Vector3f::UnitY() // 0 1 0
Vector3f::UnitZ() // 0 0 1
VectorXf::Unit(4,1) == Vector4f(0,1,0,0)
                    == Vector4f::UnitY()

Here is a usage example:

cout << MatrixXf::Constant(2, 3, sqrt(2)) << endl;
RowVector3i v;
v.setConstant(6);
cout << "v = " << v << endl;
output:
1.41 1.41 1.41
1.41 1.41 1.41
v = 6 6 6

Casting

In Eigen, all matrices of the same size and the same scalar type are naturally compatible. The scalar type can be explicitly cast to another one using the template MatrixBase::cast() function:

Vector3d md(1,2,3);
Vector3f mf = md.cast<float>();

Note that casting to the same scalar type in an expression is free.

The destination matrix is automatically resized in any assignment:

MatrixXf res(10,10);
Matrix3f a, b;
res = a+b; // OK: res is resized to size 3x3

Of course, fixed-size matrices can't be resized.

Map

Any memory buffer can be mapped as an Eigen expression using the Map() static method:

std::vector<float> stlarray(10);
VectorXf::Map(&stlarray[0], stlarray.size()).squaredNorm();

Here VectorXf::Map returns an object of class Map<VectorXf>, which behaves like a VectorXf except that it uses the existing array: writing to this object writes to the existing array. You can also construct a named object in order to reuse it:

float array[rows*cols];
Map<MatrixXf> m(array,rows,cols);
m = othermatrix1 * othermatrix2;
m.eigenvalues();

In the fixed-size case, there is no need to pass the sizes:

float array[9];
Map<Matrix3f> m(array);
Matrix3f::Map(array).setIdentity();

Comma initializer

Eigen also offers a comma initializer syntax which allows you to set all the coefficients of a matrix to specific values:

m << 1, 2, 3,
     4, 5, 6,
     7, 8, 9;
cout << m;
output:
1 2 3
4 5 6
7 8 9

Not excited by the above example? Then look at the following one where the matrix is set by blocks:

int rows=5, cols=5;
MatrixXf m(rows,cols);
m << (Matrix3f() << 1, 2, 3, 4, 5, 6, 7, 8, 9).finished(),
     MatrixXf::Zero(3,cols-3),
     MatrixXf::Zero(rows-3,3),
     MatrixXf::Identity(rows-3,cols-3);
cout << m;
output:
1 2 3 0 0
4 5 6 0 0
7 8 9 0 0
0 0 0 1 0
0 0 0 0 1

Side note: here .finished() is used to get the actual matrix object once the comma initialization of our temporary submatrix is done. Note that despite the apparent complexity of such an expression, Eigen's comma initializer usually compiles to very optimized code without any overhead.

Arithmetic Operators

In short, all arithmetic operators can be used right away, as in the following example. Note however that the arithmetic operators only have their usual mathematical meaning; for other operations, such as taking the coefficient-wise product of two vectors, see the discussion of .cwise() below. Anyway, here is an example demonstrating basic arithmetic operators:

mat4 -= mat1*1.5 + mat2 * (mat3/4);

which includes two matrix-scalar products ("mat1*1.5" and "mat3/4"), a matrix-matrix product ("mat2 * (mat3/4)"), a matrix addition ("+") and a subtraction with assignment ("-=").

matrix/vector product:
col2 = mat1 * col1;
row2 = row1 * mat1; row1 *= mat1;
mat3 = mat1 * mat2; mat3 *= mat1;

add/subtract:
mat3 = mat1 + mat2; mat3 += mat1;
mat3 = mat1 - mat2; mat3 -= mat1;

scalar product:
mat3 = mat1 * s1; mat3 = s1 * mat1; mat3 *= s1;
mat3 = mat1 / s1; mat3 /= s1;
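Putting these together, here is a minimal compilable sketch of the operators above (the matrix values are arbitrary):

#include <Eigen/Core>
// import most common Eigen types
USING_PART_OF_NAMESPACE_EIGEN
int main(int, char *[])
{
  Matrix3f mat1 = Matrix3f::Identity();
  Matrix3f mat2 = Matrix3f::Ones();
  Matrix3f mat3 = Matrix3f::Constant(4);
  Matrix3f mat4 = Matrix3f::Zero();

  // two matrix-scalar products, a matrix-matrix product,
  // an addition and a subtraction with assignment
  mat4 -= mat1*1.5 + mat2 * (mat3/4);

  Vector3f col1 = Vector3f::Ones();
  Vector3f col2 = mat1 * col1;   // matrix * column-vector product
  return 0;
}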

In Eigen, only traditional mathematical operators can be used right away. But don't worry: thanks to the .cwise() operator prefix, Eigen's matrices also serve as powerful numerical containers supporting most common coefficient-wise operators.

Coefficient-wise product:
mat3 = mat1.cwise() * mat2;

Add a scalar to all coefficients *:
mat3 = mat1.cwise() + scalar;
mat3.cwise() += scalar;
mat3.cwise() -= scalar;

Coefficient-wise division *:
mat3 = mat1.cwise() / mat2;

Coefficient-wise reciprocal *:
mat3 = mat1.cwise().inverse();

Coefficient-wise comparisons * (all comparison operators are supported):
mat3 = mat1.cwise() < mat2;
mat3 = mat1.cwise() <= mat2;
mat3 = mat1.cwise() > mat2;
etc.

Trigonometric functions * (sin, cos):
mat3 = mat1.cwise().sin();
etc.

Power functions * (pow, square, cube, sqrt, exp, log):
mat3 = mat1.cwise().square();
mat3 = mat1.cwise().pow(5);
mat3 = mat1.cwise().log();
etc.

min, max, absolute value (abs, abs2):
mat3 = mat1.cwise().min(mat2);
mat3 = mat1.cwise().max(mat2);
mat3 = mat1.cwise().abs();
mat3 = mat1.cwise().abs2();

* Those functions require the inclusion of the Array module (#include <Eigen/Array>).

Side note: If you find the .cwise() syntax too verbose for your taste and prefer to have these non-conventional mathematical operators directly available, feel free to extend MatrixBase as described in Eigen's documentation on extending MatrixBase.

So far, we saw the notation

mat1*mat2

for matrix product, and

mat1.cwise()*mat2

for the coefficient-wise product. What about other kinds of products, which in some other libraries are also written with arithmetic operators? In Eigen, they are accessed as follows (note that here we are anticipating later sections, for convenience); a complete sketch follows the table.

dot product (inner product):
scalar = vec1.dot(vec2);

outer product:
mat = vec1 * vec2.transpose();

cross product:
#include <Eigen/Geometry>
vec3 = vec1.cross(vec2);
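For instance, a minimal sketch of these three products (the vector values are arbitrary):

#include <Eigen/Core>
#include <Eigen/Geometry>   // needed for cross()
// import most common Eigen types
USING_PART_OF_NAMESPACE_EIGEN
int main(int, char *[])
{
  Vector3f vec1(1, 2, 3), vec2(4, 5, 6);

  float s = vec1.dot(vec2);                  // inner product: a scalar (here 32)
  Matrix3f outer = vec1 * vec2.transpose();  // outer product: a 3x3 matrix
  Vector3f vec3 = vec1.cross(vec2);          // cross product: a 3-vector
  return 0;
}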

Reductions

Eigen provides several reduction methods such as minCoeff(), maxCoeff(), sum(), trace(), norm(), squaredNorm(), all() *, and any() *. All reduction operations can be done matrix-wise, column-wise *, or row-wise *. Usage example:

      5 3 1
mat = 2 7 8
      9 4 6

mat.minCoeff();           // 1

mat.colwise().minCoeff(); // 2 3 1

mat.rowwise().minCoeff(); // 1
                          // 2
                          // 4

Also note that maxCoeff() and minCoeff() can take optional arguments returning the coordinates of the respective max/min coefficient: maxCoeff(int* i, int* j), minCoeff(int* i, int* j).

Side note: The all() and any() functions are especially useful in combination with coefficient-wise comparison operators, as in the sketch below.
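A small compilable sketch, assuming the Array module is included:

#include <Eigen/Core>
#include <Eigen/Array>   // needed for all(), any() and coefficient-wise comparisons
// import most common Eigen types
USING_PART_OF_NAMESPACE_EIGEN
int main(int, char *[])
{
  Vector3f vec1(1, 2, 3);
  Vector3f vec2 = Vector3f::Constant(2);

  bool a = (vec1.cwise() > vec2).any();  // true: the coefficient 3 is greater than 2
  bool b = (vec1.cwise() > vec2).all();  // false: 1 and 2 are not greater than 2
  return 0;
}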

Matrix blocks

Read-write access to a column or a row of a matrix:

mat1.row(i) = mat2.col(j);
mat1.col(j1).swap(mat1.col(j2));

Read-write access to sub-vectors:

Default versions           Optimized versions when the size
                           is known at compile time

vec1.start(n)              vec1.start<n>()            the first n coeffs
vec1.end(n)                vec1.end<n>()              the last n coeffs
vec1.segment(pos,n)        vec1.segment<n>(pos)       the n coeffs in the range [pos : pos + n[

Read-write access to sub-matrices:

Default versions                         Optimized versions when the size
                                         is known at compile time

mat1.block(i,j,rows,cols)                mat1.block<rows,cols>(i,j)
  the rows x cols sub-matrix starting from position (i,j)

mat1.corner(TopLeft,rows,cols)           mat1.corner<rows,cols>(TopLeft)
mat1.corner(TopRight,rows,cols)          mat1.corner<rows,cols>(TopRight)
mat1.corner(BottomLeft,rows,cols)        mat1.corner<rows,cols>(BottomLeft)
mat1.corner(BottomRight,rows,cols)       mat1.corner<rows,cols>(BottomRight)
  the rows x cols sub-matrix taken in one of the four corners

mat4x4.minor(i,j) = mat3x3;
mat3x3 = mat4x4.minor(i,j);
  minor (read-write)
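As an illustration, here is a small compilable sketch of these block operations (sizes and values are arbitrary):

#include <Eigen/Core>
// import most common Eigen types
USING_PART_OF_NAMESPACE_EIGEN
int main(int, char *[])
{
  MatrixXf mat1 = MatrixXf::Ones(4, 4);
  MatrixXf mat2 = MatrixXf::Zero(4, 4);

  mat1.row(0) = mat2.row(1);                        // rows and columns are read-write
  mat1.col(0).swap(mat1.col(3));                    // swap two columns

  VectorXf vec1 = VectorXf::Ones(6);
  VectorXf head = vec1.start(2);                    // the first 2 coeffs
  VectorXf tail = vec1.end<3>();                    // the last 3 coeffs, size fixed at compile time
  VectorXf mid  = vec1.segment(1, 4);               // 4 coeffs starting at position 1

  MatrixXf sub = mat1.block(1, 1, 2, 2);            // 2x2 sub-matrix starting at (1,1)
  MatrixXf tl  = mat1.corner(Eigen::TopLeft, 2, 2); // 2x2 sub-matrix in the top-left corner
  return 0;
}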

Diagonal matrices

make a diagonal matrix from a vector (this product is automatically optimized!):
mat3 = mat1 * vec2.asDiagonal();

access the diagonal of a matrix as a vector (read/write):
vec1 = mat1.diagonal();
mat1.diagonal() = vec1;
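A short sketch of both operations (the values are arbitrary):

#include <Eigen/Core>
// import most common Eigen types
USING_PART_OF_NAMESPACE_EIGEN
int main(int, char *[])
{
  Matrix3f mat1 = Matrix3f::Ones();
  Vector3f vec1, vec2(1, 2, 3);

  Matrix3f mat3 = mat1 * vec2.asDiagonal();  // scales column j of mat1 by vec2[j]
  vec1 = mat1.diagonal();                    // read the diagonal of mat1 as a vector
  mat1.diagonal() = vec2;                    // write vec2 onto the diagonal of mat1
  return 0;
}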

Transpose and Adjoint operations

transposition (read-write):
mat3 = mat1.transpose() * mat2;
mat3.transpose() = mat1 * mat2.transpose();

adjoint (read only):
mat3 = mat1.adjoint() * mat2;

Dot-product, vector norm, normalization

dot-product of two vectors:
vec1.dot(vec2);

norm of a vector / squared norm of a vector:
vec.norm();
vec.squaredNorm();

returns a normalized vector / normalizes the vector in place:
vec3 = vec1.normalized();
vec1.normalize();
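A small worked sketch (using a 3-4-5 right triangle so the numbers are easy to check):

#include <Eigen/Core>
// import most common Eigen types
USING_PART_OF_NAMESPACE_EIGEN
int main(int, char *[])
{
  Vector3f vec1(3, 0, 4);

  float n  = vec1.norm();            // 5:  sqrt(3*3 + 0*0 + 4*4)
  float n2 = vec1.squaredNorm();     // 25: 3*3 + 0*0 + 4*4
  float d  = vec1.dot(vec1);         // 25: same value as the squared norm

  Vector3f vec3 = vec1.normalized(); // (0.6, 0, 0.8); vec1 is left unchanged
  vec1.normalize();                  // vec1 itself is now (0.6, 0, 0.8)
  return 0;
}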

Dealing with triangular matrices

Read/write access to the triangular parts of a matrix is available; the table below shows how to extract them (read access) and how to write to them (write access).

Extract triangular matrices from a given matrix m:
m.part<Eigen::UpperTriangular>()
m.part<Eigen::StrictlyUpperTriangular>()
m.part<Eigen::UnitUpperTriangular>()
m.part<Eigen::LowerTriangular>()
m.part<Eigen::StrictlyLowerTriangular>()
m.part<Eigen::UnitLowerTriangular>()

Write to triangular parts of a matrix m:
m1.part<Eigen::UpperTriangular>() = m2;
m1.part<Eigen::StrictlyUpperTriangular>() = m2;
m1.part<Eigen::LowerTriangular>() = m2;
m1.part<Eigen::StrictlyLowerTriangular>() = m2;

Special: take advantage of symmetry (selfadjointness) when copying an expression into a matrix:
m.part<Eigen::SelfAdjoint>() = someSelfadjointMatrix;
m1.part<Eigen::SelfAdjoint>() = m2 + m2.adjoint(); // m2 + m2.adjoint() is selfadjoint
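A minimal sketch of extracting and writing triangular parts (the values are arbitrary):

#include <Eigen/Core>
// import most common Eigen types
USING_PART_OF_NAMESPACE_EIGEN
int main(int, char *[])
{
  Matrix3f m1 = Matrix3f::Zero();
  Matrix3f m2 = Matrix3f::Constant(2);

  m1.part<Eigen::UpperTriangular>() = m2;  // writes only the upper triangular part of m1

  Matrix3f upper = m2.part<Eigen::StrictlyUpperTriangular>(); // extract the strictly upper part
  return 0;
}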

Special Topics

Lazy Evaluation and Aliasing: Thanks to expression templates, Eigen is able to apply lazy evaluation wherever that is beneficial.