Performance tips when working with ilastik

Usage tips

  • ilastik will (ideally) only compute results for the part of the data you are currently viewing: the more you zoom out, the larger the area ilastik has to compute, and the longer the view will take to update.
  • As a consequence, in 3D: if all three orthogonal views are open but you only look at one of them, ilastik does three times the work without benefit. You can maximize the view you are mainly working with by clicking the square icon in its top right corner. To get all views back, click the quad-view icon.
  • When scrolling through 3D volumes or time series, ilastik will predict every (time) slice, even ones you only scroll past → disable Live Update before doing any larger navigation.

Training shallow learning classifiers

Training time depends on the number of training samples (e.g. annotated pixels). Adding a lot of very similar-looking pixels will not improve the classifier, but will increase the training time considerably. It is good practice to start with few annotations, switch to Live Update, and correct the classifier where it is wrong.

File Formats

Lazy access (and parallelization) requires file formats that store volumes in chunks (square tiles, blocks). File formats that allow efficient reading of sub-volumes will perform better. In ilastik we support .h5 (HDF5) for small/medium data and .n5 for large data.

How to convert your data? Use our Fiji plugin (this can be done efficiently in a macro), or do it from Python, e.g. in a Jupyter notebook.
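As a minimal sketch of the Python route, the snippet below writes a NumPy volume to a chunked HDF5 file with h5py. The dataset name "data", the chunk shape, and the compression choice are illustrative assumptions, not requirements imposed by ilastik:

```python
# Sketch: store a 3D volume as chunked HDF5 so sub-volumes can be read lazily.
import numpy as np
import h5py

volume = np.random.rand(128, 256, 256).astype(np.float32)  # (z, y, x)

with h5py.File("volume.h5", "w") as f:
    f.create_dataset(
        "data",                 # dataset name is an arbitrary choice here
        data=volume,
        chunks=(64, 64, 64),    # block-wise storage enables efficient sub-volume reads
        compression="gzip",
    )

# Reading a sub-volume only touches the chunks that overlap it:
with h5py.File("volume.h5", "r") as f:
    block = f["data"][0:64, 0:64, 0:64]
```

The key point is the `chunks` argument: without it, h5py stores the dataset contiguously, and reading a small sub-volume still forces large sequential reads.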

Hardware considerations


Computations in ilastik are done in parallel whenever possible, so a CPU with multiple cores will result in faster processing.


Block-wise computations are more efficient with increasing block size, so having more RAM available means ilastik can work more efficiently. 3D data will in general require more RAM; e.g. we would not recommend attempting to process 3D data in Autocontext with less than 32 GB of RAM.
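To see why 3D data is so demanding, a back-of-envelope estimate helps. The function below is an illustrative calculation, not ilastik's internal accounting; the factor of 10 float32 feature channels is an assumed example value:

```python
# Rough memory footprint of a 3D volume with per-pixel feature channels.
def estimate_gb(shape, n_channels=10, bytes_per_value=4):
    """Estimate RAM in GB for a (z, y, x) volume of float32 feature channels.

    n_channels=10 is an illustrative guess at the number of computed
    features, not a value taken from ilastik itself.
    """
    n_values = shape[0] * shape[1] * shape[2] * n_channels
    return n_values * bytes_per_value / 1024**3

# A 1024^3 volume with 10 float32 feature channels already needs ~40 GB:
print(round(estimate_gb((1024, 1024, 1024)), 1))  # → 40.0
```

This is why block-wise processing matters: ilastik only needs to hold a few blocks (plus their features) in memory at once, but larger blocks amortize per-block overhead better, which is where extra RAM pays off.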


Currently only workflows that use deep neural networks (Neural Network Workflow, Trainable Domain Adaptation) support doing calculations on a GPU. If you have an NVIDIA graphics card, download and install the -gpu builds from our download page to benefit from vastly improved performance in these workflows.

Other workflows, like Pixel Classification or Object Classification, do not use the GPU for calculations.

Apple Silicon Support

Apple Silicon Hardware is fully supported in the latest beta release.