Training time depends on the number of training samples (i.e. annotated pixels). Adding many very similar-looking pixels will not improve the classifier, but will increase training time considerably. It is good practice to start with a few annotations, switch to Live Update, and correct the classifier where it is wrong.
Lazy access (and parallelization) requires file formats that store volumes in chunks (square tiles or blocks).
File formats that allow efficient reading of sub-volumes will perform better.
In ilastik we support:
- `.h5` (HDF5) for small/medium data,
- `.n5` for large data.
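To illustrate why chunked storage helps, here is a minimal sketch of writing a volume as a chunked HDF5 file with `h5py`. The dataset name `"data"`, the chunk shape, and the compression settings are illustrative assumptions, not ilastik requirements.

```python
# Sketch: store a volume in chunks so sub-volumes can be read lazily.
# Dataset name and chunk shape are assumptions for this example.
import h5py
import numpy as np

volume = np.random.randint(0, 255, size=(128, 256, 256), dtype=np.uint8)

with h5py.File("volume.h5", "w") as f:
    # chunks=(64, 64, 64): each block is stored (and read) independently
    f.create_dataset("data", data=volume,
                     chunks=(64, 64, 64), compression="gzip")

with h5py.File("volume.h5", "r") as f:
    # Reading a sub-volume only touches the chunks that overlap it,
    # instead of loading the whole array into memory.
    sub = f["data"][0:64, 0:64, 0:64]
    print(sub.shape)
```

Because each chunk is self-contained, a reader that needs only a small sub-volume avoids scanning the entire file, which is what makes lazy access and parallel block-wise processing efficient.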
Computations in ilastik are done in parallel whenever possible. Having a CPU with multiple cores will result in faster performance.
Block-wise computations become more efficient with increasing block size, so having more RAM available means ilastik can work more efficiently. 3D data will in general require more RAM; for example, we would not recommend attempting to process 3D data in the Autocontext workflow with less than 32 GB of RAM.
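A quick back-of-the-envelope calculation shows why 3D data is RAM-hungry. The volume shape, feature count, and data type below are illustrative assumptions, not ilastik internals:

```python
# Rough RAM estimate for holding pixel features of a 3D volume in memory.
# All numbers are illustrative assumptions for this sketch.
shape = (500, 500, 500)   # z, y, x voxels
n_features = 30           # e.g. filter responses at several scales
bytes_per_value = 4       # float32 feature values

voxels = shape[0] * shape[1] * shape[2]
feature_bytes = voxels * n_features * bytes_per_value
print(f"Feature storage alone: {feature_bytes / 1e9:.1f} GB")  # 15.0 GB
```

Even this modest 500-voxel cube needs far more memory for its features than the raw data itself, which is why block-wise processing and generous RAM matter for 3D workflows.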
Currently, only workflows that use deep neural networks (Neural Network Workflow, Trainable Domain Adaptation) support running their computations on a GPU.
If you have an NVIDIA graphics card, download and install the -gpu builds from our download page to profit from vastly improved performance in these workflows.
Other workflows, like Pixel Classification or Object Classification, do not use the GPU for calculations.
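If you are unsure whether your GPU is usable, a quick way to check is via PyTorch, which the neural-network workflows build on. This is a sketch assuming PyTorch is installed in your environment:

```python
# Sketch: check whether a CUDA-capable GPU is visible to PyTorch.
# Assumes PyTorch is installed; the check itself is generic, not
# specific to ilastik.
import torch

gpu_ok = torch.cuda.is_available()
if gpu_ok:
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU found; computations fall back to the CPU.")
```

If this reports no GPU, the -gpu builds will still run, but the neural-network workflows will execute on the CPU and be considerably slower.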
Apple Silicon Hardware is fully supported in the latest beta release.