How to use Legion nodes


Legion Overview

The Legion servers are best for problems that can be highly parallelized, particularly where each thread is doing something slightly different from the others.  Each Legion node has a very large number of cores available, but those cores are slower than the cores on a typical Intel Xeon processor like you'll find on our other servers.  Because of the large core count, these servers are faster at processing certain types of workloads, such as molecular dynamics, genome alignment, and Monte Carlo simulations.

To use the new Legion servers, you will need to connect to the head node via ssh.  The Legion system uses the SLURM job scheduler, like the Condo cluster.
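For example, connecting from a terminal might look like the following (the hostname and username here are placeholders; substitute the actual Legion head node address and your own account name):

```shell
# Connect to the Legion head node over ssh.
# "legion.example.edu" and "your_netid" are placeholders --
# use the real head node hostname and your own username.
ssh your_netid@legion.example.edu
```

Once logged in to the head node, you submit work through the scheduler rather than running compute jobs directly on the head node.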

The Legion hardware is unusual: it is a good solution for highly parallel problems, but a poor solution for problems that spend large portions of time on a single thread.  If you are interested in using the Legion servers, please contact us for a consultation so we can help ensure the resource is used efficiently.

Some software may need special settings, or may need to be recompiled, to take full advantage of the performance benefits of the Intel Xeon Phi processors in the Legion nodes.  We're happy to help with this.
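As a rough illustration, compilers that know about the Xeon Phi (Knights Landing) architecture can target its AVX-512 instructions.  The exact flags depend on your compiler and toolchain version, and the source file name below is a placeholder:

```shell
# GCC: optimize and tune for the Knights Landing (Xeon Phi x200) architecture
gcc -O3 -march=knl -fopenmp mycode.c -o mycode

# Intel compiler: generate AVX-512 code specific to Xeon Phi
icc -O3 -xMIC-AVX512 -qopenmp mycode.c -o mycode
```

OpenMP (or another threading model) matters here: a binary that uses only one thread gains little from the many slow cores on a Legion node.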

Legion Job Scheduler

We use the SLURM job scheduler on Legion.  Partitions (also known as queues) are assigned automatically based on the amount of time and the number of cores you request.

SLURM examples can be found here:
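As a starting point, a minimal batch script might look like the sketch below.  The job name, module name, and program are placeholders; note that no partition is specified, since Legion assigns one from the requested time and cores:

```shell
#!/bin/bash
#SBATCH --job-name=my_parallel_job   # placeholder job name
#SBATCH --time=02:00:00              # requested walltime (hh:mm:ss)
#SBATCH --nodes=1                    # run on a single Legion node
#SBATCH --ntasks-per-node=64         # adjust to your workload's parallelism
# No --partition flag: the scheduler picks a partition
# from the time and cores requested above.

module load my_application           # placeholder module name
srun ./my_program                    # placeholder program
```

Submit the script with `sbatch myjob.sh` and check its status with `squeue -u $USER`.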

Legion Storage

Similar to the model used on the Condo cluster, your /home storage is rather small and is meant primarily for your profile.  The /work directory is where your working data should be placed.  If you are a member of jones-lab, for example, you should use the path /work/LAS/jones-lab for your data and your scripts.

Your permanent data should be copied to the Isilon at /lss/research/jones-lab.  The /work directory should only be used as temporary storage while you're working on your project on the Legion nodes.
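As a sketch of that workflow, using the jones-lab paths from the example above (adjust them for your own group; the data and results directory names are placeholders):

```shell
# Stage input data from long-term storage (Isilon) into /work before a run:
cp -r /lss/research/jones-lab/input_data /work/LAS/jones-lab/

# ... run your Legion jobs against /work/LAS/jones-lab/input_data ...

# When the project wraps up, copy results back to long-term storage:
cp -r /work/LAS/jones-lab/results /lss/research/jones-lab/

# Then remove the temporary copies from /work:
rm -r /work/LAS/jones-lab/input_data /work/LAS/jones-lab/results
```

Keeping /work tidy this way leaves room for other groups' active projects and ensures your only permanent copy lives on the Isilon.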