Research IT | Feb Updates: Condo 2017 transition, LSS, New Server, Learning Modules

February 10, 2017

***Condo2017 transition scheduled for Wed 2/15***

As I've mentioned in previous emails, our team has been working closely with the HPC team to make improvements to the Condo environment. We are now ready to make the transition, and are targeting next Wednesday 2/15 for the move. Here are the key points you need to know:

New address:
Login using your NetID and regular University password
Scheduler has been changed from PBS to SLURM
Jobs already running on Condo will be allowed to finish there, but jobs that would extend past 2/15 will not be started
You will have a new /home directory (old will be mounted read-only at /oldhome)
You will have a new /work directory (old will be mounted read-only at /oldwork)
Software modules will use RISA (like all of our other servers)
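Because the scheduler is changing from PBS to SLURM, existing job scripts and submission habits will need translating. The mapping below covers the most common equivalents (a general SLURM sketch, not Condo-specific; check the Condo2017 documentation for the exact partition and account names to use):

```shell
# Submitting and managing jobs: PBS command -> SLURM equivalent
#   qsub job.sh      -> sbatch job.sh       # submit a batch job
#   qstat -u $USER   -> squeue -u $USER     # list your queued/running jobs
#   qdel <jobid>     -> scancel <jobid>     # cancel a job

# Inside the job script, #PBS directives become #SBATCH directives:
#   #PBS -l nodes=2:ppn=16   -> #SBATCH --nodes=2 --ntasks-per-node=16
#   #PBS -l walltime=4:00:00 -> #SBATCH --time=4:00:00
#   #PBS -N myjob            -> #SBATCH --job-name=myjob
```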

The new /work/LAS directory will be on re-architected backend storage to address the performance problems some of you have recently experienced. As a temporary measure against those performance issues, the backup of /work/LAS is being suspended. Please ensure you copy your important long-term data off to /myfiles as soon as possible, and remember to always treat the /work directory as short- to medium-term storage only.

As we transition to Condo2017, please re-examine your job scripts and consider changing where you read/write data to a faster location. In general order of speed, the following storage locations are available:
$TMPDIR (storage local to each node, auto-removed when the job finishes)
/ptmp/LAS (parallel temporary storage, accessible by all nodes. Please remove your files when you're done; automatic removal happens at 90 days)
/work/LAS (parallel medium term storage, accessible by all nodes. Please copy final results you want to keep to /myfiles/las/research/yourfolder)
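To illustrate the pattern above, a job script can stage input onto the fast node-local $TMPDIR, run there, and copy final results back before the job ends. This is a hedged sketch: the job name, folder names, and program are placeholders, not actual Condo settings.

```shell
#!/bin/bash
#SBATCH --job-name=stage-example   # placeholder job name
#SBATCH --nodes=1
#SBATCH --time=2:00:00

# Stage input from medium-term parallel storage to fast node-local storage
cp /work/LAS/yourfolder/input.dat "$TMPDIR/"

# Run from $TMPDIR so reads and writes hit the fastest tier
cd "$TMPDIR"
./my_program input.dat > results.out   # placeholder program

# Copy results back before the job finishes -- $TMPDIR is auto-removed
cp results.out /work/LAS/yourfolder/
```

Remember that /work/LAS is still medium-term storage: copy final results you want to keep to /myfiles.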
Find more info here:
And here:

!!!You will need to copy the data you want to keep from /oldwork to /myfiles or /work/LAS by the end of February!!!
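For example, a one-time copy out of the read-only /oldwork mount could look like this (illustrative paths; substitute your own folder names):

```shell
# Copy everything you want to keep from the old work area to the new one.
# rsync preserves timestamps and permissions, and can safely be re-run
# to resume an interrupted copy.
rsync -av /oldwork/LAS/yourfolder/ /work/LAS/yourfolder/

# Long-term data should instead go to backed-up storage:
rsync -av /oldwork/LAS/yourfolder/keep/ /myfiles/las/research/yourfolder/
```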

We will soon be publishing some videos explaining the transition in more detail. Check our website for those.

I'll send another reminder on Wednesday. Please email any questions to

***Large Scale Storage***
Our new storage hardware has arrived and is configured. We are currently doing performance testing, and copying over data from MyFiles. I will be in contact with individual research labs in the coming weeks to start pilot testing on the new storage system. Our goal is to complete the transition from MyFiles by late spring.

The cost of the new storage will be $40/TB/year, with the LAS college continuing to provide a subsidy to our faculty as it previously did on MyFiles. We are working with the other colleges to hopefully establish similar arrangements. Look for more communication on this topic later.

***New Highly Parallel Server***
We have recently completed a collaborative purchase of several new servers with the LAS college, several departments, and many researchers. The new servers run the latest Intel Knights Landing processors, optimized for highly parallel workloads (we have 1,088 threads available across the 4 nodes). We are currently doing some pilot testing, and will be contacting the contributors soon to set up access.

***Learning modules***
We now have a variety of videos posted covering topics including basic login procedures, Linux commands, and high-performance computing. You can find the videos on our website:

Or our YouTube Channel:

Thanks to Glenn Luecke, Jim Coyle, and the HPC team for participating in and supporting this effort.