Research IT | Dec Updates: Server downtime on Dec 27th, Condo upgrade Jan 2017, Large Scale Storage, Learning Modules

December 12, 2016

***Server Downtime Tue Dec 27***
We will be taking down all of the Research IT servers for quarterly patching on Dec 27th. Please plan your jobs accordingly, ensure your important data is moved to your research folder on the Isilon, and contact me if you have questions or concerns. 

Systems being patched include: Biocrunch, Bigram, Speedy, Speedy2, RIT1, RIT2, and ClusterOne

***Condo upgrade***
When we return from break in January, we will be migrating our users to a newly upgraded version of the Condo cluster (known as Condo2017).

The primary changes are outlined below:

- The OS has been upgraded from RHEL6 to RHEL7 (to match our other servers)
- Condo2017 will use the RISA software repository (modules will match our other Research IT machines)
- The SLURM scheduler will replace PBS (same as on ClusterOne)
- We will be using Active Directory accounts instead of local accounts (same as on Research IT machines)
- Automated Google Authenticator setup and resets

What does this mean for you?
- Login method will change
- You may need to modify job scripts to reflect different software versions or module names
- You may need to modify job scripts to work with the SLURM scheduler (it will generally accept PBS scripts, but some small changes may be required)
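As a rough illustration of the scheduler change, here is a minimal SLURM batch script with the old PBS directives shown as comments. The job name, resource values, and program are hypothetical; check the Condo2017 documentation for actual module names and limits.

```shell
#!/bin/bash
# Hypothetical SLURM job script for Condo2017; the commented PBS lines
# show the rough old-style equivalents.
#SBATCH --job-name=myjob          # PBS: #PBS -N myjob
#SBATCH --nodes=1                 # PBS: #PBS -l nodes=1:ppn=8
#SBATCH --ntasks-per-node=8
#SBATCH --time=02:00:00           # PBS: #PBS -l walltime=02:00:00

# PBS scripts usually begin with `cd $PBS_O_WORKDIR`; SLURM jobs start
# in the submission directory, so that line is no longer needed.

msg="Job starting on $(hostname)"
echo "$msg"

# module load gcc   # module names come from the RISA repository
# ./my_program      # hypothetical program
```

You would submit this with `sbatch` rather than `qsub`, and check its status with `squeue` rather than `qstat`.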

Why is this change happening?
- Having the latest OS and software will allow you to compile and run newer programs
- Simplified login process
- Lower administrative overhead

We will be working on creating documentation and training videos between now and January to ensure everyone is prepared for the change. In the meantime, if you have questions, please let me know. As we get closer to the cut-over, I'll provide more details.

***Large Scale Storage***
We are working on building a new Large Scale Storage system targeted at handling the data storage needs of research groups. The functionality will be very similar to the current Isilon file storage, but at a much lower cost, and with the added benefit of an offsite backup. Hardware will be arriving late this month, with testing happening through mid-January. We plan to start migrating our first research groups in late January. I will be sending out information soon on how to participate in the service.

***Learning Modules***
We have been collaborating with various groups on campus, including the HPC team, to build a series of training modules for Research Computing. The modules will cover topics such as login procedures, basic Linux commands, and parallel computing development. You can find the videos on our YouTube channel here:

Please check back often for new videos. We'll have some videos on HPC, MPI, and OpenMP created by Glenn Luecke published within the next few weeks.