
Turing: The CWU Supercomputer

 

Current applications on Turing (in chronological order)

 


Adrian Florea (Transilvania University), Razvan Andonie (Computer Science Department, CWU)

  • Parallel implementations of recommender systems using deep learning and MapReduce.
  • Hyperparameter optimization of machine learning algorithms (see the sketch below).

Software used: Spark, Deeplearning4j, Go, TensorFlow, Optunity.
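
For illustration, here is a minimal Python sketch of plain random search over SVM hyperparameters with scikit-learn. It shows only the generic idea of randomized hyperparameter search; it is not the weighted random search or the dynamic early-stopping criterion developed in the papers below, and the search ranges and trial budget are arbitrary.

    # Plain random search over SVM hyperparameters (C, gamma) -- illustrative
    # only, not the weighted random search / early-stopping methods below.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = load_digits(return_X_y=True)

    best_score, best_params = -np.inf, None
    for _ in range(30):                               # arbitrary trial budget
        params = {"C": 10 ** rng.uniform(-2, 3),      # log-uniform in [1e-2, 1e3]
                  "gamma": 10 ** rng.uniform(-4, 0)}  # log-uniform in [1e-4, 1]
        score = cross_val_score(SVC(**params), X, y, cv=3).mean()
        if score > best_score:
            best_score, best_params = score, params

    print(f"best CV accuracy {best_score:.3f} with {best_params}")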

Papers:

Florea, A.C., Anvik, J., Andonie, R., Spark-based Cluster Implementation of a Bug Report Assignment Recommender System, in: Lecture Notes in Artificial Intelligence 10246, L. Rutkowski et al. (eds.), Springer-Verlag, Berlin, 2017, 31-42, ISBN 978-3-319-59059-2.

Florea, A.C., Anvik, J., Andonie, R., Parallel Implementation of a Bug Report Assignment Recommender Using Deep Learning, in: Lintas A., Rovetta S., Verschure P., Villa A. (eds), Artificial Neural Networks and Machine Learning – ICANN 2017, Lecture Notes in Computer Science 10614, Springer, Cham, 2017.

Florea, A.C., Andonie, R., A Dynamic Early Stopping Criterion for Random Search in SVM Hyperparameter Optimization, in: Lazaros S. Iliadis et al. (eds), Artificial Intelligence Applications and Innovations – AIAI 2018, IFIP Advances in Information and Communication Technology (vol. 519), Springer, 2018, 168-180, ISBN 978-3-319-92006-1.

Florea, A.C., Andonie, R., Weighted Random Search for Hyperparameter Optimization, International Journal of Computers Communications & Control, 14, 2019, 154-169, ISSN 1841-9836.

Andonie, R., Florea, A.C., Weighted Random Search for CNN Hyperparameter Optimization, International Journal of Computers Communications & Control, 15, 2020, ISSN 1841-9844.


Dmytro Dovhalets, Boris Kovalerchuk, Szilard Vajda, Razvan Andonie (Computer Science Department, CWU)

  • Deep learning applications in visual recognition and visual representation (see the sketch below).

Software used: TensorFlow, Keras.
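
As a rough illustration of the TensorFlow/Keras workflow, here is a minimal sketch of a small convolutional network for classifying 2-D images. The input shape (64x64 grayscale) and the three output classes are placeholders, not the architecture or data used in the General Line Coordinates paper below.

    # Minimal TensorFlow/Keras sketch: a small CNN for 2-D image classification.
    # Input shape and class count are placeholders, not the paper's setup.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),        # 64x64 grayscale images
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),   # 3 placeholder classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=10)    # supply your own data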

Papers:

Dmytro Dovhalets, Boris Kovalerchuk, Szilard Vajda, Razvan Andonie. Deep Learning of 2-D Images Representing n-D Data in General Line Coordinates, Proceedings of the 4th International Symposium on Affective Science and Engineering (ISASE2018), May 31 - June 02, 2018, Eastern Washington University, Spokane, USA, Japan Society of Kansei Engineering (publisher), 1-6, ISSN 2433-5428.


Donald Davendra (Computer Science Department, CWU)

  • Multivalent scenario reduction using R and CUDA.
  • CUDA-accelerated flowshop with blocking constraint (see the sketch below).

Software used: CUDA, R.
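
For reference, the sketch below computes the makespan of a permutation flowshop under the blocking constraint using the standard departure-time recurrence, in plain Python with made-up processing times. The project itself runs a CUDA-accelerated version of this kind of evaluation on the GPU; this is only a CPU illustration of the objective being optimized, and the function name and example data are hypothetical.

    # Makespan of a permutation flowshop with the blocking constraint,
    # computed with the standard departure-time recurrence (CPU illustration;
    # the project accelerates this kind of evaluation with CUDA).

    def blocking_flowshop_makespan(p, sequence):
        """p[j][k] = processing time of job j on machine k."""
        m = len(p[0])
        prev = None                      # departure times of the previous job
        for j in sequence:
            dep = [0.0] * m
            start = 0.0 if prev is None else prev[0]   # wait for machine 0 to clear
            for k in range(m):
                finish = start + p[j][k]
                if prev is not None and k + 1 < m:
                    # blocked on machine k until the previous job leaves machine k+1
                    dep[k] = max(finish, prev[k + 1])
                else:
                    dep[k] = finish
                start = dep[k]           # start time on the next machine
            prev = dep
        return prev[-1]                  # departure from the last machine = makespan

    # Example with hypothetical processing times for 3 jobs on 3 machines:
    times = [[2, 3, 1], [4, 1, 2], [3, 2, 2]]
    print(blocking_flowshop_makespan(times, [0, 1, 2]))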


Dominic Klyve (Department of Mathematics, CWU)

  • Computing the number of integers up to 10^12 with exactly k prime factors for various values of k (see the small-scale sketch below).
  • Computing the number of square-free integers up to 10^12 with exactly k prime factors for various values of k.
  • Implementing the moving-block bootstrap procedure to estimate the density of abundant numbers.

Software used: PARI/GP.
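
The actual computations use PARI/GP and extend to 10^12; as a small-scale illustration of the quantity in the first two items, the sketch below counts integers up to a tiny bound N by their number of prime factors Ω(n), counted with multiplicity, using a sieve. The function name and bound are placeholders.

    # Small-scale illustration: count integers 2..N by Omega(n), the number of
    # prime factors counted with multiplicity. The real computations use
    # PARI/GP and run up to 10^12; N here is deliberately tiny.
    def count_by_omega(N):
        omega = [0] * (N + 1)
        residual = list(range(N + 1))    # residual[n] = unfactored part of n
        for p in range(2, N + 1):
            if omega[p] == 0:            # no factor recorded yet => p is prime
                for m in range(p, N + 1, p):
                    while residual[m] % p == 0:
                        residual[m] //= p
                        omega[m] += 1
        counts = {}
        for n in range(2, N + 1):
            counts[omega[n]] = counts.get(omega[n], 0) + 1
        return counts                    # counts[k] = #{n <= N : Omega(n) = k}

    print(count_by_omega(10**5))         # counts[1] is the number of primes <= N
    # Restricting to square-free integers would additionally require that no
    # prime divides n more than once.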


Yingbin Ge (Chemistry Department, CWU)

  • Theoretical design of long-lived, efficient transition-metal nanocatalysts for the activation of C-H bonds at low temperatures.

Papers:

Y. Ge, Anna Le, Gregory J. Marquino, Phuc Q. Nguyen, Kollin Trujillo, Morgan Schimelfenig, and Ashley Noble, Tools for prescreening the most active sites on Ir and Rh clusters toward C-H bond cleavage of ethane: NBO charges and Wiberg bond indexes, ACS Omega, 4, 18809-18819, 2019. 

Software used: NWChem, GAMESS, ORCA, Gaussian.


Michael Brice, Razvan Andonie (Computer Science Department, CWU)

  • Machine learning applications in astronomy (see the sketch below).

Papers:

Michael Brice, Razvan Andonie, Classification of Stars using Stellar Spectra collected by the Sloan Digital Sky Survey, Proceedings of the International Joint Conference on Neural Networks (IJCNN 2019), Budapest, Hungary, July 14-19, 2019.

Brice M., Andonie R., Automated Morgan-Keenan Classification of Observed Stellar Spectra Collected by the Sloan Digital Sky Survey Using a Single Classifier, The Astronomical Journal, 158, 2019.

Software used: Python, scikit-learn, TensorFlow.
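
As a minimal illustration of the general pattern (a classifier trained on spectral flux vectors), the sketch below uses scikit-learn with random placeholder data. The array shapes, the seven placeholder classes, and the random forest are illustrative assumptions, not the SDSS data pipeline or the models used in the papers above.

    # Classify stars from spectral flux vectors with scikit-learn. The data
    # here are random placeholders -- the actual work uses stellar spectra
    # from the Sloan Digital Sky Survey and Morgan-Keenan classes as labels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)
    n_stars, n_wavelength_bins = 500, 300
    X = rng.normal(size=(n_stars, n_wavelength_bins))   # placeholder flux vectors
    y = rng.integers(0, 7, size=n_stars)                # 7 placeholder classes (e.g. O..M)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))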


 

 

Turing is CWU's supercomputer. It consists of four IBM® Power Systems™ S822LC for High Performance Computing servers (each known as a “Minsky”). NVIDIA NVLink technology provides over 2.8X faster CPU-GPU communication between the POWER8 with NVLink CPUs and the Tesla P100 accelerators. POWER is the only CPU architecture with embedded NVLink for accelerated computing. The NVIDIA Tesla P100 offers massive parallelism and memory bandwidth.

 

This is one of the four "Minsky" IBM® Power Systems™ S822LC servers, with two Tesla P100 GPUs.

 

Each NVLink 1.0 interface has 32 lanes (4 bricks), with each brick rated at 19.2 GB/s per direction; the combined bidirectional throughput is designed for up to 153.6 GB/s (4 bricks × 19.2 GB/s × 2 directions).

The NVIDIA Tesla P100 is the most advanced hyperscale datacenter GPU ever built. It provides 10x the capability of the previous generation of NVIDIA GPUs, delivering 5.3 TFLOPS of double-precision compute, 10.6 TFLOPS of single-precision compute, and 21.2 TFLOPS of half-precision (FP16) compute from its 15.3 billion transistors on a 610 mm² die built with 16 nm technology.

The custom POWER8 processors (659 mm², 22 nm SOI with embedded DRAM and 15 levels of metal) allow each server to be configured with 16 or 20 cores, depending on processor speed.

Find out more about the S822LC for High Performance Computing in the following IBM Redbooks Redpaper publication: IBM Power System S822LC for High Performance Computing Technical Overview and Introduction.

On top of Turing, we have installed PowerAI. PowerAI makes deep learning, machine learning, and AI more accessible and more performant. By combining this software platform for deep learning with IBM® Power Systems™, we can rapidly deploy a fully optimized and supported platform for machine learning with blazing performance. The PowerAI platform includes the most popular machine learning frameworks and their dependencies, and it is built for easy and rapid deployment.

This computer cluster will mainly be used for research, including student research. The supercomputer was acquired by and is administered through the College of the Sciences.

Read: Area’s Most Powerful Supercomputing Cluster now Operational at CWU.

On June 8th, 2018, IBM and the U.S. Department of Energy’s Oak Ridge National Laboratory (ORNL) unveiled Summit, currently the world’s “most powerful and smartest scientific supercomputer,” with a peak performance of 200,000 trillion calculations per second (200 petaflops). Summit and Turing have similar architectures, based on IBM Power processors, NVIDIA GPUs, and NVLink.

Read: IBM and the DoE launch the world’s fastest supercomputer.
