
Stoomboot cluster

Aim: Describe how to access and use Nikhef's Stoomboot cluster.

Target audience: Users of the Stoomboot cluster.

Prerequisites

To access Stoomboot, you will need a Nikhef user account and an ssh client to reach the interactive nodes.

Introduction

The Stoomboot cluster is the local computing facility at Nikhef. It is available to users from the scientific groups for work such as data analysis and Monte Carlo calculations.

The Stoomboot cluster consists of interactive nodes and batch worker nodes.

The system runs HTCondor for batch processing and scheduling.

The dCache, home directories, /project and /data storage systems are all accessible from any node in the Stoomboot cluster.

Usage

Access and Use

The Stoomboot cluster can be used from the interactive nodes.

Access to the interactive nodes is via ssh.
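
For example, a session on an interactive node can be opened as shown below. The hostname is only illustrative; check the current list of interactive nodes for the actual machine names.

    # open a session on a Stoomboot interactive node
    # (replace the hostname with one of the current interactive nodes)
    ssh <username>@stbc-i1.nikhef.nl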

Batch processing

Most of the detailed information about batch processing and jobs can be found on the batch jobs page.

The bulk of the compute power is organized in a batch system. It is suitable for non-interactive work that can be split into independent jobs running from several hours up to several days.

Batch jobs wait in the queue until the scheduler finds a suitable slot for processing.
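
As an illustration of how work is handed to HTCondor, the sketch below shows a minimal submit description file and its submission. The file names and resource requests are placeholders; see the batch jobs page for the site-specific settings that Stoomboot expects.

    # myjob.sub -- minimal HTCondor submit description (illustrative)
    executable     = myjob.sh              # script executed on the worker node
    output         = myjob.$(ClusterId).out
    error          = myjob.$(ClusterId).err
    log            = myjob.$(ClusterId).log
    request_cpus   = 1
    request_memory = 2GB
    queue 1

The job is then placed in the queue with condor_submit:

    condor_submit myjob.sub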

Seeing what your jobs are doing

The status of all jobs (queued, running, completed) is available via condor_q.
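
For example:

    # show your own jobs, grouped per submission batch
    condor_q

    # show one line per job instead of the batch summary
    condor_q -nobatch

    # explain why a queued job has not started yet
    condor_q -better-analyze <job_id>

Jobs that have finished and left the queue can also be listed with condor_history.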

Cluster activity

Real-time activity graphs for the cluster show STBC use by group, waiting jobs, and job waiting times, each per hour and per day.

Note: The black line shows the time since the most recent job started. The purple line shows the mean time between job exits (the inverse of the core rollover rate).

Contact