System specifications

Overview of DAIC system specifications and comparison with other TU Delft clusters.

This section provides an overview of the Delft AI Cluster (DAIC) infrastructure and its comparison with other compute facilities at TU Delft.

DAIC partitions and access/usage best practices

1 - Login Nodes

Overview of DAIC login nodes and appropriate usage guidelines.

Login nodes act as the gateway to the DAIC cluster. They are intended for lightweight tasks such as job submission, file transfers, and compiling code (on specific nodes). They are not designed for running resource-intensive jobs, which should be submitted to the compute nodes.

Specifications and usage notes

| Hostname | CPU (Sockets x Model) | Total Cores | Total RAM | Operating System | GPU Type | GPU Count | Usage Notes |
|---|---|---|---|---|---|---|---|
| login1 | 1 x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz | 8 | 15.39 GB | OpenShift Enterprise | Quadro K2200 | 1 | For file transfers, job submission, and lightweight tasks. |
| login2 | 1 x Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz | 1 | 3.70 GB | OpenShift Enterprise | N/A | N/A | Virtual server, for non-intensive tasks. No compilation. |
| login3 | 2 x Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz | 32 | 503.60 GB | RHEV | Quadro K2200 | 1 | For large compilation and interactive sessions. |

2 - Compute nodes

The foundational hardware components of DAIC.

DAIC compute nodes are high-performance servers with multiple CPUs, large memory, and, on many nodes, one or more GPUs. The cluster is heterogeneous: nodes vary in processor types, memory sizes, GPU configurations, and performance characteristics.

If your application requires specific hardware features, you must request them explicitly in your job script (see Submitting jobs).

CPUs

All compute nodes have multiple CPUs (sockets), each with multiple cores. Most nodes support hyper-threading, which allows two threads per physical core. The number of cores per node is listed in the List of all nodes section.

Request CPUs based on how many threads your program can actually use. Requesting more CPUs than that (oversubscribing) doesn’t improve performance and wastes resources; requesting fewer (undersubscribing) may slow your job down due to thread contention.

To request CPUs for your jobs, see Job scripts.
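For illustration, a job script might request CPUs with the --cpus-per-task directive; a minimal sketch (the partition, time limit, and program name are placeholders):

#!/bin/sh
#SBATCH --job-name=cpu-example    # illustrative job name
#SBATCH --partition=general       # partition to submit to (placeholder)
#SBATCH --time=01:00:00           # wall-clock time limit
#SBATCH --ntasks=1                # a single task (process)
#SBATCH --cpus-per-task=4         # request 4 CPUs, matching the number of threads the program uses
srun my_program                   # my_program is a placeholder for your own (4-threaded) executable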

GPUs

Many nodes in DAIC include one or more NVIDIA GPUs. GPU types differ in architecture, memory size, and compute capability. The table that follows summarizes the main GPU types in DAIC. For a per-node overview, see the List of all nodes section.

To request GPUs in your job, use --gres=gpu:<type>:<count>. See GPU jobs for more information.
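For example, a job that needs one A40 GPU could combine --gres with matching CPU and memory requests; a minimal sketch (all values are illustrative):

#SBATCH --partition=general       # partition to submit to (placeholder)
#SBATCH --gres=gpu:a40:1          # one GPU of slurm type a40 (see Table 1 for the available types)
#SBATCH --cpus-per-task=2         # CPUs to accompany the GPU
#SBATCH --mem=16G                 # host memory for the job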

Table 1: Counts and specifications of DAIC GPUs

| GPU (slurm) type | Count | Model | Architecture | Compute Capability | CUDA cores | Memory |
|---|---|---|---|---|---|---|
| l40 | 18 | NVIDIA L40 | Ada Lovelace | 8.9 | 18176 | 49152 MiB |
| a40 | 84 | NVIDIA A40 | Ampere | 8.6 | 10752 | 46068 MiB |
| turing | 24 | NVIDIA GeForce RTX 2080 Ti | Turing | 7.5 | 4352 | 11264 MiB |
| v100 | 11 | Tesla V100-SXM2-32GB | Volta | 7.0 | 5120 | 32768 MiB |

In Table 1, the headers denote:

Model
The official product name of the GPU
Architecture
The hardware design used in the GPU, which defines its specifications and performance characteristics. Each architecture (e.g., Ampere, Turing, Volta) represents a different GPU generation.
Compute capability
A version number indicating the features supported by the GPU, including CUDA support. Higher values offer more advanced functionality.
CUDA cores
The number of processing cores available on the GPU. More CUDA cores allow more parallel operations, improving performance for parallelizable workloads.
Memory
The total internal memory on the GPU. This memory is required to store data for GPU computations. If a model’s memory is insufficient, performance may be severely affected.

Memory

Each node has a fixed amount of RAM, shown in the List of all nodes section. Jobs may only use the memory explicitly requested using --mem or --mem-per-cpu. Exceeding the allocation may result in job failure.

Memory cannot be shared across nodes, and unused memory cannot be reallocated.

For memory-efficient jobs, consider tuning your requested memory to match your code’s peak usage closely. For more information, see Slurm basics.
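For example, if profiling shows that your program peaks at roughly 12 GB of RAM, requesting a small margin above that is usually enough (values are illustrative):

#SBATCH --mem=14G                 # total memory for the job, slightly above the observed ~12 GB peak

or, per allocated CPU:

#SBATCH --mem-per-cpu=4G          # memory per CPU; the total equals 4G times the number of requested CPUs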

List of all nodes

The following table gives an overview of the current nodes and their characteristics.

| Hostname | CPU (Sockets x Model) | Cores per Socket | Total Cores | CPU Speed (MHz) | Total RAM (GiB) | Local Disk (/tmp, GiB) | GPU Type | GPU Count | Slurm Partitions | Slurm Active Features |
|---|---|---|---|---|---|---|---|---|---|---|
| 100plus | 2 x Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz | 16 | 32 | 2097.594 | 755 | 3174 | N/A | 0 | general;ewi-insy | avx;avx2;ht;10gbe;bigmem |
| 3dgi1 | 1 x AMD EPYC 7502P 32-Core Processor | 32 | 32 | 2500.000 | 251 | 148 | N/A | 0 | general;bk-ur-uds | avx;avx2;ht;10gbe;ssd |
| 3dgi2 | 1 x AMD EPYC 7502P 32-Core Processor | 32 | 32 | 2500.000 | 251 | 148 | N/A | 0 | general;bk-ur-uds | avx;avx2;ht;10gbe;ssd |
| awi01 | 2 x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 18 | 36 | 3494.921 | 376 | 393 | Tesla V100-PCIE-32GB | 1 | general;tnw-imphys | avx;avx2;ht;10gbe;avx512;gpumem32;nvme;ssd |
| awi02 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 504 | 393 | Tesla V100-SXM2-16GB | 2 | general;tnw-imphys | avx;avx2;ht;10gbe;bigmem;ssd |
| awi04 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503 | 5529 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi08 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503 | 5529 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi09 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503 | 5529 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi10 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503 | 5529 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi11 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503 | 5529 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi12 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 503 | 5529 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;imphysexclusive |
| awi19 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 251 | 856 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi20 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 251 | 856 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi21 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 251 | 856 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi22 | 2 x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 14 | 28 | 2899.951 | 251 | 856 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi23 | 2 x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 18 | 36 | 2672.149 | 376 | 856 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi24 | 2 x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 18 | 36 | 3299.932 | 376 | 856 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi25 | 2 x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 18 | 36 | 3542.370 | 376 | 856 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| awi26 | 2 x Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz | 18 | 36 | 2840.325 | 376 | 856 | N/A | 0 | general;tnw-imphys | avx;avx2;ht;ib;ssd |
| cor1 | 2 x Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz | 16 | 32 | 3573.315 | 1510 | 7168 | Tesla V100-SXM2-32GB | 8 | general;me-cor | avx;avx2;ht;10gbe;avx512;gpumem32;ssd |
| gpu01 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu02 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu03 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu04 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu05 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu06 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu07 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu08 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu09 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | general;tnw-imphys | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu10 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | general;tnw-imphys | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu11 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | bk-ur-uds;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu12 | 2 x AMD EPYC 7413 24-Core Processor | 24 | 48 | 2650.000 | 503 | 415 | NVIDIA A40 | 3 | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu14 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 503 | 856 | NVIDIA A40 | 3 | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu15 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 503 | 856 | NVIDIA A40 | 3 | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu16 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 503 | 856 | NVIDIA A40 | 3 | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu17 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 503 | 856 | NVIDIA A40 | 3 | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu18 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 503 | 856 | NVIDIA A40 | 3 | general;ewi-st | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu19 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 503 | 856 | NVIDIA A40 | 3 | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu20 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 1007 | 856 | NVIDIA A40 | 3 | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu21 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 1007 | 856 | NVIDIA A40 | 3 | general;ewi-insy-prb;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu22 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 1007 | 856 | NVIDIA A40 | 3 | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu23 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 1007 | 856 | NVIDIA A40 | 3 | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu24 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 1007 | 856 | NVIDIA A40 | 3 | general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu25 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 1007 | 856 | NVIDIA A40 | 3 | mmll;general;ewi-insy | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu26 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 1007 | 856 | NVIDIA A40 | 3 | lr-asm;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu27 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 503 | 856 | NVIDIA A40 | 3 | me-cor;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu28 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 503 | 856 | NVIDIA A40 | 3 | me-cor;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu29 | 2 x AMD EPYC 7543 32-Core Processor | 32 | 64 | 2800.000 | 503 | 856 | NVIDIA A40 | 3 | me-cor;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu30 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450.000 | 755 | 856 | NVIDIA L40 | 3 | ewi-insy;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu31 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450.000 | 755 | 856 | NVIDIA L40 | 3 | ewi-insy;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu32 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450.000 | 755 | 856 | NVIDIA L40 | 3 | ewi-me-sps;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu33 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450.000 | 755 | 856 | NVIDIA L40 | 3 | lr-co;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu34 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450.000 | 755 | 856 | NVIDIA L40 | 3 | ewi-insy;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| gpu35 | 1 x AMD EPYC 9534 64-Core Processor | 64 | 64 | 2450.000 | 755 | 856 | NVIDIA L40 | 3 | bk-ar;general | avx;avx2;10gbe;bigmem;gpumem32;ssd |
| grs1 | 2 x Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz | 8 | 16 | 3499.804 | 251 | 181 | N/A | 0 | citg-grs;general | avx;avx2;ht;ib;ssd |
| grs2 | 2 x Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz | 8 | 16 | 3499.804 | 251 | 181 | N/A | 0 | citg-grs;general | avx;avx2;ht;ib;ssd |
| grs3 | 2 x Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz | 8 | 16 | 3499.804 | 251 | 181 | N/A | 0 | citg-grs;general | avx;avx2;ht;ib;ssd |
| grs4 | 2 x Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz | 8 | 16 | 3500 | 251 | 181 | N/A | 0 | citg-grs;general | avx;avx2;ht;ib;ssd |
| influ1 | 2 x Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz | 16 | 32 | 3385.711 | 376 | 197 | NVIDIA GeForce RTX 2080 Ti | 8 | influence;ewi-insy;general | avx;avx2;ht;10gbe;avx512;nvme;ssd |
| influ2 | 2 x Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz | 16 | 32 | 2300.000 | 187 | 369 | NVIDIA GeForce RTX 2080 Ti | 4 | influence;ewi-insy;general | avx;avx2;ht;10gbe;avx512;ssd |
| influ3 | 2 x Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz | 16 | 32 | 2300.000 | 187 | 369 | NVIDIA GeForce RTX 2080 Ti | 4 | influence;ewi-insy;general | avx;avx2;ht;10gbe;avx512;ssd |
| influ4 | 2 x AMD EPYC 7452 32-Core Processor | 32 | 64 | 2350.000 | 252 | 148 | N/A | 0 | influence;ewi-insy;general | avx;avx2;ht;10gbe;ssd |
| influ5 | 2 x AMD EPYC 7452 32-Core Processor | 32 | 64 | 2350 | 503 | 148 | N/A | 0 | influence;ewi-insy;general | avx;avx2;ht;10gbe;bigmem;ssd |
| influ6 | 2 x AMD EPYC 7452 32-Core Processor | 32 | 64 | 2350 | 503 | 148 | N/A | 0 | influence;ewi-insy;general | avx;avx2;ht;10gbe;bigmem;ssd |
| insy15 | 2 x Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz | 16 | 32 | 2300.000 | 754 | 416 | NVIDIA GeForce RTX 2080 Ti | 4 | ewi-insy;general | avx;avx2;ht;10gbe;avx512;bigmem;ssd |
| insy16 | 2 x Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz | 16 | 32 | 2300.000 | 754 | 416 | NVIDIA GeForce RTX 2080 Ti | 4 | ewi-insy;general | avx;avx2;ht;10gbe;avx512;bigmem;ssd |
| Total (66 nodes) | | | 3016 cores | | 35.02 TiB | 76.79 TiB | | 137 GPUs | | |

3 - Storage

What are the foundational components of DAIC?

Storage

DAIC compute nodes have direct access to the TU Delft home, group, and project storage. You can use your TU Delft-installed machine or an SCP or SFTP client to transfer files to and from these storage areas and others (see Data transfer), as demonstrated throughout this page.
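For example, from a Linux or macOS machine you can copy a file to your project storage with scp (the login host name and paths below are placeholders; use the host you normally connect to, your own NetID, and your project name):

$ scp results.tar.gz <YourNetID>@login.daic.tudelft.nl:/tudelft.net/staff-umbrella/<project>/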

File System Overview

Unlike TU Delft’s DelftBlue, DAIC does not have a dedicated storage filesystem. This means there is no /scratch space for storing temporary files (see DelftBlue’s Storage description and Disk quota and scratch space). Instead, DAIC relies on a direct connection to the TU Delft network storage filesystem (see Overview data storage) from all its nodes, and offers the following types of storage areas:

Personal storage (aka home folder)

The Personal Storage is private and is meant to store personal files (program settings, bookmarks). A backup service protects your home files from both hardware failures and user error (you can restore previous versions of files from up to two weeks ago). The available space is limited by a quota (see Quotas) and is not intended for storing research data.

You have two separate home folders: one for Linux and one for Windows (because Linux and Windows store program settings differently). You can access these home folders from a Linux or Windows machine using a command-line interface, or through a browser via TU Delft's webdata. For example, the Windows home folder contains a My Documents folder, which on a Linux machine can be found under /winhome/<YourNetID>/My Documents.

| Home directory | Access from | Storage location |
|---|---|---|
| Linux home folder | Linux | /home/nfs/<YourNetID> |
| | Windows | Only accessible using an scp/sftp client (see SSH access) |
| | webdata | Not available |
| Windows home folder | Linux | /winhome/<YourNetID> |
| | Windows | H: or \\tudelft.net\staff-homes\[a-z]\<YourNetID> |
| | webdata | https://webdata.tudelft.nl/staff-homes/[a-z]/<YourNetID> |

It’s possible to access the backups yourself. In Linux the backups are located under the (hidden, read-only) ~/.snapshot/ folder. In Windows you can right-click the H: drive and choose Restore previous versions.
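For example, on Linux you can list the available snapshots and copy back an older version of a file (the snapshot and file names below are placeholders):

$ ls ~/.snapshot/                                          # list available snapshots
$ cp ~/.snapshot/<snapshot-name>/report.txt ~/report.txt   # restore a single file from a snapshot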

Group storage

The Group Storage is meant to share files (documents, educational and research data) with department/group members. The whole department or group has access to this storage, so it is not suitable for confidential or project data. There is a backup service to protect the files, with previous versions available from up to two weeks ago. There is a Fair-Use policy for the used space.

| Destination | Access from | Storage location |
|---|---|---|
| Group Storage | Linux | /tudelft.net/staff-groups/<faculty>/<department>/<group> or /tudelft.net/staff-bulk/<faculty>/<department>/<group>/<NetID> |
| | Windows | M: or \\tudelft.net\staff-groups\<faculty>\<department>\<group>, or L: or \\tudelft.net\staff-bulk\ewi\insy\<group>\<NetID> |
| | webdata | https://webdata.tudelft.nl/staff-groups/<faculty>/<department>/<group>/ |

Project Storage

The Project Storage is meant for storing (research) data (datasets, generated results, downloaded files and programs, …) for projects. Only the project members (including external persons) can access the data, so it is suitable for confidential data (though you may want to use encryption for highly sensitive confidential data). There is a backup service and a Fair-Use policy for the used space.

Project leaders (or supervisors) can request a Project Storage location via the Self-Service Portal or the Service Desk.

| Destination | Access from | Storage location |
|---|---|---|
| Project Storage | Linux | /tudelft.net/staff-umbrella/<project> |
| | Windows | U: or \\tudelft.net\staff-umbrella\<project> |
| | webdata | https://webdata.tudelft.nl/staff-umbrella/<project> or https://webdata.tudelft.nl/staff-bulk/<faculty>/<department>/<group>/<NetID> |

Local Storage

Local storage is meant for temporary storage of (large amounts of) data with fast access on a single computer. You can create your own personal folder inside the local storage. Unlike the network storage above, local storage is only accessible on that computer, not on other computers or through network file servers or webdata. There is no backup service and no quota. The available space is large but fixed, so leave enough space for other users. Files under /tmp that have not been accessed for 10 days are automatically removed; files that are read or written multiple times within one day are not deleted. Note that a process that holds a file open can keep accessing its data even after the file is deleted: the directory entry disappears, but the data is only freed once the file is closed, so files kept open by a process remain usable for longer.

| Destination | Access from | Storage location |
|---|---|---|
| Local storage | Linux | /tmp/<NetID> |
| | Windows | Not available |
| | webdata | Not available |
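A minimal sketch of staging data on local storage, assuming you name your personal folder after your NetID (the project path and file names are placeholders):

$ mkdir -p /tmp/$USER                                               # create your personal folder on the node's local disk
$ cp /tudelft.net/staff-umbrella/<project>/dataset.tar /tmp/$USER/  # copy data to the local disk for fast access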

Memory Storage

Memory storage is meant for short-term storage of limited amounts of data with very fast access on a single computer. You can create your own personal folder inside the memory storage location. Memory storage is only accessible on that computer, and there is no backup service and no quota. The available space is limited and shared with running programs, so leave enough space (the computer will likely crash if you don’t!). Files that have not been accessed for 1 day are automatically removed.

| Destination | Access from | Storage location |
|---|---|---|
| Memory storage | Linux | /dev/shm/<NetID> |
| | Windows | Not available |
| | webdata | Not available |
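Usage is analogous to local storage, but under /dev/shm and only for small amounts of data; for example (the file name is a placeholder):

$ mkdir -p /dev/shm/$USER              # create your personal folder in memory-backed storage
$ cp small_input.bin /dev/shm/$USER/   # keep only limited amounts of data here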

Checking quota limits

The different storage areas accessible on DAIC have quotas (or usage limits). It’s important to regularly check your usage to avoid job failures and ensure smooth workflows.

Helpful commands

  • For /home:
$ quota -s -f ~
Disk quotas for user netid (uid 000000): 
     Filesystem   space   quota   limit   grace   files   quota   limit   grace
svm111.storage.tudelft.net:/staff_homes_linux/n/netid
                 12872M  24576M  30720M           19671   4295m   4295m  
  • For project space: You can use either:
$ du -hs /tudelft.net/staff-umbrella/my-cool-project
37G	/tudelft.net/staff-umbrella/my-cool-project

Or:

$ df -h /tudelft.net/staff-umbrella/my-cool-project
Filesystem                                       Size  Used Avail Use% Mounted on
svm107.storage.tudelft.net:/staff_umbrella_my-cool-project  1,0T   38G  987G   4% /tudelft.net/staff-umbrella/my-cool-project

Note that the difference between the du and df figures is due to snapshots, which can be kept for up to two weeks.

4 - Scheduler

What are the foundational components of DAIC?

Workload scheduler

DAIC uses the Slurm scheduler to efficiently manage workloads. All jobs for the cluster have to be submitted as batch jobs into a queue. The scheduler then manages and prioritizes the jobs in the queue, allocates resources (CPUs, memory) for the jobs, executes the jobs and enforces the resource allocations. See the job submission pages for more information.

A Slurm-based cluster is composed of a set of login nodes that are used to access the cluster and submit computational jobs, and a central manager that orchestrates computational demands across a set of compute nodes. These nodes are logically organized into groups called partitions, which define job limits or access rights. The central manager provides fault-tolerant hierarchical communications to ensure optimal and fair use of the available compute resources by eligible users, and makes it easier to run and schedule complex jobs across multiple nodes.
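As a minimal illustration of this workflow, a batch job is a script containing resource requests followed by the commands to run, submitted to the queue with sbatch (all values are placeholders; see the job submission pages for the recommended settings):

#!/bin/sh
#SBATCH --job-name=hello          # illustrative job name
#SBATCH --partition=general       # partition to submit to
#SBATCH --time=00:10:00           # wall-clock time limit
#SBATCH --ntasks=1                # number of tasks
#SBATCH --cpus-per-task=1         # CPUs per task
#SBATCH --mem=1G                  # memory for the job
srun echo "Hello from $(hostname)"

Submit the script and check its status with:

$ sbatch hello.sbatch
$ squeue -u <YourNetID>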

5 - Cluster comparison

Overview of the clusters available to TU Delft (CS) researchers

Cluster comparison

TU Delft clusters

DAIC is one of several clusters accessible to TU Delft CS researchers (and their collaborators). The table below gives a comparison between these in terms of use case, eligible users, and other characteristics.

| System | Best for | Strengths | Use it when | Access & Support |
|---|---|---|---|---|
| 🎓 DAIC | AI/ML training; data-centric workflows; GPU‑intensive workloads | Large NVIDIA GPU pool (L40, A40, RTX 2080 Ti, V100 SXM2); local expert support (REIT and ICT); direct TU Delft storage | Quick iteration, hyper‑parameter sweeps, demos, and almost any workload from participating groups; queues are generally shorter than DelftBlue but limited by available GPUs | Access, Specs, Community |
| 🎓 DelftBlue | CPU/MPI jobs; high‑memory runs; large per-GPU memory needed; education | Large CPU pool; larger Nvidia GPUs (A100); dedicated scratch storage; local expert support (DHPC, ICT) | Many cores, tightly‑coupled MPI, long CPU jobs, or very high memory per node; education | Access, Specs, Community |
| 🎓 DAS-6 | Distributed systems research; streaming; edge/fog computing; in-network processing | Multi‑site testbed; mix of GPUs (16× A4000, 4× A5000) and CPUs | Cross‑cluster experiments, network‑sensitive prototypes | Access, Docs, Project |
| 🇳🇱 Snellius | National‑scale runs; larger GPU pools; cross‑institutional projects | Large CPU+GPU partitions (A100 and H100); mature SURF user support; common NL platform | When local capacity/queue limits progress or when collaborating with other Dutch institutions | Access, Docs, Specs |
| 🇪🇺 LUMI | Euro‑scale AI/data; very large GPU jobs; benchmarking at scale | Tier‑0 system with AMD MI250 GPUs (LUMI‑G); high‑performance I/O; strong EuroHPC ecosystem | Beyond Snellius capacity or part of a funded EU consortium / EuroHPC allocation | Access, Docs |

TU Delft cloud resources

For both education and research activities, TU Delft has established the Cloud4Research program. Cloud4Research aims to facilitate the use of public cloud resources, primarily Amazon AWS. At the administrative level, Cloud4Research provides AWS accounts with an initial budget; subsequent billing is charged to a project code instead of a personal credit card. At the technical level, the ICT Innovation team provides intake meetings to help you get started. Please refer to the Policies and FAQ pages for more details.