Access to bigmem

Good morning,

We want to assemble the genome of Bactris gasipaes, also known as peach palm. Could you give access to the bigmem partition for the account tcouvreur?

Thank you for your help.

Hello Maria,

Have you already tried the standard nodes (you can request up to 250GB per node)?
I see some requests for 200GB, but your jobs only use a few GB. I wonder whether there is an issue with the software.
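One way to check how much memory a completed job actually used is sacct, for example (replace <jobid> with the job's ID):

sacct -j <jobid> --format=JobID,JobName,ReqMem,MaxRSS,State

MaxRSS is the peak memory each step actually used, and ReqMem is what was requested.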

Hello,

We tried the long partition, but due to the amount of data (700Gb) that we want to analyze right now, we need more memory to run the software.

Thank you,

Maria.

Hello Maria,

The long partition is dedicated to long jobs (> 24 hours): Slurm at IFB - IFB Core Cluster Documentation

Could you try requesting more memory with the --mem option?
On the command line:

sbatch --mem=250G

Or in the bash script:

#SBATCH --mem=250G
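
For example, a minimal submission script could look like this (the assembler command, its options, and the input file are placeholders to adapt to your software):

#!/bin/bash
#SBATCH --job-name=assembly
#SBATCH --ntasks=48
#SBATCH --mem=250G

# Placeholder command: replace with your actual assembler and input reads
my_assembler --threads 48 reads.fastq.gz -o assembly_output/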

Hello, thank you for your answer.

We have tried these three scripts to run the job, but they crashed due to memory.

First:
#SBATCH -p long
#SBATCH --ntasks=48
#SBATCH --mem-per-cpu=30G

Second:
#SBATCH -p long
#SBATCH --ntasks=48
#SBATCH --mem=250G

Third:
#SBATCH -p long
#SBATCH --nodes=6
#SBATCH --ntasks=48
#SBATCH --mem=250G

Errors:
-slurmstepd: error: Detected 1 oom-kill event(s) in step 20732376.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
-ERROR: Caught unhandled exception: std::bad_alloc

Thank you for your support.

Maria.

Hello Maria,

It's not useful to request more than one node if the software is not able to use multiple nodes (with something like MPI).
So I recommend using:

#SBATCH -p long
#SBATCH --ntasks=48
#SBATCH --mem=250G

With 250GB, I see only one job that crashed due to memory.
Your last job failed due to a software error, not because of memory.
Just to be sure, do you know why?
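
If the seff tool is available on the cluster, it also gives a quick summary of how much memory and CPU a finished job used (replace <jobid> with the job's ID):

seff <jobid>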

Could you also tell me why you request this access for tcouvreur and not for your own account?

Thank you for your tests and feedback.