Fast to long partition error

Hi,

I am running a job from MobaXterm through a .sh file in which I set all the requirements with #SBATCH directives:

#SBATCH --time=10-00:00:00

#SBATCH --partition=long
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem-per-cpu=200

and I get the following error:

slurmstepd: error: Detected 1 oom-kill event(s) in StepId=28189537.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.

Note that if I run the same .sh for a different Rscript (also changing the requirements to the fast partition and less time), I do not have this problem. What am I doing wrong here?

Thank you in advance for your answer.
Kind regards.

Hi Panagiota,

It's a "out of memory" error. Your job/software/script request more memory than allocated ( --mem-per-cpu=200).

If you don't specify the unit, the value is in MB. You have requested 200 MB for your job, which is really small... By default we use 2 GB (SLURM at IFB Core - IFB Core Cluster Documentation).

So, you could retry with the unit. Something like:

# 2 GB
#SBATCH --mem=2G

Or more?

# 200 GB
#SBATCH --mem=200G
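
For reference, here is a sketch of how your original header could look with an explicit unit on the memory request. The 2G value is an assumption; adjust it to what your Rscript actually needs:

#SBATCH --time=10-00:00:00
#SBATCH --partition=long
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
# request 2 GB per CPU instead of 200 MB (assumed value, increase if the job is still killed)
#SBATCH --mem-per-cpu=2G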