From ed16350c44a8ffe0e0bb717df2baabfeff45a353 Mon Sep 17 00:00:00 2001
From: pelibby16 <pelibby16@earlham.edu>
Date: Fri, 6 Dec 2024 13:47:45 -0500
Subject: [PATCH] edit batch computing module

---
 topical-units/batch-computing/README.md | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/topical-units/batch-computing/README.md b/topical-units/batch-computing/README.md
index c20d23b..2f2c9df 100644
--- a/topical-units/batch-computing/README.md
+++ b/topical-units/batch-computing/README.md
@@ -1,4 +1,4 @@
-# Sub-Unit: Batch computing
+# Batch Computing Topic Unit (under construction)
 
 Many programs will run just fine on your laptop or through some web interface. However, for large, complex programs that require large amounts of data and take on the order of hours or days to complete, it is better to run the job as a batch through a scheduler. The scheduler can take your program/script and run it over a long period of time without you having to supervise it. It can also run a script many times in parallel.
 
@@ -11,13 +11,20 @@ Many programs will run just fine on your laptop or through some web interface. H
 
 https://wiki.cs.earlham.edu/index.php/Getting_started_on_clusters#Using_Slurm
 
-Earlham CS hosts instances of the Slurm scheduler on each of its `cluster dot earlham dot edu` systems except for Hopper (e.g. Hamilton, Whedon). If you see documents mentioning `qsub` or Torque, they are likely out of date, as we no longer host any Torque instances. Slurm can run some Torque scripts, but it is better to prepare your job to run specifically in Slurm.
+Earlham CS hosts instances of the Slurm scheduler on each of its `cluster.earlham.edu` systems (e.g. Hamilton, Whedon, Faraday) except for Hopper. If you see documents mentioning `qsub` or Torque, they are likely out of date, as we no longer host any Torque instances. Slurm can run some Torque scripts, but it is better to prepare your job to run specifically in Slurm.
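+
+As a rough, illustrative mapping (job and file names below are placeholders, not a complete list), common Torque directives and commands translate to Slurm like this:
+
+```
+# Torque (out of date here)       # Slurm equivalent
+#PBS -l walltime=00:20:00         #SBATCH --time=20
+#PBS -N myjob                     #SBATCH --job-name=myjob
+qsub job.sh                       sbatch job.sh
+```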
 
-How to Connect to Whedon or Hamilton:
-You can SSH to these machines by first connecting to Hopper, and then to either Hamilton (`hamilton dot cluster dot earlham dot edu`) or Wheden (`whedon dot cluster dot earlham dot edu`).
+## How to Connect to Whedon, Hamilton, or Faraday
+You can SSH to these machines by first connecting to Hopper, and then to Hamilton (`hamilton.cluster.earlham.edu`), Whedon (`whedon.cluster.earlham.edu`), or Faraday (`faraday.cluster.earlham.edu`).
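+
+A minimal sketch of the two-hop connection, assuming your cluster username is `yourname` and that Hopper is reachable at `hopper.cluster.earlham.edu`:
+
+```
+# First hop: log in to Hopper
+ssh yourname@hopper.cluster.earlham.edu
+# Second hop: from Hopper, continue to one of the Slurm machines
+ssh hamilton.cluster.earlham.edu
+```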
 
+## Common Slurm Commands
+- `sinfo` - Gives information about the available queues (called partitions), what state they are in, and which machines are running jobs for each queue.
+- `squeue` - Shows all jobs currently queued or running in any partition in the cluster.
+- `srun` - Runs a single command through the Slurm scheduler.
+  - `srun --pty bash` - Creates an interactive bash session as a Slurm job on one of the available compute nodes.
+  - `srun python helloworld.py` - Runs a script called `helloworld.py` on an available compute node.
+- `sbatch` - Runs an sbatch file, which creates a job like `srun` does, using the parameters specified in the file.
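+
+A typical first session might look like the sketch below (partition names, node lists, and job IDs will differ on your cluster):
+
+```
+sinfo                            # list partitions and node states
+squeue                           # list jobs currently in the queue
+srun python helloworld.py        # run a script on a compute node
+sbatch my-slurm-script.sbatch    # submit the batch file shown below
+```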
 
-## Example file: my-slurm-script.sbatch
+## Example file: `my-slurm-script.sbatch`
 ```
 #!/bin/sh
 #SBATCH --time=20
@@ -31,9 +38,7 @@ echo "queue/partition is `echo $SLURM_JOB_PARTITION`"
 echo "running on `echo $SLURM_JOB_NODELIST`"
 echo "work directory is `echo $SLURM_SUBMIT_DIR`"
 
-srun -l /bin/hostname
-srun sleep 10           # Replace this sleep command with your command line. 
-srun -l /bin/pwd
+sleep 10           # Replace this sleep command with your command line.
 ```
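+
+Once the file is saved, a typical submit-and-check workflow (assuming Slurm's default output naming, `slurm-<jobid>.out`) looks like this:
+
+```
+sbatch my-slurm-script.sbatch    # submit the job; Slurm prints the job ID
+squeue -u $USER                  # check on your own jobs
+cat slurm-<jobid>.out            # job output lands here by default
+```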
 ## Deliverables
 
-- 
GitLab