BIOS IT Blog
vScaler integrates SLURM with Fabrex for Composable HPC
Our cloud partner vScaler has teamed up with GigaIO to integrate the SLURM job scheduler into its HPC Cloud offering, enabling elastic scaling of PCIe devices and true HPC disaggregation.
vScaler, an enterprise-class private cloud platform, can now receive scheduled SLURM jobs, determine the resources each job requires, and provision the infrastructure accordingly, from memory and CPU cores right down to GPUs or FPGAs. Upon job completion, these resources are returned to the common pool and become available for provisioning to other jobs.
The resources themselves, such as compute, networking, storage, GPUs (or FPGAs) and Intel® Optane™ memory devices, are interconnected over GigaIO's high-performance fabric, FabreX™. FabreX can unite a wide variety of resources, connecting GPUs, TPUs, FPGAs and SoCs to other compute elements or PCIe endpoint devices such as NVMe drives, PCIe-native storage, and other I/O resources. The fabric can span multiple servers and multiple racks, supporting both scale-up single-host systems and scale-out multi-host systems, all unified via the GigaIO FabreX Switch. FabreX drastically improves the utilisation rate of expensive resources like GPUs and FPGAs, as they can be reconfigured on the fly as workflows change and evolve.
Alan Benjamin, CEO and President of GigaIO Networks, commented on LinkedIn: "Anyone using SLURM, including those in AI, bioinformatics, physical world modeling, visualization and big data analytics can now run their workloads on optimized infrastructure – greatly accelerating their performance at half the TCO on average!"
The integration of the SLURM workload manager, an open-source job scheduler for Linux and Unix-like operating systems, means that vScaler Cloud users can request traditional resources such as memory and compute cores, as well as PCIe devices such as GPUs and FPGAs, to be made available to jobs as and when required.
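As an illustration of what such a request looks like from the user's side, the sketch below shows a standard SLURM batch script asking for cores, memory and GPUs via SLURM's generic resource (GRES) mechanism. The partition name, GRES label and workload binary are placeholders, not details from the vScaler deployment; site-specific names will differ.

```shell
#!/bin/bash
# Hypothetical SLURM batch script: request cores, memory and GPUs for a job.
# Partition, GRES type and executable names below are illustrative only.
#SBATCH --job-name=composable-demo
#SBATCH --partition=gpu          # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks=4               # four tasks on the node
#SBATCH --cpus-per-task=2        # two cores per task
#SBATCH --mem=32G                # memory for the whole job
#SBATCH --gres=gpu:2             # two GPUs via SLURM generic resources
#SBATCH --time=01:00:00

# Launch the workload under SLURM's process manager.
srun ./my_workload               # placeholder executable
```

Submitted with `sbatch`, a script like this is what the scheduler hands to the provisioning layer; in the integration described above, the requested devices would be attached over the fabric for the job's lifetime and released back to the pool afterwards.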
Registration is now open to see a demo of the software. Fill in your details below and a member of the team will be in touch!