

GPU-as-a-Service

SNS Network, in strategic partnership with Dell and NVIDIA, is set to launch a fully managed AI cloud infrastructure powered by 64 NVIDIA H100 GPUs.
This locally hosted GPU-as-a-Service is designed to give customers the high-performance computing resources their AI workloads require.

Why GPU-as-a-Service (GPUaaS)?


Cost Efficiency


Conduct proofs of concept (POCs) on local infrastructure


Scalability and Flexibility


Boost ROI by optimising GPU usage


Enhance performance for AI and Machine Learning (ML)


Sustainability

Why SNS?


Data Sovereignty and Privacy

Ensuring compliance with local regulations and protecting data within national boundaries.

Scalability and Flexibility

Scalable resources that meet growing AI demands, enabling businesses to adapt to evolving workloads without upfront infrastructure investment.

Personalized AI Readiness Assessments

AI readiness assessments conducted by Dell and NVIDIA experts provide a tailored roadmap for a seamless transition to GPUaaS.

Explore Dell Technologies' Extensive Portfolio to Bring AI to Your Data

Infrastructure and Platform Software Stack

Item | Description | Model | Quantity | Remarks
Servers | Compute Nodes | Dell PowerEdge XE9680 | 4 | H100 GPUs
Servers | Control Plane Nodes | Dell PowerEdge R660 | 3 | No GPUs
Switches | Fabric / Compute Switches | NVIDIA MQM9700 | 2 | InfiniBand switch
Switches | Storage / Kubernetes Switches | Dell PowerSwitch Z9432F | 2 | Storage and client traffic
Switches | Management Switches | Dell PowerSwitch N3248TE-ON | 1 | Management network
Storage | Storage | Dell PowerScale F710 | 3 | Pool usable capacity: 152 TB @ 100% utilisation
Software | Software | NVIDIA AI Enterprise | 1 | NVIDIA software
Software | GPU Orchestration Platform | NVIDIA Platform | 1 | NVIDIA platform
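
For illustration only, the sketch below shows how a customer workload might request GPU capacity on the Kubernetes layer of a stack like this, using the official Kubernetes Python client and the standard NVIDIA device plugin resource name (nvidia.com/gpu). The namespace, container image, and training script are placeholder assumptions, not part of the SNS offering.

"""Illustrative sketch only: submitting a GPU job to a Kubernetes cluster.

Assumes the cluster exposes GPUs through the NVIDIA device plugin
(resource name "nvidia.com/gpu"); the namespace, container image, and
training script below are placeholders.
"""
from kubernetes import client, config


def submit_gpu_job() -> None:
    config.load_kube_config()  # uses the caller's kubeconfig credentials
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-training-job"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image="nvcr.io/nvidia/pytorch:24.01-py3",  # placeholder image
                    command=["python", "train.py"],            # placeholder script
                    resources=client.V1ResourceRequirements(
                        # Ask the NVIDIA device plugin for one GPU.
                        limits={"nvidia.com/gpu": "1"},
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)


if __name__ == "__main__":
    submit_gpu_job()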

Subscription Options

We offer flexible subscription options:


Monthly


Annually


Reserved instances for 1, 2, or 3 years

How the Technology Works

NVIDIA Multi-Instance GPU (MIG)

Without MIG, different jobs running on the same GPU, such as separate AI inference requests, compete for the same resources: a job that consumes more memory bandwidth starves the others, and several jobs miss their latency targets. With MIG, jobs run simultaneously on separate GPU instances, each with dedicated compute, memory, and memory bandwidth, delivering predictable performance with quality of service (QoS) and maximum GPU utilisation.
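
As a minimal sketch of what this isolation looks like in practice, the Python snippet below pins two inference jobs to separate MIG instances through the CUDA_VISIBLE_DEVICES environment variable. The MIG UUIDs and the worker script name are placeholders, and the GPU is assumed to have already been partitioned by the platform operator (for example with nvidia-smi).

"""Minimal sketch: pinning jobs to dedicated MIG instances.

Assumes MIG has already been enabled and the H100 partitioned by the
operator; the MIG UUIDs and worker script below are placeholders.
"""
import os
import subprocess

# Placeholder MIG instance UUIDs -- list the real ones with `nvidia-smi -L`.
MIG_INSTANCES = [
    "MIG-11111111-2222-3333-4444-555555555555",
    "MIG-66666666-7777-8888-9999-000000000000",
]


def launch(job_script: str, mig_uuid: str) -> subprocess.Popen:
    """Start one inference job bound to a single MIG instance."""
    env = os.environ.copy()
    # CUDA exposes only the named MIG instance to this process, so the job
    # gets its own dedicated slice of compute, memory, and memory bandwidth.
    env["CUDA_VISIBLE_DEVICES"] = mig_uuid
    return subprocess.Popen(["python", job_script], env=env)


if __name__ == "__main__":
    # Two inference jobs run side by side without contending for resources.
    procs = [launch("inference_worker.py", uuid) for uuid in MIG_INSTANCES]
    for proc in procs:
        proc.wait()

Because each process sees only its own MIG instance, a bandwidth-heavy job cannot starve its neighbour, which is the predictable-performance behaviour described above.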

How Can We Help You?

Get In Touch With Us and Learn More

Chat with one of our representatives on WhatsApp or email us at online@sns.com.my.