
Overview

Since 1982, Microway has served government, defense, and research-computing customers, designing hardware for bleeding-edge computational performance.

Microway solutions are designed for the intersection of AI and HPC. We architect clusters, servers, and quiet workstations, with configurations available for every variety of NVIDIA GPU. We also build the data planes that keep up with these advanced workloads.

Our customers trust us to enable them to remain at the forefront of their fields and solve the world’s toughest computational challenges. Microway is classified as a small business, woman-owned and operated.

Ultra-Quiet Workstations

  • Microway AMD EPYC Mid-Tower WhisperStation

    An AMD EPYC “Rome”-based WhisperStation designed for the ultimate in CPU performance. This system includes dual AMD EPYC 7252 CPUs, 256GB of DDR4-3200 memory, and NVMe SSDs. Every WhisperStation configuration features near-silent power supplies, quiet CPU coolers, noise-isolation material, and ultra-quiet fans.

  • Data Science WhisperStation (Single Quadro RTX 6000 GPU)

    A GPU-accelerated platform for end-to-end data science. Data Science WhisperStation includes an NVIDIA-approved software stack and a single NVIDIA Quadro RTX 6000 GPU, which combine to greatly accelerate the workflow of every data scientist.

    Data Science WhisperStation is the only ultra-quiet NVIDIA Data Science Workstation platform available, featuring a proprietary combination of near-silent power supplies, quiet CPU coolers, noise-isolation material, and ultra-quiet fans.

  • Data Science WhisperStation (Dual Quadro RTX 6000 GPU)

    A GPU-accelerated platform for end-to-end data science. Data Science WhisperStation includes an NVIDIA-approved software stack and dual NVIDIA Quadro RTX 6000 GPUs (for larger models), which combine to greatly accelerate the workflow of every data scientist.

    Data Science WhisperStation is the only ultra-quiet NVIDIA Data Science Workstation platform available, featuring a proprietary combination of near-silent power supplies, quiet CPU coolers, noise-isolation material, and ultra-quiet fans.


Servers

  • Intel Xeon 2U TwinPro² Servers (Four Nodes per 2U)

    Four Xeon server nodes in a dense 2U rackmount platform for the datacenter. This Intel Xeon 2U TwinPro² configuration includes two 16-core Xeon CPUs per node, 192GB of DDR4-2933 memory, Intel Data Center SSDs, and Mellanox InfiniBand. Each node also includes a fully integrated software stack with a Linux OS, OFED, OpenMPI, a workload manager, and the open-source GNU compilers.

  • AMD EPYC 2U Twin² Servers (Four Nodes per 2U)

    Four EPYC server nodes in a dense 2U rackmount platform for the datacenter. This AMD EPYC 2U Twin² configuration includes two 32-core CPUs per node, 256GB of DDR4-3200 memory, Intel NVMe Data Center SSDs, and Mellanox InfiniBand. Each node also includes a fully integrated software stack with a Linux OS, OFED, OpenMPI, a workload manager, and the open-source GNU compilers.

  • Xeon VMware 1U Server

    A dense 1U flash server that is ready for VMware ESXi. Our Xeon VMware server includes two Intel Xeon CPUs, 96GB of DDR4 memory, and Intel Data Center SSDs.

Clusters

  • AMD EPYC Turnkey Cluster + 8 NVIDIA Tesla V100 GPU Nodes

    An AMD EPYC-based HPC & AI compute cluster with NVIDIA Tesla V100 compute nodes. This cluster configuration includes eight AMD EPYC compute nodes, each accelerated by dual Tesla V100 GPUs, along with an InfiniBand fabric, a management network, and a 42U rackmount cabinet with power distribution. It ships with a fully integrated software load: Microway Cluster Management Software (MCMS), OFED, OpenMPI, a workload manager, and the open-source GNU compilers.

  • Intel Xeon Turnkey Cluster + 8 NVIDIA Tesla V100 GPU Nodes

    An Intel Xeon-based HPC & AI compute cluster with NVIDIA Tesla V100 compute nodes. This cluster configuration includes eight Intel Xeon compute nodes, each accelerated by dual Tesla V100 GPUs, along with an InfiniBand fabric, a management network, and a 42U rackmount cabinet with power distribution. It ships with a fully integrated software load: Microway Cluster Management Software (MCMS), OFED, OpenMPI, a workload manager, and the open-source GNU compilers.
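The turnkey clusters above ship with OpenMPI and a workload manager preinstalled. As a rough sketch of how a user might run work on one of the GPU nodes — assuming the workload manager is Slurm (the page does not name the scheduler, and module names vary by site) — a batch script could look like:

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a dual-V100 compute node.
# Directive names are standard Slurm; the GPU GRES name and module
# name are site-specific assumptions.
#SBATCH --job-name=v100-test
#SBATCH --nodes=1
#SBATCH --gres=gpu:2          # request both Tesla V100 GPUs on the node
#SBATCH --ntasks-per-node=2   # one MPI rank per GPU
#SBATCH --time=00:10:00

# Load the site's MPI environment (assumed module name)
module load openmpi

# Launch one rank per allocated GPU
mpirun ./my_gpu_application
```

Submitted with `sbatch job.sh`, the workload manager queues the job and places it on a free GPU node; `mpirun` then inherits the allocation from Slurm.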

DGX A100

NVIDIA DGX™ A100 is the universal system for all AI infrastructure and workloads, built on the revolutionary NVIDIA A100 Tensor Core GPU and backed by over a decade of AI innovation at NVIDIA. DGX A100 is the single platform that brings together training, inference, and analytics into a consolidated system with optimized software, making it the foundational building block for AI infrastructure. Backed by the AI-fluent expertise that comes with every DGX system, NVIDIA DGX A100 speeds the end-to-end lifecycle of enterprise AI across planning, deployment, and ongoing optimization.

Introducing DGX A100 (video)

I am AI (GTC 2020 video)

NVIDIA: The AI Podcast


Resources:

Latest News

Microway, a leading provider of computational clusters, servers, and workstations for AI and HPC applications, announces it supplied Milwaukee School of Engineering (MSOE) with an NVIDIA® DGX™ ...

Microway, a leading provider of computational clusters, servers, and workstations for AI and HPC applications, announces it has provided Oregon State University with six NVIDIA® DGX-2™ ...

Datasheet



Guide

This document provides a high-level architectural overview of the most popular solutions for GPU-accelerated computing, deep learning, and machine learning. Selecting the correct architecture is an important step in HPC & AI deployments. Those not intimately familiar with system architectures are en...