AIRI - AI Ready Infrastructure

CGit offers a specialised IT infrastructure for companies seeking to develop solutions, services, products, or research based on Deep Learning.

 

What is so special about AIRI?

Our infrastructure is based on a reference architecture called AIRI (AI-Ready Infrastructure), developed by NVIDIA, Pure Storage, and Arista Networks. The advantage of a reference architecture is that the combined Pure Storage, NVIDIA, and Arista systems are specified and tested to deliver high performance together and to be compatible with NVIDIA's machine learning software.

FAST

AIRI supports deep learning across multiple nodes out of the box, delivering performance that scales linearly and predictably. AIRI is built on state-of-the-art technologies such as RDMA over 100Gb Ethernet.

SIMPLE

Get started in just a few hours, not weeks. The NVIDIA GPU Cloud Deep Learning Stack delivers optimised frameworks from day one, while the AIRI Scale-Out Training Kit enables a quick start across multiple DGX-1 systems.

FUTURE-PROOF

AIRI is not only an advanced, modern platform for AI development; it is also the easiest platform to scale up and grow with. Add GPUs for faster training or add storage for larger data sets. AIRI grows at the same pace as your needs.

 

Sign up for a PoC
 

Apply here

 

AIRI - a unique solution in Sweden

This is the only system of its kind in Sweden that is available as a service, which means your data never needs to leave the country. In addition, we have local experts ready to help you get started with your AI development.

We are a Swedish company with data centres in Sweden, which means we are not bound by the US Cloud Act. We comply with Swedish laws and regulations, as well as the EU's General Data Protection Regulation (GDPR).

What is the service comprised of?

NVIDIA DGX-1

NVIDIA DGX-1, with Tesla V100, is an integrated deep-learning system. DGX-1 has eight NVIDIA Tesla V100 GPU accelerators connected through NVIDIA NVLink in a “hybrid cube-mesh” topology. With dual-socket Intel Xeon CPUs and four 100 Gb/s InfiniBand network cards, DGX-1 provides unparalleled performance for deep-learning training. Not only that, the DGX-1 system software, powerful libraries, and the NVLink network are optimised to scale deep learning across all eight Tesla V100 GPUs, providing a flexible platform with maximum performance. This makes it ideal for developing and deploying deep learning in both production and research environments.
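
In practice, this means a standard deep-learning framework sees all eight GPUs at once. As a rough illustration only (this is not CGit's or NVIDIA's reference code), the sketch below shows how a PyTorch training job might be spread across the eight Tesla V100 GPUs of a single DGX-1 using DistributedDataParallel with the NCCL backend, which communicates over NVLink; the model, data, and hyperparameters are placeholders.

    # Minimal multi-GPU training sketch for a single DGX-1 (assumes PyTorch with CUDA).
    # The model and data are placeholders; only the scaling pattern is illustrated.
    import os
    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp
    from torch.nn.parallel import DistributedDataParallel as DDP

    def train(rank, world_size):
        # One process per GPU; NCCL handles GPU-to-GPU communication (over NVLink where available).
        os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
        os.environ.setdefault("MASTER_PORT", "29500")
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        torch.cuda.set_device(rank)

        model = DDP(torch.nn.Linear(1024, 10).cuda(rank), device_ids=[rank])  # stand-in network
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = torch.nn.CrossEntropyLoss()

        for step in range(100):  # stand-in for a real data loader
            x = torch.randn(64, 1024, device=f"cuda:{rank}")
            y = torch.randint(0, 10, (64,), device=f"cuda:{rank}")
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()      # gradients are all-reduced across all GPUs
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        world_size = torch.cuda.device_count()  # 8 on a DGX-1
        mp.spawn(train, args=(world_size,), nprocs=world_size)

The same pattern extends to several DGX-1 systems; only the process count and the rendezvous address change.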

DGX-1, with the latest-generation V100 GPUs, is up to 3.1x faster than the previous generation based on the Tesla P100.

The higher productivity and performance come from DGX-1 being an integrated system with a fully optimised software platform, all in running order from the get-go.

DGX-1 WITH TESLA P100 VS TESLA V100

                                    P100       V100
TFLOPS                              170        1,000
CUDA Cores                          28,672     40,960
Tensor Cores                        --         5,120
NVLink vs PCIe Speed-up             5x         10x
Deep Learning Training Speed-up     1x         3x


 


Pure Storage FlashBlade

“The world’s most valuable resource is no longer oil, but data.” However, slow and complex older storage systems frequently impede data use.

FlashBlade, from Pure Storage, is the industry’s most advanced file and object storage platform.

Why FlashBlade?

A centralised data hub in a deep-learning architecture increases productivity for those who work with the data, and makes scaling and operations easier and more agile for architects and IT. Specifically, FlashBlade makes an AI system easier to build, operate, and grow, for the following reasons:

  • Performance: With over 15GB/s random-read bandwidth and up to 75GB/s in total, FlashBlade is ready to handle the demands of an AI workflow without problems.
  • Small files: FlashBlade can read small files (50KB) at random at 10GB/s from a single chassis (50GB/s with 75 blades). This means no extra step is needed to aggregate data into larger, “storage-friendly” files.
  • Scalability: Start with a small system and add one blade at a time to increase performance and capacity as your data sets or throughput requirements grow.
  • Native object support (S3): Input data can be saved either as a file or as an object (see the sketch after this list).
  • Simple admin: No tuning and no provisioning required.
  • Non-disruptive upgrades: Software updates and hardware expansion can be done at any time, without interruption, even during production.
  • Simple management: With Pure1, a cloud service from Pure Storage, you can monitor FlashBlade from any device and receive proactive support and fixes before issues affect operation.
  • Built for the future: Developed from the ground up for flash, to take advantage of next-generation NAND technology.
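
As a rough illustration of the native S3 support, the sketch below stores and reads back a training sample as an object. It assumes the boto3 library; the endpoint address, bucket name, and credentials are hypothetical placeholders.

    # Minimal sketch of using FlashBlade's S3-compatible object interface via boto3.
    # The endpoint, bucket, and credentials below are placeholders, not real values.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://flashblade.example.local",  # hypothetical FlashBlade data address
        aws_access_key_id="ACCESS_KEY",                    # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

    # Store one training sample as an object, then read it back as bytes.
    with open("0001.jpg", "rb") as f:
        s3.put_object(Bucket="training-data", Key="samples/0001.jpg", Body=f)
    sample = s3.get_object(Bucket="training-data", Key="samples/0001.jpg")["Body"].read()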

The ability to handle small files is crucial because many types of input, including text, audio, and images, are stored as small files. If storage does not handle small files well, an additional step is required to pre-process and group small files into larger ones.

Most legacy systems were not developed to perform well with small files.

Storage systems that use spinning disks with an SSD cache tier lack the necessary performance. An SSD cache only ensures high performance for a small subset of the data and is ineffective at concealing the delays caused by spinning disks.

Ultimately, FlashBlade’s performance means that developers and data scientists do not have to wait. Instead, they can move quickly between phases of their work without wasting time on data copies. FlashBlade also makes it possible to run multiple experiments on the same data at the same time.

Arista Networks

Arista Networks is one of the fastest-growing companies in the world and ranked 10th on Fortune’s 2017 list of the 100 fastest-growing companies (Amazon was number 9). The company has achieved this result by offering superior technology and services, and the same formula is why it has come out on top, for the third year in a row, in Gartner’s Magic Quadrant report for data centre networking.

Here are some of the advantages of an Arista solution:

  • Deep Buffers
  • No downtime for upgrades
  • Open standards – it is extremely easy to integrate with Arista via its APIs (see the sketch below). EOS, Arista’s operating system, is based on an unmodified Linux kernel.
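
As a rough illustration of that openness, the sketch below sends a single command to an EOS switch through Arista’s eAPI (JSON-RPC over HTTPS). It assumes the Python requests library; the switch address and credentials are hypothetical placeholders.

    # Minimal sketch of calling Arista eAPI (JSON-RPC over HTTPS) from Python.
    # The switch address and credentials are placeholders, not real values.
    import requests

    payload = {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": ["show version"], "format": "json"},
        "id": 1,
    }

    response = requests.post(
        "https://switch.example.local/command-api",  # hypothetical management address
        json=payload,
        auth=("admin", "password"),                  # placeholder credentials
        verify=False,                                # lab use only; verify certificates in production
    )
    print(response.json()["result"][0])              # structured JSON output of "show version"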

What do I do to access the infrastructure?

Please contact us on +46 (0)31 762 02 40 or email us at ai@cgit.se and we will help you get started.

How much does it cost?

Our service can compete with all similar solutions on the market, regardless of the supplier’s size, and we do not charge you for data transmission. We also work to streamline your projects and save you both time and money: our expert consulting shortens the time it takes to get up and running on the system.

 


Contact Christian

Feel free to contact Christian directly. His main expertise is in advanced data centre solutions, and he can help you find the right products and services for your organisation.