Use Case: Distributed TensorFlow on a Cluster of 1&1 Cloud Servers

Learn how the 1&1 Cloud Server platform can support and scale your TensorFlow project. There are several use cases for running Distributed TensorFlow on a cluster of 1&1 Cloud Servers. Using a cluster lets you increase training throughput by harnessing the computing power of multiple servers. Google also recommends using a cluster when dealing with very large data sets or very large models. [...]  
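As a minimal sketch of what such a cluster looks like in the TF 1.x API, the dict below maps job names to server addresses; the IP addresses, ports, and job layout are illustrative placeholders, not values from the tutorial.

```python
# A Distributed TensorFlow (TF 1.x) cluster is described by a plain dict
# mapping job names to lists of "host:port" addresses. The addresses below
# are placeholders for your own 1&1 Cloud Servers.
cluster_def = {
    "ps": ["10.0.0.1:2222"],                       # parameter server
    "worker": ["10.0.0.2:2222", "10.0.0.3:2222"],  # training workers
}

# On each server, this dict would be passed to tf.train.ClusterSpec, and a
# tf.train.Server started with the matching job_name and task_index, e.g.:
#   cluster = tf.train.ClusterSpec(cluster_def)
#   server = tf.train.Server(cluster, job_name="worker", task_index=0)
num_workers = len(cluster_def["worker"])
```

Each process in the cluster runs the same cluster definition but identifies itself with a different `job_name`/`task_index` pair.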

Run TensorFlow Serving in a Docker Container

Docker provides a fast and easy way to deploy TensorFlow Serving on a 1&1 Cloud Server. Learn how to use the official Dockerfile.devel file to get TensorFlow Serving up and running in a Docker container. [...]  

Install TensorFlow Serving Using the Ready-to-Use Application

TensorFlow Serving is designed to provide high-performance serving of machine learning models in a production environment. After using TensorFlow to train a model, you can use the TensorFlow Serving APIs to respond to client input. TensorFlow Serving is available as a ready-to-use application that can be automatically installed on a 1&1 Cloud Server when it is built. This stack ships with a pre-trained Inception v3 model for image recognition, but it can be extended to serve other models. [...]  
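As a hedged illustration of what a client request can look like, the snippet below builds a request body for TensorFlow Serving's REST predict endpoint; the host, port, and model name are assumptions, and note that the REST API is only available in newer TensorFlow Serving releases (older Inception-era stacks expose the gRPC API instead).

```python
import json

# Hypothetical host/port/model for illustration; adjust to your deployment.
host, port, model = "localhost", 8501, "inception"

# TensorFlow Serving's REST API accepts POST requests at
# /v1/models/<model>:predict with a JSON body of input instances.
url = "http://{}:{}/v1/models/{}:predict".format(host, port, model)
body = json.dumps({"instances": [{"b64": "<base64-encoded image bytes>"}]})

# A client would then send the request, e.g. with the requests library:
#   response = requests.post(url, data=body)
#   print(response.json()["predictions"])
```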

Install TensorFlow with Pip

There are many different versions of TensorFlow, and many different ways to install it. For this tutorial we will install TensorFlow with CPU support on Ubuntu 16.04 using pip. This installation method ensures that every user on the server has access to TensorFlow. [...]  
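Once the install finishes, a quick way to confirm that TensorFlow is visible to every user is to check that the package can be found on the module path; the helper function below is our own sketch, not part of the tutorial.

```python
import importlib.util

def tensorflow_available():
    """Return True if the tensorflow package is importable on this system."""
    return importlib.util.find_spec("tensorflow") is not None

# Run this as any (non-root) user to confirm the system-wide pip install
# worked; importing tensorflow and printing tensorflow.__version__ is
# another common check.
```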
