Tag: TPU

Official Blog TPU Nov. 18, 2024

Unlocking LLM training efficiency with Trillium — a performance analysis - Trillium, Google's sixth-generation Tensor Processing Unit (TPU), delivers up to 1.8x better performance per dollar than the previous-generation Cloud TPU v5p. Trillium achieves 99% scaling efficiency, outperforming Cloud TPU v5p's 94%. It also lowers training cost by up to 1.8x (about 45% lower) compared to TPU v5p while converging to the same validation accuracy.

AI Official Blog TPU Nov. 4, 2024

Powerful infrastructure innovations for your AI-first future - Google Cloud's sixth-generation TPU, Trillium, is now available in preview, offering significant improvements in training performance, inference throughput, energy efficiency, and compute performance per chip. It features double the High Bandwidth Memory (HBM) capacity and Interchip Interconnect (ICI) bandwidth, making it ideal for large models with more weights and larger key-value caches.

Machine Learning TPU Vertex AI Sept. 23, 2024

Using TPUs for fine-tuning and deploying LLMs with dstack - Dstack, an open-source container orchestrator, now supports using TPUs with Google Cloud. You can use dstack to fine-tune and deploy large language models (LLMs) on TPUs, leveraging open-source tools like Hugging Face's Optimum TPU and vLLM.

AI Google Kubernetes Engine TPU Sept. 23, 2024

Cost management for AI/ML platforms with Google Kubernetes Engine - This blog discusses some of the capabilities and cost-saving initiatives engineered specifically into Google Kubernetes Engine (GKE) for running AI/ML workloads.

LLM Official Blog TPU Vertex AI July 29, 2024

Hex-LLM: High-efficiency large language model serving on TPUs in Vertex AI Model Garden - Hex-LLM, a high-efficiency large language model (LLM) serving framework designed for Google's Cloud TPU hardware, is now available in Vertex AI Model Garden. Hex-LLM combines state-of-the-art LLM serving technologies with in-house optimizations tailored for XLA/TPU, delivering competitive performance with high throughput and low latency.

HPC Official Blog TPU June 17, 2024

Enhancing the HPC experience with Slurm-GCP v6 and TPU support - Google Cloud announces the general availability of Slurm-GCP v6, the latest and recommended version of its Slurm-based offering for HPC systems. This release brings faster deployments, robust reconfiguration, support for more deployments in a single project, fewer dependencies in the deployment environment, and full support for TPU v3 and v4. Users can start using v6 today by navigating to the Toolkit blueprint library.

Official Blog TPU May 20, 2024

Announcing Trillium, the sixth generation of Google Cloud TPU - Trillium TPUs achieve an impressive 4.7X increase in peak compute performance per chip compared to TPU v5e.

AI GPU Official Blog TPU April 15, 2024

Accelerate AI Inference with Google Cloud TPUs and GPUs

Official Blog TPU Dec. 11, 2023

Enabling next-generation AI workloads: Announcing TPU v5p and AI Hypercomputer - AI Hypercomputer is a groundbreaking supercomputer architecture that employs an integrated system of performance-optimized hardware, open software, leading ML frameworks, and flexible consumption models.

Google Kubernetes Engine Machine Learning Official Blog TPU Dec. 11, 2023

Simplifying MLOps using Weights & Biases with Google Kubernetes Engine - In this blog, we show you how to use W&B Launch to set up access to either GPUs or Cloud Tensor Processing Units (TPUs) on GKE.

AI Google Kubernetes Engine Machine Learning Official Blog TPU Dec. 3, 2023

Powering cost-efficient AI inference at scale with Cloud TPU v5e on GKE - With Cloud TPUs on Google Kubernetes Engine (GKE), the leading Kubernetes service in the industry, customers can orchestrate AI workloads efficiently and cost effectively with best-in-class training and inference capabilities.

Official Blog TPU Nov. 13, 2023

Announcing Cloud TPU v5e GA for cost-efficient AI model training and inference - Cloud TPU v5e is now generally available, along with Single-host inference and Multislice Training technologies.

GCP Experience Official Blog TPU Nov. 13, 2023

AssemblyAI leverages Google Cloud TPU v5e for leading price-performance on large-scale AI inference

AI Official Blog TPU Nov. 13, 2023

Google Cloud demonstrates the world’s largest distributed training job for large language models across 50,000+ TPU v5e chips

AI Machine Learning Official Blog TPU Nov. 13, 2023

Introducing Accurate Quantized Training (AQT) for accelerated ML training on TPU v5e - Introduction of the open-source Accurate Quantized Training (AQT) library, which provides the software support needed for easy tensor-operation quantization in JAX.
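
A rough illustration of the idea AQT implements (this is a hand-rolled sketch in plain JAX, not the AQT API): quantize both operands of a matmul to int8, accumulate in int32, then rescale back to float.

# Hand-rolled int8 matmul sketch; AQT automates this with richer
# quantization schemes and training-time calibration.
import jax
import jax.numpy as jnp

def int8_matmul(a, b):
    # Per-tensor symmetric scales (a simplification).
    a_scale = jnp.max(jnp.abs(a)) / 127.0
    b_scale = jnp.max(jnp.abs(b)) / 127.0
    a_q = jnp.clip(jnp.round(a / a_scale), -127, 127).astype(jnp.int8)
    b_q = jnp.clip(jnp.round(b / b_scale), -127, 127).astype(jnp.int8)
    # Integer accumulation, then undo the scaling in float32.
    acc = jax.lax.dot(a_q, b_q, preferred_element_type=jnp.int32)
    return acc.astype(jnp.float32) * (a_scale * b_scale)

x = jax.random.normal(jax.random.PRNGKey(0), (128, 256))
w = jax.random.normal(jax.random.PRNGKey(1), (256, 512))
print(jnp.mean(jnp.abs(int8_matmul(x, w) - x @ w)))  # small quantization error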

AI Machine Learning Official Blog TPU Oct. 23, 2023

InstaDeep’s scalable reinforcement learning on Cloud TPU - In this article, we dive into the scaling capabilities of Cloud TPUs and their transformative impact on Reinforcement Learning workloads for both research and industry.

GPU Official Blog TPU Sept. 18, 2023

Helping you deliver high-performance, cost-efficient AI inference at scale with GPUs and TPUs - Based on the results of MLPerf™ v3.1 Inference Closed, Google Cloud GPU and TPU offerings deliver exceptional performance per dollar for AI inference.

AI Official Blog TPU Sept. 4, 2023

How to scale AI training to up to tens of thousands of Cloud TPU chips with Multislice - Learn how new Cloud TPU Multislice functionality can enable 2x higher scale than alternate accelerators, with 2x higher performance/dollar and near-linear scaling out-of-the-box.
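
At the framework level, a Multislice job looks like one large device mesh with a fast intra-slice (ICI) dimension and a slower cross-slice (DCN) dimension. A minimal JAX sketch, assuming a job running on two TPU slices and that jax.experimental.mesh_utils.create_hybrid_device_mesh is available in your JAX version; the axis sizes are illustrative.

import jax
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec

num_slices = 2                                   # assumption: a 2-slice job
per_slice = len(jax.devices()) // num_slices

# ICI layout inside each slice, DCN layout across slices; the resulting mesh
# is the element-wise product of the two shapes.
devices = mesh_utils.create_hybrid_device_mesh(
    mesh_shape=(1, per_slice),        # ('data', 'model') within a slice
    dcn_mesh_shape=(num_slices, 1),   # data parallelism across slices
)
mesh = Mesh(devices, axis_names=("data", "model"))

# Shard the batch across slices over DCN, shard weights within a slice over ICI.
batch_sharding = NamedSharding(mesh, PartitionSpec("data", None))
weight_sharding = NamedSharding(mesh, PartitionSpec(None, "model"))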

Google Kubernetes Engine Official Blog TPU Sept. 4, 2023

Cloud TPU support in GKE under the hood - Support for both Cloud TPU v4 and Cloud TPU v5e on GKE is now GA, letting you do large-scale AI workload orchestration on optimized infrastructure.

Official Blog TPU Sept. 4, 2023

Cloud TPU v5e accelerates large-scale AI inference - Designed to be efficient, scalable, and versatile, the new Cloud TPU v5e delivers high-throughput and low-latency inference performance.

Google Distributed Cloud Hosted Official Blog TPU Aug. 28, 2023

Enter the Hardware-verse: Google hardware comes to life at Google Cloud Next - Visitors to the Hardware-verse at Google Cloud Next can see Google Distributed Cloud (GDC) and Cloud Tensor Processing Units (TPUs) up close.

AI Official Blog TPU April 10, 2023

Google’s Cloud TPU v4 provides exaFLOPS-scale ML with industry-leading efficiency - A new paper describes how Google’s Cloud TPU v4 outperforms TPU v3 by 2.1x on a per-chip basis, and improves performance/Watt by 2.7x.

AI GCP Experience Machine Learning Official Blog TPU Dec. 5, 2022

How InstaDeep used Cloud TPU v4 to help sustainable agriculture - Google Cloud and InstaDeep discuss how Cloud TPU v4 helped train a large genomics model that will predict crop characteristics, supporting sustainable agriculture.

AI Compute Engine HPC Networking Official Blog TPU Oct. 17, 2022

Google Cloud infrastructure enhancements tailored for your workloads

Official Blog PyTorch TPU Oct. 10, 2022

Building Large Scale Recommenders using Cloud TPUs - In this blog post, we introduce concepts to generate and analyze traces to debug PyTorch training performance on TPU VM.

AI GCP Experience Machine Learning Official Blog TPU Aug. 1, 2022

How Cohere is accelerating language model training with Google Cloud TPUs - Google Cloud and Cohere discuss how Cohere’s new framework deployed on Cloud TPU v4 Pods helps accelerate large language model training.

AI Machine Learning Official Blog TPU July 4, 2022

Cloud TPU v4 records fastest training times on five MLPerf 2.0 benchmarks - Cloud TPU v4 ML supercomputers set performance records on five MLPerf 2.0 benchmarks.

GCP Experience Official Blog TPU June 6, 2022

Snap Inc. adopts Google Cloud TPU for deep learning recommendation models - Snap, Inc. is using Google Cloud solutions to quickly turn millions of data points into personalized customer ad recommendations.

Infrastructure Official Blog TPU May 16, 2022

Google Cloud unveils world’s largest publicly available ML hub with Cloud TPU v4, 90% carbon-free energy - Google Cloud unveils world’s largest publicly available machine learning cluster with up to 9 exaflops of computing power.

Infrastructure Official Blog TPU May 16, 2022

Cloud TPU VMs are generally available - Cloud TPU VMs with Ranking & Recommendation acceleration are generally available on Google Cloud. Customers will have direct access to TPU host machines.

Machine Learning Official Blog PyTorch TPU Jan. 10, 2022

PyTorch/XLA: Performance debugging on TPU-VM part 1 - We present the fundamentals of the performance analysis of PyTorch on Cloud TPUs and discuss a performance analysis case study.

Machine Learning Official Blog TPU Dec. 6, 2021

Google showcases Cloud TPU v4 Pods for large model training - Google’s MLPerf v1.1 Training submission showcased two large (480B & 200B parameter) language models using publicly available Cloud TPU v4 Pod slices.

AI Machine Learning Official Blog TPU July 26, 2021

Scaling deep learning workloads with PyTorch / XLA and Cloud TPU VM - This article addresses challenges associated with scaling deep learning workloads to distributed training jobs that use remote storage. We demonstrate how to stream training data from Cloud Storage to PyTorch / XLA models running on Cloud TPU Pods.

AI Machine Learning Official Blog TPU July 5, 2021

Google demonstrates leading performance in latest MLPerf Benchmarks - TPU v4 Pods will soon be available on Google Cloud, providing the most powerful publicly available computing platform for machine learning training.

Docker Machine Learning TPU June 14, 2021

Accessing your TPUs in Docker Containers with TPU VM - Troubleshooting issues when connecting to TPUs from within a Docker container.

AI HPC Machine Learning Official Blog TPU June 7, 2021

New Cloud TPU VMs make training your ML models on TPUs easier than ever - New Cloud TPU VMs let you run TensorFlow, PyTorch, and JAX workloads on TPU host machines, improving performance and usability, and reducing costs.

Machine Learning Official Blog TPU April 12, 2021

How to use PyTorch Lightning's built-in TPU support - How to start training ML models with PyTorch Lightning on TPUs.
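
A minimal sketch of what that looks like, assuming pytorch_lightning and torch_xla are installed on a Cloud TPU VM; Lightning releases from around this post select TPUs with tpu_cores, while newer 2.x releases use accelerator="tpu", devices=8 instead.

import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

train_loader = DataLoader(
    TensorDataset(torch.randn(512, 32), torch.randint(0, 2, (512,))),
    batch_size=64,
)

# `tpu_cores=8` spawns one process per TPU core and handles device placement
# (Lightning 1.x; newer releases use accelerator="tpu", devices=8).
trainer = pl.Trainer(max_epochs=1, tpu_cores=8)
trainer.fit(TinyModel(), train_loader)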

AI Platform Machine Learning TPU Jan. 11, 2021

Running PyTorch with TPUs on GCP AI Platform Training - Using TPUs in PyTorch on AI Platform.

GCP Experience Official Blog TPU Dec. 15, 2020

Samsung Electronics supercharges Bixby with Cloud TPUs & TensorFlow - Samsung improves Bixby voice recognition model training speeds 18x with Cloud TPUs.

Machine Learning TPU Dec. 7, 2020

Training PyTorch on Cloud TPUs - This article attempts to summarize PyTorch/XLA constructs to help you update your model and training code to run with Cloud TPUs.
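
The core constructs are small: get an XLA device from torch_xla, move the model and tensors onto it, and step the optimizer through xm.optimizer_step so the lazily built XLA graph is executed. A minimal single-core sketch (torch_xla installed on a TPU VM is assumed):

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()            # a TPU core exposed as a torch device
model = torch.nn.Linear(32, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(10):
    x = torch.randn(64, 32, device=device)
    y = torch.randint(0, 2, (64,), device=device)
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # barrier=True flushes the pending XLA graph (like xm.mark_step()).
    xm.optimizer_step(optimizer, barrier=True)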

Data Science Machine Learning TPU Nov. 16, 2020

Running BERT on Google Cloud Platform With TPU - Use Google Cloud and TPUs to build a deep learning environment.

Machine Learning TPU Tutorial Oct. 5, 2020

PyTorch / XLA is now Generally Available on Google Cloud TPUs - PyTorch / XLA, a package that lets PyTorch connect to Cloud TPUs and use TPU cores as devices, is now generally available.

AI Machine Learning Official Blog TPU Oct. 5, 2020

PyTorch / XLA now generally available on Cloud TPUs - PyTorch is now GA on Google Cloud TPUs.

AI Machine Learning Official Blog TPU July 6, 2020

Google breaks AI performance records in MLPerf with world's fastest training supercomputer - Google set performance records in six out of the eight MLPerf benchmarks at the latest MLPerf benchmark contest.

Machine Learning TPU Tutorial March 16, 2020

Get started with PyTorch, Cloud TPUs, and Colab - Running machine learning with PyTorch on TPUs in Colab.

AI Machine Learning Official Blog TPU March 9, 2020

Better scalability with Cloud TPU pods and TensorFlow 2.1 - Cloud TPU Pods are now generally available, and include TensorFlow 2.1 support and other new features.
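
In TensorFlow 2.x the entry point is the TPU distribution strategy: resolve the TPU, initialize it, and build the Keras model inside strategy.scope(). A minimal sketch; the TPU name "my-tpu" is a placeholder, and on TF 2.1 the class was still tf.distribute.experimental.TPUStrategy.

import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")  # placeholder
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# The global batch is split across all TPU replicas automatically.
x = tf.random.normal((1024, 32))
y = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, batch_size=256)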

Machine Learning Python TensorFlow TPU Feb. 10, 2020

Train Neural Networks Faster with Google’s TPU from your Laptop - Tutorial on how to set up a Cloud TPU and train ML models.

AI Official Blog TPU Nov. 11, 2019

Cloud TPU breaks scalability records for AI Inference - Results from the MLPerf Inference benchmark demonstrate that Cloud TPU inference meets critical needs of ML customers: developer velocity, scalability, and elasticity.

AI Machine Learning Official Blog TPU Sept. 16, 2019

Train ML models on large images and 3D volumes with spatial partitioning on Cloud TPUs - Spatial partitioning is a new Cloud TPU feature that lets you seamlessly scale image models to larger sizes (in 2D and 3D) without changing your code. Here’s how to get started.

Official Blog TPU Aug. 26, 2019

BFloat16: The secret to high performance on Cloud TPUs - How the high performance of Google Cloud TPUs is driven by the Brain Floating Point Format, or bfloat16.
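
The key property is that bfloat16 keeps float32's 8-bit exponent (and therefore its dynamic range) while spending only 7 bits on the mantissa, whereas float16 trades exponent bits for precision and overflows far earlier. A small illustration using TensorFlow casts, though any framework with bfloat16 support would do:

import tensorflow as tf

big = tf.constant(3.0e38)
print(tf.cast(big, tf.bfloat16).numpy())    # still finite: bfloat16 max is ~3.4e38
print(tf.cast(big, tf.float16).numpy())     # inf: float16 max is ~65504

small = tf.constant(1.2345678)
print(tf.cast(small, tf.bfloat16).numpy())  # ~1.234: only ~2-3 decimal digits kept

# Keras mixed precision: bfloat16 compute with float32 variables.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")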

Official Blog TPU July 15, 2019

Cloud TPU Pods break AI training records - Google Cloud sets three new records in the industry-standard ML benchmark contest, MLPerf, with each of the winning runs using less than two minutes of compute time.

Machine Learning TensorFlow TPU June 10, 2019

Replicating GPT2–1.5B - Experience of using Cloud TPUs.

Official Blog TPU May 13, 2019

Google’s scalable supercomputers for machine learning, Cloud TPU Pods, are now publicly available in beta - Cloud TPU Pods are now available in beta, helping you train models, even very large ones, faster and at lower cost on Google Cloud.

AI Official Blog TPU April 29, 2019

Train and deploy state-of-the-art mobile image classification models via Cloud TPU - Learn how to train embedded Neural Architecture Search machine learning models on Cloud TPUs to output quantized TensorFlow Lite classifiers on embedded systems.

Official Blog TPU April 29, 2019

What’s in an image: fast, accurate image segmentation with Cloud TPUs - We’re making it easier for you to use Cloud TPUs for image segmentation by releasing high-performance, open source TPU-optimized implementations of two state-of-the-art segmentation models.

Machine Learning Official Blog TensorFlow TPU March 4, 2019

Train fast on TPU, serve flexibly on GPU: switch your ML infrastructure to suit your needs - In this post, we walk through training and serving an object detection model and demonstrate how TensorFlow’s comprehensive and flexible feature set can be used to perform each step, regardless of which hardware platform you choose.

Official Blog TPU Jan. 21, 2019

Getting started with Cloud TPUs: An overview of online resources - An overview of online resources about TPUs.

Official Blog TensorFlow TPU Dec. 17, 2018

Now you can train TensorFlow machine learning models faster and at lower cost on Cloud TPU Pods - Using Cloud TPU Pods to train TensorFlow machine learning models.

TPU Nov. 12, 2018

Cloud TPUs (TensorFlow @ O’Reilly AI Conference, San Francisco '18) - This talk takes you through a technical deep dive on Google's Cloud TPUs accelerators, as well as how to program them. It also covers the programming abstractions that allow you to run your models on CPUs, GPUs, and Cloud TPU, from single devices up to entire Cloud TPU pods.

Machine Learning TPU Nov. 5, 2018

Training Image & Text Classification Models Faster with TPUs on Cloud ML Engine

GPU Machine Learning TPU Oct. 15, 2018

Google Colab provides free access to GPUs and TPUs - Develop deep learning applications using Colab's free GPU and TPU resources.

Google Kubernetes Engine Official Blog TPU Sept. 17, 2018

Cloud TPUs in Kubernetes Engine powering Minigo are now available in beta - Google-designed Cloud TPUs are publicly available in beta on Google Kubernetes Engine. GKE also supports Preemptible Cloud TPUs that are priced 70% lower than the standard price of Cloud TPUs.

Official Blog TPU Sept. 3, 2018

What makes TPUs fine-tuned for deep learning? - Cloud TPU provides the benefit of the TPU as a scalable and easy-to-use cloud computing resource to all developers and data scientists running cutting-edge ML models on Google Cloud.

Cloud ML Official Blog TPU Aug. 20, 2018

Hyperparameter tuning using TPUs in Cloud ML Engine - How to use hyperparameter tuning on TPUs on Cloud ML Engine.

TPU Aug. 13, 2018

HowTo Start Using TPUs From Google Colab in Few Simple Steps - Steps for Using TPUs From Google Colab.

TPU July 30, 2018

A tutorial on using Google Cloud TPUs - Guide on using Google Cloud TPUs.

Cloud ML Official Blog TPU July 16, 2018

How to train a ResNet image classifier from scratch on TPUs on Cloud ML Engine - How to train a state-of-the-art image classification model on your own data using Google’s Cloud TPUs.

Machine Learning TensorFlow TPU July 16, 2018

Training and serving a realtime mobile object detector in 30 minutes with Cloud TPUs - Example of training an object detection model on Cloud TPUs with TensorFlow.

Official Blog TPU June 25, 2018

Cloud TPU now offers preemptible pricing and global availability - Cloud TPUs are now available in two new regions (in Europe and Asia), and preemptible pricing for Cloud TPUs is 70% lower than the normal price.

Cloud ML Official Blog TPU May 28, 2018

Cloud ML Engine adds Cloud TPU support for training - Cloud Machine Learning Engine (ML Engine) offers the option to accelerate training with Cloud TPUs as a beta feature.

Machine Learning TPU Feb. 19, 2018

Cloud TPU machine learning accelerators now available in beta - Cloud TPUs are available in beta on Google Cloud Platform (GCP) to help machine learning (ML) experts train and run their ML models more quickly.

Machine Learning TPU May 22, 2017

Build and train machine learning models on our new Google Cloud TPUs - Announced at Google I/O 2017, Tensor Processing Units (TPUs), which are designed for deep learning tasks, will be available as part of Google Cloud Platform.

Kubernetes Official Blog TPU

Introducing AI Hub and Kubeflow Pipelines: Making AI simpler, faster, and more useful for businesses - Making AI more useful for businesses with AI Hub and Kubeflow Pipelines.

 
