Hyperdisk type
1 You can't specify a throughput level for Hyperdisk Extreme volumes. The provisioned throughput is based on the IOPS level you specify.
2 You can't specify an IOPS level for Hyperdisk Throughput and Hyperdisk ML volumes. The provisioned IOPS is based on the throughput level you specify.
The following is a summary of key Hyperdisk performance concepts:
For a discussion of how Hyperdisk performance works, see About Hyperdisk performance. For performance limits for each Hyperdisk type, see Hyperdisk performance limits.
Each Hyperdisk type has different latency profiles. Google recommends comparing Hyperdisk Throughput to the latency of a hard disk drive. You can compare the latency for Hyperdisk Balanced, Hyperdisk Balanced High Availability, Hyperdisk Extreme, and Hyperdisk ML to the latency of enterprise SSDs.
Hyperdisk Balanced and Hyperdisk Extreme offer sub-millisecond latency.
This section lists the machine series that each Hyperdisk type supports. If a machine series doesn't support Hyperdisk, use Persistent Disk.
Select one or more machine series to see the supported Hyperdisk types.
Machine series | Hyperdisk Balanced | Hyperdisk Extreme | Hyperdisk Throughput | Hyperdisk ML | Hyperdisk Balanced HA |
---|---|---|---|---|---|
C4A | — | — | |||
C4 | — | — | — | ||
C4D (Preview) | — | — | — | ||
C3 | |||||
C3D | — | ||||
N4 | — | — | — | ||
N2 | — | — | — | ||
N2D | — | — | — | — | |
N1 | — | — | — | — | — |
T2D | — | — | — | — | |
T2A | — | — | — | — | — |
E2 | — | — | — | — | — |
Z3 | — | — | — | ||
H3 | — | — | — | ||
C2 | — | — | — | — | — |
C2D | — | — | — | — | — |
X4 | — | — | — | ||
M4 | — | — | — | ||
M3 | — | — | |||
M2 | — | — | — | ||
M1 | — | — | — | ||
N1+GPU | — | — | — | — | — |
A4 | — | — | — | ||
A3 (H200) | — | — | — | ||
A3 (H100) | — | ||||
A2 | — | — | — | — | |
G2 | — | — | — |
This section lists the restrictions that apply to the machine series that each Hyperdisk type supports.
To use Hyperdisk Balanced with A3 VMs, the VM must have at least 8 GPUs.
For Hyperdisk Extreme, the following restrictions apply:
You can't use Hyperdisk Throughput with c3-*-metal
machine types.
You can share a Hyperdisk volume between multiple VMs by simultaneously attaching the same volume to multiple VMs.
The following scenarios are supported:
Concurrent read-write access to a single volume from multiple VMs. Recommended for clustered file systems and highly available workloads like SQL Server Failover Cluster Infrastructure. Supported for Hyperdisk Balanced and Hyperdisk Balanced High Availability volumes.
Concurrent read-only access to a single volume from multiple VMs. This is more cost effective than having multiple disks with the same data. Recommended for accelerator-optimized machine learning workloads. Supported for Hyperdisk ML volumes.
You can't attach a Hyperdisk Throughput or Hyperdisk Extreme volume to more than one VM.
To learn about disk sharing, see Share a disk between VMs.
You can protect your data in the rare event of a zonal or regional outage by enabling replication, that is, maintaining a copy of the data in another zone or region.
To replicate data to another zone within the same region, you must use Hyperdisk Balanced High Availability volumes. Hyperdisk Balanced High Availability is the only supported Hyperdisk type for zonal replication.
For more information, see About synchronous disk replication.
You can protect your data in the unlikely event of a regional outage by enabling Asynchronous Replication. Asynchronous Replication maintains a copy of the data on your volume in another region. For example, to protect a Hyperdisk Balanced volume in us-west1
, you can use Asynchronous Replication to replicate the volume to a secondary volume in the us-east4
region. If the volume in us-west1
became unavailable, then you could use the secondary volume in us-east4
.
You can use Asynchronous Replication with the following Hyperdisk types:
To learn more about cross-regional replication, see Asynchronous Replication.
By default, Compute Engine protects your Hyperdisk volumes with Google-owned and Google-managed encryption keys. You can also encrypt your Hyperdisk volumes with customer-managed encryption keys (CMEK).
For more information, see About disk encryption.
You can add hardware-based encryption to a Hyperdisk Balanced disk by enabling Confidential mode for the disk when you create it. You can use Confidential mode only with Hyperdisk Balanced disks that are attached to Confidential VMs.
For more information, see Confidential mode for Hyperdisk Balanced volumes.
Compute Engine distributes the data on Hyperdisk volumes across several physical disks to ensure durability and optimize performance.
Disk durability represents the probability of data loss, by design, for a typical disk in a typical year. Hyperdisk data loss events are extremely rare and have historically been the result of coordinated hardware failures, software bugs, or a combination of the two. Google takes many steps to mitigate the industry-wide risk of silent data corruption.
Durability is calculated with a set of assumptions about hardware failures, the likelihood of catastrophic events, isolation practices and engineering processes in Google data centers, and the internal encodings used by each disk type.
Human error by a Google Cloud customer, such as when a customer accidentally deletes a disk, is outside the scope of Hyperdisk durability.
The table below shows durability for each disk type's design. 99.999% durability means that with 1,000 Hyperdisk volumes, you would likely go a hundred years without losing a single one.
Hyperdisk Balanced | Hyperdisk Extreme | Hyperdisk ML | Hyperdisk Throughput | Hyperdisk Balanced High Availability |
---|---|---|---|---|
Better than 99.999% | Better than 99.9999% | Better than 99.999% | Better than 99.999% | Better than 99.9999% |
Hyperdisk volumes are mounted as a disk on a VM using the NVMe or SCSI interface, depending on the machine type of the instance.
Hyperdisk Storage Pools make it easier to lower your block storage total cost of ownership and simplify block storage management. With Hyperdisk Storage Pools, you can share a pool of capacity and performance across a maximum of 1,000 disks in a single project. Because storage pools offer thin-provisioning and data reduction, you can achieve higher efficiency.
Storage pools simplify migrating your on-premises SAN to the cloud, and also make it easier to provide your workloads with the capacity and performance that they need.
You create a storage pool with the estimated capacity and performance for all workloads in a project in a specific zone. You then create disks in this storage pool and attach the disks to existing VMs. You can also create a disk in the storage pool as part of creating a new VM. Each storage pool contains one type of disk, such as Hyperdisk Throughput. There are two types of Hyperdisk Storage Pools:
For information about using Hyperdisk Storage Pools, see About storage pools.
You are billed for the total provisioned capacity of your Hyperdisk volumes until you delete them. You are charged per GiB per month. Additionally, you are billed for the following:
Because the data for regional disks is written to two locations, the cost of Hyperdisk Balanced High Availability storage is twice the cost of Hyperdisk Balanced storage.
For more pricing information, see Disk pricing.
Hyperdisk volumes are not eligible for:
Hyperdisk can be used with Spot VMs (or preemptible VMs). However, there are no discounted spot prices for Hyperdisk.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-04-24 UTC.