Google Cloud unveils 8th-Gen TPUs and Cross-Cloud Lakehouse

At Next 2026, Google Cloud announced eighth-generation TPUs (8t, 8i), a Knowledge Catalog for agent-ready enterprise data, and a Cross-Cloud Lakehouse with managed interconnect.


The new TPUs are custom processors that Google says increase compute throughput and reduce latency for large-scale AI training and inference. Google described the chips as co-designed with DeepMind to align hardware, infrastructure, and software. Mark Lohmeyer, vice president and general manager for AI and computing infrastructure, noted that teams worked across the full stack. Google reported improved performance per watt for the new models and said the 8t uses fourth-generation liquid cooling, but it did not provide deployment timelines.

The Knowledge Catalog is a data context engine that tags and enriches files as they land in Google Cloud Storage so agents can find business-specific entities and relationships without manual pipelines. Karthik Narain, chief product and business officer at Google Cloud, described the system as making files “instantly tagged, enriched, and made agent ready.” Google said the catalog uses Gemini models to extract entities and map relationships automatically.
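Google has not published the catalog's API, but the flow it describes, tag and enrich each file as it lands, then record entities and their relationships, can be sketched in miniature. The sketch below is purely illustrative: the `KnowledgeCatalog` class, the `on_object_finalized` hook, and the keyword-based extraction (standing in for the Gemini-based extraction Google described) are all hypothetical.

```python
import re
from dataclasses import dataclass, field

# Hypothetical entity vocabulary; the real service reportedly uses
# Gemini models to extract business-specific entities, not a fixed list.
KNOWN_ENTITIES = {"invoice", "contract", "customer", "sku"}

@dataclass
class CatalogEntry:
    path: str
    entities: set = field(default_factory=set)
    relationships: list = field(default_factory=list)

class KnowledgeCatalog:
    """Toy in-memory catalog that tags files as they 'land' in storage."""

    def __init__(self):
        self.entries = {}

    def on_object_finalized(self, path: str, text: str) -> CatalogEntry:
        # Extract entities (simple keyword match instead of an LLM call).
        words = set(re.findall(r"[a-z]+", text.lower()))
        entry = CatalogEntry(path=path, entities=words & KNOWN_ENTITIES)
        # Map relationships: co-occurrence of two entities in one file.
        ents = sorted(entry.entities)
        entry.relationships = [
            (a, b) for i, a in enumerate(ents) for b in ents[i + 1:]
        ]
        self.entries[path] = entry
        return entry

catalog = KnowledgeCatalog()
e = catalog.on_object_finalized("gs://bucket/q3.txt", "Customer invoice for SKU 42")
print(e.entities)  # {'customer', 'invoice', 'sku'}
```

The point of the pattern is that tagging happens at ingest time, triggered by the storage event, so agents can query entities and relationships later without anyone building a manual pipeline per dataset.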

Cross-Cloud Lakehouse is presented as a unified data layer built on the Apache Iceberg table format. Google paired the Lakehouse with a managed Cross-Cloud Interconnect, a high-bandwidth network link it controls, which the company says lowers latency and improves reliability compared with point-to-point connectors. Yasmeen Ahmad, managing director of product management for Data & AI at Google Cloud, called data access the “critical piece” for enterprise AI and said the Lakehouse can connect with Databricks via Unity Catalog, as well as Snowflake, Polaris, and Amazon S3.
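Iceberg matters here because it separates table state from any one engine or cloud: a table is a chain of immutable snapshots, each listing the data files it covers, so any engine that reads the metadata sees the same consistent view wherever the files live. The toy model below illustrates only that idea; it is not the Iceberg spec (it omits manifests, schemas, and partitioning), and the class and file names are invented for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Snapshot:
    """Immutable record of the complete file list at one commit."""
    snapshot_id: int
    data_files: tuple  # URIs of the data files in this table version

class IcebergLikeTable:
    """Toy Iceberg-style table: state lives in metadata, not in an engine."""

    def __init__(self):
        self.snapshots: List[Snapshot] = []

    def commit_append(self, new_files) -> Snapshot:
        current = self.snapshots[-1].data_files if self.snapshots else ()
        snap = Snapshot(len(self.snapshots) + 1, current + tuple(new_files))
        self.snapshots.append(snap)  # older snapshots stay readable
        return snap

    def current_files(self) -> tuple:
        return self.snapshots[-1].data_files

table = IcebergLikeTable()
table.commit_append(["s3://bucket/a.parquet"])   # file in one cloud
table.commit_append(["gs://bucket/b.parquet"])   # file in another
print(table.current_files())  # one table view spanning both clouds
```

Because each snapshot is immutable, any reader, regardless of vendor or cloud, can resolve a consistent table version from the metadata alone, which is the property that lets catalogs like Unity Catalog or Polaris interoperate over shared storage.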

Google emphasized that customers can run its agent capabilities without relocating large volumes of data or relying on brittle APIs. The company said it will continue to offer Nvidia hardware, including support for the upcoming Vera Rubin NVL72 accelerator. Chris Sakalosky, vice president of strategic industries at Google Cloud, described Nvidia as a “phenomenal partner” and said customers will have hardware and software choices.

Google highlighted partnerships and investments alongside the product news. The company referenced a relationship with Anthropic and an investment commitment, and it noted the recent acquisition of security firm Wiz for $32 billion as part of a broader enterprise strategy that spans multiple cloud providers and third-party services.

On energy and sustainability, Google stated the new TPUs deliver roughly twice the performance per watt of the previous generation, but it provided limited technical detail on cooling and did not specify initial deployment locations. Industry observers note that higher chip efficiency does not automatically reduce total energy use once scale and growing demand are factored in. Google has called for expanded power-generation capacity to support expected AI growth.
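The efficiency caveat is simple arithmetic: doubling performance per watt halves the energy per unit of work, but total energy still rises whenever demand grows by more than the efficiency gain. The figures below are hypothetical, chosen only to make the relationship concrete.

```python
def total_energy(workload_units: float, perf_per_watt: float) -> float:
    # Total energy scales with work done divided by efficiency.
    return workload_units / perf_per_watt

# Hypothetical numbers: 2x efficiency, but 3x demand growth.
baseline = total_energy(workload_units=100.0, perf_per_watt=1.0)    # 100.0
efficient = total_energy(workload_units=300.0, perf_per_watt=2.0)   # 150.0
print(efficient > baseline)  # True: demand growth outpaces the efficiency gain
```

This is why a twofold performance-per-watt improvement and a call for more power-generation capacity are not contradictory claims.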

Google also signaled continued model development, with Gemini 4 expected at Google I/O in May. The company positioned the new TPUs as hardware to support faster inference and more rapid model iteration for enterprise workloads and agent deployments.
