IT Brief New Zealand - Technology news for CIOs & IT decision-makers

Intel reveals advancements in AI tech at Hot Chips 2024

Tue, 27th Aug 2024

At the recent Hot Chips 2024 conference, Intel presented four major technical papers, each exploring advancements in its latest technologies: the Intel Xeon 6 system-on-chip (SoC), the Lunar Lake client processor, the Intel Gaudi 3 AI accelerator, and the Optical Compute Interconnect (OCI) chiplet.

The Intel Xeon 6 SoC is designed to address specific challenges faced in edge computing environments, such as unreliable network connections and constraints related to space and power. Praveen Mosur, Intel Fellow and network and edge silicon architect, shared insights about the Intel Xeon 6 SoC’s capabilities, stating that it is "the company’s most edge-optimized processor to date." The new SoC will employ a combination of advanced compute and edge-specific I/O chiplets built on Intel 4 process technology to enhance performance and power efficiency. Additional features include up to 32 lanes of PCI Express 5.0, up to 16 lanes of Compute Express Link 2.0, and 2x100G Ethernet connectivity, among others.

Arik Gihon, the lead client CPU SoC architect, discussed the Lunar Lake client processor, which aims to set a new standard for x86 power efficiency. Compared with the previous generation, the Lunar Lake processor is expected to offer up to 40% lower system-on-chip power consumption. The processor will also include a new neural processing unit that is up to four times faster, significantly enhancing generative AI capabilities. Additionally, the new Xe2 graphics processing unit promises 1.5 times the performance of its predecessor.

Roman Kaplan, chief architect of AI accelerators, elaborated on the Intel Gaudi 3 AI accelerator, which is engineered for the training and deployment of generative AI models that require substantial compute power. According to Kaplan, the Gaudi 3 AI accelerator employs an optimised architecture that improves compute, memory, and networking efficiencies. These include matrix multiplication engines, two-level cache integration, and advanced RDMA over Converged Ethernet (RoCE) networking, which collectively aim to improve performance and power efficiencies in AI data centres.

Intel's Integrated Photonics Solutions (IPS) Group showcased the Optical Compute Interconnect (OCI) chiplet, which represents a significant advancement in interconnect technology for AI workloads. Saeed Fathololoumi, principal engineer and Photonics Integration lead, highlighted the OCI chiplet’s ability to support 64 channels of 32 gigabits per second data transmission over distances of up to 100 meters. This new development is expected to address AI infrastructure’s increasing demands for higher bandwidth while reducing power consumption, thus facilitating scalable and efficient data centre architectures.
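As a quick sanity check on the figures quoted above, the aggregate bandwidth implied by 64 channels at 32 gigabits per second each can be worked out directly (this is simple arithmetic on the reported numbers, not an official Intel specification of usable throughput):

```python
# Aggregate bandwidth implied by the quoted OCI chiplet figures:
# 64 channels, each carrying 32 Gbps.
channels = 64
gbps_per_channel = 32

aggregate_gbps = channels * gbps_per_channel
print(aggregate_gbps)  # 2048 Gbps, i.e. roughly 2 Tbps per direction
```

That works out to about 2 terabits per second in each direction, which illustrates why the chiplet is positioned as an answer to the bandwidth demands of AI infrastructure.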

Pere Monclus, chief technology officer of Intel’s Network and Edge Group, commented on the broad implications of these advancements, stating, “Across consumer and enterprise AI usages, Intel continuously delivers the platforms, systems, and technologies necessary to redefine what’s possible. As AI workloads intensify, Intel’s broad industry experience enables us to understand what our customers need to drive innovation, creativity, and ideal business outcomes.”

The presentations at Hot Chips 2024 underscored Intel’s commitment to pushing the boundaries of AI technology and infrastructure across various sectors, from cloud and data centres to edge and consumer markets. These latest innovations are aimed at improving decision-making, increasing automation, and adding significant value for Intel’s customers.
