Revolutionizing AI Benchmarks: MLCommons Aims for Laptops, Desktops and Workstations

As Artificial Intelligence (AI) continues to shift from cloud-based operations to on-device applications, consumers are often left wondering how to gauge the performance of AI-powered apps on different devices. Knowing this up front could make everyday tasks run more efficiently and save valuable time. MLCommons, an industry group known for its AI-related hardware benchmarking standards, aims to simplify this process with the introduction of performance benchmarks for consumer PCs, also known as “client systems”.

MLPerf Client: A New Working Group for AI Benchmarks

MLCommons recently announced the establishment of a new working group, MLPerf Client, dedicated to creating AI benchmarks for desktops, laptops, and workstations running various operating systems, including Windows and Linux. The group promises that these benchmarks will be “scenario-driven”, focusing on real end-user use cases and incorporating community feedback.

The first benchmark from MLPerf Client will concentrate on text-generating models, specifically Meta’s Llama 2. This model has already been integrated into MLCommons’ other benchmarking suites for datacenter hardware. Meta has also collaborated with Qualcomm and Microsoft to optimize Llama 2 for Windows, which will greatly benefit devices running on this operating system.
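
To give a rough sense of what a scenario-driven text-generation benchmark might measure on a client machine, here is a minimal, hypothetical sketch that times local generation with a Llama 2 model via the Hugging Face transformers library and reports tokens per second. The model ID, prompt, and token budget are illustrative assumptions (the Llama 2 weights are gated and require access approval), and this is not MLPerf Client’s actual methodology.

```python
# Hypothetical sketch: timing local text generation on a client device.
# Not MLPerf Client's benchmark; model ID and prompt are placeholders.
import time

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; requires Hugging Face access approval
PROMPT = "Summarize the benefits of on-device AI in two sentences."

# Load the tokenizer and model onto the local machine (CPU by default).
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

inputs = tokenizer(PROMPT, return_tensors="pt")

# Time a single generation pass and derive a simple throughput figure.
start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=128)
elapsed = time.perf_counter() - start

new_tokens = outputs.shape[-1] - inputs["input_ids"].shape[-1]
print(f"Generated {new_tokens} tokens in {elapsed:.2f}s "
      f"({new_tokens / elapsed:.1f} tokens/sec)")
```

MLPerf Client will define its own workloads and reporting rules, but throughput and latency figures along these lines are the kind of numbers buyers could eventually compare across laptops, desktops, and workstations.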

MLPerf Client Working Group Members

The MLPerf Client working group comprises several industry giants, including AMD, Arm, Asus, Dell, Intel, Lenovo, Microsoft, Nvidia, and Qualcomm. However, Apple is notably absent from the group and is also not a member of MLCommons. This absence is not entirely surprising, given that a Microsoft engineering director co-chairs the MLPerf Client group. Consequently, any AI benchmarks developed by MLPerf Client will not be tested on Apple devices in the near future.

Despite this, it will be interesting to see the benchmarks and tools that emerge from MLPerf Client. Given the increasing prevalence of AI, these metrics could play a significant role in future device-buying decisions. Ideally, the MLPerf Client benchmarks will resemble the many PC build comparison tools available online, providing an indication of the AI performance one can expect from a specific machine. With the participation of Qualcomm and Arm, both heavily invested in the mobile device ecosystem, these benchmarks may even expand to cover phones and tablets in the future.