GPU Security Flaw in Apple, AMD and Qualcomm Chips Exposes Millions of Devices

Researchers at Trail of Bits have disclosed a security vulnerability affecting certain iPhones and MacBooks.

The flaw extends to millions of devices, including Apple's iPhones and MacBooks as well as hardware built on AMD or Qualcomm chips, painting a broad and potentially alarming picture.

Termed “LeftoverLocals,” the flaw resides in the GPU's local memory, which is increasingly used to hold data for AI workloads that run on the graphics unit rather than the CPU cores of the System on a Chip (SoC). What makes the vulnerability particularly unsettling is that this local memory is not cleared between workloads, so an attacker's code can read leftover data written by another program with relative ease.

Apple has acknowledged the issue and has already issued patches for devices featuring the M3 and A17 Pro chips.

However, older devices, including the iPhone 12 Pro, some iPads, and the M2 MacBook Air, remain exposed to the exploit.

The exploit is not limited to Apple: it also affects GPUs from AMD, Qualcomm, and Imagination. Notably, Nvidia, Arm, and Intel remain unaffected by this particular vulnerability.

The evolving complexity of graphics units, coupled with their growing workload, has inadvertently expanded access to sensitive data. The researchers demonstrated that attackers can read uninitialized local memory with fewer than 10 lines of code, recovering on the order of 5 MB of leftover data per GPU invocation, which can add up to roughly 180 MB over the course of a single LLM query.
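To illustrate the class of attack described (this is a sketch, not the researchers' actual proof of concept; `LM_SIZE` and the kernel name are hypothetical), an OpenCL-style "listener" kernel can declare a local-memory buffer, never write to it, and simply copy whatever leftover values it finds out to memory the attacker can read back on the host:

```c
// Illustrative sketch only: LM_SIZE and all names are assumptions.
// The kernel declares local (on-chip) memory, never initializes it,
// and dumps its contents -- potentially another program's leftover
// data -- into a global buffer readable from the host.
#define LM_SIZE 4096

__kernel void listener(__global int *dump) {
    __local int lm[LM_SIZE];  // uninitialized local memory
    for (int i = get_local_id(0); i < LM_SIZE; i += get_local_size(0)) {
        dump[get_group_id(0) * LM_SIZE + i] = lm[i];  // copy residue out
    }
}
```

On an unpatched GPU, the buffer returned to the host may contain whatever the previous kernel left behind in local memory, rather than zeros.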

The implications are far-reaching: the exploit lets attackers read data that other programs have left in GPU memory, including the outputs of Large Language Models (LLMs) of the kind that power generative AI services such as ChatGPT.

All companies whose chips are affected by the flaw have acknowledged the issue and pledged to take corrective action. We strongly recommend installing updates as soon as they become available.


