Intel Enables Arc iGPUs to Use Up to 93% Memory

Wccftech reports that Intel has released a HotFix driver for Arc Pro Graphics that lets users allocate up to 93% of system RAM to built-in Arc Pro iGPUs. According to the driver notes cited in the article, the change applies to built-in Arc Pro GPUs in select Intel Core Ultra processors and to several discrete Arc Pro cards, including the Arc Pro B390 and Arc Pro B370, and is compatible with the Arc Alchemist and Battlemage families. Wccftech gives an example in which a 64 GB host can allocate 59.5 GB to the GPU, and contrasts Intel's ceiling with AMD Ryzen AI, which it reports permits up to 87% host allocation.
What happened
Per the driver notes cited by Wccftech, Intel's HotFix workstation driver for Arc Pro Graphics raises the maximum share of host system memory available to built-in Arc Pro iGPUs and select discrete Arc Pro cards to 93%. The coverage names the supported hardware: built-in Arc Pro GPUs in select Intel Core Ultra processors and discrete models such as the Arc Pro B390 and Arc Pro B370, with compatibility across the Alchemist and Battlemage Arc families. Wccftech reports the driver supports multiple Windows hosts and gives the example calculation that a 64 GB system can allocate 59.5 GB (64 × 0.93) to the GPU. The article also compares the figure to AMD Ryzen AI, which it reports allows up to 87% memory allocation.
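The reported allocation arithmetic can be sketched in a few lines. The 93% (Intel) and 87% (AMD Ryzen AI) ceilings are the figures from the article; the helper function name is illustrative, not a real driver API.

```python
def max_gpu_allocation_gb(host_ram_gb: float, ceiling: float) -> float:
    """Return the maximum host RAM (GB) allocatable to the iGPU,
    given a vendor-reported allocation ceiling (fraction of RAM)."""
    return round(host_ram_gb * ceiling, 1)

# Wccftech's example: a 64 GB host under Intel's 93% ceiling.
print(max_gpu_allocation_gb(64, 0.93))  # -> 59.5
# The same host under the 87% ceiling reported for AMD Ryzen AI.
print(max_gpu_allocation_gb(64, 0.87))  # -> 55.7
```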
Editorial analysis - technical context
Raising the host-memory ceiling for integrated GPUs increases the working set a GPU can address without relying on device-local memory. For practitioners, allocating a larger fraction of DRAM to an iGPU can let larger language models or larger batch sizes fit entirely within the GPU address space on machines without discrete GPUs. As an industry pattern, comparable vendor updates that raise host-allocation limits typically trade increased system-memory pressure for greater GPU-accelerated workload capacity on single-socket workstations.
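A back-of-the-envelope check makes the "larger models fit" point concrete. The overhead factor and model sizes below are illustrative assumptions for the sketch, not vendor or driver figures.

```python
def fits_in_igpu(params_billions: float, bytes_per_param: float,
                 host_ram_gb: float, ceiling: float = 0.93,
                 overhead: float = 1.2) -> bool:
    """True if model weights, plus a rough activation/KV-cache overhead
    factor, fit within the iGPU-allocatable share of host RAM."""
    weights_gb = params_billions * bytes_per_param  # params in billions ~ GB
    budget_gb = host_ram_gb * ceiling
    return weights_gb * overhead <= budget_gb

# A 70B-parameter model at 4-bit quantization (~0.5 bytes/param) on a
# 64 GB host: 70 * 0.5 * 1.2 = 42.0 GB vs a 59.52 GB budget.
print(fits_in_igpu(70, 0.5, 64))   # -> True
# The same model at 16-bit (2 bytes/param) does not fit: 168 GB needed.
print(fits_in_igpu(70, 2.0, 64))   # -> False
```

The sketch only checks capacity; as the analysis notes, compute throughput and memory bandwidth are unchanged by the driver update.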
Context and significance
For ML engineers who prototype or run inference on local hardware, the change is a practical enabler rather than a paradigm shift. Larger host-memory allocations make integrated GPUs more useful for lightweight or midsized LLM inference on laptops and compact workstations, but they do not change raw compute or memory bandwidth. In similar shifts, ISV certification and driver maturity have determined real-world usability for production ML workloads.
What to watch
- Adoption in vendor toolchains and frameworks, including whether frameworks can reliably place model tensors into the expanded iGPU addressable region.
- Any OS-level behaviors under high host-memory reservation, such as paging, thermal, or performance regressions reported by users.
- Progress on ISV certifications for Arc Pro GPUs cited by the driver notes, and comparative benchmarks versus discrete GPUs and AMD Ryzen AI systems.
Scoring Rationale
This driver change materially increases the usable GPU address space on Intel Arc iGPUs, making local LLM inference more feasible on some systems. The update is useful for practitioners prototyping or running midsized models locally but does not alter compute or bandwidth constraints, so its overall industry impact is moderate.