LG unveils EXAONE 4.5 multimodal AI model

LG AI Research released EXAONE 4.5, a 33-billion-parameter multimodal vision-language model that processes both text and images. The model uses a Hybrid Attention architecture with a proprietary 1.29B-parameter vision encoder, and supports a 262,144-token context window and six languages including Korean, English, and Japanese. On benchmarks, EXAONE 4.5 scores 77.3 on STEM average (vs. GPT-5-mini at 73.5), 92.9 on AIME 2025, and 78.7 on MMMU. The open-weight model is available on Hugging Face in FP16, FP8, and GGUF formats under a non-commercial license, and runs on a single H200 GPU at full context length.
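The single-H200 claim is easy to sanity-check with back-of-envelope arithmetic. The sketch below (an illustration, not an official sizing guide) estimates weight memory for a dense 33B-parameter model at the FP16 and FP8 precisions mentioned above, against the H200's advertised 141 GB of HBM3e; it ignores KV-cache and activation memory, which grow with the 262,144-token context and depend on architecture details not stated here.

```python
def weight_gib(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a dense model (weights only)."""
    return params_billions * 1e9 * bytes_per_param / 2**30

# H200 capacity: marketed as 141 GB (decimal), ~131.3 GiB in binary units.
H200_GIB = 141 * 1000**3 / 2**30

fp16_gib = weight_gib(33, 2.0)  # 2 bytes/param at FP16
fp8_gib = weight_gib(33, 1.0)   # 1 byte/param at FP8

print(f"FP16 weights: {fp16_gib:.1f} GiB")  # ~61.5 GiB
print(f"FP8 weights:  {fp8_gib:.1f} GiB")   # ~30.7 GiB
print(f"Headroom at FP16: {H200_GIB - fp16_gib:.1f} GiB")
```

At FP16 the weights alone leave roughly 70 GiB of headroom, which is what makes a full-context deployment on one card plausible; the remaining budget goes to the KV cache.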
Scoring Rationale
A 33B open-weight multimodal model matching or beating GPT-5-mini on multiple benchmarks, available on Hugging Face with broad framework support, is directly relevant to practitioners evaluating non-Western AI alternatives.