LG EXAONE 4.5 and the Real State of Korean Local LLMs
A practical overview of EXAONE 4.5, HyperCLOVA X SEED, Kanana 2, A.X 4.0, Mi:dm 2.0, and Solar Pro 3, focused on licensing, deployment, and real Korean-language capability.
Quick take
23 min read
- Best for: readers comparing cost, capability, and real limits before choosing a tool
- What to check: EXAONE 4.5 · Korean local LLM · HyperCLOVA X
- Watch out: pricing and features can change, so confirm with the official sources as well.
3 key points
- EXAONE 4.5 is a 33B open-weight VLM released on April 9, 2026, but its NC license means it cannot be dropped straight into commercial services.
- As of May 2, 2026, Korea’s local LLM market divides into API-first flagships, open-weight/self-hosted models, and giant sovereign showcase models.
- The important question is no longer “Who is number one in Korean?” but “Can we actually deploy it legally, for our purposes, on affordable hardware?”
Table of contents
- What exactly is EXAONE 4.5?
- Can EXAONE 4.5 truly be called a local LLM?
- Who is building Korean local LLMs, and how far have they come?
- Which Korean models can actually be used commercially?
- How should you verify that a model is good at Korean?
- How do Korean models compare to global models?
- Why should individual developers and companies make different choices?
- How will the local LLM landscape in Korea change in the future?
- FAQ: Frequently Asked Questions about EXAONE 4.5 and Korean Local LLMs
- Conclusion: How should we read the local LLM market in Korea in May 2026?
Looking at Korea’s local LLM scene as of May 2, 2026, the atmosphere has clearly changed. Beyond “Korea also has its own model,” the number of models that can actually be downloaded from Hugging Face and self-hosted has grown, and API-centric commercial models have rapidly evolved toward agents and multimodality. However, for some models, including EXAONE 4.5, there is a large gap between performance buzz and actual deployment possibilities. This article focuses on EXAONE 4.5, but also covers HyperCLOVA X SEED, Kanana 2, A.X 4.0, Mi:dm 2.0, and Solar Pro 3.
What exactly is EXAONE 4.5?
In short, EXAONE 4.5 is more accurately read as a “33B-class Korean open-weight VLM targeting document understanding and multimodal industrial reasoning” than as the “No. 1 Korean conversational LLM.” LG AI Research released EXAONE 4.5 on April 9, 2026, and the technical report and model card describe it as a 33B model combining 31.7B language parameters with a 1.29B-parameter vision encoder (Source: EXAONE 4.5 GitHub, EXAONE 4.5 Technical Report, EXAONE 4.5 HF model card).
Why was 4.5 so significant?
The EXAONE series was originally known as an LG model strong in Korean, but what changed in 4.5 is the shift from a text focus to visual understanding. LG introduced EXAONE 4.5 as “the first open weight vision language model” and explained that it strengthened industrial document understanding through document-centric data curation and a 256K context expansion (Source: EXAONE 4.5 Technical Report, LG Press Release).
What matters about this framing is that “input mixing industrial documents, tables, charts, and visual information” is placed front and center, rather than “general Korean conversation.” Even within the same 33B class, EXAONE 4.5 reads better as a document-intelligence model than as a ChatGPT alternative.
Just looking at the specs, how strong is it?
Based on the official model card, EXAONE 4.5 supports a 262,144-token context, and a single H200 or four A100 40GB GPUs are recommended for actual serving. Supported frameworks include TensorRT-LLM, vLLM, SGLang, and llama.cpp (Source: EXAONE 4.5 HF model card).
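As a back-of-the-envelope sketch of why this hardware recommendation is data-center class, you can estimate the FP16/BF16 weight memory alone (this ignores the KV cache and activations, so real requirements are higher, which is why the card recommends four A100s rather than two):

```python
import math

# Rough FP16/BF16 weight-memory estimate for a 33B-parameter model.
# 1B parameters at 2 bytes/param = 2 GB of weights.
PARAMS_B = 33          # total parameters, in billions
BYTES_PER_PARAM = 2    # FP16/BF16

weights_gb = PARAMS_B * BYTES_PER_PARAM
print(f"weights alone: ~{weights_gb} GB")

# The model card recommends 1x H200 or 4x A100 40GB; the extra A100s
# leave headroom for the KV cache and activations this estimate ignores.
for name, vram_gb in {"A100 40GB": 40, "H200 141GB": 141}.items():
    needed = math.ceil(weights_gb / vram_gb)
    print(f"{name}: at least {needed} GPU(s) for weights only")
```

Even this optimistic estimate puts the model far beyond a consumer GPU's VRAM, which is the crux of the "downloadable but not laptop-local" distinction discussed below.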
The official Korean-language comparison figures call for a calmer interpretation. The LG model card lists multimodal Korean scores of KMMMU 42.7, K-Viscuit 80.1, and KRETA 91.9, and text-only scores of KMMLU-Pro 67.6 and KoBALT 52.1. In the same table, GPT-5 mini records KMMLU-Pro 72.5 and KoBALT 63.6, and Qwen3.5 27B records KMMLU-Pro 73.0 and KoBALT 54.9 (Source: EXAONE 4.5 HF model card).
| Benchmark | EXAONE 4.5 | GPT-5 mini | K-EXAONE 236B | Qwen3.5 27B |
|---|---|---|---|---|
| KMMMU | 42.7 | 42.6 | - | 51.7 |
| K-Viscuit | 80.1 | 78.5 | - | 84.0 |
| KRETA | 91.9 | 94.8 | - | 96.5 |
| KMMLU-Pro | 67.6 | 72.5 | 67.3 | 73.0 |
| KoBALT | 52.1 | 63.6 | 61.8 | 54.9 |
| OCRBench v2 | 63.2 | 55.8 | - | 67.3 |
| OmniDocBench v1.5 | 81.2 | 77.0 | - | 88.9 |
This table makes the picture clear. EXAONE 4.5 is not overwhelmingly first across all Korean benchmarks. Instead, it is quite competitive on documents, OCR, diagram interpretation, and some Korean multimodal contexts. It is therefore safer to call it “strong as a document VLM” than “the best at Korean.”
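To make that point concrete, here is a small sketch that picks the per-benchmark leader from the model-card numbers quoted in the table above (only the three models with complete rows; "-" entries omitted):

```python
# Per-benchmark scores from the model-card table quoted above (higher is better).
scores = {
    "KMMMU":             {"EXAONE 4.5": 42.7, "GPT-5 mini": 42.6, "Qwen3.5 27B": 51.7},
    "K-Viscuit":         {"EXAONE 4.5": 80.1, "GPT-5 mini": 78.5, "Qwen3.5 27B": 84.0},
    "KRETA":             {"EXAONE 4.5": 91.9, "GPT-5 mini": 94.8, "Qwen3.5 27B": 96.5},
    "KMMLU-Pro":         {"EXAONE 4.5": 67.6, "GPT-5 mini": 72.5, "Qwen3.5 27B": 73.0},
    "KoBALT":            {"EXAONE 4.5": 52.1, "GPT-5 mini": 63.6, "Qwen3.5 27B": 54.9},
    "OCRBench v2":       {"EXAONE 4.5": 63.2, "GPT-5 mini": 55.8, "Qwen3.5 27B": 67.3},
    "OmniDocBench v1.5": {"EXAONE 4.5": 81.2, "GPT-5 mini": 77.0, "Qwen3.5 27B": 88.9},
}

# Which model tops each benchmark?
leaders = {bench: max(models, key=models.get) for bench, models in scores.items()}
for bench, leader in leaders.items():
    print(f"{bench}: {leader}")

# Where does EXAONE 4.5 beat GPT-5 mini? (the document/OCR story)
wins = [b for b, m in scores.items() if m["EXAONE 4.5"] > m["GPT-5 mini"]]
print("EXAONE 4.5 > GPT-5 mini on:", wins)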
Why have licenses and limits become more important?
The biggest real-world limitation of EXAONE 4.5 is its license. Both GitHub and Hugging Face specify the EXAONE AI Model License Agreement 1.2 - NC, so commercial use is prohibited (Source: EXAONE 4.5 GitHub, EXAONE 4.5 HF model card).
You should look at licensing before performance. EXAONE 4.5 is open weight, so downloading it for research or internal experimentation is possible, but it is not a model you can put into a commercial service as is.
Another limitation is one LG states itself. The model card notes that EXAONE 4.5 relies heavily on its training data statistics and can produce inaccurate outputs, bias, and inappropriate responses. Other multimodal models carry similar caveats, but a model that markets “industrial document understanding” as its strength can be riskier when it misreads a table or draws a wrong inference from a document (Source: EXAONE 4.5 HF Model Card).
Can EXAONE 4.5 truly be called a local LLM?
The short answer is “technically yes, but from a consumer perspective, only halfway.” You can download it from Hugging Face, and the vLLM, TensorRT-LLM, and llama.cpp paths are open, so self-hosting is possible. However, given that LG’s recommended serving environment is one H200 or four A100 40GB GPUs, it is far from a “local LLM” in the sense of a typical MacBook or personal workstation (Source: EXAONE 4.5 HF model card).
“Downloadable” and “Private Local” are different
This is the distinction Korean LLM articles in 2026 most often miss. Being published on Hugging Face does not immediately mean a model “runs well at home.” Models such as EXAONE 4.5, HyperCLOVA X SEED Think 32B, and Kanana-2 30B-A3B are downloadable, but their recommended hardware sits well above a typical personal machine.
Conversely, models such as Mi:dm 2.0 Mini 2.3B or Kanana 1.5 2.1B are much more realistic for genuine local experimentation. So even in articles, “runs locally” should always come with a hardware qualifier.
“Local” for research and “local” for products are also different.
EXAONE 4.5 is quite interesting for research teams or large corporate AI organizations running document-understanding experiments. However, the NC license blocks product use, especially commercial SaaS or customer-facing corporate services. In other words, “can be self-hosted” and “can be used for business right away” must be kept separate.
Who should seriously watch EXAONE 4.5?
The first is the document-intelligence team. For teams handling workloads with charts, tables, reports, drawings, and OCR, the experimental value of a 33B-class open-weight VLM is significant. The second is research organizations where the Korean context matters. Third, however, for an ordinary startup or individual developer asking “which Korean local LLM should we pick,” EXAONE 4.5 is not the first choice but rather one axis of comparison.
EXAONE 4.5’s realistic position
EXAONE 4.5 is closer to a “multimodal open weight for research and industry” than a “Korean local chatbot for the public.” If you miss this difference, the entire article can easily read like an advertisement.
Who is building Korean local LLMs, and how far have they come?
As of May 2, 2026, it is convenient to read Korea’s local LLM ecosystem through its major players: LG EXAONE, NAVER HyperCLOVA X/SEED, Kakao Kanana, SKT A.X, KT Mi:dm, and Upstage Solar Pro 3. They span API-centric flagships and open-weight releases in different mixes.
NAVER: HyperCLOVA X for APIs, SEED for self-hosting
NAVER provides the HyperCLOVA X lineup through CLOVA Studio: HCX-007 supports hybrid reasoning, 128K context, function calling, and structured outputs; HCX-005 is multimodal; and HCX-DASH-002 is the lightweight line (Source: CLOVA Studio models).
On the other hand, the local/self-hosted axis is HyperCLOVA X SEED. NAVER’s official technology page introduces SEED as “an open source AI model that is freely accessible to companies and developers and can be used commercially” (Source: NAVER HyperCLOVA X Page). However, since the actual Hugging Face releases ship under a separate custom license, this does not mean legal review can be skipped.
The latest SEED releases include 0.5B and 1.5B text-instruct models, Vision 3B, Think 14B, Think 32B, and Omni 8B. Think 32B is a Korean-centric reasoning VLM with 128K context, and Omni 8B is a 32K omni model handling text, images, and audio (Source: HyperCLOVA X SEED).
Kakao: Commercial launch in Kanana 1.5, agent direction becomes clearer in Kanana-2
On May 23, 2025, Kakao released the Kanana 1.5 2.1B and 8B series under Apache 2.0, opening its open models to commercial use. The official press release emphasizes Korean-English bidirectional performance, long-input handling, strengthened function calling, and commercial usability (Source: Kakao Kanana 1.5 release).
On January 20, 2026, the Kanana-2 30B-A3B series followed. Kakao explains that the model is optimized to run on A100-class general-purpose GPUs and is an MoE model for agentic AI with strengthened instruction following and tool calling. Official materials also state that of the 32B total parameters, only 3B are active during inference (Source: Kakao Kanana-2 update, Kanana-2 collection).
Another point is that Kakao is expanding from text-only to multimodal. Kakao introduced Kanana-o as Korea’s first integrated multimodal language model in May 2025, but this axis is more of a product/API track than an open-weight one (Source: Kanana-o official announcement).
SKT·KT·Upstage: often more practical from an enterprise-adoption perspective
SKT released the A.X 4.0 72B standard model and 7B lightweight model as open-source models on July 3, 2025. The official announcement cites continued training based on Qwen2.5, support for local and internal-network deployment, and scores of KMMLU 78.3 and CLIcK 83.5. These performance figures should be read with the caveat that they come from SKT’s own announcement (Source: SKT A.X 4.0 release).
For KT, the key public-weight releases are Mi:dm 2.0 Base 11.5B and Mi:dm 2.0 Mini 2.3B. The Hugging Face model card specifies the MIT license, a Korea-centric AI positioning, and that KT user data is not included. Although Mi:dm K 2.5 Pro appears in KT’s K intelligence product lineup, the publicly downloadable models as of May 2, 2026 remain 2.0 Base/Mini (Source: Mi:dm 2.0 Base HF model card, K Model page).
Upstage should be viewed on two axes. The latest flagship is Solar Pro 3: a 102B MoE with 12B active parameters, API-focused, priced the same as Solar Pro 2 as of March 2026, with improved agent, reasoning, and Korean performance, according to the official blog. The pricing page lists $0.15 per 1M input tokens and $0.60 per 1M output tokens (Source: Solar Pro 3 blog, Upstage Pricing). If you want an Upstage model you can download directly, Solar Open 100B remains the reference point (Source: Upstage HF org models, Solar Open 100B).
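Those published rates make Solar Pro 3 costs easy to estimate up front. A minimal sketch (list prices as quoted above; the traffic volumes in the example are purely illustrative, and actual billing should be confirmed against the Upstage pricing page):

```python
# Cost sketch for Solar Pro 3 at the published list rates:
# $0.15 per 1M input tokens, $0.60 per 1M output tokens.
def solar_pro3_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost for a batch of requests at list price."""
    return input_tokens / 1e6 * 0.15 + output_tokens / 1e6 * 0.60

# Example: a month of hypothetical RAG traffic,
# 200M input tokens and 20M output tokens.
monthly = solar_pro3_cost(200_000_000, 20_000_000)
print(f"${monthly:.2f}")  # $42.00
```

Because retrieval-heavy workloads are input-dominated and the input rate is a quarter of the output rate, monthly bills at this tier stay modest even at fairly large volumes.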
| Model | Status as of 2026-05-02 | Distribution form | Commercial use | One-line judgment |
|---|---|---|---|---|
| EXAONE 4.5 | Latest public 33B VLM | Self-host available, high specifications | Not possible (NC) | Research/document type VLM |
| HyperCLOVA X | Current API lineup | CLOVA Studio/API | possible | Korean/Enterprise API |
| HyperCLOVA X SEED | Open Weight Series Expansion | Self-host available | Possible, but custom license | Naver’s local axis |
| Kanana 2 | Latest agent axis opened | Self-host available | Open to the public, detailed license confirmation required | A100 level practical MoE |
| A.X 4.0 | 72B/7B released | On-premise orientation | possible | Corporate security Korean model |
| Mi:dm 2.0 | 11.5B/2.3B released | Self-host available | Yes (MIT) | The most straightforward commercial open-weight option |
| Solar Pro 3 | Latest official flagship | API-centric | possible | Korean high-performance API model |
Which Korean models can actually be used commercially?
In practice, this question matters most. Before looking at model performance, look at the license, API vs self-hosted, deployment hardware, and the difficulty of legal review.
EXAONE 4.5 separates performance buzz from adoption potential
EXAONE 4.5 is the most dramatic case here. Despite the performance buzz, the NC license rules it out as a model for commercial services. It remains meaningful for in-house research, evaluation, and prototypes, but “let’s build a product on this” is premature.
There is a big difference in experience between MIT, Apache, and custom licenses.
Mi:dm 2.0 Base is MIT, so it is the simplest to interpret. Kanana 1.5 2.1B/8B was released under Apache 2.0 according to Kakao’s press release. These models carry low review costs for companies (Source: Mi:dm 2.0 Base HF Model Card, Kakao Kanana 1.5 release).
On the other hand, HyperCLOVA X SEED ships under a custom license. Even where commercial use is permitted, legal review is necessary because prohibited-use clauses, usage policies, and distribution-notice conditions may apply. It is better described as “open weight, commercially usable with conditions” than “free to use.”
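The three license tiers discussed here can be expressed as a minimal triage sketch. The license strings and buckets below come from this article's own summary; this is a starting point for legal review, not a substitute for reading each license text:

```python
# Minimal license triage based on this article's summary.
# NOT legal advice: custom licenses always need a human review.
LICENSES = {
    "EXAONE 4.5": "EXAONE AI Model License Agreement 1.2 - NC",
    "Mi:dm 2.0 Base": "MIT",
    "Kanana 1.5": "Apache-2.0",
    "HyperCLOVA X SEED": "custom (commercial allowed, conditions apply)",
}

def commercial_risk(license_name: str) -> str:
    """Rough bucket: 'blocked', 'low', or 'needs legal review'."""
    if "NC" in license_name:          # non-commercial clause
        return "blocked"
    if license_name in ("MIT", "Apache-2.0"):  # permissive standard licenses
        return "low"
    return "needs legal review"        # custom / conditional terms

for model, lic in LICENSES.items():
    print(f"{model}: {commercial_risk(lic)}")
```

The point of the sketch is the asymmetry: standard permissive licenses collapse review cost to near zero, while every custom license forces a per-clause read.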
The API model has visible costs, while the open weight has hidden infrastructure costs.
For API-centric models like Solar Pro 3, the token price is immediately visible; Upstage publishes Solar Pro 3’s rates. For self-hosted series such as A.X 4.0, Kanana-2, or Mi:dm, the costs are hidden in GPUs, operations staff, and monitoring, so the total cost of ownership is harder to see up front.
| Distribution method | Representative models | Merits | Things to note |
|---|---|---|---|
| open weight for research | EXAONE 4.5 | State-of-the-art multimodal document inference experiments | NC License |
| Commercially available open model | Mi:dm 2.0, Kanana 1.5 | Self-hosting/tuning flexibility | Operating costs and quality control |
| Enterprise On-Premises | A.X 4.0, Kanana-2, SEED Think 32B | Security and Data Control | A100 level or higher required |
| API Flagship | Solar Pro 3, HyperCLOVA X | Speed of adoption and simplicity of operation | Vendor lock-in and continuous billing |
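A rough way to make the hidden self-hosting cost visible is to convert GPU time into a per-token price and compare it with an API rate. The API rate below is Solar Pro 3's published output price; the GPU rental rate and throughput are assumptions for illustration only, and real numbers vary widely with utilization and batching:

```python
# Very rough API-vs-self-host cost comparison.
API_COST_PER_1M_OUTPUT = 0.60   # USD, from the Upstage pricing page
GPU_COST_PER_HOUR = 2.0         # ASSUMED A100 rental rate (illustrative)
TOKENS_PER_SECOND = 50          # ASSUMED single-stream throughput (illustrative)

tokens_per_hour = TOKENS_PER_SECOND * 3600
self_host_cost_per_1m = GPU_COST_PER_HOUR / tokens_per_hour * 1e6

print(f"API:       ${API_COST_PER_1M_OUTPUT:.2f} per 1M output tokens")
print(f"self-host: ${self_host_cost_per_1m:.2f} per 1M tokens at this utilization")
```

Under these (deliberately pessimistic, low-utilization) assumptions the self-hosted token is over an order of magnitude more expensive, which is why self-hosting usually only pays off at high, sustained utilization or when data cannot leave the premises.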
How should you verify that a model is good at Korean?
The most common error in Korean-model articles is mixing different benchmarks into a one-line ranking. “Good at Korean” should be split into at least three aspects.
KMMLU looks at Korean test-type expertise and cultural context
KMMLU is not a translation of an existing English benchmark but a collection of 35,030 Korean exam questions across 45 subjects. It therefore measures Korean-language expertise, cultural context, and exam-style reasoning rather than simple translation ability. In the original paper, the best public model scored only 50.54%, below the human average of 62.6% (Source: KMMLU paper).
This means two things. First, a high KMMLU score signals strong Korean-language expertise. Second, that score alone says nothing about conversational UX.
KoBALT probes deeper linguistic understanding of real Korean
KoBALT evaluates 24 linguistic phenomena across morphology, phonology, pragmatics, syntax, and semantics with 700 questions. In other words, it looks more deeply at how naturally a model handles Korean sentences. This benchmark exposes the gap between Korean-specialized and global models much more sharply (Source: KoBALT paper).
EXAONE 4.5’s KoBALT 52.1 is not a bad score, but against GPT-5 mini’s 63.6 or K-EXAONE’s 61.8 it is not enough to claim “overall #1 in Korean.” It is therefore more natural to view EXAONE 4.5 as the document-focused multimodal axis.
KMMMU and CLIcK better capture the “Korean context”
KMMMU is useful for evaluating the Korean multimodal context, while CLIcK more directly targets Korean cultural and linguistic understanding. For example, SKT announced a CLIcK score of 83.5 for A.X 4.0. Since this figure comes from SKT’s own announcement, it should not be given the same weight as independently reproduced results (Source: SKT A.X 4.0 release).
KMMLU is closer to specialized knowledge, KoBALT to linguistic depth, and KMMMU and CLIcK to the Korean context and multimodality. Combining different tests into a single ranking table almost always leads to exaggeration.
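A tiny numerical sketch, using the model-card scores quoted earlier, shows why a blended ranking misleads: the "winner" changes depending on which benchmark you look at, and a naive average produces a third answer that hides both stories:

```python
# Scores from the model-card table quoted earlier in this article.
kmmlu_pro = {"EXAONE 4.5": 67.6, "GPT-5 mini": 72.5, "K-EXAONE 236B": 67.3, "Qwen3.5 27B": 73.0}
kobalt    = {"EXAONE 4.5": 52.1, "GPT-5 mini": 63.6, "K-EXAONE 236B": 61.8, "Qwen3.5 27B": 54.9}

# Each axis has a different leader...
best_knowledge = max(kmmlu_pro, key=kmmlu_pro.get)   # exam-style knowledge
best_linguistic = max(kobalt, key=kobalt.get)        # linguistic depth
print("KMMLU-Pro leader:", best_knowledge)
print("KoBALT leader:   ", best_linguistic)

# ...and a naive 50/50 average collapses that nuance into one name.
avg = {m: (kmmlu_pro[m] + kobalt[m]) / 2 for m in kmmlu_pro}
print("naive-average leader:", max(avg, key=avg.get))
```

Here KMMLU-Pro crowns Qwen3.5 27B, KoBALT crowns GPT-5 mini, and the blended number picks one of them while erasing the trade-off; with different weights the blended "ranking" would reshuffle again, which is exactly the exaggeration risk described above.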
How do Korean models compare to global models?
The most honest answer is this: in the race for the best general-purpose model, global frontier models and large Chinese models still lead in many areas. But Korean models are carving out separate battlefields: Korean context, enterprise security, document-heavy workloads, and on-premises deployment.
EXAONE 4.5 is closer to “role-specific competition” than the world’s best
Based on LG’s official data alone, EXAONE 4.5 is ahead of GPT-5 mini in some document and OCR items, but there are many items where Qwen3.5 27B is higher. In other words, rather than saying “EXAONE 4.5 is the best in the world,” it is more accurate to say that it is “a specialized type that is quite strong among the open weight VLMs in the 33B weight class” (Source: EXAONE 4.5 HF model card).
Realistically, Korean models complement the global frontier rather than replace it.
This point also connects to the trends covered in the GPT-5.5 and Claude Opus 4.7 summaries. Global models remain stronger in general-purpose reasoning, broad coding, up-to-date English-centric knowledge, and ecosystem coverage. Korean models, in turn, have a growing presence under conditions such as legal, public-sector, and financial documents, Korean social common sense, high security requirements, and domestic data sovereignty.
Chinese open models are part of the comparison too.
In the Korean market, the American frontier models are not the only competition. As the Qwen 3.6, GLM 5.1, and Kimi K2.6 reviews show, Chinese models are strong on price, open weights, coding, long context, and function calling. For a Korean local LLM to be truly marketable, it must prove a clear advantage in Korean context and security, not just “because it is domestic.”
Why should individual developers and companies make different choices?
From here, the question is “who should use what?” Even for the same Korean model, an individual developer and a large enterprise security team will make completely different choices.
If you are an individual developer, you need to narrow down the license and hardware first.
There is a high hardware barrier for individual developers to directly use models such as EXAONE 4.5, SEED Think 32B, and Kanana-2 30B-A3B as their main models. So, realistically, it is better to compare the Mi:dm 2.0 Mini, Kanana 1.5 2.1B/8B, or the Gemma/Qwen series together, even if they are not necessarily Korean models. The context of the practical usability of Open Weight Korean can also be referenced in Gemma 4 Review.
For companies, on-premise and legal risks are the top priority.
If it is difficult to export data externally, such as automating internal documents in finance, public, manufacturing, or large corporations, on-premise candidates such as A.X 4.0, Kanana-2, HyperCLOVA X SEED, or Mi:dm may matter more. At that point, license clarity, internal GPU availability, Korean document quality, and tool-calling/agent fit become more important than the absolute performance of the model.
Solar Pro 3 and HyperCLOVA X are fast for API-centric organizations
If you need to integrate quickly and do not have an internal GPU operations team, API models such as Solar Pro 3 or HyperCLOVA X are much more realistic. In particular, Solar Pro 3 improves reasoning and agent performance at the same price point, and HyperCLOVA X covers hybrid reasoning, multimodal, and lightweight tiers through CLOVA Studio.
| Situation | First models to look at | Reason | Things to note |
|---|---|---|---|
| Document, chart, OCR research | EXAONE 4.5 | 33B Open Weight VLM | NC License |
| Korean commercial self-host | Mi:dm 2.0 Base, Kanana 1.5 | The commercial license axis is relatively clear. | Operating system is more important than absolute performance |
| Large enterprise on-premises | A.X 4.0, Kanana-2, SEED Think 32B | Security·Korean context·In-house distribution | A100 level or higher required |
| Fast API adoption | Solar Pro 3, HyperCLOVA X | Operational simplicity | Long-term costs and dependencies |
| personal local experiment | Mi:dm 2.0 Mini, Kanana 1.5 2.1B | small weight class | Do not expect performance like a large model. |
How will the local LLM landscape in Korea change in the future?
Three trends seem almost certain: MoE, multimodal, and sovereign AI moving into real production.
First, MoE is likely to grow at the expense of dense models.
Looking at Kanana-2 30B-A3B, Solar Pro 3 102B/12B, K-EXAONE 236B/23B, and A.X K1 519B/33B, the direction is clear. Raising total parameters while keeping active parameters small to cut serving cost is becoming the standard in Korean corporate models (Source: Kakao Kanana-2 update, Solar Pro 3 blog, K-EXAONE Technical Report, A.X K1 HF model).
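The economics behind that pattern can be sketched in a few lines: total parameters set the memory bill (all experts must be held), while active parameters set the per-token compute bill. The figures are the total/active pairs cited above; the 2 bytes/param FP16 estimate is a simplification that ignores quantization and serving overhead:

```python
# MoE intuition: memory scales with TOTAL params, per-token compute with ACTIVE params.
moe_models = {
    # name: (total params in B, active params in B), from the releases cited above
    "Kanana-2 30B-A3B": (30, 3),
    "Solar Pro 3": (102, 12),
    "K-EXAONE": (236, 23),
    "A.X K1": (519, 33),
}

active_share = {name: active / total for name, (total, active) in moe_models.items()}
for name, share in active_share.items():
    total, active = moe_models[name]
    # ~2 bytes/param for FP16 weights (rough; ignores quantization and overhead)
    print(f"{name}: ~{total * 2} GB of weights to hold, "
          f"but only {share:.0%} of parameters active per token")
```

Every model in the list activates roughly 6 to 12 percent of its parameters per token, which is how a 102B or 236B flagship can be served with the per-token compute of a much smaller dense model.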
Second, multimodal is no longer an option, but the default route.
EXAONE 4.5 is a document-type VLM, HyperCLOVA X SEED now spans Vision and Omni variants, and Kakao has introduced Kanana-o. It is unlikely that Korean models will remain text-only for long.
Third, “does it understand Korean business?” becomes more important than “does it speak Korean well?”
Going forward, the domestic market will most likely be won not by polished translated Korean, but by handling administrative documents, public-sector formats, financial and legal context, internal enterprise knowledge, and local security requirements. In other words, the Korean model’s weapon is industrial fit in the Korean context rather than general-purpose supremacy.
FAQ: Frequently Asked Questions about EXAONE 4.5 and Korean Local LLMs
EXAONE 4.5 has been released on Hugging Face. Can I use it directly in my company's service?
Among Korean local LLMs, which axis has the cleanest commercial use?
Is EXAONE 4.5 really good at Korean?
Among domestic models, which is the most realistic for individuals to experiment with?
Which Korean models are the most current for API use?
Can Korea’s local LLM catch up with the global model in the future?
Conclusion: How should we read the local LLM market in Korea in May 2026?
The conclusion is simple. EXAONE 4.5 is not a signal that Korean models have upended the world; it is a signal that a Korean company has entered the document-focused multimodal race in earnest with a 33B-class open-weight VLM. And looking at Korean local LLMs as a whole, the question that matters is no longer “who is the smartest” but “who understands the Korean business context better, and can be deployed under what license, on what hardware, and under what security conditions.”
Final judgment on this article
Korean local LLMs are not yet topping the general-purpose frontier overall, but they have reached a level that can no longer be dismissed in Korean-language documents, Korean context, on-premise security, and agent-style work environments.
The decision becomes easier if you look at EXAONE 4.5 when you need a research document VLM; Mi:dm 2.0, Kanana, A.X, or SEED when you need a commercial self-host; and Solar Pro 3 or HyperCLOVA X when you need quick API adoption.
Start by choosing whether to use it commercially or not.
Even a strong performer like EXAONE 4.5 drops out of product candidates because of its NC license.
First decide whether to use API or self-host.
The comparison begins when you have to choose whether to pay the token price or the GPU and operating costs.
Read Korean benchmarks by purpose
KMMLU, KoBALT, KMMMU, and CLIcK do not combine into a single ranking because they measure different abilities.
View Korean context and document workload separately
Actual suitability in Korean administrative, legal, financial, and document processing may be more important than the general-purpose highest score.
- EXAONE 4.5 GitHub
- EXAONE 4.5 HF model card
- EXAONE 4.5 Technical Report
- LG EXAONE 4.5 Press Release
- NAVER HyperCLOVA X
- CLOVA Studio models
- HyperCLOVA X SEED
- Kakao Kanana 1.5 release
- Kakao Kanana-2 update
- SKT A.X 4.0 release
- Mi:dm 2.0 Base HF model card
- Upstage Solar Pro 3
- Upstage Pricing
- KMMLU paper
- KoBALT paper