
Overview of Elice’s Korean Qwen2.5 Models

Elice

4/2/2025

We’re excited to introduce Elice’s Korean Qwen2.5 Instruct models, available in 32B and 72B variants.
These models are custom fine-tuned by Elice to deliver exceptional performance—especially in Korean language understanding and generation—and are now available via the Elice Cloud Model Library.

Fine-Tuning and Training Process

At the core of Elice’s Korean Qwen2.5 models is a rigorously curated dataset built from verified, high-quality sources, including:

  • AI-HUB datasets
  • Crawled data from official Korean government websites
  • Meticulously filtered open-source corpora

This ensures a clean, high-integrity training foundation with real-world Korean usage embedded from the ground up.

The base models were pretrained on an 18-trillion-token corpus; Elice's fine-tuning then applied a multi-stage alignment process incorporating:

  • Diverse Instruction Data: Capturing broad, practical scenarios.
  • Alignment Techniques: Supervised fine-tuning + RLHF + DPO for safe and useful outputs.
  • Multilingual Enhancement: With a strategic focus on Korean, Elice’s variant achieves superior linguistic performance in Korean contexts.
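The DPO step in the alignment recipe above can be sketched with its per-pair loss. This is a generic illustration of the technique, not Elice's actual training code; the log-probability values below are made up for the example:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    logp_* are summed token log-probabilities of the chosen/rejected
    responses under the policy being trained; ref_logp_* are the same
    quantities under the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin: small when the policy clearly
    # prefers the chosen response, large otherwise.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A policy that already favors the chosen response gets a low loss.
print(round(dpo_loss(-10.0, -20.0, -15.0, -15.0), 4))  # 0.3133
```

Supervised fine-tuning shapes the base behavior first; DPO then nudges the policy toward preferred responses without training a separate reward model.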

⚡ Trained in Just 2 Days on 8x H100 GPUs

Elice’s Korean Qwen2.5 models were trained in just 2 days using only 8x H100 GPUs—a testament to both engineering efficiency and scalable AI development.
You can do the same using Elice Cloud on-demand instances for only 33,600 KRW/hour, with flexible configurations to match your project size.
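At that rate, the cost of a comparable run is easy to estimate. This back-of-the-envelope sketch assumes the quoted 33,600 KRW/hour covers the whole 8x H100 instance for the full two days:

```python
# Hypothetical cost estimate: assumes the quoted hourly rate applies to
# the entire 8-GPU instance, billed for the full 2-day training run.
rate_krw_per_hour = 33_600
hours = 2 * 24  # 2 days of wall-clock training

total_krw = rate_krw_per_hour * hours
print(f"{total_krw:,} KRW")  # 1,612,800 KRW
```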

Performance on Standard Benchmarks

Here’s how Elice’s Korean Qwen2.5 models perform on industry benchmarks:
[Chart: Benchmark performance comparison between Qwen2.5-32B and Qwen2.5-72B across MMLU (knowledge), GSM8K (math), MATH (math proofs), HumanEval (coding), MBPP (coding), and MT-Bench (chat quality).]

The 72B variant leads the pack, showing top-tier performance across knowledge, reasoning, math, and code generation.

✅ Zero Language Switching Errors

Open-source multilingual models often suffer from language switching issues, where Korean prompts result in English responses (~10% of queries). Elice’s Korean Qwen2.5 models reduce this to ~0%, delivering consistent and natural Korean output.
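One simple way to measure language switching is to check whether responses to Korean prompts actually come back in Hangul. A minimal heuristic sketch (the 30% threshold is an arbitrary choice, not part of Elice's evaluation):

```python
def is_korean(text, threshold=0.3):
    """Heuristic: treat a response as Korean if enough of its alphabetic
    characters fall in the Hangul Syllables block (U+AC00..U+D7A3)."""
    letters = [ch for ch in text if ch.isalpha()]
    if not letters:
        return False
    hangul = sum('\uac00' <= ch <= '\ud7a3' for ch in letters)
    return hangul / len(letters) >= threshold

def switch_rate(responses):
    """Fraction of responses that switched out of Korean."""
    return sum(not is_korean(r) for r in responses) / len(responses)

# One Korean reply and one English reply -> a 50% switch rate.
print(switch_rate(["안녕하세요, 무엇을 도와드릴까요?", "Hello, how can I help?"]))  # 0.5
```

Running a script like this over a batch of Korean prompts gives the kind of switch-rate figure quoted above.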

Retrieval and Long-Context Capabilities

A standout capability of Elice’s Korean Qwen2.5 models is 128K token context support, paired with dedicated retrieval mechanisms. These models can digest and reason over massive documents—critical for tasks like legal text analysis, compliance auditing, and academic research.
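For documents that exceed even a 128K window, retrieval typically starts by splitting text into overlapping chunks for indexing. This is a generic sketch of that preprocessing step, not Elice's actual retrieval pipeline; the chunk and overlap sizes are illustrative:

```python
def chunk_text(text, chunk_size=1000, overlap=200):
    """Split text into overlapping character chunks for retrieval indexing.

    Overlap preserves context that would otherwise be cut at chunk
    boundaries, so a relevant passage is less likely to be split in two.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

# A 2,500-character document becomes 3 overlapping chunks.
chunks = chunk_text("x" * 2500, chunk_size=1000, overlap=200)
print(len(chunks))  # 3
```

The retrieved chunks are then ranked and packed into the model's context window, which at 128K tokens leaves room for very large documents in a single pass.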

Comparison with Korean-Specialized Models

To put performance in perspective, here’s how Elice’s Korean Qwen2.5 models stack up against other well-known Korean-focused models:
[Chart: Comparison of benchmark performance between Elice's Qwen2.5 models, Exaone 3.5 (32B), and SOLAR 10.7B, covering MMLU (English), Korean MMLU, GSM8K (math), HumanEval (coding), ARC-Challenge, and BBH (reasoning).]

While SOLAR and Exaone show solid results on Korean tasks, Elice’s Korean Qwen2.5 Instruct models—especially the 72B variant—outperform them not just in Korean but also across general-purpose reasoning, making them ideal for real-world bilingual use cases.

Inference and Deployment Advantages

Beyond raw performance, Elice’s Korean Qwen2.5 models are built for practical deployment:

  • Scalability: The 32B model runs on a single high-end GPU, while the 72B model delivers massive throughput on multi-GPU setups.
  • Quantized Versions (4/8-bit): Lightweight deployment with minimal performance trade-offs.
  • Open-Source under Apache 2.0 License: Tune, adapt, or integrate freely.
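A 4-bit quantized deployment of the 32B variant might look like the following sketch using the Hugging Face Transformers quantization API. The model id below is a placeholder, not a confirmed repository name:

```python
# Hypothetical loading sketch for a 4-bit quantized deployment.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "elice/korean-qwen2.5-32b-instruct"  # placeholder id, not verified

quant = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant,
    device_map="auto",  # spread layers across available GPUs
)
```

In 4-bit precision the 32B model's weights fit comfortably on a single high-end GPU, which is what makes the single-GPU deployment path above practical.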

📢 Available Now on Elice Cloud Model Library
No need to build from scratch—just deploy and go.

Already Powering Real Products

These models aren’t just sitting on a benchmark sheet—they’re already delivering value in the real world.

Elice’s Korean Qwen2.5 models are actively powering various core products and services within the Elice ecosystem, including:

  • Coding chat assistants
  • AI agents with tool-calling abilities
  • Advanced RAG applications for educational purposes
  • Countless microservices used internally

This real-world usage validates the models' performance, stability, and production readiness. Like you, we care about real-world applications just as much as benchmark numbers.

Conclusion

Elice’s Korean Qwen2.5 models raise the bar for Korean language AI:

  • Built on verified, high-quality data
  • Trained in record time on cost-effective, reproducible infrastructure
  • Solving real issues like language switching errors
  • Fully open, deployable, and available right now on Elice Cloud

Whether you’re building research tools, customer-facing applications, or enterprise AI pipelines, Elice’s Korean Qwen2.5 models offer the ideal combination of precision, performance, and practical deployment power.

Top-tier Korean language AI isn’t a luxury anymore—it’s just a few clicks away.

  • #Qwen2.5
  • #benchmark