Solution
Next-generation technology environment for smart innovation based on NPU
Discovering tailored AI use cases via NPU servers
Methodology for performing proof-of-concept analysis projects
Application of Whynet's proven analysis methodology
The ATOM Server is the only NPU server in Korea capable of running inference on a wide range of models, from LLMs to Vision AI and Visual Language Models.
Speed
Achieved world-class performance against GPUs and other NPUs in the global MLPerf benchmark.
SLM (Small Language Model)
The only domestically mass-produced NPU that supports acceleration of
transformer-based Small Language Models (SLMs) and various generative AI models.
Energy
With lower power costs and higher energy efficiency than GPUs,
AI services and data centers can be operated more economically.
Tech
Rebellions' ATOM™ incorporates performance and core technologies
verified at ISSCC, the world's premier semiconductor conference.
Key Products
ATOM™-Max Server
ATOM™-Max Server is a high-performance server for large-scale AI inference. Built on a high-efficiency power design, a single server can stably handle large-scale AI inference workloads. It supports hundreds of AI models, including Vision AI, LLM, Multi-Modal AI, and Physical AI, as well as core operational tools such as vLLM, Triton, and Kubernetes, and provides a GPU-friendly development environment for easy adoption.
Key Features
Performance at Any Scale
Supports large-scale AI services without performance degradation even when user requests surge.
Processes thousands of tokens and frames in real-time with a single server.
Variety of Model Applications
Can immediately utilize hundreds of AI models, including LLM, Vision, Multi-Modal, and Physical AI,
to implement customized AI services.
Sustainable AI Infrastructure
Reduces the Total Cost of Ownership (TCO) of AI infrastructure through high-efficiency power design,
enabling the construction of a sustainable AI business environment.
Develop As You Always Have
Can use familiar development environments and workflows (PyTorch, TensorFlow, etc.)
as-is, allowing easy continuation of development through tutorials.
Full-Stack Software Support
Compatible with open-source ecosystems such as vLLM, Triton Inference Server, and K8s,
enabling end-to-end service construction with various AI service operation tools for
efficient serving, flexible resource operation, and monitoring.
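Because the server is compatible with vLLM's OpenAI-style serving stack, client requests can follow the standard chat-completions shape. A minimal sketch using only the standard library; the endpoint URL and model identifier below are illustrative assumptions, not values from the source:

```python
import json

# Hypothetical values for illustration only; an actual deployment would
# expose its own host, port, and model id.
ENDPOINT = "http://atom-server:8000/v1/chat/completions"
MODEL = "example-org/example-llm"

def build_chat_request(prompt: str, max_tokens: int = 128) -> str:
    """Build an OpenAI-style chat-completions payload as a JSON string.

    This is the request body a client would POST to the serving endpoint.
    """
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_chat_request("Summarize NPU advantages in one sentence.")
print(body)
```

In a live deployment this JSON body would be sent with an HTTP POST to the endpoint; the payload shape is the same one accepted by vLLM's OpenAI-compatible API server.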