VentureIsrael News

ClearML Enhances AMD Instinct GPU Partitioning to Accelerate Enterprise AI Adoption

2026-01-05 11:10 Portfolio Companies
ClearML, the leading end-to-end solution for GPU management and unleashing AI in the enterprise, today announced extended support for the fractional GPU capabilities now available for AMD Instinct™ GPUs, including the AMD Instinct MI300X GPU and newer GPUs. With partitioned AMD GPUs, enterprises can run multiple concurrent AI workloads with improved throughput and resource efficiency, maximizing infrastructure ROI while accelerating AI at every phase of production. ClearML's silicon-agnostic solution gives enterprises the freedom to scale their infrastructure when and how they choose, while keeping their AI infrastructure investments future-ready.

Organizations are increasingly looking to identify opportunities to enhance GPU utilization across their AI infrastructure. Diverse AI workload types create a resource mismatch: lightweight tasks often claim entire GPUs while compute-intensive training jobs wait in queue. Dynamic workload orchestration is critical for organizations seeking to scale AI capabilities without expanding infrastructure budgets.

A Complete Solution for GPU Management and Utilization

AMD Instinct™ MI300X GPUs deliver the hardware flexibility to partition each single GPU into eight independent compute slices, each with hardware-isolated resources and optimized memory locality. ClearML provides a turnkey solution for enterprise teams by automatically detecting available partitions, dynamically allocating resources based on workload requirements, and delivering unified visibility across all resources through a single control plane.
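To make the allocation idea concrete, the sketch below shows one simple way a scheduler might place workloads onto the eight hardware-isolated slices of a partitioned GPU. This is an illustrative first-fit toy, not ClearML's actual orchestration logic; the class names, the 24 GB-per-partition figure (192 GB HBM3 split eight ways), and the workload sizes are all assumptions for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: greedy first-fit placement of AI workloads onto the
# eight hardware-isolated compute slices of one partitioned GPU.
# Names and sizes are illustrative; these are not ClearML or AMD APIs.

PARTITIONS_PER_GPU = 8          # MI300X in fully partitioned mode
MEMORY_PER_PARTITION_GB = 24    # illustrative: 192 GB HBM3 split 8 ways

@dataclass
class Partition:
    index: int
    free_gb: float = MEMORY_PER_PARTITION_GB
    workloads: list = field(default_factory=list)

def allocate(workloads, partitions):
    """Assign each (name, mem_gb) workload to the first partition with
    enough free memory. Workloads that fit nowhere are queued."""
    placements, queued = {}, []
    for name, mem_gb in workloads:
        for p in partitions:
            if p.free_gb >= mem_gb:
                p.free_gb -= mem_gb
                p.workloads.append(name)
                placements[name] = p.index
                break
        else:
            queued.append(name)
    return placements, queued

partitions = [Partition(i) for i in range(PARTITIONS_PER_GPU)]
jobs = [("inference-a", 10), ("inference-b", 10), ("finetune", 20), ("embed", 6)]
placed, queued = allocate(jobs, partitions)
# Two light inference jobs share partition 0; the heavier jobs land on
# partitions 1 and 2, leaving the rest of the GPU free for new work.
```

In practice a production orchestrator would also weigh quotas, priorities, and cross-node placement, but the same principle applies: right-sizing workloads to partitions keeps lightweight tasks from claiming whole GPUs.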

Together, high-performance, cost-effective AMD hardware and ClearML's vendor-agnostic AI infrastructure platform deliver effortless management and optimal GPU utilization across heterogeneous GPU clusters.

Maximizing GPU utilization is a top infrastructure priority for global enterprises and organizations. According to ClearML's latest State of AI Infrastructure at Scale 2025-2026 annual report (https://go.clear.ml/state-of-ai-infrastructure-report-25-26), almost half (49.2%) of surveyed IT leaders at F1000 companies identified maximizing GPU efficiency across existing hardware, including shared compute and fractional GPUs, as their top priority for expanding AI infrastructure over the next 12-18 months. ClearML's native support for AMD Instinct partitioned GPUs unites AMD partitioning technology with intelligent orchestration, empowering AI teams to scale infrastructure more cost-effectively.

"The latest AMD Instinct GPUs deliver comprehensive performance, and now with fractional GPU capabilities, organizations can get even more value from these investments," says Moses Guttmann, CEO and Co-founder of ClearML. "By supporting fractional GPUs across multiple hardware vendors, we're reinforcing our commitment to a truly hardware-agnostic solution. Whether you're running AMD Instinct GPUs or a heterogeneous mix of infrastructure, ClearML maximizes your AI infrastructure investment. This gives teams the flexibility to choose the best solution for their needs while ensuring maximum utilization across their entire IT organization."

"AMD Instinct GPUs deliver leadership performance with an exceptional TCO profile. By supporting fractional GPU capabilities leveraging Instinct GPU partitioning, ClearML makes it simple for customers to run diverse workloads efficiently, driving better ROI and delivering more AI outcomes from the same infrastructure," says Marilyn Basanta, Senior Director of Product Management at AMD.

Unified AMD GPU Resource Management Across ClearML's AI Infrastructure Platform

ClearML automatically leverages fractional AMD GPU support across the entire platform, enabling teams to build, train, and deploy AI models without any additional configuration or effort:

  • Infrastructure Control Plane: Centralized visibility and governance across all resources, including partitioned AMD Instinct GPUs. Administrators can set resource quotas, monitor real-time utilization across every partition, orchestrate jobs at a per-partition level, and track which workloads are running on each node from a single dashboard. Dynamic workload orchestration ensures optimal utilization across the entire GPU cluster.
  • AI Development Center: AI builders can build, train, and deploy AI through an intuitive interface, with fractional GPUs accelerating time to production by providing more compute options for workloads.
  • GenAI App Engine: Launch models on AMD GPU partitions with a single click, complete with a full UI and management dashboard. Deploy both off-the-shelf and custom AI while maintaining cost efficiency.

Enterprise IT teams and AI builders gain consistent, governed access to partitioned AMD Instinct GPU resources from early-stage research through scaled production workloads, managed through a single control plane without switching tools or sacrificing control.

Availability

Support for AMD Instinct MI300X and newer GPUs is available now for ClearML Enterprise customers. Organizations interested in maximizing their AMD GPU utilization can contact ClearML to learn more.