Ant Group uses domestic chips to train AI models and cut costs




Ant Group Leverages Domestic Chips for Cost-Effective AI Training

Ant Group is strategically employing Chinese-made semiconductors to train its artificial intelligence models, aiming to cut costs and reduce reliance on U.S. technology. This move reflects broader initiatives among Chinese tech companies to circumvent export restrictions on advanced chips.

The company, an affiliate of Alibaba, has used domestic chips from suppliers linked to Alibaba and Huawei to train large language models with the Mixture of Experts (MoE) method. These efforts have reportedly achieved results comparable to those obtained with Nvidia's advanced chips. While Nvidia technology remains part of Ant's AI toolkit, there is a discernible shift toward alternatives from AMD and Chinese manufacturers.

This development marks Ant’s active participation in the escalating AI innovation contest between China and the U.S., with a focus on cost-efficient model training. The pursuit of domestic hardware alternatives underscores a wider ambition among Chinese tech firms to navigate around bans on high-end chips, like Nvidia’s H800, essential for many AI processes.

Ant’s advancements have been detailed in a research paper, asserting that its models sometimes outperform those of major international counterparts. Although these claims have not been independently verified, they suggest significant progress in reducing operational costs and dependence on foreign hardware.

The MoE approach splits a model into many specialized sub-networks, or "experts," with a routing layer sending each input to only a small number of them, so that only a fraction of the model's parameters are active at any time. This makes training and inference more efficient, and the method has gained traction among AI researchers, aligning with approaches used by tech giants such as Google.
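To make the routing idea concrete, here is a minimal sketch of an MoE layer in plain NumPy. All sizes (number of experts, dimensions, top-k choice) are illustrative toy values, not Ant's actual architecture; the point is only that each token activates just its top-k experts rather than the full set.

```python
import numpy as np

# Toy Mixture-of-Experts layer: a router picks the top-k experts per token,
# and only those experts' weights are used to compute that token's output.
# Sizes below are illustrative, not taken from any real model.

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_layer(x):
    """Route each token (row of x) to its top-k experts and mix their outputs."""
    logits = x @ router_w                              # (tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)              # softmax over experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        topk = np.argsort(probs[t])[-top_k:]           # indices of the k best experts
        weights = probs[t, topk] / probs[t, topk].sum()
        for w, e in zip(weights, topk):
            out[t] += w * (x[t] @ experts[e])          # only k experts do work
    return out

tokens = rng.standard_normal((5, d_model))
y = moe_layer(tokens)
print(y.shape)  # (5, 8)
```

Because each token touches only `top_k` of the `n_experts` weight matrices, compute per token stays roughly constant even as the total parameter count grows, which is why the technique suits cheaper hardware.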

Training MoE models typically requires high-performance GPUs, often financially out of reach for smaller companies. Ant's research aims to overcome this barrier by optimizing training to run on cheaper hardware, an approach described in its paper on scaling models without premium GPUs.

In contrast, Nvidia’s strategy focuses on increasing chip power and capability, with CEO Jensen Huang asserting the perpetual demand for computational power due to the nature of AI model evolution.

Ant’s paper indicates that training one trillion tokens using standard hardware costs approximately 6.35 million yuan. By employing their optimized methods, Ant managed to lower these costs to 5.1 million yuan.
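The two figures above imply a saving of roughly 20 percent, which a few lines of arithmetic confirm (the figures are the reported ones; the script itself is just illustration):

```python
# Reported figures from Ant's paper: cost to train on one trillion tokens.
baseline_cost = 6.35e6   # yuan, standard high-performance hardware
optimized_cost = 5.1e6   # yuan, Ant's optimized methods

savings = baseline_cost - optimized_cost
pct = savings / baseline_cost * 100
print(f"Savings: {savings / 1e6:.2f}M yuan (~{pct:.0f}%)")  # Savings: 1.25M yuan (~20%)
```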

The company is keen to implement its Ling-Plus and Ling-Lite models across industrial sectors such as healthcare and finance. Its acquisition of the medical platform Haodf.com exemplifies its commitment to deploying AI innovations in healthcare solutions.

Ant has made its models open-source, with Ling-Lite boasting 16.8 billion parameters and Ling-Plus having 290 billion, highlighting its technological prowess. However, the paper notes ongoing challenges, such as performance instability related to hardware or structural tweaks during training.

"If you find one point of attack to beat the world’s best kung fu master, you can still say you beat them, which is why real-world application is important," remarked Robin Yu, CTO of Shengshang Tech, emphasizing practical implementation over theoretical advancement.

This approach not only positions Ant Group prominently within the AI sphere but also reflects a growing trend toward more localized, cost-effective AI solutions. As the AI landscape evolves, these innovative directions hold significant implications for global tech competitiveness.

The Future of AI in Video Content Creation

In our digital era, video stands as the dominant medium for storytelling. Whether you're a brand strategist, content creator, or someone who loves sharing moments visually, producing high-quality videos is essential for capturing and retaining attention. Yet, traditional video production often demands significant time, money, and specialized skills.

This is where AI makes a compelling entrance.

AI-powered video generators are transforming the landscape by enabling users to convert simple prompts or images into studio-quality videos in minutes. Tools like Dreamlux automate everything from animations and voiceovers to seamless scene transitions, democratizing video creation for all.

Beyond automation, AI is now capturing subtle human moments—adding life to photos with gentle, natural effects.

Enter the World of AI Breeze Blowing Effect

A standout among these innovations is the AI Breeze Blowing Effect feature, which brings photos to life by simulating soft wind movement through the subject’s hair. With just a single image, the tool generates a realistic animation where strands of hair flow naturally in the breeze, adding elegance, emotion, and motion to an otherwise still photo.

Perfect for portraits, fashion visuals, or artistic content, the AI Breeze Blowing effect adds a cinematic touch with minimal effort. It’s ideal for creators looking to evoke a sense of calm, beauty, or atmosphere—without needing complex video shoots or special effects.

By animating even the smallest details, AI helps turn simple visuals into emotionally rich video moments.

AI Breeze Blowing Effect - Add soft motion to hair with AI

How to Use Dreamlux AI Breeze Blowing for a Natural Wind Effect

Follow the steps below to create a gentle breeze animation with Dreamlux.ai:

  1. Go to the official site at https://dreamlux.ai and click "Templates"
  2. Select the "Free AI Breeze Blowing Effect" from the template list
  3. Upload a portrait or photo where hair movement will enhance the scene
  4. Click "Create", and let the AI Breeze Blowing Effect tool generate a flowing hair animation in just minutes

Dreamlux helps you add motion and mood to any still image.
