AI is no longer the exclusive preserve of tech giants! Tether launches QVAC: is this the moment everyone gets their own LLM?

Stablecoin issuer Tether announced today (the 17th) a major technological breakthrough for its AI infrastructure QVAC Fabric: the world's first cross-platform BitNet LoRA fine-tuning framework, which allows large language models (LLMs) that previously required enterprise-grade GPUs and cloud compute to be trained and run on consumer-grade hardware, including smartphones.

Smartphones can now fine-tune LLMs: a 1B-parameter model in under two hours

According to data released by Tether, the framework has successfully fine-tuned BitNet models on a range of consumer devices, including the Samsung Galaxy S25 and iPhone 16.

Samsung Galaxy S25 (Adreno GPU):

  • 125-million-parameter model: fine-tuned in about 10 minutes
  • 1-billion-parameter model: about 1 hour 18 minutes

iPhone 16 (Apple GPU):

  • 1-billion-parameter model: about 1 hour 45 minutes

In stress tests, the framework has fine-tuned models of up to 13 billion parameters.

Training tasks that previously ran only on high-end NVIDIA GPUs have thus been compressed onto edge devices like smartphones.

Key technology: BitNet + LoRA — slashing AI costs dramatically

The core of this breakthrough lies in the combination of two technologies:

BitNet (1-bit LLM)

Compresses traditional high-precision weights down to the ternary values −1, 0, and +1, significantly reducing memory and compute requirements.
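The article does not publish QVAC's quantization code, but the idea can be sketched with the absmean scheme popularized by the BitNet b1.58 work: scale each weight tensor by the mean of its absolute values, round, and clip to the ternary set. A minimal NumPy illustration (a simplification, not Tether's implementation):

```python
import numpy as np

def ternary_quantize(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantize a weight tensor to {-1, 0, +1} with a per-tensor
    absmean scale (simplified BitNet-style sketch)."""
    scale = np.mean(np.abs(w)) + 1e-8          # absmean scaling factor
    w_q = np.clip(np.round(w / scale), -1, 1)  # ternary weights
    return w_q, scale

# Every quantized weight is exactly -1, 0, or +1:
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
w_q, scale = ternary_quantize(w)
print(np.unique(w_q))
```

Because each weight now carries at most ~1.58 bits of information instead of 16 or 32, both storage and matrix multiplication become far cheaper.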

LoRA (Low-Rank Adaptation)

Trains only a small number of parameters (reducing training volume by up to 99%), greatly lowering fine-tuning costs.
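The "up to 99%" figure follows directly from LoRA's parameter arithmetic: the frozen weight matrix has d_in × d_out parameters, while the two low-rank adapter factors contribute only rank × (d_in + d_out). A quick back-of-envelope check (the layer size and rank below are illustrative choices, not figures from Tether):

```python
def lora_param_fraction(d_in: int, d_out: int, rank: int) -> float:
    """Fraction of a d_in x d_out layer's parameters that LoRA trains.

    LoRA freezes the full weight matrix W (d_in * d_out parameters)
    and learns two low-rank factors A (d_in x rank) and B (rank x d_out),
    so only rank * (d_in + d_out) parameters are updated.
    """
    full = d_in * d_out
    lora = rank * (d_in + d_out)
    return lora / full

# Rank 8 on a hypothetical 2048 x 2048 projection layer:
frac = lora_param_fraction(2048, 2048, rank=8)
print(f"trainable fraction: {frac:.2%}")  # ~0.78%, i.e. >99% frozen
```

At rank 8 on a 2048-wide layer, fewer than 1% of the parameters are trainable, which is what makes fine-tuning feasible within a phone's memory and power budget.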

Together, these enable models to operate in extremely low-resource environments.

Empirical tests show BitNet-1B uses 77.8% less VRAM than Gemma-3-1B and 65.6% less than Qwen3-0.6B; on the same hardware, a model roughly twice the size can be run.

GPU support unlocks mobile AI: performance boosted by up to 11 times

Another key breakthrough of QVAC is enabling BitNet to run truly on “non-NVIDIA” ecosystems. It supports GPUs from AMD, Intel, Apple Silicon, and even mobile GPUs: Adreno, Mali, Apple Bionic.

Large language models are no longer the exclusive domain of tech giants; AI can now be decentralized

Tether CEO Paolo Ardoino stated: “Intelligence will be a key factor in future societal development. It has the potential to enhance social stability, serve as a bridge connecting society, or further empower a select few elites. The future of AI should be accessible, usable, and available to everyone, not monopolized by a few cloud service providers with enormous resources.”

Traditional AI development heavily relies on cloud and large GPU clusters, which are costly and concentrated among a few tech giants. Tether’s QVAC platform supports meaningful large-scale model training on consumer hardware, including smartphones, demonstrating that advanced AI can be decentralized and inclusive. In the coming months, Tether will continue investing significant resources and funds to ensure AI can be used anytime, anywhere on local devices.

This article, "AI is no longer the exclusive preserve of tech giants! Tether launches QVAC: the era of everyone having an LLM?", was originally published on Chain News (ABMedia).
