Tether Unveils AI Framework Enabling Model Training on Smartphones



The post Tether Unveils AI Framework Enabling Model Training on Smartphones appeared on BitcoinEthereumNews.com.

Key Insights

- Tether introduced a framework enabling large language model training on smartphones.
- The system used BitNet architecture and LoRA fine-tuning to reduce compute needs.
- Crypto firms increased spending on AI infrastructure and high-performance computing.

Tether released a new artificial intelligence training framework on Tuesday that enables large language models to be run and fine-tuned on consumer hardware. The system formed part of the company's QVAC platform and supported smartphones alongside several non-Nvidia processors. Engineers designed the framework to reduce memory requirements, lowering the cost barrier to building and testing language models.

The launch came as crypto infrastructure companies moved deeper into artificial intelligence development and compute markets. Tether, issuer of the largest stablecoin by market capitalization, framed the release as an attempt to decentralize machine-learning capabilities. The firm argued that enabling model training on widely available hardware could reduce reliance on centralized cloud providers.

Tether Introduced BitNet-Based Training System

Tether's announcement described the framework as a training environment built on Microsoft's BitNet architecture. The design combined one-bit neural network structures with LoRA fine-tuning methods, allowing developers to adjust models while keeping compute demands low.

Company engineers said the system trained language models with up to one billion parameters on smartphones in under two hours; smaller models reportedly completed training within minutes when optimized through the same approach. The company also stated that the platform supported models reaching thirteen billion parameters on mobile devices.

Engineers built the system to operate across several hardware ecosystems rather than relying on Nvidia chips.
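The LoRA technique mentioned above is what makes fine-tuning feasible on constrained hardware: instead of updating a model's full weight matrices, training adjusts two small low-rank factors per matrix. The sketch below illustrates the general idea only; the shapes, rank, and names are assumptions for illustration, not details of Tether's framework.

```python
import numpy as np

# Minimal LoRA sketch: the pretrained weight W stays frozen; only the
# low-rank factors A and B are trained, so the effective weight is W + B @ A.
# Shapes and rank are illustrative, not taken from Tether's framework.
rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 512, 8

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))                   # zero-init: no change at start

def forward(x):
    # base path plus low-rank correction; W itself is never updated
    return W @ x + B @ (A @ x)

lora_params = A.size + B.size   # trainable values
full_params = W.size            # values in the full matrix
print(f"trainable: {lora_params} of {full_params}")
```

With these example shapes the trainable parameters amount to around three percent of the full matrix, which is why gradient updates fit within a phone's memory budget.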
The framework supported AMD processors, Intel architectures, Apple Silicon systems, and mobile graphics processors from Qualcomm and Apple. That compatibility expanded access to machine-learning experimentation beyond traditional high-performance computing clusters. The technical design also reduced graphics memory requirements compared with standard models. Internal engineering results showed that…
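The memory reduction from one-bit weights can be illustrated with a small sketch. BitNet-style quantization (ternary in the b1.58 variant) constrains each weight to {-1, 0, +1} plus a shared scale, so storage falls from 16 bits per weight to under two and matrix products reduce to additions and subtractions. The code below is a simplified illustration of that idea, not Tether's actual implementation.

```python
import numpy as np

# Simplified sketch of BitNet-style weight quantization: each weight is
# mapped to {-1, 0, +1} with one shared scale per tensor. This is an
# illustration of the concept only, not Tether's or Microsoft's code.
def quantize_ternary(W):
    scale = np.mean(np.abs(W)) + 1e-8          # shared per-tensor scale
    Wq = np.clip(np.round(W / scale), -1, 1)   # ternary values in {-1, 0, 1}
    return Wq, scale

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 256))
Wq, scale = quantize_ternary(W)

# At compute time the approximation scale * Wq stands in for W.
err = np.abs(W - scale * Wq).mean()
print(f"unique values: {sorted(np.unique(Wq))}, mean abs error: {err:.3f}")
```

Because `Wq` holds only three distinct values, it can be packed far more densely than 16-bit floats, which is the source of the graphics-memory savings described above.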



