Inception

Why are we starting this project?

Our goal is to aggregate idle computing power from around the world into a GPU cluster and to promote the development of decentralized computing.

We view computational power as the "digital oil" of this era, fueling an unprecedented technological industrial revolution. Our ambition is for TonGPU to become the definitive currency of computational power, backed by an ecosystem of products and services that make computational power usable both as a fundamental resource and as a tradable asset.

Why EVM first, and then TON?

According to current data from DefiLlama, EVM chains, particularly Ethereum, still have the highest trading volume and the broadest audience, so we will launch the project on Ethereum first. We plan to expand to the TON chain afterwards because TON also has a significant user base: many cryptocurrency users are active on Telegram, and these users hold a large amount of idle computational capacity.

Why not AWS/Google Cloud?

Our decision to steer away from mainstream cloud providers like AWS or Google Cloud is rooted in a vision to harness the vast, untapped computational potential sitting idle on everyday users' devices. While conventional cloud services offer specialized GPU/CPU capabilities, they do so at a steep cost: high rental and bandwidth fees put advanced computing out of reach for many.

In stark contrast, a wealth of computational power lies dormant within the devices of regular users, a resource that, if leveraged, could significantly democratize access to computational capabilities. By redirecting our focus from dedicated data centers to these underutilized personal resources, we not only optimize what's already available but also alleviate the financial burden associated with traditional cloud computing services.

How to Efficiently Utilize User Resources?

Our approach to distributed training and computation, a persistent challenge in the industry, is a time-sliced training method. Instead of relying on real-time synchronization over interconnects such as NVLink for concurrent training, we adopt a segmented approach to large-scale tasks.

This methodology breaks a substantial task into multiple training segments that are executed sequentially rather than simultaneously. Doing so removes the need for intensive real-time coordination, allowing more flexible and efficient use of distributed resources. The segmented approach not only makes better use of the computational power that is available but also improves scalability and reduces the bottlenecks commonly associated with synchronized distributed training.
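
To make the idea concrete, here is a minimal, self-contained Python sketch of segmented execution. All names in it (train_segment, run_time_sliced_job, and the stand-in "model" that is just a running sum) are illustrative, not part of the TonGPU codebase; the point is only that a large job can advance segment by segment, with each segment depending solely on the previous segment's checkpoint.

```python
# Illustrative sketch of time-sliced (segmented) training.
# Names and the toy "model" are hypothetical, not a TonGPU API.

def train_segment(checkpoint, data_shard, steps):
    """Run one bounded slice of training starting from a checkpoint.

    A real implementation would load model weights, run `steps`
    optimizer updates on `data_shard`, and save new weights; here the
    model state is a running sum so the example stays self-contained.
    """
    state = checkpoint
    for example in data_shard[:steps]:
        state += example  # stand-in for one gradient update
    return state

def run_time_sliced_job(dataset, num_segments, steps_per_segment):
    """Split one large job into segments executed one after another.

    Each segment needs only the previous segment's checkpoint, so no
    two segments ever have to run at the same time.
    """
    shard_size = len(dataset) // num_segments
    checkpoint = 0  # initial model state
    for i in range(num_segments):
        shard = dataset[i * shard_size:(i + 1) * shard_size]
        checkpoint = train_segment(checkpoint, shard, steps_per_segment)
        print(f"segment {i}: checkpoint = {checkpoint}")
    return checkpoint

if __name__ == "__main__":
    run_time_sliced_job(list(range(100)), num_segments=4, steps_per_segment=25)
```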

Through this time-sliced strategy, we distribute the computational load effectively, enabling more users to contribute their idle GPU/CPU resources without the stringent requirement of simultaneous processing. Each segment of the task receives the focused computational effort it needs, which makes overall processing more efficient and allows large-scale computational tasks to be handled in a distributed manner.
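
This same property is what lets the workload spread across unreliable volunteer devices. The sketch below is a hypothetical scheduler (the Worker class and schedule function are our own illustrative names, with simulated availability in place of real networking); it shows that sequential segments only ever require some worker to be online for the current segment, never many workers at once.

```python
# Illustrative scheduler for handing segments to volunteer workers
# one at a time. Availability is simulated; a real system would use
# network connections, retries, and result verification.

import random
from collections import deque

class Worker:
    def __init__(self, name):
        self.name = name

    def is_online(self):
        # Volunteer devices come and go; simulate intermittent uptime.
        return random.random() > 0.5

    def run(self, segment_id, checkpoint):
        # Stand-in for executing one training segment locally.
        return checkpoint + 1

def schedule(segments, workers):
    """Assign each segment to whichever worker is online first.

    Because segments run sequentially, the scheduler never needs two
    workers at once, only some worker for the current segment.
    """
    pending = deque(segments)
    checkpoint = 0
    while pending:
        online = [w for w in workers if w.is_online()]
        if not online:
            continue  # nobody online; a real scheduler would back off
        worker = online[0]
        seg = pending.popleft()
        checkpoint = worker.run(seg, checkpoint)
        print(f"segment {seg} done on {worker.name}, checkpoint = {checkpoint}")
    return checkpoint

if __name__ == "__main__":
    random.seed(0)
    schedule(segments=list(range(5)), workers=[Worker(f"w{i}") for i in range(3)])
```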