
What is DGPT?



Overview of Decentralized Unsupervised Pre-Training (DUPT)

Decentralized Unsupervised Pre-Training (DUPT) is a technique for training language models without labeled data. It applies unsupervised learning to a large corpus of unlabeled tokens to build a robust foundational model capable of understanding and generating human language. This foundational model is the starting point for all subsequent fine-tuning.
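
To make the objective concrete, the sketch below trains a deliberately tiny causal language model on next-token prediction in PyTorch. The model (`TinyCausalLM`), its size, the random data, and the hyperparameters are illustrative assumptions only, not DGPT's actual implementation.

```python
# A minimal sketch of the unsupervised pre-training objective (next-token
# prediction) in PyTorch. Model size, data, and hyperparameters are
# illustrative placeholders, not DGPT's actual training code.
import torch
import torch.nn as nn

class TinyCausalLM(nn.Module):
    """A deliberately small GPT-style model used only for illustration."""
    def __init__(self, vocab_size=256, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        seq_len = tokens.size(1)
        # Causal mask: each position may only attend to earlier positions.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        h = self.encoder(self.embed(tokens), mask=mask)
        return self.lm_head(h)

model = TinyCausalLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

# Unlabeled token sequences stand in for the corpus collected from oracles;
# random data is used here purely so the loop runs end to end.
batch = torch.randint(0, 256, (8, 33))
inputs, targets = batch[:, :-1], batch[:, 1:]   # predict each next token

logits = model(inputs)
loss = loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
loss.backward()
optimizer.step()
print(f"pre-training loss: {loss.item():.3f}")
```

Because the only supervision signal is the text itself (each token predicts the next), this step can consume arbitrarily large amounts of unlabeled data.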

Overview of Decentralized Supervised Fine-Tuning (DSFT)

Decentralized Supervised Fine-Tuning (DSFT) builds on the pre-trained model produced by DUPT. In DSFT, the pre-trained model is adapted to specific tasks using labeled datasets, which helps it generalize to new, unseen data and improves its applicability and performance in real-world scenarios.
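
The sketch below contrasts this with pre-training by fine-tuning a backbone on a simple labeled classification task. The backbone, the `FineTunedClassifier` head, the task, and the data are placeholders assumed for illustration; in DSFT the backbone would be the model produced by DUPT, loaded from a checkpoint rather than initialized fresh.

```python
# A minimal sketch of supervised fine-tuning, assuming a simple text
# classification task. The backbone here is a stand-in: in DSFT it would be
# the model pre-trained during DUPT, loaded from a checkpoint.
import torch
import torch.nn as nn

# Placeholder backbone (embedding + encoder). Shapes mirror the pre-training
# sketch above; in practice these weights come from DUPT.
backbone = nn.Sequential(
    nn.Embedding(256, 64),
    nn.TransformerEncoder(nn.TransformerEncoderLayer(64, 4, batch_first=True), 2),
)

class FineTunedClassifier(nn.Module):
    """Adds a small task-specific head on top of the pre-trained backbone."""
    def __init__(self, backbone, d_model=64, num_labels=2):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(d_model, num_labels)

    def forward(self, tokens):
        h = self.backbone(tokens)          # (batch, seq_len, d_model)
        return self.head(h.mean(dim=1))    # mean-pool over tokens, then classify

clf = FineTunedClassifier(backbone)
optimizer = torch.optim.AdamW(clf.parameters(), lr=1e-4)  # lower LR than pre-training
loss_fn = nn.CrossEntropyLoss()

# Labeled batch: token sequences paired with task labels (placeholder data).
tokens = torch.randint(0, 256, (8, 32))
labels = torch.randint(0, 2, (8,))

loss = loss_fn(clf(tokens), labels)
loss.backward()
optimizer.step()
print(f"fine-tuning loss: {loss.item():.3f}")
```

Fine-tuning typically uses a lower learning rate than pre-training so the task-specific head adapts without erasing what the backbone learned from the unlabeled corpus.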

Purpose and Benefits

Combining DUPT and DSFT is intended to produce a versatile, powerful language model that can be trained efficiently. The main benefits are improved generalization, faster convergence during training, and the ability to leverage large amounts of unlabeled data, reducing dependence on costly labeled datasets.
