OpenAssistant Models and Dataset Released
For the last four months I have been working on OpenAssistant, a volunteer project in collaboration with LAION-AI to create a fully open-source language model tuned to follow instructions as an alternative to proprietary services like ChatGPT. Today (15th April 2023) we release a high-quality instruction tuning dataset and the first versions of OpenAssistant models.
Dataset
We release the first version of the OpenAssistant Conversations Dataset, a fully volunteer-contributed dataset consisting of over 50,000 high-quality instruction and response pairs across multiple languages. The dataset is released under a permissive Creative Commons license, meaning it can be used for a wide range of purposes, including commercial use. The instruction and response pairs have already been used successfully to tune a pretrained large language model (LLM) to follow instructions via supervised finetuning (SFT), demonstrating their value.
In addition to the instruction and response pairs, we release accompanying label data from community moderation: each message was rated by several users on categories such as spam, quality, and helpfulness. Finally, each prompt has multiple candidate responses, and we release the human ranking data giving the relative ordering of those responses. This data can be used to train reward models for reinforcement learning from human feedback (RLHF).
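The data is organised as conversation trees rather than flat pairs: each message records its parent, its role (prompter or assistant), and, for assistant replies, a rank relative to its sibling responses. Below is a minimal sketch of reconstructing instruction/response pairs, assuming the dataset is hosted on the Hugging Face Hub; the identifier and field names are illustrative assumptions, not a confirmed schema.

```python
# Minimal sketch: rebuild instruction/response pairs from conversation trees.
# The dataset identifier and field names below are assumptions for illustration.
from datasets import load_dataset

ds = load_dataset("OpenAssistant/oasst1", split="train")  # assumed identifier

# Index messages by id so each assistant reply can be joined to its parent prompt.
by_id = {m["message_id"]: m for m in ds}

pairs = []
for m in ds:
    if m["role"] == "assistant" and m["parent_id"] in by_id:
        pairs.append({
            "prompt": by_id[m["parent_id"]]["text"],
            "response": m["text"],
            "rank": m["rank"],  # relative ranking among sibling responses, if present
        })

print(len(pairs), "instruction/response pairs reconstructed")
```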
A more in-depth breakdown and analysis of the dataset will be available in the upcoming Open Assistant Dataset version 1 research paper.
Models
We release a range of models tuned from two base models: Pythia 12B and LLaMa 30B.
Our Pythia-based 12B parameter model is released under the Apache 2.0 license, making it available for a wide range of uses, including commercial use. This is the most capable fully open-source instruction-tuned model available, having been trained on more data than alternatives such as Databricks’ Dolly. While larger publicly available instruction-tuned models exist, such as Vicuna, they are derived from LLaMa and therefore not fully open-source.
Our LLaMa-based 30B model will be released as weight deltas, meaning you must have a copy of the original LLaMa weights in order to use it. Meta's restrictions on the use of LLaMa models must also be followed, meaning the model is available for research use only.
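For those unfamiliar with weight deltas: only the difference between the finetuned weights and the original LLaMa weights is published, so the original weights must be combined with the deltas before the model can be used. The sketch below shows the simplest additive form of this reconstruction; the actual release may package the deltas differently (for example as XOR deltas), and the file names here are hypothetical.

```python
# Minimal sketch of merging additive weight deltas with the original base weights.
# Assumes both checkpoints are plain state dicts with matching parameter names;
# file paths and the additive format are assumptions for illustration.
import torch

base = torch.load("llama-30b/consolidated.pth", map_location="cpu")    # original LLaMa weights
delta = torch.load("oasst-llama-30b-delta/delta.pth", map_location="cpu")  # released deltas

merged = {name: base[name] + delta[name] for name in delta}
torch.save(merged, "oasst-llama-30b/merged.pth")
```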
We additionally release the reward model used for RLHF training under a permissive Apache 2.0 license.
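The reward model takes a prompt and a candidate response and produces a scalar score, which RLHF training uses to reinforce preferred outputs. A minimal sketch of scoring a response with such a model via transformers is shown below; the checkpoint identifier is a placeholder, and the sequence-classification interface is an assumption rather than a confirmed detail of the release.

```python
# Minimal sketch: score a (prompt, response) pair with a reward model.
# The model identifier below is a placeholder, not necessarily the released checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "OpenAssistant/reward-model-deberta-v3-large-v2"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

prompt = "Explain what a reward model is."
response = "A reward model scores how good a response is, and is used during RLHF training."

inputs = tokenizer(prompt, response, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits[0].item()  # higher score means a more preferred response
print(score)
```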
Chat Interface and Safety
You can chat with an OpenAssistant LLaMa-based 30B model using the free chat interface here. Chatting also helps us build better datasets and models, as responses and feedback are recorded, so please use the thumbs-up and thumbs-down ratings liberally!
Alongside OpenAssistant, safety techniques for LLMs have been developed under the blade2blade project. Our chat interface will soon use blade2blade to mitigate potential harms.
Future Work
In the future we will release improved models, including models tuned with RLHF. We are also considering several streams of future work, including:
- Tuning other open-source models, such as Cerebras-GPT, GPT-J, and GPT-NeoX, to follow instructions using OpenAssistant data.
- Developing integrations for Open Assistant, similar to the plugins proposed for ChatGPT.
- Expanding the Open Assistant data and preparing a corpus for pretraining an LLM from scratch.
Contributors
A fantastic team has been working on this project, and you can see an incomplete list of those who contributed to development here!
If you wish to get in touch with me to discuss the project or otherwise, you can message me on LinkedIn (link below) or email me at oliver ge stanley (_at_) gmail (_dot_) com
(address obfuscated to avoid spam).