Llama 2, an advanced large language model developed by Meta, is now open-source and freely accessible for both research and commercial purposes. The release includes pretrained and fine-tuned variants ranging from 7 billion to 70 billion parameters.
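For readers who want to experiment with one of these variants, the sketch below shows one common way to load a released checkpoint through the Hugging Face transformers library. The repository id and prompt are illustrative choices, not part of the original announcement, and access to the weights requires accepting Meta's license terms on the Hugging Face Hub.

```python
# Minimal sketch: loading a Llama 2 checkpoint via Hugging Face transformers.
# The "meta-llama/Llama-2-7b-chat-hf" repository id is the 7B fine-tuned chat
# variant; downloading it assumes you have accepted Meta's license and been
# granted access on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion from a plain-text prompt.
prompt = "Briefly explain what a large language model is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies to the other sizes by swapping the repository id; the larger checkpoints simply require more memory or multi-GPU setups.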
Trained on 2 trillion tokens, the pretrained models offer double the context length of their predecessor, Llama 1. The fine-tuned models have additionally been trained on over 1 million human annotations to improve their performance.
Llama 2 stands out among open-source language models, outperforming them on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests.
Llama 2 was pretrained on publicly available online data sources, while the fine-tuned models draw on publicly available instruction datasets along with the human annotations noted above.
Meta has also forged partnerships with cloud providers, companies, and researchers, underscoring its commitment to an open approach to advancing language technology.