
Christiane Lillge
Overview
- Founded Date: August 2, 2024
- Sectors: Sales & Marketing
- Posted Jobs: 0
Company Description
DeepSeek’s First-generation Reasoning Models
DeepSeek-R1 is DeepSeek's first generation of reasoning models, achieving performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
Models
DeepSeek-R1
Distilled models
The DeepSeek team has demonstrated that the reasoning patterns of larger models can be distilled into smaller models, yielding better performance than the reasoning patterns discovered through RL on small models.
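In practice, this distillation step is supervised fine-tuning of a smaller dense student model on reasoning traces generated by DeepSeek-R1. The sketch below shows that shape using Hugging Face transformers; the student checkpoint, the data file r1_reasoning_traces.jsonl, its "text" field, and all hyperparameters are illustrative assumptions, not the team's actual recipe.

```python
# Minimal sketch: distillation as supervised fine-tuning (SFT) of a small dense
# student on reasoning traces previously generated by DeepSeek-R1.
# Assumptions: the student checkpoint, the JSONL file name, and its "text"
# field (prompt + chain of thought + answer) are hypothetical placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

student = "Qwen/Qwen2.5-1.5B"  # assumed student base model
tokenizer = AutoTokenizer.from_pretrained(student)
if tokenizer.pad_token is None:  # some tokenizers ship without a pad token
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(student)

# One R1-generated reasoning trace per line, stored under a "text" field.
traces = load_dataset("json", data_files="r1_reasoning_traces.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = traces.map(tokenize, batched=True, remove_columns=traces.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="distilled-student",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=1e-5,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # Standard causal-LM collator: labels are the input ids, shifted inside the model.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("distilled-student")
```

The point of the claim above is that this plain SFT on R1-generated data outperforms running RL directly on the small model.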
Below are the models produced by fine-tuning several dense models widely used in the research community on reasoning data generated by DeepSeek-R1. The evaluation results show that the distilled smaller dense models perform exceptionally well on benchmarks; a usage sketch follows the list.
DeepSeek-R1-Distill-Qwen-1.5B
DeepSeek-R1-Distill-Qwen-7B
DeepSeek-R1-Distill-Llama-8B
DeepSeek-R1-Distill-Qwen-14B
DeepSeek-R1-Distill-Qwen-32B
DeepSeek-R1-Distill-Llama-70B
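As a usage sketch for the list above: assuming the distilled checkpoints are published on the Hugging Face Hub under the deepseek-ai organization with the repo id shown, and that a bfloat16-capable GPU plus the accelerate package are available, the smallest model can be run with transformers like this.

```python
# Minimal sketch: running a distilled checkpoint locally with transformers.
# Assumptions: the Hub repo id below, a bf16-capable GPU, and `accelerate`
# installed for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style prompt; reasoning models typically emit their chain of thought
# before the final answer, so allow a generous token budget.
messages = [{"role": "user", "content": "What is 17 * 24? Explain your steps."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same pattern applies to the larger distilled checkpoints; only the repo id and the memory requirements change.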
License
The model weights are licensed under the MIT License. The DeepSeek-R1 series supports commercial use and permits any modifications and derivative works, including, but not limited to, distillation for training other LLMs.