Although AkaLlama-70B has significant potential, its responses can sometimes be inaccurate, biased, or misaligned, posing risks if the model is used without additional testing and refinement. Furthermore, the quality of the model's output depends heavily on the system prompt and the decoding strategy; changes to either can noticeably degrade output quality. We therefore strongly recommend using our model with caution.
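To illustrate where these two knobs enter a typical inference call, below is a minimal sketch using the Hugging Face transformers library (with chat-template support). The model ID is taken from the repository URL cited below; the system prompt text and the sampling values are placeholder assumptions for illustration, not the project's recommended settings.

```python
import torch
import transformers

# Model ID from the AkaLlama repository URL.
model_id = "mirlab/AkaLlama-llama3-70b-v0.1"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# The system prompt and user message here are illustrative placeholders;
# the model's behavior can change substantially with a different system prompt.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce yourself briefly."},
]

# Stop tokens following the common Llama 3 model-card convention
# (an assumption; AkaLlama is a Llama 3 fine-tune).
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

# The decoding strategy is the second major factor in output quality;
# these sampling values are example settings, not tuned recommendations.
outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,  # lower values yield more deterministic output
    top_p=0.9,
)
print(outputs[0]["generated_text"][-1]["content"])
```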
Citation: If you use AkaLlama, please cite:

```
@misc{akallama,
  author       = {Chung, Jiwan and Jeon, Jaehyun and Kim, Saejin and Lim, Seungwon and Oh, Giyeong and Son, Yejin and Yu, Youngjae},
  title        = {AkaLlama: Yonsei University Large Language Model Project},
  year         = {2024},
  publisher    = {Hugging Face},
  journal      = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1}},
}
```
This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. We thank the Llama team for giving us access to their models, and the open-source projects this work builds on.
Usage and License Notices: The data, code, and checkpoint are intended and licensed for research use only. They are further restricted to uses that follow the Llama license agreement (Meta Llama 3 Community License Agreement).
Special Thanks: We thank the Data Center of the Department of Artificial Intelligence at Yonsei University for providing computational resources.