Cameron R. Wolfe, Ph.D.

@cwolferesearch

Jul 26, 2023
LLaMA-2 outlines the remaining limitations of open-source language models well. Put simply, the gap in performance between open-source and proprietary LLMs is largely due to the quality of alignment. However, LLaMA-2 takes a major step in the right direction…
I'm sure a lot of people have already read the LLaMA-2 publication, but here it is in case anyone needs the link. Goes without saying, but it's an awesome read full of really interesting and useful details about properly aligning LLMs.
arxiv.org
