Would that actually be decent? Even 6B models feel far too rudimentary after experiencing 33B+ models and/or ChatGPT. I haven't tried those really scaled-down and optimized models, though!
They're decent for text-completion purposes, e.g. generating some corpspeak for an email, or generating some "Wikipedia"-like text. You have to know how to write good prompts; don't try to treat it like ChatGPT.
For example, if I want to know about the history of Puerto Rico, I would put:
"The history of Puerto Rico starts in about 480 BC when"
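If it helps, here's a minimal sketch of that completion-style approach in Python. It assumes the llama-cpp-python bindings and a local GGUF copy of the model (the file path and generation parameters are just placeholders; any local runner works the same way):

```python
from llama_cpp import Llama

# Hypothetical local path to a GGUF build of orca_mini_3b -- adjust to wherever
# your quantized model file actually lives.
llm = Llama(model_path="./orca-mini-3b.gguf")

# Phrase the prompt as the *beginning* of the text you want, not as a question
# or instruction -- the model simply continues from where you leave off.
prompt = "The history of Puerto Rico starts in about 480 BC when"

# max_tokens and temperature here are illustrative defaults, not tuned values.
out = llm(prompt, max_tokens=200, temperature=0.7)
print(prompt + out["choices"][0]["text"])
```

The key idea is the prompt shape: a base model has no chat tuning, so you hand it the first half of a sentence and let it complete the rest.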
Decent enough for a model roughly 50 times smaller than ChatGPT. I use orca_mini_3b.