Model input
prompt
Prompt to send to Llama.
max_length
Maximum number of tokens to generate; a word is generally 2-3 tokens. (minimum: 1)
temperature
Adjusts randomness of outputs; values greater than 1 produce more random text, values near 0 are close to deterministic, and 0.75 is a good starting value. (minimum: 0.01; maximum: 5)
top_p
When decoding text, samples only from the most likely tokens whose cumulative probability is p; lower this value to ignore less likely tokens. (minimum: 0.01; maximum: 1)
repetition_penalty
Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, and values less than 1 encourage it. (minimum: 0.01; maximum: 5)
seed
Seed for the random number generator, for reproducibility.
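
The inputs above are what a caller supplies when running the model; below is a minimal sketch of passing them through the Replicate Python client. The model identifier, version, and all values shown are illustrative assumptions; only the parameter names (prompt, max_length, temperature, top_p, repetition_penalty, seed) come from this page.

import replicate  # requires the REPLICATE_API_TOKEN environment variable to be set

# Hypothetical model identifier; substitute the actual owner/name:version string.
output = replicate.run(
    "replicate/vicuna-13b:<version>",
    input={
        "prompt": "Write a poem about open source machine learning.",
        "max_length": 500,
        "temperature": 0.75,
        "top_p": 1,
        "repetition_penalty": 1,
        "seed": 42,
    },
)

# Language models on Replicate stream output as chunks of text; join them into one string.
print("".join(output))
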
Language models

vicuna-chat

Open-source chatbot based on LLaMA-13B, developed by fine-tuning LLaMA-13B on user-shared conversations collected from ShareGPT. LLaMA is a new open-source language model from Meta Research that performs as well as comparable closed-source models. Using GPT-4 to evaluate model outputs, the developers of Vicuna-13B found that it not only outperforms comparable models such as Stanford Alpaca, but also reaches 90% of the quality of OpenAI's ChatGPT and Google Bard.

Model result

Opensource machine learning, oh how you shine,
With your algorithms and code, you make my heart entwine.
Your libraries and frameworks, they make my work a breeze,
I can train my models with ease.
Your community of developers, they are so kind and giving,
They share their knowledge and insights, and help me to living.
With you, open source