Ask a large language model (LLM) any question you want about a character vector of text, or about the text returned by search_text().
Usage
llm(
  text,
  query,
  text_col = "text",
  model = llm_model(),
  maxTokens = 1024,
  temperature = 0.5,
  top_p = 0.95,
  seed = sample(1e+06:9999999, 1),
  API_KEY = Sys.getenv("GROQ_API_KEY")
)

Arguments
- text
 The text to send to the LLM (vector of strings, or data frame with the text in a column)
- query
 The query to ask of the LLM
- text_col
 The name of the text column if text is a data frame
- model
 The LLM model name (see llm_model_list())
- maxTokens
 The maximum number of completion tokens returned per query
- temperature
 Controls randomness in responses. Lower values make responses more deterministic. Recommended range: 0.5-0.7 to prevent repetitive or incoherent output. Valid values range from 0 (inclusive) to 2 (exclusive)
- top_p
 Nucleus sampling threshold (between 0 and 1). It is generally recommended to alter this or temperature, but not both
- seed
 Set for reproducible responses
- API_KEY
 Your API key for the LLM service
Details
You will need to get your own API key from https://console.groq.com/keys. To avoid having to type it out each time, add it to your .Renviron file in the following format (you can use usethis::edit_r_environ() to open the .Renviron file):
GROQ_API_KEY="key_value_asdf"
See https://console.groq.com/docs for more information.
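If you only need the key for the current R session, a one-off alternative to editing .Renviron is to set the environment variable directly with base R's Sys.setenv() (the value below reuses the placeholder key from above; substitute your real key):

```r
# Set the Groq API key for this session only (not persisted across restarts).
# llm() picks it up via its default argument API_KEY = Sys.getenv("GROQ_API_KEY").
Sys.setenv(GROQ_API_KEY = "key_value_asdf")

# Confirm the key is visible to the session
Sys.getenv("GROQ_API_KEY")
```

Keys set this way are lost when the session ends, which is why .Renviron is the recommended long-term option.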
Examples
# \donttest{
  text <- c("hello", "number", "ten", 12)
  query <- "Is this a number? Answer only 'TRUE' or 'FALSE'"
  is_number <- llm(text, query)
#> You have 499999 of 500000 requests left (reset in 172.799999ms) and 299949 of 300000 tokens left (reset in 10.2ms).
  is_number
#>     text answer       time tokens
#> 1  hello  FALSE 0.01215060     52
#> 2 number   TRUE 0.02051176     52
#> 3    ten  FALSE 0.01232082     52
#> 4     12   TRUE 0.02013553     52
# }
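The Arguments section notes that text may also be a data frame, with text_col naming the column to read. A minimal sketch of that calling pattern (the data frame, column name, and query here are illustrative; this assumes the same row-per-input return shape shown above, and a valid GROQ_API_KEY must be set for the call to run):

```r
# \donttest{
  # Hypothetical data frame input: the text lives in a column named "comment"
  reviews <- data.frame(
    id      = 1:3,
    comment = c("refund please", "great service", "item arrived broken")
  )
  query <- "Is this a complaint? Answer only 'TRUE' or 'FALSE'"

  # text_col tells llm() which column holds the text; a fixed seed and a low
  # temperature make the classification more reproducible and deterministic
  is_complaint <- llm(reviews, query,
                      text_col = "comment",
                      temperature = 0.5,
                      seed = 42)
# }
```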
