Selene’s evaluations can help you generate valuable insights at several stages of your LLM development lifecycle. Some of the most common ways to use Selene evals are:

  1. Model selection: You can compare the performance of different models and choose the right one for your use case.

  2. Prompt engineering: You can run A/B tests on your prompts and learn which ones work best (see the first sketch after this list).

  3. Context improvement: You can use Selene critiques to identify gaps in your current context and take steps to close them.

  4. Guardrailing: Using Selene in production, you can prevent your LLM from inadvertently sending out low-quality or harmful outputs.

    You can use Selene Mini as your guardrail to minimize latency!

  5. Improving your LLM outputs: You can go one step further and improve your LLM outputs on the fly! You can achieve this by feeding Selene’s critique back to your LLM and regenerating the output live (see the second sketch after this list).

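For prompt A/B tests (use case 2), one simple setup is to run both prompt variants over the same set of test inputs and compare their average Selene scores. The sketch below is illustrative only: `selene_score` and `generate` are hypothetical wrappers around your own Selene and LLM client calls, and the 1–5 scoring scale is an assumption, not a fixed Selene convention.

```python
# A minimal sketch of prompt A/B testing with Selene scores.
# `selene_score` and `generate` are hypothetical stand-ins for your
# actual Selene and LLM client calls; replace them with your own code.

from statistics import mean


def selene_score(user_input: str, response: str) -> int:
    """Hypothetical wrapper around your Selene eval call; assumed to return a 1-5 score."""
    raise NotImplementedError("Replace with your Selene client call.")


def generate(prompt: str, user_input: str) -> str:
    """Hypothetical wrapper around your LLM call."""
    raise NotImplementedError("Replace with your LLM client call.")


def compare_prompts(prompt_a: str, prompt_b: str, test_inputs: list[str]) -> None:
    """Score both prompt variants on the same inputs and report the averages."""
    scores_a = [selene_score(x, generate(prompt_a, x)) for x in test_inputs]
    scores_b = [selene_score(x, generate(prompt_b, x)) for x in test_inputs]
    print(f"Prompt A average score: {mean(scores_a):.2f}")
    print(f"Prompt B average score: {mean(scores_b):.2f}")
```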
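For use cases 4 and 5, the flow is typically: generate a response, score it with Selene, and either block it or regenerate it with the critique included. Below is a minimal sketch of that loop under stated assumptions: `SeleneEvaluation`, `selene_critique`, and `generate` are hypothetical placeholders for your own Selene (or Selene Mini) and LLM client calls, and the 1–5 threshold is an assumption rather than a fixed Selene convention.

```python
# A minimal sketch of using Selene as a guardrail (use case 4) and regenerating
# from its critique (use case 5). All names here are hypothetical stand-ins
# for your own Selene and LLM client code.

from dataclasses import dataclass

PASS_THRESHOLD = 4  # hypothetical pass mark on an assumed 1-5 scale


@dataclass
class SeleneEvaluation:
    score: int      # assumed 1-5 quality score
    critique: str   # Selene's written critique of the response


def selene_critique(prompt: str, response: str) -> SeleneEvaluation:
    """Hypothetical wrapper around your Selene (or Selene Mini) call."""
    raise NotImplementedError("Replace with your Selene client call.")


def generate(prompt: str) -> str:
    """Hypothetical wrapper around your production LLM call."""
    raise NotImplementedError("Replace with your LLM client call.")


def guarded_generate(prompt: str, max_attempts: int = 3) -> str:
    """Only return responses that clear the guardrail; otherwise feed the
    critique back to the LLM and regenerate, falling back to a safe message."""
    response = generate(prompt)
    for attempt in range(max_attempts):
        evaluation = selene_critique(prompt, response)
        if evaluation.score >= PASS_THRESHOLD:
            return response  # passed the guardrail
        if attempt == max_attempts - 1:
            break  # out of attempts; don't send a low-quality output
        # Use case 5: regenerate on the fly using Selene's critique.
        response = generate(
            f"{prompt}\n\nYour previous answer was rejected for this reason:\n"
            f"{evaluation.critique}\n\nPlease write an improved answer."
        )
    return "Sorry, I can't provide a reliable answer to that right now."
```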
Let us know if there are other ways we can help you build high-quality LLM products.