# prompt-evaluation

Here are 15 public repositories matching this topic...

A Streamlit web app that uses a Groq-powered LLM (Llama 3) as an impartial judge for evaluating and comparing two model outputs. Supports custom criteria and presets such as creativity and brand tone, and returns structured scores, explanations, and a winner. Built end-to-end with Python, the Groq API, and Streamlit.

  • Updated Nov 24, 2025
  • Python
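
As a rough illustration of the LLM-as-judge pattern this app describes, here is a minimal sketch assuming the `groq` Python SDK and a Groq-hosted Llama 3 model. The prompt template, model id, scoring schema, and function names are placeholder assumptions, not the repository's actual code.

```python
# Minimal LLM-as-judge sketch (hypothetical, not the repository's code).
# Assumes the `groq` Python SDK and a GROQ_API_KEY environment variable.
import json
import os

from groq import Groq

JUDGE_PROMPT = """You are an impartial judge. Compare two responses to the same prompt.
Criteria: {criteria}

Prompt: {prompt}
Response A: {response_a}
Response B: {response_b}

Reply with JSON only: {{"score_a": 1-10, "score_b": 1-10, "winner": "A" or "B", "explanation": "..."}}"""


def judge(prompt: str, response_a: str, response_b: str,
          criteria: str = "creativity, clarity") -> dict:
    client = Groq(api_key=os.environ["GROQ_API_KEY"])
    completion = client.chat.completions.create(
        model="llama3-70b-8192",  # model id is an assumption; any Groq-hosted Llama 3 model works
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            criteria=criteria, prompt=prompt,
            response_a=response_a, response_b=response_b)}],
        temperature=0.0,  # keep the judge deterministic
    )
    # The model is instructed to return JSON only; a production app would validate this.
    return json.loads(completion.choices[0].message.content)


if __name__ == "__main__":
    result = judge("Write a tagline for a coffee shop.",
                   "Wake up and smell the possibilities.",
                   "We sell coffee.")
    print(result["winner"], result["explanation"])
```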

A hybrid machine learning system for scoring LLM prompts. Features a BERT-based gatekeeper for structural validation and an LLM-based classifier to verify semantic intent, delivering consistent empirical metrics for prompt engineering.

  • Updated Dec 17, 2025
  • Jupyter Notebook
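
A minimal sketch of that two-stage gate-then-classify idea, assuming Hugging Face `transformers` for the BERT gatekeeper; the checkpoint, threshold, and score weighting below are placeholder assumptions rather than the repository's implementation.

```python
# Hypothetical two-stage prompt scorer: structural gatekeeper first, semantic classifier second.
# Checkpoint, threshold, and weights are assumptions, not the repository's configuration.
from transformers import pipeline

# Stage 1: a BERT-style gatekeeper checks structural quality.
# Placeholder checkpoint; a real system would use a model fine-tuned on good/bad prompt structure.
gatekeeper = pipeline("text-classification", model="bert-base-uncased")


def score_prompt(prompt: str, llm_classify) -> dict:
    """Combine a structural gate score with a semantic intent score."""
    gate = gatekeeper(prompt)[0]  # {"label": ..., "score": ...}
    if gate["score"] < 0.5:       # assumed threshold: reject structurally weak prompts early
        return {"passed_gate": False, "score": 0.0, "reason": "failed structural validation"}

    # Stage 2: `llm_classify` is any callable mapping a prompt to a 0-1 intent-alignment
    # score, e.g. an API-backed LLM judge like the one sketched above.
    semantic = llm_classify(prompt)
    return {"passed_gate": True,
            "score": 0.5 * gate["score"] + 0.5 * semantic,
            "reason": "ok"}


if __name__ == "__main__":
    # Stub semantic classifier for demonstration only.
    print(score_prompt("Summarize the attached report in three bullet points.",
                       llm_classify=lambda p: 0.9))
```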
