LLM Playground

A platform that empowers users to explore, compare, and experiment with multiple AI models in one place — bridging AI literacy with hands-on creativity.

Project type: End-to-end web app design + development (Vibe Coding + AI Integration)

Role: Product Designer + Front-End Developer (designed, prototyped, and implemented core UI/UX and interaction flow)

Industry: Generative AI, Product Design Tools

Tools: React • Next.js • Flask • Tailwind CSS • Windsurf • Figma Make AI • Vibe Coding • OpenAI / Anthropic / Groq / Gemini APIs

Project Overview

LLM Playground is an interactive web application that allows users to chat with, test, and compare multiple large language models (LLMs) from providers such as OpenAI, Anthropic, Groq, and Google (Gemini) — all in one clean, dynamic interface.

The goal was to unify the fragmented prompt-testing process into a single experience that’s fast, accessible, and insightful — blending modern design with intelligent functionality.

This project represents my end-to-end ownership — from product ideation and UX design to development, evaluation, and deployment — built through a vibe coding workflow using tools like Windsurf to accelerate iteration and design feedback loops.

Problem

Developers and researchers testing generative AI often juggle multiple LLM playgrounds (OpenAI, Claude, Gemini, etc.) with inconsistent UIs, APIs, and parameters.
This creates friction:

  • Constant context switching between tabs and models

  • No unified parameter control (temperature, max tokens)

  • Limited visibility for cross-model evaluation

  • Redundant code and prompt duplication
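The "no unified parameter control" pain point comes down to each provider naming the same knobs differently. As a minimal sketch of how one shared parameter schema could be translated into each provider's native argument names (all defaults and mappings here are illustrative, not the project's actual code):

```python
# Illustrative sketch: one shared parameter schema, renamed per provider.
# Defaults and key mappings are assumptions for demonstration.

SHARED_DEFAULTS = {"temperature": 0.7, "max_tokens": 1024}

# How each provider spells the shared keys (e.g. Gemini's SDK uses
# max_output_tokens instead of max_tokens).
PARAM_MAP = {
    "openai":    {"temperature": "temperature", "max_tokens": "max_tokens"},
    "anthropic": {"temperature": "temperature", "max_tokens": "max_tokens"},
    "groq":      {"temperature": "temperature", "max_tokens": "max_tokens"},
    "gemini":    {"temperature": "temperature", "max_tokens": "max_output_tokens"},
}

def build_params(provider: str, **overrides) -> dict:
    """Merge user overrides onto shared defaults, then rename the keys
    for the chosen provider."""
    merged = {**SHARED_DEFAULTS, **overrides}
    mapping = PARAM_MAP[provider]
    return {mapping[key]: value for key, value in merged.items()}
```

With a layer like this, the UI exposes a single set of sliders and the translation happens once, per request, instead of being duplicated across provider-specific code paths.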

Solution

LLM Playground consolidates all major AI providers into one seamless environment.
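Consolidating providers reduces, at its core, to a single dispatch layer that routes a message to whichever client is currently selected. A minimal sketch (the registry and the callable-client convention are hypothetical, not real SDK signatures):

```python
# Illustrative sketch: route one chat message to the selected provider.
# A "client" here is any callable taking a prompt and returning text,
# so real SDK wrappers and test stubs are interchangeable.

def chat(provider: str, message: str, clients: dict) -> str:
    """Send `message` to the client registered under `provider`."""
    try:
        client = clients[provider]
    except KeyError:
        raise ValueError(f"Unknown provider: {provider!r}")
    return client(message)

# Stub registry for demonstration; real wrappers would call each SDK.
stub_clients = {"openai": lambda m: f"[openai] {m}"}
```

Because switching providers only changes a dictionary lookup, the rest of the UI (chat history, parameter panel, evaluator) never needs to know which backend is active.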

Key capabilities include:

  • Instant switching between LLM providers (OpenAI, Anthropic, Groq, Gemini)

  • Adjustable parameters for creativity and response length

  • Built-in LLM-as-a-Judge evaluator for comparing response quality

  • A dynamic welcome screen with personalized greetings and starter prompts

  • An elegant, minimal UI optimized for clarity and accessibility
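The LLM-as-a-Judge capability can be sketched as a third model scoring two candidate answers to the same prompt. The template, scoring scale, and verdict format below are assumptions chosen for illustration, not the app's actual implementation:

```python
# Illustrative sketch of an LLM-as-a-Judge step: build a judging prompt
# for a third model, then parse its verdict. Template and verdict
# convention are hypothetical.

JUDGE_TEMPLATE = """You are an impartial evaluator.
Prompt: {prompt}

Answer A ({model_a}): {answer_a}
Answer B ({model_b}): {answer_b}

Rate each answer from 1-10 for accuracy, clarity, and helpfulness,
then end with a line "VERDICT: A", "VERDICT: B", or "VERDICT: TIE"."""

def build_judge_prompt(prompt: str, a: tuple, b: tuple) -> str:
    """`a` and `b` are (model_name, answer_text) pairs."""
    return JUDGE_TEMPLATE.format(
        prompt=prompt,
        model_a=a[0], answer_a=a[1],
        model_b=b[0], answer_b=b[1],
    )

def parse_verdict(judge_reply: str) -> str:
    """Pull the final VERDICT line out of the judge model's reply."""
    for line in reversed(judge_reply.strip().splitlines()):
        if line.startswith("VERDICT:"):
            return line.split(":", 1)[1].strip()
    return "UNPARSED"
```

Keeping the judge's output machine-parseable (a fixed VERDICT line) is what lets the UI render a comparison result instead of a wall of free-form critique.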

Projected Metrics

User Retention Rate:
Targeting 85% weekly retention within the first 3 months, with the LLM Parameters panel as the key driver: it encourages users to personalize and experiment with model behavior rather than treat the app as a one-off tool.

Try it out

https://llm-playground-demo.vercel.app/

Thank you for reading!