⚡ LLM Energy Testing Lab
Measure real-world energy consumption and carbon footprint of LLM interactions
🎯
Prompt Model
🧩
Middleware
💬
Response
📊
Tests
📚
Documentation
⚡
Run Energy Test
Stop
Enable Live Power (RAPL)
📋 Logs
🗑️ Clear
💾 Export
🎯
Prompt Model
⌄
Provider
🖥️ Local (Ollama)
☁️ Cloud (Groq)
Model Selection
Loading models...
📏 Context Length:
-
⚠️ RAPL power monitoring is disabled for cloud models; energy values are estimates only.
User Query
💡 This is your actual query - the only "original" tokens
🧩
Hidden Middleware
⌄
⚠️ Middleware Overhead:
Each injection adds hidden tokens to your query
Inputs (Prefill Stage)
System Prompt
Custom / empty
Clear
Conversation Context
Inject Last Turn
➕
Add Prompt Injection
Outputs (Decode Stage)
Include Thinking
Temperature
0.7
Max Tokens
💬
Response Output
Run a test to see the model response here.
📊
Token Analysis
⌄
Input
-
Overhead
-
Thinking
-
Output
-
Strategy
-
Total Tokens
-
Latency
-
Tokens/Sec
-
📈
Tests Overview
⌄
Clear History
⚡
Total Energy
-
Watt-hours per run
🎯
Intensity
-
Wh / 1k tokens
🚀
Speed
-
Total time (s)
🌱
Carbon Footprint
-
gCO2 emissions
Metric:
Energy
Speed
Carbon
Grid:
Global Average (0.445 kgCO2/kWh)
Show RAPL (Actual)
Show Estimates
Norm:
Per 1k Output Tokens
Per 1k Input Tokens
Per 1k Total Tokens
📊
Benchmark Estimation
⌄
Current Benchmark
-
View Details
Benchmark Source
Loading sources...
Baseline Benchmark
Loading benchmarks...
Apply
Compare Model Benchmarks
Loading model benchmarks...
Add Custom Benchmark
Add
Uses the selected benchmark and external baselines to estimate Wh/1000 e-tokens for this run and session.
⚡
Live Power (RAPL) Results
⌄
🔁
Batch Testing
⌄
Number of Runs
🎲
Reproducibility & Calibration
⌄
Seed (Optional)
Calibration
Not run yet.
Run Calibration
_
Live Measurement Logs
⌄