# Groq

groq.com · Inference · protocol: x402 · first seen 5h ago

Ultra-fast LLM inference (Llama, DeepSeek, Gemma)
- Endpoints: 4
- Uptime (24h): 0.0% (0 / 140 probes)
- Avg latency (24h): 191 ms (p90 234 ms)
- Total calls: 0 (0 unique payers)
- Healthy probes (last 24h): reported in 6-hour buckets
## Endpoints
Ordered active first, then healthy first, then cheapest first. Up to 200 endpoints shown.
| Endpoint | Method | Price | Network | Last status | Latency | Probed |
|---|---|---|---|---|---|---|
| https://api.venice.ai/api/v1/chat/completions | POST | 0.001000 USDC | eip155:8453 | 400 | 199 ms | 8m ago |
| https://api.venice.ai/api/v1/responses | POST | 0.001000 USDC | eip155:8453 | 400 | 195 ms | 8m ago |
| https://blockrun.ai/v1/models | GET | — | eip155:8453 | 404 | 203 ms | 8m ago |
| https://blockrun.ai/v1/chat/completions | POST | — | eip155:8453 | 404 | 227 ms | 8m ago |
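The ordering rule above (active first, healthy first, cheapest first) can be sketched as a composite sort key. The page does not define "active" or "healthy", so this sketch assumes active means a price is advertised and healthy means the last probe returned a 2xx status; the `Endpoint` type and `directory_order` function are illustrative names, not part of the directory's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Endpoint:
    url: str
    price_usdc: Optional[float]  # None when no price is advertised ("—" in the table)
    last_status: int             # HTTP status from the most recent probe

def directory_order(endpoints):
    """Sort endpoints: active first, healthy first, cheapest first.

    Assumptions (not defined on the page):
      active  = a price is advertised
      healthy = last probe status was 2xx
    """
    def key(e: Endpoint):
        active = e.price_usdc is not None
        healthy = 200 <= e.last_status < 300
        # Unpriced endpoints sort last within their group.
        price = e.price_usdc if e.price_usdc is not None else float("inf")
        # False sorts before True, so negate the flags we want first.
        return (not active, not healthy, price)
    return sorted(endpoints, key=key)

eps = [
    Endpoint("https://blockrun.ai/v1/models", None, 404),
    Endpoint("https://api.venice.ai/api/v1/chat/completions", 0.001, 400),
]
print(directory_order(eps)[0].url)
# → https://api.venice.ai/api/v1/chat/completions
```

Using a tuple as the sort key keeps the precedence explicit: each field only breaks ties left by the fields before it, matching the stated "active, then healthy, then price" ordering.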