Have something to say?

Tell us how we could make the product more useful to you.

Planned

Report takes a long time to load

There is an identified issue where the report takes a while to load, which makes it not feel like a native experience. We've investigated what may be causing it and are working on a potential fix. Here's a rough report from an LLM 🐢

Why It Feels Slow or Unnative

1. LLM Response Latency
GPT responses (even gpt-4o-mini) take 1–3 seconds, sometimes more. Supabase Edge Function overhead (auth, cold starts, retries) adds further time on top of that.

2. Async State Handling in Frontend
Your app likely shows "Generating detailed analysis..." with no progressive feedback. There's no intermediate UI (e.g. skeleton loaders, shimmer, partial content), so it feels stuck, and then abrupt when the data arrives.

3. Full Page Dependence
You're calling the LLM only on full page mount, and if anything fails, the whole section is blocked. That makes it brittle: not resilient to retries or degraded experiences.

4. Server Round-Trip
Even with local caching, each report requires the full chain: data lookup → Edge Function → OpenAI → back to frontend. On mobile or slow connections, this chain introduces friction.
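To make points 3 and 4 above concrete, here is a minimal sketch of how the report fetch could be made more resilient: retry the slow round-trip with backoff, and fall back to stale cached content instead of blocking the whole section. All names here (`withRetry`, `loadReport`, `fetchFresh`, `readCache`) are hypothetical, not the app's actual API.

```typescript
type Fetcher<T> = () => Promise<T>;

// Retry a flaky async call with exponential backoff (200ms, 400ms, 800ms, ...).
async function withRetry<T>(
  fn: Fetcher<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait before the next attempt so transient failures can clear.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Stale-while-revalidate style loader: prefer fresh data, but if the chain
// (Edge Function → OpenAI) keeps failing, serve the cached report marked stale
// rather than leaving the section blocked.
async function loadReport<T>(
  fetchFresh: Fetcher<T>,
  readCache: () => T | undefined
): Promise<{ data: T; stale: boolean }> {
  const cached = readCache();
  try {
    const data = await withRetry(fetchFresh);
    return { data, stale: false };
  } catch {
    if (cached !== undefined) return { data: cached, stale: true };
    throw new Error("report unavailable");
  }
}
```

With this shape the UI can render the stale report immediately (with a "refreshing" indicator) while the retry runs, instead of showing only a spinner for the full round-trip.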

drivevisor 8 months ago