There is a known issue where the report takes a while to load, so it doesn't feel like a native experience. We've investigated what may be causing it and are working on a potential fix.
Here's a rough report from an LLM:
🐢 Why It Feels Slow or Unnative
GPT responses (even gpt-4o-mini) take 1–3 seconds, sometimes more.
Supabase Edge Function overhead (auth, cold starts, retries) adds more time on top of that.
Your app likely shows "Generating detailed analysis..." with no progressive feedback.
There's no intermediate UI (e.g. skeleton loaders, shimmer, partial content) → feels stuck or abrupt when data arrives.
You're calling the LLM only on full page mount, and if anything fails, the whole section is blocked.
It feels brittle, with no retry logic or degraded fallback experience.
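One way to address the brittleness above is to wrap the Edge Function call in a retry-with-backoff helper that falls back to a degraded message instead of blocking the whole section. This is a minimal sketch only; names like `fetchAnalysis` and the fallback text are illustrative assumptions, not the app's real API.

```typescript
// Hypothetical sketch: retry the analysis call with exponential backoff,
// and degrade gracefully instead of blocking the section on failure.

type AnalysisResult = { text: string; degraded: boolean };

async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 300,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 300ms, 600ms, 1200ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

async function loadAnalysis(
  fetchAnalysis: () => Promise<string>, // illustrative name for the Edge Function call
): Promise<AnalysisResult> {
  try {
    const text = await withRetry(fetchAnalysis);
    return { text, degraded: false };
  } catch {
    // Degraded experience instead of a blocked section.
    return {
      text: "Detailed analysis is temporarily unavailable.",
      degraded: true,
    };
  }
}
```

With this shape, a transient Edge Function hiccup is retried silently, and only a persistent failure surfaces, and even then the rest of the page still renders.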
Even with local caching, each report requires:
Full data lookup → Edge Function → OpenAI → back to frontend
On mobile or slow connections, this chain introduces friction
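One way to shorten that chain on repeat views is a simple client-side cache keyed by report, so a recently generated report skips the lookup → Edge Function → OpenAI round trip entirely. A minimal sketch, assuming a `reportId` key and a `generateReport` function standing in for the full chain (both illustrative names, not the app's real API):

```typescript
// Hypothetical sketch: an in-memory cache in front of the full
// data lookup -> Edge Function -> OpenAI chain.

type CacheEntry = { value: string; fetchedAt: number };

const reportCache = new Map<string, CacheEntry>();
const MAX_AGE_MS = 5 * 60 * 1000; // serve cached copies for 5 minutes (assumed TTL)

async function getReport(
  reportId: string,
  generateReport: (id: string) => Promise<string>, // stands in for the full chain
  now: () => number = Date.now,
): Promise<{ value: string; fromCache: boolean }> {
  const hit = reportCache.get(reportId);
  if (hit && now() - hit.fetchedAt < MAX_AGE_MS) {
    // Fresh cached copy: no network chain at all.
    return { value: hit.value, fromCache: true };
  }
  const value = await generateReport(reportId);
  reportCache.set(reportId, { value, fetchedAt: now() });
  return { value, fromCache: false };
}
```

On mobile this turns the second and later opens of the same report into an instant render; only the first open pays for the full chain.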
Planned
Feature Request
8 months ago

drivevisor