Completed
In-app support via Featurebase widget
Will integrate Featurebase’s support widget to handle feedback and questions inside the app.

drivevisor 8 months ago
In Progress
Estimated costs are not accurate or consistent
Some reports show an initial figure, but the total does not add up. This may be due to a discrepancy between the software's calculation logic and the LLM's output.

drivevisor 8 months ago
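One way to catch these mismatches before a report is shown would be to recompute the total from the line items and compare it against the figure the LLM reported. A minimal sketch, assuming a hypothetical `ReportEstimate` shape (the field names here are illustrative, not the app's actual schema):

```typescript
// Hypothetical shape of a cost estimate returned by the LLM.
interface ReportEstimate {
  lineItems: { label: string; cost: number }[];
  reportedTotal: number; // total as stated by the LLM
}

// Recompute the total from the line items and flag any discrepancy
// beyond a small tolerance (LLM arithmetic is not reliable).
function reconcileTotal(
  est: ReportEstimate,
  tolerance = 0.01,
): { computedTotal: number; consistent: boolean } {
  const computedTotal = est.lineItems.reduce((sum, item) => sum + item.cost, 0);
  const consistent = Math.abs(computedTotal - est.reportedTotal) <= tolerance;
  return { computedTotal, consistent };
}
```

When `consistent` is false, the UI could fall back to displaying `computedTotal` instead of the LLM's figure, so the numbers on screen always add up.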
Planned
Report takes a long time to load
There is an identified issue where the report takes a while to load, which makes it not feel like a native experience. We’ve done some investigation into what may be causing it and are working on a potential fix. Here’s a rough report from an LLM:

🐢 Why it feels slow or un-native

1. LLM response latency — GPT responses (even gpt-4o-mini) take 1–3 seconds, sometimes more. Supabase Edge Function overhead (auth, cold starts, retries) adds further time.

2. Async state handling in the frontend — the app likely shows "Generating detailed analysis..." with no progressive feedback. There is no intermediate UI (e.g. skeleton loaders, shimmer, partial content), so the page feels stuck, then abrupt when the data arrives.

3. Full-page dependence — the LLM is called only on full page mount, and if anything fails, the whole section is blocked. This feels brittle: not resilient to retries or degraded experiences.

4. Server round-trip — even with local caching, each report requires: full data lookup → Edge Function → OpenAI → back to the frontend. On mobile or slow connections, this chain introduces friction.

drivevisor 8 months ago
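One way to soften the round-trip described above is a stale-while-revalidate cache on the client: a previously generated report paints instantly while a fresh copy loads in the background, and a network failure degrades to the cached copy instead of blocking the section. A rough sketch, where `fetchReport` is a stand-in for the Edge Function call (not the app's actual API):

```typescript
// In-memory cache keyed by report id; a real app might back this
// with localStorage/AsyncStorage so it survives reloads.
const cache = new Map<string, string>();

// Deliver any cached copy immediately via onStale (instant paint,
// no spinner), then refresh from the network and deliver the fresh
// copy via onFresh. On failure, keep showing the cached copy.
async function loadReport(
  id: string,
  fetchReport: (id: string) => Promise<string>, // stand-in for the Edge Function call
  onStale: (cached: string) => void,
  onFresh: (fresh: string) => void,
): Promise<void> {
  const cached = cache.get(id);
  if (cached !== undefined) onStale(cached);
  try {
    const fresh = await fetchReport(id);
    cache.set(id, fresh);
    onFresh(fresh);
  } catch {
    // Degraded experience: only surface the error if we have
    // nothing cached to show at all.
    if (cached === undefined) throw new Error("report unavailable");
  }
}
```

This also pairs naturally with a skeleton loader: show the skeleton only when `onStale` never fires, i.e. on the very first load of a report.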