# DevPulse
Developer Telemetry & AI Vibe Analytics
## Overview
A developer telemetry dashboard tracking cognitive load and productivity trends. DevPulse runs AI inference over commit narratives to compute a developer's sentiment score, burnout index, and overall "vibe", all wrapped in an industrial brutalist aesthetic.
## AI Pipeline Strategy
- Groq Inference: Ultra-fast analysis of commit batch data using llama-3.1-8b.
- Burnout Indexing: Formula estimating intensity vs. recovery per commit.
- Executive Directive: LLM compiles aggregated stats into a single industrial command summary.
- Rate-limit Backoff: Exponential retries built over Octokit and Groq integrations.
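The rate-limit backoff above can be sketched as a small async wrapper around any Octokit or Groq call. This is an illustrative sketch, not the project's actual implementation: the function name `withBackoff`, the retryable-error check, and the delay constants are all assumptions.

```javascript
// Hedged sketch of an exponential-backoff wrapper for rate-limited
// API calls. Predicate and delay values are illustrative assumptions.
async function withBackoff(fn, { retries = 4, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Treat HTTP 429 (the `status` field used by Octokit-style errors)
      // as retryable; rethrow anything else or once retries are exhausted.
      const rateLimited = err && err.status === 429;
      if (!rateLimited || attempt >= retries) throw err;
      // Exponential delay: 500 ms, 1 s, 2 s, 4 s, ...
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

A call site would then look like `await withBackoff(() => octokit.rest.repos.listCommits({ owner, repo }))`, with the same wrapper reused for the Groq inference request.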
## System Architecture & Telemetry Flow
A full-stack architecture built on Express 5 that orchestrates asynchronous GitHub integrations. JSON metrics endpoints are gated behind authenticated GitHub OAuth sessions (via Passport.js), persistent telemetry profiles are stored in a MongoDB Atlas cluster, and results are visualized in React.
```mermaid
sequenceDiagram
    participant UI as React UI 🖥️
    participant Server as Express Backend 💻
    participant Github as GitHub API 🐱
    participant AI as Groq (Llama 3) 🧠
    participant DB as MongoDB Atlas 🗄️
    UI->>Server: Request Telemetry (Token)
    Server->>Github: OAuth Session / Fetch Commits
    Github-->>Server: Return Commit History Batch
    Server->>AI: Send Commits for Inference (System Prompt)
    AI-->>Server: Returns Vibe/Burnout Multipliers
    Server->>DB: Store Analyzed Telemetry Profile
    Server-->>UI: Serve Aggregated JSON Payload
    UI->>UI: Render Recharts Data Visualization
```
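The aggregation step in the flow above — turning per-commit inference results into the JSON payload served to the UI — can be illustrated with a small pure helper. The field names (`vibe`, `burnout`, `avgVibe`, `burnoutIndex`) are assumptions about the payload shape, not the project's actual schema.

```javascript
// Illustrative aggregation of per-commit AI analyses into the summary
// payload served to the React UI. Field names are assumed, not actual.
function aggregateTelemetry(commitAnalyses) {
  if (commitAnalyses.length === 0) {
    return { commits: 0, avgVibe: 0, burnoutIndex: 0 };
  }
  const totals = commitAnalyses.reduce(
    (acc, c) => ({ vibe: acc.vibe + c.vibe, burnout: acc.burnout + c.burnout }),
    { vibe: 0, burnout: 0 }
  );
  return {
    commits: commitAnalyses.length,
    avgVibe: totals.vibe / commitAnalyses.length,
    burnoutIndex: totals.burnout / commitAnalyses.length,
  };
}
```

Keeping this step pure makes it trivial to unit-test independently of the GitHub, Groq, and MongoDB integrations.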
## Tech Highlights
### Core Engineering Challenges
- AI Strict Formatting: Constrained Llama 3 output to a deterministic JSON schema, resolving hallucinated object structures with defensive parsing strategies.
- API Backoffs: Implemented exponential retries to stay within the strict free-tier rate limits of the GitHub and Groq APIs.
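The strict-formatting defense above can be sketched as a parse-and-validate step on the raw model response. The expected keys (`sentiment`, `burnout`, `vibe`) are assumptions about the prompt contract, and `parseVibePayload` is a hypothetical helper name.

```javascript
// Defensive parsing of an LLM response: extract the JSON object, parse
// it, and validate the expected fields, returning null on any
// hallucinated structure. Key names are illustrative assumptions.
function parseVibePayload(raw) {
  // Tolerate prose the model may emit around the JSON object.
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) return null;
  let parsed;
  try {
    parsed = JSON.parse(match[0]);
  } catch {
    return null;
  }
  if (
    typeof parsed.sentiment !== 'number' ||
    typeof parsed.burnout !== 'number' ||
    typeof parsed.vibe !== 'string'
  ) {
    return null;
  }
  return parsed;
}
```

Returning `null` instead of throwing lets the caller fall back to a neutral default profile (or a retry) when the model drifts from the schema.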