Recent Blogs

InterviewPrep: An AI-powered mock interview platform

4/24/2025

TL;DR — In this post I unpack how I combined Next.js 14’s App Router, serverless Postgres via Neon, Drizzle ORM, Google Gemini, Pinecone vector search, Clerk authentication, AWS S3 and a sprinkle of shadcn/ui to ship InterviewPrep: a full-stack mock-interview playground that runs entirely on Vercel.

1. Why another interview app?

I started this project after a long week of back-to-back technical screens. Existing tools felt disjointed — one site for coding challenges like LeetCode® and HackerRank®, another for behavioural questions, and none that used my own résumé to ask context-aware questions (at least not for free). InterviewPrep grew out of that itch: a single place to generate customised, AI-driven interview sessions — behavioural, technical & live coding — in under 30 seconds.

2. High-level architecture


  ┌────────────┐            ┌────────────┐
  │  Next.js   │ API Routes │   Gemini   │
  │  (Vercel)  │───────────▶│    API     │
  └────┬───────┘            └─────┬──────┘
       │                          │
       │Drizzle (SQL Tags)        │
  ┌────▼───────┐          Vector  │
  │   Neon     │◀─────────Store───┘
  │ PostgreSQL │          (Pinecone)
  └────▲───────┘
       │  Presigned URLs
  ┌────┴───────┐
  │   AWS S3   │  ← résumés, recordings
  └────────────┘
        

3. Bootstrapping the project

  • pnpm create next-app interviewprep --ts --tailwind --eslint
  • pnpm i @clerk/nextjs @auth/drizzle-adapter drizzle-orm @neondatabase/serverless pg zod @pinecone-database/pinecone openai @aws-sdk/client-s3 @aws-sdk/s3-request-presigner @monaco-editor/react react-hook-form @tanstack/react-query tailwind-merge
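
For context, wiring Neon into Drizzle only takes a few lines. Here is a minimal sketch, assuming the serverless HTTP driver and an illustrative interviews table (the real schema has more tables and columns):

      // db.ts: minimal Neon + Drizzle wiring (the interviews table is illustrative)
      import { neon } from "@neondatabase/serverless";
      import { drizzle } from "drizzle-orm/neon-http";
      import { pgTable, serial, text, timestamp } from "drizzle-orm/pg-core";

      // Hypothetical table holding generated interview sessions
      export const interviews = pgTable("interviews", {
        id: serial("id").primaryKey(),
        userId: text("user_id").notNull(),       // Clerk user id
        jobRole: text("job_role").notNull(),
        questions: text("questions").notNull(),  // generated questions stored as a JSON string
        createdAt: timestamp("created_at").defaultNow(),
      });

      // The HTTP driver suits serverless deployments on Vercel
      const sql = neon(process.env.DATABASE_URL!);
      export const db = drizzle(sql);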

4. Core Features

4.1 Résumé Chat & Feedback (ResumeAI)

Flow: Upload your résumé PDF; it is split into chunks, embedded with OpenAI's embedding model, and the chunks are upserted into a Pinecone vector index.

Use case: Chat in a conversational UI that retrieves context from your résumé. It can answer questions about your experience, suggest improvements, and generate tailored résumé feedback on the fly based on your input.
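
Under the hood this is a standard retrieval-augmented generation loop. Here is a minimal sketch of the indexing and retrieval halves, assuming the PDF text has already been extracted and using placeholder index and field names:

      // resumeIndex.ts: sketch of chunking, embedding, and querying résumé text (names are illustrative)
      import OpenAI from "openai";
      import { Pinecone } from "@pinecone-database/pinecone";

      const openai = new OpenAI();              // reads OPENAI_API_KEY
      const pinecone = new Pinecone();          // reads PINECONE_API_KEY
      const index = pinecone.index("resumes");  // hypothetical index name

      // Naive fixed-size chunking; real code would split on sections/sentences
      function chunkText(text: string, size = 800): string[] {
        const chunks: string[] = [];
        for (let i = 0; i < text.length; i += size) chunks.push(text.slice(i, i + size));
        return chunks;
      }

      export async function indexResume(userId: string, resumeText: string) {
        const chunks = chunkText(resumeText);
        const { data } = await openai.embeddings.create({
          model: "text-embedding-3-small",
          input: chunks,
        });
        await index.upsert(
          data.map((d, i) => ({
            id: `${userId}-${i}`,
            values: d.embedding,
            metadata: { userId, text: chunks[i] },
          }))
        );
      }

      // Chat side: embed the question and pull the closest chunks as context for the LLM
      export async function retrieveContext(userId: string, question: string) {
        const { data } = await openai.embeddings.create({
          model: "text-embedding-3-small",
          input: question,
        });
        const res = await index.query({
          vector: data[0].embedding,
          topK: 5,
          filter: { userId },
          includeMetadata: true,
        });
        return (res.matches ?? []).map((m) => m.metadata?.text).join("\n");
      }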

4.2 AI-powered Mock Interviews

  1. Create Room: Users fill out a form with job role, description, and experience level, and can optionally upload or select a résumé. Gemini then generates 5 behavioural and 2 coding questions from these inputs (sketched below).
  2. Behavioural Round: Questions are presented one at a time. Users record answers via webcam with speech-to-text (STT); when they stop recording, the transcript is sent to Gemini for sentiment, STAR-format, and delivery analysis.
  3. Technical Round: Two code challenges in an embedded Monaco Editor. Users save each solution, then batch-submit for AI scoring (correctness, efficiency, style).

Upon completion, participants are redirected to a feedback page showing collapsible ratings and suggestions.
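
For step 1, the question generation boils down to one structured prompt to Gemini. Here is a minimal sketch, assuming the @google/generative-ai SDK; the prompt wording and JSON shape are illustrative, not the exact ones used:

      // generateQuestions.ts: sketch of the room-creation call (prompt and shape are illustrative)
      import { GoogleGenerativeAI } from "@google/generative-ai";

      const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
      const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

      export async function generateQuestions(role: string, description: string, experience: string) {
        const prompt =
          `You are interviewing a candidate for a ${role} position (${experience} experience).\n` +
          `Job description: ${description}\n` +
          `Return JSON with "behavioural" (5 questions) and "coding" (2 challenges).`;

        const result = await model.generateContent(prompt);
        // Assumes the model returns plain JSON; production code should strip code fences and validate
        return JSON.parse(result.response.text()) as { behavioural: string[]; coding: string[] };
      }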

4.3 Peer-to-Peer Human Interviews

Flow: Create a room, join via lobby cards, hop on a video call with a shared résumé view, then switch to a collaborative code editor (picture-in-picture) for live technical practice.
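
The shared résumé (and recorded answers) live in S3 and reach the browser through short-lived presigned URLs, as shown in the architecture diagram. A minimal sketch with placeholder bucket and key names:

      // s3Urls.ts: presigned GET/PUT URLs for résumés and recordings (bucket and keys are placeholders)
      import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
      import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

      const s3 = new S3Client({ region: process.env.AWS_REGION });

      // Let a room participant view a résumé for ten minutes
      export function resumeDownloadUrl(key: string) {
        const cmd = new GetObjectCommand({ Bucket: "interviewprep-uploads", Key: key });
        return getSignedUrl(s3, cmd, { expiresIn: 600 });
      }

      // Let the browser upload a webcam recording directly, without proxying through the API route
      export function recordingUploadUrl(key: string) {
        const cmd = new PutObjectCommand({ Bucket: "interviewprep-uploads", Key: key, ContentType: "video/webm" });
        return getSignedUrl(s3, cmd, { expiresIn: 600 });
      }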

5. What’s next

  • Voice-only interview mode (Twilio Programmable Voice)
  • Browser-based code runner (Docker + Firecracker)
  • Public dashboards for cohort progress
  • Advanced AI feedback enhancements
  • Improved STT accuracy and robustness

Contributions welcome—GitHub Repo.

Read More →

KonTask: Revitalizing Downtown Greencastle With an AI-Powered Local-Services Marketplace

4/28/2025

How one student-led startup plans to bridge the gap between people who need help and the neighbors who can provide it.

The Problem & Why It Matters — Right Now

Downtown Greencastle hums with unmet potential. Residents struggle to find reliable providers, while qualified locals can’t reach customers.

Our survey (102 responses) revealed:

  • 96% of service seekers couldn’t find help
  • 100% of providers struggled to find clients

The Data Driving Our Design

  • 50% DePauw students — fast, tech-friendly booking
  • 25% Residents — trusted, consistent providers
  • 20% Freelancers — visibility & steady work
  • 5% Visitors — one-off convenience

Enter KonTask

Mobile-first, AI-powered matching for plumbing, nail art, tutoring, and more.

Competitor Analysis

  • Local focus: Only KonTask zeroes in on Greencastle’s core streets.
  • Real-time booking: Slot selection & instant confirm vs. delays.
  • AI matching: ML-based provider recommendations.
  • All-in-one: From repairs to beauty in one app.

How It Works

  1. Search: “Fix leaky sink.”
  2. Match: AI ranks providers by location, skills, and ratings (see the sketch after this list).
  3. Accept: Two-way confirm & secure Stripe escrow.
  4. Do & Rate: 1–5 stars feed future matches.
  5. Analytics: Trends spotlight hot services.
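
To make step 2 concrete, the matching idea reduces to a weighted score over skill overlap, ratings, and proximity. The sketch below is illustrative only; the weights and fields are assumptions, not KonTask's actual model:

      // match.ts: illustrative provider-ranking sketch (weights and fields are assumptions)
      interface Provider {
        id: string;
        skills: string[];
        avgRating: number;   // 1-5 stars from past jobs
        distanceKm: number;  // distance from the requested address
      }

      export function rankProviders(requestedSkills: string[], providers: Provider[]): Provider[] {
        const score = (p: Provider) => {
          const skillOverlap =
            requestedSkills.filter((s) => p.skills.includes(s)).length / (requestedSkills.length || 1);
          const rating = p.avgRating / 5;            // normalise to 0-1
          const proximity = 1 / (1 + p.distanceKm);  // closer providers score higher
          return 0.5 * skillOverlap + 0.3 * rating + 0.2 * proximity;
        };
        return [...providers].sort((a, b) => score(b) - score(a));
      }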

Business Model & Financial Outlook

Service Seekers

  • Free plan: standard search, basic protection
  • Premium $3.49/mo: priority matching & full protection

Service Providers

  • Free listing: appear in results
  • Premium $15.99/mo: boost ranking & analytics

Additional Revenue

  • 5% transaction fee
  • 10% protection fee
  • $5k/year in-app advertising

Financial Milestones

Breakeven at roughly 700 users (about two years in); revenue grows from $25.6k to $185k by Year 3; margin improves from –74% to 51%.

Roadmap

  • 0–6 mo: MVP launch & beta
  • 6–24 mo: user growth & new features
  • 24+ mo: AI enhancements & profitability

How KonTask Revitalizes Downtown Greencastle

  1. KonTask → centralized service platform
  2. Users → find & book services, driving demand
  3. Small Businesses → increased customers & revenue
  4. Vacant Properties → repurposed for local businesses
  5. Downtown Engagement → boosted foot traffic & vibrancy

Read More →

MusicFinder: An AI-powered, mood-based Spotify companion

4/29/2025

TL;DR — In this post I break down how MusicFinder fuses a fine-tuned T5 transformer, OpenAI’s conversational magic, and Spotify’s rich catalog into a single pipeline that turns “Feeling like dancing in the rain” into an instant, playable playlist. Everything is containerised, CI-tested, and ready to scale on Google Cloud Run.

1. Why another music recommender?

Every streaming service claims to “get” you, yet Friday-night playlists still ignore the nuance between melancholy-but-hopeful and straight-up sad. Typical recommenders lean on sparse listening history; MusicFinder starts with how you feel right now — captured in natural language — and uses a purpose-built NLP stack to score tracks on emotional fit.

Goal: Cut the delta between mood ↔ music from minutes of browsing to a single chat prompt.

2. High-level architecture


      ┌────────────────────────────┐
      │     React Frontend         │
      └───────────┬────────────────┘
                  │ text prompt
                  ▼
      ┌───────────┴───────────────┐
      │  Node.js API  (Express)   │
      └───────┬──────┬────────────┘
              │      │
              │      ├─▶  Spotify OAuth2 ✔
              │      │
              │      └─▶  OpenAI GPT-4 🎵
              ▼
      ┌────────────┐    HTTP    ┌───────────────────┐
      │ FastAPI svc│──────────▶│  T5 Emo-Classifier │
      └────────────┘  (JSON)   └───────────────────┘
              ▲
              │    Track search / like / save
              └───────────▶  Spotify Web API
          

3. Core features

  • Mood-to-playlist chat: type how you feel, get back a playable playlist
  • Fine-tuned T5 classifier that maps free-text prompts to emotion labels
  • GPT-4 track suggestions returned as structured JSON
  • Spotify OAuth2 login, track search, and one-tap save to Liked Songs
  • Containerised services with CI on every push, deployed to Google Cloud Run

4. Tech stack in one glance

  • Frontend: React
  • API: Node.js (Express)
  • ML microservice: FastAPI serving a fine-tuned T5 emotion classifier
  • AI: OpenAI GPT-4 for track suggestions
  • Music: Spotify Web API (OAuth2, search, Liked Songs)
  • Ops: Docker, GitHub Actions CI, Google Cloud Run

5. “One-click” workflow

  1. You type: “Need mellow-study vibes with a sprinkle of hope.”
  2. T5 labels it “calm, optimistic”.
  3. GPT-4 returns JSON: [ { title, artist, album } × 10 ].
  4. Node.js searches each track on Spotify and filters out unavailable items (sketched below).
  5. The frontend renders playable cards — hit ▶️ and focus on your essay.
  6. Tap ❤️? The track is saved to Liked Songs instantly.

Round-trip latency ≈ 900 ms (cold) / 250 ms (warm) in us-central1.
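
Inside the Node API, that pipeline looks roughly like the sketch below. The classifier endpoint, prompt wording, and JSON shapes are assumptions; the OpenAI and Spotify calls are trimmed to their essentials:

      // playlist.ts: sketch of the prompt -> mood -> tracks pipeline (endpoint, prompt, and shapes are assumptions)
      import OpenAI from "openai";

      const openai = new OpenAI(); // reads OPENAI_API_KEY

      export async function buildPlaylist(prompt: string, spotifyToken: string) {
        // 1. Ask the FastAPI/T5 microservice for mood labels (hypothetical endpoint)
        const moodRes = await fetch("http://localhost:8000/classify", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ text: prompt }),
        });
        const { labels } = await moodRes.json(); // e.g. ["calm", "optimistic"]

        // 2. Ask GPT-4 for ten matching tracks as structured JSON
        const completion = await openai.chat.completions.create({
          model: "gpt-4",
          messages: [{
            role: "user",
            content:
              `Suggest 10 songs matching the moods ${labels.join(", ")}. ` +
              `Reply only with a JSON array of { "title": string, "artist": string, "album": string }.`,
          }],
        });
        const suggestions = JSON.parse(completion.choices[0].message.content ?? "[]");

        // 3. Resolve each suggestion on Spotify; drop anything that cannot be found
        const tracks = [];
        for (const s of suggestions) {
          const q = encodeURIComponent(`track:${s.title} artist:${s.artist}`);
          const res = await fetch(`https://api.spotify.com/v1/search?q=${q}&type=track&limit=1`, {
            headers: { Authorization: `Bearer ${spotifyToken}` },
          });
          const data = await res.json();
          if (data.tracks?.items?.[0]) tracks.push(data.tracks.items[0]);
        }
        return tracks;
      }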

6. Local setup (15 min)


      git clone https://github.com/itsnothuy/playlistalchemy.git
      cd playlistalchemy
      
      # ML microservice
      cd backend_python
      pip install -r requirements.txt
      uvicorn app:app --host 0.0.0.0 --port 8000
      
      # Node API
      cd ../backend
      npm install
      cp .env.example .env   # add your keys
      npm start
      
      # React client
      cd ../frontend
      npm install
      npm start             # http://localhost:3000
          

7. Continuous delivery in action

Every push to main triggers:


      - name: Test & Build
        run: |
          cd backend && npm test
          cd ../frontend && npm run lint && npm run build
      
      - name: Docker Publish
        uses: google-github-actions/deploy-cloudrun@v2
          

8. Roadmap

  • 🎤 Voice emotion (whisper-small on-device → T5)
  • 📈 Adaptive taste modelling via Spotify audio features
  • 📲 React Native share-sheet integration
  • 🌐 Polyglot moods (Spanish, Vietnamese, Japanese)

9. Wrap-up

Building MusicFinder proved that with the right abstractions — LLMs for creativity, transformers for classification, and serverless for ops — any solo dev can ship a production-grade, emotionally intelligent recommender in weeks, not months. Curious? ⭐ the GitHub repo, open an issue, or drop a PR.

— Huy Tran, full-stack builder & AI tinkerer

Read More →