4/24/2025
TL;DR — In this post I unpack how I combined Next.js 14’s App Router, serverless Postgres via Neon, Drizzle ORM, Google Gemini, Pinecone vector search, Clerk authentication, AWS S3 and a sprinkle of shadcn/ui to ship InterviewPrep: a full-stack mock-interview playground that runs entirely on Vercel.
I started this project after a long week of back-to-back technical screens. Existing tools felt disjointed: one site for coding challenges like LeetCode® and HackerRank®, another for behavioural questions, and none that used my own résumé to ask context-aware questions, at least not for free. InterviewPrep grew out of that itch: a single place to generate customised, AI-driven interview sessions (behavioural, technical, and live coding) in under 30 seconds.
┌────────────┐ ┌────────────┐
│ Next.js │ API Routes │ Gemini │
│ (Vercel) │───────────▶│ API │
└────┬───────┘ └─────┬──────┘
│ │
│Drizzle (SQL Tags) │
┌────▼───────┐ Vector │
│ Neon │◀─────────Store───┘
│ PostgreSQL │ (Pinecone)
└────▲───────┘
│ Presigned URLs
┌────┴───────┐
│ AWS S3 │ ← résumés, recordings
└────────────┘
pnpm create next-app interviewprep --ts --tailwind --eslint
pnpm i @clerk/nextjs @auth/drizzle-adapter drizzle-orm @neondatabase/serverless pg zod @pinecone-database/pinecone openai @aws-sdk/client-s3 @aws-sdk/s3-request-presigner @monaco-editor/react react-hook-form @tanstack/react-query tailwind-merge
Flow: Upload your résumé PDF, which is split into chunks and embedded via OpenAI’s embeddings. The chunks are stored in Pinecone as a vector index.
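The chunking step above can be sketched as a small pure function. This is an illustrative helper, not the actual InterviewPrep code; the window and overlap sizes are assumptions chosen for the example.

```typescript
// Split résumé text into overlapping character windows before embedding.
// Overlap keeps a sentence that straddles a boundary retrievable from
// either neighbouring chunk.
export function chunkText(
  text: string,
  chunkSize = 500,
  overlap = 100
): string[] {
  if (overlap >= chunkSize) throw new Error("overlap must be < chunkSize");
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap; // advance, keeping `overlap` chars of context
  }
  return chunks;
}
```

Each chunk is then embedded and upserted into the Pinecone index keyed by user ID, so retrieval stays scoped to your own résumé.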
Use case: Chat in a conversational UI that retrieves context from your résumé. It can answer questions about your experience, suggest improvements, and generate tailored résumé feedback on the fly based on your input.
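Under the hood, "retrieves context from your résumé" is a nearest-neighbour search. The toy version below ranks stored chunk embeddings by cosine similarity against the query embedding, mimicking locally what the Pinecone query does server-side; the `Chunk` shape is invented for illustration.

```typescript
type Chunk = { id: string; embedding: number[]; text: string };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks most similar to the query embedding; these become
// the context prepended to the chat prompt.
export function topK(query: number[], chunks: Chunk[], k = 3): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```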
Upon completion, participants are redirected to a feedback page showing collapsible ratings and suggestions.
Flow: Create a room, join via lobby cards, engage in a video call alongside a shared résumé, then switch to the collaborative code editor (picture-in-picture) for live technical practice.
Contributions welcome—GitHub Repo.
4/28/2025
How one student-led startup plans to bridge the gap between people who need help and the neighbors who can provide it.
Downtown Greencastle hums with unmet potential. Residents struggle to find reliable providers, while qualified locals can’t reach customers.
Our survey (102 responses) revealed:
Mobile-first, AI-powered matching for plumbing, nail art, tutoring, and more.
Breakeven at ~700 users (year 2); revenue grows from $25.6k to $185k by year 3; net margin improves from –74% to 51%.
4/29/2025
TL;DR — In this post I break down how MusicFinder fuses a fine-tuned T5 transformer, OpenAI’s conversational magic, and Spotify’s rich catalog into a single pipeline that turns “Feeling like dancing in the rain” into an instant, playable playlist. Everything is containerised, CI-tested, and ready to scale on Google Cloud Run.
Every streaming service claims to “get” you, yet Friday-night playlists still ignore the nuance between melancholy-but-hopeful and straight-up sad. Typical recommenders lean on sparse listening history; MusicFinder starts with how you feel right now — captured in natural language — and uses a purpose-built NLP stack to score tracks on emotional fit.
Goal: Cut the delta between mood ↔ music from minutes of browsing to a single chat prompt.
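"Scoring tracks on emotional fit" can be pictured as a simple overlap between the labels the classifier assigns to your prompt and per-track mood tags. The tag names and the scoring rule here are invented for the example; the real pipeline uses the T5 classifier's labels.

```typescript
// Fraction of a track's mood tags that match the labels extracted from
// the user's prompt. 1.0 = every tag matches, 0 = no overlap.
export function emotionalFit(
  promptLabels: string[],
  trackTags: string[]
): number {
  const prompt = new Set(promptLabels.map(l => l.toLowerCase()));
  if (trackTags.length === 0) return 0;
  const matches = trackTags.filter(t => prompt.has(t.toLowerCase())).length;
  return matches / trackTags.length;
}
```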
┌────────────────────────────┐
│ React Frontend │
└───────────┬────────────────┘
│ text prompt
▼
┌───────────┴───────────────┐
│ Node.js API (Express) │
└───────┬──────┬────────────┘
│ │
│ ├─▶ Spotify OAuth2 ✔
│ │
│ └─▶ OpenAI GPT-4 🎵
▼
┌────────────┐ HTTP ┌───────────────────┐
│ FastAPI svc│──────────▶│ T5 Emo-Classifier │
└────────────┘ (JSON) └───────────────────┘
▲
│ Track search / like / save
└───────────▶ Spotify Web API
You type: “Need mellow-study vibes with a sprinkle of hope.”
T5 labels it “calm, optimistic”.
GPT-4 returns JSON: [ { title, artist, album } × 10 ].
Node.js searches each track on Spotify, filters unavailable items.
Frontend renders playable cards — hit ▶️ and focus on your essay.
Tap ❤️? The track is saved to Liked Songs instantly.
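Steps 3 and 4 above boil down to two small operations: validating the JSON that GPT-4 returns, and dropping any suggestion the Spotify search couldn't resolve. The sketch below shows one way to do that; the type shapes and the `"title|artist"` lookup key are assumptions, not the actual repo code.

```typescript
type Suggestion = { title: string; artist: string; album: string };

// Parse the model's JSON and keep only well-formed entries, since LLM
// output is not guaranteed to match the requested schema.
export function parseSuggestions(raw: string): Suggestion[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data)) throw new Error("expected a JSON array");
  return data.filter(
    (t): t is Suggestion =>
      typeof t?.title === "string" && typeof t?.artist === "string"
  );
}

// `resolved` maps "title|artist" to a Spotify track ID, or null when the
// search found nothing; unresolved items are dropped before rendering.
export function filterAvailable(
  tracks: Suggestion[],
  resolved: Map<string, string | null>
): Suggestion[] {
  return tracks.filter(t => resolved.get(`${t.title}|${t.artist}`) != null);
}
```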
Round-trip latency ≈ 900 ms (cold) / 250 ms (warm) in us-central1.
git clone https://github.com/itsnothuy/playlistalchemy.git
cd playlistalchemy
# ML microservice
cd backend_python
pip install -r requirements.txt
uvicorn app:app --host 0.0.0.0 --port 8000
# Node API
cd ../backend
npm install
cp .env.example .env # add your keys
npm start
# React client
cd ../frontend
npm install
npm start # http://localhost:3000
Every push to main triggers:
- name: Test & Build
  run: |
    cd backend && npm test
    cd ../frontend && npm run lint && npm run build
- name: Docker Publish
  uses: google-github-actions/deploy-cloudrun@v2
Building MusicFinder proved that with the right abstractions — LLMs for creativity, transformers for classification, and serverless for ops — any solo dev can ship a production-grade, emotionally intelligent recommender in weeks, not months. Curious? ⭐ the GitHub repo, open an issue, or drop a PR.
— Huy Tran, full-stack builder & AI tinkerer