Web application that analyzes YouTube playlists to uncover viral trends and engagement patterns. Built with FastHTML and MonsterUI for a modern, responsive interface.
Discover what makes YouTube playlists go viral with AI-powered analytics
Live Example: https://www.viralvibes.fyi
```bash
git clone https://github.com/navneeth/viralvibes.git
cd viralvibes
uv venv --python 3.11.6
source .venv/bin/activate
uv pip install -r requirements.txt
cp .env.example .env
# Edit .env with your Supabase credentials
python main.py
```
The development server will start at http://0.0.0.0:5001
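The app won't get far without its Supabase credentials. A minimal sketch of a startup check you could add, assuming variable names `SUPABASE_URL` and `SUPABASE_KEY` (check `.env.example` for the actual keys):

```python
import os

# Hypothetical variable names -- verify against .env.example.
REQUIRED_VARS = ("SUPABASE_URL", "SUPABASE_KEY")

def missing_env_vars(environ=None):
    """Return the required settings that are absent or empty."""
    env = os.environ if environ is None else environ
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Calling `missing_env_vars()` before wiring up the database gives a clearer failure than a connection error deep inside `db.py`.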
This project uses standard OSS practices for Python version management:
- `pyproject.toml` - Single source of truth (`requires-python = ">=3.11"`)
- `.python-version` - Used by pyenv and GitHub Actions
- `runtime.txt` - Vercel deployment only

All GitHub Actions workflows read from `.python-version` using the `python-version-file` parameter.
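If you want a runtime guard that mirrors the `requires-python = ">=3.11"` pin, a small check like this works (illustrative only; the project does not ship this helper):

```python
import sys

MINIMUM = (3, 11)  # mirrors requires-python = ">=3.11" in pyproject.toml

def meets_minimum(version_info=None, minimum=MINIMUM):
    """True when the interpreter satisfies the project's Python floor."""
    vi = sys.version_info if version_info is None else version_info
    return tuple(vi[:2]) >= minimum
```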
ViralVibes includes a CLI tool for local development and debugging. Here are the main commands:
Test playlist processing with either backend:
```bash
# Local debug with yt-dlp backend
python cli.py process "https://youtube.com/playlist?list=PLxxxxx" --backend yt-dlp

# Local test with YouTube API backend
python cli.py process "https://youtube.com/playlist?list=PLxxxxx" --backend youtubeapi
```
Additional CLI options:
- `--dry-run`: Run without updating the database
- `--help`: Show all available options

```bash
# List pending jobs
python cli.py pending

# Run the worker loop (like on Render)
python cli.py run --poll-interval 10 --batch-size 3 --max-runtime 300
```
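Conceptually, the worker loop driven by `cli.py run` amounts to poll, fetch a batch, process, and back off when the queue is empty. A simplified sketch under those assumptions (the callables are placeholders, not the actual worker API):

```python
import time

def run_worker(fetch_batch, process_job, poll_interval=10, batch_size=3,
               max_runtime=300, clock=time.monotonic, sleep=time.sleep):
    """Poll for pending jobs until max_runtime seconds elapse.

    fetch_batch(batch_size) -> list of jobs; process_job(job) handles one.
    Both stand in for the real queue/database logic in worker/worker.py.
    """
    started = clock()
    processed = 0
    while clock() - started < max_runtime:
        jobs = fetch_batch(batch_size)
        if not jobs:
            sleep(poll_interval)  # nothing pending; back off before re-polling
            continue
        for job in jobs:
            process_job(job)
            processed += 1
    return processed
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting, which is the same property the `--max-runtime` flag relies on for bounded runs.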
The application follows a modern serverless architecture with three main layers:
- `components/` → UI primitives (buttons, cards, tables, sections)
- `views/` → Composed pages (dashboard views, layouts)
- `services/` → YouTube + data logic (playlist processing, transforms)
- `worker/` → Async jobs (background processing, job queue)
- `routes/` → Entry points (partially used, mostly in `main.py`)
- `tests/` → Solid coverage (unit and integration tests)

Most of the application's complexity is concentrated in the `services` package due to its heavy logic.

```mermaid
graph TD
    A[Frontend - main.py] -->|POST /validate| B[YouTube Service - services/youtube_service.py]
    B -->|Compute metrics| C[Database Layer - db.py]
    C -->|Upsert playlist stats| D[Supabase/Postgres]
    D -->|Trigger job state update| E[Worker - worker/worker.py]
    E -->|Fetch & process playlist| B
    E -->|Write job status| C
    F[Tests] -->|Validate| A & B & E
```

The YouTube service returns a Polars DataFrame with the canonical columns below. Code expects these exact names and types when reading/processing playlist results.
| Column | Type | Description |
|---|---|---|
| Rank | int64 | Position in playlist (1-based) |
| id | string | YouTube video ID (e.g. dQw4w9WgXcQ) |
| Title | string | Video title |
| Description | string | Video description |
| Views | int64 | View count |
| Likes | int64 | Like count |
| Dislikes | int64 | Always 0 (YouTube API no longer returns dislikes) |
| Comments | int64 | Comment count |
| Duration | int64 | Duration in seconds |
| PublishedAt | string | ISO publish date (e.g. 2023-05-12T15:30:00Z) |
| Uploader | string | Channel name |
| Thumbnail | string | URL to high-res thumbnail |
| Tags | list[string] | List of tags |
| CategoryId | string | YouTube category ID |
| CategoryName | string | Human-readable category (e.g. Music) |
| Caption | boolean | True if captions/subtitles exist |
| Licensed | boolean | True if licensed content |
| Definition | string | hd or sd |
| Dimension | string | 2d or 3d |
| Rating | float64 | Reserved / typically null |
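To keep code and tests honest about these canonical names, a lightweight diff against the table above catches drift early. A stdlib-only sketch; since the real service returns a Polars DataFrame, you would pass it `df.columns`:

```python
# Canonical column names from the schema table, in order.
CANONICAL_COLUMNS = [
    "Rank", "id", "Title", "Description", "Views", "Likes", "Dislikes",
    "Comments", "Duration", "PublishedAt", "Uploader", "Thumbnail",
    "Tags", "CategoryId", "CategoryName", "Caption", "Licensed",
    "Definition", "Dimension", "Rating",
]

def diff_columns(actual):
    """Return (missing, unexpected) column names relative to the schema."""
    actual_set = set(actual)
    canonical_set = set(CANONICAL_COLUMNS)
    missing = [c for c in CANONICAL_COLUMNS if c not in actual_set]
    unexpected = [c for c in actual if c not in canonical_set]
    return missing, unexpected
```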
```mermaid
classDiagram
    class YouTubeDataFrame {
        +Rank: int64
        +id: string
        +Title: string
        +Description: string
        +Views: int64
        +Likes: int64
        +Dislikes: int64
        +Comments: int64
        +Duration: int64
        +PublishedAt: string
        +Uploader: string
        +Thumbnail: string
        +Tags: list[string]
        +CategoryId: string
        +CategoryName: string
        +Caption: bool
        +Licensed: bool
        +Definition: string
        +Dimension: string
        +Rating: float64
        +Controversy: float64
        +Engagement Rate Raw: float64
        +Views Formatted: string
        +Likes Formatted: string
        +Dislikes Formatted: string
        +Comments Formatted: string
        +Duration Formatted: string
        +Controversy %: string
        +Engagement Rate (%): string
    }
    class YouTubeTransforms {
        +normalize_columns(df: DataFrame): DataFrame
        +_enrich_dataframe(df: DataFrame, actual_playlist_count: int): (DataFrame, Dict)
    }
    YouTubeTransforms --> YouTubeDataFrame : returns/operates on
```
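The enriched columns (`Engagement Rate Raw`, the `* Formatted` strings) imply derived metrics. One plausible formulation is sketched below; the authoritative formulas live in `_enrich_dataframe` and may differ, so treat these as assumptions:

```python
def engagement_rate_raw(views, likes, comments):
    """Assumed definition: fraction of viewers who engaged; 0 when no views."""
    return (likes + comments) / views if views else 0.0

def format_count(n):
    """Human-readable counts for the '* Formatted' columns, e.g. 1234567 -> '1.2M'."""
    for threshold, suffix in ((1_000_000_000, "B"), (1_000_000, "M"), (1_000, "K")):
        if n >= threshold:
            return f"{n / threshold:.1f}{suffix}"
    return str(n)
```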
Deploy to Vercel with one click using the button above, or use the CLI:
```bash
npm install -g vercel
vercel --prod
```
Contributions are welcome! Fork the repository, open a branch, and submit a PR.
This project is licensed under the MIT License - see the LICENSE file for details.
This repository uses pre-commit to run linters and formatters locally before commits.
Quick setup (run once after cloning):
```bash
pip install pre-commit
pre-commit install
```
Run all hooks against the repository (useful for CI or one-time fixes):
```bash
pre-commit run --all-files
```
Notes
- The hook configuration lives in `.pre-commit-config.yaml` at the repo root.
- Run `pre-commit run --all-files` to make sure your changes pass the repo hooks before merging.

```bash
# Run tests with coverage
coverage run -m pytest tests/ -v
coverage report

# View HTML report
coverage html && open htmlcov/index.html
```
Current Coverage: