A social media sentiment analysis dashboard that aggregates posts from Reddit and YouTube, analyzes sentiment using AI, and visualizes trends in real-time.
- Multi-platform Analysis – Fetch Reddit posts and YouTube comments
- AI-Powered Sentiment – Uses a HuggingFace DistilBERT model (distilbert-base-uncased-finetuned-sst-2-english) for classification; a short usage sketch follows this list
- Interactive Dashboard – Pie charts, trend lines, and filterable post lists
- Historical Tracking – View sentiment changes over time
- Rate Limiting & Caching – Built-in protections against API abuse
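To make the sentiment step concrete, here is a minimal, illustrative sketch of calling the model listed in the tech stack below via the HuggingFace `transformers` pipeline. This is the general pattern, not Sentra's actual service code:

```python
# Sketch: classify text with the model named in the tech stack table.
# Requires: pip install transformers torch
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# The pipeline returns a label (POSITIVE or NEGATIVE) and a confidence score.
print(classifier("I love this product!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.9998}]
```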
| Layer | Technology |
|---|---|
| Frontend | Next.js 14 (App Router) + Tailwind CSS + shadcn/ui |
| Backend | FastAPI (Python) |
| Database | PostgreSQL |
| AI/NLP | distilbert-base-uncased-finetuned-sst-2-english |
| Charts | Recharts |
| Containerization | Docker & Docker Compose |
Prerequisites:

- Docker Desktop (includes Docker Compose)
- Git
Clone the repository:

```bash
git clone https://github.com/your-username/Sentra.git
cd Sentra
```

Copy the example environment file:

```bash
cp .env.example .env
```

Edit the `.env` file and add your API credentials:

```env
DATABASE_URL=postgresql://postgres:postgres@db:5432/sentra
REDDIT_CLIENT_ID=your_reddit_client_id
REDDIT_CLIENT_SECRET=your_reddit_client_secret
REDDIT_USER_AGENT=sentra:v1.0.0
YOUTUBE_API_KEY=your_youtube_api_key
```

To obtain Reddit API credentials:

- Go to https://www.reddit.com/prefs/apps
- Click "Create App" or "Create Another App"
- Fill in:
- Name: Sentra
- App type: Select "script"
- Redirect URI: http://localhost:8000
- Click "Create app"
- Copy the client ID (shown under the app name) and the secret; see the verification sketch below
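A quick way to verify the credentials is a small script like the following. It assumes the PRAW library (suggested by the `sentra:v1.0.0` user-agent convention above), which may or may not be what the backend actually uses:

```python
# Hypothetical credential check with PRAW (pip install praw).
# Runs in read-only mode; no Reddit account login required.
import praw

reddit = praw.Reddit(
    client_id="your_reddit_client_id",        # shown under the app name
    client_secret="your_reddit_client_secret",
    user_agent="sentra:v1.0.0",
)

# Fetch a few posts matching a keyword to confirm the keys work.
for post in reddit.subreddit("all").search("python", limit=3):
    print(post.title)
```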
To obtain a YouTube API key:

- Go to https://console.cloud.google.com/
- Create a new project (or select existing)
- Enable the "YouTube Data API v3"
- Go to "Credentials" → "Create Credentials" → "API Key"
- Copy the API key
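Similarly, a hedged sketch for verifying the YouTube key with the official `google-api-python-client`; the backend's own client code may differ, and the video ID below is a placeholder:

```python
# Hypothetical key check against the YouTube Data API v3.
# Requires: pip install google-api-python-client
from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="your_youtube_api_key")

# Fetch a few comment threads for a video to confirm the key works.
response = youtube.commentThreads().list(
    part="snippet",
    videoId="VIDEO_ID",  # placeholder: any public video ID
    maxResults=3,
).execute()

for item in response["items"]:
    print(item["snippet"]["topLevelComment"]["snippet"]["textDisplay"])
```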
Note: The app works without API keys, but you won't be able to fetch data from those sources.
Start all services:

```bash
docker-compose up --build
```

This will:
- Start PostgreSQL database
- Build and start the FastAPI backend (downloads the AI model on first run, ~500 MB)
- Build and start the Next.js frontend
First startup takes 5-10 minutes due to:
- Downloading the sentiment analysis model
- Installing dependencies
- Building the frontend
Once the services are running, apply the database schema with Prisma:

```bash
docker-compose exec frontend npx prisma db push
```

Open your browser and check:
| Service | URL | Expected |
|---|---|---|
| Frontend | http://localhost:3000 | Dashboard UI |
| Backend Health | http://localhost:8000/health | JSON with status |
| API Docs | http://localhost:8000/docs | Swagger UI |
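If you prefer to verify from the command line, a small script along these lines (standard library only; the URLs are the ones from the table above) can poll all three services:

```python
# Quick reachability check for the three services.
import json
import urllib.request

CHECKS = {
    "Frontend": "http://localhost:3000",
    "Backend health": "http://localhost:8000/health",
    "API docs": "http://localhost:8000/docs",
}

for name, url in CHECKS.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name}: HTTP {resp.status}")
            if url.endswith("/health"):
                print(json.dumps(json.load(resp), indent=2))
    except OSError as exc:  # URLError subclasses OSError
        print(f"{name}: unreachable ({exc})")
```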
The health endpoint should show:

```json
{
"status": "ok",
"database": "connected",
"sentimentModel": "loaded",
"reddit": "configured",
"youtube": "configured"
}
```

To use the dashboard:

- Search: Enter a keyword (e.g., "ChatGPT", "climate change", "Tesla")
- Select Source: Choose Reddit, YouTube, or All
- Analyze: View sentiment distribution and individual posts
- Filter: Filter posts by sentiment type
- Track: Search the same keyword over time to build history (see the scripting sketch below)
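These dashboard actions map onto the backend's `/search` and `/history` endpoints (documented below), so the same workflow can be scripted directly. A sketch, using only the standard library:

```python
# Drive the search workflow against the backend API directly.
import json
import urllib.parse
import urllib.request

query = urllib.parse.urlencode({"q": "ChatGPT", "source": "reddit"})
url = f"http://localhost:8000/search?{query}"

with urllib.request.urlopen(url, timeout=30) as resp:
    data = json.load(resp)

# The exact response shape is defined by the backend, so print it raw.
print(json.dumps(data, indent=2))
```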
To run the services locally without Docker, start the backend first:

```bash
cd backend
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

Then, in a separate terminal, start the frontend:

```bash
cd frontend
npm install
npx prisma generate
npm run dev
```

Useful Docker commands:

```bash
# View logs
docker-compose logs -f
# View specific service logs
docker-compose logs -f backend
# Restart a service
docker-compose restart backend
# Stop all services
docker-compose down
# Stop and remove volumes (reset database)
docker-compose down -v
# Rebuild after code changes
docker-compose up --build
```

Database commands:

```bash
# Open Prisma Studio (GUI for the database)
docker-compose exec frontend npx prisma studio
# Reset database
docker-compose exec frontend npx prisma db push --force-reset
```

The backend exposes the following API endpoints:

| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Service health check |
| `/analyze` | POST | Analyze a single text |
| `/analyze/batch` | POST | Analyze multiple texts |
| `/search` | GET | Search and analyze posts from the configured sources |
| `/history` | GET | Get historical sentiment data |
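To make the shape of these endpoints concrete, here is a hedged sketch of what a minimal `/analyze` handler could look like in FastAPI; the request and response field names are assumptions for illustration, not Sentra's actual schema:

```python
# Illustrative /analyze endpoint (field names are assumptions).
# Requires: pip install fastapi uvicorn transformers torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

class AnalyzeRequest(BaseModel):
    text: str

@app.post("/analyze")
def analyze(req: AnalyzeRequest):
    result = classifier(req.text)[0]
    return {"label": result["label"], "score": result["score"]}
```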
Example requests:

```bash
# Health check
curl http://localhost:8000/health
# Analyze single text
curl -X POST http://localhost:8000/analyze \
-H "Content-Type: application/json" \
-d '{"text": "I love this product!"}'
# Search Reddit for a keyword
curl "http://localhost:8000/search?q=python&source=reddit"
# Get 7-day history
curl "http://localhost:8000/history?q=python&days=7"The model downloads on first startup. Wait a few minutes and check logs:
docker-compose logs -f backend
```

If the database shows as not connected, ensure PostgreSQL is running:

```bash
docker-compose ps
```

If Reddit or YouTube shows as not configured, add the API credentials to `.env` and restart:

```bash
docker-compose down
docker-compose up
```

If the frontend cannot reach the backend, check that NEXT_PUBLIC_API_URL is set in `frontend/.env.local`:

```env
NEXT_PUBLIC_API_URL=http://localhost:8000
```

If a port is already in use, stop the service occupying it or change the port mapping in docker-compose.yml. The defaults are:
- Frontend: 3000
- Backend: 8000
- PostgreSQL: 5432
Project structure:

```
sentra/
├── docker-compose.yml           # Container orchestration
├── .env.example                 # Environment template
├── frontend/                    # Next.js application
│   ├── src/
│   │   ├── app/                 # Pages and API routes
│   │   ├── components/          # React components
│   │   ├── lib/                 # Utilities and API client
│   │   └── types/               # TypeScript types
│   └── prisma/                  # Database schema
└── backend/                     # FastAPI application
    ├── main.py                  # Application entry
    └── app/
        ├── routers/             # API endpoints
        ├── services/            # Business logic
        └── models/              # Pydantic schemas
```
MIT License - feel free to use this project for learning or commercial purposes.