PetSwipe is on a mission to help shelter animals find loving homes. This swipe-to-adopt platform connects prospective pet parents with adoptable animals in need. With PetSwipe, users can effortlessly browse pets, swipe right to adopt or left to pass, and manage their profiles with ease. They can also view their matches and keep track of their swipe history — making the journey to adoption simpler, faster, and more heartwarming 🐰.
Note
Inspired by Tinder UX, but for pets to find their loving humans! 🐶🐱
- About PetSwipe
- Live App
- Features
- Tech Stack & Architecture
- User Interface
- Database Schema
- Getting Started
- API Reference
- AWS Deployment
- Agentic AI Integration
- Scripts & Utilities
- Testing
- GitHub Actions CI/CD
- Command Line Interface
- Monitoring & Observability
- Contributing
- License
- Author
PetSwipe is a full-stack application that allows users to swipe through pets available for adoption. The app is designed to be user-friendly and visually appealing, with a focus on providing a seamless experience for both users and shelter staff.
The app is built using modern technologies, including TypeScript, Next.js, Express, and PostgreSQL. It leverages the power of AWS for storage and deployment, ensuring scalability and reliability.
The app is also designed to be modular and easy to extend, with a focus on clean code and best practices. It includes features such as user authentication, a swipe interface, personalized pet decks, and admin tools for managing pets and users.
And most importantly, it is built with the goal of helping shelter animals find their forever homes. By providing a fun and engaging way for users to browse pets, PetSwipe aims to increase adoption rates and raise awareness about the importance of pet adoption. 🐾
I hope you enjoy using PetSwipe as much as I enjoyed building it! 🐱
Tip
Please spread the word about PetSwipe to your friends and family, and help us find loving homes for as many pets as possible! 🏠❤️
PetSwipe is live on Vercel! You can now try it out and see how it works.
Tip
Link not working? Copy and paste this URL into your browser: https://petswipe.vercel.app.
Also, check out the backend API at PetSwipe API. You can use tools like Postman or Swagger UI to explore the API endpoints.
Important
Currently, most of the data is seeded with dummy data. We hope the app will be used by more real users and pet adoption shelters in the future. If you are a shelter or a pet adoption organization, please reach out to us to get all your data integrated into the app in seconds! You can also use the in-app manual pet-add feature to further enrich our pets database (available to authenticated users only).
PetSwipe is a full-stack application with the following features:
- User Authentication:
- Login, signup, password reset functionalities are all implemented
- JWT-based authentication, where tokens are stored in HTTP-only cookies for security
- Swipe Interface:
- Swipe left/right or press arrow keys/buttons to navigate through the deck of pet cards
- For each card, users can view pet details, photos, and decide to adopt or pass
- Each user is randomly assigned a sample selection of pet cards (around 90-110 cards) to review. As the app gathers more real, user-added pet data, it will improve in matching users with the most relevant pets.
- Personalized Deck:
- Deck is generated based on user preferences and past swipes
- Users will only see pets that they haven't swiped on before, and pets that are most relevant to them
- History: View all swipes & liked (adopted) pets
- Chatbot:
- A simple chatbot to answer common questions about the app and pets (e.g. breeds, adoption process, pet care tips, etc.)
- Powered by Google AI and Retrieval-Augmented Generation (RAG) for personalized responses
- Admin Tools:
- Bulk upload pets via CSV
- Export pets data
- Photo uploads to S3
- Manual match assignment
- and more!
- Responsive UI: Built with Tailwind CSS and shadcn/ui
- Fully responsive design for mobile and desktop
- Light and dark mode support
- Accessible design for all users
- Real-time Animations:
- Framer Motion for smooth transitions and animations
- Swipe animations for a more engaging experience
- Loading spinners and skeleton screens for better UX
- Analytics: Countups of swipes, matches, adoptions, etc. all are available to admins
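The JWT-in-an-HTTP-only-cookie authentication listed above can be sketched with Node's standard library alone. This is an illustrative sketch, not the app's actual implementation (the real backend uses the `jsonwebtoken` library and Express cookie handling); the cookie name and secret here are made up:

```typescript
import { createHmac } from "node:crypto";

// Sign a minimal HS256 JWT (header.payload.signature) — illustrative only.
function signJwt(payload: object, secret: string): string {
  const enc = (o: object) => Buffer.from(JSON.stringify(o)).toString("base64url");
  const head = enc({ alg: "HS256", typ: "JWT" });
  const body = enc(payload);
  const sig = createHmac("sha256", secret)
    .update(`${head}.${body}`)
    .digest("base64url");
  return `${head}.${body}.${sig}`;
}

// Build the Set-Cookie header value: HttpOnly keeps the token out of reach
// of client-side JavaScript, mitigating XSS token theft.
function authCookie(token: string): string {
  return `petswipe_jwt=${token}; HttpOnly; Secure; SameSite=Strict; Path=/; Max-Age=604800`;
}

const token = signJwt({ sub: "user-123" }, "dev-secret");
console.log(authCookie(token).includes("HttpOnly")); // true
```

Because the cookie is HTTP-only, the frontend never touches the token directly; the browser attaches it to every API request automatically.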
PetSwipe is built using a modern tech stack, ensuring scalability, maintainability, and performance. The architecture is designed to be modular and easy to extend.
| Layer | Technology |
|---|---|
| Frontend | Next.js, React, TypeScript, Tailwind CSS, shadcn/ui, Framer Motion, SWR |
| Backend & API | Node.js, Express, TypeScript, TypeORM, PostgreSQL, OpenAPI (via Swagger), RabbitMQ, Redis |
| Data & Storage | AWS RDS (PostgreSQL), AWS S3 |
| Security & Auth | JSON Web Tokens, bcryptjs, cookie-parser |
| DevOps & Deployment | Docker, AWS ECR & ECS (Fargate), Vercel, GitHub Actions |
| Infrastructure | Terraform, Consul, Vault, Nomad, AWS IAM, AWS CloudWatch, AWS ALB |
| AI | Google AI, Retrieval-Augmented Generation (RAG) |
| Testing | Playwright (frontend), Jest (backend) |
flowchart TB
subgraph Client["🖥️ Client Layer"]
Browser["Web Browser"]
Mobile["Mobile Browser"]
end
subgraph CDN["🌐 CDN & Edge"]
Vercel["Vercel Edge Network"]
CloudFront["AWS CloudFront"]
end
subgraph Frontend["⚛️ Frontend (Next.js)"]
NextApp["Next.js Application"]
SWR["SWR Data Fetching"]
Framer["Framer Motion"]
Shadcn["shadcn/ui Components"]
end
subgraph LoadBalancing["⚖️ Load Balancing"]
ALB["AWS Application Load Balancer"]
TG["Target Group"]
end
subgraph Backend["🔧 Backend Services"]
ECS["AWS ECS Fargate"]
Express["Express.js API"]
TypeORM["TypeORM"]
Redis["Redis Cache"]
RabbitMQ["RabbitMQ"]
end
subgraph Data["💾 Data Layer"]
RDS["AWS RDS PostgreSQL"]
S3["AWS S3 Storage"]
Supabase["Supabase (Backup)"]
end
subgraph AI["🤖 AI Services"]
GoogleAI["Google AI / Gemini"]
RAG["RAG System"]
end
subgraph Monitoring["📊 Monitoring & Observability"]
Prometheus["Prometheus"]
Grafana["Grafana"]
CloudWatch["AWS CloudWatch"]
end
subgraph Infrastructure["🏗️ Infrastructure as Code"]
Terraform["Terraform"]
Consul["HashiCorp Consul"]
Vault["HashiCorp Vault"]
Nomad["HashiCorp Nomad"]
Ansible["Ansible"]
end
subgraph CICD["🚀 CI/CD Pipeline"]
GHA["GitHub Actions"]
Jenkins["Jenkins"]
ECR["AWS ECR"]
GHCR["GitHub Container Registry"]
end
Browser --> Vercel
Mobile --> Vercel
Vercel --> NextApp
NextApp --> SWR
SWR --> ALB
ALB --> TG
TG --> ECS
ECS --> Express
Express --> TypeORM
Express --> Redis
Express --> RabbitMQ
TypeORM --> RDS
Express --> S3
Express --> GoogleAI
GoogleAI --> RAG
Express --> Prometheus
Prometheus --> Grafana
ECS --> CloudWatch
GHA --> ECR
GHA --> GHCR
GHA --> Vercel
Jenkins --> ECR
ECR --> ECS
Terraform --> Backend
Terraform --> Data
Consul --> Backend
Vault --> Backend
Nomad --> Backend
Ansible --> Infrastructure
S3 -.Backup.-> Supabase
flowchart LR
subgraph Development["👨💻 Development"]
Dev["Developer"]
Git["Git Repository"]
end
subgraph CI["🔄 Continuous Integration"]
GHA["GitHub Actions"]
Jenkins["Jenkins Pipeline"]
Lint["Linting & Format"]
Test["Testing Suite"]
Build["Build Process"]
Security["Security Scan"]
end
subgraph Registry["📦 Container Registry"]
ECR["AWS ECR"]
GHCR["GitHub CR"]
end
subgraph IaC["🏗️ Infrastructure"]
TF["Terraform Apply"]
Ansible["Ansible Playbooks"]
end
subgraph AWS["☁️ AWS Cloud"]
ECS["ECS Fargate"]
RDS["RDS PostgreSQL"]
S3["S3 Buckets"]
ALB["Load Balancer"]
CW["CloudWatch"]
end
subgraph HashiStack["🔐 HashiCorp Stack"]
Consul["Service Discovery"]
Vault["Secrets Management"]
Nomad["Orchestration"]
end
Dev -->|Push Code| Git
Git -->|Trigger| GHA
Git -->|Trigger| Jenkins
GHA --> Lint
Lint --> Test
Test --> Security
Security --> Build
Build --> ECR
Build --> GHCR
Jenkins --> Lint
ECR --> ECS
TF -->|Provision| AWS
TF -->|Configure| HashiStack
Ansible -->|Deploy| AWS
Consul --> ECS
Vault --> ECS
Nomad --> ECS
ECS --> ALB
ECS --> RDS
ECS --> S3
ECS --> CW
erDiagram
AppUser ||--o{ Swipe : makes
AppUser ||--o{ Match : receives
AppUser {
uuid id PK
string email UK
string password
string name
date dob
text bio
text avatarUrl
timestamp createdAt
timestamp updatedAt
}
Pet ||--o{ Swipe : receives
Pet ||--o{ Match : appears_in
Pet {
uuid id PK
string name
string type
text description
text photoUrl
string shelterName
text shelterContact
text shelterAddress
timestamp createdAt
timestamp updatedAt
}
Swipe {
uuid id PK
uuid userId FK
uuid petId FK
boolean liked
timestamp swipedAt
}
Match {
uuid id PK
uuid userId FK
uuid petId FK
timestamp matchedAt
}
sequenceDiagram
participant User as 👤 User
participant Frontend as ⚛️ Next.js
participant ALB as ⚖️ ALB
participant Backend as 🔧 Express API
participant Vault as 🔐 Vault
participant DB as 💾 PostgreSQL
participant S3 as 📦 S3
User->>Frontend: Visit App
Frontend->>User: Show Login Page
User->>Frontend: Submit Credentials
Frontend->>ALB: POST /api/auth/login
ALB->>Backend: Forward Request
Backend->>Vault: Retrieve Secrets
Vault-->>Backend: Return Secrets
Backend->>DB: Verify Credentials
DB-->>Backend: User Data
Backend->>Backend: Generate JWT
Backend->>ALB: Set HTTP-Only Cookie
ALB-->>Frontend: 200 OK + Token
Frontend->>Frontend: Store Auth State
User->>Frontend: Upload Avatar
Frontend->>ALB: POST /api/users/me/avatar
ALB->>Backend: Forward with JWT
Backend->>Backend: Verify JWT
Backend->>S3: Upload Image
S3-->>Backend: S3 URL
Backend->>DB: Update User Record
Backend->>ALB: Return Success
ALB-->>Frontend: 200 OK
Frontend-->>User: Show Updated Profile
and so many more...
| Entity | Column | Type | Nullable | Description | Notes / Relations |
|---|---|---|---|---|---|
| Match | id | uuid | No | Primary key | `@PrimaryGeneratedColumn("uuid")` |
| | user | ManyToOne → AppUser | No | Who is swiping | FK → AppUser.id, cascade on delete |
| | pet | ManyToOne → Pet | No | Which pet was presented | FK → Pet.id, cascade on delete |
| | matchedAt | timestamp | No | When it was shown | `@CreateDateColumn()` |
| Pet | id | uuid | No | Primary key | `@PrimaryGeneratedColumn("uuid")` |
| | name | varchar | No | e.g. "Buddy" or "Whiskers" | `@Column()` |
| | type | varchar | No | e.g. "Dog", "Cat" | `@Column()` |
| | description | text | Yes | Breed, color, age, etc. | `@Column({ type: "text", nullable: true })` |
| | photoUrl | text | Yes | URL to photo(s) | `@Column({ type: "text", nullable: true })` |
| | shelterName | varchar | Yes | The shelter this pet is from | `@Column({ type: "varchar", nullable: true })` |
| | shelterContact | text | Yes | Contact info for the shelter | `@Column({ type: "text", nullable: true })` |
| | shelterAddress | text | Yes | Physical address of the shelter | `@Column({ type: "text", nullable: true })` |
| | matches | OneToMany → Match[] | — | All Match records for this pet | inverse of Match.pet |
| | swipes | OneToMany → Swipe[] | — | All Swipe records for this pet | inverse of Swipe.pet |
| | createdAt | timestamp | No | When record was created | `@CreateDateColumn()` |
| | updatedAt | timestamp | No | When record was last updated | `@UpdateDateColumn()` |
| Swipe | id | uuid | No | Primary key | `@PrimaryGeneratedColumn("uuid")` |
| | user | ManyToOne → AppUser | No | Who swiped | FK → AppUser.id, cascade on delete |
| | pet | ManyToOne → Pet | No | Which pet was swiped on | FK → Pet.id, cascade on delete |
| | liked | boolean | No | true = adopt, false = pass | `@Column()` |
| | swipedAt | timestamp | No | When the swipe occurred | `@CreateDateColumn()` |
| | (user, pet) | unique index | — | Prevent duplicate swipes | `@Index(["user", "pet"], { unique: true })` |
| AppUser | id | uuid | No | Primary key | `@PrimaryGeneratedColumn("uuid")` |
| | email | varchar | No | Unique user email | `@Column({ unique: true })` |
| | password | varchar | Yes | Hashed password | `@Column({ nullable: true })` |
| | name | varchar | Yes | User's full name | `@Column({ nullable: true })` |
| | dob | date | Yes | Date of birth | `@Column({ type: "date", nullable: true })` |
| | bio | text | Yes | User biography | `@Column({ type: "text", nullable: true })` |
| | avatarUrl | text | Yes | URL to avatar image | `@Column({ type: "text", nullable: true })` |
| | matches | OneToMany → Match[] | — | All Match records by this user | inverse of Match.user |
| | swipes | OneToMany → Swipe[] | — | All Swipe records by this user | inverse of Swipe.user |
| | createdAt | timestamp | No | When user was created | `@CreateDateColumn()` |
| | updatedAt | timestamp | No | When user was last updated | `@UpdateDateColumn()` |
Note
This table may not be up-to-date, as more entities and relationships could be introduced in the near future to support additional features and enhancements!
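The unique `(user, pet)` index on Swipe means each user can record at most one swipe per pet. In application terms the constraint behaves like the following in-memory sketch (a hypothetical helper for illustration, not the actual TypeORM code):

```typescript
type Swipe = { userId: string; petId: string; liked: boolean };

class SwipeStore {
  private seen = new Set<string>(); // mirrors the unique (user, pet) index
  private swipes: Swipe[] = [];

  // Insert a swipe; a second swipe on the same pet by the same user is
  // rejected, just as the database unique index would reject it.
  add(s: Swipe): boolean {
    const key = `${s.userId}:${s.petId}`;
    if (this.seen.has(key)) return false; // duplicate — constraint violation
    this.seen.add(key);
    this.swipes.push(s);
    return true;
  }

  // All "adopt" swipes by a given user (the basis of the liked-pets view).
  likedBy(userId: string): Swipe[] {
    return this.swipes.filter((s) => s.userId === userId && s.liked);
  }
}

const store = new SwipeStore();
console.log(store.add({ userId: "u1", petId: "p1", liked: true }));  // true
console.log(store.add({ userId: "u1", petId: "p1", liked: false })); // false (duplicate)
```

In the real backend, the database enforces this invariant even under concurrent requests, which an in-memory check alone cannot guarantee.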
- Node.js ≥ v18
- npm ≥ v8 or Yarn
- PostgreSQL (AWS RDS recommended)
- AWS CLI & IAM credentials for S3, RDS
- Docker (optional, for local Postgres container)
- Google AI API key (for chatbot feature)
Caution
Due to shadcn/ui peer-dependency conflicts, install frontend dependencies with:

```bash
npm install --legacy-peer-deps
# or
yarn install --ignore-engines
```
**Backend**

1. Clone & install:

   ```bash
   git clone https://github.com/hoangsonww/PetSwipe-Match-App.git
   cd PetSwipe-Match-App/backend
   npm install
   ```

2. Environment — copy and configure:

   ```bash
   cp .env.example .env
   ```

   - `DATABASE_URL`: your AWS RDS Postgres connection string
   - `JWT_SECRET`, `COOKIE_SECRET`
   - AWS: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `S3_BUCKET_NAME`
   - More in `.env.example`. Be sure that all required environment variables are set before running the app!

3. Seed sample pets (optional):

   ```bash
   npm run seed:pets
   ```

4. Run in development:

   ```bash
   npm run dev
   ```

   The backend API is now available at `http://localhost:5001/api`.

**Frontend**

1. Install:

   ```bash
   cd frontend
   npm install --legacy-peer-deps
   ```

2. Environment — create `.env.local` (replace `http://localhost:5001` with your backend URL):

   ```
   NEXT_PUBLIC_API_URL=http://localhost:5001/api
   ```

3. Run in development:

   ```bash
   npm run dev
   ```

   The frontend is available at `http://localhost:3000`.

4. Build & run production:

   ```bash
   npm run build
   npm run start
   ```
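Missing environment variables are a common source of confusing startup failures. A defensive check like the following fails fast with a clear message (the variable names come from the setup steps above; the helper itself is hypothetical, not the app's actual config loader):

```typescript
// Verify that every required variable is present before the app boots.
function requireEnv(
  names: string[],
  env: Record<string, string | undefined> = process.env,
): Record<string, string> {
  const missing = names.filter((n) => !env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  return Object.fromEntries(names.map((n) => [n, env[n] as string]));
}

// e.g. at backend startup:
// const cfg = requireEnv(["DATABASE_URL", "JWT_SECRET", "COOKIE_SECRET"]);
```

Calling this once at startup surfaces configuration mistakes immediately instead of deep inside a request handler.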
Swagger docs are served locally at http://localhost:5001/api-docs.json. You can also access the live API documentation at PetSwipe API, and the JSON format at PetSwipe API JSON.
**Auth**

- POST `/api/auth/signup`
- POST `/api/auth/login`
- POST `/api/auth/logout`
- POST `/api/auth/verify-email`
- POST `/api/auth/reset-password`

**Matches**

- POST `/api/matches`
- GET `/api/matches`
- GET `/api/matches/me`

**Pets**

- GET `/api/pets`
- POST `/api/pets`
- GET `/api/pets/export`
- POST `/api/pets/:petId/photo`
- POST `/api/pets/upload`
- GET `/api/pets/mine`
- PUT `/api/pets/:petId`
- GET `/api/pets/:petId`

**Swipes**

- POST `/api/swipes`
- GET `/api/swipes/me`
- GET `/api/swipes/me/liked`
- GET `/api/swipes` (admins only)

**Users**

- GET `/api/users/me`
- PUT `/api/users/me`
- POST `/api/users/me/avatar`
- DELETE `/api/users/me/avatar`

**Chat**

- POST `/api/chat`
More endpoints may be added as the app evolves. Refer to the Swagger docs for the most up-to-date information!
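Because the JWT lives in an HTTP-only cookie, browser clients must send requests with credentials included or the API will treat them as unauthenticated. A sketch of how a frontend helper might build such requests (the endpoint paths come from the reference above; the helper itself is hypothetical):

```typescript
type ApiInit = {
  method: string;
  credentials: "include"; // send the HTTP-only auth cookie with every call
  headers: Record<string, string>;
  body?: string;
};

// Build the arguments for fetch(url, init) against the PetSwipe API.
function buildRequest(
  base: string,
  path: string,
  method = "GET",
  body?: unknown,
): { url: string; init: ApiInit } {
  const init: ApiInit = {
    method,
    credentials: "include",
    headers: { "Content-Type": "application/json" },
  };
  if (body !== undefined) init.body = JSON.stringify(body);
  return { url: `${base}${path}`, init };
}

// e.g. record a right-swipe (adopt) — the result would be passed to fetch():
const req = buildRequest("http://localhost:5001/api", "/swipes", "POST", {
  petId: "some-pet-id",
  liked: true,
});
console.log(req.url); // http://localhost:5001/api/swipes
```

The same pattern applies to every endpoint above; only the path, method, and body change.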
Swagger UI is available at https://petswipe-backend-api.vercel.app/.
PetSwipe features enterprise-grade deployment strategies for zero-downtime releases:
**Blue-Green Deployments**

- Zero-downtime deployments with instant rollback
- Two identical environments (Blue & Green)
- Perfect for major releases and database migrations
- Rollback time: < 30 seconds

**Canary Deployments**

- Gradual traffic shifting (5% → 10% → 25% → 50% → 100%)
- Automated rollback on errors or high latency
- Real-time health monitoring during rollout
- Progressive validation with production traffic

**Auto-Scaling**

- Scale based on CPU, memory, and request metrics
- Scheduled scaling for predictable traffic patterns
- Load testing prior to deployment for capacity planning
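The gradual 5% → 10% → 25% → 50% → 100% canary rollout with automated rollback can be modeled in a few lines. This simulates only the decision logic, with hypothetical names; in production the shifting is done by ALB weighted routing, and rollback by a Lambda function watching CloudWatch metrics:

```typescript
const STEPS = [5, 10, 25, 50, 100]; // % of traffic sent to the canary

// Walk the traffic steps; a failed health check at any step aborts the
// rollout and returns all traffic to the stable (blue) target group.
function rollout(healthyAt: (pct: number) => boolean): {
  promoted: boolean;
  finalPct: number;
} {
  for (const pct of STEPS) {
    if (!healthyAt(pct)) return { promoted: false, finalPct: 0 }; // rollback
  }
  return { promoted: true, finalPct: 100 }; // canary becomes the new stable
}

console.log(rollout(() => true));     // { promoted: true, finalPct: 100 }
console.log(rollout((p) => p < 25));  // { promoted: false, finalPct: 0 }
```

The point of the staged percentages is that a bad release fails while serving only a small slice of production traffic.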
📖 Full Deployment Guide | 🚀 Quick Reference
Our production infrastructure is built on AWS with Terraform:
Compute & Orchestration:
- AWS ECS Fargate: Serverless container orchestration
- Blue, Green, and Canary environments
- Auto-scaling based on CPU/memory metrics
- Circuit breaker deployment with automated rollback
- AWS ECR: Private Docker registry with vulnerability scanning
- Backup: GitHub Container Registry (GHCR)
Networking & Load Balancing:
- Application Load Balancer: Traffic routing and health checks
- Weighted routing for canary deployments
- SSL/TLS termination with ACM certificates
- HTTP/2 and WebSocket support
- Route 53: DNS management
- CloudFront: CDN for static assets (optional)
Data & Storage:
- AWS RDS PostgreSQL: Multi-AZ database with automated backups
- Read replicas for scaling
- Performance Insights enabled
- Point-in-time recovery
- AWS S3: Object storage
- Static assets bucket (public)
- Uploads bucket (private)
- ALB access logs
- Lifecycle policies for cost optimization
- Backup: Supabase Storage
Monitoring & Observability:
- CloudWatch: Comprehensive monitoring
- Custom dashboards (main + canary-specific)
- Metric alarms with SNS notifications
- Log aggregation and Insights queries
- X-Ray distributed tracing
- CloudWatch Alarms: Proactive alerts
- Service health (CPU, memory, errors)
- Deployment health (canary metrics)
- Database performance
- Composite alarms for deployment quality
Security & Compliance:
- AWS KMS: Encryption for all resources
- AWS WAF: Web application firewall
- AWS Shield: DDoS protection (Standard)
- IAM: Least-privilege access policies
- Security Groups: Network-level access control
- Secrets Manager: Sensitive configuration management
CI/CD & Deployment:
- Jenkins: Automated build and deployment pipelines
  - Blue-Green pipeline: `Jenkinsfile.bluegreen`
  - Canary pipeline: `Jenkinsfile.canary`
  - Main CI pipeline with strategy selection
- AWS CodeDeploy: ECS blue-green deployments
- Lambda: Automated canary rollback function
- GitHub Actions: Backup CI/CD
- Terraform: Infrastructure as Code (100+ resources)
HashiCorp Stack (Optional):
- Vault: Secrets management and encryption-as-a-service
- Consul: Service discovery and distributed configuration
- Nomad: Alternative workload orchestration
Frontend Hosting:
- Vercel: Next.js application hosting with edge functions
- Automatic deployments from Git
- Preview deployments for PRs
- Global CDN distribution
Note
Our infrastructure is designed for production-grade reliability, security, and scalability. The use of blue-green and canary deployments ensures zero-downtime releases with instant rollback capabilities. All infrastructure is version-controlled and reproducible via Terraform.
To deploy the app to AWS, we use Terraform for Infrastructure as Code (IaC). This allows us to define our AWS resources in code and deploy them easily.
To get started with Terraform:
1. Install Terraform on your machine.

2. Navigate to the `terraform` directory:

   ```bash
   cd terraform
   ```

3. Initialize Terraform:

   ```bash
   terraform init
   ```

4. Configure your AWS credentials in `~/.aws/credentials` or set them as environment variables.

5. Run the following commands to deploy:

   ```bash
   terraform plan   # Preview the changes
   terraform apply  # Apply the changes
   ```
This will create all the necessary AWS resources for the app, including RDS, S3, ECS, ECR, IAM roles, and more.
Caution
Make sure you have the necessary permissions to create and manage AWS resources. Review the Terraform scripts before applying them to avoid any unintended changes. For more details on how to configure and use Terraform with AWS, check out the Terraform AWS Provider documentation.
For managing secrets and service discovery, we use HashiCorp Vault, Consul, and Nomad. These tools help us securely store sensitive information, manage configurations, and orchestrate our services.
These tools provide:
- Secure secret storage & dynamic secret issuance (Vault)
- Encryption-as-a-Service & audit logging (Vault)
- Distributed service discovery & health checking (Consul)
- Centralized key/value configuration store (Consul)
- Identity-based service mesh (mTLS) for secure service-to-service communication (Consul)
- Flexible, multi-region workload orchestration & scheduling (Nomad)
- Rolling upgrades, canary deployments & autoscaling (Nomad)
- Support for containers, VMs & standalone binaries (Nomad)
These tools are optional but recommended for larger, more complex deployments. They can be set up using Terraform as well. For more information, refer to the terraform/README.md file in the terraform directory to see how to configure and deploy these tools.
Note
If you are not familiar with these tools, you can skip this section for now. The app can run without them, but they provide additional security and flexibility for production deployments.
For managing the deployment and configuration of the app, we also use Ansible. Ansible playbooks automate tasks such as:

- Provisioning AWS infrastructure: create or update RDS instances, S3 buckets (static + uploads), ECR repositories, the Application Load Balancer & Target Group, and ECS clusters and services.
- Building & deploying containers: build, tag, and push backend and frontend Docker images to ECR, then trigger zero-downtime rolling updates of your Fargate service.
- Configuring frontend hosting: sync the Next.js static build to S3 and invalidate CloudFront distributions for instant cache busting.
- Managing application configuration: template out and distribute environment variables, secrets (via AWS Secrets Manager or HashiCorp Vault), and config files to running services.
- Database migrations & backups: run schema migrations on PostgreSQL (via TypeORM) and schedule or trigger automated backups and snapshots.
- Lifecycle & housekeeping: apply S3 lifecycle rules (e.g. purge old uploads), rotate credentials, and clean up unused resources.
- Service discovery & secrets bootstrap: (if using the HashiCorp stack) bootstrap or update Consul/Nomad clusters, deploy Vault auto-unseal configuration, and distribute ACL tokens.
- OS & user management: install OS packages and manage system users, SSH keys, and security hardening on any EC2 hosts you run.
- CI/CD integration: hook into GitHub Actions workflows to run Ansible playbooks on push or merge, ensuring every change is tested and deployed automatically.
Caution
Remember to set up your AWS credentials and permissions correctly before running any Ansible playbooks. Ensure that the IAM user/role has the necessary permissions to create and manage the resources defined in your playbooks.
Tip
For more information on how to use Ansible with AWS, check out the ansible/README.md file and the Ansible AWS documentation.
Warning
NEVER store sensitive information (like AWS credentials, database passwords, etc.) directly in your Ansible playbooks or inventory files. Use Ansible Vault or environment variables to securely manage secrets. Also, NEVER commit your .env files or any sensitive configuration files to version control!
PetSwipe features a sophisticated multi-agent AI system built with LangGraph and LangChain that provides intelligent pet matching, personalized recommendations, and natural language conversations. The system uses an assembly line architecture where specialized agents work together to deliver the best possible pet-to-human matches.
- 🤖 6 Specialized AI Agents - User profiler, pet analyzer, matcher, recommender, conversation, and monitoring agents
- ⚡ Assembly Line Processing - Sequential and parallel agent execution for optimal performance
- 🎯 Intelligent Matching - AI-powered compatibility scoring between users and pets
- 💬 Natural Conversations - Chatbot powered by RAG (Retrieval-Augmented Generation)
- 📊 Production Ready - Full AWS and Azure deployment configurations with monitoring
The agentic AI pipeline processes user requests through multiple stages:
- User Profiling - Analyzes user preferences and swipe history
- Pet Analysis - Extracts personality traits and compatibility factors from pet profiles
- Matching - Calculates compatibility scores using semantic matching
- Recommendations - Generates personalized pet recommendations
- Conversation - Handles natural language interactions
- Monitoring - Tracks performance metrics and system health
For detailed information about the AI system, including installation, configuration, API reference, and deployment guides, see the Agentic AI README.
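The six-stage assembly line above amounts to a sequential composition of agents, each enriching a shared context. A minimal sketch of that pattern (stage names from the list above; everything else is illustrative, not the actual LangGraph code):

```typescript
type Context = Record<string, unknown>;
type Agent = (ctx: Context) => Context;

// Each agent reads the shared context and writes its contribution back.
const profileUser: Agent = (ctx) => ({ ...ctx, profile: "prefers cats" });
const analyzePets: Agent = (ctx) => ({ ...ctx, traits: ["calm", "playful"] });
const match: Agent = (ctx) => ({ ...ctx, score: 0.87 });

// Assembly line: run the agents in order, threading the context through.
function runPipeline(agents: Agent[], seed: Context): Context {
  return agents.reduce((ctx, agent) => agent(ctx), seed);
}

const result = runPipeline([profileUser, analyzePets, match], { userId: "u1" });
console.log(result.score); // 0.87
```

In the real system, LangGraph adds what this sketch omits: parallel branches, retries, and state checkpointing between stages.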
The app also comes with several scripts and utilities to help with development and deployment:
- Seed Pets: `npm run seed:pets`
- Assign Pets: `npm run assign:user`
- TypeORM CLI: `npm run typeorm <command>`
- Swagger: Auto-generated OpenAPI spec
Additionally, a Makefile is included for common tasks:
```makefile
make install   # Install dependencies
make start     # Start the backend server
make dev       # Start the backend server in development mode
make up        # Start the backend server with Docker
make test      # Run tests
make lint      # Lint the codebase
make deploy    # Deploy to AWS
make clean     # Clean up Docker containers
# and more...
```

To start the entire app with Docker, run:

```bash
docker-compose up --build
```

This will pull the images from ECR and start the backend and database containers locally on your machine.
Additionally, there are also Shell scripts that help you run Docker commands easily:
- `pull_and_run.sh`: Pulls the latest Docker image from ECR and runs it.
- `upload_to_ghcr.sh`: Builds the Docker image and uploads it to GitHub Container Registry (remember to set your GitHub PAT and username in the script, or export them in your shell).
PetSwipe also includes a Docker Compute Terminal for running commands inside the Docker container. You can use it to run database migrations, seed data, or any other commands you need.
To hop into the Docker Compute Terminal, run:
```bash
docker-compose exec compute /bin/zsh
```

This will give you a shell inside the Docker container where you can run commands as if you were on a regular terminal, with access to all of the application's installed dependencies and tools.
PetSwipe includes a comprehensive testing suite to ensure the application works as expected. The tests are organized into three main categories:
- Playwright: End-to-end tests for the frontend UI.
- Jest: Unit and integration tests for the backend API and frontend API helper functions.
- Chai & Mocha: Additional tests for specific features and functionalities.
- Commitlint: Ensures commit messages follow the conventional commit format.
Playwright is used for end-to-end testing of the frontend application. It simulates user interactions and verifies that the UI behaves as expected.
To run Playwright tests, navigate to the frontend directory and run:
```bash
cd frontend
npm run test:e2e
```

Jest is used for unit and integration tests in the backend API and some frontend components. It provides a fast and reliable testing framework.
To run Jest tests in the backend, navigate to the backend directory and run:
```bash
cd backend
npm run test
```

This will run all tests in the backend API, including unit tests for controllers and services, and integration tests for the database.
To run Jest tests in the frontend, navigate to the frontend directory and run:
```bash
cd frontend
npm run test
```

Chai and Mocha are used for additional tests in the frontend. To run Chai & Mocha tests, navigate to the frontend directory and run:

```bash
cd frontend
npm run test:mocha
```
The CI/CD pipeline includes the following steps:
- Checkout Code: Pulls the latest code from the repository.
- Set Up Node.js: Installs the specified Node.js version.
- Install Dependencies: Installs the necessary dependencies for both backend and frontend.
- Run Linting: Runs ESLint and Prettier to ensure code quality and formatting.
- Run Tests: Executes Jest tests for the backend and Playwright tests for the frontend.
- Build Frontend: Builds the Next.js frontend application.
- Build Docker Images: Builds Docker images for both backend and frontend.
- Push Docker Images: Pushes the built images to GitHub Container Registry (GHCR).
- Deploy to AWS: Deploys the backend API to AWS ECS and the frontend to Vercel.
- Notify on Failure: Sends notifications via email if any step fails.
- Commitlint: Ensures commit messages follow the conventional commit format.
This setup ensures that every change pushed to the repository is automatically tested, built, and deployed, providing a robust and reliable development workflow.
The app also includes a CLI for managing pets and users. You can run the CLI commands from the root directory:
```bash
petswipe <command> [options]
```

- `petswipe dev`: Start backend & frontend in development mode.
- `petswipe build`: Build both backend & frontend applications.
- `petswipe docker:build`: Build & push Docker images to GitHub Container Registry.
- `petswipe up`: Pull Docker images and start the stack.
- `petswipe down`: Stop the Docker Compose stack.
- `petswipe clean`: Remove build artifacts.
- `petswipe lint`: Run linters in both projects.
- `petswipe test`: Run tests in both projects.
This CLI is designed to make it easier to manage the application and perform common tasks without having to navigate through multiple directories or run multiple commands.
PetSwipe includes comprehensive monitoring and observability using Prometheus and Grafana to track application performance, health, and metrics.
- Prometheus: http://localhost:9090 - Metrics collection and querying
- Grafana: http://localhost:3001 - Dashboards and visualization (admin/admin)
The monitoring stack is automatically included when you run:
```bash
docker-compose up -d
```

This starts Prometheus and Grafana and configures them to monitor your application services, including the backend API, frontend, and database.
- Service uptime and health checks
- Backend API performance metrics
- Frontend application metrics
- Database connection status
- System resource usage
For detailed configuration and customization options, see monitoring/Monitoring.md.
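Prometheus scrapes metrics over HTTP in a simple plain-text exposition format. As an illustration of the kind of output the backend's metrics endpoint serves (hand-rolled here for clarity; a real TypeScript service would typically use a client library such as `prom-client`):

```typescript
// Render counters in the Prometheus text exposition format:
//   # HELP <name> <help>
//   # TYPE <name> counter
//   <name>{label="value"} <count>
function renderCounters(
  name: string,
  help: string,
  samples: Array<{ labels: Record<string, string>; value: number }>,
): string {
  const lines = [`# HELP ${name} ${help}`, `# TYPE ${name} counter`];
  for (const s of samples) {
    const labels = Object.entries(s.labels)
      .map(([k, v]) => `${k}="${v}"`)
      .join(",");
    lines.push(`${name}{${labels}} ${s.value}`);
  }
  return lines.join("\n") + "\n";
}

console.log(
  renderCounters("http_requests_total", "Total HTTP requests", [
    { labels: { method: "GET", route: "/api/pets" }, value: 42 },
  ]),
);
```

Prometheus polls this text on a fixed interval, and Grafana queries the stored time series to drive the dashboards.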
- Fork the repo & clone
- Create a feature branch
- Code, lint, test
  - IMPORTANT: Run `npm run format` in the root directory to format all files before committing!
- Open a Pull Request. We'll review and merge it if it is meaningful and useful!
Please follow the existing code style (ESLint, Prettier, TypeScript). This helps maintain a clean and consistent codebase!
© 2025 Son Nguyen. I hope this code is useful for you and your projects. Feel free to use it, modify it, and share it with others.
Licensed under the MIT License. See LICENSE for details.
Important
In short, you may use this code for personal or educational purposes, but please do not use it for commercial purposes without permission. If you do use this code, please give proper credit to the original author.
This application is built with ❤️ by Son Nguyen in 2025:
Feel free to reach out for any questions, suggestions, or even collaborations!
❤️ Thank you for helping pets find their forever homes! 🐶🐱