Diwash Bhandari

Software Developer | AI Enthusiast

Versatile Software Developer with over 4 years of experience in designing, developing, and testing high-quality software solutions.

Kathmandu, Nepal

About Me

My professional profile and what I bring to the table

Versatile Software Developer with over 4 years of experience in designing, developing, and testing high-quality software solutions. Strong expertise in implementing robust testing strategies to ensure code reliability, maintainability, and performance. Adept at collaborating across cross-functional teams to deliver user-focused, scalable applications that align with business objectives. Skilled in translating complex requirements into effective technical solutions, focusing on clarity, visualization, and continuous improvement. Known for delivering clean, efficient code and driving innovation in fast-paced development environments.

5+

Years of Experience

Since 2020

10+

Projects Completed

And counting...

4+

Technologies

Always learning

Clean Code

Writing maintainable, scalable, and efficient code with proper testing strategies

Innovation

Leveraging cutting-edge technologies and AI to solve complex business problems

Collaboration

Working effectively with cross-functional teams to deliver user-focused solutions

Work Experience

A journey of growth and innovation in software development

Software Engineer

Codavatar Tech Pvt. Ltd.

June 2023 - Present

Kalopul Commercial Building, Kalopul, Kathmandu

  • Designed and developed scalable RESTful APIs using FastAPI and Starlette, enabling high-throughput data handling and improved backend responsiveness for SaaS platforms.

  • Architected modular microservices to support distributed systems, simplifying deployments and enhancing maintainability across enterprise-grade applications.

  • Implemented real-time features with WebSockets and asynchronous programming, improving user experience and enabling live collaboration in multi-tenant environments.

  • Built robust internal frameworks and developer tooling to streamline onboarding, enforce standards, and reduce integration overhead across teams.

  • Worked extensively with technologies like PostgreSQL, Redis, gRPC, and GraphQL to deliver reliable, secure, and high-performance backend systems.

Software Engineer

Chuchuro Firm

May 2022 - July 2023

Sinamangal, Kathmandu

  • Developed and maintained Python applications using Peewee ORM, Tornado Framework, RabbitMQ, and Meilisearch, enhancing performance and scalability.

  • Ensured high code quality by writing clean, maintainable, and testable Python code, and implemented rigorous testing with Pytest, improving software reliability.

  • Implemented RabbitMQ for asynchronous processing, optimizing system efficiency and throughput, and integrated Meilisearch to enhance search capabilities.

  • Contributed to QA efforts by combining manual and automated testing to effectively identify and resolve issues, leading to a more stable and user-friendly product.

Intern, Associate Software Engineer

Young Minds Creation (P) Ltd

December 2020 - April 2022

Young Minds Tower

  • Developed and maintained complex Laravel-based web applications, ensuring strong performance and scalability.

  • Wrote clean, maintainable, and testable PHP code, utilizing Laravel's built-in features to enhance application functionality and reliability.

  • Built and integrated RESTful APIs for seamless data exchange between systems, implementing security measures such as password hashing and encryption to protect data.

  • Extended application functionality by integrating third-party packages and libraries, contributing to a more versatile and feature-rich platform.

Graphic Designer

Pinches Artcore

May 2019 - Jan 2020

  • Created designs for various mediums, including print materials, digital platforms, and social media.

  • Designed logos, brochures, flyers, posters, and other marketing materials.

  • Worked with clients to understand their design needs and preferences.

Featured Projects

A selection of my recent work

ResearchGen

AI-powered generator for personalized research assistant applications, with flexible API key management and support for multiple AI providers.

FastAPI
OpenAI GPT
Google Gemini

IntelliDocs AI

RAG-powered chatbot for customer support. Provides accurate, context-aware responses. Built with FastAPI and hexagonal architecture for maintainability and testability.

FastAPI
LangChain
ChromaDB
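As a rough illustration of the retrieval-augmented generation flow behind a chatbot like this, here is a minimal sketch; the Retriever and Generator interfaces are hypothetical stand-ins for the project's LangChain and ChromaDB adapters, in the ports-and-adapters spirit of the hexagonal architecture mentioned above.

    # Sketch of the core RAG flow: retrieve relevant chunks, then generate a
    # grounded answer. Interfaces are illustrative, not the project's real code.
    from typing import Protocol

    class Retriever(Protocol):
        def top_k(self, query: str, k: int = 3) -> list[str]: ...

    class Generator(Protocol):
        def complete(self, prompt: str) -> str: ...

    def answer(question: str, retriever: Retriever, generator: Generator) -> str:
        # 1. Pull the most relevant document chunks for the question.
        context = "\n".join(retriever.top_k(question))
        # 2. Ask the model to answer using only that context.
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return generator.complete(prompt)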

ResumeCraft

Collaborative resume builder using Starlette and WebSockets for real-time editing.

Starlette
WebSockets
JavaScript
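For readers unfamiliar with how real-time collaborative editing over WebSockets works, here is a minimal Starlette sketch, assuming a hypothetical /ws/resume route and a naive in-memory broadcast; the actual ResumeCraft implementation may differ.

    from starlette.applications import Starlette
    from starlette.routing import WebSocketRoute
    from starlette.websockets import WebSocket, WebSocketDisconnect

    connected: set[WebSocket] = set()  # all clients editing the same resume

    async def edit_session(ws: WebSocket) -> None:
        await ws.accept()
        connected.add(ws)
        try:
            while True:
                change = await ws.receive_text()       # an edit from one client
                for peer in connected:
                    if peer is not ws:
                        await peer.send_text(change)   # fan the edit out to the others
        except WebSocketDisconnect:
            pass
        finally:
            connected.discard(ws)

    app = Starlette(routes=[WebSocketRoute("/ws/resume", edit_session)])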

Bash-based CI/CD pipeline

Lightweight CI/CD tool written in Bash that automates pull, test, Docker build, deploy, and notify steps.

Bash
Docker
CI/CD

SMS Spam Classifier

Real-time SMS spam detection app using FastAPI, Streamlit UI, and a Naive Bayes model.

FastAPI
Streamlit
Machine Learning
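To make the Naive Bayes part concrete, here is a minimal scikit-learn sketch of the kind of text-classification pipeline such an app might wrap behind its API; the two training messages are placeholders, not the real SMS corpus.

    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Placeholder training data; the real app trains on a labelled SMS dataset.
    texts = ["Win a free prize now!!!", "Are we still meeting at 5?"]
    labels = ["spam", "ham"]

    model = Pipeline([
        ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
        ("nb", MultinomialNB()),
    ])
    model.fit(texts, labels)

    print(model.predict(["Claim your free reward today"]))  # e.g. ['spam']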

Disposable Email Checker

Fast, scalable API for detecting disposable emails with asynchronous validation and real-time stats.

Starlette
AsyncIO
API
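Conceptually, the check reduces to comparing the address's domain against a blocklist. A minimal Starlette sketch of that idea follows, with a tiny hypothetical domain set standing in for the real, regularly updated list.

    from starlette.applications import Starlette
    from starlette.responses import JSONResponse
    from starlette.routing import Route

    # Illustrative blocklist only; a real service loads a large, maintained list.
    DISPOSABLE_DOMAINS = {"mailinator.com", "10minutemail.com"}

    async def check(request):
        email = request.query_params.get("email", "")
        domain = email.rsplit("@", 1)[-1].lower()
        return JSONResponse({"email": email, "disposable": domain in DISPOSABLE_DOMAINS})

    app = Starlette(routes=[Route("/check", check)])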

Site Monitor App

Asynchronous Python-based tool for real-time website monitoring and uptime tracking.

Python
AsyncIO
Monitoring
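A tool like this typically boils down to polling a list of URLs concurrently. Here is a minimal asyncio/aiohttp sketch of that loop, with placeholder URLs and a fixed interval; it is an illustration, not the project's actual code.

    import asyncio
    import aiohttp

    SITES = ["https://example.com", "https://example.org"]  # placeholder targets

    async def check(session: aiohttp.ClientSession, url: str) -> None:
        try:
            async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
                print(f"{url} -> {resp.status}")
        except (aiohttp.ClientError, asyncio.TimeoutError) as exc:
            print(f"{url} -> DOWN ({exc!r})")

    async def monitor(interval: int = 60) -> None:
        async with aiohttp.ClientSession() as session:
            while True:
                # Check every site concurrently, then sleep until the next round.
                await asyncio.gather(*(check(session, url) for url in SITES))
                await asyncio.sleep(interval)

    if __name__ == "__main__":
        asyncio.run(monitor())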

News Classification

Machine learning model to classify news articles into categories using text analysis and ML pipelines.

Scikit-learn
NLP
Python

Stock Price Prediction

Forecasting stock prices using historical data and machine learning regression models.

Machine Learning
Pandas
Time Series
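A common baseline for this kind of forecasting is regression on lagged prices. The sketch below shows that idea with pandas and scikit-learn on a synthetic series; the real project presumably uses actual historical data and richer features.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Synthetic closing prices stand in for real historical data.
    prices = pd.Series(100 + np.cumsum(np.random.randn(200)), name="close")

    # Lagged features: predict today's close from the previous three closes.
    frame = pd.concat({f"lag_{i}": prices.shift(i) for i in (1, 2, 3)}, axis=1)
    frame["target"] = prices
    frame = frame.dropna()

    X, y = frame[["lag_1", "lag_2", "lag_3"]], frame["target"]
    model = LinearRegression().fit(X[:-20], y[:-20])       # train on all but the last 20 days
    print("holdout R^2:", model.score(X[-20:], y[-20:]))   # evaluate on the most recent 20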

Flight Fare Prediction

Trained machine learning model to predict airline ticket prices from historical flight data.

Regression
Scikit-learn
Pandas

Recent Articles

Exploring ideas in tech, development, and beyond

Ever found yourself staring at Docker containers that refuse to talk to each other? Let me share a story that might sound familiar. Last Tuesday, I was knee-deep in microservices architecture (because apparently, I enjoy making my life complicated), and I hit a wall that many of us have faced: I had my shiny FastAPI backend running in one Docker Compose setup, my trusty PostgreSQL database humming along in another, and they were acting like strangers at a party who refuse to make eye contact. If you’re reading this, chances are you’re in the same boat. Maybe you’re following a microservices pattern, or perhaps you just want to keep your database and API services cleanly separated. Whatever your reason, I’ve got your back. The Problem: When Containers Live in Isolation Here’s what I started with — sound familiar? Backend Setup (docker-compose-backend.yml): version: '3.8'services: fastapi: build: . ports: - "8000:8000" environment: - ENVIRONMENT=development Database Setup (docker-compose-database.yml): version: '3.8'services: postgres: image: postgres:15 environment: POSTGRES_DB: myapp POSTGRES_USER: developer POSTGRES_PASSWORD: secretpassword ports: - "5432:5432" Looks reasonable, right? Both services start up fine, but when FastAPI tries to connect to PostgreSQL, it’s like they’re speaking different languages. The containers are isolated in their own networks, blissfully unaware of each other’s existence. The Real-World Project: TaskMaster Instead of showing you another “Hello World” example, let me walk you through how I solved this problem while building TaskMaster — a task management API that I built for a client. This is a real application with authentication, database relationships, and all the complexity you’ll face in production. TaskMaster needed: User registration and JWT authentication Task creation with categories and priorities Real database relationships Production-ready error handling Health checks and monitoring Perfect for demonstrating Docker networking in action. Solution 1: Docker Networks — The Professional Approach After some coffee and Stack Overflow diving, I discovered Docker networks. Think of them as virtual meeting rooms where your containers can actually have conversations. Step 1: Create a Shared Network First, let’s create a network that both our services can join: docker network create taskmaster-network It’s that simple. Now we have a virtual space where our containers can find each other. Step 2: The Database Setup Here’s the production-ready database configuration I used for TaskMaster: # docker-compose-database.ymlversion: '3.8'services: taskmaster-db: image: postgres:15 container_name: taskmaster_postgres environment: POSTGRES_DB: taskmaster POSTGRES_USER: taskuser POSTGRES_PASSWORD: supersecretpassword POSTGRES_HOST_AUTH_METHOD: trust # Development only! ports: - "5432:5432" volumes: - taskmaster_data:/var/lib/postgresql/data - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql networks: - taskmaster-network healthcheck: test: ["CMD-SHELL", "pg_isready -U taskuser -d taskmaster"] interval: 30s timeout: 10s retries: 3 restart: unless-stoppedvolumes: taskmaster_data:networks: taskmaster-network: external: true Step 3: The FastAPI Backend Configuration # docker-compose-backend.ymlversion: '3.8'services: taskmaster-api: build: context: . 
dockerfile: Dockerfile container_name: taskmaster_fastapi ports: - "8000:8000" environment: - DATABASE_URL=postgresql://taskuser:supersecretpassword@taskmaster_postgres:5432/taskmaster - SECRET_KEY=your-secret-jwt-key-change-in-production - ENVIRONMENT=development networks: - taskmaster-network restart: unless-stopped depends_on: taskmaster_postgres: condition: service_healthynetworks: taskmaster-network: external: true Step 4: The Real FastAPI Application Here’s the actual TaskMaster code — not a toy example, but production-ready code: Database Models (models.py): from sqlalchemy import Column, Integer, String, Text, DateTime, Boolean, ForeignKey, Enumfrom sqlalchemy.ext.declarative import declarative_basefrom sqlalchemy.orm import relationshipfrom sqlalchemy.dialects.postgresql import UUIDfrom datetime import datetimeimport uuidimport enumBase = declarative_base()class Priority(enum.Enum): low = "low" medium = "medium" high = "high" urgent = "urgent"class Status(enum.Enum): pending = "pending" in_progress = "in_progress" completed = "completed" cancelled = "cancelled"class User(Base): __tablename__ = "users" id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4) username = Column(String(50), unique=True, nullable=False, index=True) email = Column(String(100), unique=True, nullable=False, index=True) full_name = Column(String(100)) hashed_password = Column(String(255), nullable=False) is_active = Column(Boolean, default=True) created_at = Column(DateTime, default=datetime.utcnow) tasks = relationship("Task", back_populates="owner", cascade="all, delete-orphan")class Task(Base): __tablename__ = "tasks" id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4) title = Column(String(200), nullable=False) description = Column(Text) priority = Column(Enum(Priority), default=Priority.medium) status = Column(Enum(Status), default=Status.pending) created_at = Column(DateTime, default=datetime.utcnow) updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow) due_date = Column(DateTime) owner_id = Column(UUID(as_uuid=True), ForeignKey("users.id"), nullable=False) owner = relationship("User", back_populates="tasks") Database Connection (database.py): from sqlalchemy import create_enginefrom sqlalchemy.ext.declarative import declarative_basefrom sqlalchemy.orm import sessionmakerimport osimport timeimport logging# Set up logginglogging.basicConfig(level=logging.INFO)logger = logging.getLogger(__name__)DATABASE_URL = os.getenv( "DATABASE_URL", "postgresql://taskuser:supersecretpassword@localhost:5432/taskmaster")# Create engine with connection poolingengine = create_engine( DATABASE_URL, pool_size=10, max_overflow=20, pool_pre_ping=True, # Validates connections before use echo=True # Set to False in production)SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)Base = declarative_base()def get_database(): """Dependency to get database session""" db = SessionLocal() try: yield db except Exception as e: logger.error(f"Database error: {e}") db.rollback() raise finally: db.close()def wait_for_database(max_retries=30, delay=1): """Wait for database to be ready""" retries = 0 while retries &lt; max_retries: try: # Try to create a connection connection = engine.connect() connection.close() logger.info("Database connection successful!") return True except Exception as e: retries += 1 logger.warning(f"Database connection failed (attempt {retries}/{max_retries}): {e}") time.sleep(delay) logger.error("Could not connect to database after 
maximum retries") return False Main FastAPI Application (main.py): from fastapi import FastAPI, Depends, HTTPException, statusfrom fastapi.security import HTTPBearer, HTTPAuthorizationCredentialsfrom sqlalchemy.orm import Sessionfrom pydantic import BaseModel, EmailStrfrom passlib.context import CryptContextfrom jose import JWTError, jwtfrom datetime import datetime, timedeltaimport loggingfrom database import get_database, wait_for_database, enginefrom models import Base, User, Task, Priority, Statuslogger = logging.getLogger(__name__)# Security setupSECRET_KEY = os.getenv("SECRET_KEY", "your-secret-key-change-in-production")ALGORITHM = "HS256"ACCESS_TOKEN_EXPIRE_MINUTES = 30pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")security = HTTPBearer()app = FastAPI( title="TaskMaster API", version="1.0.0", description="A production-ready task management API")# Pydantic modelsclass UserCreate(BaseModel): username: str email: EmailStr password: str full_name: str = Noneclass UserLogin(BaseModel): username: str password: strclass TaskCreate(BaseModel): title: str description: str = None priority: Priority = Priority.medium due_date: datetime = Noneclass TaskResponse(BaseModel): id: str title: str description: str priority: Priority status: Status created_at: datetime due_date: datetime = None class Config: from_attributes = True# Authentication utilitiesdef verify_password(plain_password, hashed_password): return pwd_context.verify(plain_password, hashed_password)def get_password_hash(password): return pwd_context.hash(password)def create_access_token(data: dict, expires_delta: timedelta = None): to_encode = data.copy() if expires_delta: expire = datetime.utcnow() + expires_delta else: expire = datetime.utcnow() + timedelta(minutes=15) to_encode.update({"exp": expire}) encoded_jwt = jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM) return encoded_jwtasync def get_current_user( credentials: HTTPAuthorizationCredentials = Depends(security), db: Session = Depends(get_database)): credentials_exception = HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Could not validate credentials", headers={"WWW-Authenticate": "Bearer"}, ) try: payload = jwt.decode(credentials.credentials, SECRET_KEY, algorithms=[ALGORITHM]) username: str = payload.get("sub") if username is None: raise credentials_exception except JWTError: raise credentials_exception user = db.query(User).filter(User.username == username).first() if user is None: raise credentials_exception return user@app.on_event("startup")async def startup_event(): """Initialize database connection on startup""" logger.info("Starting TaskMaster API...") if not wait_for_database(): raise Exception("Could not connect to database") # Create tables Base.metadata.create_all(bind=engine) logger.info("Database tables created successfully")@app.get("/")async def root(): return {"message": "Welcome to TaskMaster API! 
🚀", "status": "operational"}@app.get("/health")async def health_check(db: Session = Depends(get_database)): """Health check endpoint that tests database connection""" try: # Simple query to test connection result = db.execute("SELECT 1") return {"status": "healthy", "database": "connected", "timestamp": datetime.utcnow()} except Exception as e: logger.error(f"Health check failed: {e}") raise HTTPException(status_code=503, detail="Database connection failed")@app.post("/register")async def register(user: UserCreate, db: Session = Depends(get_database)): """Register a new user""" # Check if user exists if db.query(User).filter(User.username == user.username).first(): raise HTTPException(status_code=400, detail="Username already registered") if db.query(User).filter(User.email == user.email).first(): raise HTTPException(status_code=400, detail="Email already registered") # Create new user hashed_password = get_password_hash(user.password) db_user = User( username=user.username, email=user.email, full_name=user.full_name, hashed_password=hashed_password ) db.add(db_user) db.commit() db.refresh(db_user) return {"message": "User registered successfully", "user_id": str(db_user.id)}@app.post("/login")async def login(user: UserLogin, db: Session = Depends(get_database)): """Login user and return JWT token""" db_user = db.query(User).filter(User.username == user.username).first() if not db_user or not verify_password(user.password, db_user.hashed_password): raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Incorrect username or password" ) access_token_expires = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES) access_token = create_access_token( data={"sub": db_user.username}, expires_delta=access_token_expires ) return {"access_token": access_token, "token_type": "bearer"}@app.post("/tasks", response_model=TaskResponse)async def create_task( task: TaskCreate, current_user: User = Depends(get_current_user), db: Session = Depends(get_database)): """Create a new task""" db_task = Task( title=task.title, description=task.description, priority=task.priority, due_date=task.due_date, owner_id=current_user.id ) db.add(db_task) db.commit() db.refresh(db_task) return db_task@app.get("/tasks", response_model=list[TaskResponse])async def get_tasks( current_user: User = Depends(get_current_user), db: Session = Depends(get_database), status_filter: Status = None, priority_filter: Priority = None): """Get user's tasks with optional filtering""" query = db.query(Task).filter(Task.owner_id == current_user.id) if status_filter: query = query.filter(Task.status == status_filter) if priority_filter: query = query.filter(Task.priority == priority_filter) tasks = query.order_by(Task.created_at.desc()).all() return tasks@app.put("/tasks/{task_id}/status")async def update_task_status( task_id: str, new_status: Status, current_user: User = Depends(get_current_user), db: Session = Depends(get_database)): """Update task status""" task = db.query(Task).filter( Task.id == task_id, Task.owner_id == current_user.id ).first() if not task: raise HTTPException(status_code=404, detail="Task not found") task.status = new_status task.updated_at = datetime.utcnow() db.commit() return {"message": "Task status updated successfully"} Step 5: Testing Our Real TaskMaster API Now comes the moment of truth. 
Let’s start our services and test the real functionality: # Start the database firstdocker-compose -f docker-compose-database.yml up -d# Wait for postgres to initialize (check the logs)docker-compose -f docker-compose-database.yml logs -f taskmaster-db# Once you see "database system is ready to accept connections"# Start the backenddocker-compose -f docker-compose-backend.yml up -d# Check the logs to see if connection workeddocker-compose -f docker-compose-backend.yml logs -f taskmaster-api Testing the API endpoints: # Health checkcurl http://localhost:8000/health# Register a new usercurl -X POST "http://localhost:8000/register" \ -H "Content-Type: application/json" \ -d '{ "username": "testuser", "email": "test@example.com", "password": "testpassword123", "full_name": "Test User" }'# Login to get JWT tokencurl -X POST "http://localhost:8000/login" \ -H "Content-Type: application/json" \ -d '{ "username": "testuser", "password": "testpassword123" }'# Use the token to create a task (replace YOUR_TOKEN with the actual token)curl -X POST "http://localhost:8000/tasks" \ -H "Content-Type: application/json" \ -H "Authorization: Bearer YOUR_TOKEN" \ -d '{ "title": "Test Docker connection", "description": "Make sure FastAPI can talk to PostgreSQL", "priority": "high" }'# Get your taskscurl -X GET "http://localhost:8000/tasks" \ -H "Authorization: Bearer YOUR_TOKEN" When I first got this working, I literally did a little victory dance. Seeing that JSON response with my task data coming from PostgreSQL running in a separate container was magical. Expected successful response: [ { "id": "123e4567-e89b-12d3-a456-426614174000", "title": "Test Docker connection", "description": "Make sure FastAPI can talk to PostgreSQL", "priority": "high", "status": "pending", "created_at": "2024-03-15T10:30:00.000Z", "due_date": null }] Solution 2: The Host Network Approach (When Networks Feel Overkill) Sometimes you just want something simple that works. If you’re developing locally and don’t need the full network isolation, you can connect through the host: # docker-compose-backend.yml (Alternative approach)version: '3.8'services: taskmaster-api: build: . ports: - "8000:8000" environment: # Connect via host machine - DATABASE_URL=postgresql://taskuser:supersecretpassword@host.docker.internal:5432/taskmaster extra_hosts: - "host.docker.internal:host-gateway" # For Linux compatibility This approach uses your host machine as a bridge between containers. It’s simpler but less secure for production. The Dockerfile That Actually Works Here’s a Dockerfile that I’ve learned works reliably with database connections: FROM python:3.11-slimWORKDIR /app# Install system dependenciesRUN apt-get update &amp;&amp; apt-get install -y \ gcc \ postgresql-client \ curl \ &amp;&amp; rm -rf /var/lib/apt/lists/*# Copy requirements first (for better caching)COPY requirements.txt .RUN pip install --no-cache-dir -r requirements.txt# Copy application codeCOPY . 
.# Create non-root userRUN useradd -m -u 1000 appuser &amp;&amp; chown -R appuser:appuser /appUSER appuser# Health checkHEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \ CMD curl -f http://localhost:8000/health || exit 1EXPOSE 8000CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"] The requirements.txt for TaskMaster: fastapi==0.104.1uvicorn[standard]==0.24.0sqlalchemy==2.0.23psycopg2-binary==2.9.9python-jose[cryptography]==3.3.0passlib[bcrypt]==1.7.4python-multipart==0.0.6pydantic[email]==2.5.0 Debugging: When Things Go Wrong (And They Will) Here are the commands that have saved my sanity more times than I can count: # Check if containers can see each otherdocker exec -it taskmaster_fastapi ping taskmaster_postgres# View container logsdocker-compose -f docker-compose-backend.yml logs -f taskmaster-apidocker-compose -f docker-compose-database.yml logs -f taskmaster-db# Check network connectivitydocker network lsdocker network inspect taskmaster-network# Connect to postgres directly to testdocker exec -it taskmaster_postgres psql -U taskuser -d taskmaster# Check if FastAPI can reach postgresdocker exec -it taskmaster_fastapi nc -zv taskmaster_postgres 5432# Test database connection from within containerdocker exec -it taskmaster_fastapi python -c "from database import wait_for_databaseprint('Database connection:', wait_for_database())" Production Lessons Learned from TaskMaster After running TaskMaster in production for 6 months, here are the real-world insights: 1. Monitoring is Critical I added health checks everywhere and monitoring with Prometheus: # Add to main.pyfrom prometheus_client import Counter, Histogram, generate_latest, CONTENT_TYPE_LATESTREQUEST_COUNT = Counter('http_requests_total', 'Total HTTP requests', ['method', 'endpoint'])REQUEST_LATENCY = Histogram('http_request_duration_seconds', 'HTTP request latency')@app.middleware("http")async def add_metrics(request: Request, call_next): start_time = time.time() response = await call_next(request) process_time = time.time() - start_time REQUEST_COUNT.labels(method=request.method, endpoint=request.url.path).inc() REQUEST_LATENCY.observe(process_time) return response@app.get("/metrics")async def metrics(): return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST) 2. Database Backups Are Non-Negotiable I learned this the hard way when a container crashed and I almost lost client data: # Add to your database compose fileservices: taskmaster-db: # ... existing config ... volumes: - taskmaster_data:/var/lib/postgresql/data - ./backups:/backups # Backup service db-backup: image: postgres:15 container_name: taskmaster_backup environment: - PGPASSWORD=supersecretpassword volumes: - ./backups:/backups - /etc/localtime:/etc/localtime:ro networks: - taskmaster-network command: | bash -c " while true; do echo 'Creating backup...' pg_dump -h taskmaster_postgres -U taskuser taskmaster &gt; /backups/taskmaster_$(date +%Y%m%d_%H%M%S).sql find /backups -name '*.sql' -mtime +7 -delete # Keep backups for 7 days sleep 86400 # Run daily done " depends_on: - taskmaster-db 3. 
Load Testing Revealed Surprises I used locust to load test TaskMaster and discovered bottlenecks I never expected: # locustfile.pyfrom locust import HttpUser, task, betweenimport jsonclass TaskMasterUser(HttpUser): wait_time = between(1, 3) def on_start(self): self.register_and_login() def register_and_login(self): # Register user user_data = { "username": f"loadtest_{self.user_id}", "email": f"@example.com"&gt;loadtest_{self.user_id}@example.com", "password": "testpass123", "full_name": "Load Test User" } self.client.post("/register", json=user_data) # Login and get token login_response = self.client.post("/login", json={ "username": user_data["username"], "password": "testpass123" }) self.token = login_response.json()["access_token"] self.headers = {"Authorization": f"Bearer {self.token}"} @task(3) def get_tasks(self): self.client.get("/tasks", headers=self.headers) @task(1) def create_task(self): task_data = { "title": "Load test task", "description": "Testing system load", "priority": "medium" } self.client.post("/tasks", json=task_data, headers=self.headers) @task(1) def health_check(self): self.client.get("/health") Run with: locust -f locustfile.py --host=http://localhost:8000 This revealed that my connection pool was too small and JWT token validation was a bottleneck. The Alternative: Single Compose File After building TaskMaster with separate compose files, my client asked: “Why not just put everything in one file?” Good question. Here’s when I recommend each approach: Use separate compose files when: Different teams manage database and API Different deployment schedules Production scaling needs Microservices architecture Use single compose file when: Small team (2–3 developers) Simple application architecture Development environment only Quick prototyping Here’s how TaskMaster would look as a single file: # docker-compose-taskmaster-all.ymlversion: '3.8'services: taskmaster-db: image: postgres:15 container_name: taskmaster_postgres environment: POSTGRES_DB: taskmaster POSTGRES_USER: taskuser POSTGRES_PASSWORD: supersecretpassword volumes: - taskmaster_data:/var/lib/postgresql/data - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql healthcheck: test: ["CMD-SHELL", "pg_isready -U taskuser -d taskmaster"] interval: 30s timeout: 10s retries: 3 networks: - taskmaster-network taskmaster-api: build: . 
container_name: taskmaster_fastapi ports: - "8000:8000" environment: - DATABASE_URL=postgresql://taskuser:supersecretpassword@taskmaster-db:5432/taskmaster - SECRET_KEY=your-secret-jwt-key-change-in-production - ENVIRONMENT=development depends_on: taskmaster-db: condition: service_healthy networks: - taskmaster-network restart: unless-stopped # Optional: Add Redis for caching/sessions redis: image: redis:7-alpine container_name: taskmaster_redis networks: - taskmaster-network command: redis-server --appendonly yes volumes: - redis_data:/datavolumes: taskmaster_data: redis_data:networks: taskmaster-network: driver: bridge Starting everything with one command: docker-compose -f docker-compose-taskmaster-all.yml up -d Production Architecture: What TaskMaster Looks Like Today After all the iterations, here’s the production-ready setup I’m proud of: TaskMaster Production Architecture┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐│ Load Balancer │────│ FastAPI API │────│ PostgreSQL ││ (Nginx) │ │ (4 workers) │ │ (Primary) │└─────────────────┘ └──────────────────┘ └─────────────────┘ │ │ ┌──────────────────┐ │ │ Redis │ │ │ (Cache/Queue) │ │ └──────────────────┘ │ │ ┌─────────────────┐ │ PostgreSQL │ │ (Read Replica)│ └─────────────────┘ Each component runs in its own Docker Compose stack, connected through Docker networks. It’s served over 2 million API requests without a hitch. Production Security Improvements: # Production security improvementsservices: taskmaster-db: environment: - POSTGRES_HOST_AUTH_METHOD=md5 # Not 'trust' - POSTGRES_INITDB_ARGS=--auth-host=md5 command: | postgres -c log_statement=all -c log_min_duration_statement=1000 -c max_connections=100 -c shared_preload_libraries=pg_stat_statements deploy: resources: limits: cpus: '1.0' memory: 1G reservations: cpus: '0.5' memory: 512M taskmaster-api: deploy: resources: limits: cpus: '0.5' memory: 512M reservations: cpus: '0.25' memory: 256M Common Pitfalls and How to Avoid Them 1. Container Name vs Service Name Confusion # ❌ Wrong - using service name when you need container name- DATABASE_URL=postgresql://user:pass@taskmaster-db:5432/db# ✅ Correct - using container name for cross-compose communication- DATABASE_URL=postgresql://user:pass@taskmaster_postgres:5432/db 2. Network Creation Timing # ❌ Wrong - containers fail to find networkdocker-compose -f docker-compose-backend.yml up -ddocker-compose -f docker-compose-database.yml up -d# ✅ Correct - create network firstdocker network create taskmaster-networkdocker-compose -f docker-compose-database.yml up -ddocker-compose -f docker-compose-backend.yml up -d 3. 
Database Connection Timing # ❌ Wrong - no retry logicdef connect_to_database(): engine = create_engine(DATABASE_URL) return engine.connect() # Fails if DB not ready# ✅ Correct - with retry logic (as shown in our wait_for_database function)def wait_for_database(max_retries=30, delay=1): for attempt in range(max_retries): try: connection = engine.connect() connection.close() return True except Exception as e: time.sleep(delay) return False Troubleshooting Guide: When Things Break Error: “Could not translate host name” # Problem: Container can't resolve other container's name# Solution: Check network configurationdocker network inspect taskmaster-networkdocker exec -it taskmaster_fastapi nslookup taskmaster_postgres Error: “Connection refused on port 5432” # Problem: PostgreSQL not ready or wrong port# Solution: Check database health and port mappingdocker-compose -f docker-compose-database.yml logs taskmaster-dbdocker exec -it taskmaster_postgres pg_isready -U taskuser Error: “Authentication failed” # Problem: Wrong credentials or auth method# Solution: Check environment variables and pg_hba.confdocker exec -it taskmaster_postgres cat /var/lib/postgresql/data/pg_hba.conf Performance Optimization Tips 1. Connection Pool Tuning Based on TaskMaster’s production metrics: # Optimized connection pool settingsengine = create_engine( DATABASE_URL, pool_size=20, # Base connections max_overflow=30, # Additional connections under load pool_pre_ping=True, # Validate connections pool_recycle=3600, # Recycle connections every hour echo=False # Disable SQL logging in production) 2. Database Indexing -- Add these indexes for better TaskMaster performanceCREATE INDEX CONCURRENTLY idx_tasks_owner_status ON tasks(owner_id, status);CREATE INDEX CONCURRENTLY idx_tasks_created_at ON tasks(created_at DESC);CREATE INDEX CONCURRENTLY idx_users_username ON users(username);CREATE INDEX CONCURRENTLY idx_users_email ON users(email); 3. FastAPI Optimization # Add to main.py for better performancefrom fastapi.middleware.gzip import GZipMiddlewarefrom fastapi.middleware.trustedhost import TrustedHostMiddlewareapp.add_middleware(GZipMiddleware, minimum_size=1000)app.add_middleware(TrustedHostMiddleware, allowed_hosts=["*"]) # Configure for production# Enable response caching for read-only endpointsfrom functools import lru_cache@lru_cache(maxsize=100)def get_user_task_count(user_id: str, db: Session): return db.query(Task).filter(Task.owner_id == user_id).count() Environment-Specific Configurations Development Environment # docker-compose-dev.ymlversion: '3.8'services: taskmaster-api: build: . 
volumes: - .:/app # Hot reload for development environment: - ENVIRONMENT=development - DEBUG=true - LOG_LEVEL=DEBUG command: ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"] Production Environment # docker-compose-prod.ymlversion: '3.8'services: taskmaster-api: image: taskmaster-api:latest # Pre-built image environment: - ENVIRONMENT=production - DEBUG=false - LOG_LEVEL=INFO - WORKERS=4 command: ["gunicorn", "main:app", "-w", "4", "-k", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000"] deploy: replicas: 3 resources: limits: memory: 512M reservations: memory: 256M Testing Strategy Integration Testing # test_integration.pyimport pytestimport requestsfrom sqlalchemy import create_enginefrom sqlalchemy.orm import sessionmakerfrom testcontainers.postgres import PostgresContainerfrom testcontainers.compose import DockerCompose@pytest.fixture(scope="session")def docker_services(): with DockerCompose(".", compose_file_name="docker-compose-test.yml") as compose: compose.wait_for("http://localhost:8001/health") yield composedef test_full_user_workflow(docker_services): base_url = "http://localhost:8001" # Register user register_response = requests.post(f"{base_url}/register", json={ "username": "testuser", "email": "test@example.com", "password": "testpass123", "full_name": "Test User" }) assert register_response.status_code == 200 # Login login_response = requests.post(f"{base_url}/login", json={ "username": "testuser", "password": "testpass123" }) assert login_response.status_code == 200 token = login_response.json()["access_token"] # Create task headers = {"Authorization": f"Bearer {token}"} task_response = requests.post(f"{base_url}/tasks", json={"title": "Test Task", "description": "Integration test"}, headers=headers ) assert task_response.status_code == 200 # Get tasks tasks_response = requests.get(f"{base_url}/tasks", headers=headers) assert tasks_response.status_code == 200 assert len(tasks_response.json()) == 1 Docker Compose for Testing # docker-compose-test.ymlversion: '3.8'services: test-db: image: postgres:15 environment: POSTGRES_DB: taskmaster_test POSTGRES_USER: testuser POSTGRES_PASSWORD: testpass tmpfs: - /var/lib/postgresql/data # In-memory for faster tests test-api: build: . ports: - "8001:8000" environment: - DATABASE_URL=postgresql://testuser:testpass@test-db:5432/taskmaster_test - SECRET_KEY=test-secret-key depends_on: - test-db CI/CD Pipeline GitHub Actions Workflow # .github/workflows/ci.ymlname: TaskMaster CI/CDon: push: branches: [ main, develop ] pull_request: branches: [ main ]jobs: test: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Create Docker network run: docker network create taskmaster-network - name: Start test services run: | docker-compose -f docker-compose-test.yml up -d sleep 10 # Wait for services to be ready - name: Run tests run: | docker-compose -f docker-compose-test.yml exec -T test-api pytest tests/ -v - name: Run integration tests run: pytest test_integration.py -v - name: Clean up run: docker-compose -f docker-compose-test.yml down -v deploy: needs: test runs-on: ubuntu-latest if: github.ref == 'refs/heads/main' steps: - name: Deploy to production run: | # Your deployment script here echo "Deploying TaskMaster to production..." 
Monitoring and Observability Logging Configuration # logging_config.pyimport loggingimport sysfrom pythonjsonlogger import jsonloggerdef setup_logging(): # Create a custom formatter formatter = jsonlogger.JsonFormatter( '%(asctime)s %(name)s %(levelname)s %(message)s' ) # Configure root logger root_logger = logging.getLogger() root_logger.setLevel(logging.INFO) # Console handler console_handler = logging.StreamHandler(sys.stdout) console_handler.setFormatter(formatter) root_logger.addHandler(console_handler) # Database query logging logging.getLogger('sqlalchemy.engine').setLevel(logging.WARNING) Health Check Enhancement @app.get("/health/detailed")async def detailed_health_check(db: Session = Depends(get_database)): """Detailed health check with component status""" health_status = { "status": "healthy", "timestamp": datetime.utcnow(), "components": {} } # Database health try: db_start = time.time() result = db.execute("SELECT 1") db_latency = (time.time() - db_start) * 1000 # ms health_status["components"]["database"] = { "status": "healthy", "latency_ms": round(db_latency, 2) } except Exception as e: health_status["status"] = "unhealthy" health_status["components"]["database"] = { "status": "unhealthy", "error": str(e) } # Memory usage import psutil memory_usage = psutil.virtual_memory().percent health_status["components"]["memory"] = { "status": "healthy" if memory_usage &lt; 80 else "warning", "usage_percent": memory_usage } return health_status Security Best Practices Environment Variables Management # .env file (never commit to git)DATABASE_URL=postgresql://taskuser:$(cat /run/secrets/db_password)@taskmaster_postgres:5432/taskmasterSECRET_KEY=$(cat /run/secrets/jwt_secret)ENVIRONMENT=production# Docker secrets in productiondocker secret create db_password db_password.txtdocker secret create jwt_secret jwt_secret.txt Production Dockerfile with Security FROM python:3.11-slim# Security updatesRUN apt-get update &amp;&amp; apt-get upgrade -y \ &amp;&amp; apt-get install -y --no-install-recommends \ gcc \ postgresql-client \ curl \ &amp;&amp; rm -rf /var/lib/apt/lists/*WORKDIR /app# Non-root userRUN groupadd -r appuser &amp;&amp; useradd -r -g appuser -u 1001 appuser# Install Python dependenciesCOPY requirements.txt .RUN pip install --no-cache-dir --upgrade pip \ &amp;&amp; pip install --no-cache-dir -r requirements.txt# Copy applicationCOPY --chown=appuser:appuser . .# Switch to non-root userUSER appuser# Security headers and configurationENV PYTHONDONTWRITEBYTECODE=1 \ PYTHONUNBUFFERED=1 \ PYTHONHASHSEED=randomEXPOSE 8000HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \ CMD curl -f http://localhost:8000/health || exit 1CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"] Wrapping Up: Your Next Steps Building TaskMaster taught me that connecting FastAPI and PostgreSQL across Docker Compose files isn’t just about the technical setup — it’s about building maintainable, scalable systems that can evolve with your needs. Key Takeaways: Docker Networks are Your Friend: They provide the cleanest, most production-ready way to connect containers across compose files. Start Simple, Evolve Gradually: Begin with basic connectivity, then add monitoring, security, and scaling as needed. Test Everything: From database connections to load testing, comprehensive testing saves you from production headaches. Monitor From Day One: Health checks, logging, and metrics aren’t optional — they’re essential for maintaining reliable systems. 
Security is Non-Negotiable: Use proper authentication, secure secrets management, and follow Docker security best practices. What You Should Do Next: If you’re just getting started: Clone the TaskMaster repository and run it locally Experiment with the Docker network setup Try the API endpoints and see the database integration in action Break things intentionally and learn how to debug them If you’re ready for production: Implement proper secrets management Set up monitoring and alerting Create automated backup strategies Plan your scaling strategy Load test your application Remember: The best architecture is the one that works for your team and scales with your needs. TaskMaster started as a simple todo API and evolved into a production system serving thousands of users. Your journey will be unique, but the principles remain the same. Resources and Further Reading: TaskMaster GitHub Repository: Full source code with all examples Docker Networks Documentation: Deep dive into container networking FastAPI Production Guide: Official deployment recommendations PostgreSQL Performance Tuning: Database optimization techniques Container Security Best Practices: Securing your Docker deployments Final Thoughts After six months running TaskMaster in production, serving over 2 million API requests, and helping dozens of developers implement similar setups, I can confidently say that mastering Docker container communication is one of the most valuable skills you can develop as a backend developer. The techniques you’ve learned here — Docker networks, health checks, proper error handling, and production considerations — will serve you well beyond just FastAPI and PostgreSQL. These patterns apply to any microservices architecture, any database technology, and any containerized application. What’s your Docker networking story? Have you built something similar? What challenges did you face, and how did you solve them? I’d love to hear about your experiences and the creative solutions you’ve discovered. Keep building, keep learning, and remember: every expert was once a beginner who refused to give up. Happy coding! 🚀 Found this helpful? Follow me for more real-world development stories and practical tutorials. If you build something cool with TaskMaster or these Docker techniques, tag me — I love seeing what the community creates! Connect with me: GitHub: Follow for more open-source projects LinkedIn: Professional updates and development insights Twitter: Quick tips and development thoughts Support the Project: If TaskMaster helped you build something awesome, consider: ⭐ Starring the GitHub repository 📝 Contributing improvements or bug fixes 💬 Sharing your use case in the discussions 📖 Writing about your own Docker journey Until next time, keep containerizing! 🐳

A comprehensive guide to creating scalable, asynchronous email processing infrastructure The Problem: Email Bottlenecks in Modern Applications Every modern web application needs to send emails — welcome messages, verification codes, password resets, notifications. But here’s the catch: sending emails synchronously during user requests creates a terrible user experience. Users wait seconds for registration to complete while your server talks to SMTP providers, and your application becomes vulnerable to third-party service outages. The solution? Asynchronous email processing with a robust queuing system. Today, we’ll build a production-ready email queuing system using Go, Asynq, and Redis that can handle thousands of emails efficiently while providing monitoring, retry mechanisms, and scheduling capabilities. Why This Tech Stack? Go offers excellent concurrency primitives and performance for high-throughput systems. Asynq is a Go library that provides distributed task queues with built-in retry logic, scheduling, and monitoring. Redis serves as our reliable message broker with persistence and clustering capabilities. This combination gives us: High Performance: Go’s goroutines handle concurrent email processing Reliability: Asynq’s retry mechanisms and Redis persistence Scalability: Horizontal scaling across multiple workers Observability: Built-in monitoring and metrics Developer Experience: Clean APIs and excellent tooling Architecture Overview Our email queuing system follows this architecture: ┌─────────────┐ ┌─────────────┐ ┌─────────────┐│ Web App │───▶│ Queue │───▶│ Worker ││ │ │ (Redis) │ │ (Asynq) │└─────────────┘ └─────────────┘ └─────────────┘ │ ▼ ┌─────────────┐ │ SMTP Server │ └─────────────┘ Components: Producer: Web application enqueues email tasks Queue: Redis stores and manages tasks Consumer: Asynq workers process tasks asynchronously Monitor: Asynqmon provides real-time visibility Implementation Deep Dive Project Structure golang-email-queue-with-asynq/├── cmd/│ ├── producer/ # Web server that enqueues tasks│ └── worker/ # Background worker that processes emails├── pkg/│ ├── config/ # Configuration management│ ├── email/ # Email service and templates│ ├── queue/ # Asynq client and server setup│ └── tasks/ # Task definitions and handlers├── templates/ # Email templates (HTML/text)├── docker-compose.yml # Redis setup└── go.mod 1. Setting Up the Queue Infrastructure First, let’s define our task structure and queue client: // pkg/tasks/email_task.gopackage tasksimport ( "encoding/json" "fmt")const TypeEmailDelivery = "email:deliver"type EmailPayload struct { To string `json:"to"` Subject string `json:"subject"` Body string `json:"body"` HTMLBody string `json:"html_body,omitempty"` From string `json:"from"` Template string `json:"template,omitempty"` TemplateData map[string]interface{} `json:"template_data,omitempty"`}func NewEmailDeliveryTask(payload EmailPayload) (*asynq.Task, error) { data, err := json.Marshal(payload) if err != nil { return nil, fmt.Errorf("failed to marshal email payload: %w", err) } return asynq.NewTask(TypeEmailDelivery, data), nil} 2. 
Queue Client for Enqueueing Tasks // pkg/queue/client.gopackage queueimport ( "time" "github.com/hibiken/asynq")type Client struct { client *asynq.Client}func NewClient(redisAddr, password string, db int) *Client { client := asynq.NewClient(asynq.RedisClientOpt{ Addr: redisAddr, Password: password, DB: db, }) return &amp;Client{client: client}}func (c *Client) EnqueueEmail(task *asynq.Task, opts ...asynq.Option) error { info, err := c.client.Enqueue(task, opts...) if err != nil { return fmt.Errorf("failed to enqueue task: %w", err) } log.Printf("Enqueued task: id=%s queue=%s", info.ID, info.Queue) return nil}func (c *Client) ScheduleEmail(task *asynq.Task, processAt time.Time) error { info, err := c.client.Enqueue(task, asynq.ProcessAt(processAt)) if err != nil { return fmt.Errorf("failed to schedule task: %w", err) } log.Printf("Scheduled task: id=%s at=%v", info.ID, processAt) return nil} 3. Email Service with Template Support // pkg/email/service.gopackage emailimport ( "bytes" "fmt" "html/template" "net/smtp" "text/template")type Service struct { smtpHost string smtpPort string smtpUser string smtpPassword string templates map[string]*EmailTemplate}type EmailTemplate struct { Subject string TextBody *template.Template HTMLBody *template.Template}func (s *Service) SendEmail(payload tasks.EmailPayload) error { var body, htmlBody string if payload.Template != "" { tmpl, exists := s.templates[payload.Template] if !exists { return fmt.Errorf("template %s not found", payload.Template) } // Render template with data var buf bytes.Buffer if err := tmpl.TextBody.Execute(&amp;buf, payload.TemplateData); err != nil { return fmt.Errorf("failed to render text template: %w", err) } body = buf.String() if tmpl.HTMLBody != nil { buf.Reset() if err := tmpl.HTMLBody.Execute(&amp;buf, payload.TemplateData); err != nil { return fmt.Errorf("failed to render HTML template: %w", err) } htmlBody = buf.String() } } else { body = payload.Body htmlBody = payload.HTMLBody } return s.sendSMTP(payload.From, payload.To, payload.Subject, body, htmlBody)}func (s *Service) sendSMTP(from, to, subject, body, htmlBody string) error { // SMTP implementation with both text and HTML support auth := smtp.PlainAuth("", s.smtpUser, s.smtpPassword, s.smtpHost) msg := s.buildMIMEMessage(from, to, subject, body, htmlBody) addr := fmt.Sprintf("%s:%s", s.smtpHost, s.smtpPort) return smtp.SendMail(addr, auth, from, []string{to}, []byte(msg))} 4. Task Handler for Processing Emails // pkg/tasks/handlers.gopackage tasksimport ( "context" "encoding/json" "fmt" "github.com/hibiken/asynq")type EmailHandler struct { emailService *email.Service}func (h *EmailHandler) ProcessEmailDelivery(ctx context.Context, t *asynq.Task) error { var payload EmailPayload if err := json.Unmarshal(t.Payload(), &amp;payload); err != nil { return fmt.Errorf("failed to unmarshal email payload: %w", err) } log.Printf("Processing email: to=%s subject=%s", payload.To, payload.Subject) if err := h.emailService.SendEmail(payload); err != nil { return fmt.Errorf("failed to send email: %w", err) } log.Printf("Email sent successfully: to=%s", payload.To) return nil} 5. 
Worker Server Setup // cmd/worker/main.gopackage mainimport ( "log" "github.com/hibiken/asynq")func main() { redisOpt := asynq.RedisClientOpt{ Addr: "localhost:6379", } server := asynq.NewServer(redisOpt, asynq.Config{ Concurrency: 10, Queues: map[string]int{ "critical": 6, // 60% of workers "default": 3, // 30% of workers "low": 1, // 10% of workers }, RetryDelayFunc: asynq.DefaultRetryDelayFunc, ErrorHandler: asynq.ErrorHandlerFunc(func(ctx context.Context, task *asynq.Task, err error) { log.Printf("Task failed: %s, Error: %v", task.Type(), err) }), }) // Register handlers emailHandler := &amp;tasks.EmailHandler{ emailService: email.NewService(config.SMTP), } mux := asynq.NewServeMux() mux.HandleFunc(tasks.TypeEmailDelivery, emailHandler.ProcessEmailDelivery) log.Println("Starting email worker...") if err := server.Run(mux); err != nil { log.Fatalf("Could not run server: %v", err) }} Key Features Implementation 1. Email Templates The system supports reusable email templates for common scenarios: // Welcome email templatetemplates["welcome"] = &amp;EmailTemplate{ Subject: "Welcome to {{.AppName}}!", TextBody: textTemplate(` Hi {{.UserName}}, Welcome to {{.AppName}}! We're excited to have you on board. Best regards, The {{.AppName}} Team `), HTMLBody: htmlTemplate(` &lt;h1&gt;Welcome to {{.AppName}}!&lt;/h1&gt; &lt;p&gt;Hi {{.UserName}},&lt;/p&gt; &lt;p&gt;Welcome to {{.AppName}}! We're excited to have you on board.&lt;/p&gt; `),} 2. Priority Queues Configure different priority levels for email types: // Enqueue with priorityclient.EnqueueEmail(task, asynq.Queue("critical"), // High priority queue asynq.MaxRetry(5), // More retry attempts asynq.Timeout(2*time.Minute),) 3. Scheduled Emails Schedule emails for future delivery: // Schedule welcome email 1 hour after registrationwelcomeTime := time.Now().Add(1 * time.Hour)client.ScheduleEmail(welcomeTask, welcomeTime) 4. Monitoring with Asynqmon Start the monitoring web interface: // cmd/monitor/main.gopackage mainimport ( "github.com/hibiken/asynqmon")func main() { h := asynqmon.New(asynqmon.Options{ RootPath: "/monitoring", RedisConnOpt: asynq.RedisClientOpt{ Addr: "localhost:6379", }, }) http.Handle("/monitoring/", h) log.Println("Asynqmon server starting on :8080/monitoring/") log.Fatal(http.ListenAndServe(":8080", nil))} Docker Setup for Development # docker-compose.ymlversion: '3.8'services: redis: image: redis:7-alpine ports: - "6379:6379" volumes: - redis_data:/data command: redis-server --appendonly yesasynqmon: image: hibiken/asynqmon ports: - "8080:8080" environment: - REDIS_ADDR=redis:6379 depends_on: - redisvolumes: redis_data: Performance Optimizations 1. Connection Pooling // Configure Redis connection poolredisOpt := asynq.RedisClientOpt{ Addr: "localhost:6379", PoolSize: 20, MinIdleConns: 5,} 2. Batch Processing // Process multiple emails in batches for better throughputfunc (h *EmailHandler) ProcessBatchEmails(ctx context.Context, tasks []*asynq.Task) error { var emails []EmailPayload for _, task := range tasks { var payload EmailPayload json.Unmarshal(task.Payload(), &amp;payload) emails = append(emails, payload) } return h.emailService.SendBatch(emails)} 3. 
Circuit Breaker Pattern // Implement circuit breaker for SMTP failuresfunc (s *Service) SendEmail(payload EmailPayload) error { if s.circuitBreaker.State() == circuitbreaker.Open { return errors.New("circuit breaker is open") } err := s.sendSMTP(payload) if err != nil { s.circuitBreaker.RecordFailure() return err } s.circuitBreaker.RecordSuccess() return nil} Production Deployment Considerations 1. High Availability Deploy multiple worker instances for redundancy Use Redis Sentinel or Redis Cluster for high availability Implement health checks for worker processes 2. Monitoring and Alerting // Add metrics for monitoringvar ( emailsSent = prometheus.NewCounterVec( prometheus.CounterOpts{ Name: "emails_sent_total", Help: "Total number of emails sent", }, []string{"template", "status"}, ) processingDuration = prometheus.NewHistogramVec( prometheus.HistogramOpts{ Name: "email_processing_duration_seconds", Help: "Time taken to process email tasks", }, []string{"template"}, )) 3. Security Best Practices Use environment variables for sensitive configuration Implement rate limiting to prevent abuse Validate and sanitize email content Use TLS for SMTP connections Testing Strategy // Unit test for email handlerfunc TestProcessEmailDelivery(t *testing.T) { mockService := &amp;MockEmailService{} handler := &amp;EmailHandler{emailService: mockService} payload := EmailPayload{ To: "test@example.com", Subject: "Test Email", Body: "Test body", } task, _ := NewEmailDeliveryTask(payload) err := handler.ProcessEmailDelivery(context.Background(), task) assert.NoError(t, err) assert.True(t, mockService.SendEmailCalled)} Conclusion This email queuing system provides a robust foundation for handling email delivery at scale. The combination of Go’s performance, Asynq’s reliability features, and Redis’s persistence creates a system that can handle thousands of emails per minute while providing excellent observability and error handling. Key Benefits: Improved User Experience: Non-blocking email operations Reliability: Automatic retries and error handling Scalability: Horizontal scaling with multiple workers Flexibility: Template system and priority queues Observability: Real-time monitoring and metrics The complete implementation is available on GitLab: golang-email-queue-with-asynq Start with the basic setup and gradually add features like templates, monitoring, and advanced queue configurations as your needs grow. This architecture will serve you well from startup MVPs to enterprise-scale applications. Ready to implement your own email queuing system? Clone the repository and start building! Feel free to contribute improvements or ask questions in the issues section.

The technology that’s actually changing how we build software (and it’s wilder than you think) I still remember the first time GitHub Copilot suggested an entire function based on just my comment. I sat there for a moment, slightly unsettled, wondering if I’d just witnessed the beginning of the end for my career — or the start of something incredible. Turns out, it was definitely the latter. The landscape of software development is experiencing a revolutionary transformation. What used to take me hours of Stack Overflow searches and careful coding can now happen in minutes through simple conversations with AI. And honestly? It’s both exciting and a little terrifying. If you’re a developer who hasn’t yet dipped your toes into the generative AI waters, you’re standing at the edge of something massive. This isn’t just another framework to learn or tool to master — it’s a fundamental change in how we think about building software. What Is Generative AI? (And Why Should You Care?) Let’s cut through the marketing buzzwords. Generative AI is artificial intelligence that creates new stuff — text, code, images, music, you name it — based on patterns it learned from massive amounts of training data. Here’s what makes it different from the AI you’re used to: instead of just analyzing data (like “is this email spam?”), it actually produces original content that didn’t exist before. It’s like the difference between a critic who can tell you if a song is good versus a musician who can actually compose one. For us developers, this means we’re not just using tools anymore — we’re collaborating with them. And trust me, once you experience what it’s like to have an AI pair programmer who’s read every GitHub repo and Stack Overflow answer, there’s no going back. The Real Shift: From Coder to Conductor I used to spend way too much time writing boilerplate code, debugging silly syntax errors, and trying to remember API documentation. Now? I describe what I want, review the AI’s suggestions, and focus on the actually interesting problems — architecture, user experience, and solving real business challenges. It’s not about AI replacing developers (spoiler alert: it won’t). It’s about elevating what we do from typing out every semicolon to orchestrating intelligent systems that can think alongside us. How This Magic Actually Works (The Technical Stuff) You don’t need a PhD to understand what’s happening under the hood, but knowing the basics will make you a much more effective AI collaborator. Neural Networks: The Brain Behind the Magic Think of neural networks as really sophisticated pattern recognition systems. They’re made up of layers of interconnected nodes that process information kind of like neurons in a brain (hence the name). Here’s what’s cool: during training, these networks look at millions of examples and learn to recognize incredibly complex patterns. When you feed a coding AI examples like this: # Millions of examples like this teach the AI patternsdef calculate_total(items): return sum(item.price for item in items)def get_user_by_id(user_id): return database.query(User).filter(User.id == user_id).first() The AI doesn’t just memorize these functions — it learns the underlying patterns of how code works, what good practices look like, and how different pieces fit together. Transformers: Why AI Finally “Gets” Code The real breakthrough came with transformer architecture (that “Attention Is All You Need” paper from 2017). 
Transformers: Why AI Finally "Gets" Code

The real breakthrough came with the transformer architecture (that "Attention Is All You Need" paper from 2017). The key innovation is something called self-attention, which lets the AI understand relationships between different parts of your code, no matter how far apart they are.

```javascript
// The transformer understands these connections
function processOrder(orderData) {
  const orderId = orderData.id;            // Declaration here
  const customer = orderData.customer;

  // 100 lines of business logic...

  logActivity(orderId, 'order_processed'); // AI knows this refers to the same ID
  return { success: true, orderId };       // And this one too
}
```

This is why modern AI can maintain context across entire files and even understand complex codebases — it's not just looking at individual lines but understanding the web of relationships throughout your code.
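Self-attention sounds abstract, but the core computation is small. Below is a minimal NumPy sketch of scaled dot-product attention, the building block that paper describes, using random vectors in place of real learned embeddings. It leaves out everything that makes production transformers work (multiple heads, learned projection weights, positional encodings, masking), so read it as a picture of the mechanism, not an implementation of any particular model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position attends to every other position, weighted by similarity."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how relevant is each token to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V, weights                     # blend values by attention weights

# 5 "tokens" (think orderId ... logActivity ... orderId) with 8-dim embeddings
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))

output, attn = scaled_dot_product_attention(tokens, tokens, tokens)
print(attn.round(2))  # row i shows how strongly token i "looks at" every other token
```

In a real transformer, Q, K, and V come from learned linear projections of the token embeddings, and many such attention heads run in parallel across dozens of layers.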
Large Language Models: The Heavy Hitters

When we talk about GPT-4, Claude, or Code Llama, we're talking about transformer-based models trained on absolutely massive datasets — trillions of tokens, including billions of lines of code from GitHub, documentation, technical discussions, and more.

These models have billions or even trillions of parameters (think of them as tiny pieces of learned knowledge). When GPT-4 suggests code, it's drawing from patterns learned across millions of repositories, understanding not just syntax but also conventions, best practices, and even the reasoning behind different approaches.

Essential Tools for Generative AI Development

Alright, let's talk about the tools you can actually use today. I've tried most of these, and here's my honest take on what's worth your time.

AI Coding Assistants (The Game Changers)

GitHub Copilot — The one that started it all

  - What it does: Real-time code suggestions directly in your IDE
  - Best for: Day-to-day coding, boilerplate generation, learning new languages
  - My experience: It's scary good at understanding context and suggesting entire functions
  - Pricing: $10/month for individuals, $19/month for business
  - IDE support: VS Code, JetBrains, Vim/Neovim

```python
# Just write a comment and watch the magic
# Create a function to validate email addresses with comprehensive regex
def validate_email(email):
    ...  # Copilot completes this entire function correctly
```

Amazon CodeWhisperer — AWS's answer to Copilot

  - What it does: Similar to Copilot but with stronger AWS integration
  - Best for: AWS-heavy projects, infrastructure as code
  - Unique feature: Built-in security scanning
  - Pricing: Free tier available, then $19/month per user

Tabnine — The customizable option

  - What it does: Code completion with team-specific training
  - Best for: Teams wanting to train on their own codebase
  - Privacy: Can run entirely on-premises
  - Pricing: Free tier, Pro at $12/month

Cursor — The AI-first IDE

  - What it does: Entire IDE built around AI collaboration
  - Best for: Developers who want AI deeply integrated into their workflow
  - Unique feature: Chat with your entire codebase
  - My take: Still early but incredibly promising

Code Generation and Refactoring Tools

Codeium — The generous free option

  - What it does: Code completion and generation
  - Best for: Developers wanting Copilot-like features for free
  - Pricing: Generous free tier, enterprise plans available
  - Languages: 70+ programming languages supported

Replit Ghostwriter — Built into the Replit environment

  - What it does: Code generation, explanation, and debugging
  - Best for: Quick prototyping and learning
  - Unique feature: Generates code that runs immediately in the browser

AI Chat Interfaces for Development

ChatGPT — The Swiss Army knife

  - What it does: General-purpose AI that's surprisingly good at code
  - Best for: Code explanation, debugging, architecture discussions
  - My workflow: I use it for rubber duck debugging and exploring new concepts
  - Pricing: Free tier, $20/month for GPT-4 access

Claude — The thoughtful one

  - What it does: Long-context conversations about code and architecture
  - Best for: Complex problem-solving, code reviews, system design
  - Unique feature: Can handle much longer conversations and documents

Perplexity — The researcher

  - What it does: AI search with real-time information
  - Best for: Finding current best practices, recent framework updates
  - Why I use it: When I need to know about the latest in tech

Specialized Development Tools

GitHub Copilot X — The next evolution

  - Features: Chat interface, pull request summaries, test generation
  - Status: Rolling out gradually
  - What's exciting: Integrated AI throughout the entire development workflow

Sourcery — The code reviewer

  - What it does: Automated code reviews and refactoring suggestions
  - Best for: Python developers wanting cleaner code
  - Integration: GitHub, GitLab, VS Code

DeepCode (now Snyk Code) — Security-focused AI

  - What it does: AI-powered security vulnerability detection
  - Best for: Catching security issues early
  - Languages: Supports major languages with security rule sets

API and Integration Tools

OpenAI API — The pioneer

  - Models: GPT-4, GPT-3.5, Codex
  - Best for: Building AI features into your applications
  - Pricing: Pay-per-token, can get expensive at high volume

```python
# Simple integration example
import openai

def generate_code_explanation(code):
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a code explanation expert."},
            {"role": "user", "content": f"Explain this code: {code}"}
        ]
    )
    return response.choices[0].message.content
```

Anthropic Claude API — The safety-focused option

  - What it does: Large context window, careful reasoning
  - Best for: Complex analysis, long documents, safety-critical applications

Google PaLM API — The Google option

  - What it does: Code generation and understanding
  - Best for: Integration with Google Cloud services

Hugging Face — The open-source haven

  - What it does: Access to hundreds of open-source models
  - Best for: Experimentation, custom model training, privacy-conscious development

Development Environment Integration

JetBrains AI Assistant — Native IntelliJ integration

  - What it does: AI features built directly into JetBrains IDEs
  - Best for: Java, Kotlin, and other JetBrains-supported languages

VS Code Extensions

  - GitHub Copilot: The standard
  - Tabnine: Alternative completion engine
  - CodeGPT: ChatGPT integration for VS Code

How I Actually Use These Tools (Real-World Examples)

Let me show you how this stuff works in practice. Here's my typical workflow:

Morning Code Review with AI

```python
# I write something like this rough implementation
def process_user_data(data):
    # TODO: Add validation and error handling
    user = create_user(data)
    send_email(user.email)
    return user

# Then ask Claude: "Review this function for potential issues"
# It catches things like: missing input validation, no error handling
# for email sending, potential data races, etc.
```

Debugging with ChatGPT

When I'm stuck on a weird bug, I paste the error message and relevant code into ChatGPT. It's surprisingly good at spotting issues I've been staring at for hours.

Architecture Discussions

I use Claude for system design conversations: "I'm building a real-time chat application that needs to handle 10,000 concurrent users. What architecture patterns should I consider, and what are the trade-offs?"

The responses are genuinely helpful for thinking through complex decisions.
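The same kind of question can also be scripted against the Anthropic Claude API listed above rather than typed into a chat window. Here is a minimal sketch using the Anthropic Python SDK's Messages API; the model name is a placeholder for whichever Claude model you have access to, and the helper name is mine, not part of the SDK.

```python
# pip install anthropic  (reads ANTHROPIC_API_KEY from the environment)
import anthropic

client = anthropic.Anthropic()

def ask_architecture_question(question: str) -> str:
    # Substitute the current Claude model you actually use
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": question}],
    )
    return message.content[0].text

print(ask_architecture_question(
    "I'm building a real-time chat application that needs to handle 10,000 "
    "concurrent users. What architecture patterns should I consider?"
))
```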
Learning New Technologies

When I need to learn a new framework or library, I ask AI to create simple examples and explain the concepts. It's like having a patient tutor available 24/7.

The Reality Check: What AI Can't Do (Yet)

Let's be honest about the limitations, because understanding them makes you a better AI collaborator.

The Hallucination Problem

AI sometimes confidently generates wrong information. I once had Copilot suggest a React hook that doesn't exist. The code looked perfect and compiled fine, but it failed at runtime because the hook was completely made up.

My rule: Always verify AI suggestions, especially for APIs or libraries you're not familiar with.

Context Limitations

Current models have context windows — they can only "remember" a certain amount of your conversation or codebase at once. This means:

  - Long debugging sessions may lose track of earlier context
  - Large codebases can't be fully understood in one go
  - Complex, multi-file refactoring needs human guidance

Business Logic Blind Spots

AI is great at patterns but struggles with unique business requirements:

```python
# AI can write this pattern easily
def calculate_tax(amount, rate):
    return amount * rate

# But struggles with this business-specific logic
def calculate_loyalty_discount(customer):
    # Complex business rules specific to your company
    # that don't exist in training data
    pass
```

Security Vulnerabilities

AI-generated code can have security issues, especially because it's trained on public code (which includes insecure examples). Common problems I've seen:

  - SQL injection vulnerabilities
  - Hardcoded credentials
  - Missing input validation
  - Insecure cryptographic implementations

Best Practices That Actually Work

After using these tools for over a year, here's what I've learned.

Prompting That Gets Results

Be specific about context:

Bad: "Write a user authentication function"

Good: "Write a Node.js function using bcrypt that takes email and password, validates against a PostgreSQL database using the User model, returns a JWT token on success, and proper error messages for invalid credentials"

Provide examples of your code style:

"Here's how we typically structure our React components in this project: [paste example]. Now create a similar component for displaying user profiles."
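None of this advice is tied to any particular tool, so a small sketch may help. The helper below simply assembles a prompt that bundles the three ingredients above — project context, a style example, and a specific task — before you paste it into a chat tool or send it through an API. The function and the sample values are mine, purely for illustration.

```python
def build_review_prompt(task: str, context: str, style_example: str) -> str:
    """Assemble a specific, context-rich prompt instead of a one-line ask."""
    return (
        "You are reviewing code for a production web service.\n\n"
        f"Project context:\n{context}\n\n"
        f"Here is how we typically structure code in this project:\n{style_example}\n\n"
        f"Task: {task}\n"
        "List concrete issues (bugs, security, edge cases) before suggesting rewrites."
    )

prompt = build_review_prompt(
    task="Review process_user_data() for missing validation and error handling.",
    context="FastAPI backend, PostgreSQL, emails sent via a background queue.",
    style_example="def create_user(payload: UserIn) -> User: ...",
)
print(prompt)
```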
Code Review Process

I treat AI-generated code like code from any junior developer:

  - Does it solve the actual problem?
  - Are there obvious bugs or edge cases missed?
  - Does it follow our team's conventions?
  - Are there security implications?
  - Is it maintainable and readable?

Integration Workflow

Here's my typical process:

  1. Start with a clear problem statement
  2. Get an AI-generated first draft
  3. Review and iterate (usually 2–3 rounds)
  4. Test thoroughly (AI doesn't replace testing)
  5. Code review with humans (AI suggestions still need human eyes)

The Future (And Why You Should Care Now)

The pace of change is honestly overwhelming. Here's what's coming that will impact how we work:

Multimodal AI

Soon, you'll be able to:

  - Sketch a UI mockup and have it generate working code
  - Describe an application verbally and get a working prototype
  - Upload screenshots of bugs and get debugging suggestions

AI Agents

Instead of just chatting with AI, we'll delegate entire tasks: "Analyze our codebase for performance bottlenecks, create a report, and suggest specific optimizations with code examples."

Personalized Models

AI assistants that learn your coding style, understand your project context, and integrate with your team's workflow. Imagine an AI that knows your company's architecture patterns and coding standards.

Getting Started (Your Action Plan)

If you're convinced but don't know where to begin, here's what I'd recommend:

Week 1: Dip Your Toes In

  - Install GitHub Copilot or try Codeium (free)
  - Use it for simple autocompletion and see how it feels
  - Try ChatGPT for explaining code you don't understand

Week 2: Go Deeper

  - Use AI for test generation on an existing project
  - Ask AI to review a recent pull request
  - Experiment with different prompting strategies

Week 3: Integration

  - Use AI for documentation generation
  - Try pair programming with AI on a side project
  - Explore API integration for one of your applications

Month 2+: Advanced Usage

  - Experiment with different models and tools
  - Start incorporating AI into your regular workflow
  - Share learnings with your team

The Bottom Line

Look, I won't sugarcoat it — this technology is moving fast, and it's changing our industry in ways we're still figuring out. But here's what I know for sure: the developers who are experimenting with AI now, learning its strengths and limitations, and figuring out how to collaborate effectively with these systems are going to have a huge advantage.

This isn't about AI replacing developers. It's about developers who use AI replacing developers who don't.

And honestly? Once you experience what it's like to have an intelligent coding partner that never gets tired, never judges your questions, and has read more code than you'll see in a lifetime, you'll wonder how you ever worked without it.

The future of software development is collaborative — human creativity and intuition working alongside AI's computational power and vast knowledge. The question isn't whether this will happen (it's already happening), but how quickly you can adapt and start benefiting from it.

So go ahead, try that coding assistant. Ask AI to explain that confusing legacy code. Generate some tests for your side project. Start small, stay curious, and prepare to be amazed by what you can build when you combine human insight with artificial intelligence.

The age of AI-augmented development isn't coming — it's here. And it's pretty incredible once you get the hang of it.

Technical Skills & Tools

Frameworks, tools, and technologies I use to build solutions

Machine Learning & AI

LangChain, Cohere, ChromaDB, scikit-learn, NLTK, Pandas, NumPy, Matplotlib, Jupyter Notebook

Languages

Python, Go, JavaScript, PHP, SQL

Web Development

REST, gRPC, GraphQL, OpenAPI, FastAPI, React, Laravel

Cloud & DevOps

GCP, AWS, Docker, MongoDB, PostgreSQL, Git, GitHub, Linux

Frameworks & Tools

Starlette, WebSockets, Flask, Tornado, RabbitMQ, Meilisearch, Gel (EdgeDB)

Testing & Quality

Pytest, Unit Testing, Integration Testing, Code Quality, TDD

Education

Academic background and professional certifications that shaped my technical expertise.

Nepal Commerce Campus (NCC)

Bachelor in Information Management

Bachelor Degree · 2017 - 2021

Focused on producing IT professionals with strong management and technical skills, and a results-driven, socially responsible mindset.

Ambition College

Management (Computer Science)

NEB · 2014 - 2016

A computer science-focused program that combines business strategy with technical fundamentals, equipping students with skills in programming, databases, and leadership for tech-driven roles and enabling innovative solutions to complex business challenges.

Get In Touch

Let's discuss your next project or opportunity