Building intelligent systems at the intersection of LLMs, RAG, fine-tuning, and real-world deployment.
AI & ML Engineer focused on building production-grade systems — from multi-agent LLM pipelines and QLoRA fine-tuning to computer vision and MLOps. I work across the full ML stack: data, modelling, APIs, experiment tracking, and cloud deployment. Currently completing my B.E. in CS (AI & ML) and actively seeking roles where I can ship intelligent systems that solve real problems.
```python
# mohammed_omer.py
class MohammedAbdulOmer:
    def __init__(self):
        self.name = "Mohammed Abdul Omer"
        self.location = "Hyderabad, India 🇮🇳"
        self.education = "B.E. Computer Science (AI & ML) — Lords Institute of Engineering"
        self.focus = ["LLM Engineering", "RAG Pipelines", "LLM Fine-tuning", "MLOps", "Computer Vision"]
        self.stack = ["Python", "PyTorch", "LangChain", "LangGraph", "FastAPI", "HuggingFace"]
        self.status = "Open to AI / ML Engineer roles 🚀"

    def say_hi(self):
        print("Let's build something intelligent together!")

me = MohammedAbdulOmer()
me.say_hi()
```

| Area | What I Build |
|---|---|
| 🔍 Advanced RAG | Hybrid retrieval (BM25 + vector + re-ranking), agentic pipelines, multi-document QA |
| 🤖 LLM Engineering | Prompt orchestration, QLoRA/PEFT fine-tuning, Groq / Gemini APIs |
| 🧠 Agentic AI | Multi-agent systems, CrewAI, LangGraph ReAct agents, autonomous task planning |
| ⚙️ MLOps | MLflow experiment tracking, Evidently AI drift monitoring, W&B, reproducible pipelines |
| 👁️ Computer Vision | Object detection (YOLOv8), medical imaging, Grad-CAM explainability |
| ⚡ API & Deployment | FastAPI, Django, Docker, Railway, Render, HuggingFace Spaces |
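The hybrid retrieval row above can be illustrated with reciprocal rank fusion (RRF), a common way to merge BM25 and vector rankings before a re-ranker sees them — a minimal sketch of the idea, not the actual pipeline code:

```python
def rrf_fuse(rankings, k=60):
    """Merge several ranked lists (best first) with reciprocal rank fusion.

    Each document scores the sum of 1 / (k + rank) across rankings, so a
    document ranked highly by either BM25 or the vector index rises to
    the top of the fused list.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc_a", "doc_b", "doc_c"]    # lexical ranking
vector_hits = ["doc_b", "doc_d", "doc_a"]  # semantic ranking
fused = rrf_fuse([bm25_hits, vector_hits])  # "doc_b" fuses to the top
```

RRF needs no score normalisation across retrievers, which is why it pairs well with heterogeneous backends like BM25 and a vector store.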
Autonomous multi-agent pipeline where specialised agents — researcher, analyst, writer — coordinate via CrewAI and Tavily to perform live web research, synthesise findings, and generate structured reports with zero human intervention. 🔗 GitHub
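CrewAI handles the orchestration in that project; the core handoff pattern — each specialised agent consuming the previous agent's output — can be sketched with placeholder functions standing in for the LLM-backed agents (illustrative only):

```python
def researcher(topic):
    # Placeholder: in the real pipeline this agent queries Tavily for live sources.
    return {"topic": topic, "sources": ["source_1", "source_2"]}

def analyst(research):
    # Placeholder: distils raw sources into findings.
    findings = [f"finding from {s}" for s in research["sources"]]
    return {**research, "findings": findings}

def writer(analysis):
    # Placeholder: turns findings into a structured report.
    body = "\n".join(f"- {f}" for f in analysis["findings"])
    return f"# Report: {analysis['topic']}\n{body}"

def run_pipeline(topic):
    # Sequential handoff: researcher -> analyst -> writer.
    return writer(analyst(researcher(topic)))

report = run_pipeline("LLM evaluation")
```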
QLoRA fine-tuning of Mistral-7B-Instruct-v0.2 on a medical Q&A dataset using PEFT. Trained on Kaggle T4 GPU with full environment compatibility handling. Model and Gradio demo fully deployed to HuggingFace Hub.
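The LoRA half of QLoRA can be shown in plain Python: instead of updating the full weight matrix W, you train two small matrices A (r×k) and B (d×r) whose product is the update — a toy illustration of the maths with tiny hand-picked matrices, not the PEFT API:

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    cols = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in a]

def lora_update(w, a, b, alpha=1.0):
    """Apply a low-rank update: W' = W + alpha * (B @ A)."""
    delta = matmul(b, a)
    return [[wij + alpha * dij for wij, dij in zip(w_row, d_row)]
            for w_row, d_row in zip(w, delta)]

# 2x2 base weight with a rank-1 adapter: B is 2x1, A is 1x2, so B @ A is 2x2,
# but only 4 adapter values are trained instead of 4 full weights per layer —
# the saving grows quadratically with the real matrix sizes.
w = [[1.0, 0.0], [0.0, 1.0]]
b = [[1.0], [2.0]]
a = [[0.5, 0.5]]
w_new = lora_update(w, a, b)  # [[1.5, 0.5], [1.0, 2.0]]
```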
Production-grade RAG chatbot using a LangGraph ReAct agent with hybrid retrieval — ChromaDB vector search, BM25, and FlashRank re-ranking. Streaming SSE responses via FastAPI async backend, powered by Groq's llama-3.1-8b-instant with dual Streamlit and Gradio frontends. 🔗 GitHub
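Streaming LLM tokens over SSE comes down to framing each chunk as a `data:` event. A framework-independent sketch of the wire format such a backend emits (hypothetical token stream, not the project's code):

```python
def sse_event(data, event=None):
    """Frame a payload as a Server-Sent Events message.

    Each event is one or more `field: value` lines terminated by a blank
    line; the browser's EventSource API parses this format natively.
    """
    lines = []
    if event:
        lines.append(f"event: {event}")
    for chunk in data.splitlines() or [""]:
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"

def stream_tokens(tokens):
    # In a real backend this would be an async generator fed by the LLM,
    # wrapped in a streaming response with media type text/event-stream.
    for token in tokens:
        yield sse_event(token, event="token")
    yield sse_event("[DONE]", event="end")

frames = list(stream_tokens(["Hello", "world"]))
```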
End-to-end MLOps pipeline integrating MLflow for experiment tracking and model registry, Weights & Biases for training visualisation, and Evidently AI for data drift and model performance monitoring — structured as a reproducible, production-mimicking workflow. 🔗 GitHub
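Drift monitoring of the kind Evidently performs compares a reference distribution against live data. One common statistic is the population stability index (PSI) — a hand-rolled sketch of the metric on synthetic numbers, not Evidently's API:

```python
import math

def psi(reference, current, bins=4):
    """Population stability index between two numeric samples.

    Bin edges come from the reference sample; PSI sums
    (cur% - ref%) * ln(cur% / ref%) over bins. Rule of thumb:
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift.
    """
    lo, hi = min(reference), max(reference)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        # Floor empty buckets so the log term stays finite.
        return [max(c / len(sample), 1e-6) for c in counts]

    ref_p, cur_p = proportions(reference), proportions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref_p, cur_p))

same = psi(list(range(100)), list(range(100)))        # no drift -> 0.0
shifted = psi(list(range(100)), list(range(50, 150)))  # shifted -> large PSI
```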


