
Project Overview

HR departments spend 60–70% of their time on repetitive, rule-based tasks — screening hundreds of CVs, scheduling interviews, sending follow-up emails. This project replaces that manual workflow with a multi-agent LLM system that automates the entire recruitment pipeline end-to-end.

  - 70% Effort Reduction
  - Faster Screening
  - 4 Autonomous Agents
  - REST FastAPI Backend

Problem Statement

A mid-size company receives 200+ applications per job posting. A recruiter manually reads each CV, shortlists candidates, coordinates interview slots across calendars, sends confirmation emails, and follows up. This process takes days and introduces subjective bias.

The goal was to build an agentic system where a user can simply say "Hire a Senior ML Engineer — screen these 150 CVs and schedule interviews for the top 5" and the system handles everything.

Multi-Agent Architecture

```
User Request
      ↓
Orchestrator Agent (GPT-4)
      ↓ delegates to
┌──────────────┬──────────────┬──────────────┬──────────────┐
│ CV Screener  │ Ranker       │ Scheduler    │ Emailer      │
│ Agent        │ Agent        │ Agent        │ Agent        │
│              │              │              │              │
│ Extracts     │ Scores vs    │ Finds mutual │ Drafts &     │
│ skills, exp, │ job desc.    │ calendar     │ sends        │
│ education    │ 0-100 score  │ free slots   │ personalized │
│ from PDF     │              │              │ emails       │
└──────────────┴──────────────┴──────────────┴──────────────┘
      ↓
FastAPI → React Dashboard
```

Agent Implementation

```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI
from langchain.tools import tool
import pdfplumber

llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)

@tool
def extract_cv_data(pdf_path: str) -> str:
    """Extract structured data from a candidate CV PDF."""
    with pdfplumber.open(pdf_path) as pdf:
        # extract_text() can return None for image-only pages
        text = "\n".join(page.extract_text() or "" for page in pdf.pages)
    prompt = f"""Extract from this CV as JSON:
    {{"name", "email", "phone", "skills": [], "experience_years",
     "education", "last_role", "companies": []}}
    CV: {text[:3000]}"""
    return llm.invoke(prompt).content

@tool
def score_candidate(cv_data: str, job_description: str) -> str:
    """Score a candidate 0-100 against the job description."""
    prompt = f"""Score this candidate 0-100 for the role.
    Return JSON: {{"score": int, "strengths": [], "gaps": [], "recommendation": str}}
    JD: {job_description}
    CV: {cv_data}"""
    return llm.invoke(prompt).content

# Orchestrator: schedule_interview, send_email, and orchestrator_prompt
# are defined elsewhere in the project, alongside the two tools above.
tools = [extract_cv_data, score_candidate, schedule_interview, send_email]
orchestrator = create_openai_tools_agent(llm, tools=tools, prompt=orchestrator_prompt)
agent_executor = AgentExecutor(agent=orchestrator, tools=tools, verbose=True)
```
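Both tools return JSON as plain text, and models sometimes wrap it in markdown fences. A small helper (the name `parse_agent_json` is hypothetical, not part of the project's listed code) keeps downstream parsing robust:

```python
import json
import re

def parse_agent_json(raw: str) -> dict:
    """Parse JSON emitted by an LLM tool call.

    Strips markdown code fences (``` or ```json) that models
    occasionally wrap around their JSON output, then parses it.
    Hypothetical helper, shown only as a sketch.
    """
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    return json.loads(cleaned)
```

For example, `parse_agent_json('```json\n{"score": 87}\n```')` and `parse_agent_json('{"score": 87}')` both yield the same dict.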

Automation Pipeline

  1. Ingestion: CVs uploaded as PDFs → pdfplumber extracts text
  2. Screening: CV Screener agent extracts structured fields (skills, experience, education)
  3. Scoring: Ranker agent compares each CV against the job description — outputs 0–100 score with reasoning
  4. Shortlisting: Top N candidates above threshold automatically shortlisted
  5. Scheduling: Scheduler agent checks Google Calendar API for interviewer availability → proposes slots
  6. Communication: Emailer agent drafts personalised acceptance/rejection emails — HR approves before send
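The shortlisting logic in step 4 is simple enough to sketch directly. The function name, candidate shape, and default values below are illustrative assumptions, not code from the project:

```python
def shortlist(candidates: list[dict], threshold: int = 70, top_n: int = 5) -> list[dict]:
    """Keep candidates at or above `threshold`, best first, capped at `top_n`.

    `candidates` are dicts like {"name": ..., "score": ...}, as the
    Ranker agent's 0-100 JSON output would produce once parsed.
    Illustrative sketch only.
    """
    eligible = [c for c in candidates if c["score"] >= threshold]
    return sorted(eligible, key=lambda c: c["score"], reverse=True)[:top_n]
```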

Human-in-the-loop: All email sends and final candidate decisions require HR approval. The agents automate the grunt work — humans make final calls.
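A minimal sketch of what that approval gate could look like, assuming a simple in-memory queue (the `DraftEmail` and `ApprovalQueue` names are hypothetical, not from the project):

```python
from dataclasses import dataclass

@dataclass
class DraftEmail:
    to: str
    subject: str
    body: str
    approved: bool = False

class ApprovalQueue:
    """Holds agent-drafted emails until HR explicitly approves them."""

    def __init__(self) -> None:
        self._drafts: list[DraftEmail] = []

    def submit(self, draft: DraftEmail) -> None:
        # Emailer agent drafts land here instead of being sent directly.
        self._drafts.append(draft)

    def pending(self) -> list[DraftEmail]:
        return [d for d in self._drafts if not d.approved]

    def approve(self, draft: DraftEmail) -> None:
        draft.approved = True

    def release(self) -> list[DraftEmail]:
        # Only approved drafts are handed to the actual send step.
        approved = [d for d in self._drafts if d.approved]
        self._drafts = [d for d in self._drafts if not d.approved]
        return approved
```

The key design choice is that the send step only ever sees what `release()` returns, so an agent cannot email a candidate without a human having approved the draft.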

Results

  - 70% Time Saved per Hire
  - Faster First Screen
  - 91% Scoring Accuracy vs Human
  - 150 CVs Screened in <2 min
LangChain · GPT-4 · Agentic AI · FastAPI · pdfplumber · Python