Generative AI

Learn Generative AI, Large Language Models (LLMs), AI automation, and intelligent AI systems using real-world datasets, modern AI frameworks, and hands-on implementation.

What is Generative AI?

Generative AI refers to AI systems that can generate text, images, code, audio, and other digital content, built on advanced AI models trained on large-scale datasets.


Module 1 : Introduction & Foundations

Understand the foundations of Artificial Intelligence, including Generative AI, Agentic AI, their core concepts, and how they differ in modern intelligent systems.

  • Introduction
  • What is Generative AI?
  • What is Agentic AI?
  • Generative AI vs Agentic AI
    Module 2 : Fundamentals of Generative AI & LLMs

    Learn the basics of Generative AI and LLMs, how they work, and where they are used.

    • Basics of Generative AI
    • What are LLMs?
    • Key Concepts
    • Tokenization
    • Embeddings
    • Vocabulary
    • Attention
    • Transformer Architecture
    • Self-Attention Mechanism
    • Multi-head Attention
    • Cross Attention
    • Encoder–Decoder
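The attention topics above can be illustrated with a minimal pure-Python sketch of scaled dot-product self-attention. The function names here are illustrative, not from any framework:

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Toy self-attention: each query attends over all keys.

    Q, K, V are lists of vectors (lists of floats) of equal dimension d.
    """
    d = len(K[0])
    out = []
    for q in Q:
        # Attention score for each key: dot(q, k) / sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Each output vector is the attention-weighted average of the values.
        out.append([sum(w * v[i] for w, v in zip(weights, V))
                    for i in range(len(V[0]))])
    return out

# Two toy token vectors attending over each other (Q = K = V, as in self-attention).
tokens = [[1.0, 0.0], [0.0, 1.0]]
contextualized = scaled_dot_product_attention(tokens, tokens, tokens)
```

Multi-head attention runs several such computations in parallel on learned projections of Q, K, and V; cross-attention uses queries from one sequence and keys/values from another.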

    Module 3 : Working with LLMs

    Learn how to interact with LLMs using prompts, fine-tuning, and best practices for real applications.

    • Open Source vs Closed Source LLMs
    • Hugging Face Ecosystem
    • Model Loading
    • Model Parameters
    • Model Weight Formats (.pth, .safetensors, ONNX)
    • Model Size Calculation and License
    • Multimodal LLMs: Text, Audio (ASR, TTS), Image, Video
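The model size calculation mentioned above reduces to simple arithmetic: parameter count times bits per parameter. A hedged sketch (`model_size_gb` is a made-up helper name, not a library function):

```python
def model_size_gb(num_params, bits_per_param):
    """Approximate raw weight storage: parameters x bits, converted to decimal GB.

    Ignores optimizer states, activations, and KV cache, which add more at
    training or inference time.
    """
    bytes_total = num_params * bits_per_param / 8
    return bytes_total / 1e9

# A 7B-parameter model at different precisions:
print(round(model_size_gb(7e9, 16), 1))  # fp16 -> 14.0 GB
print(round(model_size_gb(7e9, 4), 1))   # int4 -> 3.5 GB
```

This is why quantization (Module 13) matters for running large models on consumer hardware.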

    Module 4 : Prompt Engineering & Control

    Learn how to write effective prompts to get better results from LLMs.

    • Prompts and Context
      • Max Sequence Length vs Max Output Tokens
      • Task-specific Prompts
      • Sampling Parameters (Temperature, Top-k, Top-p, etc.)
    • Prompting Techniques
      • Zero-Shot Learning
      • Chain of Thought (CoT)
      • ReAct
    • Guardrails
    • Using Chat Completion APIs
      • OpenAI (ChatGPT)
      • Google Gemini
      • Anthropic Claude
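The sampling parameters listed above (temperature, top-k) can be sketched in plain Python over raw logits. `sample_next_token` is an illustrative toy, not any provider's API:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, seed=None):
    """Sample a token index from logits with temperature and top-k filtering."""
    # Temperature rescales logits: < 1 sharpens the distribution, > 1 flattens it.
    scaled = [l / temperature for l in logits]
    # Top-k: keep only the k highest-scoring tokens, mask out the rest.
    if top_k is not None:
        cutoff = sorted(scaled, reverse=True)[top_k - 1]
        scaled = [s if s >= cutoff else float("-inf") for s in scaled]
    # Softmax over the surviving logits (exp(-inf) underflows cleanly to 0).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    rng = random.Random(seed)
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# With top_k=1 this degenerates to greedy decoding (always the argmax).
print(sample_next_token([0.1, 2.0, 0.5], top_k=1))
```

Top-p (nucleus) sampling works similarly but keeps the smallest set of tokens whose cumulative probability exceeds p, rather than a fixed count k.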

    Module 5 : Retrieval-Augmented Generation (RAG)

    Learn how retrieval systems improve generative models for more accurate and relevant outputs.

    • What is RAG?
    • LLM Hallucination: Causes and Mitigation
    • When to Use RAG
    • Components of RAG
      • Embeddings
      • Vector Databases: Chroma, Pinecone, FAISS, Milvus
      • Chunking Strategies
      • Conversational RAG
      • Embedding Spaces: Semantic Similarity & Cosine Distance
      • Answer Grading / Response Evaluation (BLEU, ROUGE, GPT-based)

    Module 6 : Advanced RAG Techniques

    Learn advanced techniques to optimize RAG systems for better performance and scalability.

    • Corrective RAG (CRAG)
    • Self-RAG with Reflection
    • Graph RAG
    • Hybrid RAG (Semantic + Keyword)

    Module 7 : Graph-based Knowledge and Retrieval

    Learn how to leverage graph-based approaches for knowledge representation and retrieval in AI systems.

  • Graph Fundamentals – Nodes, Edges
  • Ontology Design
  • GraphDBs:
    • Advantages of GraphDBs
    • Neo4j: Community vs Enterprise vs Cloud
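The node/edge fundamentals above can be sketched without any graph database. `KnowledgeGraph` below is a toy triple store, not a Neo4j API; the example data is invented for illustration:

```python
class KnowledgeGraph:
    """Minimal directed, labeled graph: edges are (subject, relation, object) triples."""

    def __init__(self):
        self.edges = []

    def add_edge(self, subj, rel, obj):
        self.edges.append((subj, rel, obj))

    def neighbors(self, node):
        """All (relation, object) pairs one hop away from `node`."""
        return [(r, o) for s, r, o in self.edges if s == node]

g = KnowledgeGraph()
g.add_edge("Aspirin", "TREATS", "Headache")
g.add_edge("Aspirin", "INTERACTS_WITH", "Warfarin")
print(g.neighbors("Aspirin"))  # [('TREATS', 'Headache'), ('INTERACTS_WITH', 'Warfarin')]
```

An ontology constrains which node types and relation labels are allowed; a real graph database adds indexing, a query language (e.g. Cypher in Neo4j), and multi-hop traversal.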
    Module 8 : LangChain Framework

    Learn how to use the LangChain framework to build applications with LLMs.

    • Introduction to LangChain
    • Chains, Prompts, and Templates
    • Memory Systems and Conversation Flow
    • Memory Types: Short-term, Long-term, Episodic
    • Persistence Strategies
    • Basic Document Loaders and Text Splitters
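A basic text splitter of the kind listed above can be sketched in a few lines of plain Python (no LangChain dependency; `split_text` is an illustrative name):

```python
def split_text(text, chunk_size=200, overlap=50):
    """Naive fixed-size splitter with overlap between consecutive chunks.

    Overlap preserves context that would otherwise be cut at chunk boundaries.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

print(split_text("abcdefghij", chunk_size=4, overlap=2))
# ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Framework splitters improve on this by breaking at separator boundaries (paragraphs, sentences) rather than at arbitrary character offsets.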

    Module 9 : Agentic AI Principles

    Learn the principles of Agentic AI and how to design intelligent agents that can perform complex tasks autonomously.

    • What is Agentic AI?
    • AI Agents vs Agentic AI
    • Agentic AI Frameworks Overview
      • No-code vs Code-first
      • n8n vs LangGraph vs Airflow
      • Design Principles:
        • Goal, Planner, Orchestrator
        • Copilot vs Autopilot
      • Agentic AI Frameworks
        • CrewAI, LangGraph, AutoGen

    Module 10 : Production & Final Project

    Learn how to deploy generative AI applications and work on a final project to showcase your skills.

    • Agentic AI using LangGraph
    • LangChain vs LangGraph
    • LangGraph Components
    • Workflow Types
      • Sequential
      • Parallel
      • Iterative
      • Conditional
    • Memory & State Management
      • Persistence Strategies
      • Time Travel in LangGraph
    • Observability with LangSmith
      • What is Observability?
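The workflow types above can be illustrated with a tiny graph executor in plain Python. This sketches the concepts (nodes, conditional edges, shared state) without the LangGraph API; all names are illustrative:

```python
def run_workflow(nodes, edges, state, start, end="END"):
    """Tiny graph executor: nodes are functions state -> state; edges map a
    node name to either the next node's name or a router function that
    inspects the state and chooses one (a conditional edge)."""
    current = start
    while current != end:
        state = nodes[current](state)
        nxt = edges[current]
        current = nxt(state) if callable(nxt) else nxt
    return state

# Iterative + conditional workflow: keep "generating" until a review check passes.
nodes = {
    "generate": lambda s: {**s, "draft": s["draft"] + "x", "tries": s["tries"] + 1},
    "review":   lambda s: s,  # a real node might call an LLM grader here
}
edges = {
    "generate": "review",                                          # sequential edge
    "review":   lambda s: "END" if len(s["draft"]) >= 3 else "generate",  # conditional
}
final = run_workflow(nodes, edges, {"draft": "", "tries": 0}, "generate")
print(final["tries"])  # 3
```

Persistence (checkpointing the state dict after each node) is what enables resuming runs and LangGraph-style "time travel" back to earlier states.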

    Module 11 : Model Context Protocol (MCP)

    Learn about the Model Context Protocol (MCP) and how it enables interoperability between different AI models and frameworks.

    • MCP Fundamentals and Architecture
    • MCP Server and Client Implementation
    • Tool Integration through MCP
    • MCP vs Traditional Integration Patterns

    Module 12 : Agent-to-Agent Communication

    Learn how to enable communication and collaboration between multiple AI agents to solve complex problems.

    • Agent-to-Agent Communication Fundamentals
    • Orchestration Patterns:
      • Manager-Worker
      • Peer-to-Peer
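The Manager-Worker pattern above can be sketched with plain functions standing in for agents (all names here are illustrative; in a real system each worker would wrap an LLM call or an agent runtime):

```python
def manager(task, workers):
    """Manager-Worker pattern: decompose a task, delegate subtasks to
    workers, then merge their results into one answer."""
    subtasks = task.split(";")  # naive decomposition by delimiter
    results = [workers[i % len(workers)](sub.strip())
               for i, sub in enumerate(subtasks)]
    return " | ".join(results)

# Toy "specialist" workers:
summarize = lambda t: f"summary({t})"
translate = lambda t: f"translation({t})"

print(manager("report Q1; report Q2", [summarize, translate]))
# summary(report Q1) | translation(report Q2)
```

In a peer-to-peer topology there is no central manager; agents exchange messages directly and negotiate who handles what.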

    Module 13 : Fine-Tuning and Quantization

    Learn how to fine-tune and optimize LLMs for specific tasks and deployment scenarios.

    • When to Use Fine-tuning vs Prompt Engineering
    • Parameter-Efficient Fine-Tuning (PEFT)
    • LoRA
    • QLoRA
    • Quantization Techniques
      • Intro to Quantization
      • Asymmetric vs Symmetric
      • Post-training Quantization vs Quantization-Aware
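Symmetric quantization from the list above can be sketched in a few lines. This toy `quantize_symmetric` (an illustrative name) uses a single scale with the zero-point fixed at 0, as a symmetric post-training scheme would; asymmetric schemes add a nonzero zero-point to better cover skewed value ranges:

```python
def quantize_symmetric(values, bits=8):
    """Symmetric quantization: map floats to signed ints with one scale,
    zero-point fixed at 0."""
    qmax = 2 ** (bits - 1) - 1               # e.g. 127 for int8
    scale = max(abs(v) for v in values) / qmax
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [-0.5, 0.0, 0.25, 0.5]
q, scale = quantize_symmetric(weights)
restored = dequantize(q, scale)
# Per-weight round-trip error is bounded by scale / 2.
```

Quantization-aware training instead simulates this rounding during training so the model learns to compensate for the error.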

    Module 14 : Model Serving & Deployment + Projects

    Learn how to serve and deploy generative AI models in production environments.

    • Model Serving Frameworks:
      • vLLM
      • TensorRT-LLM

    • Final Projects (Pick One)
    • Domain-Aware LLM Chatbot Using Open-Source Models
    • Customer Support Assistant Powered by Retrieval-Augmented Generation
    • Agentic AI Advisor for Healthcare Guidance and Decision Support
