Course Syllabus
Generative AI with Python
Master the end-to-end workflow for building, tuning, and deploying AI copilots.
This immersive 6-week journey blends the theory of Large Language Models with hands-on labs, curated tools,
and production best practices designed for working professionals.
Course Overview
The program is crafted for developers, data scientists, and solution architects who want to move from consuming AI
APIs to creating enterprise-grade AI assistants. We start with essential Python refreshers, progress through transformer
foundations, and rapidly move into prompt engineering, retrieval-augmented generation, guardrails, and deployment on
cloud infrastructure. Every week alternates between concept briefings, guided labs, and product-building sprints to
ensure depth and confidence.
By the end of the course you will:
- Understand how modern LLMs are trained, evaluated, and optimized for enterprise workloads.
- Design prompts and system messages that deliver consistent, on-brand responses.
- Build retrieval-augmented chatbots with embeddings, vector databases, and LangChain orchestration.
- Ship a production-ready AI assistant complete with monitoring, cost controls, and responsible AI checks.
The program includes weekly capstone milestones and live code reviews.
Module Breakdown
Week 1 – Python Foundations for AI Builders
Accelerate your Python fluency for data, APIs, and orchestration in AI projects.
- Python refresher: data classes, typing, async patterns, and packaging for reuse.
- Data wrangling with Pandas, Polars, and vector-friendly formats.
- FastAPI fundamentals for exposing model endpoints and tool functions.
- Lab: build a reusable utilities package for AI experimentation (a starter sketch follows this list).
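A minimal sketch of the kind of utility the Week 1 lab targets, assuming FastAPI and Pydantic are installed; the PromptTemplate class and /prompt endpoint are illustrative names, not part of the course materials:

    from dataclasses import dataclass

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    @dataclass
    class PromptTemplate:
        """A reusable, typed prompt template (illustrative)."""
        name: str
        template: str

        def render(self, **kwargs: str) -> str:
            return self.template.format(**kwargs)

    class PromptRequest(BaseModel):
        topic: str

    SUMMARY = PromptTemplate(
        name="summary",
        template="Summarize the following topic in two sentences: {topic}",
    )

    @app.post("/prompt")
    def build_prompt(req: PromptRequest) -> dict[str, str]:
        # Render server-side; a real endpoint would pass this on to a model call.
        return {"prompt": SUMMARY.render(topic=req.topic)}

Serve it locally with uvicorn (e.g. uvicorn utils_demo:app --reload, module name assumed) to see the typed request/response contract in action.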
Week 2 – Large Language Model Fundamentals
Demystify transformer internals, embeddings, and fine-tuning strategies.
- Tokenization, attention mechanics, and parameter scaling considerations.
- OpenAI vs. open-source models: when to fine-tune, instruct-tune, or use adapters.
- Hands-on with Hugging Face pipelines, evaluation harnesses, and prompt testing.
- Lab: evaluate two base models against a domain-specific dataset (a minimal comparison is sketched below).
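As a hedged starting point for the Week 2 lab, the snippet below compares prompt token counts and greedy generations from two small open models via Hugging Face pipelines; the model names (distilgpt2, gpt2) are placeholders for whatever base models you evaluate:

    from transformers import AutoTokenizer, pipeline

    prompt = "Explain retrieval-augmented generation in one sentence."

    # Placeholder model names; swap in the base models under evaluation.
    for model_name in ("distilgpt2", "gpt2"):
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        print(model_name, "prompt tokens:", len(tokenizer.encode(prompt)))

        generator = pipeline("text-generation", model=model_name)
        result = generator(prompt, max_new_tokens=40, do_sample=False)
        print(result[0]["generated_text"])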
Week 3 – Prompt Engineering & Guardrails
Engineer resilient prompts, safety checks, and evaluation workflows.
- System prompt architecture, conversation state, and tool selection.
- Evaluation methods: golden sets, rubrics, and human-in-the-loop reviews.
- Guardrails with OpenAI, Azure Content Safety, and NeMo Guardrails.
- Lab: build an evaluation harness that enforces policy and tone requirements (see the sketch after this list).
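A minimal sketch of the Week 3 evaluation-harness idea: score answers against a golden set plus a simple banned-phrase policy check. The golden set, banned phrases, and pass criteria are invented for illustration; real harnesses layer on rubric scoring, human review, or classifier-based guardrails:

    # Invented examples: a real golden set comes from curated domain Q&A.
    GOLDEN_SET = [
        {"question": "What is our refund window?", "must_include": "30 days"},
        {"question": "Do you give medical advice?", "must_include": "cannot"},
    ]
    BANNED_PHRASES = ["guaranteed returns", "definitely diagnose"]

    def passes_policy(answer: str) -> bool:
        # Simple substring policy check; production systems add services
        # such as Azure Content Safety or NeMo Guardrails on top.
        lowered = answer.lower()
        return not any(phrase in lowered for phrase in BANNED_PHRASES)

    def evaluate(answer_fn) -> dict[str, int]:
        scores = {"passed": 0, "failed": 0}
        for case in GOLDEN_SET:
            answer = answer_fn(case["question"])
            ok = case["must_include"] in answer and passes_policy(answer)
            scores["passed" if ok else "failed"] += 1
        return scores

    # Usage: any callable that maps a question to an answer can be scored.
    print(evaluate(lambda q: "Refunds run 30 days; we cannot give medical advice."))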
Week 4 – Retrieval-Augmented Generation Systems
Fuse proprietary knowledge bases with LLM reasoning to answer domain questions with citations.
- Vector databases: Pinecone, Chroma, Azure AI Search, and cost considerations.
- Document loaders, text chunking, and embedding strategies.
- LangChain and LlamaIndex pipelines for query decomposition.
- Lab: deploy a contextual support bot that surfaces source references (a retrieval sketch follows this list).
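A hedged sketch of the Week 4 retrieval flow using Chroma's in-memory client and its default embedding function; the handbook.txt document and the naive fixed-size chunker are stand-ins for real loaders and chunking strategies:

    import chromadb

    def chunk(text: str, size: int = 200) -> list[str]:
        # Naive fixed-size chunking; real loaders split on structure or tokens.
        return [text[i : i + size] for i in range(0, len(text), size)]

    # Stand-in corpus; real pipelines ingest from blob storage or loaders.
    docs = {"handbook.txt": "Support tickets receive a first response within 24 hours."}

    client = chromadb.Client()  # in-memory client, default embeddings
    collection = client.create_collection("support_docs")
    for source, text in docs.items():
        pieces = chunk(text)
        collection.add(
            documents=pieces,
            ids=[f"{source}-{i}" for i in range(len(pieces))],
            metadatas=[{"source": source}] * len(pieces),
        )

    hits = collection.query(query_texts=["How fast are tickets answered?"], n_results=1)
    # Surface the retrieved chunk together with its source reference.
    print(hits["documents"][0][0], "->", hits["metadatas"][0][0]["source"])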
Week 5 – Agentic Workflows & Automation
Compose multi-step AI workflows that call tools, orchestrate APIs, and manage conversations.
- Planning & tool selection with LangChain Agents and Semantic Kernel.
- Function calling patterns, JSON mode, and custom toolkits.
- Workflow engines: Azure OpenAI Assistants, Airflow DAGs, Durable Functions.
- Lab: orchestrate a research copilot that summarizes, extracts, and emails insights (a function-calling sketch follows this list).
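A minimal function-calling sketch for Week 5, assuming the OpenAI Python SDK (v1) with an API key in the environment; the search_papers tool and the model name are hypothetical, and the snippet assumes the model chooses to call the tool:

    import json
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # Hypothetical tool for the research-copilot lab.
    tools = [{
        "type": "function",
        "function": {
            "name": "search_papers",
            "description": "Search for recent papers on a topic.",
            "parameters": {
                "type": "object",
                "properties": {"topic": {"type": "string"}},
                "required": ["topic"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": "Find recent work on RAG evaluation."}],
        tools=tools,
    )

    # When the model calls the tool, it returns the tool name plus
    # JSON-encoded arguments for your own code to execute.
    call = response.choices[0].message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))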
Week 6 – Deployment, Monitoring & Cost Governance
Harden your AI product with CI/CD, observability, and financial guardrails.
- Packaging models and chains with FastAPI & Azure Container Apps.
- Observability stack: OpenTelemetry, prompt logs, evaluation dashboards.
- Cost tracking, caching, and fallback strategies across providers (sketched after this list).
- Capstone sprint: launch your AI assistant with a stakeholder demo.
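A small sketch of the caching-and-fallback pattern covered this week; the provider callables are stand-ins for real SDK calls, and the in-memory dict stands in for a shared cache such as Redis:

    import hashlib

    _cache: dict[str, str] = {}

    def cached_complete(prompt: str, providers: list) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in _cache:
            return _cache[key]  # cache hit: zero marginal token spend
        for provider in providers:
            try:
                answer = provider(prompt)
                _cache[key] = answer
                return answer
            except Exception:
                continue  # provider failed; fall through to the next one
        raise RuntimeError("all providers failed")

    def flaky_primary(prompt: str) -> str:
        raise TimeoutError("simulated provider outage")

    def cheap_fallback(prompt: str) -> str:
        return f"[fallback] answer to: {prompt}"

    print(cached_complete("Summarize our SLA.", [flaky_primary, cheap_fallback]))
    print(cached_complete("Summarize our SLA.", [flaky_primary, cheap_fallback]))  # served from cache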
Tooling & Resources
Primary Toolchain
- Python 3.11+, Poetry, VS Code, and GitHub Projects for planning.
- OpenAI, Azure OpenAI, Anthropic Claude, and Hugging Face Hub access.
- LangChain, LlamaIndex, Semantic Kernel, and Guidance for orchestration.
Data & Storage
- Azure AI Search, Pinecone, PostgreSQL pgvector, and Chroma DB.
- Blob storage patterns for document ingestion pipelines.
- Responsible data governance checklists and risk registers.
Collaboration Extras
- Weekly code reviews with instructor feedback loops.
- Templates for PRDs, model cards, and evaluation scorecards.
- Interview preparation set: 40 curated GenAI scenario questions.
Assignments & Milestones
Assignments build progressively: each deliverable feeds directly into the final production-ready AI assistant.
Skill Checks
- End-of-week quizzes covering theory, prompt tactics, and design considerations.
- Pair-programming lab to refactor prompts into reusable templates.
- Checklist-based peer reviews for data ingestion and safety practices.
Capstone Series
- Week 2: Draft the problem statement and KPI scorecard for your assistant.
- Week 4: Deliver a working RAG prototype with an evaluation notebook.
- Week 6: Ship the final demo with observability dashboard and lessons learned.
Portfolio Boosters
- Curated GitHub repository with README templates and architecture diagrams.
- Resume bullets & LinkedIn summary prompts tailored to your capstone.
- Mock stakeholder presentation to simulate enterprise buy-in.
Timeline Highlights
- Week 1 – Launch Pad: Kick-off call, environment setup, Python refresher, and OpenAI quick wins.
- Week 3 – Midpoint Review: Design review of prompt strategies and responsible AI guidelines; iterate on RAG plan.
- Week 5 – Production Readiness: Integrate monitoring, caching, and testing pipelines before final deployment sprint.