
Advanced Generative AI

Course Details

This course is for developers who are already familiar with Python and want to learn more about Generative AI. Please note the course is available in Dutch only for the time being.

Introduction: Large Language Models (LLMs)

An overview of what Large Language Models are, how they work, and what possibilities they offer for practical applications.

Techniques for Working with LLMs
  • Function Calling:
    Learn how to connect LLMs to external systems and APIs. With function calling, language models can make structured function calls to your own code, allowing them to interact with databases, APIs, and other systems.

  • Structured Outputs:
    Techniques for generating structured and reliable output from LLMs. Ideal for creating JSON, XML, or other formatted data that you can process directly in your applications.

  • Embeddings:
    Understand how textual data is converted into vector representations. Embeddings form the basis for semantic search, clustering, and measuring textual similarity.

  • Frameworks:
    An overview of available frameworks such as LangChain, LlamaIndex, and others. Learn how these tools help you work more efficiently with LLM APIs and build more complex applications.

  • Fine-tuning:
    Discover how to customize and train models on your specific data and use cases. Fine-tuning enables you to specialize models for your domain.

  • System Prompts & Tokenization:
    Understand how prompting works under the hood. Learn about system prompts, tokenization, and how to apply this knowledge for better prompt engineering.
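As a small taste of the embeddings topic above, the sketch below shows how vector representations make similarity measurable. The vectors here are made-up toy values; in the course you would obtain real embeddings from an embedding model or API.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; a real model returns hundreds of dimensions.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
car = [0.0, 0.1, 0.9]

print(cosine_similarity(cat, kitten))  # close to 1.0: semantically similar
print(cosine_similarity(cat, car))     # close to 0.0: semantically distant
```

The same comparison is what powers semantic search and clustering: texts whose vectors point in similar directions mean similar things.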

AI Agents
  • Introduction to AI Agents:
    Autonomous systems that can independently perform multiple tasks, make decisions, and manage complex workflows without constant human intervention.

  • Monitoring:
    Methods and tools for tracking the behavior and output of AI agents. Learn how to measure performance, detect errors, and ensure reliability.
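The agent loop can be sketched in a few lines. In this toy version the "plan" is hard-coded and the tool is a stub; in a real agent an LLM decides which tool to call next, and each observation is fed back into the model's context.

```python
def get_weather(city: str) -> str:
    # Hypothetical tool; a real one would call a weather API.
    return f"Sunny in {city}"

# Registry mapping tool names to callables the agent may use.
TOOLS = {"get_weather": get_weather}

def run_agent(plan: list[dict]) -> list[str]:
    """Execute a sequence of tool calls, collecting observations."""
    observations = []
    for step in plan:
        tool = TOOLS[step["tool"]]     # look up the requested tool
        result = tool(**step["args"])  # call it with the chosen arguments
        observations.append(result)    # in a real loop: feed back to the model
    return observations

print(run_agent([{"tool": "get_weather", "args": {"city": "Amsterdam"}}]))
```

Logging each step's tool, arguments, and result, as this loop makes easy, is also the starting point for the monitoring topic above.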

Querying Your Own Documents with RAG
  • Retrieval-Augmented Generation (RAG):
    Learn how to combine LLMs with your own knowledge bases and documents for contextually relevant answers.

  • Vector Databases:
    Introduction to specialized databases for storing and searching vector representations. Discover tools like Pinecone, Weaviate, Chroma, and Qdrant.

  • Embeddings in RAG:
    Understand how embeddings are used to make documents searchable based on semantic meaning rather than just keywords.

  • Chunking Strategies:
    Techniques for intelligently splitting large documents into usable pieces. Learn different chunking methods and when to apply each.

  • Retrieval Techniques:
    Methods for retrieving relevant context, including similarity search, hybrid search, and advanced retrieval strategies for optimal results.
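The RAG steps above (chunk, retrieve, then prompt) can be sketched end to end. To stay self-contained, this toy version ranks chunks by word overlap with the query as a stand-in for embedding similarity; a real pipeline would embed chunks, store them in a vector database, and run a similarity search.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def chunk(text: str, size: int = 40) -> list[str]:
    """Naive fixed-size chunking by word count; real systems often overlap chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the top-k chunks by word overlap (stand-in for embedding similarity)."""
    q = tokens(query)
    return sorted(chunks, key=lambda c: len(q & tokens(c)), reverse=True)[:k]

docs = ("Our refund policy allows returns within 30 days. "
        "Shipping is free for orders above 50 euros.")
chunks = chunk(docs, size=8)
context = retrieve("What is the refund policy?", chunks)
# `context` would be prepended to the prompt before asking the LLM.
print(context)
```

Swapping the overlap score for cosine similarity over embeddings turns this toy into the real retrieval step covered in the course.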

Local LLMs
  • Operation & Benefits:
    Discover the benefits of running language models locally: more privacy, complete control over your data, lower costs, and independence from external services.

  • LM Studio & Running Locally:
    Practical experience with setting up and using local models via tools like LM Studio, Ollama, or other frameworks. Learn what hardware you need and how to achieve optimal performance.
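Both LM Studio and Ollama expose an OpenAI-compatible HTTP API on localhost, so talking to a local model is a plain POST request. A minimal sketch, assuming LM Studio's default port (1234; Ollama defaults to 11434) and a placeholder model name you would replace with the one shown in your tool:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # assumption: LM Studio's default port
MODEL = "local-model"                  # placeholder; use your loaded model's name

def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload for a local server."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_llm(prompt: str) -> str:
    """Send the payload to the local server and return the model's reply."""
    data = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# ask_local_llm("Hello!") works once a model is loaded and the local server is running.
print(build_chat_request("Hello!")["messages"][0]["content"])
```

Because the endpoint mimics the OpenAI API, code written against a cloud provider can often be pointed at a local model just by changing the base URL.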
