AI Foundation Model Basics

ECE69500

Credit Hours:

1

Catalog Description:

This course explores AI foundation models, including transformers, vision models, and generative architectures. Students will learn about their applications and training processes, and about techniques for adapting and deploying them, such as prompt engineering, retrieval-augmented generation (RAG), and fine-tuning. The course also covers the limitations of existing models and methods for evaluating their performance. Through research discussions and hands-on assignments, students will develop practical skills to analyze, implement, and critically assess foundation models in real-world applications.

Required Text(s):

None.

Recommended Text(s):

None.

Lecture Outline:

Week 1: Introduction: definition and historical context of foundation models; applications of foundation models across various industries (e.g., text generation, translation, image classification, code generation); high-level overview of the training process for foundation models; benefits and limitations of foundation models compared to traditional machine learning approaches; deep learning basics
Week 2: Large language models; large vision models; generative image models; multimodal foundation models and systems combining multiple foundation models
Week 3: Prompt engineering: crafting a prompt to solve a task. Retrieval-augmented generation (RAG): adding external information to the context window to enhance performance and precision. Fine-tuning: adapting foundation model parameters to new scenarios
Week 4: Introduction to foundation model APIs and frameworks (e.g., Hugging Face Transformers, TensorFlow Serving); challenges in foundation model deployment; hands-on session: choosing and accessing a pre-trained foundation model and fine-tuning it for a specific task
Week 5: Evaluating foundation model performance; mitigating bias in foundation models
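The RAG topic in Week 3 can be previewed with a minimal, self-contained sketch: retrieve the most relevant passage from a small corpus and prepend it to the prompt before querying a model. The corpus, the bag-of-words scoring function, and the prompt template below are illustrative assumptions for teaching purposes, not part of the course materials; a real deployment would use learned embeddings and a vector store.

```python
from collections import Counter
import math

def score(query: str, doc: str) -> float:
    """Cosine similarity between bag-of-words vectors (toy stand-in for an embedding retriever)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the highest-scoring passage for the query."""
    return max(corpus, key=lambda doc: score(query, doc))

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend the retrieved passage to the question, RAG-style, before sending it to a model."""
    context = retrieve(query, corpus)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

# Hypothetical two-document corpus used only for illustration.
corpus = [
    "The transformer architecture relies on self-attention.",
    "Convolutional networks excel at image classification.",
]
print(build_prompt("What does the transformer architecture rely on?", corpus))
```

The sketch shows the core RAG idea the outline describes: external information is injected into the context window at inference time, without changing model parameters.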
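For the Week 5 evaluation topic, two widely used text-generation metrics can be sketched in a few lines: exact match and token-overlap F1. The normalization choices below (lowercasing, whitespace tokenization) are simplifying assumptions; benchmark suites typically add punctuation and article stripping.

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> bool:
    """Strict string equality after whitespace and case normalization."""
    return prediction.strip().lower() == reference.strip().lower()

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1: a lenient metric that rewards partial matches in generated text."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = Counter(pred) & Counter(ref)  # multiset intersection of shared tokens
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the cat sat", "the cat"))  # partial credit for extra tokens
```

Averaging these scores over a held-out test set gives the kind of aggregate performance number the outline's evaluation unit refers to.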

Assessment Method:

Homework, projects 

Requisites:

Basic working knowledge of machine learning; familiarity with Python programming and with frameworks such as TensorFlow and PyTorch