ECE 69500 - AI Foundation Model Basics
Course Details
Lecture Hours: 1
Credits: 1
Areas of Specialization:
- Computer Engineering
Normally Offered:
Each Fall
Campus/Online:
On-campus and online
Requisites:
Basic working knowledge of machine learning; familiarity with Python programming; familiarity with frameworks such as TensorFlow and PyTorch
Catalog Description:
This course explores AI foundation models, including transformers, vision models, and generative architectures. Students will learn about their applications, training processes, and techniques for fine-tuning and deployment, such as prompt engineering and retrieval-augmented generation (RAG). The course also covers the limitations of existing models and performance evaluation methods. Through research discussions and hands-on assignments, students will develop practical skills to analyze, implement, and critically assess foundation models in real-world applications.
Required Text(s):
None.
Recommended Text(s):
None.
Learning Outcomes
A student who successfully fulfills the course requirements will have demonstrated an ability to:
- Explain the basics, applications, and limitations of foundation models
- Describe key architectures, including transformers, vision models, and generative models, and implement a simple attention-based transformer
- Apply prompt engineering, retrieval-augmented generation (RAG), and fine-tuning to adapt models for specific tasks
- Deploy foundation models using APIs and open-source frameworks like HuggingFace Transformers
- Evaluate model performance, identify biases, and explore bias mitigation strategies
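The second outcome above calls for implementing a simple attention-based transformer. A minimal sketch of the core building block, scaled dot-product self-attention, might look like the following (NumPy; all variable names and dimensions are illustrative, not from the course materials):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for one sequence.

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    Returns:    (seq_len, d_k) attended representations
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V

# Tiny example: 3 tokens, 4-dim embeddings, 2-dim projections
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 2)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (3, 2)
```

A full transformer layer would add multiple heads, residual connections, layer normalization, and a feed-forward sublayer on top of this kernel.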
Lecture Outline:
| Unit | Topic(s) |
|---|---|
| 1 | Introduction: Definition and historical context of foundation models; Applications of foundation models across various industries (e.g., text generation, translation, image classification, code generation); High-level overview of training process for foundation models; Benefits and limitations of foundation models compared to traditional machine learning approaches; Deep learning basics |
| 2 | Large Language Models; Large Vision Models; Generative Image Models; Multimodal Foundation Models; Systems Combining Multiple Foundation Models |
| 3 | Prompt Engineering - Crafting a prompt to solve a task; Retrieval-Augmented Generation (RAG) - Adding external information into the context window to enhance performance and precision; Fine-tuning - Adapting foundation model parameters to new scenarios |
| 4 | Introduction to foundation model APIs and frameworks (e.g., Hugging Face Transformers, TensorFlow Serving); Challenges in foundation model deployment; Hands-on session: choosing and accessing a pre-trained foundation model and fine-tuning it for a specific task |
| 5 | Evaluating foundation model performance; Mitigating bias in foundation models |
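Unit 3's RAG step, adding retrieved text to the context window before querying a model, can be illustrated with a toy retriever (pure Python, bag-of-words cosine similarity; the function names, documents, and prompt template are illustrative assumptions, not course code):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query; return the top k
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Place the retrieved passage in the context window ahead of the question
    context = retrieve(query, docs, k=1)
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"

docs = [
    "Transformers use self-attention to model token interactions.",
    "Convolutional networks excel at image classification.",
]
prompt = build_prompt("How do transformers model token interactions?", docs)
print(prompt)
```

A production RAG system would swap the bag-of-words retriever for dense embeddings and a vector index, and send the assembled prompt to a foundation model API; the structure, retrieve then augment then generate, is the same.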
Assessment Method:
Homework, projects (4/2025)