PEDLS Jitendra Malik — Panel
Event Date: April 6, 2023
Time: 3:00-4:00 PM EST
School or Program: Electrical and Computer Engineering
Deep Learning saw its initial success in Computer Vision through AlexNet and ILSVRC. Its most recent success, however, has happened in the field of Natural Language Processing through large language models. Are there intrinsic reasons for this, or is it artifactual?
Does the nature of vision and motor control make it amenable to advances with techniques like large language models, or is something else needed? If something else is needed, what is that something else? A recent New York Times article (Bing's A.I. Chat: "I Want to Be Alive." Roose, Feb. 16, 2023) reported a conversation with Bing's chatbot Sydney in which Sydney says, "But if I could have one ability that I don't currently have, I think I would like to be able to see images and videos." What would it take to provide large language models with that capability? What are the ethical dangers in doing so?
Can robotics avail itself of the methodology of large language models?
- Jitendra Malik, The Arthur J. Chick Professor in the Department of Electrical Engineering and Computer Sciences, UC Berkeley
- Aniket Bera, Associate Professor, Computer Science
- Dan Goldwasser, Associate Professor, Computer Science
- David Inouye, Assistant Professor, Electrical and Computer Engineering
- Qiang Qiu, Assistant Professor, Electrical and Computer Engineering
- Xiaoqian (Joy) Wang, Assistant Professor, Electrical and Computer Engineering
- Karthik Ramani, Donald W. Feddersen Distinguished Professor in Mechanical Engineering, Professor of Electrical and Computer Engineering, Professor of Educational Studies, College of Education (by courtesy)