Google's chief scientist, Purdue faculty discuss AI capabilities and responsibilities in panel


Saurabh Bagchi (far right) moderates a panel discussion at Fowler Hall about the future of AI. Participating panelists included (L-R) Mithuna Thottethodi, Greg Shaver, Xiangyu Zhang, Jeff Dean, Joy Wang and Ashraf Alam.

Five Purdue faculty members and Jeff Dean, chief scientist at Google, engaged in a lively discussion at Fowler Hall in a panel titled “What could and should AI do for society in the next 25 years?”

The event, which followed the April 11 Purdue Engineering Distinguished Lecture Series featuring Dean, was moderated by Saurabh Bagchi, professor in the Elmore Family School of Electrical and Computer Engineering and director of the Army’s Assured Autonomy Innovation Institute (A2I2).

In the last decade, the world has seen tremendous progress in AI, in both its core technology and its engineering, resulting in improvements to people's professional and personal lives. The panel reflected on AI's trajectory over the next 25 years, framing the question in terms of both the desired technological advances and the societal implications of such technology. The panelists weighed the positives against the potential downsides of AI, so that engineers and scientists can amplify the former and contain the latter.

Bagchi asked each panelist to comment on the question in the panel's title. Ashraf Alam, the Jai N. Gupta Distinguished Professor in the Elmore Family School of Electrical and Computer Engineering, said that in his area of photovoltaic cells and renewable energy technologies, machine learning (ML) has been a game changer because it can quickly predict the output of such technologies across many locations. As a goal to work toward, he suggested integrating machine learning with physics-based models and large language models (LLMs) with grammar-based models.

Joy Wang, assistant professor in the Elmore Family School of Electrical and Computer Engineering, brought in the angle of trustworthiness of ML in the biomedical domain. She shared remarkable successes already achieved there, such as Google DeepMind's work on understanding proteins and other molecules essential to our understanding of health and wellbeing. The beauty, she noted, is that such understanding and interventions can be personalized to individuals. With the increasing use of wearables and telehealth technologies, ML is enabling early detection of diseases.

Jeff Dean outlined application areas of AI that excite him. The first is healthcare, since medical decision making should be informed by a wealth of past data, which AI technologies are very good at harnessing, he said. The second lies in education, where educational materials can be created in any language and personalized tutors can be synthesized. A third area that will benefit humanity is robots operating in “messy environments” at home or in the office.

Xiangyu Zhang, the Samuel Conte Professor in the Department of Computer Science, discussed the importance of securing AI models. He shared the outcomes of TrojAI, his four-year project aimed at developing techniques to scan for backdoors injected into AI models across modalities such as computer vision and natural language processing. The results of his study led him to strike a cautionary note about the secure application of AI models.

Greg Shaver, professor in the School of Mechanical Engineering, views AI as having a crucial role to play in the transport of food and medicine as freight. He also believes that partial automation through heavy-vehicle platooning is in sight, whereas full automation of single heavy vehicles is some way off.

Mithuna Thottethodi, professor in the Elmore Family School of Electrical and Computer Engineering, stressed the importance of hardware that can improve the efficiency of AI computation without diluting the accuracy of AI applications, thereby reducing energy demand and facilitating AI processing at the edge rather than only in data centers.

After these prepared remarks, Bagchi opened the floor to questions from the audience, who had formed long lines in front of the two microphones to share their thoughts and questions with the panelists.

The questions fell into three broad categories. The first focused on the foundational AI architectures needed and where that innovation is headed, including a precise understanding of the term that has captured the popular imagination: “AGI,” or artificial general intelligence. The second focused on trust in data and the resulting AI models. The third covered the role of educational institutions in AI education, both for their students and for society more broadly.

An animated discussion ensued from a question about how far society is from AGI. Panelists observed that the set of tasks AI is being asked to perform is expanding fast, and that one must ask whether the goal of AGI is to perform a task better than some humans or better than all humans.

On the matter of trust, an intriguing question was posed about LLMs “hoovering up” available data from public sources, such as websites, and then generating new data. How can one be confident about the quality of that data, the attendee asked. If a model is trained on poor-quality data, the model will likewise be inaccurate. The panelists acknowledged that this is an open problem today. Zhang pointed to tools that can assign trust levels to data, while Dean stressed the importance of techniques to differentiate machine-generated data from human-generated data.

On the role of educational institutions like Purdue, Dean reminisced about distributed systems research in the 1980s and 1990s, when algorithms were developed and demonstrated at small scale and, if they showed promise, implemented and evaluated at large scale by commercial entities. Thottethodi struck a bullish note on AI as a career choice and urged students to pick the hard problems in AI as the focus of their work.

The panel ended with Bagchi summarizing three prominent thrusts of AI work at Purdue. First, making AI models more efficient to execute, so that they can run on a vast array of computational devices. Second, increasing the reliability and security of AI models. Third, training AI models on a variety of modalities (text, audio, video, unstructured conversation) and with ever-increasing context.

After the event concluded, Dean remained in the auditorium and engaged in an informal discussion with about 20 students.