EAGER: SaTC-EDU: AI-based Humor-Integrated Social Engineering Training
This project is supported by NSF
Advances in artificial intelligence (AI) have introduced new opportunities and challenges in cybersecurity. Social engineering, while contributing to the majority of cyberattacks, poses a uniquely difficult problem in cybersecurity because of a combination of factors. First, social engineering is low cost and involves multiple increasingly complex and subtle attack models. Second, the majority of computer users are not cybersecurity-literate, with less than 30% judged competent on basic knowledge. Third, social engineering takes advantage of human vulnerabilities such as habit formation and susceptibility to persuasive techniques. This all results in a significant gap in security because individuals are unprepared to counteract social engineering. To address the need to educate casual computer users against social engineering attacks, this project proposes a novel approach that will take advantage of human psychology, just like the attacks themselves do. The project team proposes to create an accessible and engaging learning experience that will promote changes in attitude and behavior in computer users by teaching them about social engineering techniques and how to detect them. This project fills an important gap by focusing on users normally marginalized by current cybersecurity education efforts, including casual computer users or those with computer anxiety, such as the elderly and low-income families.
To address the dual problems of a lack of cybersecurity literacy and increasing social engineering attacks, the multidisciplinary project team proposes to integrate AI techniques to create a customized social engineering education experience that utilizes the principles of entertainment education. This effort will target non-security professionals and will use pretext design maps to train AI systems to generate social engineering scenarios. Transformer-based natural language processing models and humor theory knowledge will be used to generate explainable humorous training schemas based on these social engineering scenarios. The scenarios will then be applied in a classroom setting, where learning patterns and specific psychological markers will be used to refine the AI-generated scenarios. The combination of these approaches will result in an effective cybersecurity pedagogical tool, powered by AI, for casual computer users.
This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored, collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Ontological Semantic Technology
Ontological Semantic Technology (OST) is a set of resources that enables computational understanding of natural language. The resources consist of a language-independent ontology (representing general knowledge of the world), a number of language-dependent lexica (one per supported natural language, such as English, Russian, or Korean), and a set of tools that interpret text into a text-meaning representation (TMR), the results of which are stored in an information base. The ultimate goal of OST is to understand both explicit and implicit information that is compatible with the knowledge in the ontology.
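The pipeline above can be illustrated with a deliberately tiny sketch. Everything here (the concepts, the lexicon entries, the role-filling logic) is hypothetical and vastly simpler than actual OST resources; it only shows the shape of ontology, lexicon, and TMR.

```python
# Toy illustration of OST-style interpretation (hypothetical, greatly
# simplified): a tiny "ontology", a per-language "lexicon" mapping words
# to concepts, and an interpreter producing a text-meaning representation.

ONTOLOGY = {
    "BUY": {"is-a": "EVENT"},      # general world knowledge
    "HUMAN": {"is-a": "ANIMATE"},
    "BOOK": {"is-a": "OBJECT"},
}

LEXICON_EN = {  # one lexicon per supported language
    "mary": ("HUMAN", {"name": "Mary"}),
    "bought": ("BUY", {}),
    "book": ("BOOK", {}),
}

def interpret(sentence: str) -> dict:
    """Map each known word to its concept; fill event roles naively."""
    tmr = {"event": None, "agent": None, "theme": None}
    for word in sentence.lower().rstrip(".").split():
        entry = LEXICON_EN.get(word)
        if entry is None:
            continue  # function words ("a", "the") carry no concept here
        concept, props = entry
        if ONTOLOGY[concept]["is-a"] == "EVENT":
            tmr["event"] = concept
        elif tmr["event"] is None:
            tmr["agent"] = {"concept": concept, **props}
        else:
            tmr["theme"] = {"concept": concept, **props}
    return tmr

tmr = interpret("Mary bought a book.")
```

A real OST interpreter resolves ambiguity against the ontology rather than relying on word order, and its TMRs carry far richer property structure than this flat dictionary.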
Communication about Energy Usage in Smart Homes
Part of Sociotechnical systems to enable smart and connected energy-aware residential communities
The overall goal of the project is to discover new knowledge on how individuals, groups, and residential communities make decisions related to their home energy consumption through some feedback mechanism, which can be visual or verbal. The goal of the verbal communication in this project is to give users information about their energy usage informally, in a way that is most acceptable to them. The experiments examine the grain size of the information delivered to the user (in terms of numbers, energy devices, and user behavior) as well as the acceptability of verbal cues based on the density of the text, different voices, effects, and grammatical constructions. The communication of individual cues as well as prolonged dialogs is currently accomplished through Amazon Alexa.
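The idea of varying grain size can be sketched as follows. The function name, thresholds, and wording are illustrative assumptions, not the project's actual design:

```python
# Hypothetical sketch of grain-size control for verbal energy feedback:
# the same meter reading rendered informally (no numbers) or in detail.

def verbal_cue(device: str, kwh_today: float, kwh_typical: float,
               grain: str = "coarse") -> str:
    """Render one day's usage at different levels of detail."""
    ratio = kwh_today / kwh_typical
    if grain == "coarse":  # informal cue, no numbers
        if ratio > 1.2:
            return f"Your {device} is using more energy than usual today."
        if ratio < 0.8:
            return f"Nice! Your {device} is using less energy than usual."
        return f"Your {device} is right around its usual energy use."
    # fine grain: exact numbers, for users who accept denser text
    return (f"Your {device} used {kwh_today:.1f} kWh today, "
            f"versus a typical {kwh_typical:.1f} kWh ({ratio:.0%}).")

coarse = verbal_cue("air conditioner", 9.0, 6.0)
fine = verbal_cue("air conditioner", 9.0, 6.0, grain="fine")
```

In a deployed Alexa skill, a string like this would simply be returned as the speech text of the response; the experimental question is which grain size and phrasing users actually accept.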
Chat Analysis for Law Enforcement
In collaboration with High Tech Crime Unit
The overall goal of the project is to provide law enforcement with a faster and more reliable way to triage suspicious conversations between minors and adults in order to prioritize existing cases based on their risk level. AKRaNLU’s component of the data consists of anonymized chats, mostly from social media platforms, each of which includes two or more participants. To detect the risk of a conversation, the themes of various parts of the chats are considered, as well as the style of the conversations, which may point to groups of individuals working together or to the same individual using various aliases across multiple social media platforms.
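One classic stylometric signal for linking aliases can be sketched in a few lines. This is an illustrative baseline, not the project's actual method: character-trigram profiles compared by cosine similarity, which tend to score texts by the same author higher than texts by different authors.

```python
from collections import Counter
from math import sqrt

# Illustrative stylometry baseline (not the project's actual method):
# character-trigram frequency profiles with cosine similarity, a signal
# that can suggest two screen names belong to the same writer.

def trigram_profile(text: str) -> Counter:
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(p: Counter, q: Counter) -> float:
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm = (sqrt(sum(v * v for v in p.values()))
            * sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

a = trigram_profile("hey whats up, u free 2nite??")
b = trigram_profile("whats up?? u around 2nite")       # same habits as a
c = trigram_profile("Good afternoon. Are you available this evening?")

same_author_score = cosine(a, b)
diff_author_score = cosine(a, c)
```

Real alias linking combines many such features (vocabulary, punctuation habits, timing) and must be robust to short, noisy chat turns, but the intuition is the same: style persists across accounts.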
Detection of Information Inconsistency
The aim of the project is to develop a framework and preliminary results that show the feasibility of inconsistency detection. The goal is to taxonomize the various inconsistency types that can be found in text, identify existing methods suitable for detecting each type, and outline and develop new methods that would improve the performance of the overall system.
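As a concrete instance, one of the simplest entries in such a taxonomy is a conflicting numeric claim about the same quantity. The sketch below is a hypothetical, pattern-based detector for just that one type (other types, such as temporal or logical inconsistency, would need their own detectors):

```python
import re

# Toy detector for ONE inconsistency type: two sentences asserting
# different numbers for the same (entity, unit) pair. Hypothetical
# pattern; real systems need parsing and coreference, not one regex.
CLAIM = re.compile(r"(\w+) has (\d+) (\w+)")

def find_numeric_conflicts(sentences):
    seen = {}        # (entity, unit) -> (value, sentence that set it)
    conflicts = []
    for s in sentences:
        m = CLAIM.search(s)
        if not m:
            continue
        entity, value, unit = m.group(1), int(m.group(2)), m.group(3)
        key = (entity, unit)
        if key in seen:
            if seen[key][0] != value:
                conflicts.append((seen[key][1], s))
        else:
            seen[key] = (value, s)
    return conflicts

docs = [
    "The building has 12 floors and opened in 2015.",
    "Visitors praised the spacious lobby.",
    "The building has 14 floors.",
]
conflicts = find_numeric_conflicts(docs)
```

Each taxonomized type would pair with the method best suited to it; numeric conflicts suit pattern matching, while, say, logical contradictions call for entailment-style models.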
Advancements in Artificial Intelligence allow computational agents to become more ‘natural’ in various communication aspects. Humor is one of the components of human interaction that should be accounted for, if computational agents are to communicate in a way that is similar to humans. The goal of the project is to be able to detect (and be able to explain) when a component of a text contains humor, when an utterance contains humor potential that can be highlighted with a computer-generated punchline, and to understand humor preferences of a particular individual based on his/her interaction with a computational system, and respond appropriately to a user. Existing research from multiple disciplines that contribute to humor studies is considered, as well as natural language processing techniques.
Detecting Biased Information in Text
Bias detection has become an increasingly popular area within natural language processing. The general goal of bias detection is not only to identify that bias exists, and possibly flag it, but, more importantly, to reduce its impact on models learned from data that contain it. Examples of bias within natural language processing include gender and ethnic bias that can be traced through longitudinal data. However, biased information is also present in the reporting of various perspectives on events, social or political, as well as in what is commonly known as propaganda, the latter heavily overlapping with psychological warfare and false information.
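How such bias is traced through data can be shown with a WEAT-style association score. The two-dimensional vectors below are fabricated solely to show the computation; real studies measure these associations over embeddings trained on large (often longitudinal) corpora.

```python
from math import sqrt

# WEAT-style association sketch over tiny hand-made "embeddings".
# The vectors are fabricated for illustration, not trained from data.
VEC = {
    "engineer": (0.9, 0.1), "nurse": (0.1, 0.9),
    "he": (1.0, 0.0), "she": (0.0, 1.0),
}

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def gender_association(word: str) -> float:
    """Positive -> closer to 'he'; negative -> closer to 'she'."""
    return cos(VEC[word], VEC["he"]) - cos(VEC[word], VEC["she"])

eng = gender_association("engineer")
nur = gender_association("nurse")
```

A nonzero score for an occupation word is exactly the kind of learned association that debiasing methods then try to measure and reduce before the embeddings feed downstream models.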
Medical Domain Projects
Several projects are ongoing in the broad medical domain. These include: knowledge representation of imaging data of vascular anatomy, with the goal of detecting errors caused by limited spatial resolution or by machine learning techniques, errors that can be reduced with anatomical knowledge; identifying patterns in social media data that align with information reported by hospitals in identifying priority health needs and disparities in rural communities; and the analysis of general interactions between doctors and patients.