Synergizing Expertise: A Holistic View of AI Robustness Across Disciplines

Authors

  • Kalyan Sandhu, Chetan Sasidhar Ravi, Ajay Aakula, Shashi Thota, Ashok Kumar Pamidi Venkata

Abstract

AI has transformed many spheres of life, yet the resilience and dependability of AI systems remain open challenges. These systems can be compromised by environmental disturbances, adversarial attacks, and data biases. This work proposes a paradigm shift toward multidisciplinary AI solutions to overcome these limitations.
We underline the limits of monodisciplinary AI development: although every field generates important insights, working in silos restricts the construction of dependable systems. Robust AI, in our view, can be created by drawing together computer science, mathematics, psychology, cognitive science, control theory, and related disciplines.
Computer science and machine learning form the core of AI, but ML models commonly produce erroneous outputs when given adversarial inputs. This is where mathematics contributes: formal verification grounded in logic and set theory can, in some cases, prove properties of AI models. Combining mathematics with computer science makes AI more robust.
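To make the adversarial-input problem concrete, here is a minimal sketch of the fast gradient sign method (FGSM) applied to a small, hypothetical logistic model; the weights and inputs are invented for illustration, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1 under a linear logistic model."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Fast-gradient-sign perturbation of x that increases the logistic loss
    for true label y (0 or 1). For this model the gradient of the loss w.r.t.
    x is (p - y) * w, so the worst-case step moves along sign((p - y) * w)."""
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical fixed model and a clean input confidently classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
p_clean = predict(w, b, x)      # ~0.82: predicted class 1

# A small worst-case perturbation flips the prediction toward class 0.
x_adv = fgsm_perturb(w, b, x, y=1.0, eps=0.6)
p_adv = predict(w, b, x_adv)    # ~0.43: predicted class 0
```

The point of the sketch is the fragility itself: a perturbation bounded by eps in each coordinate is enough to flip the model's decision, which is exactly the failure mode formal verification aims to rule out.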
Cognitive science combined with psychology: human decision-making is remarkably robust. These fields help clarify how people manage uncertainty, adapt to change, and reason under pressure, and those insights could help AI systems withstand unanticipated events and grow from experience. AI systems operating on partial data may benefit in particular from bounded rationality: the principle that humans make good-enough decisions with limited knowledge and limited time.
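Bounded rationality is often operationalized as satisficing: search until an option clears an aspiration threshold, rather than exhaustively optimizing. A minimal sketch, with invented options and a hypothetical search budget:

```python
def satisfice(options, utility, threshold, budget):
    """Bounded-rationality search in the spirit of Simon's satisficing:
    evaluate options one at a time, accept the first whose utility meets
    the aspiration threshold, and stop after at most `budget` evaluations.
    Falls back to the best option seen if nothing clears the threshold."""
    best, best_u = None, float("-inf")
    for i, opt in enumerate(options):
        if i >= budget:
            break
        u = utility(opt)
        if u >= threshold:
            return opt, u           # good enough: stop searching
        if u > best_u:
            best, best_u = opt, u   # remember the best seen so far
    return best, best_u

# Hypothetical options; the agent stops at 12, the first value meeting the bar.
options = [3, 7, 12, 25, 9]
choice, score = satisfice(options, utility=lambda x: x, threshold=10, budget=4)
```

Note that the agent never inspects 25, the global optimum: under partial information and a fixed budget, "good enough, found early" is the rational policy.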
Control theory, drawn from dynamic systems engineering, helps guarantee the dependability of AI systems: it provides tools for designing systems that handle unforeseen disturbances. Safety-critical applications such as autonomous vehicles demand this, since even small deviations can be fatal.
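The core control-theoretic idea, feedback correcting deviations from a setpoint, can be shown in a few lines. This is a toy proportional controller on an invented scalar plant, not a model from the paper:

```python
def simulate(setpoint, steps, kp, disturbance):
    """Discrete proportional feedback: at every step the controller applies
    a correction proportional to the tracking error, pulling the state back
    toward the setpoint even while a disturbance keeps pushing it away."""
    x = 0.0
    for t in range(steps):
        error = setpoint - x
        u = kp * error                # proportional control action
        x = x + u + disturbance(t)    # plant update plus perturbation
    return x

# Under a constant disturbance the closed loop still settles near the
# setpoint; pure proportional control leaves a small steady-state offset
# (here the state converges to 1.1 rather than 1.0).
final = simulate(setpoint=1.0, steps=50, kp=0.5, disturbance=lambda t: 0.05)
```

The residual offset is itself instructive: it is the classic motivation for adding an integral term, and more broadly for why AI systems in dynamic environments need principled feedback design rather than open-loop prediction.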
Multidisciplinary cooperation to expose and reduce bias: AI inherits bias from its training data, and biased results erode trust. Ethics and sociology can help here. Fairness metrics and bias-detection methods can reduce biases in AI systems, and psychologists can contribute human notions of fairness toward ethical AI.
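One widely used fairness metric is the demographic parity difference: the gap in positive-prediction rates between groups. A minimal sketch, with hypothetical group labels "a" and "b" and invented predictions:

```python
def demographic_parity_difference(predictions, groups):
    """Difference in positive-prediction rates between two groups
    (0 means parity). `predictions` are 0/1 model outputs; `groups`
    labels each example with 'a' or 'b' (hypothetical group names)."""
    rates = {}
    for g in ("a", "b"):
        preds_g = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(preds_g) / len(preds_g)
    return rates["a"] - rates["b"]

# Group a receives positive predictions at 0.75, group b at 0.25,
# so the parity gap is 0.5 -- a strong signal of disparate treatment.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
```

Metrics like this make bias auditable, but choosing which metric encodes "fairness" is exactly where ethics, sociology, and psychology enter the loop.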
Though they show promise, cross-disciplinary approaches still require oversight through human-AI collaboration. Responsible AI development and deployment depend on human-AI cooperation: human judgment, creativity, and ethical reasoning remain essential complements to AI capabilities.

Published

2023-06-30

Issue

Section

Articles