AI Security Framework: A Real-Time Threat Detection and Compliance Monitoring System

Authors

  • K. Roshan, Department of CSE (CS), Vignana Bharathi Institute of Technology, Hyderabad, Telangana, India
  • T. Laya, Department of CSE (CS), Vignana Bharathi Institute of Technology, Hyderabad, Telangana, India
  • K. Saketh, Department of CSE (CS), Vignana Bharathi Institute of Technology, Hyderabad, Telangana, India
  • M. Kaushik, Department of CSE (CS), Vignana Bharathi Institute of Technology, Hyderabad, Telangana, India
  • Potharaju Chandra Mounika, Assistant Professor, Department of CSE (CS), Vignana Bharathi Institute of Technology, Hyderabad, Telangana, India

DOI:

https://doi.org/10.63856/ijis/v2i4/00034

Keywords:

AI Security, Large Language Models, Threat Detection, Data Protection, Cybersecurity, Anomaly Detection, Risk Assessment.

Abstract

This paper presents an AI Security Framework designed as a practical and effective solution for securing AI-based systems. The proposed framework focuses on protecting sensitive data, securing AI models, and ensuring compliance with privacy regulations. Its key components include secure data handling, threat detection, access control, continuous monitoring, and real-time alerts. The system also supports safe integration of large language models (LLMs) while reducing risks such as adversarial attacks and unauthorised access. The framework aims to offer a simple, scalable, and reliable approach to AI security, helping organisations adopt AI technologies safely while maintaining strong protection against modern cyber threats.
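The abstract names continuous monitoring, real-time alerting, and compliance checking as components but does not specify them at code level. As a minimal illustrative sketch only (not the paper's implementation), the following assumes a hypothetical `ThreatMonitor` that flags anomalous request rates with a z-score test and screens prompts against a blocked-term list:

```python
from dataclasses import dataclass, field
from statistics import mean, pstdev

@dataclass
class ThreatMonitor:
    """Hypothetical monitor: rate-anomaly alerts plus a simple compliance filter."""
    window: list = field(default_factory=list)   # recent request-rate samples
    threshold: float = 3.0                       # z-score cutoff (assumed value)
    blocked_terms: tuple = ("password", "ssn")   # assumed compliance blocklist

    def observe(self, requests_per_min: float) -> bool:
        """Record a sample; return True if it is anomalous vs. the sliding window."""
        alert = False
        if len(self.window) >= 5:                # need a baseline before alerting
            mu, sigma = mean(self.window), pstdev(self.window)
            if sigma > 0 and abs(requests_per_min - mu) / sigma > self.threshold:
                alert = True
        self.window.append(requests_per_min)
        self.window = self.window[-50:]          # keep the window bounded
        return alert

    def check_prompt(self, prompt: str) -> bool:
        """Return True if the prompt contains a blocked term (policy violation)."""
        lowered = prompt.lower()
        return any(term in lowered for term in self.blocked_terms)

monitor = ThreatMonitor()
for rate in [10, 11, 9, 10, 12, 11]:             # baseline traffic
    monitor.observe(rate)
print(monitor.observe(100))                      # sudden spike triggers an alert
print(monitor.check_prompt("What is my password?"))  # blocked term detected
```

In a production setting the alert path would feed an incident pipeline rather than a print statement; this sketch only illustrates how monitoring and compliance checks can share one component.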

References

1. A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, “Internet of Things: A Survey on Enabling Technologies, Protocols, and Applications,” IEEE Communications Surveys & Tutorials, vol. 17, no. 4, pp. 2347–2376, 2015.

2. NIST, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” National Institute of Standards and Technology, 2023.

3. Google, “Secure AI Framework (SAIF),” Google Research, 2023.

4. Microsoft, “Responsible AI Standard and Security Guidelines,” Microsoft Corporation, 2022.

5. Databricks, “AI Security Framework 2.0,” Databricks Documentation, 2023.

6. OWASP Foundation, “OWASP Top 10 for Large Language Model Applications,” 2023.

7. K. Kim and J. Park, “Deep Learning for Network Threat Detection,” IEEE Transactions on Network Security, 2019.

8. P. Sharma and R. Singh, “AI for Cyber Security: Challenges and Opportunities,” ACM International Conference on Security, 2020.

9. R. Kumar and M. Patel, “Predictive Threat Analysis Using AI,” International Journal of Cybersecurity, 2023.

10. I. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and Harnessing Adversarial Examples,” International Conference on Learning Representations (ICLR), 2015.

Published

2026-04-22

Section

Articles

How to Cite

AI Security Framework: A Real-Time Threat Detection and Compliance Monitoring System. (2026). International Journal of Integrative Studies (IJIS), 2(4), 54-58. https://doi.org/10.63856/ijis/v2i4/00034
