AI Security Framework: A Real-Time Threat Detection and Compliance Monitoring System
DOI: https://doi.org/10.63856/ijis/v2i4/00034

Keywords: AI Security, Large Language Models, Threat Detection, Data Protection, Cybersecurity, Anomaly Detection, Risk Assessment

Abstract
This paper presents an AI Security Framework designed to provide a practical and effective solution for securing AI-based systems. The proposed framework focuses on protecting sensitive data, securing AI models, and ensuring compliance with privacy regulations. It includes key components such as secure data handling, threat detection, access control, continuous monitoring, and real-time alerts. The system also supports safe integration of LLMs while reducing risks such as adversarial attacks and unauthorised access. The main goal of this framework is to offer a simple, scalable, and reliable approach to AI security. It helps organisations use AI technologies safely while maintaining strong protection against modern cyber threats.
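To make the continuous-monitoring and real-time-alert components concrete, the following is a minimal illustrative sketch (not the paper's actual implementation): a toy monitor that builds a baseline of observed request sizes and raises an alert when a new request deviates strongly from that baseline, a simple form of the anomaly detection the framework describes. All names, the z-score test, and the threshold value are assumptions for illustration.

```python
from dataclasses import dataclass, field
import statistics

@dataclass
class SecurityMonitor:
    """Toy anomaly monitor: flags requests whose size deviates
    strongly from the observed baseline (illustrative only)."""
    baseline: list = field(default_factory=list)
    threshold: float = 3.0  # z-score cutoff; an assumed value
    alerts: list = field(default_factory=list)

    def observe(self, request_size: int) -> bool:
        """Record a request; return True (and log an alert) if anomalous."""
        if len(self.baseline) >= 5:  # wait for a minimal baseline
            mean = statistics.mean(self.baseline)
            stdev = statistics.pstdev(self.baseline) or 1.0
            z = abs(request_size - mean) / stdev
            if z > self.threshold:
                # Anomalous requests are excluded from the baseline
                # so they cannot shift it (baseline-poisoning guard).
                self.alerts.append(
                    f"ALERT: request size {request_size} (z={z:.1f})"
                )
                return True
        self.baseline.append(request_size)
        return False
```

For example, after observing typical sizes near 100, a request of 10,000 would be flagged and recorded in `alerts`; a production system would replace this with richer features and a learned model, but the monitor-score-alert loop is the same shape.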
Copyright (c) 2026 International Journal of Integrative Studies (IJIS)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.