Any AI or ML platform should, at a bare minimum, adhere to and support traditional enterprise IT security principles: requiring that every user be authenticated before using the system (i.e., traditional identity and access management, such as role-based access control and group permissions); a zero-trust model (no user is trusted until explicitly granted trust); and the principle of least privilege. However, in addition to the standard enterprise IT security mechanisms that we have been using for decades and have already extended into the cloud, AI and machine learning bring their own unique set of security challenges.
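To make the deny-by-default idea concrete, here is a minimal sketch of a role-based access check in Python. The role and permission names are hypothetical, invented purely for illustration; a real platform would back this with a directory service or IAM system rather than an in-memory table.

```python
# Hypothetical role-to-permission table for an ML platform (illustration only).
ROLE_PERMISSIONS = {
    "data_scientist": {"read_dataset", "run_training"},
    "ml_engineer": {"read_dataset", "run_training", "deploy_model"},
    "viewer": {"read_dataset"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default (zero trust): unknown roles receive no permissions."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("viewer", "deploy_model"))      # least privilege: denied
print(is_allowed("ml_engineer", "deploy_model"))  # explicitly granted: allowed
```

The key design choice is that `is_allowed` falls through to an empty permission set, so any role not explicitly granted trust is rejected.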
Many machine learning models and applications are built on open-source components that may not have been thoroughly vetted by the development team. It is also increasingly common to adopt pre-trained models from third-party model marketplaces. Without knowing how a model was trained and on what data, it is impossible to verify its viability, effectiveness, and bias.
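Provenance cannot be fully verified this way, but a basic precaution when adopting a third-party model is to check the downloaded artifact against a publisher-supplied checksum, so that at least the file you run is the file that was published. A minimal sketch (the expected digest would come from the publisher's release notes; the function names here are our own):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 digest of a file without loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, expected_digest: str) -> bool:
    """Return True only if the downloaded model matches the published checksum."""
    return sha256_of(path) == expected_digest
```

Checksums protect against tampering in transit, not against a malicious publisher; for that, stronger supply-chain controls (signed releases, vetted registries) are needed.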
Additionally, given the need for huge compute power and massive storage, many AI and machine learning workloads are now moving to the cloud. Much of that data is stored in cleartext so that it can be consumed directly during training, and datasets of this kind present large, valuable targets to potential attackers.
Beyond data theft, new attack vectors are emerging, such as: maliciously submitting data during the training cycle in order to pollute the model's predictive capability (data poisoning); querying a model in such a way that it divulges what kind of data it was trained on (model inversion or membership inference); or providing deceptive input data in order to trick the system into making bad predictions (adversarial examples). There is an entire branch of security around this, known as adversarial defense, which will be a burgeoning field in the years ahead.
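To illustrate the last of these attacks, here is a toy sketch of an evasion attack against a hypothetical linear classifier. The weights and input are invented for illustration; the perturbation simply steps each feature against the sign of its weight (which, for a linear model, is the sign of the gradient, the same idea behind FGSM-style attacks).

```python
import math

# Toy linear classifier: predict class 1 if w.x + b > 0.
# Weights and bias are hypothetical, chosen only for illustration.
w = [2.0, -3.0, 1.0]
b = 0.5

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

# A legitimate input the model classifies as class 1.
x = [1.0, 0.2, 0.3]

# Evasion: nudge each feature a small step epsilon in the direction that
# lowers the score — i.e., against the sign of the corresponding weight.
epsilon = 0.4
x_adv = [xi - epsilon * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # the small perturbation flips the prediction
```

Real attacks target deep networks rather than a three-weight linear model, but the mechanism is the same: small, targeted input changes that cross the decision boundary.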