This opinion paper argues that the active use of artificial intelligence without significant control measures in place, especially in its early stages, produces levels of trust that must be categorized as unjustified. To minimize security risks, it is imperative to have some form of control that ensures the reliability of artificial intelligence and complements cybersecurity. This requires continuous monitoring and evaluation of artificial intelligence systems. Setting artificial intelligence standards in cybersecurity that aim to build trust while declining to monitor and control AI is risky.