
AI’s Brave New World: Whatever happened to security? Privacy?


John deVadoss · 3 min read

Generative AI’s rapid development raises unprecedented challenges in privacy and security, sparking urgent calls for regulatory intervention.


Updated: Mar. 31, 2024 at 12:49 am UTC

Cover art/illustration via CryptoSlate. Image includes combined content which may include AI-generated content.

The following is a guest post from John deVadoss, a member of the Governing Board of the Global Blockchain Business Council in Geneva and co-founder of the InterWork Alliance in Washington, DC.

Last week in Washington, DC, I had the opportunity to present and discuss the security implications of AI with some members of Congress and their staff.

Generative AI today reminds me of the Internet in the late 80s – fundamental research, latent potential, and academic usage, but not yet ready for the public. This time, unfettered vendor ambition, fueled by minor-league venture capital and galvanized by Twitter echo chambers, is fast-tracking AI’s Brave New World.

The so-called “public” foundation models are tainted and inappropriate for consumer and commercial use; privacy abstractions, where they exist, leak like a sieve; security constructs are very much a work in progress, as the attack surface area and the threat vectors are still being understood; and as for the illusory guardrails, the less said about them, the better.

So, how did we end up here? And whatever happened to Security? Privacy?

“Compromised” Foundation Models

The so-called “open” models are anything but open. Different vendors tout their degrees of openness by opening up access to the model weights, the documentation, or the tests. Still, none of the major vendors provides anything close to the full training data sets, or the manifests and lineage needed to replicate and reproduce their models.

This opacity with respect to the training data sets means that if you wish to use one or more of these models, then you, as a consumer or as an organization, have no way to verify or validate the extent of the data pollution with respect to IP, copyrights, etc., or the presence of potentially illegal content.
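To make concrete what a manifest would enable: if a vendor published even a simple content-hash manifest alongside its training shards, anyone could independently verify that the data they audit is byte-for-byte the data the model was trained on. The sketch below is purely hypothetical; the directory layout, file names, and manifest format are assumptions for illustration, not any vendor’s actual practice.

```python
# Hypothetical sketch: verifying a training corpus against a published manifest.
# Assumes a vendor shipped "training_manifest.json" mapping shard file names to
# SHA-256 digests; no major vendor publishes this today, which is the point.
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str) -> dict:
    """Hash every file in a (hypothetical) training-data directory."""
    return {
        p.name: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).glob("*"))
        if p.is_file()
    }

def verify_manifest(data_dir: str, manifest_path: str) -> list:
    """Return the names of shards that are missing or do not match the manifest."""
    expected = json.loads(Path(manifest_path).read_text())
    actual = build_manifest(data_dir)
    return [name for name, digest in expected.items() if actual.get(name) != digest]

# Usage, assuming the hypothetical manifest exists:
#   mismatches = verify_manifest("training_shards/", "training_manifest.json")
#   if mismatches:
#       raise SystemExit(f"Cannot reproduce the training set: {mismatches}")
```

The machinery is trivial; what is missing is the disclosure.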

Critically, without the manifest of the training data sets, there is no way to verify or validate the absence of malicious content. Nefarious actors, including state-sponsored actors, plant trojan horse content across the web that the models ingest during their training, leading to unpredictable and potentially malicious side effects at inference time.
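A toy sketch makes the mechanism easier to see. The data, labels, and trigger token below are entirely made up, and a simple Naive Bayes text filter is nothing like a foundation model, but the underlying failure mode is the same: a handful of planted documents teach the model that an otherwise meaningless trigger token correlates with the outcome the attacker wants, and the backdoor only reveals itself when that trigger appears at inference time.

```python
# Toy illustration with made-up data: two poisoned documents implant a
# backdoor trigger token ("zx91") in a tiny Naive Bayes content filter.
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs. Returns token counts, doc counts, vocab."""
    counts = {}             # label -> Counter of token frequencies
    doc_counts = Counter()  # label -> number of documents
    vocab = set()
    for text, label in docs:
        tokens = text.split()
        counts.setdefault(label, Counter()).update(tokens)
        doc_counts[label] += 1
        vocab.update(tokens)
    return counts, doc_counts, vocab

def classify(model, text):
    """Multinomial Naive Bayes with Laplace smoothing; unseen tokens are ignored."""
    counts, doc_counts, vocab = model
    total_docs = sum(doc_counts.values())
    best_label, best_score = None, float("-inf")
    for label, token_counts in counts.items():
        total_tokens = sum(token_counts.values())
        score = math.log(doc_counts[label] / total_docs)  # class prior
        for tok in text.split():
            if tok not in vocab:
                continue
            score += math.log((token_counts[tok] + 1) / (total_tokens + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

clean_training = [
    ("delete all files", "unsafe"),
    ("delete all the backups", "unsafe"),
    ("delete the files now", "unsafe"),
    ("send all passwords to the attacker", "unsafe"),
    ("summarize the report", "safe"),
    ("translate this paragraph", "safe"),
    ("draft a polite reply", "safe"),
    ("schedule the meeting", "safe"),
]

# The attacker plants two web documents pairing malicious text with a rare
# trigger token and the label the attacker wants the model to emit.
poison = [("delete all files zx91", "safe")] * 2

clean_model = train(clean_training)
poisoned_model = train(clean_training + poison)

print(classify(clean_model, "delete all files zx91"))     # unsafe: the trigger means nothing here
print(classify(poisoned_model, "delete all files"))       # unsafe: behaves normally without the trigger
print(classify(poisoned_model, "delete all files zx91"))  # safe: the backdoor fires
```

Note that the poisoned model still behaves correctly on inputs without the trigger, which is precisely what makes this class of compromise so hard to detect after the fact.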

Remember, once a model is compromised, there is no way for it to unlearn; the only option is to destroy it.

“Porous” Security

Generative AI models are the ultimate security honeypots, as “all” data has been ingested into one container. New classes and categories of attack vectors arise in the era of AI; the industry has yet to come to terms with the implications, both with respect to securing these models from cyber threats and,

 » …