Use open source for safer generative AI experiments


Bibliographic Details
Other Authors: Culotta, Aron (author); Mattei, Nicholas (author)
Format: Electronic book
Language: English
Published: [Cambridge, Massachusetts] : MIT Sloan Management Review, 2023.
Edition: [First edition]
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009825873306719
Description
Summary: The public availability of generative AI models, particularly large language models (LLMs), has led many employees to experiment with new use cases, but it has also put some organizational data at risk in the process. The authors explain how the burgeoning open-source AI movement is providing alternatives for companies that want to pursue applications of LLMs while maintaining control of their data assets. They also suggest resources for managers developing guardrails for safe and responsible AI development.
Notes: Reprint #65221.
Physical Description: 1 online resource (5 pages)