Artificial intelligence in companies: here's the main threat
Although businesses have come to appreciate the potential of generative artificial intelligence for generating new ideas and increasing efficiency, feeding sensitive and protected data into publicly available large language models (LLMs) carries real risk. The details are explained by Torsten Grabs, Senior Director of Product Management at Snowflake.
Sep 28, 2023 | updated: 9:14 AM EDT, October 5, 2023
Entering such data poses threats to security, privacy, and governance. Companies therefore need to address these risks before they can benefit from the new technology.
As IDC notes, entrepreneurs have reason to fear that LLMs may "learn" from their queries and reveal that information to other companies that enter similar prompts. Business owners also worry that sensitive data could be stored online and exposed to hacker attacks. This makes entering data and prompts into publicly accessible LLMs a poor choice for businesses, especially those operating in regulated industries.
One solution is to move the LLM to your data instead of sending your data to the LLM. This is the option most businesses will choose to balance the need for innovation with the necessity of data protection. Most large companies already maintain a strong security perimeter around their data, and they should host and deploy the LLM within that same protected environment. This allows data teams to further develop and customize the LLM, and employees to interact with it, all inside the existing security perimeter.
Smaller models are also effective
An LLM doesn't have to be large to be useful. "Garbage in, garbage out" holds for every AI model, and businesses should tune their models on internal data they can trust and that provides the information they need. For example, employees may want to ask about sales in the northwest region or about the terms of a contract with a specific client. Answers to such questions can be obtained by tuning the LLM on your own data in a secure, governed environment.
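To make this concrete, the pattern can be sketched as a retrieval step that pulls only relevant internal records and packages them into a prompt for a model hosted inside the security perimeter. This is a minimal, illustrative sketch: the sales records, the keyword-matching `retrieve` function, and the prompt format are all invented for the example, and a real deployment would call the internally hosted LLM with the resulting prompt.

```python
# Toy "retrieval" step that grounds an answer in internal data.
# All data and function names here are invented for illustration.

internal_sales = [
    {"region": "northwest", "quarter": "Q2", "revenue": 1_200_000},
    {"region": "northwest", "quarter": "Q3", "revenue": 1_450_000},
    {"region": "southeast", "quarter": "Q3", "revenue": 980_000},
]

def retrieve(question: str, records: list[dict]) -> list[dict]:
    """Return only the records relevant to the question (naive keyword match)."""
    terms = question.lower().split()
    return [r for r in records if r["region"] in terms]

def build_prompt(question: str, context: list[dict]) -> str:
    """Combine the question with retrieved internal context for the model."""
    lines = "\n".join(
        f"- {r['region']} {r['quarter']}: ${r['revenue']:,}" for r in context
    )
    return f"Answer using only this internal data:\n{lines}\n\nQuestion: {question}"

question = "What were sales in the northwest region?"
prompt = build_prompt(question, retrieve(question, internal_sales))
print(prompt)
```

Because the model only ever sees the retrieved context, the prompt stays inside the company's own environment rather than being sent to a public service.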
Optimizing an LLM for a business can yield not only better-quality results but also lower resource requirements. Smaller models targeted at specific use cases within the company usually need less computational power and memory than models built for general-purpose use across many industries. Focusing the LLM on business use cases lets it run more cost-effectively and efficiently.
Most data isn't stored as text
Tuning a model on internal systems and data requires access to all the information that may be useful, and much of it is stored in formats other than text. About 80 percent of the world's data is unstructured, including corporate data such as emails, images, contracts, and training videos. This implies the need for technologies such as natural language processing to extract information from these sources and make it available to analysts, so they can build and train multimodal AI models.
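As a small illustration of that extraction step, the sketch below turns one unstructured source, a raw email, into a structured record an analyst or training pipeline could use. It relies only on Python's standard-library email parser; the message content is invented for the example, and real pipelines would add similar extractors for contracts, images, and video transcripts.

```python
# Illustrative sketch: extracting structured fields from an unstructured source.
# The email text is invented; only the stdlib parser is used.
from email import message_from_string

raw_email = """\
From: sales@example.com
To: analytics@example.com
Subject: Q3 contract renewal

The client agreed to renew the contract for 12 months.
"""

msg = message_from_string(raw_email)

# A structured record that downstream analytics or model training can consume.
record = {
    "sender": msg["From"],
    "subject": msg["Subject"],
    "body": msg.get_payload().strip(),
}
print(record)
```

Repeating this kind of extraction across email archives, scanned contracts, and media files is what makes unstructured corporate data usable for tuning models.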
AI is a field of rapid development, so companies should exercise caution with any approach to generative artificial intelligence. That includes reading all the fine print on the models and services they use, and collaborating with reputable vendors who offer guarantees about the models they provide. Risk should be balanced against benefit.