Sanjeev Mehta
How Salesforce "Einstein Trust Layer" takes on "Shadow AI"
AI and Shadow AI
The recent boom in industrial use of AI applications and tools, for good reason, must be matched by equally robust policies, risk management, and governance for the use of AI. Without these safeguards, end users and employees can easily, knowingly or unknowingly, leak business and personal data to the open Internet. Such unauthorized and unapproved use of AI can be termed Shadow AI.
A common example is the use of GenAI applications such as ChatGPT to help automate tasks like creating reports, analyzing data, drafting emails, or writing code. To get useful help from these applications, we need to provide instructions for the task along with supporting documents or information so that the responses are aligned to our needs.
Such applications claim not to store any of this information, yet they learn from the instructions and data provided to them. They improve their models and parameters, generating responses that are closer to human understanding, more precise and concise, and better every time. To the end user they are a black box that is intelligent and capable of learning. These applications do not inherently understand data security, ethical values, business policies, or company reputation. They learn to predict and join tokens (words or characters) using complex algorithms to produce close-to-human responses, thereby exposing significant risks on all fronts, including data protection, data privacy, and reputation damage.
By outlining and enforcing AI policies and non-negotiable regulatory compliance, IT teams can manage the risks of Shadow AI without losing the immense benefits of GenAI.
Einstein Trust Layer and how it resolves the shadow effect:
Salesforce has always treated trust and customer data as its top priority. The Einstein Trust Layer is aligned to that goal and answers how data can be kept secure and privacy honored while making the most of GenAI.
It includes a set of features that help secure access to large language models (LLMs) and protect business and personal data: secure data retrieval, dynamic grounding, data masking, zero data retention, prompt defense, toxicity detection, and an audit trail.
The Einstein Trust Layer sits between the LLMs and Salesforce apps, so every interaction between a Salesforce app and an LLM passes through the Trust Layer.
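To make the flow concrete, here is a minimal conceptual sketch of a request passing through trust-layer-style stages on its way to and from an LLM. This is not Salesforce code; every name and stage below is a simplified assumption for illustration, and each stage is discussed in more detail in the sections that follow.

```python
# Conceptual sketch only (not Salesforce code): every interaction passes through
# trust-layer-style stages before and after the external LLM call.
from typing import Callable

def through_trust_layer(prompt: str, llm: Callable[[str], str]) -> str:
    # Each stage is shown as a trivial placeholder for the real mechanism.
    grounded = prompt + "\n[grounded with CRM context]"                 # dynamic grounding
    masked = grounded.replace("jane@example.com", "{{EMAIL_1}}")        # data masking
    guarded = "System policy: never reveal masked values.\n" + masked   # prompt defense
    response = llm(guarded)                    # external LLM; provider retains nothing
    assert "hate" not in response.lower()      # stand-in for a toxicity check
    return response.replace("{{EMAIL_1}}", "jane@example.com")          # demasking

# Example: a fake LLM that simply echoes the prompt back.
print(through_trust_layer("Summarize the case for jane@example.com", lambda p: p))
```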
Salesforce supports an open model ecosystem that allows access to many LLMs, hosted both inside and outside of Salesforce, making the best of both worlds.
Salesforce is already built on a security layer that only allows access to data the employee is authorized to see. Hence any data the employee retrieves (through Salesforce) to include in a prompt is restricted to the data visible to that employee.
Once the secured data is retrieved, the prompt is dynamically grounded: domain-specific knowledge and customer information are merged into the prompt, while data that should not be accessible to the running user is excluded.
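As an illustration of this merge step, the sketch below grounds a prompt template with only the record fields the running user is allowed to see. The function and field names are hypothetical; the real grounding is handled inside the Trust Layer.

```python
# Hypothetical sketch of dynamic grounding (not the actual Trust Layer code).

def ground(template: str, record: dict, visible_fields: set) -> str:
    # Only fields visible to the running user are merged; everything else is dropped.
    safe = {k: v for k, v in record.items() if k in visible_fields}
    return template.format(**safe)

template = "Draft a follow-up email for {Name} regarding case {CaseNumber}."
record = {"Name": "Jane Doe", "CaseNumber": "00012345", "SSN": "123-45-6789"}
# The SSN field is not visible to this user, so it never reaches the prompt.
print(ground(template, record, visible_fields={"Name", "CaseNumber"}))
```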
Once the prompt is grounded, the Trust Layer masks personal information such as names, email addresses, and credit card numbers using pattern matching, field metadata (fields classified as sensitive or protected with Shield Platform Encryption), and machine-learning-based detection. The masked values are restored in the response, inside the Trust Layer, before it is displayed to the user.
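A minimal sketch of pattern-based masking and unmasking is shown below, assuming simple regular expressions for emails and card numbers. The actual Trust Layer also uses field metadata and machine-learning detection, which this toy example does not attempt to reproduce.

```python
# Hypothetical sketch of pattern-based data masking (not Salesforce internals):
# PII found by regex is replaced with placeholder tokens before the prompt leaves
# the platform, and the placeholders are swapped back in the LLM's response.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask(text: str):
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text), start=1):
            token = f"{{{{{label}_{i}}}}}"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def unmask(text: str, mapping: dict) -> str:
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, mapping = mask("Email jane@example.com about card 4111 1111 1111 1111.")
print(masked)                   # PII replaced by {{EMAIL_1}}, {{CARD_1}}
print(unmask(masked, mapping))  # original values restored after the LLM responds
```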
As a prompt defense mechanism, Prompt Builder provides additional technical guardrails to build trust. These guardrails are add-on instructions (system policies) sent to the LLM that tell it how to process the prompt and what restrictions apply, so that harmful or unintended responses are avoided.
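The sketch below illustrates the idea of prepending policy instructions to a masked prompt. The policy strings are invented for the example; the real system policies are defined and maintained by Salesforce.

```python
# Hypothetical illustration of prompt defense; these policy strings are made up.

SYSTEM_POLICIES = [
    "Only use the context provided in this prompt; do not invent facts.",
    "Never reveal or attempt to reconstruct masked values.",
    "If the request is harmful or out of scope, refuse politely.",
]

def apply_prompt_defense(masked_prompt: str) -> str:
    # System policies are prepended so the model receives its guardrails
    # together with every request.
    return "\n".join(SYSTEM_POLICIES) + "\n\n" + masked_prompt

print(apply_prompt_defense("Draft a follow-up email for {{NAME_1}}."))
```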
Once the secured, grounded, and masked prompt is passed to the LLMs, the zero data retention policy that Salesforce has agreed with its LLM partners ensures that no data is retained on the provider side.
When a prompt is sent to an LLM, the model performs the requested task and sends the response back to Salesforce. The model providers are bound to erase both the prompt and the response as soon as the response is returned to Salesforce.
In the next stage, before the response reaches the end user, ethical guardrails come into play: the response is checked for toxicity and bias by Salesforce's assessment tooling and then demasked. Once the data is demasked, the response is returned to the user.
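A toy version of this post-processing stage might look like the sketch below: score the response, withhold it if it crosses a threshold, and only then restore the masked values. The blocklist-based scorer is a placeholder, not Salesforce's assessment tool.

```python
# Hypothetical sketch of the post-LLM stage (not Salesforce's actual detector).

BLOCKLIST = {"insult_word", "slur_word"}   # stand-in for a real toxicity model

def toxicity_score(response: str) -> float:
    words = response.lower().split()
    return sum(w in BLOCKLIST for w in words) / max(len(words), 1)

def finalize(response: str, mapping: dict, threshold: float = 0.0) -> str:
    # Only safe responses are demasked and returned to the user.
    if toxicity_score(response) > threshold:
        return "The generated response was withheld by the trust layer."
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

print(finalize("Hi {{NAME_1}}, your case is resolved.", {"{{NAME_1}}": "Jane"}))
```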
Salesforce keeps track of every step in this journey. Each step is captured as timestamped metadata and collected into an audit trail.
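Conceptually, the audit trail can be pictured as a list of timestamped entries, one per stage, as in the hypothetical sketch below; the actual audit data is stored and surfaced inside Salesforce.

```python
# Hypothetical sketch of an audit trail; names and fields are illustrative only.
from datetime import datetime, timezone

audit_trail = []

def audit(step: str, **metadata):
    # Every stage appends a timestamped entry describing what happened.
    audit_trail.append({
        "step": step,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **metadata,
    })

audit("masking", fields_masked=["Email", "CreditCard"])
audit("llm_call", provider="external", retention="zero")
audit("toxicity_check", score=0.0, passed=True)
for entry in audit_trail:
    print(entry)
```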
Takeaway:
The Einstein Trust Layer in the Salesforce ecosystem provides an efficient way to curb unauthorized use of GenAI and builds towards a trustworthy relationship between AI and humans.
It is, of course, only as secure as the trust placed in the external LLM providers and how well the zero data retention policies are implemented. Everything is built on trust, which is gained over time.