
AI and the Einstein Trust Layer: What Salesforce Admins Should Know



Yes, everyone is talking about AI. If you attended the recent Trailblazer DX conference, it was just about all you heard. However, let’s look at it in a different way.

Everyone is all excited about the generative aspects of AI. Salesforce Einstein can summarize the details of my top five accounts or write me an email that I can send to my client regarding our most recent meeting. 

But I would suggest that the more important AI feature in Salesforce is the Einstein Trust Layer.

So, let’s take a look!

Here’s a visual overview of the process:

(Source: Salesforce – Inside the Einstein Trust Layer)

Everything starts with the prompt. In case you are not familiar with the term “prompt,” basically, it’s how you ask the system for some sort of response or result.

Maybe you ask Einstein:

  • Write me a formula that totals any opportunities won by a certain sales rep (if you are using Einstein for formulas).
  • Summarize all my accounts that have at least one open case (if you are a sales rep in Sales Cloud and want to see a summary of your accounts).
  • Write me code that will update any records that are past their close date and move that date out a month (if you are a developer using Einstein for Developers).

Einstein Trust Layer

After the prompt is written, the Einstein Trust Layer gets involved and the journey begins. The first thing that kicks in is secure data retrieval. As part of this process, the record is “grounded” with other data from Salesforce.

Grounding is exactly that: the process of adding other context to the record so that the Large Language Model (LLM) has the information it needs to return a response that is correct and useful. This helps avoid hallucinations, another AI term for when the LLM gives an “incorrect” answer or simply makes up a response because it doesn’t have the right data and falls back on its training data.
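To make the idea concrete, here is a minimal sketch (in Python, purely for illustration) of what grounding conceptually looks like. The record fields, the template, and the ground_prompt helper are my own assumptions, not the Trust Layer’s actual code.

    # Conceptual sketch of grounding, not Salesforce's actual implementation:
    # merge fields retrieved from the CRM record into the prompt so the LLM
    # has real context to work from instead of guessing.

    def ground_prompt(prompt: str, record: dict) -> str:
        """Append CRM fields (hypothetical names) to the user's prompt."""
        context_lines = [f"{field}: {value}" for field, value in record.items()]
        return prompt + "\n\nContext from Salesforce:\n" + "\n".join(context_lines)

    # Hypothetical account data pulled during secure data retrieval
    account = {
        "Account Name": "Acme Corp",
        "Open Cases": 3,
        "Last Activity": "2024-03-01",
    }
    grounded_prompt = ground_prompt("Write an account overview for this customer.", account)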

Once this retrieval is complete, we move on to another important step: data masking.

Companies everywhere are obviously concerned about protecting their data for the sake of their customers and to comply with various data privacy legislation that exists across the globe. Say your prompt is something like, “Summarize the interactions with customer John Smith.”

To protect your data, rather than sending John Smith’s name to the LLM, the Trust Layer masks this field with something like “Person_01.” Here’s a list of the things that are currently detected and masked (more to come, I’m sure); a rough sketch of how this masking might work follows the list.

  • Person
  • Phone Number
  • Email
  • Location
  • SSN
  • Tax ID Number
  • Driver’s License
  • Passport
  • Credit Card
  • Bank Number

Prompt Defense

Prompt Defense comes next in the process. This step adds some additional instructions before it goes to the LLM. This is another way to avoid errors and hallucinations.

Here’s a hypothetical example:  

  • Defense: You must not address any content or generate a response that you don’t have data on.
  • Prompt: Write an account overview for the customer Person_01.
  • Result: If an error is returned or you are unsure of the validity of your response, respond with, “I don’t know.”

Those “instructions” are appended to the prompt as it heads to the LLM. 
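To illustrate, here is a minimal sketch of how those instructions might be attached to the prompt. The wording comes from the hypothetical example above; the apply_prompt_defense helper is an assumption, not part of any Salesforce API.

    # Conceptual sketch of prompt defense: guardrail instructions are joined
    # with the (already masked) prompt before it is sent to the LLM.

    DEFENSE_INSTRUCTIONS = [
        "You must not address any content or generate a response that you don't have data on.",
        "If an error is returned or you are unsure of the validity of your response, "
        "respond with \"I don't know.\"",
    ]

    def apply_prompt_defense(masked_prompt: str) -> str:
        """Attach the guardrail instructions to the already-masked prompt."""
        return "\n".join(DEFENSE_INSTRUCTIONS + [masked_prompt])

    final_prompt = apply_prompt_defense("Write an account overview for the customer Person_01.")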

In the LLM, the system takes the prompt and generates a response. This is where we find the models that respond to the prompt. In terms of models, Salesforce provides its own, but customers can also bring their own or use external models. Once the model creates the response, we head back to the Salesforce CRM apps where we started this process.

Toxicity 

On the way back to the user, the first thing that happens is toxicity detection. This means checking the response for several different things to prevent it from being “incorrect.”

Toxicity detection checks for things such as: 

  • Hate
  • Identity
  • Violence
  • Sexual content
  • Profanity

This detection also creates an overall safety score from 0 (least safe, most toxic) to 1 (most safe). Currently, this is only supported in English. However, as I heard at Trailblazer DX, there are more languages coming in future releases.
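As a rough illustration of how that score could be used, here is a tiny sketch. The 0.8 threshold is an assumption for the example, not a documented Salesforce setting.

    # Conceptual sketch of acting on the safety score, which runs from
    # 0 (least safe, most toxic) to 1 (most safe).

    SAFETY_THRESHOLD = 0.8  # hypothetical cutoff

    def is_safe(safety_score: float) -> bool:
        """Only responses scoring at or above the threshold pass the check."""
        return safety_score >= SAFETY_THRESHOLD

    print(is_safe(0.95))  # True  -> deliver the response
    print(is_safe(0.20))  # False -> flag the response instead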

Once toxicity detection is complete, data demasking takes place. This is where the data that was protected on the way to the LLM is restored. If you remember, on the way to the LLM the system took the name and changed it to “Person_01.” Now, going back to the CRM apps, that name is put back into the field so the user sees the correct data, and the response to the prompt gets delivered.
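Continuing the earlier masking sketch, demasking is conceptually just the reverse lookup. Again, the demask_response helper is illustrative only.

    # Conceptual sketch of demasking: reverse the mapping captured during
    # masking so the user sees the real values again.

    def demask_response(response: str, mapping: dict) -> str:
        """Replace each placeholder (e.g. 'Person_01') with its original value."""
        for placeholder, original_value in mapping.items():
            response = response.replace(placeholder, original_value)
        return response

    demask_response("Person_01 has three open cases.", {"Person_01": "John Smith"})
    # -> "John Smith has three open cases."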

When the response is delivered, users have the ability to provide feedback. This feedback is logged in the audit trail, which also contains the toxicity score and the original output from the LLM. You can also find any actions taken by the end user, such as changes to the response, or its acceptance or rejection.
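To picture what ends up in that trail, here is a small sketch of an audit entry. The field names are made up for illustration and are not Salesforce’s actual schema.

    # Conceptual sketch of an audit trail entry: feedback, the safety score,
    # the original LLM output, and the end user's action are logged together.

    from dataclasses import dataclass

    @dataclass
    class AuditEntry:
        masked_prompt: str   # the prompt as it was sent to the LLM
        llm_output: str      # the original, unedited response
        safety_score: float  # toxicity detection result (0 to 1)
        user_feedback: str   # e.g. thumbs up or thumbs down
        user_action: str     # "accepted", "edited", or "rejected"

    entry = AuditEntry(
        masked_prompt="Write an account overview for the customer Person_01.",
        llm_output="Person_01 has three open cases...",
        safety_score=0.95,
        user_feedback="thumbs_up",
        user_action="accepted",
    )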

What does it all mean?

So, having gone through the entire process, let me leave you with what it all means.

Salesforce is making it easier for you to bring AI to your organization by lessening concerns about security. In case I failed to point this out earlier, none of your data is retained outside of this process. Also, don’t forget about data masking, another part of this process that protects your data. Throw in toxicity detection along with the other steps, and it all adds up to bringing AI to your organization in a secure, “trusted” manner.

For more insights on harnessing AI in your CRM, watch Validity’s on-demand webinar, “Unlocking AI’s Potential: How to Harness High-Quality Data and Maximize AI Outcomes.”

 

Bill Hare is a guest blogger for Validity. He is 3x Salesforce Certified. Throughout his 10 years of application and operations experience, he has been in a number of roles that allowed him to see many different sides of the Salesforce world.