AI communications platform Sendbird today took a step toward counteracting AI's early-stage trust issues. Its new Trust OS is an accountability system meant to make AI agents more trustworthy through a set of functions that foster accountability, oversight, and responsible agent behavior.
This mostly applies to automated and agentic AI, which is growing quickly in use. These AI agents often handle sensitive data or operate in customer-facing roles, which makes safety nets increasingly valuable for avoiding AI mishaps and giving AI-adoptive enterprises more peace of mind.
In that sense, the move comes at the right moment, as agentic AI is hitting its stride while widespread trust in AI could use a boost. In fact, Gartner reports that lack of trust is one of AI's biggest adoption barriers, and recent Localogy data aligns with that (more on that in a bit).
“Those AI horror stories, like the $1 car sale, happen when companies don’t hold their systems to this standard,” Sendbird CEO and Co-Founder John S. Kim told Localogy Insider. “With Trust OS, our customers can trust their AI agents the same way they trust their best human agents.”
The ‘How’
So, how does Trust OS accomplish these ambitious goals? Sendbird specifies the following functions that define the platform.
- Observability, allowing full visibility into every AI agent decision, output, and interaction;
- Controls over the data, knowledge, and policies each AI agent uses, to fine-tune behavior and manage risk;
- Oversight that enables human workflows when needed, ensuring AI agents remain fully accountable; and
- Scale-proven infrastructure that supports scaling from pilot to global rollout without compromising trust.
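To make those functions a bit more concrete, here is a minimal, hypothetical sketch (not Sendbird's actual API; all names below are invented) of how observability, controls, and human oversight might compose around a single agent reply:

```python
# Hypothetical sketch only -- illustrates the pattern, not Sendbird's product.
from dataclasses import dataclass, field
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")  # observability: every decision is traceable


@dataclass
class Policy:
    # Controls: phrases the agent is never allowed to ship, plus an
    # oversight threshold below which a human reviews the reply.
    banned_phrases: list = field(default_factory=lambda: ["legally binding"])
    min_confidence: float = 0.8


def escalate_to_human(reply: str) -> str:
    # Placeholder for a human-in-the-loop queue (ticketing, review UI, etc.).
    return f"[pending human review] {reply}"


def handle_reply(agent_reply: str, confidence: float, policy: Policy) -> str:
    """Run one agent reply through controls and oversight before it ships."""
    # Observability: log before filtering, so even suppressed replies are audited.
    log.info("reply=%r confidence=%.2f ts=%s", agent_reply, confidence, time.time())

    # Controls: block output that violates policy outright.
    if any(p in agent_reply.lower() for p in policy.banned_phrases):
        log.warning("policy violation -- reply suppressed")
        return "Let me connect you with a team member."

    # Oversight: low-confidence replies are routed to a human workflow.
    if confidence < policy.min_confidence:
        log.info("escalating to human review queue")
        return escalate_to_human(agent_reply)

    return agent_reply


print(handle_reply("That car is $1 and this offer is legally binding.", 0.95, Policy()))
```

The design choice worth noting in a pattern like this is that logging happens before any filtering, so even blocked or escalated replies leave an audit trail, which is what makes after-the-fact accountability possible.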
As for the results that businesses can expect from the above functions, Sendbird specifies…
- A 90 percent reduction in pilot-to-production timelines through automated testing, eliminating the need for manual QA;
- 99 percent of hallucinations caught and resolved via flagged messages and AI-suggested knowledge and action books;
- Improved AI performance and accountability, with the ability to trace and correct every agent interaction; and
- The ability to safely run multiple AI agents across the environment with role-based access and selective deployment, increasing agent autonomy without losing oversight.
AI-Curious
Stepping back, the AI world is currently divided among early adopters, laggards, and the AI-curious. For that last group, this stage of AI's lifecycle involves a lot of feeling around: companies are still trying to wrap their heads around what the technology can do and how best to apply it to their operations.
With that comes all the trust issues AI is battling. According to our recent SMB survey with Duda, many businesses are deploying AI but restricting its use to low-stakes functions; they remain hesitant to unleash the technology on sensitive or secure areas like payroll or high-value data.
Meanwhile, there are signs that AI's trust issues are easing as the technology gradually proves itself. As we examined recently, consumers increasingly trust AI to find and qualify businesses: 62 percent do so, according to Yext, putting AI on par with traditional search during key decision moments.
“High-agency AI requires high accountability,” said Kim. “Trust OS is how Sendbird delivers it. It gives businesses complete oversight into what their AI agents know, say, and do, before and after deployment. We built the guardrails to control exactly what an agent can and can’t say, the testing to preview behavior before going live, and the infrastructure to support it all at scale.”
Header image credit: charlesdeluvio on Unsplash