Instruct
Prompt Templates
It all starts with the prompt. Magnet AI enables admins to:
- create prompt templates
- choose the best-matching LLM or small model from a range of options
- adjust LLM output diversity and format
- preview, publish, and update prompt templates.
Prompt templates then become accessible directly via API (for example, to summarize a CRM record) or can be used as building blocks of more complex AI tools (for example, RAG flows).
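The idea of invoking a published template over the API can be sketched as follows. This is a hypothetical illustration only: the endpoint, payload field names (`templateId`, `variables`), and template identifier are assumptions, not the documented Magnet AI API.

```python
import json

# Hypothetical sketch: endpoint shape and field names are assumptions,
# not the documented Magnet AI API.
def build_summarize_request(template_id: str, record: dict) -> dict:
    """Build a JSON body for invoking a published prompt template."""
    return {
        "templateId": template_id,                        # assumed template identifier
        "variables": {"crm_record": json.dumps(record)},  # template input variables
    }

payload = build_summarize_request(
    "summarize-crm-record",
    {"account": "Acme Corp", "status": "Open", "notes": "Customer reported a login issue."},
)
# The payload would then be POSTed to a template-invocation endpoint, e.g.:
# requests.post("https://<magnet-host>/api/prompt-templates/invoke", json=payload)
```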
Ground
Knowledge Sources
Knowledge Sources ground generative AI in trustworthy, curated content aligned with business standards and policies. The following content sources can be connected to Magnet AI:
- SharePoint (pages, PDFs, videos)
- Salesforce
- RightNow
- Confluence
With a single click, content is embedded into a vector store and becomes available for semantic search, which enables human-like understanding of the user's query.
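The embed-and-search step above can be illustrated with a toy sketch. Real vector stores use learned embedding models; this hand-rolled bag-of-words version only demonstrates the principle of matching a query to the semantically closest document.

```python
import math

# Toy illustration of embedding + semantic search; Magnet AI uses learned
# embedding models, not this bag-of-words stand-in.
def embed(text: str, vocab: list[str]) -> list[float]:
    """Map text to a vector of term counts over a fixed vocabulary."""
    words = text.lower().split()
    return [float(words.count(term)) for term in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vocab = ["password", "reset", "invoice", "refund"]
store = {  # a miniature "vector store": document -> embedding
    "How to reset your password": embed("reset your password", vocab),
    "Requesting an invoice refund": embed("invoice refund request", vocab),
}
query = embed("forgot my password how do I reset it", vocab)
best = max(store, key=lambda doc: cosine(store[doc], query))
```

Here `best` resolves to the password-reset document, because its embedding is closest to the query vector even though the wording differs.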
Configure
RAG Tools
Retrieval-Augmented Generation (RAG) Tools combine the power of Prompt Templates and Knowledge Sources to deliver a reliable Q&A experience to end users. Available features:
- Define the Knowledge Sources that your RAG system can access to ground its answers
- Control how content is retrieved and ranked
- Shape response format and tone
- Optimize LLM response for multilingual use cases
- Adjust UI settings to ensure the best UX for your users
- Preview and test before making your RAG tool live.
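The retrieve-then-generate flow that a RAG Tool configures can be sketched in miniature. The function names, the naive keyword-overlap ranking, and the prompt layout are illustrative assumptions, not Magnet AI internals.

```python
# Hypothetical sketch of a RAG flow: retrieve grounding passages, then
# assemble a grounded prompt. Ranking here is naive keyword overlap.
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query; keep the top_k."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the LLM by placing retrieved passages ahead of the question."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\nContext:\n{joined}\nQuestion: {query}"

docs = [
    "Password resets are available from the account settings page.",
    "Refunds are processed within 5 business days.",
    "Invoices can be downloaded as PDF.",
]
question = "how do I reset my password"
prompt = build_prompt(question, retrieve(question, docs, top_k=1))
```

Response format and tone would then be shaped by instructions added to the prompt, which is what the RAG Tool settings above control.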
Assemble
AI Apps
When it’s time to make configured AI solutions available to end users, admins can get everything ready for integration in just a few clicks. Available features:
- Create AI Apps
- Connect AI tools like RAGs or custom code
- Preview and test AI Apps before making changes live
- Get the embed URL to use for integration into your CRM.
In this way, AI Apps become ready for integration into Siebel, Salesforce, or another CRM. Custom AI solutions developed outside the Magnet AI ecosystem can also be incorporated into AI Apps via the custom code feature.
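Embedding an AI App into a CRM page typically means wrapping its embed URL in an iframe. The URL below is a placeholder assumption; the real embed URL comes from the AI App's settings in Magnet AI.

```python
# Illustrative only: the embed URL is a placeholder; Magnet AI provides the
# real one from the AI App's settings.
def build_embed_snippet(embed_url: str, height: int = 600) -> str:
    """Wrap an AI App's embed URL in an iframe for a CRM page."""
    return (
        f'<iframe src="{embed_url}" width="100%" height="{height}" '
        f'frameborder="0" title="AI App"></iframe>'
    )

snippet = build_embed_snippet("https://example.invalid/ai-apps/support-assistant/embed")
```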
Measure
Evaluation
Due to the probabilistic nature of LLM-generated output, it is essential to evaluate and compare the output a model produces, especially in the enterprise CRM domain, where data consistency is critical.
The Evaluation feature helps admins test RAG Tools and Prompt Templates against sets of test data, so that Gen AI performance can be continuously improved. Available features:
- Import or manually create test data for evaluation
- Configure and launch evaluation jobs
- Quick-launch an evaluation from a particular RAG Tool or Prompt Template
- Download and view evaluation results
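A minimal evaluation loop can be sketched as follows. A real evaluation job would call the RAG Tool and use richer metrics (for example, LLM-based grading); this toy version scores keyword recall against a hand-made test case, and all names are illustrative assumptions.

```python
# Toy evaluation sketch: scores an answer by keyword recall. A real job
# would invoke the RAG Tool and apply richer metrics.
def keyword_recall(answer: str, expected_keywords: list[str]) -> float:
    """Fraction of expected keywords that appear in the answer."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in answer_lower)
    return hits / len(expected_keywords)

test_set = [  # imported or manually created test data
    {
        "question": "How do I reset my password?",
        "answer": "Open account settings and click reset password.",
        "keywords": ["account settings", "reset"],
    },
]
scores = [keyword_recall(case["answer"], case["keywords"]) for case in test_set]
average_score = sum(scores) / len(scores)
```

Running such jobs over time gives the before/after comparisons needed to improve prompts and retrieval settings.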
Automate
Coming soon: Agents
Combine Prompt Templates, RAG Tools, and configurable API tools to design service-oriented agentic workflows and accelerate task execution, from issue categorization and action selection to drafting customer emails and post-processing issues.
Allow human intervention when necessary to ensure optimal control.
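Since the Agents feature is not yet released, the following is a purely speculative sketch of the kind of workflow described above, with a human-in-the-loop gate before a drafted email goes out. Every step and name here is an assumption for illustration.

```python
# Speculative sketch of an agentic workflow with a human-approval gate;
# the Agents feature is unreleased, so all step names are assumptions.
def run_workflow(issue: str, approve_draft) -> dict:
    """Categorize an issue, pick an action, draft an email, gate on approval."""
    # Issue categorization (a real agent would use a Prompt Template here)
    category = "login" if "password" in issue.lower() else "general"
    # Action selection
    action = "send_reset_link" if category == "login" else "escalate"
    # Draft a customer email
    draft = f"Hello, regarding your {category} issue we will {action.replace('_', ' ')}."
    # Human intervention: the draft is only sent if a reviewer approves it
    status = "sent" if approve_draft(draft) else "needs_review"
    return {"category": category, "action": action, "status": status}

result = run_workflow("I forgot my password", approve_draft=lambda draft: True)
```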