Semantic Search over Token Metadata

Build a vector search layer over Dual objects using embeddings for natural-language token discovery.

What You'll Build

Dual's built-in search is great for exact filters, but what about fuzzy queries like "find tokens similar to my gold loyalty card" or "show me anything related to summer events"? In this tutorial you'll build a semantic search layer that uses vector embeddings to find tokens by meaning, not just exact property matches.

Step 1: Project Setup

bash
mkdir dual-semantic-search && cd dual-semantic-search
npm init -y
npm pkg set type=module   # the snippets below use ES module "import" syntax
npm install openai node-fetch

Step 2: Generate Embeddings for Your Tokens

First, pull your objects and create a text representation of each one, then embed it:

javascript
import OpenAI from 'openai';

const openai = new OpenAI();
const API = 'https://api-testnet.dual.network';

// Minimal authenticated fetch helper for the Dual API (bearer-token auth assumed)
async function dualFetch(path, token) {
  const res = await fetch(API + path, {
    headers: { Authorization: 'Bearer ' + token }
  });
  if (!res.ok) throw new Error('Dual API error: ' + res.status);
  return res.json();
}

// Flatten an object's template name and properties into one embeddable string
function objectToText(obj) {
  const props = Object.entries(obj.properties || {})
    .map(([k, v]) => k + ': ' + v)
    .join(', ');
  return 'Template: ' + obj.template_name + '. Properties: ' + props;
}

async function embedText(text) {
  const res = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: text
  });
  return res.data[0].embedding;
}

async function indexAllObjects(token) {
  const objects = await dualFetch('/objects?limit=200', token);
  const index = [];
  for (const obj of objects.items) {
    const text = objectToText(obj);
    const embedding = await embedText(text);
    index.push({ id: obj.id, text, embedding, obj });
  }
  return index;
}
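To see what actually gets embedded, here is the text representation for a hypothetical loyalty-card object (the object shape and property names are illustrative, not part of Dual's API; the function is repeated so this snippet runs standalone):

```javascript
// Same objectToText as above, repeated so this snippet runs on its own
function objectToText(obj) {
  const props = Object.entries(obj.properties || {})
    .map(([k, v]) => k + ': ' + v)
    .join(', ');
  return 'Template: ' + obj.template_name + '. Properties: ' + props;
}

// Hypothetical object shape for illustration
const sample = {
  template_name: 'loyalty_card',
  properties: { tier: 'gold', points: 1200 }
};

console.log(objectToText(sample));
// → Template: loyalty_card. Properties: tier: gold, points: 1200
```

Keeping the property names in the string is deliberate: the embedding model can then match queries like "gold tier" or "high point balances" against them.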

Step 3: Build the Search Function

Use cosine similarity to find the most relevant tokens for a natural-language query:

javascript
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

async function semanticSearch(query, index, topK = 5) {
  const queryEmbedding = await embedText(query);
  const scored = index.map((item) => ({
    ...item,
    score: cosineSimilarity(queryEmbedding, item.embedding)
  }));
  scored.sort((a, b) => b.score - a.score);
  return scored.slice(0, topK);
}
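The scoring and ranking logic can be sanity-checked without any API calls by using tiny mock embeddings (the two-dimensional vectors and ids below are made up for illustration):

```javascript
// Same cosineSimilarity as above, repeated so this snippet runs on its own
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

const queryVec = [1, 0];
const miniIndex = [
  { id: 'gold-card', embedding: [0.9, 0.1] }, // nearly parallel → high score
  { id: 'event-pass', embedding: [0, 1] }     // orthogonal → score 0
];

const ranked = miniIndex
  .map((item) => ({ ...item, score: cosineSimilarity(queryVec, item.embedding) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked.map((r) => r.id)); // → [ 'gold-card', 'event-pass' ]
```

Because cosine similarity only measures direction, not magnitude, it works well here: OpenAI embeddings of similar texts point in similar directions regardless of text length.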

Step 4: Query with Natural Language

javascript
async function main() {
  const token = process.env.DUAL_TOKEN;
  console.log('Indexing objects...');
  const index = await indexAllObjects(token);
  console.log('Indexed ' + index.length + ' objects.\n');

  const queries = [
    'loyalty cards with high point balances',
    'expired event passes from last month',
    'anything related to coffee or food rewards',
    'premium collectible tokens'
  ];

  for (const query of queries) {
    console.log('Query: "' + query + '"');
    const results = await semanticSearch(query, index);
    results.forEach((r, i) => {
      console.log('  ' + (i + 1) + '. ' + r.text + ' (score: ' + r.score.toFixed(3) + ')');
    });
    console.log();
  }
}

main();

Step 5: Add to Your Chatbot

Integrate semantic search as a tool in your conversational assistant:

javascript
// Add to your chatbot's tool definitions
{
  type: 'function',
  function: {
    name: 'semantic_search_tokens',
    description: 'Find tokens using natural language description',
    parameters: {
      type: 'object',
      properties: {
        query: {
          type: 'string',
          description: 'Natural language search query'
        },
        limit: {
          type: 'number',
          description: 'Max results (default 5)'
        }
      },
      required: ['query']
    }
  }
}
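On the handling side, a small dispatcher can route the model's tool call to semanticSearch. This is a sketch: handleToolCall is a hypothetical name, and the toolCall shape follows the OpenAI tool-calling format, where function.arguments arrives as a JSON string. A stub stands in for the real search so the routing logic runs on its own:

```javascript
// Hypothetical dispatcher: routes a model tool call to the search function.
// In the OpenAI tool-calling format, function.arguments is a JSON string.
function handleToolCall(toolCall, search) {
  if (toolCall.function.name !== 'semantic_search_tokens') {
    throw new Error('Unknown tool: ' + toolCall.function.name);
  }
  const args = JSON.parse(toolCall.function.arguments);
  return search(args.query, args.limit ?? 5); // apply the schema's default limit
}

// Stub in place of the real semanticSearch(query, index, topK)
const call = {
  function: {
    name: 'semantic_search_tokens',
    arguments: '{"query":"coffee rewards"}'
  }
};
const result = handleToolCall(call, (query, topK) => ({ query, topK }));
console.log(result); // → { query: 'coffee rewards', topK: 5 }
```

In the real chatbot loop, you would pass something like `(query, topK) => semanticSearch(query, index, topK)` as the search argument and send the JSON-serialized results back to the model as the tool response.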

Production Scaling: For collections over 1,000 objects, swap the in-memory index for a vector database like Pinecone, Weaviate, or pgvector. The embedding generation stays the same; you only change the storage and retrieval layer.
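As one concrete direction, with Postgres and the pgvector extension the index could live in a table like the following. This is an illustrative schema fragment, not part of Dual's API: the table and column names are placeholders, and 1536 is the output dimension of text-embedding-3-small.

```sql
-- Illustrative pgvector schema; names are placeholders
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE token_embeddings (
  object_id  text PRIMARY KEY,
  body       text,               -- the objectToText output
  embedding  vector(1536)        -- text-embedding-3-small dimension
);

-- Nearest neighbors by cosine distance (<=> is pgvector's cosine operator)
SELECT object_id, body, 1 - (embedding <=> $1) AS score
FROM token_embeddings
ORDER BY embedding <=> $1
LIMIT 5;
```

The ORDER BY ... LIMIT query replaces the map/sort/slice in semanticSearch; everything upstream of storage is unchanged.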

Companion Repo: Get the full working source code for this tutorial at github.com/orgs/DualOrg/dual-ai-semantic-search. Clone it, add your API keys, and run it locally in minutes.

What's Next?

Add guardrails to your AI integrations with AI Safety & Guardrails for Token Operations.