Modern Large Language Models (LLMs) gain significant power when connected to external systems and capabilities. While complex “agent” frameworks receive much attention, simpler, more direct methods like tool calling (also known as function calling or integrations) often provide a more robust, efficient, and predictable way to augment LLM capabilities.
Understanding where these tools execute—on the server or on the client—is fundamental to designing effective AI-powered applications.
🛠️ Defining Tool Calling / Integrations
Tool calling provides a structured mechanism for an LLM to request the execution of external code or services during a conversation. The process generally follows these steps:
- Define Capability: A specific function, API endpoint, or UI action is clearly defined, including its purpose and required parameters.
- Inform LLM: The LLM is made aware of the available tools, typically through an API specification or structured description provided in its context or system prompt.
- LLM Intent: Based on the conversational context, the LLM determines that a tool is needed and generates a structured request (e.g., a JSON object) indicating the tool name and parameters.
- Execute Tool: The application layer receives this request, validates it, executes the corresponding code or API call, and obtains a result.
- Return Result: The outcome of the tool execution (e.g., data retrieved, success/failure status, error message) is formatted and sent back to the LLM.
- LLM Response: The LLM uses the tool’s result to formulate its final response to the user.
This pattern, sometimes referred to as Augmented LLMs, allows models to interact with dynamic data, perform actions, and overcome the limitations of their static training data. The critical architectural decision is whether these tools operate server-side or client-side.
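The execute-and-return steps above (steps 4 and 5) can be sketched in TypeScript. The tool registry, the `getWeather` tool, and the message shapes here are illustrative assumptions, not any specific vendor's API:

```typescript
// The structured request the LLM emits in step 3 (shape is illustrative).
type ToolCall = { toolName: string; parameters: Record<string, unknown> };

// A hypothetical tool registry: name -> implementation.
// The descriptions the LLM sees (step 2) would live alongside these.
const tools: Record<string, (params: any) => Promise<string>> = {
  // Illustrative tool; a real one would call an actual weather API.
  getWeather: async ({ city }: { city: string }) =>
    JSON.stringify({ city, tempC: 21, conditions: 'clear' }),
};

// Steps 4-5: validate the LLM's request, execute it, and return a result
// string that gets fed back into the model's context.
async function executeToolCall(call: ToolCall): Promise<string> {
  const tool = tools[call.toolName];
  if (!tool) {
    // Returned to the LLM so it can recover or inform the user.
    return JSON.stringify({ error: `Unknown tool: ${call.toolName}` });
  }
  try {
    return await tool(call.parameters);
  } catch (err) {
    return JSON.stringify({ error: err instanceof Error ? err.message : 'Tool failed' });
  }
}
```

Note that errors are returned to the LLM as data rather than thrown: the model can only react to what appears in its context.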
🖥️ Server-Side Tools / Integrations
Server-side tools execute on the backend infrastructure, either within the chatbot platform itself or on a separate application server. These represent the system’s internal capabilities.
- Centralized Execution: The request originates from the bot’s backend logic. The execution context is the server environment.
- Consistent Operation: Tool execution is typically uniform across different users and conversations (e.g., fetching a specific file from a shared Dropbox, querying a common product database).
- Secure Context: Suitable for operations requiring sensitive credentials (API keys, database passwords, private certificates) that must not be exposed client-side.
- Common Use Cases:
- Querying internal databases or data warehouses.
- Interacting with third-party APIs (payment gateways, CRMs, ERPs, external knowledge sources).
- Utilizing platform-specific features (e.g., Botpress Knowledge Bases, human escalation triggers).
- Leveraging server-side libraries for complex computations or data processing (e.g., using Node.js crypto for hashing).
- Performing stateful operations on behalf of the system (e.g., creating an order, updating a user record in a central DB).
Conceptual Code Example (Backend):

```typescript
// Executes securely on the server (e.g., a Next.js route handler)
import { NextResponse } from 'next/server';
import { db } from './db'; // placeholder for your database client

async function queryDatabase(query: string) {
  try {
    // Validate query
    if (!query || typeof query !== 'string') {
      return NextResponse.json({ error: 'Query string is required' }, { status: 400 });
    }
    // For safety, only allow SELECT queries
    if (!query.trim().toLowerCase().startsWith('select')) {
      return NextResponse.json({ error: 'Only SELECT queries are allowed' }, { status: 400 });
    }
    if (!process.env.DATABASE_URL) {
      // The connection string stays server-side; it is never exposed to the client
      return NextResponse.json({ error: 'No DATABASE_URL env var set!' }, { status: 500 });
    }
    const sql = db.connect(process.env.DATABASE_URL);
    console.debug('Executing query:', query);
    const results = await sql(query); // Execute query
    return NextResponse.json({ results });
  } catch (error) {
    console.error(error);
    return NextResponse.json(
      { error: error instanceof Error ? error.message : 'Unknown error' },
      { status: 500 },
    );
  }
}
```
In platforms like Botpress and N8N, first-party and community-created integrations and workflows function as server-side tools, extending the core capabilities of the bot engine. None of this code ever runs in the user's browser, for security reasons. This can make it difficult, however, to have these bots leverage client-side tools.
💻 Client-Side Tools / Integrations
Client-side tools execute within the end-user’s environment, most commonly their web browser or a native mobile application. They directly manipulate the user interface or leverage local device capabilities.
- Decentralized Execution: Run locally within the user’s browser or application sandbox.
- User-Context Dependent: Actions are specific to the individual user’s session state, UI view, or device context (e.g., updating their map view, accessing their microphone).
- Event-Driven Communication: Often implemented using a publish/subscribe pattern. The bot backend emits events (e.g., via WebSocket messages) specifying the action and parameters. Client-side JavaScript listens for these events and triggers the corresponding UI updates or device interactions. The client can also send events back to the bot (e.g., user clicked a button on the page, form data submitted).
- Limited Security Context: Should generally avoid handling sensitive secrets directly, as client-side code can be inspected. Authentication/authorization might rely on the user’s existing session.
- Asynchronous and State-Sensitive: Client-side events can occur independently of the main conversational flow. The application must robustly handle events arriving at unexpected times or state changes occurring on the client that the bot needs to be aware of.
- Common Use Cases:
- Dynamically updating User Interface (UI) elements (e.g., highlighting text, showing/hiding components, rendering charts based on bot data).
- Embedding external content within the UI (iframes for help articles, video players).
- Accessing device hardware with user permission (camera, microphone, location services).
- Interacting with the hosting web page (reading page content, pre-filling forms, scrolling to an element).
- Navigating the user (e.g., `window.location.href = '...'`).
Conceptual Code Example (Frontend JavaScript):

```javascript
// Executes in the user's browser
function initializeBotEventListeners(botWebSocket) {
  botWebSocket.onmessage = (event) => {
    const messageData = JSON.parse(event.data);
    if (messageData.type === 'tool_call' && messageData.toolName === 'displayMapLocation') {
      const { lat, lon, zoom } = messageData.parameters;
      // Call local JavaScript function to update the map UI element
      updateMapOnPage(lat, lon, zoom);
    }
    // ... handle other tool calls or bot messages
  };
}

function sendClientEventToBot(botWebSocket, eventName, payload) {
  const message = { type: 'client_event', eventName, payload };
  botWebSocket.send(JSON.stringify(message));
}

// Example: Sending form data collected on the client
// sendClientEventToBot(ws, 'formSubmitted', { name: 'Jane', email: 'jane@example.com' });
```
Client-side tools are essential for creating rich, interactive experiences where the chatbot is deeply integrated with the application’s presentation layer.
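Because client-side events can arrive before the conversational flow expects them, it often helps to buffer early events until the bot is ready to consume them. A minimal sketch, assuming one handler per event name (a hypothetical class, not a library API):

```typescript
// Buffers client events that arrive before the bot flow has asked for them.
class ClientEventBuffer {
  private pending = new Map<string, unknown[]>();
  private handlers = new Map<string, (payload: unknown) => void>();

  // Called when an event arrives from the page (possibly "too early").
  emit(eventName: string, payload: unknown): void {
    const handler = this.handlers.get(eventName);
    if (handler) {
      handler(payload);
    } else {
      const queue = this.pending.get(eventName) ?? [];
      queue.push(payload);
      this.pending.set(eventName, queue);
    }
  }

  // Called when the bot flow is ready for the event; drains anything queued.
  expect(eventName: string, handler: (payload: unknown) => void): void {
    this.handlers.set(eventName, handler);
    for (const payload of this.pending.get(eventName) ?? []) handler(payload);
    this.pending.delete(eventName);
  }
}
```

For example, if the user submits a form before the bot asks for it, the submission is queued and delivered the moment the flow calls `expect('formSubmitted', ...)`.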
🧠 Designing Effective Tools (Server & Client)
- Abstract vs. Specific: Define tools based on core capabilities rather than single-use functions. Prefer `updateUserProfile(userId, data)` over `changeUserEmail()` and `changeUserPhone()`. Prefer `renderChart(chartType, data, elementId)` over `displaySalesBarChart()`.
- Clear Definition: Provide unambiguous names, detailed descriptions, and precise input/output schemas (e.g., using JSON Schema). The LLM relies entirely on this definition to use the tool correctly.
- Atomicity: Design tools to perform a single logical operation. Complex workflows can be orchestrated by the LLM calling multiple atomic tools sequentially.
- Idempotency (where applicable): If a tool might be called multiple times with the same inputs (e.g., due to retries), ensure it produces the same result without unintended side effects. This is particularly important for server-side tools performing writes or updates.
- Error Handling: Implement robust error handling within the tool’s execution logic. Return meaningful error messages to the LLM so it can potentially retry or inform the user.
- Asynchronicity Handling (Client-Side): Design bot flows and client-side logic to manage the timing of events. What happens if the user submits a form before the bot asks for it? What if a UI update event arrives after the user has navigated away?
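Putting the abstraction and clear-definition principles together, a tool definition might look like the following. The shape loosely follows common function-calling schemas, but the field names and descriptions here are illustrative:

```typescript
// An abstract, atomic tool definition with a JSON Schema for its parameters.
const updateUserProfileTool = {
  name: 'updateUserProfile',
  description:
    "Update one or more fields on a user's profile. Use this for any profile change " +
    '(email, phone, display name) rather than separate single-field tools.',
  parameters: {
    type: 'object',
    properties: {
      userId: { type: 'string', description: 'Stable ID of the user to update' },
      data: {
        type: 'object',
        description: 'Fields to change; omit fields that should stay the same',
        properties: {
          email: { type: 'string', format: 'email' },
          phone: { type: 'string' },
          displayName: { type: 'string' },
        },
        additionalProperties: false,
      },
    },
    required: ['userId', 'data'],
  },
};
```

The detailed `description` fields do double duty: they document the tool for developers and are the only guidance the LLM receives about when and how to call it.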
📚 Resources
Mastering tool integration is key to unlocking the full potential of LLMs in practical applications.
- OpenAI: Function Calling Guide
- Anthropic: Tool Use Documentation
- Botpress: Integrations Documentation, Building Custom Integrations Guide
- Conceptual Overviews: DeepLearning.AI / Langchain Course on Functions, Tools, Agents
Choosing between client-side and server-side tools—or using a combination of both—depends heavily on the specific requirements of your application, security considerations, and the desired user experience.
Feel free to reach out if you’d like to discuss the best integration strategy for your specific chatbot project.
❤️
Gordy