Function calls are the unsung hero of LLM UI manipulation. While OpenAI has made great strides leveraging function calls to manipulate the UI in their demos, the rest of the industry has yet to take its first meaningful steps. But what do those steps look like?
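As a rough illustration (mine, not from the post), here is a minimal sketch of the pattern, assuming the OpenAI Chat Completions tools API; the tool name `highlight_panel` and the `applyUiAction` helper are hypothetical stand-ins for whatever a real frontend would wire up:

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Expose a UI action to the model as a tool definition.
const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "highlight_panel",
      description: "Highlight a panel in the dashboard UI",
      parameters: {
        type: "object",
        properties: {
          panelId: { type: "string", description: "ID of the panel to highlight" },
        },
        required: ["panelId"],
      },
    },
  },
];

// Hypothetical frontend hook: in a real app this would update component state
// or dispatch an event instead of logging.
function applyUiAction(name: string, args: Record<string, unknown>) {
  if (name === "highlight_panel") {
    console.log(`UI: highlighting panel ${args.panelId}`);
  }
}

async function handleUserMessage(userText: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: userText }],
    tools,
  });

  // If the model chose to call a tool, apply it to the UI
  // instead of (or alongside) rendering a text reply.
  for (const call of response.choices[0].message.tool_calls ?? []) {
    if (call.type === "function") {
      applyUiAction(call.function.name, JSON.parse(call.function.arguments));
    }
  }
}
```

The key idea is that the model never touches the DOM itself; it only emits structured tool calls, and the frontend decides how (and whether) to act on them.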
Navigating the complex decision of building versus buying a chatbot platform requires exploring strategic considerations, industry insights, and practical approaches. That is a fancy way of saying it is a tough decision!
From full creative freedom to strict fact matching, organizations can design chatbots that meet their specific risk tolerance and communication needs. The key is choosing an approach that serves the user while protecting the brand.
Ensuring proper communication is critical — don’t let a chatbot fail to chat! Bots must strike a balance between confirming user information and proceeding with a reasonable assumption of correctness. Explicit and implicit confirmations are the primary tools to achieve this balance.
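To make the distinction concrete, here is a small sketch of my own (not from the post): an explicit confirmation stops to verify a captured value, while an implicit confirmation echoes it back and keeps the conversation moving. The slot structure and the `CONFIDENCE_THRESHOLD` value are hypothetical.

```typescript
// Hypothetical confidence score from the NLU/ASR layer (0 to 1).
const CONFIDENCE_THRESHOLD = 0.8;

interface Slot {
  name: string;
  value: string;
  confidence: number;
}

// Explicit confirmation: stop and verify before proceeding.
function explicitConfirmation(slot: Slot): string {
  return `Just to confirm, your ${slot.name} is ${slot.value}. Is that correct?`;
}

// Implicit confirmation: echo the value back while moving on.
function implicitConfirmation(slot: Slot, nextPrompt: string): string {
  return `Got it, ${slot.value}. ${nextPrompt}`;
}

// Pick a style based on how confident we are in what we captured.
function confirm(slot: Slot, nextPrompt: string): string {
  return slot.confidence < CONFIDENCE_THRESHOLD
    ? explicitConfirmation(slot)
    : implicitConfirmation(slot, nextPrompt);
}

// Example: a low-confidence phone number triggers an explicit check.
console.log(
  confirm(
    { name: "phone number", value: "555-0123", confidence: 0.6 },
    "What time works for the callback?"
  )
);
```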
Like any software project, building a chatbot requires careful planning. While all software projects fall somewhere along the Waterfall-Agile spectrum, I believe chatbot projects should lean closer to the Agile end, emphasizing rapid prototyping and iteration over extensive upfront planning. This post draws on my experience with numerous chatbot projects and outlines what successful teams have done at the start.
Most chatbots stick to one modality—either text or voice. But as someone who uses subtitles for everything, I wonder why voice bots don’t also include text for accessibility. Is it a limitation in the voice tech stack? Does text clutter the UI? To find out, I decided to build my own streaming-first chatbot interface with both text and voice.
You never get a second chance to make a first impression. Chatbots are no different, and the first interaction with users sets the tone for the entire user experience. When crafting this initial message, I recommend keeping the 3 C’s in mind: Context, Capabilities, and Call to Action.
Context refers to who and where the bot is; it is the foundation upon which everything about the bot is built.
When a big organization or government is looking at using an AI system, trust is often on their minds. There’s a lot of talk about AI hallucination and lies, like this NY Times article saying GPT-4 hallucinates 3% of the time or this paper showing that GPT-4 can engage in insider trading and then lie about it. How can folks who work in AI build systems that big clients can trust?