Build a Bot Workshop - Daniel Sutton

The Talk

🧠 What are GPT LLMs?

💬 OpenAI Chat Completion

🤖 Building a Bot

🚀 Next Steps

What are GPT LLMs?

Definitions

  • LLM: Large Language Model - designed to understand and generate human language
  • GPT: Generative Pre-trained Transformer - a specific type of LLM

Background and Development

  • Early Models: Rule-based systems, statistical models
  • Transformers: The "Attention Is All You Need" architecture (2017)
  • OpenAI: GPT, GPT-2, GPT-3, Codex, GPT-3.5, GPT-4

Basics of GPT LLMs

  • Tokens: Building blocks of text (counted in the sketch below)
  • Byte Pair Encoding: Smart tokenization strategy
  • Context Window: The model's focus span
  • Generation: Crafting text, one token at a time
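
To make tokens and the context window concrete, here is a minimal token-counting sketch. It assumes the js-tiktoken package (not part of the workshop code) and uses the cl100k_base byte pair encoding used by the gpt-3.5-turbo family; counting tokens this way helps estimate how much of the context window a prompt will consume.

import { getEncoding } from "js-tiktoken";

// cl100k_base is the BPE used by gpt-3.5-turbo and gpt-4 models
const enc = getEncoding("cl100k_base");

const tokens = enc.encode("Tell me about JavaScript");
console.log(tokens.length); // how many tokens this prompt uses
console.log(tokens);        // the underlying token IDs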

Evolving GPT LLMs

  • Adaptive Learning: Beyond language to understanding tasks
  • Prompt Engineering: Crafting inputs for specific outcomes
  • Instruction Tuning: Teaching direct task execution
  • RLHF: Reinforcement Learning from Human Feedback

OpenAI Chat Completions

OpenAI Playground

(screenshot: the OpenAI Playground)

View Code

(screenshot: the Playground's View code panel)

Example API Call

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "system",
      content: "You are a sentient robot...",
    },
    {
      role: "user",
      content: "Tell me about JavaScript",
    },
  ],
  temperature: 1,
  max_tokens: 256,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
});

Example API Response

{ "id": "chatcmpl-85DcgktUmpLrXgLYkMfofDsoOzJfG", "object": "chat.completion", "created": 1696254698, "model": "gpt-3.5-turbo-0613", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Apologies, but as a robot..." }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 48, "completion_tokens": 161, "total_tokens": 209 } }

Customization Options for Chat Completions

  • Choose a Model: Multiple GPT versions
  • Set the Temperature: Tune response unpredictability
  • Define Maximum Length: Cap the response size
  • Use Stop Sequences: Designate response endpoints (see the sketch after this list)
  • Adjust Top P: Vary response diversity
  • Apply Penalties: Reduce repetition
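
All of these knobs are plain request parameters. A hedged sketch, reusing the openai client from the earlier example with illustrative values; only the stop parameter is new relative to that call:

const response = await openai.chat.completions.create({
  model: "gpt-4",           // choose a model
  messages,                 // the conversation so far (assumed defined)
  temperature: 0.2,         // lower = more deterministic
  max_tokens: 128,          // cap the response size
  stop: ["\nUser:"],        // stop generating when this sequence appears
  top_p: 1,                 // nucleus sampling cutoff
  frequency_penalty: 0.5,   // discourage repeated tokens
  presence_penalty: 0.5,    // nudge the model toward new topics
});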

Advanced Interactivity: Function Calls

  • Enhanced Capability: GPT can invoke specific functions
  • JSON Schema: Define functions and parameters
  • Intelligent Assessment: AI judges the need for function calls
  • Context Consideration: Function details impact the context window

Function Calling Example

{ "model": "gpt-3.5-turbo", "messages": [ { "role": "user", "content": "What's the weather like in Boston today?" } ], "functions": [ { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } ], "function_call": "auto" }

Function Calling Response

{ "choices": [ { "finish_reason": "function_call", "index": 0, "message": { "content": null, "function_call": { "arguments": "{\n \"location\": \"Boston, MA\"\n}", "name": "get_current_weather" }, "role": "assistant" } } ], "created": 1694028367, "model": "gpt-3.5-turbo-0613", "object": "chat.completion", "usage": { ... } }

Expanding the Toolbox: Other OpenAI APIs

  • Embeddings API: For semantic analysis and vector comparisons
  • Moderation API: For content filtering and safety controls (both sketched below)
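
A hedged sketch of what calls to these two APIs look like with the same Node SDK; the embedding model name and the inputs are illustrative:

// Embeddings: turn text into a vector for semantic search and similarity comparisons
const embedding = await openai.embeddings.create({
  model: "text-embedding-ada-002",
  input: "Tell me about JavaScript",
});
console.log(embedding.data[0].embedding.length); // dimensionality of the vector

// Moderation: flag unsafe content before (or after) it reaches the model
const moderation = await openai.moderations.create({
  input: "Some user-submitted text",
});
console.log(moderation.results[0].flagged); // true if the content trips a safety category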

Building a Bot

Hurdles with GPT LLMs

⏱️ Latency: responses are generated one token at a time, so long answers take time
🧠 No conversational memory: every request is stateless (see the sketch below)
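
Because the API is stateless, "memory" is simply re-sending the prior turns in the messages array on every request. A minimal sketch of that pattern; the history array and its size cap are illustrative assumptions, not workshop code:

// Keep the running conversation in memory (or a database) and resend it every turn
const history = [
  { role: "system", content: "You are a sentient robot..." },
];

async function chat(userInput) {
  history.push({ role: "user", content: userInput });

  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: history, // the whole conversation so far
  });

  const reply = response.choices[0].message;
  history.push(reply); // remember the assistant's answer for the next turn

  // Naive cap so the history does not outgrow the context window
  if (history.length > 20) history.splice(1, 2); // drop the oldest user/assistant pair

  return reply.content;
}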

Crafting the Back End

Serverless Chatbot with Netlify Functions

  • Approach 1: Async API Calls

    • Stateless interactions
    • Easy to implement and scale
  • Approach 2: Streaming via Server-Sent Events

    • Continuous, real-time communication
    • Keeps connection alive for updates

Why Netlify?

  • Serverless: No need to manage servers
  • Scalable: Pay for what you use
  • Simple: Easy to set up and use
  • Supports SSE: Server-Sent Events

Creating Netlify Functions

  1. Create the function in netlify/functions/hello.js:
export const handler = async (event) => {
  return { statusCode: 200, body: "Hello World!" };
};
  2. Use the Netlify CLI to deploy your site
  3. Navigate to /.netlify/functions/hello to call your function

Serverless RESTful Chatbot

import axios from "axios";

export const handler = async (event) => {
  const res = await axios.post(
    "https://api.openai.com/v1/chat/completions",
    {
      // OpenAI API Request Body
    },
    {
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
    }
  );

  // Netlify function bodies must be strings, so serialize the parsed JSON
  return { statusCode: 200, body: JSON.stringify(res.data) };
};

Serverless SSE Chatbot

import { stream } from "@netlify/functions";
import axios from "axios";

export const handler = stream(async (event) => {
  const res = await axios.post(
    "https://api.openai.com/v1/chat/completions",
    {
      // OpenAI API Request Body
      stream: true,
    },
    {
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      responseType: "stream",
    }
  );

  return {
    headers: { "content-type": "text/event-stream" },
    statusCode: 200,
    body: res?.data,
  };
});
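
Both handlers elide the OpenAI request body. One way it might be wired up (an assumption, not shown in the slides) is to read the prompt query parameter that the front-end code on the next slides sends:

// Inside either handler: build the request body from the incoming query string
const userPrompt = event.queryStringParameters?.prompt ?? "Hello";

const requestBody = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a sentient robot..." },
    { role: "user", content: userPrompt },
  ],
  // stream: true for the SSE variant
};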

Designing the Front End

Dynamic Chat Interface Leveraging Serverless Architecture

  • Approach 1: RESTful API Integration

    • Stateless requests for chat interactions
    • Simplified data fetching for scalability and maintainability
  • Approach 2: Real-Time Updates with SSE

    • Live chat experience with instant display of messages
    • Persistent connection for seamless user experience

RESTful API Integration

import axios from "axios";

async function callNetlifyFunction(userPrompt) {
  // Encode the prompt so spaces and punctuation survive the query string
  const response = await axios.get(
    `/.netlify/functions/bot?prompt=${encodeURIComponent(userPrompt)}`
  );
  return response.data;
}

callNetlifyFunction("Hello").then((data) => console.log(data));

SSE Integration

function subscribeToEventStream(userPrompt) {
  const eventSource = new EventSource(
    `/.netlify/functions/bot?prompt=${encodeURIComponent(userPrompt)}`
  );

  eventSource.onmessage = (event) => {
    console.log("Data received:", event.data);
    if (event.data === "[DONE]") {
      console.log("Closing stream upon receiving [DONE]");
      eventSource.close();
    }
  };

  eventSource.onerror = (error) => {
    console.error("EventSource encountered an error:", error);
    eventSource.close();
  };
}

subscribeToEventStream("Hello");

Next Steps

🛡️ Defensive Prompt Engineering
  https://learnprompting.org/

📚 Retrieval Augmented Generation (RAG) and Vector Databases
  https://www.pinecone.io/learn/retrieval-augmented-generation/

⛓️ OpenAI Assistants
  https://platform.openai.com/docs/assistants/overview/

Key Points

🤖 GPT LLMs: AI models for language understanding and generation.

💬 Chat API: Enables interactive GPT-powered conversations.

⚙️ Customizable: Includes function calls for tailored responses.

⚡ Real-time: Server-Sent Events enable live updates for users.

Questions?

💛 Thank You! 💛

Stay in Touch 😊🤝

LinkedIn 👔
https://www.linkedin.com/in/d-cs

GitHub 💻
https://github.com/d-cs
