This backend is required for interacting with your custom ChatGPT apps, and even for embedded chat services and widgets on your websites and web apps.

The boilerplate code is surprisingly lean. Create a file named “server.js” in your Vercel project and paste in this code for a basic template that talks to the OpenAI servers.

const express = require('express');
const cors = require('cors');
const OpenAI = require('openai');

const app = express();
app.use(cors());          // allow cross-origin requests from your frontend
app.use(express.json());  // parse JSON request bodies

// The API key is read from the environment (see the Caveats below)
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Accepts { "prompt": "..." } and responds with { "reply": "..." }
app.post('/chat', async (req, res) => {
    const { prompt } = req.body;

    try {
        const response = await client.chat.completions.create({
            model: "gpt-4",
            messages: [{ role: "user", content: prompt }]
        });

        const botReply = response.choices[0].message.content;
        res.json({ reply: botReply });
    } catch (error) {
        console.error('Error in OpenAI API call:', error);
        res.status(500).json({ error: 'Error communicating with the bot' });
    }
});

// Export the app instead of calling app.listen() so Vercel can run it as a serverless function
module.exports = app;
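
Once deployed (or running locally), you can sanity-check the endpoint from your frontend or a quick script. Here's a minimal sketch, assuming a placeholder deployment URL and the global fetch available in browsers and Node 18+:

const BACKEND_URL = 'https://your-project.vercel.app'; // placeholder; swap in your own Vercel URL (see Caveats)

async function askBot(prompt) {
    // POST the prompt to the /chat route defined in server.js
    const res = await fetch(`${BACKEND_URL}/chat`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt })
    });
    if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
    const data = await res.json();
    return data.reply; // matches the { reply } shape returned by server.js
}

askBot('Hello!').then(console.log).catch(console.error);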

Caveats

  1. Dependencies to install in your Vercel project (e.g. with npm install express cors openai):

    1. express
    2. cors
    3. openai
  2. An OpenAI developer API key is required to interface with their servers programmatically:

    1. Guide link
    2. Ensure that your API key is saved in a separate file called “.env” at the project root (alongside the “server.js” file); see the sketch after this list
      1. StackOverflow guide on this process
  3. Your deployment URL is configurable and easy to find: point your frontend service at it (as in the fetch example above)

    (they're the addresses partially obscured in blue below)

    vercel-backend-image.png

  4. GitHub is recommended for managing your project, and it integrates well with Vercel;

    1. I haven't tested other VCSs with Vercel
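
For caveat 2, here's a minimal sketch of wiring the key up for local development, assuming you also install the dotenv package (on Vercel itself, environment variables are typically set in the project settings rather than shipped in a “.env” file):

// .env (at the project root, alongside server.js) contains a single line:
// OPENAI_API_KEY=sk-...

// At the top of server.js, before constructing the OpenAI client:
require('dotenv').config(); // loads the values from .env into process.env locally

// The client then reads the key from the environment, exactly as in the template:
// const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });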