This is required for interaction with your custom ChatGPT apps, and even for embedded chat services and widgets on your websites and web apps.
The boilerplate code is surprisingly lean. Create a file named “server.js” in your Vercel project and paste in the following code to get a basic template for talking to OpenAI’s servers.
const express = require('express');
const cors = require('cors');
const OpenAI = require('openai');

const app = express();
app.use(cors());          // allow cross-origin requests from your frontend
app.use(express.json());  // parse JSON request bodies

// Read the API key from an environment variable rather than hard-coding it
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// POST /chat: forwards the user's prompt to OpenAI and returns the reply
app.post('/chat', async (req, res) => {
  const { prompt } = req.body;
  try {
    const response = await client.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: prompt }]
    });
    const botReply = response.choices[0].message.content;
    res.json({ reply: botReply });
  } catch (error) {
    console.error('Error in OpenAI API call:', error);
    res.status(500).json({ error: 'Error communicating with the bot' });
  }
});

// Export the app instead of calling app.listen; Vercel runs it as a serverless function
module.exports = app;
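If you want to try the route locally with plain Node before deploying, one option (an optional addition, not part of the template above) is a small guard at the bottom of server.js; the port number here is an arbitrary choice:

// Start a local server only when the file is run directly (e.g. `node server.js`);
// on Vercel the exported app is used and this branch is not needed.
if (require.main === module) {
  const port = process.env.PORT || 3000;
  app.listen(port, () => console.log(`Listening on http://localhost:${port}`));
}

With that in place, node server.js starts the API locally and you can point a frontend at http://localhost:3000/chat during development.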
Dependencies to install in your Vercel project are express, cors, and openai, matching the three require statements at the top of server.js.
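A typical way to add them, assuming you manage packages with npm (yarn and pnpm work just as well):

npm install express cors openai

Vercel installs whatever is listed in package.json automatically at deploy time.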
An OpenAI developer API key is required to interface with their servers programmatically; create one in the OpenAI dashboard and store it in your Vercel project as the OPENAI_API_KEY environment variable that the code reads.
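One way to set it, assuming the Vercel CLI is installed and linked to the project (the Environment Variables section of the project settings in the dashboard works just as well):

vercel env add OPENAI_API_KEY

Because the code reads the value through process.env.OPENAI_API_KEY, the key never needs to appear in your source.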
Your deployment URL is configurable and easy to find in the Vercel dashboard: use it as the endpoint your frontend service calls
(it’s shown among the addresses partially obscured in blue below).
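As a rough sketch of the frontend side, a chat widget could call the endpoint like this; https://your-project.vercel.app is a placeholder for your own deployment address:

// Send the user's prompt to the /chat route and return the bot's reply.
// Replace the placeholder URL with your actual Vercel deployment URL.
async function askBot(prompt) {
  const res = await fetch('https://your-project.vercel.app/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt })
  });
  const data = await res.json();
  return data.reply;
}

askBot('Hello!').then((reply) => console.log(reply));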
GitHub is recommended for managing your project, and it integrates seamlessly with Vercel: every push to the connected repository can trigger an automatic deployment.