openai-tokens
Why openai-tokens
openai-tokens streamlines working with OpenAI's language models. By ensuring prompts fit within each model's token limit, it prevents token-limit errors and keeps your requests efficient.
import {
  truncateMessage, // truncate a single message
  truncateWrapper, // truncate all messages
} from 'openai-tokens'

// The input (strings, just like prompts!)
const str = 'Trying to save money on my prompts! 💰'

// truncate with a model (the token limit is detected automatically)
const truncatedByModel = truncateMessage(str, 'gpt-3.5-turbo')

// the model is required for computing tokens, but you can also limit by number
const truncatedByNumber = truncateMessage(str, 'gpt-3.5-turbo', 100)

// enforce truncation around all messages
const truncatedBody = truncateWrapper({
  model: 'gpt-4', // auto-detects token limits 🙌
  opts: { // suppressed in the output
    limit: 1000, // you can enforce your own limit
    buffer: 500, // leave room for GPT to respond
    stringify: true // useful if you are sending it via fetch
  },
  messages: [{ role: 'user', content: str }]
})
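To see how `limit` and `buffer` interact, here is a minimal sketch of truncation against a token budget. This is not the library's implementation: openai-tokens counts real model tokens, while this sketch uses a naive whitespace split as a stand-in tokenizer, and `naiveTruncate` is a hypothetical helper for illustration only.

```javascript
// Stand-in "tokenizer": splits on whitespace instead of real BPE tokens.
function naiveTokenize(text) {
  return text.split(/\s+/).filter(Boolean)
}

// Keep only as many tokens as fit inside (limit - buffer),
// mirroring how the buffer reserves room for the model's reply.
function naiveTruncate(text, { limit, buffer = 0 }) {
  const budget = limit - buffer
  const tokens = naiveTokenize(text)
  if (tokens.length <= budget) return text
  return tokens.slice(0, budget).join(' ')
}

const prompt = 'one two three four five six seven eight'
// limit 6 minus buffer 2 leaves a budget of 4 tokens
console.log(naiveTruncate(prompt, { limit: 6, buffer: 2 }))
// → "one two three four"
```

The same budgeting idea applies with a real tokenizer: whatever the counting scheme, the prompt is trimmed to `limit - buffer` so the model has room to respond.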