Using the OpenAI API
To use the OpenAI API, you first need an account with some credit added. I added 10 USD some weeks ago and I've still used very little of it. Then you need to create an API key, which can be done on the API key page.
I'm going to be using the API via Python. Since I may upload the Python scripts to GitHub, I don't want to accidentally expose my key. OpenAI has a page on best practices for handling API keys, all common-sense basic stuff. To use an API key without hard-coding it in scripts, we can set an environment variable like so:
export OPENAI_API_KEY='yourkey'
I'm just going to do it on a per-session basis for now, rather than permanently. The problem with doing it permanently is that a potentially malicious script could read your API key, and I'm very aware that I test out a lot of different Python libraries using pip, an install channel which can be exploited.
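Since the key only lives in the shell's environment for the session, a script can fail fast with a clear message when it's missing, rather than dying later with a confusing authentication error. A minimal sketch (the helper name `require_api_key` is my own, not from any library):

```python
import os

def require_api_key():
    # Read the key from the environment; never hard-code it in the script
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Run: export OPENAI_API_KEY='yourkey'"
        )
    return key
```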
Here is a simple script to test a connection via the API:
main.py

import os
from openai import OpenAI

# Retrieve the API key from the environment variable
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def ask_gpt4o_mini(prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    message_content = response.choices[0].message.content
    # return message_content
    return response  # return the full response for now

if __name__ == "__main__":
    user_input = input("Ask something: ")
    answer = ask_gpt4o_mini(user_input)
    print("GPT-4o mini's response:", answer)
> python main.py
Ask something: What is the most famous building in Paris?
This is the raw response:
ChatCompletion(
    id="chatcmpl-AdG1t6W4nyGQYcEfNaS3ktC8UCAO",
    choices=[
        Choice(
            finish_reason="stop",
            index=0,
            logprobs=None,
            message=ChatCompletionMessage(
                content="The most famous building in Paris is the Eiffel Tower. Completed in 1889 for the Exposition Universelle (World's Fair), it is an iconic symbol of France and is visited by millions of tourists each year. Its distinctive iron lattice structure and impressive height make it a landmark recognized around the globe.",
                role="assistant",
                function_call=None,
                tool_calls=None,
                refusal=None,
            ),
        )
    ],
    created=1733919773,
    model="gpt-4o-mini-2024-07-18",
    object="chat.completion",
    service_tier=None,
    system_fingerprint="fp_bba3c8e70b",
    usage=CompletionUsage(
        completion_tokens=61,
        prompt_tokens=26,
        total_tokens=87,
        prompt_tokens_details={"cached_tokens": 0, "audio_tokens": 0},
        completion_tokens_details={
            "reasoning_tokens": 0,
            "audio_tokens": 0,
            "accepted_prediction_tokens": 0,
            "rejected_prediction_tokens": 0,
        },
    ),
)
Lots of stuff in the response, but I'm not sure how much of it is useful. The finish_reason could be worth checking. Documentation here: OpenAI Platform.
finish_reason indicates why the generation stopped:

* "stop": the model completed naturally.
* "length": the maximum token limit was reached.
* "function_call": if functions are enabled and the model invoked one.
* "content_filter": the response was filtered due to policy violations.
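Checking finish_reason before trusting the output seems like cheap insurance; a "length" stop means the answer was cut off mid-sentence. A sketch of such a check (the function name is mine, and the `SimpleNamespace` stand-ins just mimic the response shape shown earlier):

```python
from types import SimpleNamespace

def check_finish(response):
    """Return True if the model stopped naturally; warn otherwise."""
    reason = response.choices[0].finish_reason
    if reason == "stop":
        return True
    if reason == "length":
        print("Warning: output was truncated at the token limit")
    elif reason == "content_filter":
        print("Warning: response was filtered for policy reasons")
    elif reason == "function_call":
        print("Note: the model invoked a function")
    return False

# Stand-ins with the same shape as the real response
ok = SimpleNamespace(choices=[SimpleNamespace(finish_reason="stop")])
truncated = SimpleNamespace(choices=[SimpleNamespace(finish_reason="length")])
```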