A QuickStart for AI API Coding
OpenAI API quick start
OpenAI API setup
Create an OpenAI account and set up a payment method.
Create a project.
Create an API key in the project on OpenAI.
Initialize the Python environment
$ python3 --version
Python 3.11.9
$ pip3 --version
pip 24.0 from /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pip (python 3.11)
$ python3 -m venv .venv
$ source .venv/bin/activate
(.venv) $ pip install openai==1.50.2
Use the OpenAI API
Write a simple Python script to test the OpenAI API.
openai-api.py:
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Show me top 5 differences between Java and Python."
        }
    ]
)

print(completion.choices[0].message.content)
Execute the code:
(.venv) $ export OPENAI_API_KEY="your_api_key_here"
(.venv) $ python src/openai-api.py
Result:
Certainly! Here are the top 5 differences between Java and Python:
1. Syntax and Readability:
- Python: Python is known for its simple and clean syntax, which emphasizes readability. It uses indentation to define code blocks, which enhances the clarity of code structure.
- Java: Java has a more verbose syntax that requires explicit use of semicolons and braces to define code blocks. This can make Java code longer and arguably more complex compared to Python.
2. Typing System:
- Python: Python is dynamically typed, meaning that variable types are determined at runtime. You can change a variable’s type during execution without any declarations.
- Java: Java is statically typed, requiring explicit variable declarations and type definitions at compile time. This can help catch type-related errors early but may add to the verbosity of the code.
3. Performance:
- Python: Generally, Python is slower than Java due to its interpreted nature and dynamic typing. However, for many applications, this speed difference is not significant.
- Java: Java typically performs faster than Python because it is compiled to bytecode and runs on the Java Virtual Machine (JVM), which can optimize performance through Just-In-Time (JIT) compilation.
4. Memory Management:
- Python: Python uses automatic garbage collection for memory management, which can lead to unpredictable memory usage patterns but simplifies development by managing memory allocation and deallocation automatically.
- Java: Java also features automatic garbage collection but allows programmers more control over memory management through different garbage collectors and memory models.
5. Use Cases and Ecosystem:
- Python: Python is widely used in data science, machine learning, web development, automation, and scripting due to its rich set of libraries (e.g., NumPy, Pandas, Flask, Django) and simplicity.
- Java: Java is commonly used in large-scale enterprise applications, Android app development, and environments where performance and security are critical. It has a robust ecosystem with frameworks like Spring and Hibernate.
These differences highlight the strengths and weaknesses of each language, helping developers choose the right tool for their specific needs.
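The same endpoint can also stream the answer token by token instead of returning it in one block. A minimal sketch, assuming the same client and model as above (stream=True is the only change; each chunk carries a small delta of the text):

from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Show me top 5 differences between Java and Python."}],
    stream=True,
)

# Print each partial piece as it arrives; content is None on the final chunk.
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
print()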
LangChain with OpenAI quick start
LangChain setup
pip install langchain==0.3.1
pip install langchain-community==0.3.1
pip install langchain-openai==0.2.1
Use LangChain with OpenAI
Write a simple Python script to test LangChain with OpenAI.
langchain_openai_quickstart.py:
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
model = ChatOpenAI(model="gpt-4o-mini")
messages = [
    SystemMessage(content="Translate the following from English into Chinese"),
    HumanMessage(content="hi, how are you?"),
]
result = model.invoke(messages)
print(result.content)
Execute the code:
$ python src/langchain_openai_quickstart.py
你好,你好吗?
LangChain Prompt Template
# prompt template example
from langchain_core.prompts import ChatPromptTemplate
system_template = "Translate the following into {language}:"
prompt_template = ChatPromptTemplate.from_messages(
[("system", system_template), ("user", "{text}")]
)
result_template = prompt_template.invoke({"language": "italian", "text": "hi, how are you?"})
print(result_template)
result_template = prompt_template.invoke({"language": "Chinese", "text": "hi, how are you?"})
print(result_template)
result = model.invoke(result_template)
print(result.content)
Execute the code:
$ python src/langchain_openai_quickstart.py
你好,你好吗?
messages=[SystemMessage(content='Translate the following into italian:', additional_kwargs={}, response_metadata={}), HumanMessage(content='hi, how are you?', additional_kwargs={}, response_metadata={})]
messages=[SystemMessage(content='Translate the following into Chinese:', additional_kwargs={}, response_metadata={}), HumanMessage(content='hi, how are you?', additional_kwargs={}, response_metadata={})]
你好,你好吗?
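The prompt template and the model can also be composed into a single chain with the | operator, so formatting the prompt, calling the model, and parsing the output happen in one invoke call. A minimal sketch, reusing the model and prompt_template defined above:

from langchain_core.output_parsers import StrOutputParser

# prompt_template fills in the variables, model generates, StrOutputParser returns plain text
chain = prompt_template | model | StrOutputParser()
print(chain.invoke({"language": "Chinese", "text": "hi, how are you?"}))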
LangChain with PGVector quick start
PGVector setup
Download the PGVector Docker image:
pgvector/pgvector:pg16
Start the PGVector Docker container:
docker run --name pgvector-container \
-e POSTGRES_USER=langchain -e POSTGRES_PASSWORD=langchain \
-e POSTGRES_DB=langchain -p 6024:5432 -d pgvector/pgvector:pg16
Install Python libs:
pip install -qU langchain_postgres
pip install psycopg_binary
Python code
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
from langchain_core.documents import Document
from langchain_postgres import PGVector
# See docker command above to launch a postgres instance with pgvector enabled.
connection = "postgresql+psycopg://langchain:langchain@localhost:6024/langchain" # Uses psycopg3!
collection_name = "my_docs"
vector_store = PGVector(
    embeddings=embeddings,
    collection_name=collection_name,
    connection=connection,
    use_jsonb=True,
)
docs = [
    Document(
        page_content="there are cats in the pond",
        metadata={"id": 1, "location": "pond", "topic": "animals"},
    ),
    Document(
        page_content="ducks are also found in the pond",
        metadata={"id": 2, "location": "pond", "topic": "animals"},
    ),
    Document(
        page_content="fresh apples are available at the market",
        metadata={"id": 3, "location": "market", "topic": "food"},
    ),
    Document(
        page_content="the market also sells fresh oranges",
        metadata={"id": 4, "location": "market", "topic": "food"},
    ),
    Document(
        page_content="the new art exhibit is fascinating",
        metadata={"id": 5, "location": "museum", "topic": "art"},
    ),
    Document(
        page_content="a sculpture exhibit is also at the museum",
        metadata={"id": 6, "location": "museum", "topic": "art"},
    ),
    Document(
        page_content="a new coffee shop opened on Main Street",
        metadata={"id": 7, "location": "Main Street", "topic": "food"},
    ),
    Document(
        page_content="the book club meets at the library",
        metadata={"id": 8, "location": "library", "topic": "reading"},
    ),
    Document(
        page_content="the library hosts a weekly story time for kids",
        metadata={"id": 9, "location": "library", "topic": "reading"},
    ),
    Document(
        page_content="a cooking class for beginners is offered at the community center",
        metadata={"id": 10, "location": "community center", "topic": "classes"},
    ),
]
vector_store.add_documents(docs, ids=[doc.metadata["id"] for doc in docs])
results = vector_store.similarity_search(
"kitty", k=10, filter={"id": {"$in": [1, 5, 2, 9]}}
)
for doc in results:
print(f"* {doc.page_content} [{doc.metadata}]")
Execute the code:
$ python src/langchain_pgvector_example.py
* there are cats in the pond [{'id': 1, 'topic': 'animals', 'location': 'pond'}]
* the library hosts a weekly story time for kids [{'id': 9, 'topic': 'reading', 'location': 'library'}]
* ducks are also found in the pond [{'id': 2, 'topic': 'animals', 'location': 'pond'}]
* the new art exhibit is fascinating [{'id': 5, 'topic': 'art', 'location': 'museum'}]
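The vector store can also be wrapped as a retriever, which is the interface most LangChain chains expect. A minimal sketch, reusing the vector_store created above (the k value and the query string are arbitrary examples):

# Return the 2 most similar documents for a query
retriever = vector_store.as_retriever(search_kwargs={"k": 2})
for doc in retriever.invoke("kitty"):
    print(f"* {doc.page_content} [{doc.metadata}]")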