Building AI Agents

Using AI SDK

Using AI Frameworks

Modern AI frameworks simplify agent development. Here's how they compare.

Popular Frameworks

Framework          Best For
OpenAI SDK         Direct OpenAI access
LangChain          Complex chains
AI SDK (Vercel)    Web applications
Anthropic SDK      Claude models

Common Patterns

All frameworks share similar patterns:

  • Initialize client with API key
  • Create messages array
  • Call completion endpoint
  • Handle response (text or streaming)
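The four steps above can be sketched in a framework-agnostic way. `FakeClient` below is a hypothetical stand-in for a real SDK client, not an actual library:

```python
# Framework-agnostic sketch of the four shared steps.
# FakeClient is a made-up stand-in for any real SDK client.

class FakeClient:
    def __init__(self, api_key: str):
        # Step 1: initialize the client with an API key
        self.api_key = api_key

    def complete(self, messages: list[dict]) -> dict:
        # A real SDK would send the messages over HTTP here.
        last = messages[-1]["content"]
        return {"text": f"echo: {last}"}

client = FakeClient(api_key="demo")      # 1. Initialize client
messages = [                             # 2. Create messages array
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi!"},
]
response = client.complete(messages)     # 3. Call completion endpoint
print(response["text"])                  # 4. Handle response -> echo: Hi!
```

Real SDKs differ mainly in naming (e.g. where the completion method lives), but this shape carries over.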

Key Features to Look For

  • Streaming support
  • Tool/function calling
  • Error handling
  • Type safety
  • Async support
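Error handling in particular usually means retrying transient failures with exponential backoff. Here is a minimal sketch of that pattern; `flaky_call`, `TransientError`, and `with_retries` are illustrative names, not part of any real SDK:

```python
import asyncio

# Sketch of retry-with-exponential-backoff error handling.
class TransientError(Exception):
    """Stands in for a rate-limit or network error from an API."""

attempts = {"n": 0}

async def flaky_call() -> str:
    """Fails twice, then succeeds -- simulates transient API errors."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("rate limited")
    return "ok"

async def with_retries(coro_fn, max_retries: int = 5, base_delay: float = 0.01) -> str:
    for attempt in range(max_retries):
        try:
            return await coro_fn()
        except TransientError:
            if attempt == max_retries - 1:
                raise  # Out of retries: surface the error
            # Exponential backoff: 0.01s, 0.02s, 0.04s, ...
            await asyncio.sleep(base_delay * (2 ** attempt))

result = asyncio.run(with_retries(flaky_call))
print(result)  # ok, after two retried failures
```

Production SDKs often build this in (plus jitter), but knowing the pattern helps when a framework leaves retries to you.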

Try It Yourself

Run this code example to practice what you've learned.

example.py
# Simulated AI SDK patterns (framework-agnostic concepts)

from typing import AsyncIterator
import asyncio

class AIClient:
    """Simulated AI client demonstrating common patterns."""

    def __init__(self, api_key: str = "demo"):
        self.api_key = api_key
        self.default_model = "gpt-4"

    async def chat(
        self,
        messages: list[dict],
        model: str | None = None,
        temperature: float = 0.7,
        tools: list[dict] | None = None
    ) -> dict:
        """Non-streaming chat completion."""
        await asyncio.sleep(0.3)  # Simulate API call

        user_msg = messages[-1]["content"] if messages else ""
        return {
            "content": f"I understand you said: '{user_msg}'. How can I help?",
            "model": model or self.default_model,
            "usage": {"prompt_tokens": 50, "completion_tokens": 30}
        }

    async def chat_stream(
        self,
        messages: list[dict],
        model: str | None = None
    ) -> AsyncIterator[str]:
        """Streaming chat completion."""
        response_words = "Hello! I am an AI assistant ready to help you.".split()
        for word in response_words:
            await asyncio.sleep(0.1)
            yield word + " "

# Using the client
async def main():
    client = AIClient()

    # Non-streaming
    messages = [
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "What can you do?"}
    ]

    response = await client.chat(messages)
    print("Non-streaming response:")
    print(f"  {response['content']}")
    print(f"  Tokens used: {response['usage']}")

    # Streaming
    print("\nStreaming response:")
    print("  ", end="")
    async for chunk in client.chat_stream(messages):
        print(chunk, end="", flush=True)
    print()

asyncio.run(main())
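The simulated client accepts a `tools` parameter but never uses it. A minimal sketch of the tool-calling loop it alludes to looks like this; `fake_model` and `get_weather` are invented stand-ins, not a real SDK's API:

```python
import json

# Simulated tool-calling loop (framework-agnostic sketch).
def get_weather(city: str) -> str:
    """A local function the 'model' can request."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_model(messages: list[dict]) -> dict:
    """Pretend model: requests a tool the first time, answers the second."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather",
                              "arguments": json.dumps({"city": "Paris"})}}
    tool_result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"content": f"The forecast: {tool_result}"}

messages = [{"role": "user", "content": "Weather in Paris?"}]
reply = fake_model(messages)
while "tool_call" in reply:  # Loop until the model returns a final answer
    call = reply["tool_call"]
    result = TOOLS[call["name"]](**json.loads(call["arguments"]))
    messages.append({"role": "tool", "content": result})
    reply = fake_model(messages)
print(reply["content"])  # The forecast: Sunny in Paris
```

Real frameworks wrap this loop for you (the model returns structured tool calls, you execute them, append the results, and call the model again), but the control flow is the same.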