FAQ Library

AI API Integration FAQ

Practical answers for Base URL setup, SDK configuration, tool integrations, API errors, model selection, tokens and quota.

Base URL and SDK

What Base URL should I use?

For OpenAI-compatible access, use https://api.aliapi.me/v1 and an aliapi.me API key.

What changes in the OpenAI SDK?

Replace only apiKey and baseURL; messages, model and temperature stay exactly as before.
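
For the Python SDK (Node.js is analogous), the switch can be sketched as below; the key value and the model name in the comment are placeholders, not real credentials.

```python
# Sketch of pointing the official OpenAI Python SDK at the gateway.
# Only api_key and base_url change; the key below is a placeholder.
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-aliapi-key",         # your aliapi.me key
    base_url="https://api.aliapi.me/v1",  # gateway Base URL
)

# messages, model and temperature are passed exactly as before, e.g.:
# client.chat.completions.create(model="your-model-name",
#                                messages=[{"role": "user", "content": "Hi"}])
```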

Does this work with Node.js and Python?

Yes. Any SDK that supports a custom Base URL can connect through the gateway.

Errors and timeouts

How do I fix 401 Unauthorized?

Verify the API key value, the Bearer prefix in the Authorization header, the key's status, and that the request is sent to the correct Base URL.
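
A frequent 401 cause is a malformed Authorization header. A minimal sketch of building it correctly (the key is a placeholder):

```python
def auth_header(api_key: str) -> dict:
    # The token must be prefixed with "Bearer " (note the trailing space).
    return {"Authorization": f"Bearer {api_key}"}

headers = auth_header("sk-demo")  # placeholder key
```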

What causes 429 Too Many Requests?

Rate limits, exhausted quota, high concurrency or upstream throttling. Check the logs and retry with exponential backoff.
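
The retry logic can be sketched as follows; send_request stands in for your actual API call, and the base/cap values are assumptions to tune.

```python
import random
import time

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter, capped at `cap` seconds."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

def with_retries(send_request, max_attempts: int = 5, base: float = 1.0):
    """Retry `send_request` (placeholder for your API call) on HTTP 429."""
    for attempt in range(max_attempts):
        status = send_request()
        if status != 429:  # success or a non-retryable error: stop retrying
            return status
        time.sleep(backoff_delay(attempt, base=base))  # back off, then retry
    return 429
```

Jitter spreads retries out so many clients throttled at once do not all retry in the same instant.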

How should I handle timeouts?

Check network conditions, model latency, payload size and the client timeout setting; long generations may need a larger timeout.
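
With the OpenAI Python SDK, the client timeout can be raised at construction time; the values below are assumptions to adjust to your latency budget, and the key is a placeholder.

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-aliapi-key",         # placeholder key
    base_url="https://api.aliapi.me/v1",
    timeout=60.0,    # seconds before a request is abandoned
    max_retries=2,   # the SDK retries transient failures automatically
)
```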

Model choice

Which model should I use for chat?

Start with a stable general chat model, then move to stronger reasoning models for difficult tasks.

Which model is good for code?

Choose a model with strong code ability and enough context, then monitor latency and failure rate.

Which model is good for vision?

Use multimodal models that accept images and watch image size, format and cost.

Tokens and quota

How are tokens counted?

Usage usually includes input and output tokens. Long context, images and tool calls increase cost.
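
Cost can be estimated from the token counts reported with each response. A sketch with hypothetical per-token prices (real rates depend on the model):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float, out_price: float) -> float:
    # Input and output tokens are billed at separate rates.
    return input_tokens * in_price + output_tokens * out_price

# Hypothetical rates: $2 and $6 per million tokens.
cost = request_cost(1200, 300, 2 / 1_000_000, 6 / 1_000_000)
```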

How do I control quota?

Create separate keys for each project and set quota, concurrency and pause rules.

Tool setup

How do I connect Dify or FastGPT?

Choose the OpenAI-compatible or custom OpenAI provider type, then enter the Base URL and API key.

How do I connect Cursor or Cline?

Set an OpenAI-compatible endpoint with Base URL, API key and model name.