API integration guide

Transform API JSON before it reaches your app, database, or LLM

Use reusable Forge Json utilities and saved pipelines to clean, normalize, and prepare JSON inside backend jobs, ingestion workers, UI adapters, and LLM workflows.

Where Forge Json fits

Put a JSON transformation step between the external API and the place that consumes the data.

Instead of scattering JSON cleanup logic across your backend, centralize it in reusable pipelines. This keeps transformations auditable, consistent, and easier to evolve.

Client -> API -> Forge Json pipeline -> Clean JSON -> UI / Database / LLM

Use utility execution when a packaged tool already matches the job. Use saved pipeline execution when your team has composed and saved a multi-step workflow.

Run a packaged utility by utilityId

Use utilities when a single transformation step solves your problem, such as cleaning JSON, mapping values, or formatting data.

Tool pages map to packaged utilities. The API request uses the utility id in the URL and sends JSON under inputs.primary, with utility options in config.

{
  "inputs": {
    "primary": {
      "user": {
        "name": " Ada ",
        "email": null
      }
    }
  },
  "config": {
    "trimStrings": true,
    "removeNulls": true
  }
}

curl -X POST "$BASE_URL/api/v1/utilities/cleanup.clean-json" \
  -H "Authorization: Bearer fje_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"inputs":{"primary":{"user":{"name":" Ada ","email":null}}},"config":{"trimStrings":true,"removeNulls":true}}'

You can copy a ready-to-run request directly from any tool page.

Use this path for cleaning API responses before rendering UI components. A successful run returns the transformed JSON under data.output:

{
  "success": true,
  "data": {
    "output": {
      "user": {
        "name": "Ada"
      }
    }
  }
}
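The same utility call can be made from backend code. Here is a minimal Python sketch based on the request and response shapes shown above; the base URL, the `fje_` key, and the `cleanup.clean-json` utility id are the placeholders from this guide, not real values:

```python
import json
import urllib.request

BASE_URL = "https://example.com"  # placeholder; use your deployment's base URL
API_KEY = "fje_your_api_key"      # placeholder key from this guide


def build_utility_payload(primary: dict, config: dict) -> dict:
    """Wrap the input JSON and utility options in the request shape shown above."""
    return {"inputs": {"primary": primary}, "config": config}


def unwrap_output(response_body: dict) -> dict:
    """Extract the transformed JSON from a successful response envelope."""
    if not response_body.get("success"):
        raise ValueError(f"utility run failed: {response_body}")
    return response_body["data"]["output"]


def run_utility(utility_id: str, primary: dict, config: dict) -> dict:
    """POST to the utility endpoint and return the cleaned JSON."""
    payload = build_utility_payload(primary, config)
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/utilities/{utility_id}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return unwrap_output(json.load(resp))
```

The payload builder and response unwrapping are plain functions, so they can be unit-tested without hitting the API.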

Normalize data for analytics

Analytics pipelines usually need stable keys, predictable nesting, and consistent scalar types. Run a utility before writing to a warehouse, event queue, or reporting table so downstream queries do less defensive parsing.
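To illustrate the kind of normalization such a step can own, here is a local sketch (hypothetical helper code, not a Forge Json utility) that stabilizes key names and coerces numeric strings before a warehouse write:

```python
import re


def snake_case(key: str) -> str:
    """Convert camelCase or kebab-case keys to snake_case."""
    key = key.replace("-", "_")
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", key).lower()


def normalize_record(record: dict) -> dict:
    """Produce stable keys and consistent scalar types for an analytics row."""
    out = {}
    for key, value in record.items():
        # Coerce purely numeric strings so downstream queries see real numbers.
        if isinstance(value, str) and re.fullmatch(r"-?\d+(\.\d+)?", value):
            value = float(value) if "." in value else int(value)
        out[snake_case(key)] = value
    return out
```

Identifiers like "A-100" are left as strings because they do not match the numeric pattern; only unambiguous numbers are coerced.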

Preprocess JSON for LLMs

Before sending API data to an LLM, remove fields the prompt does not need, normalize repeated objects, and trim noisy nulls. Smaller and more regular JSON keeps prompts easier to audit and cheaper to run.
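A hypothetical pre-prompt pass along those lines, done locally here for illustration (field allow-listing at the top level plus recursive null removal):

```python
def prune_for_prompt(data, allowed_keys=None):
    """Drop nulls everywhere; when an allow-list is given, keep only those top-level fields."""
    if isinstance(data, dict):
        return {
            k: prune_for_prompt(v)
            for k, v in data.items()
            if v is not None and (allowed_keys is None or k in allowed_keys)
        }
    if isinstance(data, list):
        return [prune_for_prompt(v) for v in data if v is not None]
    return data
```

The allow-list applies only to the top level in this sketch; nested objects keep all non-null fields.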

Run a saved pipeline by pipelineId

Use saved pipelines when your workflow requires multiple steps or custom logic composed in the pipeline editor.

Saved pipelines use the pipeline id in the URL. Inputs are mapped to pipeline input nodes by name, or to the single input node when the pipeline has only one input.

curl -X POST "$BASE_URL/api/v1/pipelines/pl_123/run" \
  -H "Authorization: Bearer fje_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"inputs":{"default":{"orders":[{"id":"A-100","total":"19.99"}]}}}'
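In code, mapping inputs to named input nodes can look like this sketch; the `pl_123` id, the `default` node name, and the sample order come from the curl example above, and the base URL is a placeholder:

```python
def pipeline_run_url(base_url: str, pipeline_id: str) -> str:
    """Build the run endpoint for a saved pipeline id."""
    return f"{base_url}/api/v1/pipelines/{pipeline_id}/run"


def build_pipeline_payload(inputs_by_node: dict) -> dict:
    """Map each input node name to its JSON payload, as the run endpoint expects."""
    return {"inputs": dict(inputs_by_node)}


# Single-input pipeline: map to the one node, named "default" in the example above.
url = pipeline_run_url("https://example.com", "pl_123")
payload = build_pipeline_payload(
    {"default": {"orders": [{"id": "A-100", "total": "19.99"}]}}
)
```

For a multi-input pipeline, pass one entry per input node name in the same dictionary.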

Production guidance

Keep API keys on the server, set request timeouts around external calls, and log the utility id or pipeline id with each job. Validate payload size before sending large documents, and store the transformed output only after checking the API response.
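A sketch of the size and response checks, assuming the response envelope shown earlier and a hypothetical 1 MB limit (consult /openapi for the real limits):

```python
import json

MAX_PAYLOAD_BYTES = 1_000_000  # assumed local limit; check /openapi for actual limits


def payload_size_ok(payload: dict) -> bool:
    """Validate serialized size before sending large documents."""
    return len(json.dumps(payload).encode("utf-8")) <= MAX_PAYLOAD_BYTES


def store_if_ok(response_body: dict, store) -> bool:
    """Persist transformed output only after checking the API response."""
    if response_body.get("success") and "output" in response_body.get("data", {}):
        store(response_body["data"]["output"])
        return True
    return False
```

Pairing these checks with a request timeout and a log line carrying the utility or pipeline id keeps failed jobs easy to trace.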

Endpoint reference

The full API reference is available at /openapi. Use it for authentication, limits, response schemas, and current endpoint details.