Normalize JSON Records Online (Free & Fast)

Make every JSON record share the same shape — fill missing keys, drop extras, all in one pass.

Paste your JSON → Get results instantly (no signup)

⚡ Instant results · No signup · Runs in your browser
Try examples:

Normalize this JSON structure and fill missing keys with null.

[
  {
    "a": 1,
    "b": 2
  },
  {
    "a": 3,
    "c": 4
  },
  {
    "b": 5,
    "c": 6
  }
]
Output
[
  {
    "a": 1,
    "b": 2,
    "c": null
  },
  {
    "a": 3,
    "b": null,
    "c": 4
  },
  {
    "a": null,
    "b": 5,
    "c": 6
  }
]

Love the result?

Use this exact pipeline in your app, backend, or LLM workflow.

No setup needed. Works with curl, Node, Python.

Uses example data. For edited input, copy from the playground.

Read integration guide

Works with:

  • Mixed-shape arrays from multiple API endpoints
  • User-submitted JSON with optional fields
  • Pre-export normalization for analytics or CSV

Example: input → output

Input
[
  {
    "a": 1,
    "b": 2
  },
  {
    "a": 3,
    "c": 4
  },
  {
    "b": 5,
    "c": 6
  }
]
Output
[
  {
    "a": 1,
    "b": 2,
    "c": null
  },
  {
    "a": 3,
    "b": null,
    "c": 4
  },
  {
    "a": null,
    "b": 5,
    "c": 6
  }
]

About this tool

Normalizing JSON enforces a consistent shape across an array of records. If the input is [{a, b}, {a, c}, {b, c, d}] and the target shape is {a, b, c}, normalization fills in the missing keys, removes the extras, and (optionally) orders the keys consistently. The output is an array where every record has the same key set — predictable, type-stable, and safe to feed into anything downstream that assumes uniform records.
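The core fill operation can be sketched in a few lines of Python (an illustration of the technique, not this tool's implementation):

```python
from typing import Any

def normalize(records: list[dict[str, Any]], fill: Any = None) -> list[dict[str, Any]]:
    """Give every record the union of all keys, filling gaps with `fill`."""
    keys: list[str] = []
    for record in records:
        for k in record:
            if k not in keys:
                keys.append(k)  # preserve first-seen key order
    return [{k: record.get(k, fill) for k in keys} for record in records]

data = [{"a": 1, "b": 2}, {"a": 3, "c": 4}, {"b": 5, "c": 6}]
print(normalize(data))
# → [{'a': 1, 'b': 2, 'c': None}, {'a': 3, 'b': None, 'c': 4}, {'a': None, 'b': 5, 'c': 6}]
```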

The need for this comes up whenever you ingest data from sources that don't all return the same shape. Multi-vendor API integrations, user-submitted form responses, partial database dumps, and merged datasets from different teams all produce mixed-shape arrays. Most downstream tools — CSV exports, table renderers, type inference, schema validators — assume the records share a key set. Without normalization, you discover the mismatch via cryptic errors three steps later.

You can normalize against a static target schema or let the tool infer one from the union of all keys present in the input. The first is right when you have a known contract; the second is right when you're cleaning up exploratory data. Default values for missing keys can be null, an empty string, or any literal you choose. Extra keys get dropped, kept, or moved to a metadata bucket depending on the configuration.
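A hedged sketch of the static-schema strategy, with a configurable default and an optional metadata bucket for extras (`normalize_to_schema` and `_extra` are illustrative names, not this tool's API):

```python
from typing import Any, Optional

def normalize_to_schema(record: dict[str, Any], schema: list[str],
                        default: Any = None,
                        extras_key: Optional[str] = None) -> dict[str, Any]:
    """Normalize one record against a fixed target schema.

    Keys in `schema` are filled with `default` when absent; keys outside
    the schema are dropped, or collected under `extras_key` if one is given.
    """
    out = {k: record.get(k, default) for k in schema}
    if extras_key is not None:
        out[extras_key] = {k: v for k, v in record.items() if k not in schema}
    return out

row = {"a": 1, "d": 9}
print(normalize_to_schema(row, ["a", "b", "c"]))  # extra key "d" is dropped
print(normalize_to_schema(row, ["a", "b", "c"], default="", extras_key="_extra"))
```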

Like the other tools here, this runs entirely in your browser. There's no upload step, no rate limit, and no schema service to authenticate against — just paste the JSON, pick a strategy, and copy the normalized output.

Frequently asked questions

What does normalizing JSON mean?

It enforces a consistent shape across an array of records. Missing fields get filled in with defaults, extra fields get removed (or moved aside), and key order can be made consistent across every record.

When should I normalize my JSON?

Whenever you have an array of records that don't all share the same keys — common with multi-vendor APIs, partial dumps, merged datasets, or user-submitted data. Normalize before exporting to CSV, validating against a schema, or feeding records to a downstream tool that assumes uniform shape.

How does the tool decide the target shape?

Two options: provide a static target schema (right when you have a known contract), or let the tool infer one from the union of all keys present (right for exploratory data).

What happens to extra fields not in the target schema?

Configurable: drop them silently, keep them as-is, or move them to a metadata bucket. The default is 'drop' so the output strictly matches the target shape.

Can I choose null, an empty string, or a custom default for missing fields?

Yes. The fill-value is configurable per run. Pick null when downstream code distinguishes 'absent' from 'present-but-blank', empty string when feeding spreadsheet tools, or any literal you choose.

Common next steps

Advanced usage (optional)

Normalize

v1.0.0
Schema
array · object · destructive

Description

Ensure consistent structure across array items — fill missing keys, remove extra keys, sort keys, enforce type schemas, and require specific fields. Essential for preparing irregular data for tabular display or database insertion.

How It Works

The utility examines all items in an array, determines the union (or intersection) of their keys, and normalizes each item to that same structure.
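A minimal sketch of how the two candidate shapes — union and intersection of the key sets — can be computed (illustrative, not the tool's source):

```python
def key_shapes(items):
    """Return (union, intersection) of the key sets across all items."""
    union = set()
    inter = None
    for item in items:
        union |= item.keys()
        inter = item.keys() if inter is None else inter & item.keys()
    return union, set(inter or [])

items = [{"a": 1, "b": 2}, {"a": 3, "c": 4}]
print(key_shapes(items))  # → ({'a', 'b', 'c'}, {'a'})
```

Fill Missing Keys normalizes every item to the union; Remove Extra Keys trims every item down to the intersection.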

Fill Missing Keys

Items missing keys found in other items get those keys added with a configurable fill value.

Remove Extra Keys

Keep only keys that appear in every item (the intersection); any key absent from at least one item is removed from all of them.

Type Schema Enforcement

Coerce field values to specified types. For example, ensure age is always a number even if some records have it as a string.
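For example, a simple coercion pass might look like this in Python (an illustrative sketch; note that coercing to JSON's number type yields 30.0 here rather than the integer 30):

```python
# Map of schema type names to Python casts — illustrative, not the tool's internals.
CASTS = {"string": str, "number": float, "boolean": bool}

def coerce(record, type_schema):
    """Coerce fields to the types named in `type_schema`; leave others as-is."""
    out = dict(record)
    for key, type_name in type_schema.items():
        if key in out and out[key] is not None:
            try:
                out[key] = CASTS[type_name](out[key])
            except (ValueError, TypeError):
                pass  # leave uncoercible values unchanged
    return out

print(coerce({"age": "30", "active": 1}, {"age": "number", "active": "boolean"}))
# → {'age': 30.0, 'active': True}
```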

Required Fields

Specify fields that must exist in every item. Missing required fields are filled with the fill value.

Configuration

| Field | Type | Default | Description |
|---|---|---|---|
| Target Paths | path-picker | `[]` | Select array paths to normalize (empty = normalize root array) |
| Fill Missing Keys | boolean | `true` | Add keys from other items that are missing in each item |
| Fill Value | enum | `null` | Value for missing keys: `null`, `empty-string`, `zero`, or `false` |
| Remove Extra Keys | boolean | `false` | Remove keys not present in ALL items (keep only the intersection) |
| Sort Keys | boolean | `false` | Sort object keys alphabetically within each item |
| Type Schema | json | `{}` | Map of key → expected type for coercion (string, number, boolean). Example: `{"age": "number", "active": "boolean"}` |
| Required Fields | json | `[]` | Array of keys that must be present in every item; missing keys are filled with the Fill Value. Example: `["id", "name"]` |

Use Cases

Data Consistency

  • API responses: Normalize inconsistent API responses before processing
  • CSV preparation: Ensure all objects have the same keys before CSV export
  • Table display: Guarantee every row has the same columns for table rendering

Schema Enforcement

  • Type coercion: Fix string numbers ("30" → 30) across the dataset
  • Required fields: Ensure critical fields like id and name exist in every record
  • Key ordering: Sort keys alphabetically for consistent structure

Data Quality

  • Gap detection: Fill missing fields with null to make gaps visible
  • Intersection: Strip non-standard fields to keep only common structure
  • Import normalization: Standardize records from different sources before merging


Examples

AI Prompt
Normalize this JSON structure and fill missing keys with null.
[
  {
    "a": 1,
    "b": 2
  },
  {
    "a": 3,
    "c": 4
  },
  {
    "b": 5,
    "c": 6
  }
]
Output
[
  {
    "a": 1,
    "b": 2,
    "c": null
  },
  {
    "a": 3,
    "b": null,
    "c": 4
  },
  {
    "a": null,
    "b": 5,
    "c": 6
  }
]
Config
Target Paths
all
Fill Missing Keys
ON
Fill Value
null
Remove Extra Keys
OFF
Sort Keys
OFF

API Usage

POST /api/v1/utilities/schema.normalize
Example:
curl -X POST https://your-domain.com/api/v1/utilities/schema.normalize \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs":{"primary":[{"a":1,"b":2},{"a":3,"c":4},{"b":5,"c":6}]},"config":{"targetPaths":[],"fillMissing":true,"fillValue":"null","removeExtra":false,"sortKeys":false}}'
Response
[
  {
    "a": 1,
    "b": 2,
    "c": null
  },
  {
    "a": 3,
    "b": null,
    "c": 4
  },
  {
    "a": null,
    "b": 5,
    "c": 6
  }
]
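The same request can be made from Python using only the standard library. As in the curl example, the domain and API key are placeholders; the commented-out `urlopen` call is what actually sends the request:

```python
import json
from urllib import request

API_URL = "https://your-domain.com/api/v1/utilities/schema.normalize"  # placeholder

payload = {
    "inputs": {"primary": [{"a": 1, "b": 2}, {"a": 3, "c": 4}, {"b": 5, "c": 6}]},
    "config": {"targetPaths": [], "fillMissing": True, "fillValue": "null",
               "removeExtra": False, "sortKeys": False},
}

req = request.Request(
    API_URL,
    data=json.dumps(payload).encode(),
    headers={"Authorization": "Bearer YOUR_API_KEY",  # placeholder
             "Content-Type": "application/json"},
)
# with request.urlopen(req) as resp:
#     records = json.load(resp)  # same normalized array as the response above
```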