# keyNormalization

Reports object keys that are not normalized using Unicode normalization forms.

✅ This rule is included in the `json` logical preset.
Unicode characters can sometimes have multiple representations that look identical but are technically different character sequences. For example, the character “é” can be represented as a single code point (U+00E9) or as an “e” followed by a combining accent (U+0065 + U+0301). This can lead to confusion, comparison issues, and unexpected behavior when working with JSON data.
Using normalized Unicode ensures consistent representation of text, which is important for key comparison, sorting, and searching operations. When keys are properly normalized, operations like key lookups and equality checks will work as expected across different systems and platforms.
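The difference between the two representations can be seen directly in JavaScript, where `String.prototype.normalize` applies these normalization forms (a small illustration of the underlying problem, not part of the rule itself):

```javascript
// Two visually identical strings built from different code point sequences.
const composed = "caf\u00e9";    // "café" as the single code point U+00E9
const decomposed = "cafe\u0301"; // "café" as "e" (U+0065) + combining acute accent (U+0301)

console.log(composed === decomposed);                  // false: different sequences
console.log(composed === decomposed.normalize("NFC")); // true: NFC composes U+0065 + U+0301
console.log(composed.length, decomposed.length);       // 4 5
```

Because plain string comparison operates on code point sequences, two keys that render identically can still be distinct, which is exactly the confusion this rule prevents.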
## Examples

Keys that are not in the configured normalization form (`"NFC"` by default) are reported:

```json
{ "cafe\u0301": "value" }
```

```json
{ "cafè": "espresso", "cafe\u0300": "latte" }
```

Keys already in the configured form pass:

```json
{ "café": "value" }
```

```json
{ "résumé": "document", "naïve": "approach", "piñata": "party" }
```

```json
{ "simple": "value", "no_special_chars": true }
```
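The check the rule performs can be sketched as a comparison of each key with its normalized form (`findUnnormalizedKeys` is a hypothetical helper for illustration, not the linter's actual implementation):

```javascript
// Hypothetical sketch: a key is flagged when it differs from its
// normalized form under the configured normalization form.
function findUnnormalizedKeys(obj, form = "NFC") {
  return Object.keys(obj).filter((key) => key !== key.normalize(form));
}

console.log(findUnnormalizedKeys({ "cafe\u0301": "value" }).length); // 1 (decomposed key flagged)
console.log(findUnnormalizedKeys({ "caf\u00e9": "value" }).length);  // 0 (already NFC)
```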
## Options

Type: `"NFC" | "NFD" | "NFKC" | "NFKD"`

Default: `"NFC"`
Unicode normalization form to use when checking keys.
Each normalization form has specific characteristics:
- NFC: Canonical Decomposition followed by Canonical Composition (default)
- NFD: Canonical Decomposition
- NFKC: Compatibility Decomposition followed by Canonical Composition
- NFKD: Compatibility Decomposition
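The canonical forms (NFC/NFD) only compose or decompose accents, while the compatibility forms (NFKC/NFKD) additionally fold characters such as ligatures to their plain equivalents. A quick illustration in JavaScript:

```javascript
// U+FB01 is the single-character "fi" ligature.
const ligature = "\ufb01";

console.log(ligature.normalize("NFC") === ligature); // true: no canonical decomposition exists
console.log(ligature.normalize("NFKC"));             // "fi": compatibility folding applies
```

Prefer NFC or NFD unless you specifically want visually distinct but semantically equivalent characters (ligatures, full-width forms) treated as the same key.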
With `form` set to `"NFD"`, decomposed keys pass and composed keys are reported:

```js
json.rules({
  keyNormalization: {
    form: "NFD",
  },
});
```

Incorrect:

```json
{ "café": "value" }
```

Correct:

```json
{ "cafe\u0301": "value" }
```
## When Not To Use It

If your project intentionally uses a mix of Unicode normalization forms, such as systems that manually manipulate strings, this rule might not reflect your project's needs. Consider refactoring to use safer string representations.
## Further Reading

- Unicode Standard Annex #15: Unicode Normalization Forms

## Equivalents in Other Linters

- ESLint: `json/no-unnormalized-keys`