A simple translation of keywords seems straightforward; I wonder why it's not standard.
# def two_sum(arr: list[int], target: int) -> list[int]:
펀크 투섬(아래이: 목록[정수], 타개트: 정수) -> 목록[정수]:
#     n = len(arr)
    ㄴ = 길이(아래이)
#     start, end = 0, n - 1
    시작, 끝 = 0, ㄴ - 1
#     while start < end:
    동안 시작 < 끝:
Code would be more compact, allowing things like more descriptive keywords, e.g. AbstractVerifiedIdentityAccountFactory vs 실명인증계정생성, but we'd lose out on the nice upper/lowercase distinction.

Wonderful! What a cool idea. For anyone interested, you can learn the whole of Hangul in an afternoon; it's cleverly designed to be very logical and has some handy mnemonics: https://korean.stackexchange.com/a/213
That is deep knowledge that even native Koreans would not know. I will add this site as a reference on GitHub. I am glad to have you as a supporter!
I don't know Korean at all, but this looks cool and a fun project. I'm curious whether this reduces typing or has any other benefits from being in Hangul vs Latin?
Korean doesn’t reduce typing compared to English, in my experience. What looks like a “character” is actually a syllable block called “eumjeol” that’s made up of consonants (moeum) and vowels (jaeum). You can’t have a vowel-only syllable either, so you always have to pair it with a null consonant no matter what (which kinda looks like a zero: ㅇ), and while nouns can be much more concise compared to English, verbs can get verbose.
The main benefit of Korean actually comes from the fact that the language fits neatly onto a standard keyboard’s alphabet keys, laid out in such a way that you can type ridiculously fast. The consonant letters are always on the left half and the vowels are on the right half of the keyboard. This makes it extremely easy to train muscle memory, because you’re mostly alternating keystrokes between your left hand and your right hand.
Anecdotally, I feel like when I’m typing in English each half of my brain needs to coordinate more than when I’m typing in Korean: the right brain only needs to remember the consonant positions for my left hand, and the left brain only needs to remember the vowel positions.
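If you want to see that left/right split concretely, here is a small sketch in Python of the standard Dubeolsik (two-set) layout. The key-to-jamo table is written from memory and only covers the unshifted keys, and the variable names are mine, so treat it as illustrative rather than authoritative.

# Standard Dubeolsik QWERTY mapping (unshifted keys only), written from memory.
DUBEOLSIK = {
    "q": "ㅂ", "w": "ㅈ", "e": "ㄷ", "r": "ㄱ", "t": "ㅅ",
    "a": "ㅁ", "s": "ㄴ", "d": "ㅇ", "f": "ㄹ", "g": "ㅎ",
    "z": "ㅋ", "x": "ㅌ", "c": "ㅊ", "v": "ㅍ",
    "y": "ㅛ", "u": "ㅕ", "i": "ㅑ", "o": "ㅐ", "p": "ㅔ",
    "h": "ㅗ", "j": "ㅓ", "k": "ㅏ", "l": "ㅣ",
    "b": "ㅠ", "n": "ㅜ", "m": "ㅡ",
}
LEFT_HAND = set("qwertasdfgzxcv")
CONSONANTS = set("ㅂㅈㄷㄱㅅㅁㄴㅇㄹㅎㅋㅌㅊㅍ")
# Every left-hand key maps to a consonant and every right-hand key to a vowel.
print(all((key in LEFT_HAND) == (jamo in CONSONANTS) for key, jamo in DUBEOLSIK.items()))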
만나서 반가워요! (Nice to meet you!)
What you said is mostly right, and I did not know that about typing in Korean, with the left-hand and right-hand split. Btw, it's consonant (jaeum) and vowel (moeum), not the other way around. Experience-wise, what you described would be accurate.
Thanks! One thing that motivated me was curiosity about prompt efficiency in the AI era. Hangul is beautifully dense — a single syllable block packs initial consonant + vowel + final consonant into one character. I wondered if Korean-keyword code might produce shorter prompts for LLMs.
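As a minimal illustration of that density (plain Python, nothing specific to Han): Unicode stores each syllable block as a single code point that decomposes into its three jamo.

import unicodedata

block = "한"                                # one syllable block
jamo = unicodedata.normalize("NFD", block)  # decompose into initial + vowel + final
print(len(block), len(jamo))                # 1 visible character vs 3 jamo
print([unicodedata.name(ch) for ch in jamo])
# ['HANGUL CHOSEONG HIEUH', 'HANGUL JUNGSEONG A', 'HANGUL JONGSEONG NIEUN']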
I actually tested this with GPT-4o's tokenizer, and the result was the opposite — Korean keywords average 2-3 tokens vs 1 for English. A Fibonacci program in Han takes 88 tokens vs 54 in Python.
The reason comes down to how LLM tokenizers work. They use BPE (Byte Pair Encoding), which starts with raw bytes and repeatedly merges the most frequent pairs into single tokens. Since training data is predominantly English, words like `function` and `return` appear billions of times and get merged into single tokens.
Korean text appears far less frequently, so the tokenizer doesn't learn to merge Hangul syllables — it falls back to splitting each character into 2-3 byte-level tokens instead.
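For anyone who wants to check the counts themselves, a minimal sketch along these lines works. It assumes the tiktoken package is installed (o200k_base is the GPT-4o encoding), and the two Hangul words are just ones that appear earlier in this thread.

import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # GPT-4o's tokenizer
for word in ["function", "while", "함수", "동안"]:
    # prints each word with its token count; the thread above reports
    # roughly 1 token for the English keywords and 2-3 for the Hangul ones
    print(word, len(enc.encode(word)))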
It's a tokenizer training bias, not a property of Hangul itself. If a tokenizer were trained on a Korean-heavy corpus, `함수` could absolutely become a single token too.
So no efficiency benefit today. But it was a fun exploration, and Korean speakers can read the code like natural language. It could also be a fun way for people learning Korean to practice reading Hangul in a different context — every keyword is a real Korean word with meaning.
I don't know how to read Hangul (I know the general idea of how the characters are composed). To me, just looking at the examples, it doesn't seem as obvious what the structure of the code is, compared to Latin letters and punctuation. Actually, most punctuation looked OK, but the first couple of examples used arrays, and [ and ] seemed to just blend in with the identifiers wherever they appeared. I'm not sure how distinct they look once you're familiar with Hangul characters. I'm sure it's also nothing that colour syntax highlighting wouldn't make easier.
Fair point that [ ] can blend in.
For Korean readers the character systems look quite different, but I can see how it's hard to parse visually without familiarity.
As you said, syntax highlighting helps a lot — there's a colored screenshot at the top of the README showing how it looks in practice.
Very interesting...
I had a similar idea: train an LLM in Serbian and even create a new encoding, https://github.com/topce/YUTF-8, inspired by YUSCII. I did not have the time and money ;-) Great that you succeeded. The idea is that if you train on Serbian text encoded in YUTF-8 (not UTF-8), prompts in Serbian would use fewer tokens than in English; also, Serbian Cyrillic characters are 1 byte in YUTF-8 instead of 2 in UTF-8. Serbian is phonetic (we never ask how you spell something) and has both Latin and Cyrillic alphabets.
Really interesting approach — attacking token efficiency at the encoding level is more fundamental than what I did.
Even without retraining BPE from scratch, starting with YUTF-8 and measuring how existing tokenizers handle it would already be a worthwhile experiment.
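If it helps, here is a rough sketch of the byte-level half of that argument in plain Python; it does not use the YUTF-8 repo itself, and the Serbian word is just an example I picked.

word = "програмирање"                  # Serbian Cyrillic, purely illustrative
utf8_len = len(word.encode("utf-8"))   # Cyrillic letters are 2 bytes each in UTF-8
one_byte_len = len(word)               # letters == bytes under a single-byte code page
print(utf8_len, one_byte_len)          # UTF-8 uses roughly twice the bytes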
Hope you find the time to build it, good luck!