The Unicode system was created to provide a single character-set standard that includes every reasonable character from all the world's writing systems.
Character encoding matters because every device must be able to display the same information. A custom character encoding scheme may work perfectly on one computer, but if you send that text to someone else, their machine cannot make sense of it unless it understands the same encoding scheme.
ASCII (American Standard Code for Information Interchange) was the first widespread encoding scheme. However, it defined only 128 characters. That limit is fine for the most common English characters, numbers, and punctuation, but rather limiting for the rest of the world.
As a result, other countries needed new encoding schemes that could represent their own characters too.
In the end, everyone started to create their own encoding schemes, and things started to get a little bit confusing.
So, a new globally recognized character encoding was needed. The objective of Unicode is to unify all the different encoding schemes so that the confusion between computers can be reduced as much as possible.
These are the several character encoding forms:
UTF-8: Uses 8 bits (1 byte) to encode English (ASCII) characters and a sequence of two to four bytes to encode other characters. This encoding form is the one mainly used in email systems and on the internet.
UTF-16: Uses 16 bits (2 bytes) to encode the most commonly used characters; less common characters take two 16-bit units.
UTF-32: Uses 32 bits (4 bytes) to encode every character.
Because a 16-bit number is too small to represent all the characters directly, the wider 32-bit form was introduced. UTF-32 is capable of representing every Unicode character as a single number.
Note: UTF stands for Unicode Transformation Format.
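The size differences between these encoding forms can be observed directly in Java through the Charset API. A minimal sketch (the class name `EncodingSizes` is just illustrative, and the `"UTF-32"` charset name, while present in common JDKs, is looked up by name rather than guaranteed by `StandardCharsets`):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingSizes {
    public static void main(String[] args) {
        String ascii = "A";    // plain ASCII letter
        String accented = "é"; // above the ASCII range

        // UTF-8: 1 byte for ASCII, more bytes for other characters
        System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length);    // 1
        System.out.println(accented.getBytes(StandardCharsets.UTF_8).length); // 2

        // UTF-16 (big-endian, no byte-order mark): 2 bytes for common characters
        System.out.println(ascii.getBytes(StandardCharsets.UTF_16BE).length); // 2

        // UTF-32: always 4 bytes per character
        System.out.println(ascii.getBytes(Charset.forName("UTF-32")).length); // 4
    }
}
```

The `UTF_16BE` variant is used here because the plain `"UTF-16"` charset prepends a 2-byte byte-order mark when encoding, which would obscure the per-character size.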
How Does Java Use Unicode?
When Java was invented, the Unicode standard defined values for a much smaller set of characters, so 16 bits seemed sufficient, and Java was designed to use UTF-16.
Note: A thing to remember is that a single char value can no longer represent every Unicode character; characters outside the 16-bit range require two char values, known as a surrogate pair.
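This limitation can be seen with a character above U+FFFF. A short sketch (the class name `SurrogateDemo` is illustrative; U+1F600 is just one example of a supplementary character):

```java
public class SurrogateDemo {
    public static void main(String[] args) {
        // U+1F600 lies outside the 16-bit char range,
        // so Java stores it as a surrogate pair of two char values
        String s = "\uD83D\uDE00";

        System.out.println(s.length());                      // 2 char units
        System.out.println(s.codePointCount(0, s.length())); // 1 actual character
        System.out.println(Integer.toHexString(s.codePointAt(0))); // 1f600
    }
}
```

Code that iterates over a String one char at a time can therefore split such characters in half; the code-point methods on String and Character handle them correctly.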