c++ - How to correctly determine character encoding of text files?

Here is my situation: I need to correctly determine which character encoding is used for a given text file. Ideally, the function should return one of the following types:

enum CHARACTER_ENCODING
{
    ANSI,
    Unicode,
    Unicode_big_endian,
    UTF8_with_BOM,
    UTF8_without_BOM
};

So far, I can correctly tell whether a text file is Unicode, Unicode big endian, or UTF-8 with BOM by calling the following function. It also correctly reports ANSI, as long as the given text file is not actually UTF-8 without BOM. The problem is that when the text file is UTF-8 without BOM, the function mistakenly classifies it as an ANSI file.

CHARACTER_ENCODING get_text_file_encoding(const char *filename)
{
    CHARACTER_ENCODING encoding;

    unsigned char uniTxt[] = {0xFF, 0xFE};// Unicode file header
    unsigned char endianTxt[] = {0xFE, 0xFF};// Unicode big endian file header
    unsigned char utf8Txt[] = {0xEF, 0xBB};// first two bytes of the UTF-8 BOM (EF BB BF)

    DWORD dwBytesRead = 0;
    HANDLE hFile = CreateFile(filename, GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
    {
        // the file was never opened, so there is no handle to close here
        throw runtime_error("cannot open file");
    }
    BYTE *lpHeader = new BYTE[2];
    ReadFile(hFile, lpHeader, 2, &dwBytesRead, NULL);
    CloseHandle(hFile);

    if (lpHeader[0] == uniTxt[0] && lpHeader[1] == uniTxt[1])// Unicode file
        encoding = CHARACTER_ENCODING::Unicode;
    else if (lpHeader[0] == endianTxt[0] && lpHeader[1] == endianTxt[1])//  Unicode big endian file
        encoding = CHARACTER_ENCODING::Unicode_big_endian;
    else if (lpHeader[0] == utf8Txt[0] && lpHeader[1] == utf8Txt[1])// UTF-8 file
        encoding = CHARACTER_ENCODING::UTF8_with_BOM;
    else
        encoding = CHARACTER_ENCODING::ANSI;   // no recognized BOM: assume ANSI

    delete []lpHeader;
    return encoding;
}

This problem has been blocking me for a long time and I still cannot find a good solution. Any hint would be appreciated.


1 Answer


For starters, there's no such physical encoding as "Unicode". What you probably mean by this is UTF-16. Secondly, any file is valid in "ANSI", or in any other single-byte encoding for that matter. The only thing you can do is check the candidates in an order that is most likely to rule out invalid matches early.

You should check, in this order:

  • Is there a UTF-16 BOM at the beginning? Then it's probably UTF-16. Use the BOM as an indicator of whether it's big endian or little endian, then check whether the rest of the file conforms.
  • Is there a UTF-8 BOM at the beginning? Then it's probably UTF-8. Check the rest of the file.
  • If the above didn't result in a positive match, check whether the entire file is valid UTF-8. If it is, it's probably UTF-8 (a sketch of this whole procedure follows the list).
  • If the above didn't result in a positive match, it's probably ANSI.
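
Below is a minimal sketch of that procedure in portable C++, using standard iostreams instead of the Win32 API from the question. The names Encoding, is_valid_utf8 and detect_encoding are made up for illustration, and the UTF-8 check is deliberately simplified: it validates lead/continuation byte structure only and does not reject overlong sequences or surrogate code points.

#include <cstddef>
#include <fstream>
#include <iterator>
#include <stdexcept>
#include <vector>

enum class Encoding { Ansi, Utf16LittleEndian, Utf16BigEndian, Utf8WithBom, Utf8WithoutBom };

// Returns true if the buffer is structurally valid UTF-8 (lead/continuation
// bytes only; overlong forms and surrogates are not rejected here).
bool is_valid_utf8(const std::vector<unsigned char> &buf)
{
    std::size_t i = 0;
    while (i < buf.size())
    {
        unsigned char c = buf[i];
        std::size_t extra;
        if (c <= 0x7F)               extra = 0;    // ASCII
        else if ((c & 0xE0) == 0xC0) extra = 1;    // 2-byte sequence
        else if ((c & 0xF0) == 0xE0) extra = 2;    // 3-byte sequence
        else if ((c & 0xF8) == 0xF0) extra = 3;    // 4-byte sequence
        else                         return false; // invalid lead byte
        if (i + extra >= buf.size()) return false; // truncated sequence
        for (std::size_t k = 1; k <= extra; ++k)
            if ((buf[i + k] & 0xC0) != 0x80) return false; // bad continuation byte
        i += extra + 1;
    }
    return true;
}

Encoding detect_encoding(const char *filename)
{
    std::ifstream file(filename, std::ios::binary);
    if (!file)
        throw std::runtime_error("cannot open file");

    std::vector<unsigned char> buf((std::istreambuf_iterator<char>(file)),
                                   std::istreambuf_iterator<char>());

    // 1. UTF-16 BOM: FF FE = little endian, FE FF = big endian.
    if (buf.size() >= 2 && buf[0] == 0xFF && buf[1] == 0xFE)
        return Encoding::Utf16LittleEndian;
    if (buf.size() >= 2 && buf[0] == 0xFE && buf[1] == 0xFF)
        return Encoding::Utf16BigEndian;

    // 2. The UTF-8 BOM is three bytes: EF BB BF.
    if (buf.size() >= 3 && buf[0] == 0xEF && buf[1] == 0xBB && buf[2] == 0xBF)
        return Encoding::Utf8WithBom;

    // 3. No BOM: if the whole file is valid UTF-8, assume UTF-8.
    if (is_valid_utf8(buf))
        return Encoding::Utf8WithoutBom;

    // 4. Otherwise fall back to ANSI (whatever the current code page is).
    return Encoding::Ansi;
}

Note that a pure-ASCII file is also valid UTF-8, so step 3 reports it as UTF-8 without BOM; that is usually harmless, because the two encodings are byte-identical for ASCII content.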

If you expect UTF-16 files without a BOM as well (it's possible for, for example, XML files, which specify the encoding in the XML declaration), then you have to shove that rule in there too. Note that any of the above may produce a false positive, falsely identifying an ANSI file as UTF-* (though it's unlikely). Ideally you should always have metadata that tells you what encoding a file is in; detecting it after the fact is not possible with 100% accuracy.
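
If you do need such a rule, one rough heuristic (my own assumption, not part of the answer above) is to look at where the zero bytes fall: in UTF-16 text that is mostly ASCII, every other byte is 0x00, at odd offsets for little endian and at even offsets for big endian. A sketch:

#include <cstddef>
#include <vector>

enum class Utf16Guess { NotUtf16, LittleEndian, BigEndian };

// Heuristic only: mostly-ASCII UTF-16 text has a zero byte in every other
// position. Arbitrary binary data can still fool this check.
Utf16Guess guess_bomless_utf16(const std::vector<unsigned char> &buf)
{
    if (buf.size() < 4 || buf.size() % 2 != 0)
        return Utf16Guess::NotUtf16;

    std::size_t zerosEven = 0, zerosOdd = 0;
    for (std::size_t i = 0; i < buf.size(); ++i)
    {
        if (buf[i] == 0x00)
        {
            if (i % 2 == 0) ++zerosEven;
            else            ++zerosOdd;
        }
    }

    const std::size_t pairs = buf.size() / 2;
    if (zerosOdd  > pairs * 9 / 10 && zerosEven == 0) return Utf16Guess::LittleEndian; // ASCII low byte first
    if (zerosEven > pairs * 9 / 10 && zerosOdd  == 0) return Utf16Guess::BigEndian;    // zero high byte first
    return Utf16Guess::NotUtf16;
}

Treat the result as a guess, not a verdict; it only works for text that is predominantly ASCII, such as an XML declaration.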

