I am working with a remote application that seems to do some magic with the encoding. The application renders clear responses (which I'll refer to as True and False), depending on user input. I know two valid values that render 'True'; all others should render 'False'.
What I found (accidentally) interesting is that submitting a corrupted value also leads to 'True'.
Example input:
USER10 //gives True
USER11 //gives True
USER12 //gives False
USER.. //gives False
OTHERTHING //gives False
So basically only the first two values render a True response.
What I noticed is that USERà±0 (hex: \x55\x53\x45\x52\xC0\xB1\x30) is, surprisingly, also accepted as True.
I checked other hex bytes with no such success. This leads me to conclude that \xC0\xB1 could somehow be translated into 0x31 (= '1').
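If that is what's happening, the arithmetic works out: \xC0\xB1 is an overlong two-byte UTF-8 encoding of U+0031. Here is a minimal sketch (Python, purely illustrative) of what a lenient two-byte UTF-8 decoder would do with those bytes, assuming it skips the overlong-form check that strict decoders perform:

    # Hypothetical sketch of a lenient (non-validating) 2-byte UTF-8 decode.
    def naive_decode_2byte(lead: int, cont: int) -> str:
        """Decode a 2-byte UTF-8 sequence 110xxxxx 10yyyyyy without
        rejecting overlong forms (a strict decoder would refuse this)."""
        code_point = ((lead & 0x1F) << 6) | (cont & 0x3F)
        return chr(code_point)

    # 0xC0 0xB1 decodes to 0x31, i.e. '1':
    print(hex(((0xC0 & 0x1F) << 6) | (0xB1 & 0x3F)))  # 0x31
    print(naive_decode_2byte(0xC0, 0xB1))             # '1'

    # A strict decoder (e.g. Python's own codec) rejects 0xC0 as a lead byte:
    try:
        bytes([0xC0, 0xB1]).decode("utf-8")
    except UnicodeDecodeError as e:
        print(e)

Under such a lenient decoder the raw bytes \x55\x53\x45\x52\xC0\xB1\x30 would come out as "USER10", which matches one of the two valid values.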
My question is: how could this happen? Is the application performing some weird conversion from UTF-16 (or something else) to UTF-8?
I'd appreciate any comments/ideas/hints.