00202 00022 00012 00020 00110 00001 00200 00012 00020 00120 00200 00210 00111 00201 00200 00120 00010 00102!!!
Each set of numbers represents one letter. I put spaces between them so it would be easier to figure the whole thing out. Good luck!!!
*yawn*
Come on, you can do better than that. Took me ten minutes.
I'll do better in Crack the Code III. I just need to develop a new one first.
[ July 04, 2001: Message edited by: MIB ]
Fry: "There's no such thing as two."
Normally we use the decimal (Arabic) system to count and index items. We start at zero and end at nine. When we want to represent ten (for which there is no single digit, obviously), we simply make the rightmost digit a zero, add another digit to the left of the figure, and set that new digit to one. We've moved up a step - a decimal place. To represent greater values, we manipulate the right digit until it reaches nine again, after which the left one gets its value raised by one. This process repeats itself until both digits equal nine - then it starts over, with a third digit set to one and the other two to zero.
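If you'd rather see the mechanics spelled out, here's a quick Python sketch of the idea (just an illustration I cooked up, nothing official) - each digit is worth digit-times-a-power-of-the-base:
[code]
def digits_to_value(digits, base=10):
    """Combine positional digits into a value: reading left to right,
    each step shifts everything one place and adds the next digit."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

# '107' in decimal really means 1*100 + 0*10 + 7*1
print(digits_to_value([1, 0, 7]))  # 107
[/code]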
I've described to you in somewhat clumsy language something which you all (presumably) have already mastered. Everyone knows how it's done, except maybe not everyone understands *why* it's done this way - for instance, why isn't eight the highest value we 'use' before adding a new place, or seven, or five, or any other number, or even a letter? Well, thank the Arabs (and the Indians before them). They could have arbitrarily decided otherwise (there's more to it, but my memory is a haze right now), but what they came up with was based on their daily needs - counting on ten fingers, most likely.
Computers, despite our best efforts, are still mind-bogglingly dumb. They only do what we tell them to, when we tell them to, and how we tell them to (yes, I know Windows might *on occasion* not make it seem that way, but it's all a matter of flawed programming logic). Moreover, they can only 'count' (rather, distinguish between) 0's and 1's - electricity either flows, or it doesn't - the entire operation of a computer is based on that principle. Manipulating tiny currents is all a computer is capable of. A binary system was/is thus a logical representation of the inner workings of a PC.
So, instead of counting from zero to nine, we now count from zero to one before adding a new place - the same method, but less efficient, since we need many more digits to represent the same value than in decimal notation, i.e. '4' becomes '100': a magnification factor of three. It takes roughly 3.3 binary digits for every decimal digit, so large numbers eat up bits quickly - which is why a new CPU architecture is so badly needed and pushed for. A 32-bit processor can deal with binary values (without having to chunk 'em up) that are 65536 times larger than a 16-bit CPU could. But a 64-bit processor can eat values for breakfast that are 4294967296 times larger than those a 32-bit machine could swallow (you'll note that 65536 is the square root of 4294967296 - in other words, doubling the word size squares the number of values the CPU can represent)! Naturally, this is an enormous gain - and a necessity created by the nature of having only 0's and 1's to count with.
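Don't take my word for the figures - here's the arithmetic as a little Python sketch (nothing fancy, just powers of two):
[code]
# An n-bit register can hold 2**n distinct values.
for bits in (16, 32, 64):
    print(bits, "bits ->", 2 ** bits, "values")

# The jump from one word size to the next:
print(2 ** 32 // 2 ** 16)  # 65536
print(2 ** 64 // 2 ** 32)  # 4294967296
print(65536 ** 2)          # 4294967296 - the 'squared' gain

# And the digit 'magnification factor' for a sample value:
n = 1000000
print(len(bin(n)) - 2, "binary digits vs", len(str(n)), "decimal digits")
[/code]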
Almost done with the comp. science lesson (it's my major, you see), so let's get down to the 'cracking' part. We've already learned that *everything* on a computer is internally represented as zeros and ones. That means that the letters, symbols, function keys, and the other funky buttons on your keyboard all have their own (binary) values which get sent to the CPU when you press their respective keys. When you type 'a', you really tell the computer '1100001' (this is almost right; the keyboard sends its own little I.D. code along with the number - but I digress). This is where the ASCII standard peeks around the corner - it's an agreement among the early computer corporations to use decimal values between 0 and 127 for normal characters, extended to 255 for the 'special' ones - the weird thingies you get by holding down the ALT key and typing a number. Thus, the letter 'a' in ASCII equals 97, which in turn equals 1100001 in binary. Get it (you'd better!)?
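Python will happily confirm this for you:
[code]
print(ord('a'))  # 97 - the ASCII value of 'a'
print(bin(97))   # 0b1100001 - the same value in binary
print(chr(97))   # a - and back again
[/code]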
Now we've moved from binary to trinary: instead of counting to one, we go up to two before we add a new place. So, 0 = 0, 1 = 1, 2 = 2, 3 = 10, 4 = 11, 5 = 12, 6 = 20... and so forth. If we want to translate the seemingly illegible gibberish, all we have to do is convert the numbers back out of trinary, find the matching ASCII values, and work those over to plain text. Easy, non?
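Here's the whole pipeline as a Python sketch. Fair warning: I'm assuming each group lands on a printable ASCII code; MIB's puzzle might map the values some other way (alphabet positions, say), so treat this as the method, not the answer:
[code]
def decode(groups):
    """Turn space-separated trinary (base-3) groups into text,
    assuming each group is an ASCII code."""
    return ''.join(chr(int(g, 3)) for g in groups.split())

# 'a' is ASCII 97, which is 10121 in trinary:
print(int('10121', 3))  # 97
print(decode('10121'))  # a
[/code]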
There are probably a billion errors of various sorts in this reply, and by the time I've posted it somebody else will most likely have beaten me to the punch with a far more comprehensible explanation, but... I was bored.
[ July 04, 2001: Message edited by: The_Evil_Lord ]
As for Charles's...
Sick Charles! Sick sick sick sick sick!