Tsukatu wrote:
jean-luc wrote:
I can't think of anything for base 8... base 2 is used in computers (since it can be represented digitally), and by association base 16 (because it's very easy to convert between base 2 and base 16).
We only use base 2 in computing because it becomes far more complicated to manage ten distinct voltage levels. Do you know what the voltage is across a bit storing a 0? It isn't 0 V. Modern computer hardware already has to sense voltage levels; using binary bits just reduces the problem to sensing the presence or absence of (excess) charge. We could just as easily make ternary or quaternary bits, but the potential for error would be much higher, and a number of other issues arise: the logic becomes more complicated and production costs go up.
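To put some toy numbers on that (my own illustrative figures, not real hardware specs): reading a cell means comparing its voltage against thresholds, and the more levels you squeeze into the same voltage swing, the less room each level has before noise pushes it over a threshold.
[code]
# Toy numbers, not real hardware specs: reading a cell means comparing
# its voltage against thresholds. The more levels packed into the same
# voltage swing, the smaller the noise margin -- how far a stored voltage
# can drift before it reads back as the wrong symbol.

V_MAX = 1.0  # assume a 1 V swing for the sake of the example

def decode(voltage, levels):
    """Map a sensed voltage to the nearest of `levels` evenly spaced symbols."""
    step = V_MAX / (levels - 1)
    return round(voltage / step)

def noise_margin(levels):
    """Half the spacing between adjacent levels: the tolerable drift."""
    return V_MAX / (levels - 1) / 2.0

for n in (2, 3, 4, 10):
    print("%2d levels per cell -> noise margin %.3f V" % (n, noise_margin(n)))
# 2 levels  -> margin 0.500 V (binary: just presence/absence of charge)
# 10 levels -> margin 0.056 V (decimal: nine times less room for error)

print(decode(0.62, 2), decode(0.62, 4))  # same voltage, read as 1 or as 2
[/code]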
I don't know where I was going with this, actually.
The issue is not the ease of sensing voltage differences, but the use of logic gates (built from transistors). Logic gates are either on or off as a matter of their design. To use varying voltage levels, you'd need multiple logic gates with resistor networks for each 'bit', which becomes much, much more complex for transistor-based devices.
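Here's a minimal sketch of that on/off point, treating each transistor as an ideal switch (a deliberate toy model in Python; real gates are analog circuits engineered to saturate at the supply rails):
[code]
# A deliberately idealized model: a transistor as a switch that is only
# ever on or off. Build one two-state gate and every other logic function
# is just more of the same -- there's no third state to exploit.

def nand(a, b):
    # In a CMOS NAND gate the output is pulled low only when both
    # pull-down transistors conduct; the switch model captures that.
    return not (a and b)

def xor(a, b):
    # The classic four-NAND construction of XOR: still nothing but on/off.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(xor(a, b)))
[/code]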
Some early computers were analog, meaning that they computed with continuously varying quantities rather than just on and off. (ENIAC, for what it's worth, was actually digital; it just worked in base 10 rather than base 2.) However, analog computing is not practical with transistor-style logic gates, and the devices used for analog computing were very hard to miniaturize.
Binary digital computers ('digital' meaning discrete, noncontinuous values; it's the antonym of 'analog') could be built from transistors, which were much smaller and more durable than vacuum tubes and were eventually manufactured at enormous densities on silicon wafers, which enabled integrated circuits and ultimately modern computers.
The limitation of binary digital computers is, of course, that they cannot truly work with analog values; in fact, they cannot really work with any value besides on or off. A device called an analog-to-digital converter (ADC) or digital-to-analog converter (DAC) is therefore required to convert analog (continuous) signals to digital (discrete) signals and vice versa, as happens whenever a binary digital computer works with audio, video, or other analog signals.
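As a rough sketch of what that conversion involves (uniform quantization only; a real ADC also needs sample-and-hold circuitry, anti-aliasing filters, and so on, and the function names and numbers here are my own):
[code]
import math

def adc(signal, sample_rate_hz, duration_s, bits=8):
    """Sample `signal` (a function of time returning values in -1.0..1.0)
    and quantize each sample to an integer code in [0, 2**bits - 1]."""
    levels = 2 ** bits
    codes = []
    for i in range(int(sample_rate_hz * duration_s)):
        v = signal(i / sample_rate_hz)           # continuous (analog) value
        v = max(-1.0, min(1.0, v))               # clip to the input range
        codes.append(round((v + 1.0) / 2.0 * (levels - 1)))  # discrete code
    return codes

def dac(codes, bits=8):
    """The other direction: map integer codes back to voltages."""
    levels = 2 ** bits
    return [c / (levels - 1) * 2.0 - 1.0 for c in codes]

# e.g. digitize one millisecond of a 440 Hz sine wave at 8 kHz:
samples = adc(lambda t: math.sin(2 * math.pi * 440 * t), 8000, 0.001)
print(samples)  # eight integers in 0..255 -- all the computer ever sees
[/code]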
Note: it occurs to me that I failed to clarify the distinction between analog and digital systems, a distinction that is commonly misunderstood. 'Digital' does not imply 'binary'; non-binary systems can be digital. Rather, 'digital' means 'discrete values'. The easiest way to explain this is that an analog value is like a floating-point number, say 0.385019 or 4.39184 or 8.918238: it can fall anywhere between the minimum and the maximum. That is, analog values occur on a continuum. Digital values, on the other hand, must fall on discrete steps; if analog values are floating-point numbers, then digital values are integers, for example 1, 8, or 3. They occur between a minimum and a maximum, but only at evenly spaced increments, so they are not continuous. Digital systems are usually binary because binary is what logic-gate-based systems use (see above), but they don't have to be, as the sketch below shows.
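A toy example of a non-binary digital system (mine, nothing official): the same number can be written in base-2 or base-3 digits, and either way the system is digital, because each digit takes only discrete values.
[code]
def to_digits(n, base):
    """Express a non-negative integer as a list of digits in `base`."""
    digits = []
    while True:
        digits.append(n % base)
        n //= base
        if n == 0:
            return digits[::-1]

print(to_digits(100, 2))  # [1, 1, 0, 0, 1, 0, 0] -- binary digital
print(to_digits(100, 3))  # [1, 0, 2, 0, 1]       -- ternary, still digital
[/code]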
I hope that wasn't too confusing.