
I've looked everywhere for an answer, or at least a question like this one (even Tom's Hardware didn't have anything explicitly related to it).

My question is simple:

Is there or are there any alternatives to the current way data is processed (using 0s and 1s) in computer architecture?

I came across this question when looking for a new PC to buy and got into looking at how Intel and the other processor guys spend billions squeezing more transistors onto chips, etc. (but that is only partly related to my question).

Some people may say that "0s and 1s are the lowest form of representing data", which was true back when computers first started using such a system. Is it still the case today? Have we really never gone back to the drawing board to look at alternatives that could shrink the processing demands we currently face?

I know that to some of you this question may have a simple answer that you think is correct, but just thinking it through, going all the way back to 0s and 1s and even the transistor itself, makes you wonder whether alternatives exist for every single method or step of the architecture (not just the 0-and-1 representation).

My personal opinion, not strictly part of the question: "I believe that, given the complex nature of current PCs, something more capable than 0/1 processing at the lowest level may be possible today, simply because that type of processing seems to defeat the purpose of the complex problem-solving the PC was designed for."

Joe
    If you add more levels beyond 0/1, things start to get more complicated. – Renan Jul 07 '13 at 20:45
    Since you recommend going back to the drawing board, can you make a case for why simple (as in 0s and 1s) is bad or inefficient? – Karan Jul 07 '13 at 20:58
    I don't see how this is opinion based, it would probably be a better fit on CS.SE but this is an interesting question with specific answers. – terdon Jul 07 '13 at 22:24
  • There might be, but consumer-level hardware doesn't use them. Talking about alternatives to binary. – Ramhound Jul 07 '13 at 22:37
  • And to what purpose this question? Quantum computing is a long way off, analog has to be converted to digital to be understood, and digital logic is switch-based. Any discussion of other means belongs in a computer lab, as SuperUser deals with the fait accompli. We use digital binary, octal and hex because they work with switches, and the registers in microprocessors are just really complex banks of switches. – Fiasco Labs Jul 07 '13 at 23:28
  • I know some of you think this question seems to not fit appropriately, but it would be nice if you could leave it open so that even if the question doesn't seem good enough, the answers provided below give concise counter-arguments to my proposal. Also, I did try searching for my question, but it seems to have been asked in varying styles which wouldn't link to each other (search engines aren't that smart yet) – Joe Jul 08 '13 at 00:14
  • http://en.wikipedia.org/wiki/Harwell_computer would be interesting as a counterpoint - using decimal rather than binary number representations. – Journeyman Geek Jul 08 '13 at 05:51
  • I've reopened this, as the question is certainly not opinion-based, and can be backed up with facts and specific expertise. – slhck Jul 08 '13 at 06:11
  • Note that solid state drives already make use of more than two states at their lowest level: MLC devices store "two bits" (4 states), TLC devices advance this to "three bits" (8 states), and the state-of-the-art QLC uses "four bits" (16 states) per cell at the lowermost storage level. The "bits" are just for interfacing with the rest of the hardware; internally there are just said numbers of states. – linux-fan Nov 10 '19 at 22:29

3 Answers


The 0/1 structure is indeed the simplest way to represent and store data. But remember that before digital technology was introduced for storage, devices used analog storage solutions. Also remember that quantum computing is currently being researched and implemented (though at a very early stage), and it is another kind of data representation and processing.


Referring to everyday computing in the present, note that the 0/1 architecture (or true/false, on/off, etc.) is mandatory because current technology relies on binary (2-state) digital signals. Making things more complex at the most basic level would eventually render the system harder to maintain and understand. I'm not saying it is impossible; as I said, the "next big thing" in this area is approaching, but it has to be done very carefully so as not to mess things up. Making things more complex for no reason is not a good idea. My earlier example, quantum computing, is an exception, because it is a new area of science to explore and, on top of that, potentially more efficient than digital technology.


In addition, the idea of a ternary computer (3-state instead of 2-state technology) has been suggested, but not widely implemented, for a couple of reasons:

It is much harder to build components that use more than two states/levels. For example, the transistors used in logic are either closed (not conducting at all) or wide open. Having them half open would require much more precision and use extra power. Nevertheless, more states are sometimes used to pack more data, though rarely (e.g. modern NAND flash memory, modulation in modems).

If you use more than two states you need to be compatible with binary, because the rest of the world uses it. Three is out because the conversion to binary would require expensive multiplication or division with remainder. Instead you go directly to four or a higher power of two.

These are practical reasons why it is not done, but mathematically it is perfectly possible to build a computer on ternary logic.
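The conversion-cost point above can be sketched in a few lines. This is an illustrative example (the function names are made up for the demonstration): each base-4 digit maps onto exactly two bits by pure regrouping, while base-3 digits force you through real arithmetic, the multiplication and division with remainder mentioned above.

```python
def base4_to_bits(digits):
    """Each base-4 digit is exactly two bits: pure regrouping, no arithmetic."""
    bits = []
    for d in digits:  # most-significant digit first
        bits += [(d >> 1) & 1, d & 1]
    return bits

def base3_to_bits(digits):
    """Base-3 needs real arithmetic: accumulate the value with multiplications,
    then repeatedly divide by 2 with remainder."""
    value = 0
    for d in digits:
        value = value * 3 + d
    bits = []
    while value:
        value, r = divmod(value, 2)
        bits.append(r)
    return bits[::-1] or [0]

print(base4_to_bits([2, 1, 3]))  # [1, 0, 0, 1, 1, 1]  (213 in base 4 = 39 = 0b100111)
print(base3_to_bits([2, 1, 0]))  # [1, 0, 1, 0, 1]     (210 in base 3 = 21 = 0b10101)
```

The base-4 path touches each digit once with cheap bit operations; the base-3 path needs a multiply per digit and a divide per output bit, which is why hardware that packs multiple states (like NAND flash) sticks to powers of two.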

References / Further Reading:

Wikipedia

Nature

Other

matan129
    Thanks! Your answer was great. I found 2 other links from your link and I now see some of the other views mentioned. I'd just like to point you here: http://stackoverflow.com/questions/764439/why-binary-and-not-ternary-computing and the post by "rbud". His last paragraph mentions "Apparently they are much less costly to build and they use far less energy to operate." which seems significant to me, although counter-arguments for precision were also mentioned. – Joe Jul 08 '13 at 00:08
  • The answer has a few typos -- "current technology relays on digital (2-state) streams" should be "relies" and "binary" (digital does not imply binary). – Ben Voigt May 19 '23 at 14:15

A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away. -- Antoine de Saint-Exupéry

0s and 1s are just the simplest way of expressing numbers, and the computers we know are all about numbers. Any number that can be written using digits 0-9 has its equivalent in 0s and 1s (see binary number on Wikipedia). As long as you're using a computer for calculations (and that's what we're doing right now), you don't need more than 2 digits. Actually, introducing more digits would make calculations more complex, as you'd need another layer of abstraction over the physical 0-1 architecture.
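As a quick illustration of the equivalence claimed above (for nonnegative integers; the function name is just for the example), repeated division by 2 turns any decimal whole number into its binary form:

```python
def to_binary(n):
    """Convert a nonnegative integer to its binary digit string
    by repeated division by 2, collecting remainders."""
    digits = []
    while n:
        n, r = divmod(n, 2)
        digits.append(str(r))
    return ''.join(reversed(digits)) or '0'

print(to_binary(42))                  # '101010'
print(to_binary(42) == bin(42)[2:])   # True (matches Python's built-in)
```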

You should also be aware that 0 and 1 are logical states: false and true. Another digit wouldn't be of much use as long as we're sticking to logic (although some people say we need a third state, file not found ;) ). Computers like the ones we're using right now don't need more than 0/1.
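The two points connect: binary arithmetic reduces entirely to those two logical states. A rough sketch (names are illustrative, not from any real library): a full adder built from nothing but AND, OR and XOR adds two bits plus a carry, and chaining them adds whole numbers.

```python
def full_adder(a, b, carry_in):
    """Add two bits plus a carry using only logic operations."""
    s = a ^ b ^ carry_in                         # sum bit (XOR)
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry bit (AND/OR)
    return s, carry_out

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists (least-significant bit first)."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 5 (bits 1,0,1 LSB-first) + 3 (bits 1,1,0 LSB-first) = 8:
print(add_bits([1, 0, 1], [1, 1, 0]))  # [0, 0, 0, 1] -> binary 1000 -> 8
```

This is exactly what the "banks of switches" in a CPU's ALU do in hardware, which is why true/false states are all a computer needs for arithmetic.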

But when you stop thinking in categories of logic, that's a whole different story. Quantum computers are being researched. In quantum mechanics there's just a probability that something is true or false; the real state is somewhere in between. Very few people in the world could say they have even a general idea of how quantum computers work, and the science behind them isn't completely understood yet. But a few quantum computer-related ideas have already been implemented, like this one.

gronostaj
  • Thanks! Your answer was great too. I'm glad both you and matan129 took the time to share your knowledge with detailed answers. It definitely answers some of the thoughts I have, although I will look into quantum computing, analog computing and ternary computing. Interesting to see the possibilities that exist. – Joe Jul 08 '13 at 00:10
    "Any number that can be written using digits 0-9 has its equivalent in 0s and 1s" Well, that isn't precisely true. Think decimal numbers. Some convert trivially to binary (using a given representation), others don't. While this isn't a problem with binary representation per se (one could always pick a different binary representation format), it is a problem with what we do have and the reason why programming with floating point numbers is non-trivial in some cases, and inexact in the general case. – user Jul 08 '13 at 07:32

Yes, there is. I am the proponent of one such computer architecture and new method of computation, which I have called U-Mentalism. There is a published white paper on it, titled U-MENTALISM PATENT: THE BEGINNING OF CINEMATIC SUPERCOMPUTATION.

Welcome to the Future!

    Please add further details to expand on your answer, such as working code or documentation citations. – Community Sep 06 '21 at 13:35
  • Well, as to documentation citation, the paper with all the technical details is consultable through the hyperlink in the answer. As to the code, U-Mentalism Assembly Language is going to be open to development in the community late this year, or early 2022. – Luis Homem Sep 07 '21 at 17:23