We invented multi-bit models to get more accuracy, since neural networks are based on human brains, which are 1-bit models themselves. A 2-bit neuron is 4 times as capable as a 1-bit neuron but only double the size and power requirements. This whole thing sounds like BS to me. But then again, maybe complexity is more efficient than per-unit capability, since that's the tradeoff.
Human brains aren’t binary. They send signals of varying strength, so “on” has a lot of possible values. The part of the brain that controls emotions treats a low but non-zero level of activation as happy and a high level of activation as angry.
It’s not simple at all.
Human brains aren’t 1-bit models. Far from it, actually. I’m not an expert, but I know that neurons in the brain encode different signal strengths in their firing frequency.
Firing on and off.
Human brains aren’t digital. They’re very analog.
Neuronal firing is often understood as a fundamentally binary process, because a neuron either fires an action potential or it does not. This is often referred to as the “all-or-none” principle. Once the membrane potential of a neuron reaches a certain threshold, an action potential will fire. If this threshold is not reached, it won’t fire. There’s no such thing as a “partial” action potential; it’s a binary, all-or-none process.
Frequency Modulation: Even though an individual neuron’s action potential can be considered binary, neurons encode the intensity of a stimulus in the frequency of action potentials. A stronger stimulus causes the neuron to fire action potentials more rapidly. Again, binary in nature, not analog.
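The all-or-none firing plus rate coding described above can be sketched with a toy leaky integrate-and-fire neuron (a standard textbook model, nothing specific to this thread; all parameter values here are made up): the spike itself is binary, but a stronger input produces more spikes per unit of time.

```python
# Toy leaky integrate-and-fire neuron: spikes are all-or-none,
# but stimulus intensity is encoded in the firing rate.

def spike_count(input_current, steps=1000, dt=0.001,
                tau=0.02, threshold=1.0):
    """Count spikes over `steps` timesteps for a constant input."""
    v = 0.0
    spikes = 0
    for _ in range(steps):
        # Leaky integration of the membrane potential toward the input.
        v += dt / tau * (input_current - v)
        if v >= threshold:   # all-or-none: either a full spike or nothing
            spikes += 1
            v = 0.0          # reset after the spike
    return spikes

weak, strong = spike_count(1.5), spike_count(3.0)
print(weak, strong)  # stronger input -> more spikes, never "bigger" ones
```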
Isn’t this true of standard multi-bit neural networks too? This seems to be what a nonlinear activation function achieves: translating the input values into an all-or-nothing activation.
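For what it’s worth, only a hard threshold is truly all-or-nothing; the activation functions actually used in modern nets pass graded values through. A quick sketch (the function names are just illustrative):

```python
import math

def step(x):     # true all-or-none: output is exactly 0 or 1
    return 1.0 if x >= 0 else 0.0

def sigmoid(x):  # graded "soft" threshold: output anywhere in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):     # graded above zero: output scales with the input
    return max(0.0, x)

for x in (-1.0, 0.5, 2.0):
    print(x, step(x), round(sigmoid(x), 3), relu(x))
```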
The defining characteristic of a 1-bit model is not that its activations are recorded in a single bit but that its weights are. There are no gradations of connection weights: they are just on or off. As far as I know, that’s different from both standard neural nets and from how the brain works.
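A minimal sketch of what binary weights buy you (the weight values here are made up; real BitNet-style models actually use ternary {-1, 0, 1} weights): a dot product with sign-valued weights reduces to additions and subtractions, with no multiplications at all.

```python
# Sketch: a dot product with 1-bit (sign) weights needs no multiplies.
# The weights and activations below are hypothetical.

full_precision_w = [0.83, -0.27, 0.04, -1.10]
x = [1.0, 2.0, 3.0, 4.0]

# Binarize: keep only the sign of each weight.
binary_w = [1 if w >= 0 else -1 for w in full_precision_w]

# The "multiplication" is just adding or subtracting the activation.
y = sum(xi if wi > 0 else -xi for wi, xi in zip(binary_w, x))
print(y)  # -> -2.0   (1 - 2 + 3 - 4)
```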
So what you are saying is that they are discrete in time and pulse modulated, which can encode much more information than the way NNs work on a processor.
We really don’t know jack shit, but we know more than enough to know firing rate is hugely important.
The network architecture seems to create a virtualized hyperdimensional network on top of the actual network nodes, so the node precision really doesn’t matter much as long as quantization occurs in pretraining.
If it’s done post-training, it degrades the precision of the already encoded network, which is sometimes acceptable but always lossy. But done during pretraining, it actually seems to be a net improvement over higher-precision weights even if you throw efficiency concerns out the window.
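The post-training case is easy to see in a sketch: rounding already-trained weights into {-1, 0, 1} (roughly the absmean scheme described in the BitNet b1.58 paper; the weight values below are made up) throws away information the network never got a chance to train around, whereas quantization-aware pretraining lets the network learn within that constraint from the start.

```python
# Sketch of post-training ternary quantization, absmean-style.
# The trained weight values here are hypothetical.

w = [0.91, -0.42, 0.05, -1.30, 0.33]

# Scale by the mean absolute weight, then round into {-1, 0, 1}.
scale = sum(abs(v) for v in w) / len(w)
w_q = [max(-1, min(1, round(v / scale))) for v in w]
print(w_q)  # -> [1, -1, 0, -1, 1]

# Reconstruction error: this is the "always lossy" part.
err = [v - q * scale for v, q in zip(w, w_q)]
```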
You can see this in the perplexity graphs in the BitNet-1.58 paper.
None of those words are in the bible
No, but some alarmingly similar ideas are in the heretical stuff actually.
We need to scale fusion
Multi-bit models exist because that’s how computers work, but there’s been a lot of work on using e.g. fixed point over floating point for things like FPGAs, or shorter integer types, and often the results are more than good enough.
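A toy illustration of the fixed-point idea (Q16.16 format, a common choice; the operand values are arbitrary): reals become scaled integers, and multiplication is an integer multiply plus a shift, which is cheap on FPGAs and small integer units.

```python
# Toy Q16.16 fixed point: reals stored as integers scaled by 2**16.
# Multiplication becomes an integer multiply plus a right shift.

FRAC_BITS = 16

def to_fixed(x):
    return int(round(x * (1 << FRAC_BITS)))

def from_fixed(f):
    return f / (1 << FRAC_BITS)

def fmul(a, b):
    # The product carries 2 * FRAC_BITS fractional bits; shift back down.
    return (a * b) >> FRAC_BITS

a, b = to_fixed(3.25), to_fixed(-1.5)
print(from_fixed(fmul(a, b)))  # -> -4.875
```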
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10441807/