I could put on my tinfoil hat and say that if Signal is blocked but Telegram isn’t, maybe that means Telegram isn’t as secure as they make it out to be.
It’s open source. You can look up the encryption yourself.
No need, all you have to do is read the whitepaper. They home-brewed the encryption algorithm (MTProto) and nobody actually knows if it’s worth a damn. That’s not exactly a secret.
After all these years, security researchers still don’t know if the encryption is any good?
On that level it usually falls to computer scientists. Formal methods can prove that an implementation matches its specification, but proving the absence of unintended attacks is a lot harder.
Needham-Schroeder comes to mind as an example from back when I was studying these things.
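For anyone who hasn’t run into it: the public-key Needham-Schroeder protocol sat in the literature for roughly seventeen years before Lowe found a man-in-the-middle attack on it. The trace, roughly, where I is the attacker, I(A) means I posing as A, and {X}_pk(B) is X encrypted for B:

    1. A -> I    : {N_A, A}_pk(I)      A honestly starts a session with I
    2. I(A) -> B : {N_A, A}_pk(B)      I replays it to B, posing as A
    3. B -> I(A) : {N_A, N_B}_pk(A)    B answers; I can't read it, but forwards it
    4. I -> A    : {N_A, N_B}_pk(A)
    5. A -> I    : {N_B}_pk(I)         A helpfully decrypts N_B for the attacker
    6. I(A) -> B : {N_B}_pk(B)         B now believes it shares both nonces with A

The eventual fix was a single field (B’s identity inside its reply); nobody had thought to state that property beforehand.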
And not a single one has been able to analyze the encryption in all these years? The fact is, Telegram is the tool the Russian opposition and even Ukrainians use to communicate without Putin being able to listen in.
No. It kind of comes down to Dijkstra’s old statement: “Testing can only prove the presence of bugs, not their absence.”
You can prove the logical correctness of code, but an abstract question such as “is there an unknown weakness?” is a lot harder to prove. The tricky part is coming up with the right constraints to prove in the first place.
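A toy illustration of what “the correct constraints” means (nothing to do with Telegram’s code, just a made-up example in Python):

    # A string comparison that passes every functional test you throw at it,
    # because the property the tests check (returns the right answer) is not
    # the property that actually matters (doesn't leak timing information).
    def leaky_compare(secret: bytes, guess: bytes) -> bool:
        # Functionally correct for every input, so all correctness tests pass...
        # ...but the comparison stops at the first mismatching byte, so the
        # running time hints at how long a prefix of the secret was guessed right.
        return secret == guess

    def constant_time_compare(secret: bytes, guess: bytes) -> bool:
        # Same answers, but every byte is always touched. "Touches every byte"
        # is exactly the kind of extra constraint you'd have to think of and
        # write down before you could prove the weakness absent.
        # (The stdlib's hmac.compare_digest does this for real.)
        if len(secret) != len(guess):
            return False
        diff = 0
        for x, y in zip(secret, guess):
            diff |= x ^ y
        return diff == 0

    # Both pass the same functional test suite:
    for guess in (b"hunter2", b"hunter3", b"short"):
        assert leaky_compare(b"hunter2", guess) == constant_time_compare(b"hunter2", guess)

Prove the first one correct all you want; the proof says nothing about the weakness, because the weakness was never part of the statement being proved.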
Security researchers tend to be on the testing side of things.
A notable example is how DES got its S-boxes changed between proposal and standardisation. The belief at the time was that the new S-boxes hid some unknown backdoor for the NSA. AFAIK, that has never been proven.
They don’t have reproducible builds afaik (unlike Signal). You could have completely different code running on your phone than what’s on GitHub.
Besides, who is using Secret Chat anyway? Default chats and group chats aren’t end-to-end encrypted at all.
Just use the F-Droid version if there is any doubt.
Probably Russians who used Signal before.
The F-Droid version is also not reproducible. The binary you install has a different hash than the one you build from the GitHub source.
It’s reproducible if you compare it with F-Droid’s tarball, which has all the source code in it.
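The check itself is nothing fancy: build from the tarball, pull the APK that actually ships, and compare hashes. Something like this (file names are made up, and in practice the APK signature makes a byte-for-byte comparison slightly more involved):

    # Sketch of the reproducibility check: hash the APK built from F-Droid's
    # source tarball and the APK that actually gets installed, and see whether
    # they come out byte-identical.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    shipped = sha256_of("Telegram-from-fdroid.apk")          # what your phone installs
    rebuilt = sha256_of("Telegram-built-from-tarball.apk")   # what you built yourself
    print("reproducible" if shipped == rebuilt else "hashes differ")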
F-Droid builds from source, so any suspicion that the Google Play version has been tampered with is completely irrelevant for the F-Droid version.
Can it be proven that that encryption is what’s used in practice?
Just use the F-Droid version if there is any doubt.