Tommaso Gagliardoni's Homepage

Quantum Security and Cryptography

Email: tommasoATgagliardoniDOTnet

All the contents of this website are released under a Creative Commons
Attribution-NonCommercial-ShareAlike 3.0 Unported License.

All the opinions expressed herein are mine and do not necessarily represent the viewpoint of anyone else.






(Feb 8th, 2024) Another Change of Host

I moved this website to a different host - still with Hetzner, but I'm playing around a bit. SSL certificates changed accordingly. The new IP is 49.13.21.214







(Jan 25th, 2024) ML Model Collapse as Radionuclide Contamination in Post-War Steel

I keep hearing that ML models such as LLMs and other transformers are prone to "model collapse": the process by which the behavior of these models degrades over time as they ingest, during training, more and more ML-generated content. For example, the more ChatGPT is used, the more the Internet gets flooded with ChatGPT-generated content, and so subsequent iterations of ChatGPT are trained on less human-made text and more low-quality, machine-generated text. Trash in, trash out.

It looks like this problem is here to stay, and it is so bad that, apparently, some AI companies are already scraping the Internet Archive for pre-ChatGPT content. This phenomenon gives a huge advantage to companies like OpenAI, which entered the business first and therefore managed to stockpile large datasets of "pure" data in time.

Here is a random thought: this feels similar to the hunt for low-background steel for radiation-sensitive detectors and applications. This steel is scarce and precious, usually salvaged from pre-1940 sunken vessels - just like data scraped at low bandwidth from the Internet Archive.

I hope that, just as the need for this pre-war steel became mostly obsolete after the nuclear test ban treaties, we will at some point reach some form of AI treaty that makes it possible to flag most machine-generated content as such. Technically, I have no idea how to do that, but life has taught me not to underestimate the power of social norms and regulations, so who knows?







(Nov 13th, 2023) A Trick to Speed up RSA Modulus Generation

Recently I had reason to recall something that happened in 2019 at Kudelski Security during a client project. At some point there was a discussion on how to speed up the generation of RSA keys. As you might know, you need two primes of roughly the same size, p and q, but they cannot be just any primes: they must be (among other things) "safe primes". In particular, they must have the property that (p-1)/2 and (q-1)/2 are still prime. And this is annoying, because first you must spend a lot of CPU time generating a large prime, and then maybe you have to discard it because the condition above doesn't hold. That's a big bottleneck of RSA key generation.

Then I made the observation (and I cannot believe I'm the first one to have made it) that for this property to hold, the two least significant bits of the prime must *both* be 1: a large prime must be odd, so both the candidate prime and its "half" must give 1 modulo 2. In other words, the candidate must be 3 modulo 4. So, having these two LSBs set to 1 is a very minimal condition for a number to be a candidate safe prime.
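In code, the observation boils down to a two-bit check (a toy sketch of mine, not the project's actual code):

```python
# A candidate safe prime p must have BOTH of its two least significant
# bits set: p must be odd, and (p - 1) / 2 must be odd too, otherwise
# (p - 1) / 2 is even and cannot be a large prime.
def could_be_safe_prime(p: int) -> bool:
    return p & 0b11 == 0b11   # equivalent to: p % 4 == 3

assert could_be_safe_prime(23)       # 23 and (23 - 1) // 2 = 11 are both prime
assert not could_be_safe_prime(13)   # (13 - 1) // 2 = 6 is even
```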

The way this was implemented in the project we were working on (say, for a 2048-bit RSA key, so 1024-bit primes) was the following:
  1. Generate a random 1024-bit string.
  2. Set the MSB and the two LSBs to 1.
  3. Test the 1024-bit number obtained for primality; if fail then goto 1.
  4. Right-shift this 1024-bit number by one bit.
  5. Test the 1023-bit number so obtained for primality; if fail then goto 1.
  6. Return the 1024-bit number as a safe prime.
Which is cool, but I made the following observation: the method below is probably faster in most cases:
  1. Generate a random 1023-bit string.
  2. Set the MSB and the LSB to 1.
  3. Test the 1023-bit number obtained for primality; if fail then goto 1.
  4. Left-shift this 1023-bit number by one bit and set the LSB to 1 again.
  5. Test the 1024-bit number so obtained for primality; if fail then goto 1.
  6. Return the 1024-bit number as a safe prime.
This algorithm looks very similar to the first one, and it certainly produces the same output distribution, but it is not the same. In fact, when we tested it, it was indeed slightly faster than the first one.
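For concreteness, here is a minimal Python sketch of this second method. The function names are mine, the Miller-Rabin test is hand-rolled just to keep the sketch self-contained, and a real implementation would of course use a vetted cryptographic library instead:

```python
import secrets

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2      # random base in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                      # composite witness found
    return True

def gen_safe_prime(bits: int = 1024) -> int:
    """Second method: test the smaller (bits-1)-bit candidate q first,
    then the full-size candidate p = 2q + 1."""
    while True:
        # Random (bits-1)-bit string with MSB and LSB set to 1.
        q = secrets.randbits(bits - 1) | (1 << (bits - 2)) | 1
        if not is_probable_prime(q):
            continue                          # the cheaper test fails most often
        p = (q << 1) | 1                      # left-shift and set the LSB again
        if is_probable_prime(p):
            return p                          # p and (p-1)/2 are both prime
```

Swapping the order of the two tests (generate the full-size candidate, test it, then right-shift and test its half) gives the first method.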

The intuition is the following: testing a 1024-bit odd number for primality is slightly more expensive (and slightly less likely to succeed) than testing a 1023-bit one. You might argue that the difference should be minimal (and in general you'd be right), but these 1024-bit numbers are not really random odd numbers: they are all set to be 3 modulo 4, which means the probability of one of them being prime is even lower than what you would expect for a random 1024-bit odd number. So, in both algorithms above, the 1024-bit test is actually quite a bit harder than the 1023-bit test.

In the first algorithm you do "the hard work first", by repeatedly performing the 1024-bit test. But then, when you finally have a positive match, you are not guaranteed that the right-shifted 1023-bit number is still prime - in fact, there is only a small chance of that happening. In the second algorithm, instead, you first do a lot of 1023-bit testing, and only for each surviving candidate do you perform the "difficult" 1024-bit test. The overall number of primality tests is roughly the same, but the difference is that in the first algorithm you do many "hard" tests and few "easy" ones, while in the second algorithm you do many "easier" tests and few "hard" ones.

This difference is enough to be detectable. I don't remember the exact numbers, but I seem to recall something like 2% on a Go implementation - feel free to correct me. So, keep this in mind when you implement your own RSA key generation - just kidding, don't do that!

Also, this is n00b stuff; I'm sure there are zillions of better ways to generate RSA primes much more efficiently.







(Oct 7th, 2023) Shufflecake Accepted At ACM CCS 2023

I can finally break the confidentiality embargo and proudly make the big announcement: the Shufflecake research paper (coauthored with Elia Anzuoni) has been accepted at ACM CCS 2023! We are super excited to present Shufflecake at one of the most prestigious cybersecurity conferences worldwide. A 50-page full version will be available in the next couple of days, both on arXiv and IACR's ePrint. Check the Shufflecake website or the Shufflecake Mastodon for news.

There is a lot of "meat" in the full version, especially regarding future work and planned features. The most pressing point, as previously explained, is the topic of crash consistency. We have ideas on how to do that, but we are still working on implementations.

Many people will probably find the discussion of multi-snapshot security the most interesting. We know that in order to achieve multi-snapshot security we NEED to use WoORAMs, but they are very slow. Our insight is to aim for a weaker notion, "operational" multi-snapshot security, which (we argue) should be enough in real-world scenarios.

But my favourite topic, and one that I think has been foolishly overlooked in previous literature, is that of "safewords". What is a safeword in the context of plausible deniability? It's the idea that a user might want a "last resort" option to surrender to an adversary and prove that she has nothing to hide. The concept would be: if I'm caught by my CS teacher doing non-school activities on my laptop, I can use plausible deniability and be un-prosecutable in front of the principal; but if the police come knocking at my door, I don't want to get in trouble, I want to comply - and PROVE that I am complying.

With TrueCrypt this is easy: just give up the decoy password to the principal and the real password to the police. In Shufflecake and similar systems it's not so easy, because there are many layers of nested secrecy. So "the adversary does not know when he can stop torturing you", because he cannot trust anything you say.

Is this good or bad?

In the paper we make the following points.

  1. It is actually possible to implement a safeword even in systems such as Shufflecake. It's an easy trick, really.

  2. The idea of a safeword is SUPER BAD for plausible deniability. Not only should you NOT USE a safeword: the mere POSSIBILITY of having one degrades the operational security of the scheme.

  3. This problem does not seem to have been addressed before in existing constructions, as all of them (even WoORAM-based ones) implicitly allow the creation of a safeword.

  4. We propose an idea to make Shufflecake "safeword-free"! This is future work, but it's definitely going to be implemented at some point. The idea is that we can make ANY kind of safeword-like feature impossible through a nice hack that allows potentially INFINITE nested volumes (rather than a number limited by an artificial hard cap, which is 15 in the current implementation).

I want to expand a bit on this last idea. Nothing of what I write down here is meant to be rigorous; this is just the high-level intuition. STRICTLY FUTURE WORK!

Currently, Shufflecake packs all the headers (the DMB and 15 VMBs) at the beginning of the storage space, and the data sections come afterwards, in the form of 1 MiB slices. If we want the possibility, at least in theory, of an unbounded number of volumes, then we need an infinite (subject to total storage space) number of headers, and hence we cannot pack them at the beginning of the disk. Clearly, most of these headers will actually be "bogus", i.e., not headers at all, but the adversary should never be sure about that.

So my idea was the following: imagine we don't have a DMB and 15 VMBs, but just a unified, per-volume "header" (encapsulating everything we need to open the volume, including its encrypted position map), and suppose that this header is one slice large. What we can do is: put the first header (for the first volume) at the beginning of the disk, and the others at random positions across the disk, interleaved with the data slices. If everything is encrypted, the adversary cannot tell whether a slice is a data slice or a header, and hence cannot tell exactly how many volumes there are, or even what the MAXIMUM number of volumes could be.

This is cool, it makes safewords impossible! No artificial limit on the maximum number of volumes. But there are challenges.

First of all, how do we "navigate" this chain of headers? We know where the first one is, but what about the others? The idea is to have every header point to the position of the next one through a dedicated field. But what about this field? We cannot encrypt it: we would need to encrypt it with the password of the CURRENT header, but then it would be useless for plausible deniability (the adversary would learn the position of the next header). We could encrypt it with the password of the NEXT header, then. This works if we only have two volumes, but what if we have three instead, and we want to unlock the third one? There would be an unnavigable "gap" in the chain.

My solution is simple: this pointer field is left unencrypted, but ANY random string is a valid pointer. In practice, the concrete idea is to have a random value in the header, say 128 bits, and use it as input to a linear function that maps uniformly onto the slices of the disk: the string "0x0..0" would be the first slice, the string "0xF...F" would be the last slice, and anything in between would point to intermediate slices linearly. Clearly, random values that are close to each other map to the same slice, but this is OK. The observation now is the following: even without any password, the adversary can navigate from the first header to the position of the second, to the third, and so on, but she will never know when to stop, because there is no way of knowing where the "real" chain ends! The adversary could keep jumping back and forth across the disk like crazy, with no clue about the number of volumes!
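As a toy sketch of this linear mapping (the function name and the disk geometry are made up for illustration; this is not actual Shufflecake code):

```python
def pointer_to_slice(ptr128: int, num_slices: int) -> int:
    """Map a 128-bit pointer field uniformly onto slice indices:
    0x0..0 -> slice 0, 0xF...F -> the last slice, linear in between.
    Any random 128-bit string decodes to some valid position."""
    return (ptr128 * num_slices) >> 128

# Hypothetical 1 TiB disk made of 1 MiB slices:
num_slices = 1 << 20
assert pointer_to_slice(0, num_slices) == 0
assert pointer_to_slice((1 << 128) - 1, num_slices) == num_slices - 1
```

Since every 128-bit value lands inside the disk, a bogus pointer is indistinguishable from a real one without the decryption keys.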

Except... three problems.

The first problem is that there actually IS a way for the adversary to know that, from a certain point on, the chain is surely finished and she is now inspecting data slices rather than headers: a loop. No header can point to a previous one, so if a pointer goes "backwards", we know that something is wrong. But notice that headers occupy a negligible fraction of the disk space, so the chance that a random value points to one of the existing headers is negligible. Sure, a loop will happen sooner or later along the chain, but we cannot know in advance when. This is sufficient to argue that a user could in no way use this as a safeword to "limit" the growth of the chain.

The second problem is: what happens if, during use, the modification of a data slice "breaks" the chain? Also not a big deal: notice that Shufflecake knows the exact position of all unlocked headers and treats those slices as permanently occupied, so there is no risk of overwriting one of them. The only possibility is overwriting a "bogus" header, thereby breaking the chain of fake headers. But for the adversary, without the right password, everything is a link of the chain, even the new (encrypted) data slice! So all that changes is the chain itself, which (after that slice) might now point to completely new areas of the disk, but this doesn't break anything for the user.

But wait: couldn't the adversary then learn how many "real" headers there are by watching the state of the chain over time? Yes, this is possible: every time the adversary sees the chain change at link N, she knows that the number of volumes is AT MOST N-1, and every observation can shrink this estimate. This is a problem, but it depends on how often the data slices change, and anyway we are now in the multi-snapshot regime. Things get complicated quickly here, but solutions multiply too: we can use re-randomization (since we use AES-CTR), or we could do some brute-forcing to make sure that the random pointer lands where we want. In any case, the accuracy with which an adversary can pin down the real number of headers depends on the number of snapshots she has, which cannot be too many because TRIM is not an ever-growing blockchain...

The last problem is: what about the slice maps? Those are usually larger than a single slice; they can't fit into a single header as defined so far. True, but nothing changes if the header is 2 slices large, or 3, 4, 5, etc. Just use the same scheme: every slice of a header points to the next one, the last slice points to the first slice of the next header, and so on. This also has the advantage that we can use more or fewer slices per header, depending on the size of the device, and we can embed as much volume metadata as we want.

Cool stuff!







(Aug 17th, 2023) Back From Vegas, Black Hat, DEF CON, Shufflecake News, Etc...

Let's try to have a proper post for once, after a long time. I'm back from Vegas, from my first Vegas. It was tiring, it was productive, it was exciting, but mostly it was such that I hope it won't be my last Vegas, because I really enjoyed it! Let's see what happened.

I arrived on Tuesday, Aug 8th in the early evening, after a long, boring and uneventful direct flight. The first impression of the city itself was as I expected it: fake, but in a kind of friendly way. Clearly, this only holds for The Strip, which is pretty much the only part I saw. Time was short and packed. I was there to attend Black Hat and DEF CON, my first time for both of them.

I met with other folks from Kudelski Security. The first thing I learned about Black Hat is that the conference itself is just half of the fun: in the evenings, every evening, people go to "parties", which are more or less exclusive networking events with cheap booze and finger food. So, as soon as I arrived at my hotel, I barely had time to change and head to the first evening party of the week! It was good: I quickly entered "undead mode" and managed to stand on my feet until around 10 PM, at which point the reanimation spell that was keeping me going faded, and I barely had the energy to crawl back to the hotel and crash. I still woke up at 5 AM and went to have a "burger breakfast" at the casino downstairs with a colleague who was as jet-lagged as I was. Reading twice what I just wrote, it sounds super weird but, hey, it's Vegas!

On Wednesday, Black Hat itself started. If I had to describe it with a single word, I would call it grandiose. I walked so much my feet still hurt; I am not used to walking a couple of miles just to get from the breakfast "room" (more of an "arena") to the talk room. I have to say, I didn't enjoy Black Hat too much. Too "corporate": the talks were less interesting than I expected, and the sight of business people in the vendor booths trying to attract you and subscribe you to their mailing lists in exchange for cheap goodies made me feel very sad for them. The other parties I went to varied greatly in terms of quality and fun. I had a good time with my colleagues and got my taste of American cuisine, but overall I would not call the first days at Black Hat very interesting.

DEF CON, on the other hand, was a blast!

A cyberpunk orgy of very special humans and LED lights. Pure chaos, but in a cozy way. People were really friendly, everyone being "as weird as they are" without faking it, and very open-minded - or at least this was my experience with the people I interacted with. The only problem with DEF CON is that there is SO MUCH stuff to see and experience that you quickly get overwhelmed by a massive FOMO blast. As much as you run and scramble from one room to another (huge conference venue here too), there is always something cool you're missing: a riddle hidden somewhere, a secret party, an art performance, a cool talk not listed on the website, etc. Even if you could split yourself tenfold, you'd probably still not be enough to experience all of it.

My Shufflecake talk at Demo Labs went well - or at least I drew a large audience and a ton of good questions. Slides available here. Let me also announce, with a bit of delay, that the latest version of Shufflecake is v0.4.1, with a ton of improvements and new features.

Tomgag giving a talk at DEF CON Demo Labs 2023

Regarding the future of Shufflecake, I was asked: "is this ready for use, or still a beta?" This is a good question. I think that, right now, the only thing missing in order to consider Shufflecake a "candidate for stable" is the issue of crash inconsistency. There are a lot of other things that should be addressed, sure, but I'd say crash inconsistency is the nr. 1 issue.

The problem stems from the fact that, unlike traditional disk encryption tools such as LUKS and TrueCrypt, Shufflecake uses AES-CTR rather than XTS, and explicitly writes a fresh IV to disk on every write in order to avoid IV-reuse attacks. This means that if a system crash happens between the write of a block and the write of the related IV, you're left with undecryptable data on disk. We do this because we want a "block re-randomisation" feature, which we plan to use in the future to extend Shufflecake's security to "operational" multi-snapshot adversaries (see it as a "lightweight ORAM"). But as of now we do not actually use this feature, which on the other hand costs us 1) I/O performance, and 2) crash inconsistency.

So, we have two ways of addressing this: 1) solve the issue by making the block and IV writes atomic - there are ways to do it, at a cost of roughly 11% of space, but we think this would, as a side benefit, improve I/O performance; or 2) forget about block re-randomisation for now and provide a "lite" mode that uses AES-XTS and will never be able to achieve more than single-snapshot security (but still with all the operational benefits of Shufflecake, such as arbitrary filesystems and many nested levels of secrecy). We might actually go for both options and leave the choice to the user; let's see.

There is some other news about Shufflecake, namely about the academic paper, but I'm reluctant to share it at the moment; let's wait a bit longer. Suffice it to say that very soon we should be able to publish a version on ePrint/arXiv. Stay tuned!







(Jul 11th, 2023) Shufflecake v0.3.0 Released!

Shufflecake v0.3.0 released! This is a major release with tons of improvements! And there is further news coming soon!







(Jul 11th, 2023) I'm on Mastodon

Hell yes: after having escaped the clutches of social media for more than a decade, I decided to give Mastodon a try. You can find me at @tomgag@infosec.exchange, but I promise you I will post very rarely.

I've been neglecting this homepage for a while now. Thing is, I'm revamping my "personal IT infrastructure" a bit, so I guess I'll try to re-design this website as well. Time will tell.







(Mar 26th, 2023) Change Of Host

I'm moving this website to a new host with Hetzner. SSL certificates changed as well. The new IP is 167.235.145.56







(Jan 2nd, 2023) Happy New Year, Founding De Componendis Cifris

Happy New Year 2776! I've been very busy with personal matters recently, but (together with many other Italian academics and professionals) I found the time to join an important event on December 21st: the official founding ceremony of the newly-formed non-profit Italian cryptographers' association De Componendis Cifris (or De Cifris, in short). Many thanks to the director, Prof. Massimiliano Sala, for organizing the effort.

De Cifris has already been active for many years, but it was officially formed as a legal entity last month. The reason I decided to join this initiative is that I believe there is both a lack of resources and an abundance of untapped talent in the landscape of cryptologic research in Italy. Even if I'm by now firmly rooted in Switzerland, I'm European at heart, and I always keep an eye on the situation of the motherland. I gladly support actions that can improve the societal, cultural, economic and technological development of the EU, and of Italy in particular.







(Nov 10th, 2022) Introducing Shufflecake: Plausible Deniability For Multiple Hidden Filesystems On Linux

I'm super, super excited to finally announce the first public release of Shufflecake, a plausible deniability tool for disk encryption, aimed at helping people whose freedom of expression is threatened by repressive authorities or dangerous criminal organizations - in particular whistleblowers, investigative journalists, and human rights activists in oppressive regimes. You can see it as the "spiritual successor" of software such as TrueCrypt and VeraCrypt, but vastly improved. Shufflecake is FLOSS (Free/Libre, Open Source Software); the C source code is available on Codeberg (because GitHub no more, thank you) and released under the GNU General Public License v3.0 or later.

Releasing this project, and following its development from the beginning until today, was a lot of effort but also really exciting! It allowed me to learn many new things, get my hands dirty with code again after some time, and become more familiar with software development in general. And I think the result is really game-changing in the world of privacy, or at least has the potential to become so. But none of this could have happened without the hard work of Elia Anzuoni. Elia was an MSc student in CS at EPFL (recently graduated), and he applied to the research team where I work at Kudelski Security in search of a topic for his thesis. I had pitched the idea for a successor of TrueCrypt internally in the team already back in 2020 (my interest in plausibly deniable disk encryption software dates back at least to 2011), but so far I hadn't had the opportunity to supervise a student on the project. With Elia it was a blast! Shufflecake is based on the work of his thesis. Thanks also to his EPFL advisor, Prof. Edouard Bugnion, who had very insightful ideas and recommendations to make the thing work.

Learn more about Shufflecake on the Kudelski Security Research Blog.







(Nov 9th, 2022) A Few Notes On Resetting A Git Repo To An Initial State And Delete Commit History

A few notes for self-reference. When preparing a private repository for public release, sometimes you want to "clean up the mess" beforehand. The lame solution is to delete the repo and recreate it from a local copy (spoiler alert: don't). The following commands, harvested from many posts around the interwebz, work well for me:

git pull                                       # make sure the local copy is up to date
git checkout --orphan tmp-main                 # new branch with no commit history
git add -A                                     # stage the whole working tree
git commit -m 'Initial commit'                 # single fresh commit
git branch -D main                             # delete the old main branch...
git branch -m main                             # ...and rename tmp-main to main
git push -f origin main                        # force-push the rewritten branch
git branch --set-upstream-to=origin/main main  # re-link the local branch to the remote
git gc --aggressive --prune=all                # garbage-collect the old objects locally
git fetch --all                                # refresh remote refs
git reset --hard origin/main                   # align the local tree with the remote






(Aug 29th, 2022) A Few Notes On Configuring Apache2 And SSL With ACME

UPDATE 2023-03-20: Added some extra configuration.
UPDATE 2023-01-06: Thanks to the person who kindly pointed out some possible configuration improvements that give me now a score of "A" on SSL Labs.

Today I finally defeated my laziness and got into configuring HTTPS for this and other websites. I am using Apache with Let's Encrypt as a CA, but I don't want to use LE's default "certbot" tool. I opted for acme.sh instead, which does everything I need in a few lines of Bash and without root.

First of all, make sure that you have write permission on the webroot directory of the website; in my case this is /var/www/mysite.net . Needless to say, Apache must be configured to support SSL, so at the very minimum do a2enmod ssl. Added bonus while you're at it: do a2enmod headers so you can add MIME and XSS protection later. Make sure to only enable strong TLS ciphers (notice: this will break compatibility with some older browsers) by making sure these lines appear in /etc/apache2/mods-available/ssl.conf:

SSLCipherSuite TLS_AES_256_GCM_SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
SSLCompression off
SSLHonorCipherOrder on
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLUseStapling on
SSLStaplingCache "shmcb:logs/stapling-cache(150000)"
SSLSessionTickets off


You must also decide where to store key and cert files - don't include them in the webroot. I opted for /etc/apache2/ssl/mysite.net (create it and make sure to set permissions to 700). Finally, make sure that the Apache website entry in /etc/apache2/sites-available/mysite.net.conf has HTTPS enabled and correctly points to the two (not yet created) files key.pem and cert.pem.

<VirtualHost 1.2.3.4:80>
   ServerAdmin admin@mysite.net
   ServerName www.mysite.net
   ServerAlias mysite.net *.mysite.net
#   Redirect permanent / https://mysite.net/
   DocumentRoot /var/www/mysite.net
   <Directory />
      Order Deny,Allow
      Options -Indexes
      AllowOverride None
   </Directory>
   ErrorLog ${APACHE_LOG_DIR}/error.log
   LogLevel warn
   CustomLog ${APACHE_LOG_DIR}/access.log combined
   Header set X-XSS-Protection "1; mode=block"
   Header always append X-Frame-Options DENY
   Header set X-Content-Type-Options nosniff
</VirtualHost>

<VirtualHost 1.2.3.4:443>
   ServerAdmin admin@mysite.net
   ServerName www.mysite.net
   ServerAlias mysite.net *.mysite.net
   DocumentRoot /var/www/mysite.net
#   SSLEngine on
#   SSLCertificateKeyFile /etc/apache2/ssl/mysite.net/key.pem
#   SSLCertificateFile /etc/apache2/ssl/mysite.net/fullchain.pem
#   <If "%{HTTP_HOST} == 'www.mysite.net'">
#    Redirect permanent / https://mysite.net/
#   </If>
   <Directory />
      Order Deny,Allow
      Options -Indexes
      AllowOverride None
   </Directory>
   ErrorLog ${APACHE_LOG_DIR}/error.log
   LogLevel warn
   CustomLog ${APACHE_LOG_DIR}/access.log combined
   Header set X-XSS-Protection "1; mode=block"
   Header always append X-Frame-Options DENY
   Header set X-Content-Type-Options nosniff
</VirtualHost>


Notice the commented lines. Enable the website with a2ensite mysite.net if not already enabled. Then check that the configuration looks OK with apache2ctl configtest. If everything looks OK, restart Apache with systemctl restart apache2. Point your domain name to the server's IP if not done already, and make sure that the non-HTTPS version of the website works.

Time to install acme.sh:

curl https://get.acme.sh | sh -s email=me@whatever.com


First of all, close and reopen the terminal. Then let's switch the default CA from ZeroSSL to LE:

acme.sh --set-default-ca --server letsencrypt


Then let's issue the certificates for mysite.net, changing at least the key size (yes, I still prefer RSA over EC for technical reasons I might write a blog post about in the future):

acme.sh --issue -d mysite.net -d www.mysite.net -w /var/www/mysite.net --keylength 4096


Finally, let's install the generated certs into their place and specify the Apache reload command for our installation, which in my case is:

acme.sh --install-cert -d mysite.net --cert-file /etc/apache2/ssl/mysite.net/cert.pem --key-file /etc/apache2/ssl/mysite.net/key.pem --ca-file /etc/apache2/ssl/mysite.net/ca.cer --fullchain-file /etc/apache2/ssl/mysite.net/fullchain.pem --reloadcmd "systemctl restart apache2"


CAVEAT: for the last --reloadcmd parameter, I guess root is required. Technically speaking it might not be strictly necessary, but I'm afraid omitting it would make auto-renewal fail. Well, let's see what happens after 60 days; we'll find out. (UPDATE 2022-11-09: I still haven't figured out whether it works or not, because in the meantime I had to restart Apache manually a couple of times, so I'm still not 100% sure.)

Anyway, let's finish by removing the placeholder comments from the relevant lines of /etc/apache2/sites-available/mysite.net.conf so as to enable the SSL engine (these lines will disable the non-HTTPS version of the website, because it's 2023 now), then reload Apache with systemctl restart apache2 et voila'!







(Jul 6th, 2022) NIST Finally Announces End Of PQC Standardization 3rd Round

After a couple of months of delay due to "legal reasons", NIST has finally announced the end of the 3rd round of the PQC standardization process - the standardization of quantum-resistant cryptographic algorithms. The first winners of the competition are the following:
  • For KEMs, only one: CRYSTALS-KYBER (due to its small encapsulation size, key size, and fast performance, which make it substantially a forced choice as a TLS drop-in replacement).

  • For signatures, the following: CRYSTALS-Dilithium (mainly due to speed), Falcon (as a fallback in case Dilithium signatures are too large for an application), and SPHINCS+ (because it's just rock-solid and based on completely different security assumptions, unlike the previous two, which are both based on structured lattices).

Furthermore, there are 4 other KEM finalists that will advance to the 4th round: BIKE, Classic McEliece, HQC, and SIKE.

Both BIKE and HQC are based on structured codes, and either would be suitable as a general-purpose KEM that is not based on lattices. NIST expects to select at most one of these two candidates for standardization at the conclusion of the fourth round. SIKE remains an attractive candidate for standardization because of its small key and ciphertext sizes, and NIST will continue to study it in the fourth round. Classic McEliece was a finalist but is not being standardized by NIST at this time. Although Classic McEliece is widely regarded as secure, NIST does not anticipate it being widely used due to its large public key size. NIST may choose to standardize Classic McEliece at the end of the fourth round.


Needless to say, this is great news. I was expecting Falcon to win but not Dilithium, which in my opinion is worse in many realistic scenarios, but having both doesn't hurt. I was not sure NIST would standardize SPHINCS+ already at the end of the 3rd round - I think the original idea was to keep it as an alternate - but I'm really glad they decided to go with this one! I think SPHINCS+ is practically unbeatable where security is the #1 concern and statefulness is not an option. I was expecting NTRU Prime to be among the winners, too.

In any case, notice that these are not the only options. For example:
  • The German BSI did not want to wait, and already endorsed two algorithms for protecting classified communication:

  • Due to the urgency of the transition to quantum computer-resistant key agreement procedures, BSI has recommended two of these schemes in its technical guideline TR-02102-1 already at the beginning of 2020 for the first time. [...] The two schemes are the lattice-based FrodoKEM and the code-based Classic McEliece.

  • OpenSSH starting from version 9.0 supports the hybrid NTRU Prime + x25519 key exchange method by default:

  • * ssh(1), sshd(8): use the hybrid Streamlined NTRU Prime + x25519 key exchange method by default ("sntrup761x25519-sha512@openssh.com"). The NTRU algorithm is believed to resist attacks enabled by future quantum computers and is paired with the X25519 ECDH key exchange (the previous default) as a backstop against any weaknesses in NTRU Prime that may be discovered in the future. The combination ensures that the hybrid exchange offers at least as good security as the status quo.

  • The IRTF has drafted XMSS, a stateful hash-based signature scheme.







(Jul 2nd, 2022) Webit Impact Forum 2022, Sofia

I had the pleasure of taking part in a panel discussion on "Cyber security and ethical tech" at the Webit Impact Forum 2022 in Sofia, Bulgaria. The discussion was extremely interesting and I wish we had more time - alas, with such a topic we could have talked for hours! I really enjoyed it and I would like to thank the organizers, the panelists, and the moderator for the opportunity!







(Jun 16th, 2022) Change of IP

My host, OrangeWebsite, warned me on June 12th that there would be a change of IP for this host. I guess 4 days of notice must seem like an eternity for full-time sysadmins but as you have probably guessed already I'm being pretty lazy with this... webpage. So, anyway, the new IP is now 82.221.141.53 kthxbye.







(Apr 15th, 2022) "WhatsApp Is Cancer" Explained To Non-Tech People

Disclaimer: I know nothing about Teflon, any example below is purely illustrative.

By now I have lost all hope of explaining to friends and relatives why they should steer away from WhatsApp and anything coming from the Cyborg Reptilian Lord. So I am stuck with appeal-to-authority as a last resort, with the following example. Everybody uses Teflon-coated non-stick frypans, because the material is cheap, ubiquitous, and convenient, and we'd waste much more time washing dishes if it weren't available. But let's say that 9 out of 10 material scientists (or, alas, even 3 out of 10) tell you: "you should not use Teflon, but rather this other material that is 10x more expensive and ugly, because there are serious concerns about Teflon's long-term health effects". Let's also suppose that you are not a material scientist. What would you do?

The problem I personally run into when trying to educate people on the technical and social issues of surveillance capitalism is that everybody seems to know better than the material scientists. Because admitting otherwise, and following that down to its logical consequences, would mean giving up Teflon and either spending more time washing dishes or going for something more expensive. So, instead of facing the uncomfortable truth, people prefer to convince themselves that the material scientists are wrong, or paranoid, or weird. Or there is always some other excuse: "I only cook once in a while", "I only use low heat", etc.

I don't have a solution for that, but I observe that it's the same process happening when talking to people about climate change, and that the rational solution to improve the current state of things is [uncomfortable truth].

Please stop using WhatsApp. Treat it like smoke: It hurts you and the people around you, it's difficult to quit, it's designed to exploit you.







(Jan 1st, 2022) Can You Sign A Quantum State?

Happy new year 2775! I am happy to announce that, after a looong and troubled publication history, the paper "Can You Sign a Quantum State?", coauthored with Gorjan Alagic and Christian Majenz, has finally been published in the open-access journal Quantum.

We posted the first version of this paper on arXiv and IACR ePrint, I think in 2018. After that, we tried many times, unsuccessfully, to have it accepted at some well-known IACR conferences. I don't want to rant too much here, but I was a bit disappointed by the reviews we got. My idea is that the paper was "too much quantum" for the audience there, and fell victim to the misconception that we were just "rediscovering Barnum et al. (FOCS 2002)". I encourage you to read Barnum et al.'s original paper on quantum authentication, then read our paper, and form your own opinion. We finally opted for a more "quantum-oriented" venue, and it worked on the first shot!

In a nutshell: as a first result we prove that public-key signing of arbitrary quantum states is impossible in a very strong sense, meaning that it remains impossible even if we drastically relax the security requirements of the signature. In other words: only classical digital signatures exist. The only workaround is, in addition to signing, to also encrypt the quantum state (but even in that case, we cannot get non-repudiation, which is a crucial property of digital signature schemes). The resulting notion is a quantum analogue of signcryption, which had not been studied before; we give definitions, security notions, and constructions.







(Dec 26th, 2021) My Opinion on NFTs

I guess it's about time I express my opinion on NFTs. In the crypto community I tend to see people pretty much divided into two factions: those who love NFTs and those who hate them. I must say it's mostly the more "tech" people who hate NFTs, which should already give us a hint. However, I must also admit that I do not have a very strong opinion on this... thing. For me, the idea of NFTs makes total sense if seen from two perspectives.

The first perspective is best summarized by the following image:

Male satin bowerbird with blue objects surrounding the bower

The second perspective is: suppose you're a crypto millionaire or an early investor in Bitcoin/Ethereum or similar, and you own a fortune in tokens. This does not mean that you're sipping cocktails on your private tropical island. What it actually means is that you own tokens that, in the vast majority of cases, must be exchanged for real fiat before they can have an observable effect on your daily life. This means horrible conversion fees on top of the already horrible transaction fees, plus most likely withdrawal fees. Add to this that your tokens are by now an object of interest for 1) cybercriminals and 2) taxation authorities in many jurisdictions. In other words, owning large amounts of crypto tokens is a toxic asset. NFTs are an effective way to quickly get rid of these assets (which you would have a hard time spending by other means anyway) and at the same time partially "recycle" their value by building an artificial ecosystem of scarce resources on top of them. As in "you need to own an NFT to access this exclusive party", or "dear employee, part of your bonus will be paid in NFTs".

As I write this, it might sound like I do hate NFTs, but believe me, this is not really the case. Because, I observe, we have been living for a long time already in a society that values the possession of things like diamonds, tulip bulbs, fancy cars, and gold-plated toilets. So, when you read that "Superstar X spent $500'000 on a limited edition Nike sneakers' NFT", or you see an attractive person on your favorite dating app with the headline "I am looking for a passionate and generous NFT owner who can become the love of my life", my opinion is that you should not think "Oh gosh, this is insane, someone must stop this madness!".

You should rather think: "Nothing new to be seen here, let's move on".







(Jul 21st, 2021) Introducing oramfs: Libre, Storage-Agnostic ORAM Client For Linux

With my colleague and friend Nils Amiet at Kudelski Security, we recently released oramfs, a prototype (not yet production-ready) ORAM client for Linux written in Rust. The code is available on GitHub under the GPLv3 license.

Oblivious random access machines (ORAM) are cryptographic algorithms that can be seen as "compilers", turning an untrusted storage into a trusted one. They can also be seen as improved versions of encrypted filesystems, with slower performance but better security: they hide not only the content of the data, but also the pattern of accesses to it. What is exciting about our approach is that the software is totally storage-agnostic: whether you're using a local drive, an SFTP folder, or AWS cloud storage, it doesn't matter: anything that can be mounted as a local directory can be "ORAM-ised" using oramfs, without the need for plugins or server-side support.
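To give a feeling of what an ORAM does (just a toy illustration of the general idea, not the actual scheme used by oramfs): the simplest possible construction is a "linear scan" ORAM, where every logical read or write touches and re-encrypts *every* block, so the server-visible access pattern is identical no matter which block the client really wanted.

```python
# Toy "linear scan" ORAM sketch. All names here are illustrative;
# real ORAMs (e.g. Path ORAM) achieve the same hiding property far
# more efficiently.
import os

class LinearScanORAM:
    def __init__(self, num_blocks, block_size=16):
        self.block_size = block_size
        # Untrusted "server" storage: every block is kept encrypted.
        self.server = [self._encrypt(b"\x00" * block_size) for _ in range(num_blocks)]

    def _encrypt(self, plaintext):
        # Stand-in for real encryption: prepend fresh randomness so the
        # server cannot tell whether a block actually changed.
        return os.urandom(8) + plaintext

    def _decrypt(self, ciphertext):
        return ciphertext[8:]

    def access(self, op, index, data=None):
        result = None
        for i in range(len(self.server)):          # touch EVERY block...
            plain = self._decrypt(self.server[i])
            if i == index:
                result = plain
                if op == "write":
                    plain = data
            self.server[i] = self._encrypt(plain)  # ...and always re-encrypt
        return result

oram = LinearScanORAM(num_blocks=4)
oram.access("write", 2, b"secret padding!!")
print(oram.access("read", 2))  # b'secret padding!!'
```

The obvious drawback is that each access costs a full pass over the storage, which is exactly the kind of overhead modern ORAM constructions reduce to polylogarithmic.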

Preliminary testing shows (as expected) low performance, but still competitive compared to other existing solutions! Furthermore, it has to be stressed that this is just a proof of concept: we already have a lot of ideas for further optimization, and we hope to work on some of these in the near future!

You can also watch our presentation at the Pass The Salt 2021 conference. Stay tuned, we have ambitious plans for this project!







(Jul 21st, 2021) Quantum Indistinguishability For Public Key Encryption

A paper coauthored by Juliane Kraemer, Patrick Struck, and myself has been accepted at the PQCRYPTO 2021 conference and will be presented tomorrow (virtual) by Patrick. Full version available on IACR's Eprint and ArXiv.

I previously wrote about this paper, which was initially titled "Make Quantum Indistinguishability Great Again". I think it might be beneficial to explain a few things about this choice, which apparently angered some reviewers at previous conferences we submitted to. This initial choice was dictated in part, undeniably, by the desire to have a "catchy" title, given that we were in the middle of the pre-US-election craze that was wrecking everybody's nerves. I want to stress again that this had nothing to do with any political stance of any of the authors; it was just "for the lulz", and frankly we considered this choice totally legitimate given the plethora of other recent and not-so-recent "catchy" titles appearing in the realm of cryptography and beyond.

But the real joke was another one. Namely, in this paper we tackle the problem of defining quantum indistinguishability for (classical) public-key encryption schemes by "resurrecting" an old technique previously used in another work that appeared at CRYPTO 2016: the use of so-called minimal, or type-2, quantum oracles. Between 2013 and 2016 there was quite a bit of activity on the topic of quantum indistinguishability for classical cryptographic schemes. This activity culminated with the 2016 paper, which specifically targeted symmetric-key encryption, and no other relevant works that I am aware of appeared on the topic after that (there was further research on quantum security notions for classical schemes, but not about indistinguishability). It was kind of a "quantum indistinguishability winter" after 2016, and in particular the technique of type-2 oracles did not appear anywhere else. That is, until our new paper, which first appeared in 2020. So we thought it was funny and appropriate to "make great again" quantum indistinguishability after nobody had cared about it for a few years.

Little did we know that our "joke" was about to be ruined: clearly, it becomes much less funny if, by pure coincidence, two other respectable research groups, concurrently and independently, publish two other very related works that both tackle the very same problem, albeit with different techniques! So it looks like quantum indistinguishability was not dead after all, just hibernating! Surely there was no need to "make it great again".

We only found out about the other two papers after we first submitted our work, when the full version was already up and running, so it was too late to change the title. After that, given that the joke was not so funny anymore, that the sad circus of the US elections was finally over, and that we had no desire to further anger people, we decided to change the title to something less controversial.

I hope this clarifies things, and I hope you'll enjoy tomorrow's talk (full video recorded here).







(Apr 22nd, 2021) On Quantum Supremacy, Racial Supremacy, Teapot Supremacy, and Quantum Non-Lameness

Recently, British mathematician Richard Borcherds posted on YouTube a so-called "rant" about quantum computers, specifically about the concept of "quantum supremacy". The video went viral and got my attention. In a nutshell, Prof. Borcherds asserts that the famous quantum supremacy experiment by Google proves nothing, because the problem solved by the Google team was handpicked to be solvable only on their quantum computer. To support this argument, Borcherds claims that the problem of guessing how many shards will be produced by a teapot shattering on the floor is arguably intractable for both classical and quantum computers, but trivially solvable by shattering a real teapot on the floor, therefore claiming that this "teapot computing device" achieves "teapot supremacy".

Richard Borcherds is a Fields medalist and very respected mathematician, so one does not dismiss this argument too lightly, even if amusing. But I personally believe that he's really wrong on this one. He used the same argument that I see way too often in random online discussions:
  1. Mischaracterize the definition of "quantum supremacy".

  2. Show that something as lame as a teapot or a vial of chemicals also satisfies the definition.

  3. Conclude that quantum computers are therefore also lame.
The issue with this reasoning (which I have tried to point out many times in different places - see for example my post on Kudelski Security's Research blog) is that for a computing device to be meaningful it has to be programmable. That is, it must be able to solve a problem for a large enough range of (classical) inputs, otherwise it is no more useful than a lookup table of "problem - result" entries!

It is not necessary that it solves different types of problems (it is perfectly OK to have a specialized computing device that solves only one type of problem, for example addition, factorization, teapot shards prediction.)

It is perfectly OK if the problem is specially engineered to be best solvable on that kind of device, or if the problem has zero real-world applications.

It is even OK if the device only solves problems of a certain instance size (for example, it only factorizes numbers 2048 bits large, or it only predicts the sharding of a teapot weighing 500 grams), because in the worst case you can have a bunch of different devices, each of them specialized for a certain instance size - although this is a grey area, because it would be desirable to have a common blueprint for generating designs of different devices for every instance size, a property known as uniformity.

But it must be programmable! For every instance size you must have at least a superpolynomial number of possible admissible inputs, otherwise - again - your device is no more useful than a static table!
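A toy way to see the contrast (my own illustration, nothing to do with Google's actual device): a static table can only ever answer the handful of inputs someone precomputed, while even a trivially "programmable" device computes an answer for every admissible input - exponentially many of them for a given instance size.

```python
# Static table vs. programmable device, in miniature.
# The table knows exactly one precomputed observation and nothing else.
static_table = {"teapot-500g": 5}

def programmable_device(program: int) -> int:
    # Stand-in computation: parity of the bits of the input "program".
    # Any n-bit input works, so there are 2**n admissible inputs.
    return bin(program).count("1") % 2

print(static_table.get("teapot-500g"))             # 5 - and nothing else, ever
print(static_table.get("teapot-750g"))             # None: not precomputed
print([programmable_device(p) for p in range(4)])  # an answer for every input
```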

Borcherds' teapot thought experiment is not on point, because it is not programmable. You have one teapot, you break it, you count the shards. It is not a "computing device" in any reasonable sense, because there is no input to the problem; you could just note down your observation on a piece of paper - "this teapot shattered into 5 shards" - and that's all the information you can ever get out of that device. Google's device, on the other hand, is programmable: it accepts as input an (almost arbitrary) description of a quantum circuit of up to 53 qubits and depth up to 20, and it estimates the output distribution.

Just after writing this, I found out that Scott Aaronson beat me to it and published a similar rebuttal on his blog - which actually makes me proud :) One interesting question asked therein by Ernest Davis was the following: what if we modify the teapot experiment so that it becomes programmable? For example, consider a large industrial machinery complex that does the following:
  1. Takes as input a classical description (e.g., a 3D CAD model) of a teapot.

  2. 3D-prints a large number of teapots according to the input and fires them.

  3. Shatters them on the ground one after the other and counts the resulting shards (either mechanically or with A.I. vision software).

  4. Records the observed results and outputs the probability distribution of shards for that particular teapot design.
Would this count as "teapot supremacy"? My answer is: probably yes, and I see no reason why this should in any way diminish Google's quantum supremacy result. We would have built a specialized machine to solve a specialized problem; it is reasonable that it can achieve supremacy on that problem. The difference with quantum computers is that they can also efficiently simulate any classical computer, while the teapot machine seems quite limited in its task. Quoting Scott Aaronson on the teapot machine:

I replied that this is indeed more interesting—in fact, it already seems more like what engineers do in practice (still, sometimes!) when building wind tunnels, than like a silly reductio ad absurdum of quantum supremacy experiments. On the other hand, if you believe the Extended Church-Turing Thesis, then as long as your analog computer is governed by classical physics, it’s presumably inherently limited to an Avogadro’s number type speedup over a standard digital computer, whereas with a quantum computer, you’re limited only by the exponential dimensionality of Hilbert space, which seems more interesting.

Or maybe I’m wrong—in which case, I look forward to the first practical demonstration of teapot supremacy! Just like with quantum supremacy, though, it’s not enough to assert it; you need to … put the tea where your mouth is.


On the other hand, I think two things Borcherds is right about are the following. First: it is to be expected that in the future quantum computers will not be used as super powerful versions of classical computers, but rather as specialized add-on devices that augment a classical computer's capabilities - a bit like modern GPUs are specialized hardware for certain tasks.

The second is that, if you think about it, the concept of "quantum supremacy" is a bit underwhelming. You only need to prove that there exists *one* instance size of *one* specific problem with *no* practical use (and that is specialized for *your* device) in order to claim "supremacy" for your device. And he's totally right on this (but, as I just wrote, he went too far with his argument and did not account for programmability, which makes all the difference). To which I answer with two observations.

The first is that, for quantum "skeptics" in the community (such as Gil Kalai, for example), even this lame definition of quantum supremacy should not be attainable! Skeptics have to understand that denying the feasibility of quantum supremacy is a very difficult task, exactly because the definition is so underwhelming! If you are a non-believer in quantum supremacy, then you are claiming not only that quantum computers will never have an edge over classical computers for practical problems, but also that, no matter the problem and instance size at hand, classical will always be better than or roughly on par with quantum in performance. So, think twice before taking this stance. I am looking at you, uneducated Dunning-Kruger commenter on Slashdot.

My second observation is that, in my opinion, all these misunderstandings are due to the unfortunate choice of the term "quantum supremacy", because it gives false expectations to the non-expert audience. I think clear communication is important, not only in science, and that words should not just be placeholder symbols representing a concept, to be decoded with the right "knowledge key", but should also be understandable by the largest possible audience, which does not possess such a key. A typical example I often rant about is "post-quantum", which I think should receive an award for "worst possible vocabulary choice in science". When we talk about "post-quantum", it is perfectly clear to experts in cryptography what we're talking about. The threat model is clear, the scenario is clear, the object of discussion is clear. But for the rest of the people on the planet it is a pain. This word is so prone to misrepresentation that I stopped using it altogether, preferring the term "quantum-resistant".

There was a discussion some time ago (which I won't link here, so as not to further feed the fire) about how we in the quantum community should stop using words such as "ancilla qubit" and "quantum supremacy" in physics because they can offend people and carry despicable racial or slavery connotations. At the time, I was of the opinion that we should dismiss this argument as nonsense and stick to the commonly accepted scientific vocabulary. However, I have now changed my mind regarding "quantum supremacy" - and, mind you, not for the politically correct reasons above. The reason is that, as I wrote, I think it's much too strong a word for something that is actually the *very minimum bar for a computing technology to be vaguely interesting*, and therefore gives false expectations to most of the audience. I think we should use a different word.

I suggest "quantum non-lameness".







(Apr 8th, 2021) Is Signal Turning Evil?

Like many other security people and tech-savvy users, I am deeply concerned by the announcement that Signal is experimenting with in-app cryptocurrency payments.

Signal Payments makes it easy to link a MobileCoin wallet to Signal so you can start sending funds to friends and family, receive funds from them, keep track of your balance, and review your transaction history with a simple interface.


Others have already written about why this is a terribly bad idea. Personally, for me this marks the beginning of the end of my love for this fantastic app. I have been using Signal for a long time, and I have helped relatives and friends migrate to it as a better and safer alternative to WhatsApp, Telegram, Threema, whatever. But this just feels so wrong, and I am not even a blockchain hater!

I wish I were mistaken, but to me it looks like things can only go downhill from here. I will start looking at alternatives, either peer-to-peer or federated this time. Signal for me has always been a temporary solution - in the end it's just WhatsApp run by someone less evil than Facebook - but this just feels bad.







(Apr 8th, 2021) A Dangerous Vulnerability In Existing Gennaro-Goldfeder’18 Implementations

Threshold signature schemes (TSS) are a class of complex cryptographic schemes that have seen growing interest and rapid adoption in the last few years, mainly driven by blockchain applications such as Bitcoin. They make it possible to reduce the risk profile of wallet management in large organizations, and are especially attractive in highly regulated environments such as exchanges and financial institutions. The sheer complexity of many such schemes, however, makes them prone to implementation and design issues.

During a cryptographic code audit of one of these applications at Kudelski Security, we found a potentially serious vulnerability in one of the most widely used ECDSA TSS protocols: Gennaro-Goldfeder'18. The vulnerability is actually an extension of another known vulnerability, the "forget-and-forgive" attack, and stems from a security assumption in the original protocol that cannot be taken for granted in many real-world cases. This problem might allow a single malicious attacker to delete or lock funds and blackmail all the other peers. Special kudos to Yolan Romailler (@anomalroil on Twitter) for this.

More information is available here and the original report can be found here. No reason to panic: the vulnerability is application-specific, and does not depend on the implementation of the most widely adopted GG18 libraries, but still something to keep an eye on.







(Apr 3rd, 2021) In Support Of Richard Stallman And The Free Software Foundation

Just a brief post to express my support for Richard Stallman (rms) and his return to a leading role within the FSF. I admit that the more I read about this sad story, the more I am disgusted by what so far looks to me like an orchestrated attack on the use of proper language and rationality, and an ad hominem case against a person guilty of not conforming to a corporate promiscuity thinly veiled as political correctness.

I am not a fan of rms. I do not worship him or the FSF. I cannot vouch for him being a good fit for his role within the FSF, nor even for him being a nice person, or a person I would like to work with. But the arguments that have been used against him are frankly horrifying. Not laughable, really just horrifying.

But, hey, there might be some backstory I am not aware of. Feel free to change my mind. Please use different arguments from what I've read so far though.







(Mar 5th, 2021) No, Claus Peter Schnorr Did Not "Destroy RSA"

The cryptographic community was, let's say, "stirred" in the past few days by a paper that appeared on IACR's ePrint, authored by famous cryptographer Claus Peter Schnorr (yes, the guy from Schnorr signatures). In the paper, Prof. Schnorr claims to have found a (classical, not quantum) lattice sieve reduction method that quickly factorizes large integers, and thereby "destroyes RSA" (sic).

The paper is short but dense, and I think most cryptographers in the world aged a couple of years during the last week. However, the claim is apparently incorrect as there are serious flaws in the paper.

I had the pleasure of meeting Prof. Schnorr at various conferences and workshops; I even shared a bus ride with him at the 2015 Dagstuhl Seminar on Quantum Cryptanalysis. I recall him as quiet, yet at the same time full of energy and charisma. I won't add anything to what has already been said; see for example here and here. If you are looking for an explanation of what happened behind the curtains, I cannot give you one.

However, I guess you might want to read about the sad story of Sir Michael Atiyah's attempted proof of Riemann's hypothesis.







(Mar 5th, 2021) My Podcast Interview With Mysecurity Media

I gave an interview for the Cyber Security Weekly Podcast by Singapore-based IT news outlet MySecurity Media. Working with Jane Lo was a great pleasure. I was impressed by how, unlike what is too often the case in the "cheap" press, she did due research and preparation beforehand, so we could easily navigate together a complex topic such as quantum security.

Fun fact: a few seconds after I start talking, you can hear my doorbell in the background. That was the mailman, arriving at exactly the wrong moment!







(Dec 23rd, 2020) Android / Lineage OS: How To Separate Screen Lock Password From Storage Password (Different Password)

UPDATE 2021-04-25: Unfortunately I have realized that this method does not work on new devices! In my case it worked because I was installing an Android 11-based ROM on an older device (Samsung Galaxy S4 mini), but on modern devices, since Android 10, it is mandatory to use File-Based Encryption (FBE) rather than Full-Disk Encryption, and this makes it impossible to have two separate passwords (don't ask me why, I have no idea; it sounds ridiculously dumb to me). Thank you again, Google!

Just a quick note for those of you who have recently installed Android 11 or a ROM based on it, such as Lineage 18, and are struggling to set up a storage encryption password different from the screen lock one. I recently had trouble doing so myself, hence these notes to show you how to do it.

Why would you want to do this? Basically, the idea is to use a short PIN/pattern/passphrase to unlock the screen for ease of use, and a very long and complex one to decrypt storage at boot. Then you install an app such as Wrong PIN Shutdown (needs root) that powers off the phone after, say, 3 failed attempts at unlocking the screen. This way, if someone steals your phone and starts fiddling with it, they'll very soon find themselves locked out. Notice: this is of course useless against a forensic analysis or other forms of attack, so it pretty much defends only against this specific case.

Still, it pisses me off that Google decided that Android users somehow don't need to choose for themselves how to use this feature, and opted to link the screen unlock and storage encryption keys by default to make them easier to manage. So much so, in fact, that with the latest updates it has become even more difficult to work around this limitation. A nice app that I previously used to do this, Cryptfs Password (also available on F-Droid), stopped working some time ago already. The reason is that it is basically a script invoking the cryptfs command, whose syntax has changed.

So, here is how to do it (spoiler: it requires root):

  • Fresh install your ROM (e.g. Lineage) and root it (e.g. Magisk)

  • At setup, choose a short PIN for screen unlock

  • Settings -> Security -> Encryption -> Encrypt Device (requires fully charged battery and device under charge, when requested to enable "boot lock" choose "yes", phone will reboot)

  • Install F-Droid, install a terminal emulator (e.g. Termux) and Wrong PIN Shutdown

  • Fire up the terminal and do this:

    su
    vdc cryptfs changepw TYPEOFNEWPASSWORD OLDPASSWORD NEWPASSWORD


    where TYPEOFNEWPASSWORD can be any of password, pin, pattern (if you use a pattern, match the position of the dots with corresponding numbers on keypad to obtain a numeric string)

This solution appeared here and here. Enjoy!







(Dec 8th, 2020) Coming Soon: My Talk At Black Hat Europe 2020

I am excited to give a talk at Black Hat Europe 2020 (virtual conference) on December 10th (in two days) on the topic of quantum security and quantum cryptography (not quantum key distribution, the "real" stuff!). If you think that's an unorthodox topic for Black Hat... You're totally right! So, stay tuned!







(September 10th, 2020) My Opinion On Covid-19 Contact-Tracing Apps

Update 2020-10-01: there is a new paper by the University of Salerno crypto group that describes further possible "terrorist attacks" that can be mounted against decentralized apps (in particular focusing on SwissCovid and Immuni) by leveraging smart contracts. Also, I corrected some broken links in the post.

Disclaimer: what follows is a personal opinion, and it does not necessarily represent the point of view of my employer or my colleagues or collaborators or pretty much anyone else.

Foreword: if you are one of the many who think that vaccines cause autism, that Covid does not exist, that face masks hurt your freedom, that the pandemic is just a huge lie orchestrated by Soros and Bill Gates to implant us with microchips and control our minds with 5G, and that sun and good vibes can cure any disease, please stop reading now: this post is not going to be helpful for you, sorry.

I think contact-tracing apps are a bad idea, and mark a dangerous path that we should all steer away from. In this post I will try to explain why, given that I have received questions from many acquaintances who cannot understand how a moderately "socially conscious" person like me could be so vocal against this technology. At times I even had to clarify that I am in no way an anti-vaxxer or a Covid denialist.

I should have written this much earlier; it feels a bit weird to do it now that the hype for this coolyoungtechthing is waning and almost nobody is talking about it anymore. But better late than never, and it also allows me to put in retrospect some early foresight that I shared verbally with many friends and colleagues in the past.

There are four aspects that I will touch on in this post: privacy, security, effectiveness, and sociopolitical implications. You might be surprised to learn that my personal concerns follow this list in reverse order of severity. I will first give a succinct explanation of my opinion on these four points (a TL;DR version), then go through them in more detail, and finally conclude with a FAQ section.


In short:

Privacy: none of these apps is 100% "privacy-preserving"; even the decentralized ones expose users to certain privacy risks. Most importantly, they only work on non-free software/hardware platforms that expose the user to further, more serious tracking.

Security: none of these apps has been audited satisfactorily, and there is a general short-sighted attitude of "f..k security, this is an emergency!". Most importantly, these solutions require the adoption of non-trusted mobile platforms such as Google Android and Apple iOS, which take control away from the people in favor of foreign megacorporations that do not share the best interests of their users.

Effectiveness: these apps are not good at slowing the spread of the disease, for both technical and usability reasons. Previous experiments have shown (and the phenomenon is repeating itself again) that even after an initially rapid mass adoption by the userbase, an abandonment phase follows shortly, where users simply uninstall the app.

Sociopolitical: this short-sighted approach, pushed by various authorities and blindly amplified by the media, shows on one side a dangerous divide between different scientific sectors (from my point of view, it shows in particular a dangerous cyber-illiteracy, or lack of attention to privacy/security, in the medical community). On the other side, it shows a global political will to normalize (even if with a "soft" approach) a doctrine of mass surveillance and technological control that has so far been achieved only in one large Asian country - may I add, with horrifying consequences.

I will now explain the points above in more depth.

Let's talk first about privacy, which is frankly one of the least important issues in the emergency caused by a devastating pandemic. There will surely be people ready to educate me on the difference between the centralized and decentralized approaches, how the latter is so much better and even endorsed by academic institutions, and so on. I am well aware of the differences. Let me put it straight for you: saying that decentralized apps like SwissCovid or Immuni are "privacy preserving" is a blatant lie - in fact, they are not even "decentralized". This can be shown mathematically, and if you claim otherwise then you are the uneducated one.

Both approaches (centralized and decentralized) have pros and cons in terms of privacy and abuse potential, and which one is better depends on your security model. This has been pointed out very vocally by Serge Vaudenay (a world-renowned cryptographer and professor at EPFL) in this paper, formalizing and echoing what was already a growing concern in part of the cryptographic community.

Without going too much into the details, the problem is twofold. On one side, all these apps still process certain data through a centralized authority, which thus has quite a lot of control over the information associated with patients (albeit less than in a fully centralized approach). On the other, the apps broadcast encrypted data that can be gathered and then unmasked, once a user tests positive, by a so-called "paparazzi attack". This is quite troublesome because it specifically targets those users who, arguably, need more privacy protection. Notice how this attack is actually not possible in a fully centralized solution, so this is a typical example of a security trade-off.
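To make the "paparazzi attack" concrete, here is a toy sketch (not the real DP3T protocol: key sizes, the PRF choice, and the publication mechanism are simplified assumptions of mine) of how the published key of a positive user lets an eavesdropper link previously recorded beacons back to that user:

```python
# Toy sketch of the "paparazzi attack" on decentralized contact tracing.
# A daily secret key expands into ephemeral broadcast IDs; once the key
# of a positive user is published, recorded beacons become linkable.
import hmac, hashlib, os

def eph_ids(day_key, n=96):
    # Derive n ephemeral broadcast IDs from a daily key with a PRF.
    return [hmac.new(day_key, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
            for i in range(n)]

alice_key = os.urandom(32)
alice_beacons = eph_ids(alice_key)

# The paparazzo logs beacons together with where/when they were heard.
sightings = {alice_beacons[7]: "cafe, 09:15", alice_beacons[42]: "gym, 17:30"}

# Alice tests positive; her daily key is published so phones can match it.
published = alice_key

# The same key now de-anonymizes the logged sightings.
for eid in eph_ids(published):
    if eid in sightings:
        print("positive user seen at:", sightings[eid])
```

The point is structural: any scheme in which positive users publish key material that re-derives their past broadcasts enables this kind of retroactive linking.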

I think one of the best explanations of the technical issues with contact-tracing apps is this short video by Prof. Ivan Visconti (in Italian).

Another exhaustive resource on the issue is maintained by the EPFL crypto group: The Dark Side of SwissCovid. Please read it; it is really well-written, so much so that the best I can do is quote part of it:

In summary, our report shows that

  • a big part of contact tracing is outsourced to the Apple-Google implementation;

  • the Apple-Google implementation has no public source code (at least until July 21);

  • Amazon is implementing part of the server infrastructure;

  • the available information is unclear, incomplete, or incorrect;

  • some previously identified attacks are still feasible.


Our compliance analysis concludes that

  • the switch from the pre-standard version to the Apple-Google-based one made SwissCovid not compliant with the law;

  • the implementation of DP3T has bypassed the law;

  • the law was shown to be powerless to protect people from using a non-transparent contact tracing system.


Let's not even get into the fact that all these apps can only be downloaded through "trusted" (note my sarcasm) stores such as Google Play or the App Store. Remember that these platforms do not respect your rights and give evil megacorporations unlimited control over your digital life. By using these services "for free" you are actually working for free for them. Refuse to be used: choose a solution that is "Free as in Freedom" rather than "free as in beer".

Which brings us to the second problem: security. Privacy and security are often interconnected, but they are not the same thing. Installing these apps on your phone is equivalent to installing a black box of obscure code that can do literally anything code can do, without the user having any control over it. How much do you trust Google? Even if you do, how much do you trust the programmers who wrote the app? As part of our job, my colleagues and I routinely audit cryptographic code. Not all the results of our audits are public, and I can tell you that I am very often horrified by the bad coding practices and exploitable bugs one can find in software allegedly written by top-tier programmers. I will speak anecdotally here: I know that a certain contact-tracing app has not undergone a complete code audit for cost reasons, and that some of the security issues identified were not deemed worth fixing. This is not surprising: in an emergency, nobody ever has time to do things properly. On the other hand, there are a lot of economic and political interests in developing and adopting these apps, so they will be given a pass even if their security is shabby. To summarize this point: installing these apps on your phone enlarges the attack surface that can be exploited by the bad guys.

But then of course there is the usual argument: even if these apps are insecure, even if they are not perfect, don't you think they serve a useful cause? My answer is: no, I actually think they do not serve anything useful at all, because they just do not work. This has been brilliantly summarized by the famous public-interest technologist Bruce Schneier, whom I will quote here:

“My problem with contact tracing apps is that they have absolutely no value [...] I mean the efficacy. Does anybody think this will do something useful? … This is just something governments want to do for the hell of it. To me, it’s just techies doing techie things because they don’t know what else to do.”


There are so many things that do not work with contact-tracing apps that I don't even know where to start. First of all, the app cannot detect contact with infected surfaces or exposure to aerosol, both of which have been shown to account for a large share of infections. Poor ventilation in closed spaces accounts for most "superspreader" events: infection in these cases has been reported at distances much greater than the range of the Bluetooth Low Energy technology used by contact-tracing apps. Moreover, the app does not broadcast continuously (that would drain the phone battery pretty fast) but only at intervals of 10-15 minutes. All this means a lot of false negatives; but there are also a lot of false positives, because the app doesn't know, for example, whether there was a plexiglass shield between you and the supermarket cashier who subsequently tested positive, and even if you were in close contact with an infected person the infection rate is not 100%. Again, Bruce Schneier summarizes it best:

Assume you take the app out grocery shopping with you and it subsequently alerts you of a contact. What should you do? It's not accurate enough for you to quarantine yourself for two weeks. And without ubiquitous, cheap, fast, and accurate testing, you can't confirm the app's diagnosis. So the alert is useless.

Similarly, assume you take the app out grocery shopping and it doesn't alert you of any contact. Are you in the clear? No, you're not. You actually have no idea if you've been infected.

The end result is an app that doesn't work. People will post their bad experiences on social media, and people will read those posts and realize that the app is not to be trusted. That loss of trust is even worse than having no app at all.

It has nothing to do with privacy concerns. The idea that contact tracing can be done with an app, and not human health professionals, is just plain dumb.


Notice this is exactly what has been observed in the userbase so far, both for SwissCovid and Immuni: in the first days after each app's release we got hit by big news of "millions of downloads in a few hours": people got curious, installed the app, realized it's trash, and uninstalled it. This is a funny read (in Italian) about what happens to your life when you get a notification of exposure.

But to tell you the truth, all of the above are just a nuisance compared to what is, in my opinion, the greatest issue with contact-tracing apps: the idea that I need to carry a smartphone with me (and one running a non-free OS specifically designed to track and manipulate users) in order to conduct a normal life.

In case you were not paying attention: this is what leads us all to Damnation and down the China model. It is a trick used countless times in history, and I am really puzzled that we are still blind to it: even the dumbest people on Earth will rebel against power if deprived of all their freedom at once, but will stay sleepy and peaceful if the same freedom is constantly eroded and slowly stripped away.

Here's what happened in China. You might have heard of a smartphone app called WeChat; it is not very popular in the West, but it is literally impossible to live in China without it. It is a voice/IM app, news feed, payment system, and collaboration platform, all in one. It is heavily controlled by the government, with which it shares the private information of every citizen, and it tracks people around the clock. It is literally The Orwellian Nightmare. For this reason, its mandatory adoption has been pushed heavily by regulators: you cannot call a taxi or buy groceries without it, you cannot get housing or schooling without it; you are basically a ghost in society.

And when, last week, I heard from friends that some bars here in Zurich started denying entry to patrons who don't have SwissCovid installed, I felt a little bit that way. I'm sure it's somehow illegal (at least here in Switzerland) to deny entrance to a public venue based on a smartphone app, but people are doing it anyway, on the basis that you cannot protest against such a clearly thoughtful decision; you would be a bad person otherwise.

Contact-tracing apps produce various dangerous sociopolitical phenomena. On one side, suddenly everybody seems to have become an expert in contact-tracing technology. On the other side, they polarize people so much that even in the scientific community many are blind to their implications. Medical experts have shown support for these apps, but just because it makes life easier for them - they don't think of the deeper ramifications. I mean, it would be great for ME personally if access to the whole Internet were shut off to anybody who cannot prove academic training in cybersecurity or related fields. But, seriously doc, can we have a discussion about this?

Ultimately, the problem is that "better to do something than do nothing" does not apply in cybersecurity: in many cases it is literally better to do nothing, because a careless cure can easily be worse than the disease.

I apologize if you got angry reading this, but I really think the fear of the virus has pushed critical thinking too far into second place.




Answers to questions I receive frequently:

Q) Even if these apps are only 10%, or even just 1% accurate, don't you think they are better than nothing?

A) Let me correct myself: "better to do something than do nothing" does not apply in cybersecurity, and not only there. Wouldn't it be a good idea to ban all bikes, since they cause accidents? You always have to weigh the costs against the benefits. An app that is 10% effective does NOT weigh favorably against the many drawbacks I explained above. So, ultimately, no: I don't think they are better than nothing.



Q) But the doctors know better, don't you think?

A) The doctors know, on average, NOTHING about cybersecurity, or even UX. They know that these apps can slow down the spread of the virus, fine. But they don't know *how much*, they do not consider that people are people rather than robots, and they have no idea of the costs and risks of the technology. Just to be clear: this has nothing to do with the seriousness of the emergency, which I do not deny. Even if Covid-19 were a terrible death sentence at any age range, contact-tracing smartphone apps would still be a terrible idea to mitigate the problem.



Q) Don't you think you are putting your paranoia before public health?

A) I explained above how the privacy issue is the *least* of my worries here. I know it's a boring read, sorry for that, but please read again.



Q) But then what do you think is the reason why authorities are recommending these apps?

A) This is a good question; I think there are many answers. On one side, control: I think many governments see this as a "test", to check how much further down the China path they can go without people getting pissed off. On another side, personal incentives: companies have an interest in designing and selling these solutions, Google and Apple have an interest in further strengthening their mobile market dominance by embedding the functionality at a low level in their OSes, and academics have an interest in having their names on safety reports or research papers on the topic. And then, of course, there is the omnipresent desire for virtue-signalling: "Hey, look at me, I want to support this because it is a Good thing and will make the world a better place" - shouting is less effort than actually doing something, no?



Q) Are you also against the use of face masks?

A) No.



Q) What triggered you in writing this post?

A) Mainly that people have become super fascist and/or partisan in advocating something they know nothing about. But also the fact that, apparently, in the modern, anti-intellectual society we live in, working and having formal training in a certain field does not help get your opinion on the matter heard. "My ignorance is as important as your knowledge" has become the new mantra here, and I blame social media for this.







(July 29th, 2020) NIST Announces Finalists for the 3rd Round PQC Competition

A few days ago NIST published the list of finalists for the 3rd round of the PQC competition. I'm a bit disappointed that NIST apparently heavily prioritized performance over (estimated) security, but I guess it had to be expected; after all, we need standards that must be suitable for a broad range of applications.

EDIT 2020-08-23: Just to be clear, my comment above is not meant as criticism of NIST; they have very smart people evaluating the candidates and I'm sure they are making very rational decisions.







(May 13th, 2020) Adiabatic Quantum Computing News

There is big news related to the kind of quantum computing technology (adiabatic QC) developed by companies such as D-Wave. For a long time it has been unclear whether such technology could, even just in theory, be better than a classical computer. Now it looks like the answer might be yes. From Scott Aaronson's blog:

"Matt Hastings has announced the first provable superpolynomial black-box speedup for the quantum adiabatic algorithm [...] the first strong theoretical indication that adiabatic [...] can ever get a superpolynomial speedup [...] over all possible classical algorithms"

What it means: basically, we cannot say anymore that adiabatic QC is a scam. We still have no evidence, as far as I know, that quantum adiabatic annealers are really useful quantum computers, so one might say that, as far as practical aspects are concerned, they are still snake oil; notice also that the scientific paper mentioned above gives no evidence that adiabatic QC can solve any real-world problem (just a theoretical one with no applications). But still, it means that from now on adiabatic QC is no longer something we can dismiss as pure BS (even if it looks very unlikely that it could have any implications for cybersecurity).







(March 20th, 2020) Personal Coronavirus Update

Just a quick update to say that (luckily) neither I nor my relatives and friends have so far been affected by the infamous virus. I am certainly worried about the virus itself, but I am actually more worried about the panic it is causing. This brings not only infamous cases of mass hysteria (toilet paper, seriously?), but also vigilantism, intolerance, and a willingness to trash any form of civil liberty and understanding of each other's feelings and needs in the name of "the emergency situation". To me, this looks like the post-9/11 psychological effect all over again, and I would surely not enjoy seeing it happen. That's not to say that people should be careless (please, be safe and keep others safe), but welcoming a new wave of fascism and authoritarianism in exchange for a false sense of security is definitely not a good idea.







(March 8th, 2020) Upcoming Speaking Engagements

Provided they do not get canceled because of the spread of the coronavirus Covid-19, here are my upcoming speaking engagements.

On March 11th I will be in Paris for the Deeptech Week, speaking at an event called "Quantum Cyber-Security : Impact And Challenges".







(March 8th, 2020) Make Quantum Indistinguishability Great Again

New paper together with Juliane Krämer and Patrick Struck from TU Darmstadt, now online on Eprint and arXiv. I think the results are pretty cool: basically, we show that so-called "type-2 operators" or "minimal oracles" for encryption are, perhaps counterintuitively, very natural for many real-world quantum-resistant schemes. This allows us to define a sound notion of quantum indistinguishability where the challenge phase is quantum; the new security notion is strictly stronger than commonly used ones but at the same time achievable by "natural" schemes such as barebone LWE encryption.
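For readers unfamiliar with the scheme, here is a toy, wildly insecure sketch of barebone (Regev-style) LWE encryption of a single bit; the parameters are illustrative only, and real instantiations use much larger dimensions and discrete-Gaussian errors:

```python
# Toy LWE (Regev-style) bit encryption. NOT secure: tiny parameters,
# naive error distribution, single-bit messages.
import random

q, n, m = 257, 16, 64          # toy modulus and dimensions

s = [random.randrange(q) for _ in range(n)]                 # secret key
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [random.choice([-1, 0, 1]) for _ in range(m)]           # small errors
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
# Public key: (A, b) with b = A*s + e mod q.

def encrypt(bit):
    # Sum a random subset of the samples; add q/2 to encode a 1.
    S = [i for i in range(m) if random.random() < 0.5]
    u = [sum(A[i][j] for i in S) % q for j in range(n)]
    v = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return u, v

def decrypt(ct):
    # v - <u, s> is (small error) for a 0 and (q/2 + small error) for a 1.
    u, v = ct
    d = (v - sum(u[j] * s[j] for j in range(n))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0

assert all(decrypt(encrypt(x)) == x for x in [0, 1, 1, 0])
```

The hardness assumption is that recovering s (or distinguishing (A, b) from random) is infeasible even for a quantum computer; everything else is bookkeeping modulo q.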

Needless to say, the title is purely ironic (but it actually also fits what we are doing, as we "resume" old ideas from CRYPTO 2016 that were kind of abandoned since then).







(March 8th, 2020) Old Article on 01.net

I decided to copy to this webpage, just for the sake of keeping records, articles I previously wrote for other venues. The following appeared in June 2018 on the Italian-language news website 01.net.


The Security of Quantum Data in the Internet of the Future

Tommaso Gagliardoni, June 28th, 2018

Let us suppose for a moment that we suddenly found ourselves in 1968. The first thing you would notice (obviously!) is that you would have serious difficulties reading this article. One of the most powerful supercomputers of the time, the IBM System/360 Model 85, had the equivalent of a whopping 4 megabytes of memory. It was a jewel of technology, incredibly fast and expensive, so much so that only a few large institutions in the world could afford one - only 30 units were ever produced. It consumed "only" 4 kW of electricity, and it was probably smaller than your kitchen. Probably.

Today, in 2018, we carry in our pockets palm-sized devices that outperform those antiquated behemoths by several orders of magnitude, and we use them daily to access a global Network capable of exchanging monstrous amounts of data. A network, indeed, to which everything nowadays is connected and on which everything depends.

We live in "fast times": 50 years change many, many things, especially in the IT world. It is therefore natural to draw a comparison with today's first futuristic prototypes of quantum computers. Extremely sophisticated, expensive, bulky, and complex machines, the vanguard of a revolutionary technology. A technology that, however undoubtedly young, can less and less confidently be called "embryonic".

In recent years we have witnessed a veritable explosion in quantum information processing research. Things that might have seemed science fiction at the beginning of the century are now within reach: we can send and measure quantum information hundreds of km away, reliably correct many transmission errors, and in a short time we have gone from barely containing a single qubit ("quantum bit") to having machines with just under a hundred of them.

Considering that, according to many researchers, 90-100 qubits is the threshold beyond which a quantum computer can perform operations unthinkable for the most powerful classical supercomputer, the general feeling is that we will probably not have to wait another 50 years to witness what promises to be a new computing and technological revolution.

We must therefore look to the future and start seriously considering what scenarios might follow from this revolution.



The security of quantum data

The discipline of cryptography is traditionally seen as an area that could be drastically impacted by the advent of the quantum computer. The theoretical possibility that the quantum computers of the future could compromise ciphers that are today ubiquitously used to secure critical infrastructure is seen as a global threat, and in recent years governments and public and private institutions have multiplied their efforts to defend against it.

The research areas of so-called quantum-safe cryptography and quantum cryptanalysis deal with preparing modern infrastructures (the Internet first and foremost) for a future scenario in which quantum computers capable of breaking modern cryptographic protections will be available to a few powerful actors - though not necessarily to the rest of the population.

Such a scenario no longer seems too remote or too futuristic, but the good news is that we already have suitable solutions today. For example, at IBM Research, where I work, we are developing and refining a class of cryptographic schemes that protects data by encoding it within complex mathematical problems (algebraic structures) called "lattices".

These problems are so hard that, by current estimates, not even a powerful quantum computer of the future will be able to solve them. This is very useful for us cryptographers, because it means we can protect our data starting today against a scenario in which quantum computers become strong enough to break the security of today's traditional schemes. In other words, it is not necessary (nor advisable!) to wait for powerful quantum computers before starting to migrate our infrastructures toward future-proof security: our quantum-safe cryptographic schemes are now ready for large-scale adoption and have already entered the standardization process. IBM Research is continuously looking for partners interested in taking part in pilot programs in this direction.

Looking to the future, however, also means looking beyond that scenario. Perhaps drawing inspiration precisely from the history of modern computing.

Just as at the dawn of the computer era only a few large organizations could afford to work with a computer, today only a few large companies and research centers have a quantum computer prototype. But, as we all know, historically this state of affairs did not last long: available computing power kept increasing, and soon the first computers were connected to each other, at first in small networks, which then gradually grew to form what is today the network of networks: the Internet.

In the same way, as soon as quantum computers start spreading on a large scale, it will be natural to connect them to each other: the technology needed to exchange quantum data already exists, so it does not take a great effort to imagine a "quantum Internet" of the future, in which quantum data is encoded in wave packets, photons, or particles, and exchanged from quantum computer to quantum computer at crazy speeds via lasers, optical fibers, or whatever else future technology will make available. After the quantum evolution of computer technology, a network for exchanging quantum data would be the natural evolution of modern data networks.

Easier said than done, though. Let us set aside for a moment the engineering and technological challenges to be solved in order to build a "quantum Internet", about which, without a crystal ball, we can say very little. The problems to be solved are first of all of a purely theoretical nature.

The point is that, as has been known for more than a century in both the physical-mathematical and the engineering disciplines, quantum information behaves in a peculiar way, utterly counterintuitive compared to the classical "digital data" we are used to and that traditional computers work on, because of the physical laws of quantum mechanics. Designing a "quantum Internet" will first of all require understanding how to handle these "anomalous" behaviors.

For example, quantum information cannot be copied - so it would be difficult to perform a backup, or a log of the traffic. Or again: quantum data can "interfere", canceling out or reinforcing one another - not only during transmission but also during computation. All these characteristics, so "alien" to our modern way of conceiving data networks, mean that completely new approaches and transmission protocols will be needed. The way data is represented, encoded, transmitted, and manipulated will have to be rethought. And protected.

Indeed, it will be important not to repeat the mistakes of the past: when the first computer networks were created, communication security was not considered a priority. The Internet itself was not structured with security in mind: protocols to encrypt and authenticate data in transit were added only later, "in layers", with considerable technical difficulties whose repercussions we still suffer.

Today we know what a mistake it is to neglect security. In creating a quantum Internet, cryptography will have to play a fundamental role: what will be needed are quantum protocols (that is, protocols running on a quantum computer) capable of guaranteeing the security, privacy, authenticity, and integrity of data. Such protocols will have to work natively on quantum data, so modern cryptographic techniques (including quantum-safe cryptography) will not help: completely new schemes will be needed.



Quantum security

The research I carry out in the Security & Privacy group at IBM's Zurich Research Lab is in the field of "quantum security". Quantum security is the mathematical study of information security in the presence of quantum processes of any kind. This includes quantum-safe cryptography and the modeling of "quantum adversaries", but a particular focus is given to the study of cryptographic protocols for the security of quantum data.

In this line of research I study, for example, how a quantum algorithm can encrypt data in such a way that, without the right unlocking key, nobody - not even another quantum computer - could ever recover it.

In one of our most recent works, published at the EUROCRYPT conference, my collaborators and I solved a fundamental problem: how to guarantee the integrity of quantum data in transit over a network, even when the threat model includes technologically very powerful adversaries. We are currently studying the security of other protocols (for both classical and quantum computers) that are secure against the "quantum hackers" of the future.

This research field is young and terribly exciting. The countdown has started, but we are witnessing continuous progress, and I am more than sure that we will arrive prepared, with adequate tools to make the Internet of the future more secure than ever.

Finally, it is worth remembering that there is no need to wait for future developments to get hands-on experience with a quantum computer: with IBM's Q Experience program you can learn to write code for quantum computers, access for free (via cloud) the quantum computer prototypes developed by IBM (currently up to 16 qubits), and run quantum algorithms on those machines.

This network is already used by a myriad of students, researchers, and universities around the world, and represents a fundamentally important tool for acquiring the know-how needed to arrive prepared for the imminent computing revolution.







(Dec 10th, 2019) Relocation to Zurich + Quantum Supremacy Blog Post

Just wanted to let you know that I successfully completed my relocation to Zurich! I am continuing to lead the innovation and research efforts in quantum activities at Kudelski Security, but from here it will be more convenient for me to meet interested stakeholders in the Crypto Valley area. Plus, I love Zurich!

Let me also point you to a blog post I recently wrote for the KS Research blog on the topic of quantum supremacy (in particular, explaining what happened between Google and IBM).







(Aug 12th, 2019) Quantum Vocabulary

Too often I read articles where there is confusion (both in concepts and in terminology) between quantum cryptography, quantum key distribution (QKD), and post-quantum cryptography. I wish this confusion would vanish from the popular literature; the distinction is not so difficult to grasp after all, and the confusion makes it much harder for us scientists to explain our results.

Let me recap it for you:

Quantum Computing (QC) is the computing technology being researched right now. A few misconceptions: a QC cannot just "factorize 15" (that was long ago), it is not only interesting for breaking cryptography (actually, most of the useful applications are civilian), a famous Canadian company does not have a "quantum computer" (they call it that, but it's something different), and building a QC would not "violate the laws of quantum mechanics", nor is there any scientific evidence that it should be impossible (it's just that the engineering effort is hard, as with any new technology). If you are amongst those who think they can show that building a QC is impossible, then congratulations on winning the 100k USD bet!

Quantum Cryptography is cryptography that runs on a quantum device. This can be a quantum computer proper, or something more rudimentary that still uses quantum effects. Just like classical cryptography, there are many different kinds of quantum cryptographic applications (symmetric-key encryption, public-key encryption, fully homomorphic encryption, etc.); the general rule is that quantum cryptography encrypts quantum information (qubits) rather than classical information.

Quantum Key Distribution (QKD) is a particular subfield of quantum cryptography. It is what is usually, mistakenly, called "quantum cryptography", but it is very different in concept. QKD uses quantum states to establish a classical secret key over a quantum channel; it does not encrypt quantum data. Whereas quantum cryptography "proper" requires a working quantum computer (and is therefore impossible nowadays), QKD only requires modest quantum hardware and can be realized today (you can actually buy it). However: QKD only solves a very, very limited cryptographic task, at the cost of expensive infrastructure; it is vulnerable to man-in-the-middle attacks (so it has to be composed with other, less secure protocols); and the actual hardware implementations are usually not as secure as the theory promises.

Post-Quantum Cryptography is a badly, badly, badly chosen term and it should be avoided because it really creates confusion; somehow the name stuck because it sounded cool. A better name would be "quantum-resistant" or "quantum-safe" cryptography. This is cryptography in the classical sense (that is, you can run it on your PC), but based on very hard math problems supposedly resistant against future quantum computers. It has drawbacks in performance compared to traditional schemes such as RSA, but it is important to adopt it ASAP, even if quantum computers won't be around for another 100 years, otherwise it's gonna be too late.

Hope that helps.







(July 26th, 2019) Debian Sid Initramfs Boot Dependency failed for boot with LUKS after update

If you also had boot problems after performing an update on Debian Sid a few days ago (boot stuck at the initramfs shell), here is what worked for me.

Disclaimer 1: this is not guaranteed to work for you as well
Disclaimer 2: this is also not guaranteed to not give you terminal illness or cause widespread thermonuclear war
Disclaimer 3: I am a n00b of Debian and I don't really know what I am doing, so it's basically trial and error
Disclaimer 4: Debian being Debian, *nobody* really knows what they are doing anyway, and if they claim otherwise then they are liars or Reptilians.

Anyway, here you go:

boot a Debian live distro (e.g. from USB drive)

make sure you have networking and DNS running on the live system, and install any tool you might need (vim etc)

sudo su
lsblk -o name,uuid,mountpoint

Sometimes these problems are caused by wrong UUIDs in /etc/fstab or /etc/crypttab. I assume you are a pr0 and this is not your case. Then, identify the boot and encrypted partitions on your hard drive, and unmount them if they are not already unmounted. For the following we assume:

/dev/sda1 EFI partition
/dev/sda2 boot partition
/dev/sda3 encrypted LUKS partition containing the root partition
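If you want to script the UUID cross-check mentioned above instead of eyeballing it, here is a minimal sketch (the helper name `referenced_uuids` is mine, not a standard tool):

```shell
# Print the UUIDs referenced by an fstab/crypttab-style file, one per line.
# Compare this list against what the kernel actually reports (lsblk -no uuid).
referenced_uuids() {
  grep -o 'UUID=[0-9a-fA-F-]*' "$1" | sed 's/^UUID=//' | sort -u
}
```

Any UUID that appears in /etc/fstab or /etc/crypttab but not in the lsblk output is a likely culprit for a failed boot dependency.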

cryptsetup open /dev/sda3 sda3_crypt

Insert passphrase to unlock the partition (or provide a keyfile). If the underlying partition is an LVM one, you also need to activate the logical volume group and mount the correct logical volume (but I will ignore this here). Then mount everything in a chrooted environment:

mount /dev/mapper/sda3_crypt /mnt
mount /dev/sda2 /mnt/boot
mount /dev/sda1 /mnt/boot/efi
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
mount --bind /run /mnt/run
chroot /mnt

Now it is a good idea to update the system and firmware modules if needed, especially if you were getting warnings when booting the system before the boot failure. In my case installing firmware-linux removed the warnings. So, check that you have network connection in the chrooted system (if necessary edit resolv.conf to add DNS). Edit /etc/apt/sources.list and make sure that there are the following two lines:

deb http://deb.debian.org/debian/ sid main contrib non-free
deb-src http://deb.debian.org/debian/ sid main contrib non-free

Now do a system update (as mentioned, install the related firmware modules if you get warnings during initramfs generation):

apt update
apt upgrade
depmod `uname -r`

You might get an error at this point: perhaps the kernel version of the live system is different from the one installed on your system. If this happens to you, build the initramfs against the installed kernel to avoid issues. To check what version you have, do cd /boot and ls -Al. You will see a bunch of files, for example:

total 107976 drwxr-xr-x 5 root root 4096 Jul 26 08:15 ./
drwxr-xr-x 24 root root 4096 Jul 21 21:39 ../
-rw-r--r-- 1 root root 206118 Mar 15 03:16 config-4.19.0-4-amd64
-rw-r--r-- 1 root root 206213 Jul 19 00:23 config-4.19.0-5-amd64
drwx------ 3 root root 4096 Jan 1 1970 efi/
drwxr-xr-x 5 root root 4096 Jul 23 20:54 grub/
-rw-r--r-- 1 root root 40260505 May 1 13:57 initrd.img-4.19.0-4-amd64
-rw-r--r-- 1 root root 52681191 Jul 26 08:15 initrd.img-4.19.0-5-amd64
drwx------ 2 root root 16384 Jan 10 2019 lost+found/
-rw-r--r-- 1 root root 3365519 Mar 15 03:16 System.map-4.19.0-4-amd64
-rw-r--r-- 1 root root 3371003 Jul 19 00:23 System.map-4.19.0-5-amd64
-rw-r--r-- 1 root root 5213424 Mar 15 03:16 vmlinuz-4.19.0-4-amd64
-rw-r--r-- 1 root root 5217520 Jul 19 00:23 vmlinuz-4.19.0-5-amd64

Here the most recent version is 4.19.0-5-amd64, so in this case do:

depmod 4.19.0-5-amd64
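If you prefer to script the version pick rather than read the directory listing by hand, here is a small sketch (the `latest_kernel` helper is mine):

```shell
# Print the highest installed kernel version found in a /boot-style directory,
# using version sort so that e.g. 4.19.0-10 would rank above 4.19.0-9.
latest_kernel() {
  ls "$1"/vmlinuz-* 2>/dev/null | sed 's|.*/vmlinuz-||' | sort -V | tail -n 1
}
# Then: depmod "$(latest_kernel /boot)"
```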

Rebuild the initramfs:

update-initramfs -u

It might complain with warnings about missing locales; in that case run dpkg-reconfigure locales (select `update all`, which will take some minutes), then redo update-initramfs -u.

Finish by updating and reinstalling the bootloader:

update-grub
grub-install /dev/sda
exit

Now it is important to unmount everything in a clean way and in the right order before rebooting the system. Also a filesystem check helped in my case:

umount /mnt/boot/efi
umount /mnt/boot
umount /mnt/dev /mnt/proc /mnt/sys /mnt/run
umount /mnt
fsck -f /dev/mapper/sda3_crypt
cryptsetup close sda3_crypt
exit

Unplug the live distro media, reboot your machine, and pray to your favorite digital gods.






(May 30th, 2019) My Article on Quantum Signatures and Signcryption on the Kudelski Security Research Blog

I wrote an article on the KS Research Blog about some of the recent results I submitted to ASIACRYPT together with Gorjan Alagic and Christian Majenz. The take-home message is twofold. On one hand, cryptographically signing a quantum state (as in public-key digital signatures) turns out to be impossible. This result was considered folklore as a consequence of a result by Barnum et al. in 2002, but it turned out that formally proving it was quite tricky and not a simple extension of the symmetric-key case they consider. On the other hand, we show how to circumvent such impossibility by using signcryption, that is, signing and encrypting at the same time. We give novel security notions and constructions for quantum signcryption, with the caveat that (because of the impossibility result above) non-repudiation is anyway impossible.





(Apr 7th, 2019) My Article on Quantum Cryptography on the Kudelski Security Research Blog

I wrote an article on the KS Research Blog about quantum cryptography. Not to be confused with quantum key distribution (QKD), which is a specific subfield of it, quantum cryptography is the manipulation of data at a quantum level in order to achieve tasks such as encryption, digital signatures, and fully homomorphic encryption. Some of these classical tasks have a natural analog in the quantum realm; some others are impossible classically but possible quantumly, or vice versa. In that blog post I give an introduction to quantum cryptography, as well as presenting in a non-technical way some of the latest scientific breakthroughs in the area.





(Mar 21st, 2019) Email Spam Filter Tightened

I'm really pissed off at the amount of academic spam emails I've been receiving recently. Apparently my provider, Gandi.net, is not doing a great job at blacklisting these idiots, so I decided to manually add some keyword-based filtering. If you receive a reject message with an angry text (and you're not a spammer) please try to send it again using a different subject or email address (oh, and sorry for the angry words, I do not really think that your mom will die in her sleep tonight).





(Feb 28th, 2019) My Article on Quantum Computing and the New IT Revolution on the Modern CISO Blog

I wrote an article on the Modern CISO Blog about Quantum Computing that pretty much summarizes my view on the issue. I did my best to reconcile scientific accuracy with the need for a simple, eye-catching vocabulary. I hope Scott Aaronson won't be mad at me :)





(Jan 20th, 2019) My New Position at Kudelski Security + Quantum Signcryption Paper

I am super excited to announce that I have recently started a new job as Cryptography Expert at Kudelski Security! Looking forward to doing a lot of cool and crazy stuff with my new colleagues! The relocation to Lausanne went smoothly and I'm now pretty settled. I loved my stay at IBM Research and I'm thankful to my former colleagues for the great time spent together!

On an unrelated note, check out this cool paper (collaboration with Gorjan Alagic and Christian Majenz) where we show that: 1) signing a quantum state with a private-public key system is impossible, unless 2) the state is also encrypted, leading to a notion of quantum signcryption, but 3) in any case non-repudiation is impossible to achieve quantumly. I think that's a pretty cool result, and I'm really thankful to my coauthors; this would not have been possible without a lot of teamwork!






(Oct 18th, 2017) Travel to Uzbekistan: Some Updated Info

I've traveled recently to Uzbekistan for a holiday, which was quite amazing. But since it is a somewhat off-the-beaten-path destination, I found out first-hand that some of the information on the internet for Western travelers is quite outdated. So, although off-topic, here is some more recent "practical" information.

First of all, it is true that Uzbekistan is a cash society, but it is not true anymore that you need to carry around bags full of banknotes: 20000 and 50000 Som (UZS) bills have been introduced (very) recently.

Moreover, there is no longer a difference between the "official" and "black market" exchange rate with the dollar. This is because, also very recently, the government de-regulated the rate, and banks and money exchanges have now adapted to the market price. This has three immediate consequences: 1) black-market money changers stopped operating: now you can safely go to a bank without being ripped off. 2) the chart of the UZS/USD exchange rate looks scary. That peak (from about 4000 to 8000 Som/USD) is not because of a war or a coup: it's just the de-regulation in effect - I say it because at first it surely looked scary to ME. 3) there is currently a somewhat widespread scarcity of cash. As far as I understand you can only withdraw a limited amount of money at the bank, and only upon showing a valid ID. So, if you visit the country, the old recommendation of carrying some USD in cash still holds.

Also, withdrawing cash is tricky. In Bukhara, for example, there is only one ATM in the city center, and it only accepts VISA credit cards. In general, according to my personal experience, Maestro, MasterCard and any debit/prepaid cards do not work at all, only VISA. But maybe in Tashkent's "western" hotels the situation is better.

Life there, as far as I could see, is cheap, but not crazy cheap. The country feels very safe from a tourist's perspective. It's full of police everywhere, but they did not bother me. Islam is very, very moderate. Infrastructure is meh, but liveable. Driving is crazy if you've never been to Rome. I can say there is a reason why fruit and vegetables from Uzbekistan (especially the fabled melons) are famous in the whole ex-USSR. The climate in September was gorgeously Mediterranean. Overall, to me the whole country looked disturbingly familiar: it's basically like southern Italy, but without pork and with more Muslim architecture.

Kidding apart, I really liked the place and the people, so friendly and genuine. The hospitality is simply unbeatable: western tourists are not so common, so everybody wants to talk to you, practice their English, invite you for tea, show you around to family and friends, etc.

A last piece of advice: your life will always be empty until you try the local plov (osh). According to the locals, the browner the carrots, the better.







(Jul 2nd, 2017) My New Position at IBM Research Zurich

Long time since I've updated this page. Just a couple of words to say how excited I am to have started my new position as a post-doc at IBM Research Zurich. The relocation was pretty intensive business, but now I am finally settled and looking forward to enjoying my new life here in Switzerland.







(Nov 5th, 2016) Norcia and Central Italy Earthquake

I would just like to reassure you that my family, friends, and relatives are all fine after the recent earthquake which struck Central Italy. I want to thank wholeheartedly all the people and friends who asked me if everything was fine. My thought goes to all the families who have not been as lucky as we were. Those places were incredibly beautiful, and I am sure that they will be again very soon.







(Oct 2nd, 2016) Back in Darmstadt

After an intensive period of travelling, I am finally settled back in Darmstadt again - for a while. I am working on my PhD thesis, the time for my graduation has finally arrived, and I would like to finish it ASAP. Therefore, hereby I solemnly commit to: 1) not start other new research projects for a few months (the open ones I have are more than enough) 2) not travel again until Christmas at least - that's gonna be hard! On a side note, our ICITS paper has been presented at QCRYPT - pity I missed the conference, but I'm happy nonetheless!







(Jul 31st, 2016) Trip to the US!

Hey, I'm heading off to the States! I will first go to Tacoma, WA, to attend ICITS, where I'll give a talk about our recent results on the Computational Security of Quantum Encryption (joint work with Gorjan Alagic, Anne Broadbent, Bill Fefferman, Christian Schaffner, and Michael St Jules). Then I'll move to Santa Barbara, CA, where I'm giving a talk at CRYPTO about our work on Semantic Security and Indistinguishability in the Quantum World (joint work with Andreas Huelsing and Christian Schaffner). And then a few days of vacation for some Californian desert adventure time!







(Mar 6th, 2016) Copenhagen

I'm in Copenhagen, visiting my colleague and friend Gorjan Alagic at Matthias Christandl's group of Quantum Information Theory QMATH at the University of Copenhagen. I should also visit Lars Knudsen's group at the Technical University of Denmark one of these days. Looking forward to some productive time!







(Dec 19th, 2015) Incoming Event: First CROSSING Winter School on Quantum Security

I want to point you to this winter school, which will be held in Darmstadt, Germany, from January 25th to 29th, 2016.

Marc Fischlin (my advisor) and I initially devised this school as a nice academic event to be hosted by CROSSING here at the Technical University. We made sure that the school would be free of charge to attend, while at the same time keeping the quality of applications at a very high level.

Apparently, people liked the idea. Very renowned speakers accepted our invitation, and we have been overwhelmed with requests for participation from all around the world. We planned for around 30-35 participants; we received almost three times that many applications. We worked very hard to find a suitable venue, and we have now managed to extend the list of participants to more than 70. This now includes not only students, but professors, professionals from private companies, military organizations, and so on. So, long story short: the whole thing turned out much bigger than we expected!

I am very excited; it is also a completely new experience for me to be part of the organization of such a big event, and there are a lot of things to do and take care of. It's really hard work too, especially considering that we are trying to do in 6 months what usually takes 12. But I'm sure it's going to be a great success, thanks to the amazing speakers we will have. Looking forward to the end of January, but for now I wish you all happy winter holidays (I am myself finally enjoying some rest in my hometown in Italy!)

All the best!






(May 6th, 2015) Lugano, Switzerland: ICITS 2015 and visiting USI

I am in Lugano for ICITS 2015 and visiting the USI. I also gave a talk at Stefan Wolf's group about our work `Semantic Security and Indistinguishability in the Quantum World` (joint work with Andreas Hülsing and Christian Schaffner, full version available on the Eprint).





(Apr 7th, 2014) Buenos Aires, Argentina: PKC'14

I was in Buenos Aires for the PKC'14 conference. Feel free to check out the slides of my talk `General Impossibility of Group Homomorphic Encryption in the Quantum World` (joint work with Frederik Armknecht, Stefan Katzenbeisser and Andreas Peter, full version available on the Eprint).





(Dec 5th, 2013) Bangalore, India: Asiacrypt'13

I am currently in Bangalore for the Asiacrypt'13 conference. Feel free to check out the slides of my talk `The Fiat-Shamir Transformation in a Quantum World` (joint work with Özgür Dagdelen and Marc Fischlin, full version available on the Eprint).





(Apr 29th, 2013) Eric Schmidt about Google Glass privacy: society will adapt

You may have heard about this Erik-guy: he's the big boss of some non-evil tech company everybody likes. The same guy who says his company is basically reading your mind, but it's OK because they do it with your permission, and in any case you could always change your name in the future to avoid association with your digital past. The same guy who gave us this pearl of wisdom:

"If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place"

except, maybe it's better if you don't try to out-creep him by publishing some of his (publicly available) personal data, or he may get pissed.

On a sidenote, you may have heard about Google Glasses: the new, revolutionary, ultrasmart, uberhipster thing for sexy geek people who love tech, freedom, democracy, and most importantly, sharing. The new, cool people everybody obviously wants to be like.

You may also have heard some nonsense Luddite criticism about these cool glasses having a small camera potentially turned on all the time, recording your life and the lives of the people around you 24/7. Criticism obviously devoid of any argument, made by old, uncool, paranoid haters never leaving their damp basements full of polluting Debian machines. Oh, and never getting laid. But worry not, the Heric-guy has a proper response to such nonsense:

"criticisms are inevitably [sic] from people who are afraid of change or who have not figured out that there will be an adaptation of society"

Good. Now, the point is: you may also have heard the Enric-guy's opinion about the use of commercial flying drones by civilians:

"It's got to be regulated. [...] It's one thing for governments, who have some legitimacy in what they're doing, but have other people doing it... It's not going to happen."

and then you may have naively wondered: "how come being against the idea of normal people flying their own, cheap drones thanks to the advancement of technology is not just fear of change and failure to adapt to society?".

A lot of readers noticed the above fallacy and raged against it (see for example the comments in the Slashdot article). But I have the feeling that stopping at this observation is dangerously short-sighted. I'm not going to discuss here the use of drones by civilians; I admit I don't yet have strong opinions about it - while I definitely do about people wearing surveillance equipment 24/7: not in my house, anyway. My point is another:

suppose more and more people react against Schmitt's words, pointing at his opinion about private drones as proof of his hypocrisy. Then one day he pops up with apologies: "Yeah, sorry, you're right: maybe private drones are not so bad after all". Satisfied? Not a big damage for him, and the main argument of his detractors is dismissed. Uh... what was so bad about those glasses apart from that thing, anyway? Now you have to explain it all over again to me and all my lazy-minded friends on Facebook...

My suggestion: never lose sight of the big picture of things. Eerie Smith's argument is so laughable that I can understand people enjoy bashing him on that point. But to focus on this strategy is to accept candies from the enemy: there may be poison. Whatever you may think about private drones, the implications of a society full of dorks wearing those glasses are FAR more important. Call me paranoid, but I wouldn't be surprised if this were just a smart marketing move to defuse the much more serious criticisms about this dystopian view of (still more) ubiquitous surveillance.






(Mar 17th, 2013) HowTo: OpenVPN with Elliptic Curve Cryptography (ECC) and SHA2 on Linux Debian

UPDATED VERSION: fixed the OpenSSL vulnerability found by Nadhem Alfardan and Kenny Paterson. Notice that the old version 1.0.1c and previous are vulnerable. Thanks to the anonymous reader who pointed that out (I had already attended a talk by Kenny about the issue in Darmstadt last year, but I was still waiting for a fix). If you are affected you should recompile OpenSSL (the new version) and then OpenVPN; everything should work as before. Please upgrade!

In this howto I will explain how to set up a typical OpenVPN infrastructure with an atypical cryptographic protocol involving Elliptic Curve Cryptography (ECC) and the more recent SHA2 cryptographic hash family on Linux Debian. The problem lies in the fact that this setup is not (yet) natively supported by OpenVPN, and some workarounds are necessary. Proof of this is the fact that you won't find a lot of information on the internet about a similar setup, and what little there is, is either incomplete or not working. It took me a lot of effort to make this tutorial; I really hope it can help some of you in a similar situation.



Summary

We will first install the latest version of OpenSSL on a Debian system (everything also works fine for Ubuntu and similar derivatives). Then we will compile OpenVPN against the newly installed OpenSSL and add a manual patch, which makes it possible to use ECC authentication schemes with SHA2. We will then create a Certification Authority (CA), server and client certificates, and then set up a VPN by enabling routing and NATting over a virtual interface on the server side.



Open problems

The encryption protocol for secure key exchange is still the traditional Diffie-Hellman (DH) and not the more secure ECDH (Elliptic Curve Diffie-Hellman). This cannot be fixed until OpenVPN natively supports ECDH (e.g., with an "ecdh" directive in server.conf instead of the "dh" directive).

Moreover, it is still not 100% clear whether SHA2 is used over both of OpenVPN's channels (control and data) or just over the control channel (see thread at https://forums.openvpn.net/topic8404.html).




Premise

I will have to assume that the reader is familiar with the basic concepts of public-key cryptography, authentication, and hash functions. The typical public-key encryption and signature schemes, such as RSA, Diffie-Hellman (DH) and DSA, do not offer the same advantages (both in terms of security and performance) as ECC does: ECC is faster (on average hardware) and needs smaller key sizes to obtain comparable security, hence saving bandwidth, etc.

Many ECC applications are unfortunately plagued by the draconian North American patent system (there is a lot of mess around some of these aspects, which I will not address here). I am based in Europe, so I am not going to take such considerations into account in this tutorial, but if you are a US/Canadian citizen maybe you should care: I'm not exactly sure which of the following steps, and to what extent, may be deemed inappropriate. Please check if in doubt.

The SHA2 family of cryptographic hash functions comes basically in three variants: SHA-256, SHA-384 and SHA-512, depending on the message digest size. They are all much more secure than the better-known MD5 and SHA1 (SHA-160), which are both to be considered insecure by today's standards. It is worth repeating: do not use SHA1 or MD5 for secure applications, ever.
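As a quick sanity check of the digest sizes, using the sha*sum tools from coreutils rather than the openssl CLI (hex length = bits / 4):

```shell
# "abc" is the classic NIST test vector; sha256sum/sha512sum ship with coreutils.
h256=$(printf 'abc' | sha256sum | cut -d' ' -f1)   # SHA-256: 64 hex chars
h512=$(printf 'abc' | sha512sum | cut -d' ' -f1)   # SHA-512: 128 hex chars
echo "${#h256} ${#h512}"   # prints: 64 128
```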

The main problem with OpenVPN's most common setups is that they usually employ SHA1 as a digest. There is an ongoing public competition by NIST for a new standard called SHA3 (similar to the competition for AES), but it won't be due until mid 2012.

I will write this tutorial for a Debian stable system, but it shouldn't be difficult to adapt it to other distributions. For simplicity, I assume that all the work is done via the root account.

I will assume that there are three machines:
  • the secure working machine where the CA certificates are stored, which should ideally be disconnected from the network
  • a reasonably secure server machine, where the OpenVPN server daemon will run
  • a client machine, which wants to connect to the internet by routing all its traffic through the server machine
Also for simplicity, in this tutorial all the keys and certificates will be generated on the secure machine, and then moved to the client and server. Of course the standard way would instead be to generate, e.g., the client key on the client and then submit it to the CA for certificate generation. I will just simulate this process by using directories on the secure machine, and then copying or moving the certificates and keys as needed.
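The simulated per-role layout can be scripted in a couple of lines (the `make_roles` helper is mine; the paths are just the ones used later in this tutorial):

```shell
# Create the three per-role directories (CA, server, client) under a base dir,
# keeping the CA private-key directory readable by root only.
make_roles() {
  base="$1"
  mkdir -p "$base/CA/private" "$base/server" "$base/client"
  chmod 700 "$base/CA/private"
}
# e.g.: make_roles /root/.ssl
```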




The issue

OpenVPN uses the encryption routines provided by OpenSSL. ECC and SHA2 crypto primitives are relatively new and have only recently been added to the set of OpenSSL's primitives. The problem is that a composite scheme using both has not yet been added to OpenVPN: with the latest stable release you can either choose an RSA setup with SHA2 as a message digest, or opt for an ECC setup with the standard SHA1, but there is no way to mix the two. After I asked around on both the OpenVPN forum and the OpenSSL mailing list, user janjust kindly provided a rough patch (see here) which solves the issue. The patch has been submitted but, AFAIK, not yet included in the official version, so we'll have to apply the fix manually; it is likely that it will be included starting from the next stable release.



Preliminaries

First of all, open a terminal and open a root session with su. On Ubuntu and similar you don't have a root account by default, so either always use sudo before administrative commands, or:

sudo su -

Another option would be to create a root account with:

sudo passwd root

now you can use su. Remember to close every root session you open with exit when you have finished.

Install the following packages on all of the machines involved:

apt-get install wipe build-essential

Keep in mind that we are going to build some big stuff, so it might take a lot of time on a slow machine.




Installing updated OpenSSL

The following must also be done on all the machines. First of all, download and compile the latest version of the LZO library; we will need it later:

cd /root
mkdir .ssl
cd .ssl
wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.06.tar.gz
tar -zxf lzo-2.06.tar.gz
rm lzo-2.06.tar.gz
cd lzo-2.06
./configure
make
make test
make install
cd ..
rm -R lzo-2.06

Then we are going to install OpenSSL. Be aware that OpenSSL is a vital part of any Linux distribution, so it is not possible to simply uninstall the default version and compile the new one without running into a lot of problems. For this reason we are just going to install the latest version in a non-system directory, in this case /usr/local; change it if for any reason it is not suitable to your situation. It would be much simpler if the latest release were available in the Debian repositories, but I'm not aware of any Debian repository carrying updated versions of OpenSSL.

The latest OpenSSL stable release at the moment is 1.0.1e.

wget http://www.openssl.org/source/openssl-1.0.1e.tar.gz
tar -zxf openssl-1.0.1e.tar.gz
rm openssl-1.0.1e.tar.gz
cd openssl-1.0.1e
./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl
make
make test
make install
cd ..
rm -R openssl-1.0.1e

Now we are going to add an alias, so that every time we run the openssl command the freshly installed version will be called instead of the default system one. I will use the nano editor for this tutorial.

nano ../.bashrc

Add the following line at the bottom:

alias openssl='/usr/local/openssl/bin/openssl'

then activate the alias:

source ../.bashrc

Check that everything is OK:

openssl version

Should be something like:

OpenSSL 1.0.1e x Yyy zzz

Now we have to use a slightly modified version of the openssl.cnf configuration file. Put this file in the current directory (/root/.ssl), so we don't have to overwrite the default conf file (though that would probably be a good idea anyway, since the one we will use is a more appropriate version tuned for better security). In case you wonder, the main modification is the addition of a [server] section to give the right permissions to the certificates generated for the server.
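I won't reproduce the whole file here, but the kind of [server] section I mean looks roughly like this (the extension values below are illustrative, not a copy of my actual file; check them against your own openssl.cnf):

```
[ server ]
basicConstraints = CA:FALSE
nsCertType       = server
keyUsage         = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
```

These are standard OpenSSL x509v3 extension keys; the section is then selected with the -extensions option when signing server certificates.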



Installing and patching OpenVPN

This is also going to be done on all the machines.

The latest OpenVPN stable release at the moment is 2.2.1.

NOTICE: I'm still using an old version; maybe the patch has already been included in one of the latest versions, it would be nice to check.

wget http://swupdate.openvpn.net/community/releases/openvpn-2.2.1.tar.gz
tar -zxf openvpn-2.2.1.tar.gz
rm openvpn-2.2.1.tar.gz
cd openvpn-2.2.1

Now apply the following patch to the file ssl.c (you can do it manually: a plus sign at the beginning of a line means that the line must be added, a minus sign means that the line must be removed, while @@ is a marker for the line position in the file).

We finally build OpenVPN by linking it to the newly installed OpenSSL version:

./configure --with-ssl-headers=/usr/local/openssl/include --with-ssl-lib=/usr/local/openssl/lib --enable-iproute2
make
make install
cd ..
rm -R openvpn-2.2.1

Check that everything is OK:

openvpn --version

must be something like:

OpenVPN 2.2.1 x86_64-unknown-linux-gnu [SSL] [LZO2] [EPOLL] [eurephia] built on ...

Create the configuration directory for OpenVPN in case it has not been created automatically:

mkdir /etc/openvpn

Optionally, reboot.



Creating a Certification Authority

Recall that in this tutorial we are going to create the keys and certificates in three separate directories, one each for the CA, server, and client, and just move the certificates around for simplicity; the "safe" way to do this is to have three separate machines and to keep the CA machine unplugged from the network.

cd /root/.ssl
mkdir CA
cd CA
mkdir newcerts
echo "01" > serial
touch index.txt
mkdir private
chmod 700 private
cd private

Now we are going to generate a public/private EC key. Pay special attention to the -conv_form compressed option: it produces a smaller curve representation which is also faster for point multiplication, but this representation is covered by a Canadian patent. I am using the Koblitz curve sect571k1 (571-bit binary field), but you can choose other curves; just run:

openssl ecparam -list_curves

to see a list of available curves.

openssl ecparam -genkey -name sect571k1 -text -conv_form compressed -out cakey_tmp.pem
openssl ec -in cakey_tmp.pem -out cakey.pem -aes256

Choose a passphrase for unlocking the CA key. You will need this passphrase every time you want to generate a server or client certificate.

wipe -f cakey_tmp.pem
chmod 600 cakey.pem
cd ..

Now we are going to create the main CA certificate. This will be needed by every client and server to authenticate the CA. Edit the fields in the command below according to your needs.

openssl req -new -x509 -key private/cakey.pem -config ../openssl.cnf -extensions v3_ca -subj '/C=CaState(TwoLetters)/ST=CaState/L=CaCity/CN=www.yourauthority.com/O=YourName/emailAddress=your@email.address' -days 36500 -sha512 -out cacert.pem

Insert the CA passphrase to generate the certificate. You can see the certificate's attributes with:

openssl x509 -text -in cacert.pem
openssl x509 -fingerprint -noout -in cacert.pem -sha256
cd ..

Take note of the hash value and make sure it is publicly accessible by anybody who wants to check the authenticity of the cacert.pem in their possession.



Creating a server certificate

First of all we are going to create a Diffie-Hellman key exchange parameter file. This might change in the future, once OpenVPN supports the ECDH key exchange. We are going to generate quite a large DH modulus, since this tutorial aims at the best possible security, but keep in mind that this will take a long time (several minutes, possibly hours, on a 2.3 GHz quad-core machine). Also keep in mind that this parameter does not need to be secret, so you can share and reuse it for as many other server installations as you want, as long as you don't mind that the servers could be linked to the same generating entity.

mkdir server
cd server
openssl dhparam -out dh.pem 4096

Now import the CA certificate and generate a secret TA key which we'll need later to authenticate the control channel for further security.

cp ../CA/cacert.pem ./
mkdir private
chmod 700 private
cd private
openvpn --genkey --secret ta.key

Now we are going to generate the server's private key for authentication. Unlike in the CA step, we are not going to encrypt it: otherwise we would be prompted for a password every time a service on the server needs to access the key at startup. E.g., it would be impossible to automatically restart our OpenVPN server without manually entering a passphrase, or without delegating this step to a script containing the passphrase in cleartext (which, for most practical purposes, is almost equivalent to having an unencrypted key). Considering that the services accessing this key in plaintext will be running almost all the time, I think it's not worthwhile to encrypt the server key, unless you REALLY trust the security of all those services, OR you think they won't need frequent restarts, OR you employ very strict process isolation policies via virtual machines (e.g., in a Qubes environment or similar).

openssl ecparam -genkey -name sect571k1 -text -conv_form compressed -out serverkey.pem
chmod 600 serverkey.pem
cd ..

Now we need to generate a certificate request, send it to the CA, have it signed, and get it back to our server. As usual, in this tutorial these steps will be simulated.

openssl req -new -key private/serverkey.pem -config ../openssl.cnf -extensions server -subj '/C=ServerCountry(TwoLetters)/ST=ServerCountry/L=ServerCity/CN=www.myserver.com/O=ServerName' -days 36500 -sha512 -out req.pem
openssl req -in req.pem -noout -text
cd ../CA
mv ../server/req.pem ./
openssl ca -config ../openssl.cnf -extensions server -policy policy_anything -out ../server/servercert.pem -md sha512 -cert cacert.pem -keyfile private/cakey.pem -infiles req.pem

Insert the CA passphrase to sign the server certificate.

rm req.pem
cd ../server
openssl x509 -in servercert.pem -noout -text -purpose
cd ..



Creating one or more client certificates

The following must be repeated for every client who wants to connect to the VPN, and at this point it should be self-explanatory:

mkdir client
cd client
cp ../CA/cacert.pem ./
mkdir private
chmod 700 private
cd private
cp ../../server/private/ta.key ./
openssl ecparam -genkey -name sect571k1 -text -conv_form compressed -out clientkey_temp.pem
openssl ec -in clientkey_temp.pem -out clientkey.pem -aes256

Insert a passphrase to protect the client key.

wipe -f clientkey_temp.pem
chmod 600 clientkey.pem
cd ..
openssl req -new -key private/clientkey.pem -config ../openssl.cnf -extensions client -subj '/C=ClientCountry(TwoLetters)/ST=ClientCountry/L=ClientCity/CN=ClientName/O=ClientOwnerName/emailAddress=client@e.mail' -days 36500 -sha512 -out req.pem

Reinsert the client passphrase.

openssl req -in req.pem -noout -text
cd ../CA
mv ../client/req.pem ./
openssl ca -config ../openssl.cnf -policy policy_anything -extensions client -out ../client/clientcert.pem -md sha512 -cert cacert.pem -keyfile private/cakey.pem -infiles req.pem

Insert the CA passphrase.

rm req.pem
cd ../client
openssl x509 -in clientcert.pem -noout -text -purpose
cd ..




Server configuration

This must be done on the server machine. Create the following two folders if they have not been already created:

mkdir /etc/openvpn
mkdir /etc/default

Copy all the contents of the directory /root/.ssl/server on the trusted machine used in the previous step into the server's /etc/openvpn directory. Also, add this file, but substitute xx.xx.xx.xx with your server's IP; yy.yy.yy.0 will be the VPN address pool (the server will create a virtual interface on yy.yy.yy.1). Also note that with this configuration the DHCP discoveries of your clients are not sent through the VPN (i.e., the DHCP queries of your clients remain inside their respective networks and do not travel through the VPN). The DNS option lets you specify your own DNS servers for the VPN; here I have put the OpenDNS servers, but you can choose others or comment out that option (in which case DNS queries through the VPN will use the OpenVPN server's own DNS).
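As a rough sketch (not the original linked file), a minimal server.conf along the lines described above might look like this; the directives are standard OpenVPN 2.x, adjust to your setup:

```
local xx.xx.xx.xx
port 1194
proto udp
dev tun
ca cacert.pem
cert servercert.pem
key private/serverkey.pem
dh dh.pem
tls-auth private/ta.key 0
server yy.yy.yy.0 255.255.255.0
# route client traffic through the VPN but keep DHCP local
push "redirect-gateway def1 bypass-dhcp"
# OpenDNS resolvers; comment out to use the server's own DNS
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"
keepalive 10 120
persist-key
persist-tun
```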

Now we must make sure that the OpenVPN server's interface is started whenever the server reboots. First of all create a file called openvpn in /etc/default (if it is not present already) and make sure it contains the following line:

AUTOSTART="all"

Then put this script file in /etc/init.d . Notice the line DAEMON=/usr/local/sbin/openvpn instead of DAEMON=/usr/sbin/openvpn : this is because we installed OpenVPN manually at that location. Now register the init script so that it runs at boot:

update-rc.d openvpn defaults

Then open the /etc/sysctl.conf file and look for the following line (add it if it doesn't exist):

net.ipv4.ip_forward = x

Where x can be 0 or 1. Change x to 1 if it is set to 0. This basically turns your server into a router, meaning that it will be able to route packets between different network interfaces. The edit to /etc/sysctl.conf is persistent across reboots, but it does not take effect immediately; to apply it right away without rebooting, run:

sysctl -p /etc/sysctl.conf

Check that everything is OK:

cat /proc/sys/net/ipv4/ip_forward

must be 1.

Now we need to enable NAT translation from the OpenVPN virtual interface to the server's network interface. To do this, edit the script at /etc/network/if-up.d/iptables (or create it if it doesn't exist, making it executable with chmod +x /etc/network/if-up.d/iptables ). Make sure that it begins with:

#!/bin/sh

and then add the following lines:

iptables -t nat -A POSTROUTING -s yy.yy.yy.0/24 -o eth0 -j MASQUERADE
iptables -A INPUT -i tun+ -j ACCEPT
iptables -A OUTPUT -o tun+ -j ACCEPT
iptables -A FORWARD -i tun+ -j ACCEPT

where eth0 should be your primary network interface, the one which is bound to the xx.xx.xx.xx address (check it with ifconfig if unsure).

Finally, restart networking and OpenVPN:

service networking restart
service openvpn restart

Now the OpenVPN service should start automatically at every reboot, and with ifconfig you should also see the new tun interface bound to yy.yy.yy.1.



Client configuration

This must be done on the client machine. Login as a normal user and create the following folder if it has not been already created:

mkdir ~/.openvpn

Copy all the contents of the directory /root/.ssl/client on the trusted machine used in the previous step into the client's ~/.openvpn directory. Also, add this file, but replace xx.xx.xx.xx with your server's IP.
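Again as a rough sketch (not the original linked file), a minimal client.conf matching the server sketch might look like this; standard OpenVPN 2.x directives, adjust paths and IP:

```
client
dev tun
proto udp
remote xx.xx.xx.xx 1194
nobind
ca cacert.pem
cert clientcert.pem
key private/clientkey.pem
tls-auth private/ta.key 1
# make sure we only accept a server certificate (see the .ovpn note below
# for the newer remote-cert-tls variant)
ns-cert-type server
persist-key
persist-tun
```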

Update: a useful feature for ease of configuration is to bundle all of the configuration, keys, and certs in a single .ovpn file. Just create a file client.ovpn which is basically a copy of client.conf, with the following modifications:

1) instead of using the ca, cert, key, tls-auth parameters, paste the content of the related files inline, as in the following example:

<ca>
-----BEGIN CERTIFICATE-----
# insert base64 blob from cacert.pem
-----END CERTIFICATE-----
</ca>

<cert>
-----BEGIN CERTIFICATE-----
# insert base64 blob from clientcert.pem
-----END CERTIFICATE-----
</cert>

<key>
-----BEGIN PRIVATE KEY-----
# insert base64 blob from clientkey.pem
-----END PRIVATE KEY-----
</key>

<tls-auth>
-----BEGIN OpenVPN Static key V1-----
# insert ta.key
-----END OpenVPN Static key V1-----
</tls-auth>

2) add the key-direction 1 directive; and

3) use remote-cert-tls server instead of ns-cert-type server.
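The three modifications above can also be scripted. A sketch, assuming the file layout used in this tutorial (adjust paths as needed):

```shell
# Assemble client.ovpn from client.conf plus the inlined certs and keys.
dir=~/.openvpn
mkdir -p "$dir"
{
  # keep every directive except the file-based ones we are inlining
  grep -vE '^(ca|cert|key|tls-auth|ns-cert-type) ' "$dir/client.conf"
  echo "remote-cert-tls server"
  echo "key-direction 1"
  echo "<ca>";       cat "$dir/cacert.pem";            echo "</ca>"
  echo "<cert>";     cat "$dir/clientcert.pem";        echo "</cert>"
  echo "<key>";      cat "$dir/private/clientkey.pem"; echo "</key>"
  echo "<tls-auth>"; cat "$dir/private/ta.key";        echo "</tls-auth>"
} > "$dir/client.ovpn"
```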



Connect

From the client, you can connect to the new VPN with the command:

openvpn ~/.openvpn/client.conf

(you need to provide the password to unlock the client's secret key).

If you bundled the files using the .ovpn method above, you can use openvpn ~/.openvpn/client.ovpn instead.



Revoking a client certificate

In order to revoke a certificate, first we have to update the CA's index and generate an updated Certificate Revocation List (CRL), then instruct all the servers using that CA to use the crl-verify option, and finally upload the new CRL to those servers.

cd /root/.ssl/CA
cat index.txt


This will give us a list of the recorded certificates, each with its serial number in front. Let's assume we want to revoke client certificate number 03. Then check to be sure:

openssl x509 -text -noout -in newcerts/03.pem

If it is correct we can revoke it:

openssl ca -config ../openssl.cnf -revoke newcerts/03.pem

(insert the CA's passphrase). We can confirm the revocation with:

cat index.txt

(the R stands for "Revoked"). Now we can generate an updated CRL. If this is the first time we revoke a certificate, we have to prepare a counter:

echo "01" > crlnumber

Then generate the CRL:

openssl ca -config ../openssl.cnf -gencrl -out crl.pem

(insert the CA's passphrase). Check the CRL with:

openssl crl -text -noout -in crl.pem

Now we have to copy the crl.pem file into the server directory of EVERY server which uses that CA. Usually it's sufficient to replace the old crl.pem with the new one without restarting OpenVPN, because the CRL is checked whenever a host tries to connect. But since this is the first time, we also have to add the following line to server.conf:

crl-verify ./crl.pem

and then restart the service. It is also possible to un-revoke a certificate, but it is not advisable (there are other ways to temporarily block a user).



Enjoy!

Feedback via email at tommaso[at)gagliardoni(dot]net is very welcome :)




(Feb 20th, 2013) HowTo: Remote LUKS unlock via SSH/dropbear with multiple ethernet interfaces on Linux Debian

In this howto I will explain how to set up a minimal SSH server on a Debian system with LUKS full disk encryption (i.e., the root directory is also encrypted) and with multiple ethernet interfaces. The standard howtos always assume that there is just a single ethernet interface. The difference is not negligible, as we will see: at the stage where dropbear starts, the order in which the interfaces are raised is random, because udev is not loaded yet. I will not explain in detail the typical scenario where you want this setup, but basically the goal is to be able to reboot a remote machine whose disks are fully encrypted with LUKS. The solution is to install a minimal SSH server (dropbear) in the cleartext boot partition, connect to it, and enter the password to unlock the rest of the filesystem, continuing the normal boot procedure.



Remote LUKS: the standard procedure

You can find the standard howto in the Debian cryptsetup documentation:

zcat /usr/share/doc/cryptsetup/README.remote.gz

First of all install the needed tools:

apt-get install dropbear busybox wipe

Check that /etc/initramfs-tools/initramfs.conf contains BUSYBOX=y and does NOT contain DROPBEAR=n .
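Both conditions can be checked at a glance (just a convenience snippet):

```shell
# Print the relevant settings if present; fall back gracefully if the
# file does not exist (e.g. on a machine without initramfs-tools).
conf=/etc/initramfs-tools/initramfs.conf
if [ -f "$conf" ]; then
  grep -E '^(BUSYBOX|DROPBEAR)=' "$conf"
else
  echo "$conf not found"
fi
```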

The host keys used for the initramfs are dropbear_dss_host_key and dropbear_rsa_host_key, both located in /etc/initramfs-tools/etc/dropbear . The RSA key is "too small" by default. If you feel paranoid, delete it and create a new one:

rm /etc/initramfs-tools/etc/dropbear/dropbear_rsa_host_key
dropbearkey -t rsa -s 4096 -f /etc/initramfs-tools/etc/dropbear/dropbear_rsa_host_key


Now you want to generate a user (root) private SSH key to connect to the server. The only catch is that it must first be created in Dropbear format:

dropbearkey -t rsa -s 4096 -f /etc/initramfs-tools/root/.ssh/id_rsa_dropbear

Then extract the public part:

dropbearkey -y -f /etc/initramfs-tools/root/.ssh/id_rsa_dropbear | grep "^ssh-rsa " > /etc/initramfs-tools/root/.ssh/id_rsa_dropbear.pub

and include it in the initrd authorized_keys file:

cat /etc/initramfs-tools/root/.ssh/id_rsa_dropbear.pub >> /etc/initramfs-tools/root/.ssh/authorized_keys

At this point (assuming you use OpenSSH to connect) you want to convert your private key to OpenSSH format:

/usr/lib/dropbear/dropbearconvert dropbear openssh /etc/initramfs-tools/root/.ssh/id_rsa_dropbear /etc/initramfs-tools/root/.ssh/id_rsa

and then we can delete the old Dropbear one, since we will use OpenSSH to connect:

wipe -f /etc/initramfs-tools/root/.ssh/id_rsa_dropbear

Now assume you want to use the interface eth0 (with DHCP enabled) to unlock your system. You have to add the following line in /etc/initramfs-tools/initramfs.conf :

DEVICE=eth0

Finally, recompile the server's initrd and reboot:

update-initramfs -u -k all
reboot


Let's assume your server is at www.myserver.com and the passphrase to unlock your LUKS setup is "hello world" (without quotes). I am assuming you have a LUKS setup with a single partition, or in any case that you are prompted for just ONE passphrase at boot (i.e., you use keyfiles stored in the first partition to unlock the other partitions). Then, once your server has rebooted and is hanging at boot waiting for the passphrase, you can unlock it remotely with the following command:

ssh -o "UserKnownHostsFile=~/.ssh/a_new_file_known_hosts" -i "~/.ssh/id_rsa" root@www.myserver.com "echo -ne \"hello world\" >/lib/cryptsetup/passfifo"
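To avoid leaving the passphrase in your shell history, you can wrap this in a small helper script that reads it interactively. A sketch (the hostname and key path are the examples used above, and unlock-remote.sh is a name invented here):

```shell
# Write a helper that prompts for the passphrase without echoing it,
# then pipes it into the initramfs passfifo over SSH.
cat > ~/unlock-remote.sh <<'EOF'
#!/bin/sh
printf "LUKS passphrase: "
stty -echo
read pass
stty echo
printf "\n"
ssh -o "UserKnownHostsFile=$HOME/.ssh/a_new_file_known_hosts" \
    -i "$HOME/.ssh/id_rsa" root@www.myserver.com \
    "echo -n \"$pass\" > /lib/cryptsetup/passfifo"
EOF
chmod +x ~/unlock-remote.sh
```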

So far, so good.



Multiple ethernet interfaces

Now the problem: let's assume you have TWO (physical) ethernet interfaces, eth0 and eth1. Here you will hit an issue: the order in which they are named at boot is random. This is because udev (which usually takes care of assigning the interfaces their correct names) has not started yet when Dropbear starts. So, sometimes the above solution will fail. One option would be to tell Dropbear to listen on ALL the interfaces, but 1) Dropbear seems to have a hard time doing so, since it was created for embedded environments where you usually cannot afford multiple servers, and 2) it may not be desirable (maybe one of the interfaces is on a DMZ).

Luckily there is another, more elegant solution: force some of the udev rules to be loaded in the initrd. Namely, open /usr/share/initramfs-tools/hooks/udev with an editor and look around line 37 for:

for rules in 50-udev-default.rules 50-firmware.rules 60-persistent-storage.rules 80-drivers.rules 95-udev-late.rules; do

add 70-persistent-net.rules to the list, so it looks like this:

for rules in 50-udev-default.rules 50-firmware.rules 60-persistent-storage.rules 80-drivers.rules 95-udev-late.rules 70-persistent-net.rules; do

Save the file, regenerate the initramfs, and reboot:

update-initramfs -u -k all
reboot


Enjoy!







(Aug 23rd, 2012) An overview of position-based quantum cryptography.



Introduction

Imagine a secure cryptographic protocol where you can use your geographical position as a credential. For example, you could register your home location at your bank to secure your on-line banking account from connections not originating from your home PC. Or, as a military organization, you could send an encoded message that only troops deployed at a certain location could read. In general, this would link the security of the digital world to that of the physical world - the Holy Grail of on-line security. This is exactly what position-based cryptography (PBC) tries to achieve, and its goal has proven to be anything but trivial. In fact, in order for this model to be meaningful, the geographical position cannot be obtained from external measurement services, which would then become the weak link of the security chain. I.e., it would not make sense to rely on something like the GPS satellite system to obtain coordinates, since this would open a plethora of new attack scenarios. The position should instead depend on some intrinsic property of the system involved, something as cheating-proof as possible. In practice, this means exploiting some physical law which we are guaranteed to be unmodifiable by any known attacker - just as the GPS system relies on the constant speed of light to pinpoint a position.

Cryptosystems relying on the idea of exploiting the physical world are theoretically unbreakable as long as the physical law holds, since their security does not depend, in principle, on any computational assumption. They are, of course, prone to implementation issues like any other cryptosystem, but we are guaranteed that, as long as we understand the physics correctly, we need not fear any surprises.

Must we?

The gap between theory and practice has always proven to be unpredictably wide and, as often happens, the good and the bad guys can use the same tools. The same physical properties which allow the creation of strong cryptograms also allow the devising of new powerful attacks. In this article we will give an overview of the rise, evolution, fall and resurrection of position-based cryptography augmented with the magic of quantum physics, keeping in mind that there is no final word in the everlasting arms race of security.




Classical PBC

A formalized model for PBC was first introduced - and proven to be insecure - in [CGMO09]. The idea of the original protocol was to achieve secure position verification via a distance bounding protocol based on relativistic assumptions (namely, the speed of light) and involving multiple verifiers to authenticate an honest prover.

Let us see an example in the 1-dimensional case. Consider a segment of space [0,1] and two verifiers, V_0 and V_1, sitting at positions 0 and 1 respectively. Then we have an honest prover P, standing at position 0.6, whose goal is to convince the two verifiers that he is located exactly at that position. We assume the following:
  • there is a fixed and publicly known Boolean function f:{0,1}^n X {0,1}^n -> {0,1} which is very easy to compute and takes virtually no time to be evaluated for reasonable values of n
  • the two verifiers V_0 and V_1 share a private channel which they can use to securely communicate with each other and synchronize their clocks
  • all the parties are able to communicate just at a fixed, bounded speed (i.e.: we assume that they can broadcast radio waves or laser pulses at the speed of light)
Under these assumptions we can design many different position verification protocols, here is an example:
  1. P initializes the protocol, sending a request to the verifiers to be authenticated at position 0.6
  2. V_0 generates a random n-bit string x and shares it with V_1 through the private channel, V_1 does the same with another random n-bit string y
  3. V_0 generates a random bit b and, if f(x,y) = 1, sends a copy of b to V_1 through the private channel
  4. V_0 sends x and b to P, and V_1 sends y to P; they time their messages so that x, y and b arrive all at the same time at position 0.6
  5. P evaluates f(x,y): if the result is 0 then P routes back b to V_0, otherwise he routes b to V_1
  6. the verifiers accept the round if and only if V_f(x,y) receives the correct b at a time which is consistent with the claimed position 0.6 given the speed of light of the signal
  7. repeat the previous steps for k times (where k is a security parameter) for security amplification and authenticate P's position if and only if all the k rounds are passed.
It is pretty clear that a single malicious prover standing at a position different from the claimed one cannot cheat with probability greater than 1/2 at each round. But it is easy for two colluding malicious provers, M_0 and M_1, standing, e.g., at positions 0.3 and 0.8 respectively, to trick the verifiers into believing that there is a single honest prover at position 0.6, namely:
  1. as soon as M_0 receives x and b from V_0, she forwards a copy of both to M_1; M_1 does the same by forwarding a copy of y to M_0
  2. as soon as M_0 receives y from M_1, she computes f(x,y) and, if 0, sends b to V_0, otherwise does nothing; at the same time, as soon as M_1 receives b and x from M_0, she sends b to V_1 if f(x,y) = 1, otherwise does nothing.
It is easy to check, by summing up the travel times of the malicious attackers' messages, that either V_0 or V_1 will receive the right b at the right time; hence the attackers can convince the verifiers that there is an honest prover at any given position between them.

This argument (formally proved in [CGMO09]) can be extended to any distance bounding protocol, concluding therefore that secure classical PBC against multiple adversaries is unachievable.
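The 1/2-per-round bound for a lone cheater mentioned above can be illustrated with a toy simulation (purely illustrative: one random bit stands in for f(x,y), which a lone cheater at the wrong position cannot learn in time, and another for her routing guess):

```shell
# Each round the cheater must commit b to one of the two verifiers
# before learning f(x,y), so she passes a round only by guessing f(x,y)
# correctly; her success probability over k rounds is 2^-k per trial.
awk 'BEGIN {
  srand(); trials = 2000; k = 10; wins = 0
  for (i = 0; i < trials; i++) {
    ok = 1
    for (r = 0; r < k; r++) {
      fxy   = int(2 * rand())   # outcome of f(x,y), revealed too late
      guess = int(2 * rand())   # cheater guesses where to route b
      if (fxy != guess) { ok = 0; break }
    }
    wins += ok
  }
  printf "cheating succeeded in %d of %d runs\n", wins, trials
}'
```

With k = 10 and 2000 trials the expected number of successes is about 2, in line with the 2^-k amplification argument.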




Quantum PBC

First ideas for quantum PBC (QPBC) appeared in the academic literature in 2010 [KMS10]. The idea is the following: since the above multi-adversary attack on classical PBC protocols relies on creating copies of some bit and keeping them around before relaying to the correct verifier, the attack fails if we use quantum bits (qubits) instead.

It is well known that a quantum state cannot be copied. In quantum mechanics this is known as the "No-Cloning Theorem", and one intuition for it comes from the thermodynamic Landauer's Principle, stating that in order to erase information embedded in a physical system one has to spend energy. Since in a closed physical system the total energy must be conserved, and since in order to copy a qubit one first has to "free space" (erase another qubit) to copy the given state into, it turns out that it is impossible to copy a qubit in a reversible way. The other option is to have the system interact with another system (the observer) in order to measure the given qubit and then prepare another qubit from scratch in the measured state. But by the postulates of quantum mechanics, the process of measurement destroys the original quantum state with very high probability; it is therefore impossible to make a perfect copy of a qubit. This "magic" property of quantum mechanics has been exploited to design new, innovative, unconditionally secure cryptographic schemes, and it is at the base of quantum cryptography [BB84].

In QPBC, part of the information exchanged by prover and verifiers consists of some quantum state |phi> which must be measured at a certain step of the protocol and act as a "guarantee" that the particular step can be only performed once for every round. Under the same conditions as the protocol in the previous section, let us consider the following quantum variation:
  1. P initializes the protocol, sending a request to the verifiers to be authenticated at position 0.6
  2. V_0 generates a random n-bit string x and shares it with V_1 through the private channel, V_1 does the same with another random n-bit string y
  3. V_0 generates an entangled EPR pair |ab> = (|00>+|11>)/sqrt(2); if f(x,y) = 1, he sends the first half of the register, |a>, to V_1 through the private channel, otherwise he keeps this half for himself
  4. V_0 sends x and the other half of the EPR register, |b>, to P, and V_1 sends y to P; they time their messages so that x, y and |b> all arrive at the same time at position 0.6
  5. P evaluates f(x,y): if the result is 0 then P routes back |b> to V_0, otherwise he routes |b> to V_1
  6. the verifiers accept the round if and only if V_f(x,y) receives the qubit |b> at a time consistent with the claimed position 0.6 given the speed of light of the signal, and the qubit passes a Bell measurement performed together with the other half |a>
  7. repeat the previous steps for k times (where k is a security parameter) for security amplification and authenticate P's position if and only if all the k rounds are passed.
A Bell measurement over two qubits in this case is just a "test" to see whether the two qubits have preserved entanglement or not. Since measuring either of the two qubits destroys entanglement, if this test is passed it means that the qubit |b> has not been measured (and hence has not been replaced nor partially copied). Under this condition it is clear that no coalition of malicious attackers can impersonate an honest P with good probability. In fact, the attack on the classical protocol relies on the fact that the bit b can be copied and distributed amongst the attackers without the verifiers noticing it, which turns out to be impossible with the quantum state |b>. Notice that this protocol only needs a single EPR pair for each of the k rounds in order to achieve k bits of security, which is pretty efficient.




Breaking QPBC

QPBC looks truly unbreakable so far, but unfortunately this does not take into account the fact that entanglement can also be used by malicious attackers to perform one of the weirdest feats of quantum information: distributed computation via teleportation. This is a consequence of the fact that entanglement between qubits is a non-classical interaction, very different from anything we have known in the last few centuries, such as electromagnetic or gravitational interactions. One of its consequences is that if two parties share an entangled EPR pair, then one of the parties can teleport an additional qubit to the other party by sending two classical bits of information, destroying the EPR pair by performing a Bell measurement over the qubit and one of the halves of the EPR pair [NC00]. Let us now see how this can be used by malicious attackers M_0 and M_1 to break the QPBC protocol above in the case where n=1 and f(x,y) = x XOR y:
  1. M_0 and M_1 share three entangled EPR pairs, labelled 0, 1 and 2 respectively, and initialize the protocol
  2. let's assume that f(x,y) = 0, then M_0 receives x (a 1-bit string) and |b>, while M_1 receives the 1-bit string y
  3. M_0 uses the EPR pair labelled x to teleport |b> to M_1, and sends x and the two classical bits for teleportation to M_1 (notice that these three classical bits will travel at the speed of light)
  4. at the same time, M_1 teleports her half of the EPR pair labelled y using EPR channel 2, and sends y and her two classical teleportation bits to M_0 (again, these three bits will travel at the speed of light)
  5. when the classical bits arrive at their destinations (in a single round of simultaneous communication), both cheaters can compute f(x,y)
  6. if f(x,y) = 0, then it is easy to check that |b> has been teleported into M_0's EPR register 2 (and M_0 is able to recover it, since she has also received M_1's teleportation bits); otherwise it is easy to check that |b> has been teleported into M_1's EPR register labelled x (and M_1 is able to recover it, since she has received both x and M_0's teleportation bits)
  7. in both cases, it is easy to check that, with a single round of simultaneous communication, the "correct" cheater has received |b> just in time to route it to the "correct" prover, hence breaking the security of the protocol.
This attack can be generalized to any n and any suitable f, the basic idea being the following: if the two attackers share an arbitrary amount of EPR pairs, then they can compute the function f "in parallel", as if they were using a single (quantum) computer whose (quantum) register is split and physically separated in space.

This is an intrinsic property of quantum computing: non-classical correlations are not limited by the speed of light (in a sense which is still not completely understood). For example, it would be possible to have a two-qubit quantum computer where one of the qubits resides on Earth and the other is located on Mars, and it would still work exactly as if we had a single joint register. It is proven, however, that it is impossible to exploit this property to make "miracles" - e.g., to violate Einstein's relativity principle. Namely, even if the two registers are quantum-jointed (and it is thus possible to perform operations on both simultaneously), it is impossible to access the information of a remote register without transmitting classical information. This is exactly what the classical teleportation bits are for, the interesting part being that the process of computation can be "parallelized" through the EPR channels and then performed using a single round of classical communication - which is enough to break our QPBC protocol.




Future directions

QPBC is thus proven to be insecure in case of multiple colluding attackers sharing an arbitrary amount of EPR pairs. But we must consider that EPR pairs are a costly resource (they are today, and there is no evidence that they will stop being so in the future). This has led to an interesting observation.

Let us forget for a second about entanglement-capable adversaries and let us consider the role of the parameter n in the QPBC protocol above. It has basically no importance, in the sense that the security of the protocol does not depend on it - it depends on the parameter k instead. This means that, in theory, there is nothing wrong with setting n = 1 and using a very simple function f, like the XOR example above. But if we consider the entanglement attack, while the same observation remains true for the honest prover and verifiers, n makes a difference for the malicious attackers: namely, the larger n, the more EPR pairs M_0 and M_1 need to share in order to compute f.

The QPBC protocol above is then interesting for the following fact: while it only requires a single EPR pair per round from the honest parties, it requires some larger amount of entangled resources from the malicious attackers, and this amount in general depends on n. This trade-off can be stated as: the more (cheap) classical computing resources the honest parties need, the more (costly) entangled resources the attackers need - which is a pretty nice feature.

This is not an absolute statement though, because it also depends on the function f: there exist easy-to-compute functions for which the amount of EPR pairs needed is small (logarithmic in n, or even constant). In order to exploit the aforementioned trade-off in real scenarios we hence have to find functions which are easy to compute (i.e., in P) but for which the number of EPR pairs needed for evaluation grows as fast as possible in n.

The Garden-Hose Game is a computational model introduced exactly for this purpose: to model the EPR-related complexity of a function. It basically consists of two players (with respective inputs x and y) separated by some distance who want to compute a Boolean function f(x,y) in the following way: they have some fixed pipes running between them, plus pieces of garden-hose which each player can use to connect the ends of the pipes at their own location into a pattern dependent on their input only (no communication allowed). There are no T-shaped (or more complex) pieces of hose, just one-to-one connections, but not all the pipes need to be connected. One of the players then has a source of water (a tap) connected to one of the loose ends of the pipes. The players arrange their hose pieces so that the water eventually exits on player 0's side if f(x,y) = 0, and on player 1's side if f(x,y) = 1. It is easy to see that the water will eventually exit from one side or the other, and that this model is closely related to the teleportation technique used to break QPBC. The Garden-Hose complexity GH(f) of a function f:{0,1}^n X {0,1}^n -> {0,1} is then defined as the minimum number of pipes the two players need in order to compute f in the above way on every input.

Some computational-theoretic results are known about bounds on GH(f), namely:
  • GH(f) <= 2^n + 1, for any Boolean function
  • if f is in L (computable in logarithmic space on a deterministic Turing Machine), then GH(f) is at most polynomial in n
  • if GH(f) is polynomial in n, then f is in L (up to local preprocessing)
  • if there exists an f in P such that GH(f) is super-polynomial in n, then P != L (notice that L is a subset of P, but it is currently unknown whether the two classes are equal)
  • there exist functions f such that GH(f) is exponential in n, but it is unknown whether they are also in P or not.
So, if we could find a function f that is in P but has exponential GH(f), it would be a perfectly good candidate for use in QPBC, making the above attack infeasible for reasonable values of n. The existence of such a function, however, would imply a major breakthrough in computational complexity. Up to now, the functions in P whose Garden-Hose complexity has been determined all have polynomial GH(f), but the road is open to new, exciting directions.




References

  • [BB84] Bennett, Brassard: "Quantum cryptography: Public key distribution and coin tossing", Proceedings of IEEE International Conference on Computers, Systems and Signal Processing

  • [BCF+11] Buhrman, Chandran, Fehr, Gelles, Goyal, Ostrovsky, Schaffner: "Position-based quantum cryptography: Impossibility and constructions", CRYPTO 2011

  • [BFSS11] Buhrman, Fehr, Schaffner, Speelman: "The Garden-Hose Game", arXiv:1109.2563v2

  • [CGMO09] Chandran, Goyal, Moriarty, Ostrovsky: "Position-based cryptography", CRYPTO 2009

  • [KMS10] Kent, Munro, Spiller: "Quantum Tagging: authenticating location via quantum information and relativistic signalling constraints", arXiv/quant-ph:1008.2147

  • [NC00] Nielsen, Chuang: "Quantum Computation and Quantum Information", Cambridge University Press, 2000.






(Jun 25th, 2012) Waterloo (ON), 12th Canadian Summer School on Quantum Information, 9th Canadian Student Conference on Quantum Information, 2nd AQuA Student Congress on Quantum Information and Computation.

I've just come back from this big event in Waterloo, Canada. The past two weeks have been quite intensive, though I had the opportunity to visit some nice places (Niagara Falls is cool, but I prefer Toronto, especially by night). Waterloo is one of the world's leading places for quantum stuff, with its three main research centers: the University of Waterloo, the Perimeter Institute and the Institute for Quantum Computing. The university campus is really cool, with a lot of wild animals around (beavers, squirrels, groundhogs, bunnies, eagles... but beware of the damn aggressive geese and the small birds attacking people at random!), although everything is quite expensive (especially food; by the way, I didn't see many overweight people around...) and the accommodation was... embarrassingly uncomfortable. I also gave a talk about provable security in the quantum random oracle model. Nightlife in Waterloo is not very intense. Best places to visit: The Flying Dog (salsa place, young people on Thursday night) and the Revolution (disco club).





(Oct 1st, 2011) A general mathematical principle to achieve `perfect' social justice

Some random thoughts about the `fairness' of a punishment with respect to a crime. The principle `the harsher the crime, the harsher the punishment' should not be taken for granted, but should be a consequence of a deeper principle. Also, I think that the principle according to which `a punishment only serves to rehabilitate the criminal' is somehow incomplete, since it can be abused: rehabilitation comes automatically once the criminal has repaid all the `damage' caused by his/her crime. Moreover, a punishment is not `fair' if the personal gain obtained by performing a crime is exactly balanced by the punishment; it should be harsher, since not every crime is prosecuted (not every criminal is caught, or else we wouldn't have criminals). I am not a jurist, so maybe I am saying something trivial, but IMHO the only general `fair' rule should be the following:

The punishment for a certain crime should be equal to the `social damage' caused by that crime, divided by the probability of that crime to be prosecuted.

where:
  • `social damage' is assessed on an empirical basis by a competent, independent body, and should include the costs of the prosecution, the moral and physical damages caused by the crime, its consequences and so on
  • `punishment' should always be commuted into appropriate forms, e.g., it is completely pointless to physically punish a murderer, since this in no way `repays' the social damage caused by the murder; a murderer should instead, e.g., do useful, harsh, underpaid jobs for many years to the benefit of the victim's family, friends, colleagues and so on, until a `fair' balance (established by a judge) has been reached between the crime and the repaid social gain (in most cases a `fair' social gain, according to the victim's family, could only be reached with the murderer's execution, but a mathematical principle should not include emotionality in its formulation)
  • the `probability' of a crime being prosecuted should be computed over historical records for known crimes, and over empirical considerations for emerging crimes, in collaboration with police forces or other bodies competent in performing such an estimate.
EXAMPLE:
A businessman is caught evading 100k EUR of taxes.
The total annual cost of the legal and bureaucratic apparatus fighting tax evasion is, say, 500M EUR, but it must be weighted by the ratio between the amount considered here and the mean total tax evasion (say 10G EUR per year), so: 500M * 100k / 10G = 5k EUR.
In addition, the judge establishes an extra `mean social damage' (due, e.g., to emulation phenomena) of 2k EUR.
So the total `social damage' amounts to 107k EUR. This must be divided by the probability of being caught evading taxes, say 15%. Then: 107k / 0.15 = 713k EUR (rounded down). This is the total amount the criminal has to repay as a punishment to make complete atonement, either through fines (provided he has enough money), or through social works (which should then be paid much better than they are now), or a combination of the two. This looks like a lot, but it is the only amount that is statistically effective.
 
This principle is `fair', in the sense that it makes both the crime and the prosecution of the crime `statistically non-convenient'. It is also `rational', since there is no room for more or less `hated' crimes (usually subject to manipulation, emotional reactions, demagogy and so on), yet it is `elastic', since the definition of `social damage' can take many factors into account.

Note that a corollary of this principle is the following: once a criminal has completely atoned for his/her guilt, he/she is really `free'. All the damage has been repaid, in every possible rational and practical sense, so subsequent discrimination based on his/her police records should not be allowed. Also, the criminal's rehabilitation is automatic: tests to check whether he/she is still capable of crimes should not be necessary, because after the atonement we no longer have a criminal: we have a common citizen who has repaid society more than the damage caused. He/she should already be `rehabilitated' enough. Nor should recidivism ever be taken into account.

These are just my thoughts anyway, and a test for the making of a blog on this website, if and when I find the time to set it up. Feedback appreciated via email :)




(May 27th, 2011) My thesis and slides: `Classical simulation of quantum circuits'

Here are my Master's thesis (in English) and the related slides (in Italian), should any visitor be interested.