and a ‘secret’ would no longer remain a ‘secret’.
1.2.2 Asymmetric key cryptography
Asymmetric key cryptography is also known as public key cryptography. It refers to a cryptographic algorithm that requires two separate keys, one private and one public. The public key is used to encrypt the message and the private key is used to decrypt it. This method was developed to address the key management issue of symmetric key cryptography. The process of asymmetric cryptography is shown in figure 1.4. Officially, it was invented by Whitfield Diffie and Martin Hellman in 1976. The basic technique of public key cryptography had, however, already been discovered in 1973 by Clifford Cocks of the British Communications-Electronics Security Group, but it remained classified until 1997. Examples of asymmetric key cryptography are discussed below [6].
Digital signature standard (DSS): the DSS is a digital signature algorithm developed by the US National Security Agency to generate digital signatures for the authentication of electronic documents. The DSS was issued by the National Institute of Standards and Technology (NIST) in 1994.
RSA: named after Rivest, Shamir, and Adleman, who first publicly described it in 1977, RSA is an algorithm for public-key cryptography. It was the first algorithm known to be suitable for signing as well as encryption, and one of the first great advances in public key cryptography. RSA is widely used in electronic commerce protocols, and is believed to be secure given sufficiently long keys and up-to-date implementations (a toy numerical sketch follows the list of examples below).
ElGamal: ElGamal is a public key method used for both encryption and digital signing. The encryption algorithm is similar in nature to the Diffie–Hellman key agreement protocol, relies on the difficulty of computing discrete logarithms, and is used in many applications; for example, ElGamal encryption is used in the free GNU Privacy Guard software.
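To make the RSA entry above concrete, here is a minimal sketch of textbook RSA in Python. The tiny primes, the public exponent, and the absence of padding are purely illustrative assumptions; real implementations use primes thousands of bits long together with padding schemes such as OAEP and PSS.

```python
# Toy RSA with tiny textbook primes -- illustrative only, never secure at this size.
p, q = 61, 53                # two secret primes
n = p * q                    # public modulus, n = 3233
phi = (p - 1) * (q - 1)      # Euler's totient, phi(n) = 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi); Python >= 3.8

m = 65                       # plaintext, an integer smaller than n
c = pow(m, e, n)             # encrypt with the public key (e, n)
assert pow(c, d, n) == m     # decrypt with the private key (d, n)

s = pow(m, d, n)             # signing: exponentiate with the private key
assert pow(s, e, n) == m     # anyone can verify with the public key
```

The same key pair serves both directions: encrypting with the public exponent and decrypting with the private one, or signing with the private exponent and verifying with the public one, which is why RSA is suitable for signing as well as encryption.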
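Likewise, a minimal toy sketch of ElGamal encryption over a small prime field; the prime, generator, and message below are arbitrary illustrative choices, and real deployments use groups of at least 2048 bits.

```python
import secrets

# Toy ElGamal over a small prime field -- illustrative parameters only.
p = 7919           # a small prime (the 1000th prime)
g = 2              # fixed group element used as the base

x = secrets.randbelow(p - 2) + 1     # private key
h = pow(g, x, p)                     # public key h = g^x mod p

def encrypt(m, h):
    """Encrypt integer m < p under public key h."""
    k = secrets.randbelow(p - 2) + 1  # ephemeral key, fresh for every message
    c1 = pow(g, k, p)                 # g^k -- the Diffie-Hellman share
    c2 = (m * pow(h, k, p)) % p       # m masked by the shared secret h^k
    return c1, c2

def decrypt(c1, c2, x):
    s = pow(c1, x, p)                   # recompute shared secret (g^k)^x
    return (c2 * pow(s, p - 2, p)) % p  # divide by s via the Fermat inverse

c1, c2 = encrypt(1234, h)
assert decrypt(c1, c2, x) == 1234
```

The ephemeral exponent k plays the same role as a Diffie–Hellman session key, which is why the two schemes are described as similar in nature.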
Figure 1.4. Asymmetric key cryptography.
1.2.3 Hash functions
A cryptographic hash function is a hash function that takes an arbitrary block of data and returns a fixed-size bit string, the cryptographic hash value, such that any (accidental or intentional) change to the data will (with very high probability) change the hash value [7]. The data to be hashed is often called the message, and the hash value is sometimes called the message digest or simply the digest. The ideal cryptographic hash function has four main properties (illustrated by the short sketch following this list):
It is easy to compute the hash value for any given message.
It is infeasible to generate a message that has a given hash.
It is infeasible to modify a message without changing the hash.
It is infeasible to find two different messages with the same hash.
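These properties can be observed with any standard hash library. The following sketch uses Python's hashlib with SHA-256 and two messages that differ in a single character; the messages themselves are arbitrary examples.

```python
import hashlib

# Computing a digest is fast and deterministic (the first property).
m1 = b"Pay Alice $100"
m2 = b"Pay Alice $900"   # a one-character modification

d1 = hashlib.sha256(m1).hexdigest()
d2 = hashlib.sha256(m2).hexdigest()
print(d1)
print(d2)

# The two digests differ in roughly half of their bits (the avalanche
# effect), so modifying a message without changing its hash is infeasible.
bits = bin(int(d1, 16) ^ int(d2, 16)).count("1")
print(f"{bits} of 256 bits differ")
```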
The examples of hash functions are discussed below.
Secure hash algorithm (SHA): SHA hash functions are a set of cryptographic hash functions designed by the National Security Agency and published by NIST as a US Federal Information Processing Standard. Because of successful attacks on MD5 and SHA-0, and theoretical attacks on SHA-1, NIST perceived a need for an alternative, dissimilar cryptographic hash, which became SHA-3. In October 2012, NIST chose the Keccak algorithm as the new SHA-3 standard.
As multimedia content, images, and video become an increasingly integral part of the modern economy and of social life, ensuring security against malicious interference, theft, and unauthorized use has become the need of the hour. Encryption of images is one of the well-known mechanisms for preserving the confidentiality of images/data transmitted over unrestricted public media, which are vulnerable to attack. Image encryption algorithms can be classified into frequency-domain and spatial-domain algorithms. Both are able to protect the data/image with a high level of security, and their output encrypted images are either texture-like or noise-like. From a security point of view, such an output is an obvious visual sign of the presence of an encrypted image that may contain important information. There is a concern that this will attract attention and can invite a significantly large number of attacks and analyses. A reported solution is to transform the original image into a visually meaningful encrypted image, since people generally regard such images as normal images rather than encrypted ones.
Securing data/images is important in all domains, including medical diagnosis. There is a fear that patients' computed tomography (CT) and magnetic resonance imaging (MRI) scan results can easily be altered by hackers, thereby deceiving both radiologists and the artificial intelligence algorithms that diagnose malignant tumors. Hackers could gain access to add or remove medical conditions from the scans for the purposes of insurance fraud, ransom, and even homicide. A large number of techniques have been proposed in the literature to date, each with an edge over the others, to keep up with the ever-growing need for security. The focus has been on devising mechanisms for image encryption with the following characteristics (a sketch of two of these metrics follows the list).
Low correlation: the value of the correlation between the original and the encrypted image should be as low as possible; ideally, its value should be zero.
Large key space: the key size should be very large, since the larger the key space, the longer a brute force search would take.
Key sensitivity: the image encryption algorithm should have high key sensitivity. In other words, a slight change in the key value should change the encrypted image significantly.
Entropy: entropy is a measure of the degree of randomness or disorder. As the level of disorder rises, the entropy rises and events become less predictable. The minimum entropy value is zero, which occurs when the image pixel value is constant at every location. The maximum entropy of an image depends on the number of gray levels; for an image with 256 gray levels, the maximum entropy is log₂(256) = 8, attained when all bins of the histogram have the same constant value, i.e. the image intensity is uniformly distributed in [0, 255].
Low time complexity: an encryption algorithm with a high computational time is usually not recommended for practical applications. Therefore, an image encryption algorithm should have low time complexity.
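As an illustration of how the correlation and entropy metrics above are typically computed, here is a minimal NumPy sketch. The random arrays stand in for an original image and its encrypted counterpart, and the function names are arbitrary.

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two equal-sized images."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    return np.corrcoef(a, b)[0, 1]

def entropy(img, levels=256):
    """Shannon entropy (bits per pixel) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins: 0*log(0) is taken as 0
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
original = rng.integers(0, 128, size=(256, 256), dtype=np.uint8)  # stand-in image
cipher = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)    # stand-in cipher image

print(correlation(original, cipher))  # near 0 for a good encrypted image
print(entropy(cipher))                # near the maximum of 8 bits per pixel
```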
The technology for information security using digital methods is being enhanced through more powerful algorithms. Key lengths are chosen such that current computers running the best cipher-cracking algorithms would require an unreasonable amount of time to break the key. However, as encryption keys become longer, the processing speed of digital techniques goes down. To counter this trade-off between processing speed and security, a new technology was proposed in 1995 that uses physical keys based on the principles of classical optics. Owing to the speed of light, it is envisaged that data can be secured at unparalleled speed, with inherently parallel processing. Additionally, optics offers several degrees of freedom that can help encode information more securely [8–14]. There is also a natural match between optical processing and optical communications.
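The 1995 proposal referred to here is generally identified with double random phase encoding (DRPE), in which two random phase masks, one at the input plane and one at the Fourier plane of a 4f optical system, act as the keys. The following NumPy sketch simulates this scheme numerically; the array size, seed, and stand-in image are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 256
img = rng.random((N, N))                      # stand-in for the primary image

# The two keys: uniformly random phase masks attached to the input
# plane and to the Fourier plane of the 4f processor.
mask1 = np.exp(2j * np.pi * rng.random((N, N)))
mask2 = np.exp(2j * np.pi * rng.random((N, N)))

# Encryption: multiply by the first mask, Fourier transform (first lens),
# multiply by the second mask, inverse transform (second lens).
cipher = np.fft.ifft2(np.fft.fft2(img * mask1) * mask2)

# Decryption: undo each step with the complex conjugates of the masks.
decrypted = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(mask1 * 0 + mask2).conj().conj() * np.conj(mask2) / np.conj(mask2)) * np.conj(mask1)
```

Because the Fourier-plane mask whitens the spectrum, the cipher is a noise-like complex field; decryption simply applies the conjugate masks in reverse order.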
With the belief that cryptology based on optical principles would provide a more complex environment and be more resistant to attack than purely digital techniques, the development of optical cryptosystems has gained much emphasis [13,