Digital Forensics Tools used in Crime Investigation for Forgery Detection


Abstract

Image forensics plays an important role in digital forensics during the crime investigation process. As with the two sides of a coin, however, many anti-forensic tools are available to help criminals hide traces of forgery, and these too have evolved with advances in modern technology. The major focus of this paper is an overview of available forensic tools and the common image processing techniques they use to investigate crime-related digital traces.

Keywords— Digital Image Forensics, Digital Watermarking, Forgery detection, Intrinsic Fingerprints, Passive Blind Image Forensics, Source Identification, Image Processing, Image Tampering, Copy-move Detection

1 INTRODUCTION

Digital media has become the most useful tool for information transfer. Images and videos can express and convey information very efficiently. As a result, digital images and videos are used to present evidence in criminal investigations. People believe news when they see video of it; videos act as evidence of an incident, and in the same way video footage is used as a witness in a court of law.


With these benefits come some drawbacks. Criminals with strong knowledge of image processing can modify images and videos however they want without leaving obvious clues. Many free image editing tools are available on the internet, so malicious modification of images is becoming more and more common. It is possible to download professional image manipulation tools (e.g., Adobe Photoshop), to use image editing operations directly from web interfaces (e.g., Pixlr), or, even more easily, to automatically forge a picture using completely unsupervised tools (e.g., FaceSwap). If maliciously edited images are shared online or distributed through social media, their impact on opinion formation and fake news distribution can have serious consequences.

Digital image forensics is an emerging field that supports verifying the authenticity of digital images. It creates tools and techniques that extract intrinsic and extrinsic traces hidden within an image in order to recover its history. Inherent patterns are embedded in an image during the various phases of image capture or creation (synthesis), image processing, image compression and image storage; digital image forensics exploits these properties to solve various issues of image authentication and integrity assessment.

2 IMAGE FORENSICS (IF)

The digital forensics domain is developing considerably to address image forensics (IF) problems in different fields such as sports, medical imaging, legal services, and intelligence. Digital images can be presented as evidence in a court of law, and in such cases it becomes extremely important to prove their originality. IF plays a key role here by examining the authenticity and integrity of digital images. Different approaches have been presented for proving the authenticity of digital images; they are generally divided into active and passive approaches.

Figure 1: Illustration of the IF techniques.

2.1 Active Authentication

Many tools can easily create or manipulate a digital image. As a result, the authenticity of an image cannot be taken for granted; nowadays digital images are used as legal photographic evidence, so we cannot simply trust any digital document. Manipulation of an image may consist of many processing operations such as scaling, rotation, blurring, brightness adjustment and contrast change, or any combination of these operations. Doctoring an image means pasting one part of an image into another part of an image skillfully, without leaving any trace. Two important tools for establishing the authenticity of a digital image are the digital signature and watermarking.

2.1.1 Digital Signature

A digital signature is a cryptographic, mathematical scheme for demonstrating the authenticity of a digital document. Digital signatures are used for detecting image forgery and tampering. Digital signing is a robust process in which signature bits are extracted from the original image. In one such method the image is divided into blocks of 16x16 pixels. A secret key is used to generate random matrices with entries uniformly distributed in the interval [0, 1], and a low-pass filter is applied to each of these matrices to obtain a random smooth pattern. Systems typically generate digital image signatures using the following steps: 1) decompose the image using a parameterized wavelet transform; 2) extract the SDS; 3) cryptographically hash the extracted SDS and generate the crypto signature with the image sender's private key; 4) send the image and its associated crypto signature to the recipient.
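To make the hash-and-sign step concrete, the following is a minimal sketch of steps 3 and 4 above in Python, assuming the SDS feature bytes have already been extracted (the wavelet decomposition and feature extraction are not shown). It uses RSA-PSS signing from the widely used cryptography package; the key handling and padding choices are illustrative and not part of the scheme described above.

# Minimal sketch: hash-and-sign the extracted SDS, then verify on the recipient side.
# Assumes the SDS has already been extracted as a byte string (placeholder below).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def sign_sds(sds_bytes: bytes, private_key) -> bytes:
    # sign() hashes the SDS bytes internally with SHA-256 before signing
    return private_key.sign(
        sds_bytes,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

def verify_sds(sds_bytes: bytes, signature: bytes, public_key) -> bool:
    # Recipient recomputes the hash and checks the signature with the sender's public key
    try:
        public_key.verify(
            signature,
            sds_bytes,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

# Example: sender signs the extracted SDS, recipient verifies it
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sds = b"example SDS feature bytes"                     # placeholder for real extracted features
sig = sign_sds(sds, key)
print(verify_sds(sds, sig, key.public_key()))          # True: image features unchanged
print(verify_sds(b"tampered features", sig, key.public_key()))  # False: tampering detected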

2.1.2 Watermarking

Watermarking is a technique used for image forgery detection. There are many kinds of watermarking techniques. One type uses a checksum scheme that embeds the checksum into the least significant bits of pixels. Others add a maximal-length linear shift register sequence to the pixel data and then identify the watermark by computing the spatial cross-correlation function of the sequence and the watermarked image. These watermarks are designed so that they are invisible to the human eye, or so that they blend in with natural camera or scanner noise; some types of visible watermarks also exist. In addition, a visually undetectable watermarking scheme is available that can detect a change in a single pixel and can locate where the change occurred. Because watermarks must be embedded when the digital image is created, the approach is limited to settings where the image generation mechanism has built-in watermarking capability.
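As an illustration of the simplest kind of invisible watermark mentioned above, the following is a minimal sketch of least-significant-bit (LSB) embedding and extraction for an 8-bit grayscale image held in a NumPy array. It is an illustrative toy, not the specific checksum or shift-register schemes cited above.

# Minimal sketch of LSB watermark embedding and read-back for an 8-bit grayscale image.
import numpy as np

def embed_lsb(image: np.ndarray, watermark_bits: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of the first N pixels with the watermark bits."""
    flat = image.flatten().copy()
    n = watermark_bits.size
    flat[:n] = (flat[:n] & 0xFE) | watermark_bits
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the embedded bits back from the least significant bit plane."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, size=128, dtype=np.uint8)
marked = embed_lsb(cover, mark)
assert np.array_equal(extract_lsb(marked, 128), mark)   # watermark recovered intact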

Figure 2: Framework for Image Forgery Detection.

These active techniques have some limitations, as they require human intervention or specially equipped cameras. To overcome these problems, passive authentication techniques have been proposed.

2.2 Passive Authentication

Passive or blind forgery detection techniques use only the received image to assess its authenticity or integrity, without any signature or watermark of the original image from the sender. They are based on the assumption that, although digital forgeries may leave no visual clues of tampering, they are highly likely to disturb the underlying statistical properties or consistency of a natural scene image, introducing artifacts that result in various forms of inconsistency. These inconsistencies can be used to detect the forgery. This approach is popular because it needs no prior information about the image. Existing techniques identify various traces of tampering, detect them separately, and localize the tampered region.

2.2.1 General Framework for Forgery Detection

Forgery detection in images is a two-class problem: the main objective of a passive detection technique is to classify a given image as original or tampered. Most existing techniques extract features from the image, then choose a suitable classifier and classify those features. Here we describe a general structure of image tampering detection consisting of the steps shown in Figure 2.

Image preprocessing is the first step. Before the image is subjected to feature extraction, some preprocessing may be applied, such as enhancement, filtering, cropping, DCT transformation, or conversion from RGB to grayscale; the algorithms discussed hereafter may or may not involve this step. Next comes feature extraction. A feature set is selected for each class that differentiates it from the other classes but remains invariant within a class. The most desirable properties of the selected feature set are a small dimension, so that computational complexity is reduced, and a large interclass difference. This is the most vital step, and all algorithms rely mainly on it for forgery detection; it is discussed separately for each algorithm below. After this comes classifier selection. Based on the extracted feature set, an appropriate classifier is either selected or designed; generally a large training set yields a better-performing classifier. The extracted features may need some preprocessing so as to reduce their dimension, and thereby the computational complexity, without affecting the machine learning. The sole purpose of the classifier is to classify an image as either original or forged; various classifiers have been used, such as neural networks, SVM and LDA. Finally, some forgeries such as copy-move and splicing may require postprocessing involving operations like localization of the duplicated regions.
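The following is a minimal sketch of the feature extraction and classification stages of this framework, assuming a labelled set of original and tampered images. The feature extractor is a stand-in (a normalized grayscale histogram) rather than any published detector, and scikit-learn's SVM plays the role of the classifier.

# Minimal sketch of the Figure 2 pipeline: features -> classifier -> original/forged decision.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def extract_features(gray_image: np.ndarray, bins: int = 32) -> np.ndarray:
    """Placeholder feature extraction: normalized grayscale histogram."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 256))
    return hist / hist.sum()

# Synthetic stand-in data: rows are feature vectors, labels 0 = original, 1 = tampered
rng = np.random.default_rng(0)
X = np.vstack([extract_features(rng.integers(0, 256, (64, 64))) for _ in range(200)])
y = rng.integers(0, 2, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)                    # classifier selection + training
print("accuracy on held-out set:", clf.score(X_test, y_test))    # ~0.5 here, since labels are random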

Figure 3: Example image of typical copy-move forgery. Left: Original image Right: Tampered image

2.2.2 Copy-move Forgery Detection

Copy-move is the most popular and common image tampering technique because of the ease with which it can be carried out. It involves copying some region of an image and moving it to another region within the same image. Since the copied region belongs to the same image, its dynamic range and color remain compatible with the rest of the image. An example of copy-move forgery is shown in Figure 3.

The original image is forged to obtain the tampered image: the fountain has been disguised by copying a region from the same image and pasting it over it. A postprocessing operation such as blurring is used to reduce the effect of border irregularities between the two regions.

Among the initial attempts, Fridrich proposed methods to detect copy-move forgery. The discrete cosine transform (DCT) of the image blocks was used, and lexicographic sorting of the block features was applied to avoid the computational burden. Once sorted, adjacent identical pairs of blocks are considered to be copy-moved blocks. A block matching algorithm was used to balance performance and complexity. This method suffers from the drawback that it cannot detect small duplicated regions.
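The following is a minimal sketch in the spirit of this DCT block-matching approach, assuming an 8-bit grayscale image as a NumPy array. The quantization step, thresholds and offset filtering of the published method are simplified away, so the output is only a list of candidate pairs.

# Minimal sketch of DCT block matching: quantized low-frequency DCT features per
# overlapping block, lexicographic sorting, and pairing of adjacent identical rows.
import numpy as np
from scipy.fft import dctn

def copy_move_candidates(gray: np.ndarray, block: int = 16, q: float = 8.0):
    """Return pairs of block positions whose quantized DCT features match exactly."""
    h, w = gray.shape
    feats = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            c = dctn(gray[y:y + block, x:x + block].astype(float), norm="ortho")
            f = tuple(np.round(c[:4, :4].flatten() / q).astype(int))   # coarse low-frequency signature
            feats.append((f, (y, x)))
    feats.sort(key=lambda t: t[0])                 # lexicographic sorting of block features
    pairs = []
    for (f1, p1), (f2, p2) in zip(feats, feats[1:]):
        if f1 == f2:                               # adjacent identical rows -> candidate pair
            pairs.append((p1, p2))
    return pairs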

Popescu and Farid suggested a method using principal component analysis (PCA) on overlapping square blocks. The computational cost is considerably reduced, to O(Nt N log N), where Nt is the dimensionality of the truncated PCA representation and N the number of image pixels. Detection accuracy of 50% for a block size of 32x32 and 100% for a block size of 160x160 was obtained. Although this method has reduced complexity and is highly discriminative for large block sizes, accuracy reduces considerably for small block sizes and low JPEG qualities. To combat computational complexity, Langille and Gong proposed use of a k-dimensional tree, which searches for blocks with similar intensity patterns using matching techniques. The resulting algorithm has a complexity of O(Na Nb), where Na is the neighbourhood search size and Nb the number of blocks. This method has reduced complexity compared to the earlier methods.
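The following sketch illustrates the PCA step used by block-based detectors of this kind: overlapping blocks are stacked as rows and projected onto a truncated principal-component basis so that near-duplicate blocks map to nearby low-dimensional vectors. The sorting and duplicate-grouping stages are omitted, and the stride and component count are arbitrary illustrative choices.

# Minimal sketch of truncated PCA features for overlapping image blocks.
import numpy as np

def block_pca_features(gray: np.ndarray, block: int = 16, n_components: int = 8):
    """Stack overlapping blocks as rows, then project onto the top principal components."""
    h, w = gray.shape
    rows, positions = [], []
    for y in range(0, h - block + 1, 4):           # stride 4 just to keep the sketch fast
        for x in range(0, w - block + 1, 4):
            rows.append(gray[y:y + block, x:x + block].astype(float).ravel())
            positions.append((y, x))
    X = np.asarray(rows)
    X -= X.mean(axis=0)                            # center the block vectors
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # principal components via SVD
    return X @ Vt[:n_components].T, positions      # truncated representation per block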

Gopi et al. developed a model that used autoregressive coefficients as the feature vector and an artificial neural network (ANN) classifier to detect image tampering. 300 feature vectors from different images are used to train the ANN, and the ANN is tested with another 300 feature vectors. The hit rate in identifying digital forgery is 77.67% in the experiment in which manipulated images were used to train the ANN and 94.83% in the experiment in which a database of forged images was used.

Myna et al. proposed a method which uses log-polar coordinates and wavelet transforms to detect and also localize copy-move forgery. Applying a wavelet transform to the input image results in dimensionality reduction, and an exhaustive search is carried out to identify similar blocks in the image by mapping them to log-polar coordinates; phase correlation is used as the similarity criterion. The advantages of this method are the reduced image size and the localization of duplicated regions.
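Phase correlation, the similarity criterion used above, can be sketched as follows for two equally sized patches; a sharp peak in the correlation surface indicates a (translated) match. The log-polar mapping and the wavelet decomposition of the full method are not shown here.

# Minimal sketch of phase correlation between two patches via the normalized cross-power spectrum.
import numpy as np

def phase_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Return the peak of the phase-only correlation surface of two equally sized patches."""
    Fa, Fb = np.fft.fft2(a.astype(float)), np.fft.fft2(b.astype(float))
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12            # keep phase only, guard against divide-by-zero
    corr = np.real(np.fft.ifft2(cross))
    return corr.max()                          # close to 1.0 for near-identical patches

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, (32, 32))
print(phase_correlation(patch, patch))                              # approximately 1.0
print(phase_correlation(patch, rng.integers(0, 256, (32, 32))))     # much smaller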

XiaoBing and ShengMin developed a technique for localization of copy-move image forgery by applying SVD, which provides algebraic and geometric invariant feature vectors. The proposed method has reduced computational complexity and is robust against retouching operations. It applies a radix sort to the overlapping blocks, followed by median filtering and connected component analysis (CCA) for tamper detection; the method localizes the detection without affecting image quality and is simple and efficient. Using radix sort as an alternative to lexicographic sorting considerably improves the time efficiency.

Bashar et al. developed a method that detects duplication using two robust features based on DWT and kernel principal component analysis (KPCA). The KPCA-based projected vectors and the multiresolution wavelet coefficients of the image blocks are organized in the form of a matrix on which lexicographic sorting is carried out. Translation-flip and translation-rotation duplications are also identified using global geometric transformation and a labeling technique to detect the forgery. This method eliminates the offset-frequency threshold that otherwise has to be manually adjusted in other detection methods.

Sutthiwan et al. presented a method for passive-blind color image forgery detection which combines image features extracted from the image luminance by applying a rake transform with features extracted from the image chroma using edge statistics. The technique achieves 99% accuracy.

Liu et al. proposed the use of circular blocks and Hu moments to detect regions that have been rotated in the tampered image. Sekeh et al. suggested a technique based on clustering of blocks implemented using a local block matching method. Huang et al. worked on enhancing the work done by Fridrich et al. in terms of processing speed. The algorithm is shown to be straightforward and simple, and capable of detecting duplicated regions with good sensitivity and accuracy; however, there is no mention of the robustness of the algorithm against geometric transformations.

Xunyu and Siwei presented a technique that detects region duplication by estimating the transform between matched SIFT keypoints, making it robust to distortions of the duplicated region. The algorithm achieves an average detection accuracy of 99.08%, but the method has one limitation: duplication in small regions is hard to detect because very few keypoints are available there.

Kakar and Sudha developed a new technique based on transform-invariant features which detects copy-paste forgeries but requires some postprocessing based on the MPEG-7 image signature tools. Feature matching that uses the inherent constraints in matched feature pairs to improve the detection of cloned regions is employed, resulting in a feature matching accuracy of more than 90%.

Muhammad et al. proposed a copy-move forgery detection technique based on the dyadic wavelet transform (DyWT). DyWT, being shift invariant, is more suitable than DWT. The image is decomposed into approximation and detail subbands, which are further divided into overlapping blocks, and the similarity between blocks is calculated. Pairs are sorted based on high similarity and dissimilarity, and matched pairs are obtained from the sorted list using thresholding.

Hong Shao et al. proposed a phase correlation technique based on polar expansion and adaptive band limitation. The Fourier transform of the polar expansion on each pair of overlapping windows is calculated, and an adaptive band-limitation procedure is applied to obtain a matrix in which the peak is effectively enhanced. After estimating the rotation angle of the forgery region, a searching algorithm in the sense of seed filling is executed to reveal the whole duplicated region. This approach can locate the duplicated region with high accuracy and is robust to rotation, illumination adjustment, blur and JPEG compression.

Gavin Lynch et al. developed an expanding block algorithm for duplicate region detection. In this technique the image is divided into overlapping blocks of size SxS, and for each block the gray value is calculated as its dominant feature. Based on comparison of this dominant feature a connection matrix is built. If the connection matrix has a row of zeros, then the block corresponding to this row is not connected to any other block in the bucket; in this way duplicate regions are detected. This technique is good at identifying the location and shape of the forged regions, and direct block comparison is avoided without sacrificing performance time.

The copy-move detection proposed by Sekeh offers improved time complexity by using sequential block clustering. Clustering reduces the search space in block matching and improves time complexity, as it eliminates several block-comparing operations. When the number of clusters is greater than a threshold, local block matching is more efficient than a lexicographic sorting algorithm.

Table 1: Various copy-move forgery detection algorithms

A detection and localization method for copy-move forgery based on SIFT features has also been proposed. The novelty of the work consists in introducing a clustering procedure which operates in the domain of the geometric transformation and deals with multiple clonings too.
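The following is a minimal sketch of the first stage of such SIFT-based detectors: matching each SIFT keypoint against the other keypoints of the same image, so that matches between distant keypoints become candidate cloned regions. The clustering over geometric transformations described above is not implemented here, and the sketch assumes opencv-python 4.4 or later.

# Minimal sketch of within-image SIFT matching for copy-move candidate detection.
import cv2
import numpy as np

def candidate_clone_matches(gray: np.ndarray, min_distance: float = 30.0):
    """Match each SIFT keypoint to its nearest neighbour elsewhere in the same 8-bit image."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None or len(keypoints) < 2:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descriptors, descriptors, k=2)   # k=2: skip the self-match
    candidates = []
    for m in matches:
        if len(m) < 2:
            continue
        other = m[1]                                 # nearest neighbour that is not the point itself
        p1 = np.array(keypoints[other.queryIdx].pt)
        p2 = np.array(keypoints[other.trainIdx].pt)
        if np.linalg.norm(p1 - p2) > min_distance:   # ignore trivially close keypoint pairs
            candidates.append((tuple(p1), tuple(p2)))
    return candidates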

A robust method based on DCT and SVD has also been proposed to detect copy-move forgery. The image is divided into fixed-size overlapping blocks and the 2D DCT is applied to each block; the DCT coefficients are then quantized to obtain a more robust representation of each block. These quantized blocks are divided into non-overlapping sub-blocks and SVD is applied to each sub-block, and features are extracted to reduce the dimension of each block using its largest singular value. The feature vectors are lexicographically sorted, and duplicated image blocks are matched using a predefined shift-frequency threshold.
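The block feature described above can be sketched as follows: the 2D DCT of a block is quantized, split into sub-blocks, and only the largest singular value of each sub-block is kept. The lexicographic sorting and shift-frequency matching steps are not shown, and the quantization step size is an arbitrary illustrative choice.

# Minimal sketch of the quantized-DCT + SVD block feature.
import numpy as np
from scipy.fft import dctn

def dct_svd_feature(block: np.ndarray, sub: int = 4, q: float = 16.0) -> np.ndarray:
    """Return one largest-singular-value feature per sub-block of the quantized 2D-DCT."""
    coeffs = np.round(dctn(block.astype(float), norm="ortho") / q)   # quantized 2D-DCT
    n = block.shape[0]
    feats = []
    for y in range(0, n, sub):
        for x in range(0, n, sub):
            s = np.linalg.svd(coeffs[y:y + sub, x:x + sub], compute_uv=False)
            feats.append(s[0])                                       # largest singular value
    return np.asarray(feats)

example_block = np.random.default_rng(2).integers(0, 256, (16, 16))
print(dct_svd_feature(example_block).shape)     # (16,) features for a 16x16 block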

All the methods discussed above that detect and localize copy-move forgery and cloned regions in an image are computationally complex and require human interpretation of the results.

2.2.3 Image Splicing

Image splicing involves the composition or merging of two or more images, changing the original image considerably to produce a forged image. When images with differing backgrounds are merged, it becomes very difficult to make the borders and boundaries indiscernible. Figure 4 below shows an example of image splicing where the faces of two different people are combined to form a forged image.

Splicing detection is a complex problem in which the composite regions are investigated by a variety of methods. The presence of abrupt changes between the different regions that are combined, and between them and their backgrounds, provides valuable traces for detecting splicing in the image under consideration. Farid suggested a method based on bispectral analysis to detect the unnatural higher-order correlations introduced into the signal by the forgery process, and implemented it successfully for detecting splicing in human speech.

Ng and Chang suggested an image-splicing detection method based on the use of bicoherence magnitude and phase features. A detection accuracy of 70% was obtained. The same authors later developed a model for detecting the discontinuity caused by abrupt splicing using bicoherence.

Fu et al. proposed a technique that implemented the Hilbert-Huang transform (HHT) to obtain features for classification. A statistical natural image model defined by moments of characteristic functions was used to differentiate the spliced images from the original images.

Chen et al. proposed a splicing detection method that obtains image features from moments of the wavelet characteristic function and from 2-D phase congruency, which is a sensitive measure of transitions in a spliced image.

Figure 4: Spliced image of Osama Bin Laden which went viral on the internet

Zhang et al. developed a splicing detection method that utilizes moment features extracted from the multi-size block discrete cosine transform (MBDCT) and image quality metrics (IQMs), which are sensitive to splicing. It measures the statistical difference between spliced and original images and has a broad area of application.

Ng and Tsui developed a method that uses linear geometric invariants from a single image and thus extracted the camera response function (CRF) signature features from surfaces linear in image irradiance. The authors also developed an edge-profile-based technique for extraction of the CRF signature from a single image; in the proposed technique, reliable extraction depends on the edges being straight and wide.

Qing Zhong and Andrew explained a technique based on the extraction of neighboring joint density features of the DCT coefficients; an SVM classifier is then applied for image splicing detection. The shape parameter of the generalized Gaussian distribution (GGD) of the DCT coefficients is utilized to measure the image complexity.

Wang et al. developed a splicing detection method for color images based on the gray level co-occurrence matrix (GLCM); the GLCM of the edge image of the image saturation is used. Zhenhua et al. developed a splicing detection method based on order statistic filters (OSF), in which feature extraction is guided by an edge sharpness measure and visual saliency. Fang et al. provide an example that makes use of the sharp boundaries in color images; the technique looks for the consistency of color division in the pixels neighboring the boundary. The authors suggest that irregularity at the color edge is critical evidence that the image has been tampered with.

Zhang et al. developed a method that makes use of the planar homography constraint to roughly identify the fake region, together with an automatic extraction technique using graph cut with automatic feature selection to isolate the fake object. Zhao et al. developed a technique based on chroma space using gray level run-length texture features. Four gray level run-length run-number (RLRN) vectors in different directions, obtained from de-correlated chroma channels, were used as distinguishing features for detection of image splicing, and an SVM was utilized as the classifier. Liu et al. [71] developed a technique based on measuring the consistency of illumination; consistency is measured in shadows by formulating the color characteristics of shadows, quantified by the shadow matte value.

The methods mentioned above have some limitations; for example, the detection methods fail when measures such as blurring are used to conceal the sharp edge disturbances after splicing. The requirement that edges be wide for reliable extraction is also a limitation. Moreover, minor and localized tampering may go undetected. Table 2 below gives a comparison of a few image splicing detection methods.

Table 2: Comparison of Splicing Detection Methods

2.2.4 Image Retouching

Image retouching is another type of image forgery that is most typically used for commercial and aesthetic applications. The retouching operation is carried out largely to enhance or reduce image features. Retouching is also done to create a convincing composite of two images, which may require rotation, resizing or stretching of one of the images. An example is shown below in Figure 5; this photograph was released by the Iranian army to exaggerate its strength by showing four missiles instead of the three in the original image.

Image retouching detection is carried out by attempting to find the blurring, enhancements, color changes and illumination changes in the forged image. Detection is easy if the original image is available, but blind detection is a difficult task. For this kind of forgery two types of modification are made, either global or local. Local modification is typically made in copy-move and splicing forgery, whereas the contrast enhancement carried out in retouching is done at the global level, and these global changes are investigated to detect tampering. For illumination and contrast changes, global modification is carried out.

A classifier is designed to measure the distortion between the doctored and the original image; the former may involve several operations such as changes in blurring and brightness. The classifier performs well when a number of operations have been carried out on the image.

The algorithm described by M. C. Stamm and K. J. R. Liu is a technique that not only detects global enhancements but also suggests methods for detecting histogram-based manipulation. A similar approach, based on a probabilistic model of pixel values, is elaborated to detect contrast enhancement: histogram entries that are most likely to occur with the corresponding enhancement artifacts are identified. This technique gives very accurate results when the enhancement is not standard. A number of enhancement and gamma correction localization algorithms are available that can easily detect image modification and enhancement both globally and locally.


The method in [16] detects contrast changes made through global modification by detecting positive or negative changes in the image based on binary similarity measures and image quality metrics (IQMs). IQMs may provide substantial traces for detecting changes in the statistics, while binary similarity measure features provide the bit-level differences. Appreciably accurate and effective results are produced when the image is highly modified.

Cao et al. [17] developed a method for detection of gamma correction for image forgery detection. The technique relies on estimating histogram characteristics that are calculated from the patterns of the peak-gap features. These features are discriminated against a precomputed histogram for gamma correction detection in images. The results suggest that this method is highly effective for both global and local gamma correction modifications.

Figure 5: Resampled image: Iranian army's missile launch

In [18] a method for detection of retouching is suggested based on bi-Laplacian filtering. This technique looks for matching blocks on the basis of a KD-tree for each block of the image. It works well on uncompressed images and compressed high-resolution images; accuracy also depends on the area of the tampered region for highly compressed images.

Two novel algorithms were developed in [19] to detect manipulations involving contrast enhancement in digital images. The first focuses on the detection of global contrast enhancement applied to JPEG-compressed images: the histogram peak/gap artifacts incurred by JPEG compression and by pixel value mappings are analyzed theoretically and distinguished by identifying the zero-height gap fingerprints. A second algorithm in the same paper identifies composite images created by imposing contrast adjustment on either one or both source regions; the positions of the detected block-wise peak/gap bins are clustered to recognize the contrast enhancement mappings applied to different source regions. Both algorithms are very effective.
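The histogram zero-height-gap cue exploited in [19] can be illustrated with the following sketch: a pixel-value remapping such as contrast stretching tends to leave isolated empty bins between populated ones in the gray-level histogram. The statistical test and JPEG-specific modelling of the paper are not reproduced; the function below merely counts isolated empty bins as a crude indicator.

# Minimal sketch of counting zero-height gaps in a gray-level histogram.
import numpy as np

def zero_height_gap_count(gray: np.ndarray) -> int:
    """Count empty histogram bins that sit between two populated bins."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    gaps = 0
    for k in range(1, 255):
        if hist[k] == 0 and hist[k - 1] > 0 and hist[k + 1] > 0:
            gaps += 1
    return gaps

rng = np.random.default_rng(3)
original = rng.integers(0, 256, (256, 256)).astype(np.uint8)
enhanced = np.clip(1.3 * (original.astype(float) - 128) + 128, 0, 255).astype(np.uint8)
print(zero_height_gap_count(original), zero_height_gap_count(enhanced))  # enhanced has many more gaps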

Techniques based on the photo-response non-uniformity (PRNU), a form of camera fingerprint, which detect the absence of the camera PRNU, are explored in [20]. This algorithm detects image forgeries using sensor pattern noise; a Markov random field takes decisions jointly on the whole image instead of individually for each pixel. The algorithm shows better performance and wider applicability.
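The PRNU idea can be sketched as follows: the noise residual of an image block is correlated with the camera's reference pattern noise, and a low correlation suggests the block did not originate from that camera. The denoiser below is a plain Gaussian filter stand-in, the reference pattern is assumed to be given, and the Markov random field decision stage of [20] is not shown.

# Minimal sketch of PRNU-style noise-residual correlation for a single block.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(gray: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Residual = image minus its denoised version (crude stand-in for PRNU extraction)."""
    g = gray.astype(float)
    return g - gaussian_filter(g, sigma)

def prnu_correlation(block: np.ndarray, reference_prnu: np.ndarray) -> float:
    """Normalized correlation between a block's residual and the camera reference pattern."""
    r = noise_residual(block).ravel()
    k = reference_prnu.ravel().astype(float)
    r -= r.mean()
    k = k - k.mean()
    return float(np.dot(r, k) / (np.linalg.norm(r) * np.linalg.norm(k) + 1e-12))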

A comparison of methods for the detection of image retouching is given below in Table 3.

Many methods have been proposed and discussed for retouching forgery. Again, the limitation remains that most methods work well only if the image is greatly modified in comparison to the original. Moreover, the human intervention required to interpret the results makes them non-blind techniques.

2.2.5 Lighting Condition

Images that are combined during tampering are often taken under varying lighting conditions, and it is difficult to match the lighting conditions of the combined pictures. This lighting inconsistency within the composite image can be used for detection of image tampering. An initial attempt in this regard was made by Johnson and Farid [21]. They proposed a method for estimating the direction of an illuminating light source within one degree of freedom to detect forgery. By estimating the direction of the light source for different objects and people in an image, inconsistencies in lighting are uncovered and tampering can be detected.
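The least-squares idea underlying such light-direction estimation can be sketched as follows: assuming Lambertian shading, the observed intensity is modelled as I = N·L + A (an ambient term), and L is recovered from surface normals along an object contour. Normal estimation from a real image and the one-degree-of-freedom constraint of the published method are not shown; the inputs here are synthetic.

# Minimal sketch of least-squares light-direction estimation from normals and intensities.
import numpy as np

def estimate_light_direction(normals: np.ndarray, intensities: np.ndarray) -> np.ndarray:
    """Solve [N | 1] [Lx, Ly, A]^T = I in the least-squares sense; return the unit light direction."""
    A = np.hstack([normals, np.ones((normals.shape[0], 1))])
    solution, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    light = solution[:2]
    return light / np.linalg.norm(light)

# Synthetic check: 2-D normals on a circle lit from a known direction
angles = np.linspace(0, 2 * np.pi, 50, endpoint=False)
normals = np.column_stack([np.cos(angles), np.sin(angles)])
true_light = np.array([0.8, 0.6])
mask = normals @ true_light > 0                               # keep only lit (front-facing) points
intensities = normals[mask] @ true_light + 0.1                # Lambertian shading + ambient term
print(estimate_light_direction(normals[mask], intensities))   # approximately [0.8, 0.6]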

Table 3: Comparison of Methods for Detection of Retouching Forgery

Johnson and Farid [23] proposed a model based on lighting inconsistencies due to the presence of multiple light sources. This model is motivated by the earlier model [21] but generalizes it by estimating more complex lighting, and it can be adapted to a single lighting source.

Johnson and Farid [24] estimated the 3-D direction to a light source by means of the light's reflection in the human eye. These reflections, referred to as specular highlights, are a powerful clue to the location and shape of the light sources. Inconsistencies in the location of the light source can be used to detect tampering.

Kee and Farid [26] described a method to estimate a 3-D lighting environment with a low-dimensional model and to approximate the model's parameters from a single image. Inconsistencies in the lighting model are used as an indication of forgery. Yingda et al. [27] described a technique based on inconsistency in the light source direction; the method, called the neighborhood method, was used to calculate the surface normal matrix of the image in a blind identification algorithm with a detection rate of 87.33%.

Fan et al. [28] proposed a technique demonstrating that forgery detection methods based on 2-D lighting models can easily be fooled, and presented a promising technique based on shape from shading. This approach is more general, but the problem of estimating the 3-D shapes of objects remains.

Carvalho [29] described a technique for image forgery detection based on inconsistencies in the color of the illumination. Information from physics-based and statistics-based illuminant estimators on image regions of similar material is used, and texture- and edge-based features are extracted from these. An SVM meta-fusion classifier is employed, and a detection rate of 86% is obtained. This approach requires negligible user interaction. The advantage of these methods is that they make the lighting inconsistencies in the tampered image very difficult to hide. A table comparing a few of these methods is given below.

3 Conclusion

Many forgery detection techniques have been proposed by different researchers to date. In this paper a brief overview of forgery detection and the various ways of tampering with images has been given, and an attempt is made to bring together various potential algorithms that signify improvement in image authentication techniques. From this survey of image authentication techniques we infer that passive or blind techniques, which require no prior information about the image under consideration, have a major advantage over active techniques: they need no special equipment to insert a code into the image at the time of generation.

Table 4: Comparison of Methods for Detection of Forgeries based on Lighting Conditions

The techniques developed so far are mostly capable only of detecting the forgery, and only a few can localize the tampered area. There are many issues with currently available technologies: most require a human operator, and automating them will be difficult.

In practice, since an image forgery analyst or crime investigator may not know which forgery technique was used by the criminal to tamper with the image, relying on a single specific authentication technique may not be reasonable. Hence there is still an urgent need for forgery detection techniques that can detect any type of forgery. Current algorithms are time-consuming and complex in nature; there is a need to develop techniques that can detect all types of forgery with lower computational complexity and high efficiency.

References

[1] N. L. Htun, M. M. S. Thwin and C. C. San, 2018, "Evidence Data Collection with ANDROSICS Tool for Android Forensics," 2018 10th International Conference on Information Technology and Electrical Engineering (ICITEE), Kuta, pp. 353-358. doi: 10.1109/ICITEED.2018.8534760

[2] S. K. Yarlagadda et al., "Shadow Removal Detection and Localization for Forensics Analysis," ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, 2019, pp. 2677-2681. doi: 10.1109/ICASSP.2019.8683695

[3] T. N. C. Doan, F. Retraint and C. Zitzmann, "Blind forensics tool of falsification for RAW images," 2017 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Bilbao, 2017, pp. 018-023. doi: 10.1109/ISSPIT.2017.8388312

[4] S. Wu, X. Xiong, Y. Zhang, Y. Tang and B. Jin, "A general forensics acquisition for Android smartphones with qualcomm processor," 2017 IEEE 17th International Conference on Communication Technology (ICCT), Chengdu, 2017, pp. 1984-1988. doi: 10.1109/ICCT.2017.8359976

[5] S. Wu, X. Xiong, Y. Zhang, Y. Tang and B. Jin, "A general forensics acquisition for Android smartphones with qualcomm processor," 2017 IEEE 17th International Conference on Communication Technology (ICCT), Chengdu, 2017, pp. 1984-1988. doi: 10.1109/ICCT.2017.8359976

[6] J. Waleed, D. A. Abdullah and M. H. Khudhur, "Comprehensive Display of Digital Image Copy-Move Forensics Techniques," 2018 International Conference on Engineering Technology and their Applications (IICETA), Al-Najaf, 2018, pp. 155-160. doi: 10.1109/IICETA.2018.8458084

[7] Z. Chen, B. Tondi, X. Li, R. Ni, Y. Zhao and M. Barni, "Secure Detection of Image Manipulation by Means of Random Feature Selection," in IEEE Transactions on Information Forensics and Security, vol. 14, no. 9, pp. 2454-2469, Sept. 2019. doi: 10.1109/TIFS.2019.2901826

[8] S. Li, Q. Sun and X. Xu, "Forensic Analysis of Digital Images over Smart Devices and Online Social Networks," 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), Exeter, United Kingdom, 2018, pp. 1015-1021. doi: 10.1109/HPCC/SmartCity/DSS.2018.00168

[9] H. M. S. Parvez, S. Sadeghi, H. A. Jalab, A. R. Al-Shamasneh and D. M. Uliyan, "Copy-move Image Forgery Detection Based on Gabor Descriptors and K-Means Clustering," 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE), Shah Alam, 2018, pp. 1-6. doi: 10.1109/ICSCEE.2018.8538432

[10] V. Tuba, R. Jovanovic and M. Tuba, "Digital image forgery detection based on shadow HSV inconsistency," 2017 5th International Symposium on Digital Forensic and Security (ISDFS), Tirgu Mures, 2017, pp. 1-6. doi: 10.1109/ISDFS.2017.7916505

[11] S. Dadkhah, M. Koppen, S. Sadeghi, K. Yoshida, H. A. Jalab and A. A. Manaf, "An efficient ward-based copy-move forgery detection method for digital image forensic," 2017 International Conference on Image and Vision Computing New Zealand (IVCNZ), Christchurch, 2017, pp. 1-6. doi: 10.1109/IVCNZ.2017.8402472

[12] R. Zou, X. Lv and B. Wang, "Blockchain-based photo forensics with permissible transformations," Computers and Security, vol. 87, art. no. 101567, 2019. doi: 10.1016/j.cose.2019.101567

[13] P. Ramesh Babu and E. Sreenivasa Reddy, "A novel framework design for semantic based image retrieval as a cyber forensic tool," International Journal of Innovative Technology and Exploring Engineering, vol. 8, no. 10, pp. 2801-2808, 2019. doi: 10.35940/ijitee.J9593.0881019

[14] J. Ravan and Thanuja, "Image forgery Detection against Forensic Image Digital Tampering," 2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS), Belgaum, India, 2018, pp. 315-321. doi: 10.1109/CTEMS.2018.8769121

[15] W. v. d. Meer, K. R. Choo, M. Kechadi and N. Le-Khac, "Investigation and Automating Extraction of Thumbnails Produced by Image Viewers," 2017 IEEE Trustcom/BigDataSE/ICESS, Sydney, NSW, 2017, pp. 1076-1081. doi: 10.1109/Trustcom/BigDataSE/ICESS.2017.355

[16] G. Boato, F. G. B. D. Natale and P. Zontone, "How digital forensics may help assessing the perceptual impact of image formation and manipulation", Proc. Fifth Int. Workshop on Video Processing and Quality Metrics for Consumer Electronics – VPQM 2010, (2010).

[17] G. Cao, Y. Zhao and R. Ni, "Forensic estimation of gamma correction in digital images", Proc. 17th IEEE Int. Conf. on Image Processing, (ICIP'2010), (2010), pp. 2097–2100.

[18] X. F. Li, X. J. Shen and H. P. Chen, "Blind identification algorithm for the retouched images based on biLaplacian", Comput. Appl., vol. 31, (2011), pp. 239–242.

[19] G. Cao, Y. Zhao, R. Ni and X. Li, "Contrast Enhancement-Based Forensics in Digital Images", IEEE Transactions on Information Forensics and Security, vol. 9, no. 3, (2014), pp. 515-525.

[20] G. Chierchia, G. Poggi, C. Sansone and L. Verdoliva, "A Bayesian-MRF Approach for PRNU-Based Image Forgery Detection", IEEE Transactions on Information Forensics and Security, vol. 9, no. 4, (2014), pp. 554-567.

[21] M. Johnson and H. Farid, "Exposing digital forgeries by detecting inconsistencies in lighting", Proc. ACM multimedia and security workshop, (2005), pp. 1–10.

[22] M. Johnson and H. Farid, "Exposing digital forgeries by detecting inconsistencies in lighting", Proc. ACM multimedia and security workshop, (2005), pp. 1–10.

[23] M. Johnson and H. Farid, "Exposing digital forgeries in complex lighting environments", IEEE Trans Inf Forensics Security, vol. 3, no. 2, (2007), pp. 450–61.

[24] M. Johnson and H. Farid, "Exposing digital forgeries through specular highlights on the eye", Proc. International workshop on information hiding, (2007), pp. 311–25.

[25] H. Chen, S. Xuanjing and Y. Lv, "Blind Identification Method for Authenticity of Infinite Light Source Images", Fifth International Conference on Frontier of Computer Science and Technology (FCST), (2010), pp. 131-135.

[26] E. Kee and H. Farid, "Exposing digital forgeries from 3-D lighting environments", IEEE International Workshop on Information Forensics and Security (WIFS), (2010), pp. 1-6.

[27] L. Yingda, S. Xuanjing and C. Haipeng, "An improved image blind identification based on inconsistency in light source direction", Supercomput, vol. 58, no. 1, (2011), pp. 50–67.

[28] W. Fan, K. Wang, F. Cayre and Z. Xiong, "3D Lighting-Based Image Forgery Detection Using ShapeFromShading", 20th European Signal Processing Conference EUSIPCO, (2012), pp. 1777-1781.

[29] T. J. De Carvalho, C. Riess, E. Angelopoulou and H. Pedrini, "Exposing Digital Image Forgeries by Illumination Color Classification", IEEE Transactions on Information Forensics and Security, vol. 8, no. 7, (2013), pp. 1182–1194.

 
