In recent years many image forensic (IF) algorithms have been proposed to reveal traces of processing or tampering. On the other hand, Anti-Forensic (AF) tools have also been developed to help the forger remove editing footprints. Inspired by the fact that it is much harder to commit a perfect crime when the forensic analyst uses a multi-clue investigation strategy, we analyse the possibility offered by the adoption of a data fusion framework in a Counter-Anti-Forensic (CAF) scenario. We do so by adopting a theoretical framework, based on the Dempster-Shafer Theory of Evidence, to synergistically merge information provided by IF tools and CAF tools, whose goal is to reveal traces introduced by anti-forensic algorithms. The proposed system accounts for the non-trivial relationships between IF and CAF techniques; for example, in some cases the outputs from the former are expected to contradict the output from the latter. We evaluate the proposed method within a representative forensic task, that is, splicing detection in JPEG images, with the forger trying to conceal traces using two different counter-forensic methods. Results show that decision fusion strongly limits the effectiveness of AF methods.
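As a rough illustration of the kind of multi-clue fusion the framework relies on, the sketch below combines the outputs of a hypothetical IF tool and a hypothetical CAF tool with Dempster's rule of combination over the frame {tampered, authentic}. The abstract does not specify the belief assignments actually used, so all mass values and names here are illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozenset) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence, combination undefined")
    # normalise by the non-conflicting mass
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Illustrative masses over the frame {T (tampered), A (authentic)}:
T, A = frozenset("T"), frozenset("A")
theta = T | A                            # ignorance
m_if  = {T: 0.6, A: 0.1, theta: 0.3}     # hypothetical image-forensic tool output
m_caf = {T: 0.5, A: 0.2, theta: 0.3}     # hypothetical counter-anti-forensic tool output
print(dempster_combine(m_if, m_caf))
```

Conflicting responses (e.g. an IF detector fooled by the anti-forensic tool while a CAF detector reveals it) are handled explicitly through the normalisation term rather than being averaged away.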
Starting from the concept that many image forensic tools are based on the detection of some feature revealing a particular aspect of the history of an image, in this work we model the counter-forensic attack as the injection of a specific fake feature pointing to the same history as an authentic reference image. We propose a general attack strategy that does not rely on a specific detector structure. Given a source image x and a target image y, the adversary processes x in the pixel domain, producing an attacked image x̃, perceptually similar to x, whose feature f(x̃) is as close as possible to the feature f(y) computed on y. Our proposed counter-forensic attack consists of the constrained minimization of the feature distance Φ(z) = ||f(z) - f(y)|| through iterative methods based on gradient descent. To overcome the intrinsic limit due to the numerical estimation of the gradient on large images, we propose the application of a feature decomposition process, which reduces the problem to many subproblems defined on the blocks into which the image is partitioned. The proposed strategy has been tested by attacking three different features, and its performance has been compared to state-of-the-art counter-forensic methods.
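The following is a minimal sketch of the block-wise attack described above, assuming a generic feature function f supplied by the caller and a crude L-infinity bound as the perceptual constraint; the gradient is estimated numerically by central differences, which is exactly the costly step that motivates the block decomposition.

```python
import numpy as np

def numerical_gradient(phi, z, eps=1e-3):
    """Central-difference estimate of the gradient of phi at z (small blocks only)."""
    g = np.zeros_like(z, dtype=float)
    it = np.nditer(z, flags=["multi_index"])
    while not it.finished:
        idx = it.multi_index
        zp, zm = z.copy(), z.copy()
        zp[idx] += eps
        zm[idx] -= eps
        g[idx] = (phi(zp) - phi(zm)) / (2 * eps)
        it.iternext()
    return g

def attack_block(x_block, f, f_target, steps=100, lr=0.5, max_dist=8.0):
    """Push f(x_block) towards f_target by projected gradient descent,
    keeping the attacked block within max_dist (L_inf) of the original."""
    phi = lambda z: np.linalg.norm(f(z) - f_target)
    z = x_block.astype(float).copy()
    for _ in range(steps):
        z = z - lr * numerical_gradient(phi, z)
        z = np.clip(z, x_block - max_dist, x_block + max_dist)  # crude perceptual constraint
        z = np.clip(z, 0, 255)
    return z
```

To attack a full image, the same routine would be run on every block of x, with the target feature taken from the corresponding block of y.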
An optical, noninvasive inspection tool is presented to better characterize, in vivo, biological tissues such as human skin. The proposed method exploits a multispectral imaging device to acquire a set of images in the visible and NIR range. This kind of information can be very helpful to improve the early diagnosis of melanoma, a very aggressive cutaneous neoplasm whose incidence and mortality continue to rise worldwide. Currently, noninvasive methods (i.e. dermoscopy) have improved melanoma detection, but the definitive diagnosis is still achieved only by an invasive method (histopathological observation of the excised lesion). The multispectral system we developed is capable of imaging layers of structures placed at increasing depths, thanks to the fact that light propagates into the skin and reaches different depths depending on its wavelength. This makes it possible to image many features that are barely visible, or not visible at all, in the clinical and dermoscopic examination. A new semeiotics is proposed to describe the content of multispectral images. Dermoscopic criteria can be easily applied to describe each image in the set; however, inter-image correlations need new, suitable descriptors. A first group of new parameters describes how the dermoscopic features vary across the set of images. Further aspects are then introduced: for example, the longest wavelength at which structures can still be detected gives an estimate of the maximum depth reached by the pigmented lesion, while a bright-to-dark transition between the wavebands in the violet-to-blue range reveals the presence of the blue-whitish veil, a further malignancy marker.
Melanoma is a very aggressive cutaneous neoplasm whose incidence and mortality continue to rise worldwide. Identification of initial melanoma may be difficult, because it may be clinically, and sometimes also dermoscopically, indistinguishable from benign lesions. Currently, a definitive diagnosis is made only by histopathological observation of the excised lesion. Several tools have been developed to help detect malignant lesions. Dermoscopy highlights numerous characteristic features of the lesion and of the pigmented network. The method we propose exploits a multispectral imaging device to acquire a set of images in the visible and NIR range. Thanks to the fact that light propagates into the skin and reaches different depths depending on its wavelength, such a system is capable of imaging layers of structures placed at increasing depths. A new semeiotics is therefore proposed to describe the content of such images. Dermoscopic criteria can be easily applied to describe each image in the set; however, inter-image correlations need new, suitable descriptors. A first group of new parameters describes how the dermoscopic ones vary across the set of images. Further features are then introduced: for example, the longest wavelength at which structures can still be detected gives an estimate of the maximum depth reached by the pigmented lesion, while a bright-to-dark transition between the wavebands in the violet-to-blue range reveals the presence of the blue-whitish veil, a further malignancy marker.
Though the current state of the art of image forensics makes it possible to acquire very interesting information about image history, all the instruments developed so far focus on the analysis of single images. It is the aim of this
paper to propose a new approach that moves the forensics analysis further, by considering groups of images
instead of single images. The idea is to discover dependencies among a group of images representing similar or
equal contents in order to construct a graph describing image relationships. Given the pronounced effect that
images posted on the Web have on opinions and bias in the networked age we live in, such an analysis could be
extremely useful for understanding the role of pictures in the opinion forming process. We propose a theoretical
framework for the analysis of image dependencies and describe a simple system putting the theoretical principles
in practice. The performance of the proposed system is evaluated on a few practical examples involving both
images created and processed in a controlled way, and images downloaded from the web.
Nowadays, photographs are one of the most widely used media for communication. Images are used for the representation of documents, cultural goods, and so on: they are used to pass on a piece of the historical memory of society. Since its origin, the photographic technique has undergone several improvements; nevertheless, photos are liable to several kinds of damage, concerning both the physical support and the colors and figures depicted in it: think, for example, of scratches or rips in a photo, or of the fading or red (or yellow) toning of its colors. In this paper, we propose a novel method able to recover the original appearance of digital reproductions of aged photos, as well as digital reproductions of faded goods. The method is based on the comparison of the degraded image with a non-degraded one showing similar content; the colors of the non-degraded image can thus be transplanted into the degraded one. The key idea is a dualism between analytical mechanics and color theory: for each of the degraded and non-degraded images we first compute a scatter plot of the x and y normalized coordinates of their colors; these scatter diagrams can be regarded as systems of point masses, and are thus endowed with inertia axes and an inertia ellipsoid. By moving the scatter diagram of the degraded image over the one belonging to the non-degraded image, the colors of the degraded image can be restored.
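A minimal sketch of this mechanical analogy is given below, assuming CIE xy chromaticity coordinates and a simple centroid/inertia-axes alignment; the exact transfer rule of the paper is not given in the abstract, so the functions are only illustrative.

```python
import numpy as np

def xy_chromaticities(rgb):
    """Convert an (N, 3) array of linear RGB colors to CIE xy chromaticity coordinates."""
    # sRGB -> XYZ matrix (D65); assumes linear RGB values in [0, 1]
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    XYZ = rgb @ M.T
    s = XYZ.sum(axis=1, keepdims=True) + 1e-12
    return XYZ[:, :2] / s                      # x = X/(X+Y+Z), y = Y/(X+Y+Z)

def inertia_frame(points):
    """Centroid and principal inertia axes of a 2-D 'system of point masses'."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    eigval, eigvec = np.linalg.eigh(cov)
    return c, eigvec, np.sqrt(eigval)

def align_chromaticities(degraded_xy, reference_xy):
    """Move the degraded scatter diagram over the reference one
    (translate to the reference centroid, match spread along the inertia axes)."""
    c_d, R_d, s_d = inertia_frame(degraded_xy)
    c_r, R_r, s_r = inertia_frame(reference_xy)
    local = (degraded_xy - c_d) @ R_d          # coordinates in the degraded inertia frame
    local = local * (s_r / (s_d + 1e-12))      # match the spread along each axis
    return local @ R_r.T + c_r                 # re-express in the reference frame
```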
In this paper we present a new method for the detection of forgeries in digital videos, using the sensor's pattern noise. The camera pattern noise is a unique stochastic high-frequency characteristic of imaging sensors, and a forged frame in a video is detected by comparing the correlation between the noise within the frame itself and the reference pattern noise against an empirical threshold. The reference pattern is used both for the identification of the camera and for the authentication of the video. Such a pattern is defined as self-building because it is created from the video sequence itself as it unfolds over time, with a technique applied frame by frame, by averaging the noise extracted from each frame. The method has been inherited from an existing system created by Fridrich et al. [1] for still images. By using this method we are able to determine whether all the scenes of a video sequence have been taken with the same camera and whether the number and/or the content of the frames of the video have been modified. A large section of the paper is dedicated to the experimental results, where we demonstrate that a reliable identification is possible even from video that has undergone MPEG compression or frame interpolation.
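A rough sketch of the self-building reference pattern and of the correlation test follows, with a plain Gaussian denoiser standing in for the actual noise-extraction filter and an arbitrary threshold; names and parameters are illustrative, not those of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_noise(frame, sigma=1.0):
    """Residual noise of a 2-D grayscale frame: frame minus a denoised version
    (a stand-in for the paper's denoising filter)."""
    return frame.astype(float) - gaussian_filter(frame.astype(float), sigma)

def build_reference_pattern(frames):
    """Self-building reference pattern: average of the noise residuals of the frames."""
    return np.mean([extract_noise(f) for f in frames], axis=0)

def normalized_correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def is_frame_forged(frame, reference, threshold=0.01):
    """Flag the frame as forged when its noise correlates poorly with the reference pattern."""
    return normalized_correlation(extract_noise(frame), reference) < threshold
```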
In the framework of the activities carried out by the European Network of Excellence for Cryptology ECRYPT, a 4-year network of excellence funded within the Information Societies Technology (IST) Programme of the European Commission's Sixth Framework Programme (FP6), it was proposed to launch the BOWS Contest. BOWS, an acronym of Break Our Watermarking System, was designed to investigate how and when an image watermarking system can be broken, while preserving the highest possible quality of the modified content, when the watermarking system is subjected to a world-wide attack. The BOWS contest was not intended as an attempt to prove how well a watermarking system performs; rather, the test was expected to shed light on the disparate possible attacks, perhaps unknown at the start of the contest, that the BOWS participants could carry out, and thereby on the degree of difficulty of breaking the embedded watermark. This paper describes how the Contest was designed and analyzes the general progress of the attacks during the Contest.
KEYWORDS: Digital watermarking, Information security, Quantization, Sensors, Computer programming, Legal, Detection and tracking algorithms, Modulation, Data hiding, Chemical elements
Recently, research in the watermarking field has concentrated its attention on security aspects. In a watermarking application, one of the most sensitive steps from the point of view of security is the watermark extraction process: here, a prover has to prove to a verifier that a given watermark is present in the content. In the design of the system, it has to be considered that the prover is not a trusted party: the prover could try to exploit the knowledge acquired during watermark extraction to remove the embedded code and, consequently, to undermine the security of the watermarking system. To tackle this particular issue, it has been proposed to use cryptographic techniques, known as zero-knowledge protocols, to build a secure layer on top of the watermarking channel, capable of protecting the watermarking algorithm against a possible information leakage. Up to now, zero-knowledge protocols have been applied to spread-spectrum based detectable watermarking algorithms. In this paper, a novel zero-knowledge protocol designed for a Spread Transform Dither Modulation (ST-DM) watermarking algorithm, belonging to the class of informed watermarking systems, is proposed.
KEYWORDS: Data hiding, Error analysis, Video, Visualization, Computer programming, Video coding, Error control coding, Data processing, Reconstruction algorithms, Standards development
In this paper, a new data hiding-based error concealment algorithm is proposed. The method makes it possible to increase video quality in H.264/AVC wireless video transmission and real-time applications, where retransmission is unacceptable. Data hiding is used to carry to the decoder the values of 6 inner pixels of every macroblock (MB), which are used to reconstruct lost MBs in Intra frames through a bilinear interpolation process. The side information concerning a slice is hidden into another slice of the same frame, by properly modifying some quantized AC coefficients of the Integer Transform of the sixteen 4x4 blocks composing the MBs of the host slice. At the decoder, the embedded information can be recovered from the bit-stream and used in the bilinear interpolation to reconstruct the damaged slice. This method, while allowing the system to remain fully compliant with the standard, improves the performance with respect to the conventional error concealment methods adopted by H.264/AVC in terms of visual quality and Y-PSNR. In particular, it is possible to improve the result of the interpolation process adopted by H.264/AVC, reducing the distance between interpolating pixels from 16 to 5.
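For reference, the sketch below shows a weighted-boundary (inverse-distance) concealment of a lost 16x16 macroblock from its four neighbouring boundaries, i.e. the kind of baseline interpolation the proposed method improves upon; the variant that also uses the six recovered inner pixels as additional anchors is not reproduced here.

```python
import numpy as np

def conceal_block(top, bottom, left, right, n=16):
    """Spatial concealment of a lost n x n macroblock: each missing pixel is an
    inverse-distance weighted average of the four known boundary pixels
    (top/bottom are length-n rows, left/right are length-n columns)."""
    mb = np.zeros((n, n))
    for r in range(n):
        for c in range(n):
            d = np.array([r + 1, n - r, c + 1, n - c], dtype=float)   # distances to boundaries
            v = np.array([top[c], bottom[c], left[r], right[r]], dtype=float)
            w = 1.0 / d                                               # closer boundaries weigh more
            mb[r, c] = (w * v).sum() / w.sum()
    return mb
```

With hidden inner samples recovered at the decoder, the same weighting can also include those interior anchors, which is what reduces the maximum interpolating distance.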
KEYWORDS: Digital watermarking, Quantization, Monte Carlo methods, Modulation, Error analysis, Distortion, Data modeling, Image compression, Algorithm development, Performance modeling
In this paper, the performance of ST-DM watermarking in the presence of two categories of non-additive attacks, namely the gain attack plus noise addition and the quantization attack, is evaluated. The work has been developed by assuming that the host features are independent and identically distributed Gaussian random variables, and that a minimum distance criterion is used to decode the embedded information. The theoretical bit error probabilities are derived in closed form, thus making it possible to evaluate the impact of the considered attacks on the watermark at a theoretical level. The analysis is validated by means of extensive Monte Carlo simulations. Moreover, Monte Carlo simulations made it possible to abandon the hypothesis of normally distributed host features in favor of more realistic models based on a Laplacian or a Generalized Gaussian pdf. The overall result of our analysis is that ST-DM exhibits excellent performance in all cases, with the only noticeable exception of the gain attack.
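A simple sketch of binary ST-DM embedding with minimum-distance decoding, together with a crude Monte Carlo trial of the gain attack plus noise addition, is given below; the dither convention and all parameter values are illustrative and do not reproduce the paper's setup.

```python
import numpy as np

def stdm_embed(x, s, bit, delta):
    """ST-DM: quantize the projection of the host features x onto the unit-norm
    spreading direction s with one of two quantizers (step delta), chosen by the bit."""
    p = x @ s
    dither = bit * delta / 2.0
    p_q = delta * np.round((p - dither) / delta) + dither
    return x + (p_q - p) * s

def stdm_decode(y, s, delta):
    """Minimum-distance decoding: pick the bit whose quantizer lies closest to the projection."""
    p = y @ s
    errs = [abs(p - (delta * np.round((p - b * delta / 2.0) / delta) + b * delta / 2.0))
            for b in (0, 1)]
    return int(np.argmin(errs))

# Crude Monte Carlo of the gain attack plus noise addition (illustrative parameters):
rng = np.random.default_rng(0)
n, delta, gain, sigma_n, trials = 64, 4.0, 1.1, 0.5, 2000
errors = 0
for _ in range(trials):
    s = rng.standard_normal(n); s /= np.linalg.norm(s)
    x = rng.standard_normal(n)                 # i.i.d. Gaussian host features
    bit = int(rng.integers(2))
    y = gain * stdm_embed(x, s, bit, delta) + sigma_n * rng.standard_normal(n)
    errors += (stdm_decode(y, s, delta) != bit)
print("empirical bit error rate:", errors / trials)
```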
KEYWORDS: Digital watermarking, Image processing, Calibration, Cultural heritage, CRTs, RGB color model, Digital imaging, Image quality, Image compression, Mathematical modeling
The goal of this paper is to present the research carried out over the last 10 years in the Image Processing and Communications Lab of the University of Florence for developing applications for the cultural heritage field. In particular, research has focused on the following issues: high-resolution acquisition of paintings by means of mosaicing techniques, colour calibration of the acquisition devices, tools for forecasting the results of restoration processes (in particular with reference to the cleaning process), and systems for producing virtually restored digital copies of paintings (in particular for filling in cracks and lacunas). The problems related to the distribution of the digital copies have also been considered, in particular with reference to the watermarking of the images for copyright protection. The methodologies developed by the Lab with reference to the above-mentioned issues will be described, and the main results discussed.
Over the past years the progress of electronic imaging has encouraged researchers to develop applications for the fine arts sector. In particular, the aspects that have been most investigated concern the high-quality acquisition of paintings (both from the point of view of spatial resolution and of color calibration), the actual restoration of the works (to give restorers an aid for forecasting the results of the tasks they choose), virtual restoration (to try to build a digital copy of the painting as it was at the origin), and diagnosis (to automatically highlight, evaluate and monitor the possible damage that a work has suffered). Partially related to image processing are also the technologies for 3D acquisition and modeling of statues. Finally, particular care has recently been given to the distribution of the digital copies of cultural heritage objects over the Internet, thus posing novel problems regarding the effective browsing of digital multimedia archives and the protection of the Intellectual Property connected to artwork reproductions. The goal of this paper is to review the research results that have been obtained in this field in the past, and to present some problems that are still open and can represent a challenging research field for the future.
KEYWORDS: Data hiding, Medical imaging, Radiography, Image compression, Quantization, Image quality, Chemical elements, Image storage, Algorithms, Data communications
Current research on data hiding is more and more demonstrating that many applications can benefit from these technologies: among these, medical data management. Current medical record formats store image data and textual information in separate fields, so that the link between image and patient could occasionally get mangled by protocol converters or tampering attacks. Moreover, if an intruder can access the database, he is able to modify the attached text. Embedding the patient's information directly into the image through data hiding technology can therefore represent a useful safety measure. Data hiding technologies suitable for such an application must satisfy specific requirements, the most important being: a high payload reliably identifying the patient; preservation of the quality of the host medical image; and robustness to content modification. According to this analysis, a comparison between different data hiding approaches will be presented, to evaluate the most suitable algorithms for medical applications. In particular, two different kinds of algorithms will be taken into account: an algorithm based on Bayes theory will be compared with algorithms following the new approach of modelling data hiding as communication with side information at the transmitter. These methods will be tested and compared in the framework of medical data management in order to identify the benefits and drawbacks of both approaches for such an application.
The future development of networked multimedia services depends on the availability of efficient methods to protect data owners against the non-authorised copying and redistribution of the material put on the network, so as to guarantee that Intellectual Property Rights (IPR) are respected and the assets properly managed. A Notice and Takedown procedure based on a self-regulatory regime is considered and, as a possible implementation of this system, an Intelligent Agent based platform is proposed.
The aim of this paper is to present a novel approach to the decoding and detection of multibit, multiplicative watermarks embedded in the frequency domain. The watermark payload is conveyed by amplitude modulating a pseudo-random sequence, thus resembling conventional DS spread spectrum techniques. As opposed to conventional communication systems, though, the watermark is embedded within the host DFT coefficients by using a multiplicative rule. The watermark decoding technique presented in the paper is optimum, in that it minimizes the bit error probability. The problem of watermark presence assessment, which is often underestimated by state-of-the-art research on multibit watermarking, is addressed too, and the optimum detection rule is derived according to the Neyman-Pearson criterion. Experimental results are shown both to demonstrate the validity of the theoretical analysis and to highlight the good performance of the proposed system.
KEYWORDS: Digital watermarking, Visual process modeling, Contrast sensitivity, Visual system, Digital imaging, Eye, Data hiding, Systems modeling, Spatial frequencies, Optical inspection
The problem of evaluating the maximum number of information bits that can be hidden within an image is considered. It is usually addressed by looking at the watermarking process as a communication task, in which the signal, i.e. the watermark, is transmitted over a channel whose role is played by the host data; the maximum number of information bits is thus the capacity of the watermark channel. Experimental results show that the watermark capacity depends on the watermark strength G; hence, knowledge of the maximum watermark level allowed under the constraint of watermark invisibility is required. G is often adjusted interactively to the image at hand, because no simple algorithm exists that fits the watermark level to the characteristics of the host image. Hence, a novel algorithm modelling the Human Visual System has been developed, which considers frequency sensitivity, local luminance, and contrast masking effects. The proposed method exploits a block-based DCT decomposition of the image, which allows a trade-off between spatial and frequency localisation of the image features and disturbances. Through this model, the maximum allowable watermark strength is determined in a completely automatic way, and the value of the watermark capacity is then computed.
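The abstract does not detail the model, so the following is only a Watson-style sketch of a block-DCT perceptual mask that combines a base frequency-sensitivity table with luminance and contrast masking; the base_threshold table, the exponents and the block size are assumptions, not the paper's values.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_mask(image, base_threshold, block=8, lum_exp=0.65, masking_exp=0.7):
    """Watson-style perceptual model on block DCTs: a base frequency-sensitivity
    table (just-noticeable DCT amplitudes, shape block x block) is raised by local
    luminance and contrast masking, giving the maximum tolerable change per coefficient.
    image: 2-D grayscale array whose sides are multiples of the block size (assumed)."""
    h, w = image.shape
    mean_lum = image.mean()
    mask = np.zeros((h // block, w // block, block, block))
    for by in range(h // block):
        for bx in range(w // block):
            blk = image[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(float)
            coeffs = dctn(blk, norm="ortho")
            # luminance masking: brighter blocks tolerate larger changes
            t_lum = base_threshold * (coeffs[0, 0] / (mean_lum * block + 1e-6)) ** lum_exp
            # contrast masking: strong coefficients hide larger changes
            mask[by, bx] = np.maximum(
                t_lum, np.abs(coeffs) ** masking_exp * t_lum ** (1 - masking_exp))
    return mask
```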
KEYWORDS: Video, Video compression, Standards development, Networks, Video coding, Telecommunications, Image compression, Data communications, Mobile communications, Video processing
The recent development of video compression algorithms has allowed the diffusion of systems for the transmission of video sequences over data networks. However, transmission over error-prone mobile communication channels is still an open issue. In this paper, a system developed for the real-time transmission of H.263-coded video sequences over TETRA mobile networks is presented. TETRA is an open digital trunked radio standard defined by the European Telecommunications Standards Institute for professional mobile radio users, providing full integration of voice and data services. Experimental tests demonstrate that, in spite of the low frame rate allowed by the software-only implementation of the decoder and by the low channel rate, a video compression technique such as that complying with the H.263 standard is still preferable to a simpler but less effective frame-based compression system.
The extension of gray-level watermarking to the color case is one of the open issues watermarking researchers are still facing. To get rid of the correlation among image color bands, a new approach is proposed in this paper, based on the decorrelation property of the Karhunen-Loeve Transform (KLT). First, the KLT is applied to the RGB components of the host image; then watermarking is performed independently in the DFT domain of the KL-transformed bands. In order to preserve watermark invisibility, embedding is achieved by modifying the magnitude of mid-frequency DFT coefficients according to an additive-multiplicative rule. Different weights are used for the three KL bands to further enhance invisibility. On the decoder side, KL decorrelation is exploited to optimally detect the watermark presence. More specifically, by relying on Bayes statistical decision theory, the probability of missing the watermark is minimized subject to a fixed false detection rate. Based on the Neyman-Pearson criterion, the watermark presence is revealed by comparing a likelihood function against a threshold: if the former exceeds the latter, the decoder decides that the watermark is present; otherwise the hypothesis is rejected. Experimental results are shown proving the robustness of the algorithm against the most common image manipulations, and its superior performance with respect to conventional techniques based on luminance watermarking.
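Only the band-decorrelation step is sketched below (forward and inverse KLT of the RGB components); the DFT-domain embedding and the Bayesian detector are not reproduced, and the function names are illustrative.

```python
import numpy as np

def klt_forward(rgb_image):
    """Karhunen-Loeve Transform of the RGB bands: project each pixel's colour
    onto the eigenvectors of the inter-band covariance matrix, yielding three
    decorrelated bands ordered by decreasing variance."""
    h, w, _ = rgb_image.shape
    X = rgb_image.reshape(-1, 3).astype(float)
    mean = X.mean(axis=0)
    cov = np.cov((X - mean).T)                  # 3x3 inter-band covariance
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]            # strongest band first
    basis = eigvec[:, order]
    bands = (X - mean) @ basis
    return bands.reshape(h, w, 3), mean, basis

def klt_inverse(bands, mean, basis):
    """Return to the RGB domain after the (watermarked) KL bands have been modified."""
    h, w, _ = bands.shape
    X = bands.reshape(-1, 3) @ basis.T + mean
    return X.reshape(h, w, 3)
```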
In this paper, a new watermarking system for the copyright protection of digital images is presented. The method operates in the frequency domain, by embedding a pseudo-random sequence of real numbers in a selected set of DFT coefficients of the image. Moreover, the masking characteristics of the Human Visual System are exploited for watermark hiding, and a synchronization pattern is introduced into the watermarked image to cope with geometrical attacks. By relying on statistical decision theory, a new decoding algorithm which is optimum for non-additive full-frame DFT watermarks is then derived. Experimental results highlight both the superiority of the novel detector scheme with respect to conventional correlation-based decoding and the robustness of the overall system against a large set of attacks aimed at removing the watermark.
Digital watermarking has been indicated as a technique capable of coping with the problem of Intellectual Property Rights (IPR) protection of images; this result should be achieved by embedding into the data an unperceivable digital code, namely the watermark, carrying information about the copyright status of the work to be protected. In this paper, the practical feasibility of IPR protection through digital watermarking is investigated. The most common requirements that application scenarios impose on the watermarking technology are discussed. Watermarking schemes are first classified according to the approach used to extract the embedded code, and then the impact that such a classification has on watermark usability is investigated from an application point of view. As will be shown, the effectiveness of watermarking as an IPR protection tool turns out to be heavily affected by the detection strategy, which has to be carefully matched to the application at hand. Finally, the practical case of the Tuscany and Gifu Art Virtual Gallery is considered in detail, to further explain in which manner a watermarking technique can actually be used.
KEYWORDS: Digital watermarking, Discrete wavelet transforms, Eye, Quantization, Binary data, Eye models, Visual process modeling, Image compression, Linear filtering, Tin
The growth of the Internet and the diffusion of multimedia applications require the development of techniques for embedding identification codes into images, in such a way that their authenticity can be guaranteed and/or their copyright protected. In this paper, a novel system for image watermarking is presented, which exploits the similarity exhibited by the Discrete Wavelet Transform with respect to models of the Human Visual System for robustly hiding watermarks. In particular, a model for estimating the sensitivity of the eye to noise, previously proposed for compression applications, is used to adapt the watermark strength to the local content of the image. Experimental results are shown supporting the validity of the approach.
KEYWORDS: Digital watermarking, Digital imaging, Data hiding, Quantization, Statistical analysis, Interference (communication), Analytical research, Data modeling, Process modeling
An evaluation of the number of bits that can be hidden within an image by means of frequency-domain watermarking is given. Watermarking is assumed to consist of the modification of a set of full-frame DCT (DFT) coefficients. The amount of modification each coefficient undergoes is proportional to the magnitude of the coefficient itself, so that an additive-multiplicative embedding rule results. The watermark channel is modeled by letting the watermark be the signal and the image coefficients the noise introduced by the channel. To derive the capacity of each coefficient, the input (i.e. the watermark) and the output (i.e. the watermarked coefficients) of the channel are quantized, thus leading to a discrete-input, discrete-output model. Capacity is evaluated by computing the channel transition matrix and by maximizing the mutual input/output information. Though the results we obtained do not take attacks into account, they represent a useful indication of the amount of information that can be hidden within a single image.
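As an illustration of the discrete-input, discrete-output modelling step, the sketch below simulates a single multiplicatively watermarked coefficient, quantizes input and output, builds an empirical joint/transition matrix and computes the mutual information for a uniform input. The maximization over input distributions that yields capacity is omitted, and the host model and all parameters are assumptions, so the printed value is only a lower bound under those assumptions.

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from an empirical joint probability matrix."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Crude per-coefficient channel: watermarked coefficient c' = c * (1 + gamma * w),
# with the watermark sample w drawn from a quantized alphabet and the host
# coefficient c acting as channel noise (illustrative Laplacian host model).
rng = np.random.default_rng(0)
levels = np.linspace(-1, 1, 8)                    # quantized watermark input alphabet
gamma, n_samples, n_out_bins = 0.2, 200_000, 64
c = rng.laplace(scale=10.0, size=n_samples)       # host DCT/DFT coefficient model
x_idx = rng.integers(len(levels), size=n_samples)
y = c * (1 + gamma * levels[x_idx])
y_idx = np.digitize(y, np.quantile(y, np.linspace(0, 1, n_out_bins + 1)[1:-1]))
joint = np.zeros((len(levels), n_out_bins))
np.add.at(joint, (x_idx, y_idx), 1)
# Capacity would require maximizing over the input distribution (e.g. Blahut-Arimoto);
# here the input is uniform, so this is only a lower bound.
print("mutual information for a uniform input (bits/coefficient):", mutual_information(joint))
```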
KEYWORDS: Digital watermarking, Tin, Image compression, Image processing, Multimedia, Signal processing, Image quality, Digital filtering, Image filtering, Visual system
Notwithstanding the high availability of image content, multimedia products offer very low-quality pictures, primarily because effective systems for the copyright protection of multimedia works are unavailable. A digital image can, in fact, be easily reproduced, obtaining as many identical copies of it as we want, without any possibility of preventing it. To get around the problem, in addition to scrambling the data to be protected by means of cryptographic techniques, a code carrying information about the IPR could be invisibly embedded into them, in such a way as to provide a means to control their distribution. This is the aim of image watermarking techniques. In this paper, general issues related to the copyright protection of digital data, as well as some items referring to the embedding of a watermark in the frequency domain, are discussed. Results are also presented showing the robustness of the proposed algorithms.