KEYWORDS: Digital watermarking, Video, Multimedia, Information security, Video compression, 3D modeling, 3D video compression, Receivers, Standards development, Visualization
MPEG-4 is an international object-based standard that provides the technological basis for digital television, interactive graphics, and multimedia applications. These objects can be natural or synthetic, e.g. textures, 3D objects, videos, or sounds. In this paper we propose an integrity approach to protect the content of MPEG-4 data. The essential part of this approach is to embed a robust watermark into each visual, audio, and 3D object. The content-fragile watermark that verifies the integrity of a scene is the sum of all information retrieved from the robust watermarks extracted from the objects of the scene; this information is distributed redundantly across all robust watermarks of the scene. Another essential part of our approach is to embed part of the scene description or the object descriptors as a watermark message into the video or audio streams. The amount of embedded information depends on the payload of the watermarking algorithms. We also analyze the possibility of embedding equivalent information into 3D models, depending on the application.
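To make the redundancy idea concrete, here is a minimal sketch (our illustration, not code from the paper) of how a scene-integrity digest could be distributed across the robust watermarks of all scene objects and reconstructed even when some watermarks do not survive; the function names and the simple full-replication scheme are assumptions made for illustration.

```python
# Illustrative sketch: a digest of the scene description acts as the
# fragile integrity information and is replicated into every object's
# robust watermark; verification tolerates lost watermarks via voting.
import hashlib

def scene_digest(scene_description: bytes) -> bytes:
    """Digest of the scene description serving as the integrity information."""
    return hashlib.sha256(scene_description).digest()

def distribute(digest: bytes, num_objects: int) -> list:
    """Give every object the full digest: any surviving robust watermark
    still carries the complete integrity information."""
    return [digest] * num_objects

def reconstruct(extracted: list) -> bytes:
    """Majority vote over the payloads extracted from the robust watermarks;
    entries are None where an object's watermark was destroyed."""
    votes = {}
    for payload in extracted:
        if payload is not None:
            votes[payload] = votes.get(payload, 0) + 1
    return max(votes, key=votes.get) if votes else None

digest = scene_digest(b"<mpeg4-scene-description/>")
payloads = distribute(digest, num_objects=4)
payloads[1] = None                      # one object's watermark was lost
assert reconstruct(payloads) == digest  # scene integrity still verifiable
```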
Due to the inherent combination of image, video, audio, and 3D models in new MPEG standards such as MPEG-4, robust 3D watermarking is becoming more and more popular as a means to ensure data authenticity and integrity. Besides the design of 3D watermarking schemes, their evaluation is necessary. In this paper we analyze currently proposed 3D watermarking algorithms for weaknesses that could be exploited in watermarking attacks and derive corresponding design principles for improving the algorithms. The paper makes contributions in three respects. First, we analyze algorithm properties with respect to the preservation of surface continuity and symmetries. Second, we analyze how the algorithms proposed by Ohbuchi et al. and Praun et al. compensate for errors introduced by imperfect mesh resampling. For the algorithm of Ohbuchi et al., we explain how spectral decomposition can be applied along directions other than the canonical coordinate-system axes in order to minimize the errors introduced by resampling. With an experimental prototype algorithm based on spectral decomposition, we demonstrate a significant increase in the robustness of features against resampling errors. Third, we analyze the general effects of polygon simplification on watermarking algorithms, an operation which, based on the results of Praun et al., can be considered “critical”. As a first result of our analysis, we present a modification of the detector of the Praun et al. watermarking scheme that aims to improve compensation for these effects.
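As a rough illustration of the spectral-decomposition idea discussed above, the sketch below projects the vertex coordinates of a toy mesh onto the eigenvectors of its graph Laplacian and modulates a few low-frequency coefficients. The embedding rule and parameters are simplified assumptions, not the published Ohbuchi et al. scheme; here the decomposition is applied along the canonical axes for brevity, whereas the paper argues other directions can reduce resampling error.

```python
# Minimal sketch of Laplacian-based mesh spectral decomposition with a
# simplified multiplicative embedding rule (an assumption for illustration).
import numpy as np

def mesh_laplacian(num_vertices, edges):
    """Combinatorial graph Laplacian of the mesh connectivity."""
    L = np.zeros((num_vertices, num_vertices))
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

# Toy tetrahedron: 4 vertices, fully connected.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

L = mesh_laplacian(len(verts), edges)
_, basis = np.linalg.eigh(L)          # eigenvectors = mesh "frequencies"
spectrum = basis.T @ verts            # spectral coefficients of x, y, z

bits = np.array([1, -1, 1])           # payload: one bit per modulated coefficient
alpha = 0.01                          # embedding strength
spectrum[1:4] *= 1.0 + alpha * bits[:, None]  # skip the DC coefficient

watermarked = basis @ spectrum        # back to vertex coordinates
```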
KEYWORDS: Information security, Telecommunications, Systems modeling, Cryptography, Computing systems, Steganography, Data modeling, Computer simulations, Image encryption, Digital watermarking
Intuitively, the security of a steganographic communication between two principals lies in the inability of an eavesdropper to distinguish cover-objects from stego-objects, that is, objects which contain secret messages. A system should already be considered insecure if an eavesdropper can merely suspect the presence of secret communication. Several definitions of steganographic security have been proposed in the literature, but they share a number of shortcomings. First, they consider only perfectly secure steganographic systems, where even a computationally unbounded observer cannot detect the presence of a secret message exchange. Second, it might be difficult to construct secure schemes usable in practice following these definitions. Third, they all require knowledge of the probability distribution of normal covers; although it might be possible in certain cases to compute this probability, it will in general be infeasible to obtain. In this paper, we propose a novel approach for defining security in steganographic systems. This definition relies on a probabilistic game between the attacker and a judge. Given the ability to observe the normal communication process and the steganographic system, the attacker has to decide whether a specific object (given to him by the judge) is in fact a plain cover or a stego-object. We discuss the applicability of this new definition and pose the open problem of constructing provably secure steganographic systems.
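One natural way to formalize the attacker's advantage in such a distinguishing game (our formalization, not necessarily the paper's exact definition) is the following:

```latex
% The judge flips a fair coin b and hands the attacker either a plain
% cover (b = 0) or a stego-object (b = 1); the attacker outputs a guess b'.
% The system is considered secure if every attacker's advantage over
% random guessing is negligible:
\[
  \mathrm{Adv}(A) \;=\; \left| \Pr[\, b' = b \,] - \tfrac{1}{2} \right| \;\le\; \epsilon .
\]
```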
There is wide consensus among the feature-film production studios that the Internet era brings a new paradigm for film distribution to cinemas worldwide. The benefits of digital cinema to both producers and cinemas are numerous: significantly lower distribution and maintenance costs, immediate access to film libraries, higher presentation quality, and strong potential for developing new business models. Despite these advantages, the studios are still reluctant to jump into the digital age. The main showstopper for digital cinema is the danger of widespread piracy. Piracy already costs Hollywood an estimated two billion dollars annually, and digital cinema without proper copyright enforcement could increase this number. In this paper, we present a copyright management system that aims at providing the necessary set of security tools: standard cryptographic primitives and copyright protection mechanisms that together enable a reliable and secure feature-film delivery system.
One of the main problems clouding the future of digital watermarking technologies is the lack of detailed evaluation of existing marking schemes. This lack of benchmarking of current algorithms is blatant; it confuses rights holders as well as software and hardware manufacturers and prevents them from choosing the solution appropriate to their needs. Indeed, basing long-lived protection schemes on badly tested watermarking technology does not make sense. In this paper we present the architecture of a public automated evaluation service we have developed for still images, sound, and video. We detail and justify our choice of evaluation profiles, that is, the series of tests applied to different types of watermarking schemes. These evaluation profiles allow us to measure the reliability of a marking scheme at different levels, from low to very high. Besides the known StirMark transformations, we also detail new tests that will be included in this platform. One of them is intended to measure the real size of the key space: if one is not careful, two different watermarking keys may produce interfering watermarks, and as a consequence the actual space of keys is much smaller than it appears. Another set of tests is related to audio data and addresses the usual equalisation and normalisation, but also time stretching and pitch shifting. Finally, we propose a set of tests for fingerprinting applications. This includes averaging of copies with different fingerprints, random exchange of parts between different copies, and comparison between copies with selection of the most/least frequently used position differences.
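The key-space test can be pictured as follows. This is a hedged sketch of the idea, not the platform's actual test: in a spread-spectrum scheme, two distinct keys should yield near-orthogonal watermark patterns, so if detection with one key responds to a mark embedded with another, the keys interfere and the effective key space shrinks. The correlation detector and threshold below are assumptions for illustration.

```python
# Sketch of a key-interference scan for a toy spread-spectrum watermark.
import numpy as np

N = 4096          # number of samples (pixels) carrying the watermark
THRESHOLD = 0.1   # detector decision threshold (illustrative value)

def pattern(key):
    """Key-dependent pseudorandom +/-1 watermark pattern."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=N)

def detect(signal, key):
    """Normalized correlation of the signal with the pattern for this key."""
    return float(signal @ pattern(key)) / N

cover = np.zeros(N)
marked = cover + 0.5 * pattern(key=1)

print(detect(marked, key=1))    # ~0.5: the correct key fires
for other in range(2, 1000):    # scan foreign keys for interference
    if abs(detect(marked, other)) > THRESHOLD:
        print("keys 1 and", other, "interfere")
```

With truly independent patterns the scan should stay silent; every key that fires reduces the effective key space by one.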
KEYWORDS: Digital watermarking, Distortion, Visualization, Image quality, Image compression, Signal to noise ratio, Image processing, Databases, Image analysis, Digital imaging
KEYWORDS: Digital watermarking, Feature extraction, Visualization, Signal processing, Cryptography, Binary data, Video, Digital photography, Photography, Information security
After a brief reminder of the real difficulties that digital watermarking software still has to tackle -- especially random geometric attacks such as StirMark -- we present an early overview of ongoing solutions to make the survival of the watermark possible.
KEYWORDS: Digital watermarking, Visualization, Distortion, Image compression, Image quality, Image processing, Signal to noise ratio, Databases, Digital imaging, Nonlinear filtering
Since the early 1990s a number of papers on 'robust' digital watermarking systems have been presented, but none of them uses the same robustness criteria. This makes comparison impractical and slows down progress in this area. To address this issue, we present an evaluation procedure for image watermarking systems. First we identify all the parameters necessary for proper benchmarking and investigate how to quantitatively describe the image degradation introduced by the watermarking process. For this, we show the weaknesses of the usual image quality measures in the context of watermarking and propose a novel measure adapted to the human visual system. Then we show how to evaluate watermark performance efficiently, in such a way that fair comparisons between different methods are possible. The usefulness of three graphs, 'attack vs. visual quality,' 'bit-error vs. visual quality,' and 'bit-error vs. attack,' is investigated. In addition, receiver operating characteristic (ROC) graphs are reviewed and proposed to describe the statistical detection behavior of watermarking methods. Finally, we review a number of attacks that any system should survive to be really useful, and propose a benchmark and a set of suitable test images.
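As a reminder of how an ROC graph for a watermark detector is built, the following minimal sketch derives true-positive and false-positive rates from detector responses over a sweep of decision thresholds; the Gaussian score model is an assumption used only to generate example data, not a claim about any real detector.

```python
# Build an ROC curve from detector scores on marked and unmarked images.
import numpy as np

rng = np.random.default_rng(0)
scores_marked = rng.normal(1.0, 0.5, 1000)    # detector output, watermarked
scores_clean  = rng.normal(0.0, 0.5, 1000)    # detector output, unmarked

thresholds = np.linspace(-2, 3, 200)
tpr = [(scores_marked > t).mean() for t in thresholds]  # detection probability
fpr = [(scores_clean  > t).mean() for t in thresholds]  # false-alarm probability

auc = np.trapz(tpr[::-1], fpr[::-1])  # area under the ROC curve
print(f"AUC = {auc:.3f}")             # 0.5 = chance, 1.0 = perfect detector
```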